query_id            string   length 32 to 32
query               string   length 5 to 5.38k
positive_passages   list     length 1 to 23
negative_passages   list     length 4 to 100
subset              string   7 distinct values
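For orientation, the sketch below shows one way to load and inspect a reranking dataset with this column layout using the Hugging Face `datasets` library. This is a minimal sketch rather than an official loading recipe: the repository id `your-org/your-reranking-dataset` is a placeholder, and the field access assumes each row is a plain record matching the schema above.

```python
# Minimal sketch (assumed repository id) of loading a dataset with the schema above.
from datasets import load_dataset

ds = load_dataset("your-org/your-reranking-dataset", split="train")  # placeholder id

row = ds[0]
print(row["query_id"])                # 32-character identifier
print(row["query"])                   # query text (5 chars up to ~5.38k chars)
print(len(row["positive_passages"]))  # between 1 and 23 relevant passages
print(len(row["negative_passages"]))  # between 4 and 100 non-relevant passages
print(row["subset"])                  # one of 7 subset labels, e.g. "scidocsrr"

# Each passage entry carries a "docid", the passage "text", and a "title".
first_positive = row["positive_passages"][0]
print(first_positive["docid"], first_positive["title"], first_positive["text"][:80])
```

Filtering on the `subset` column (for example, keeping only rows labeled `scidocsrr`) would give the per-task slices of the data.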
baea4282b3ced97fb5fa543f2e934499
A Leaf Recognition Technique for Plant Classification Using RBPNN and Zernike Moments
[ { "docid": "84ca7dc9cac79fe14ea2061919c44a05", "text": "We describe two new color indexing techniques. The rst one is a more robust version of the commonly used color histogram indexing. In the index we store the cumulative color histograms. The L 1-, L 2-, or L 1-distance between two cumulative color histograms can be used to deene a similarity measure of these two color distributions. We show that while this method produces only slightly better results than color histogram methods, it is more robust with respect to the quantization parameter of the histograms. The second technique is an example of a new approach to color indexing. Instead of storing the complete color distributions, the index contains only their dominant features. We implement this approach by storing the rst three moments of each color channel of an image in the index, i.e., for a HSV image we store only 9 oating point numbers per image. The similarity function which is used for the retrieval is a weighted sum of the absolute diierences between corresponding moments. Our tests clearly demonstrate that a retrieval based on this technique produces better results and runs faster than the histogram-based methods.", "title": "" }, { "docid": "1256f0799ed585092e60b50fb41055be", "text": "So far, plant identification has challenges for sev eral researchers. Various methods and features have been proposed. However, there are still many approaches could be investigated to develop robust plant identification systems. This paper reports several xperiments in using Zernike moments to build folia ge plant identification systems. In this case, Zernike moments were combined with other features: geometr ic features, color moments and gray-level co-occurrenc e matrix (GLCM). To implement the identifications systems, two approaches has been investigated. Firs t approach used a distance measure and the second u sed Probabilistic Neural Networks (PNN). The results sh ow that Zernike Moments have a prospect as features in leaf identification systems when they are combin ed with other features.", "title": "" } ]
[ { "docid": "9a7e491e4d4490f630b55a94703a6f00", "text": "Learning generic and robust feature representations with data from multiple domains for the same problem is of great value, especially for the problems that have multiple datasets but none of them are large enough to provide abundant data variations. In this work, we present a pipeline for learning deep feature representations from multiple domains with Convolutional Neural Networks (CNNs). When training a CNN with data from all the domains, some neurons learn representations shared across several domains, while some others are effective only for a specific one. Based on this important observation, we propose a Domain Guided Dropout algorithm to improve the feature learning procedure. Experiments show the effectiveness of our pipeline and the proposed algorithm. Our methods on the person re-identification problem outperform stateof-the-art methods on multiple datasets by large margins.", "title": "" }, { "docid": "7605ae0f6c5148195caa33c54e8e7a1b", "text": "Recently Dutch government, as well as many other governments around the world, has digitized a major portion of its public services. With this development electronic services finally arrive at the transaction level. The risks of electronic services on the transactional level are more profound than at the informational level. The public needs to trust the integrity and ‘information management capacities’ of the government or other involved organizations, as well as trust the infrastructure and those managing the infrastructure. In this process, the individual citizen will have to decide to adopt the new electronic government services by weighing its benefits and risks. In this paper, we present a study which aims to identify the role of risk perception and trust in the intention to adopt government e-services. In January 2003, a sample of 238 persons completed a questionnaire. The questionnaire tapped people’s intention to adopt e-government electronic services. Based on previous research and theories on technology acceptance, the questionnaire measured perceived usefulness of e-services, risk perception, worry, perceived behavioural control, subjective norm, trust and experience with e-services. Structural equation modelling was used to further analyze the data (Amos) and to design a theoretical model predicting the individual’s intention to adopt e-services. This analysis showed that the perceived usefulness of electronic services in general is the main determinant of the intention to use e-government services. Risk perception, personal experience, perceived behavioural control and subjective norm were found to significantly predict the perceived usefulness of electronic services in general, while trust in e-government was the main determinant of the perceived usefulness of e-government services. 2006 Elsevier Ltd. All rights reserved. 0747-5632/$ see front matter 2006 Elsevier Ltd. All rights reserved. doi:10.1016/j.chb.2005.11.003 * Corresponding author. E-mail addresses: Margot.Kuttschreuter@utwente.nl (M. Kuttschreuter), J.M.Gutteling@utwente.nl (J.M. Gutteling). M. Horst et al. / Computers in Human Behavior 23 (2007) 1838–1852 1839", "title": "" }, { "docid": "e4464e87dec7499a946208746fafe135", "text": "Jan Carlo Barca and Y. 
Ahmet Sekercioglu Robotica / Volume 31 / Issue 03 / May 2013, pp 345 ­ 359 DOI: 10.1017/S026357471200032X, Published online: 03 July 2012 Link to this article: http://journals.cambridge.org/abstract_S026357471200032X How to cite this article: Jan Carlo Barca and Y. Ahmet Sekercioglu (2013). Swarm robotics reviewed. Robotica, 31, pp 345­359 doi:10.1017/ S026357471200032X Request Permissions : Click here", "title": "" }, { "docid": "208b4cb4dc4cee74b9357a5ebb2f739c", "text": "We report improved AMR parsing results by adding a new action to a transitionbased AMR parser to infer abstract concepts and by incorporating richer features produced by auxiliary analyzers such as a semantic role labeler and a coreference resolver. We report final AMR parsing results that show an improvement of 7% absolute in F1 score over the best previously reported result. Our parser is available at: https://github.com/ Juicechuan/AMRParsing", "title": "" }, { "docid": "4289b6f73a5e402b49d1daab464d26b5", "text": "Run-time Partial Reconfiguration (PR) speed is significant in applications especially when fast IP core switching is required. In this paper, we propose to use Direct Memory Access (DMA), Master (MST) burst, and a dedicated Block RAM (BRAM) cache respectively to reduce the reconfiguration time. Based on the Xilinx PR technology and the Internal Configuration Access Port (ICAP) primitive in the FPGA fabric, we discuss multiple design architectures and thoroughly investigate their performance with measurements for different partial bitstream sizes. Compared to the reference OPB HWICAP and XPS HWICAP designs, experimental results showthatDMA HWICAP and MST HWICAP reduce the reconfiguration time by one order of magnitude, with little resource consumption overhead. The BRAM HWICAP design can even approach the reconfiguration speed limit of the ICAP primitive at the cost of large Block RAM utilization.", "title": "" }, { "docid": "7834cad6190a019c3b0086a3f0231182", "text": "In modern train control systems, a moving train retrieves its location information through passive transponders called balises, which are placed on the sleepers of the track at regular intervals. When the train-borne antenna energizes them using tele-powering signals, balises backscatter preprogrammed telegrams, which carry information about the train's current location. Since the telegrams are static in the existing implementations, the uplink signals from the balises could be recorded by an adversary and then replayed at a different location of the track, leading to what is well-known as the replay attack. Such an attack, while the legitimate balise is still functional, introduces ambiguity to the train about its location, can impact the physical operations of the trains. For balise-to-train communication, we propose a new communication framework referred to as cryptographic random fountains (CRF), where each balise, instead of transmitting telegrams with fixed information, transmits telegrams containing random signals. A salient feature of CRF is the use of challenge-response based interaction between the train and the balise for communication integrity. We present a thorough security analysis of CRF to showcase its ability to mitigate sophisticated replay attacks. Finally, we also discuss the implementation aspects of our framework.", "title": "" }, { "docid": "b634d8eb5016f93604ed460cebe07468", "text": "The basis of science is the hypothetico-deductive method and the recording of experiments in sufficient detail to enable reproducibility. 
We report the development of Robot Scientist \"Adam,\" which advances the automation of both. Adam has autonomously generated functional genomics hypotheses about the yeast Saccharomyces cerevisiae and experimentally tested these hypotheses by using laboratory automation. We have confirmed Adam's conclusions through manual experiments. To describe Adam's research, we have developed an ontology and logical language. The resulting formalization involves over 10,000 different research units in a nested treelike structure, 10 levels deep, that relates the 6.6 million biomass measurements to their logical description. This formalization describes how a machine contributed to scientific knowledge.", "title": "" }, { "docid": "0d56b30aef52bfdf2cb6426a834126e5", "text": "The wide adoption of social media has increased the competition among ideas for our finite attention. We employ a parsimonious agent-based model to study whether such a competition may affect the popularity of different memes, the diversity of information we are exposed to, and the fading of our collective interests for specific topics. Agents share messages on a social network but can only pay attention to a portion of the information they receive. In the emerging dynamics of information diffusion, a few memes go viral while most do not. The predictions of our model are consistent with empirical data from Twitter, a popular microblogging platform. Surprisingly, we can explain the massive heterogeneity in the popularity and persistence of memes as deriving from a combination of the competition for our limited attention and the structure of the social network, without the need to assume different intrinsic values among ideas.", "title": "" }, { "docid": "a8b40a058d1cc890cc7fc2bfe0809a0d", "text": "Xen is an x86 virtual machine monitor produced by the University of Cambridge Computer Laboratory and released under the GNU General Public License. Performance results comparing XenoLinux (Linux running in a Xen virtual machine) to native Linux as well as to other virtualization tools such as User Mode Linux (UML) were recently published in the paper “Xen and the Art of Virtualization” at the Symposium on Operating Systems Principles (October 2003). In this study, we repeat this performance analysis of Xen. We also extend the analysis in several ways, including comparing XenoLinux on x86 to an IBM zServer. We use this study as an example of repeated research. We argue that this model of research, which is enabled by open source software, is an important step in transferring the results of computer science research into production environments.", "title": "" }, { "docid": "d95a8f19720fdaa208e0081239934e6e", "text": "Multi-access Edge Computing (MEC) can be defined as a model for enabling business oriented, cloud computing platform within multiple types of the access network (e.g., LTE, 5G, WiFi, FTTH, etc.) at the close proximity of subscribers to serve delay sensitive, context aware applications. To pull out the most of the potential, MEC has to be designed as infrastructure, to support many kind of IoT applications and their eco system, in addition to sufficiently management mechanism. In this context, various research and standardization efforts are ongoing. 
This paper provides a comprehensive survey of the state-of-the-art research efforts on MEC domain, with focus on the architectural proposals as infrastracture, the issue of the partitioning of processing among the user devices, edge servers, and a cloud, and the issue of the resource management.", "title": "" }, { "docid": "147270ce3991745440473e698bb1f0a8", "text": "In celiac disease (CD), the intestinal lesions can be patchy and partial villous atrophy may elude detection at standard endoscopy (SE). Narrow Band Imaging (NBI) system in combination with a magnifying endoscope (ME) is a simple tool able to obtain targeted biopsy specimens. The aim of the study was to assess the correlation between NBI-ME and histology in CD diagnosis and to compare diagnostic accuracy between NBI-ME and SE in detecting villous abnormalities in CD. Forty-four consecutive patients with suspected CD undergoing upper gastrointestinal endoscopy have been prospectively evaluated. Utilizing both SE and NBI-ME, observed surface patterns were compared with histological results obtained from biopsy specimens using the k-Cohen agreement coefficient. NBI-ME identified partial villous atrophy in 12 patients in whom SE was normal, with sensitivity, specificity, and accuracy of 100%, 92.6%, and 95%, respectively. The overall agreement between NBI-ME and histology was significantly higher when compared with SE and histology (kappa score: 0.90 versus 0.46; P = 0.001) in diagnosing CD. NBI-ME could help identify partial mucosal atrophy in the routine endoscopic practice, potentially reducing the need for blind biopsies. NBI-ME was superior to SE and can reliably predict in vivo the villous changes of CD.", "title": "" }, { "docid": "5ceb415b17cc36e9171ddc72a860ccc8", "text": "Word embeddings and convolutional neural networks (CNN) have attracted extensive attention in various classification tasks for Twitter, e.g. sentiment classification. However, the effect of the configuration used to generate the word embeddings on the classification performance has not been studied in the existing literature. In this paper, using a Twitter election classification task that aims to detect election-related tweets, we investigate the impact of the background dataset used to train the embedding models, as well as the parameters of the word embedding training process, namely the context window size, the dimensionality and the number of negative samples, on the attained classification performance. By comparing the classification results of word embedding models that have been trained using different background corpora (e.g. Wikipedia articles and Twitter microposts), we show that the background data should align with the Twitter classification dataset both in data type and time period to achieve significantly better performance compared to baselines such as SVM with TF-IDF. Moreover, by evaluating the results of word embedding models trained using various context window sizes and dimensionalities, we find that large context window and dimension sizes are preferable to improve the performance. However, the number of negative samples parameter does not significantly affect the performance of the CNN classifiers. Our experimental results also show that choosing the correct word embedding model for use with CNN leads to statistically significant improvements over various baselines such as random, SVM with TF-IDF and SVM with word embeddings. 
Finally, for out-of-vocabulary (OOV) words that are not available in the learned word embedding models, we show that a simple OOV strategy to randomly initialise the OOV words without any prior knowledge is sufficient to attain a good classification performance among the current OOV strategies (e.g. a random initialisation using statistics of the pre-trained word embedding models).", "title": "" }, { "docid": "e2ed500ce298ea175554af97bd0f2f98", "text": "The Climate CoLab is a system to help thousands of people around the world collectively develop plans for what humans should do about global climate change. This paper shows how the system combines three design elements (model-based planning, on-line debates, and electronic voting) in a synergistic way. The paper also reports early usage experience showing that: (a) the system is attracting a continuing stream of new and returning visitors from all over the world, and (b) the nascent community can use the platform to generate interesting and high quality plans to address climate change. These initial results indicate significant progress towards an important goal in developing a collective intelligence system—the formation of a large and diverse community collectively engaged in solving a single problem.", "title": "" }, { "docid": "258f246b97bba091e521cd265126191a", "text": "This paper presents a method of electric tunability using varactor diodes installed on SIR coaxial resonators and associated filters. Using varactor diodes connected in parallel, in combination with the SIR coaxial resonator, makes it possible, by increasing the number of varactor diodes, to expand the tuning range and maintain the unloaded quality factor of the resonator. A second order filter, tunable in center frequency, was built with these resonators, providing a very large tuning range.", "title": "" }, { "docid": "f614df1c1775cd4e2a6927fce95ffa46", "text": "In this paper we have designed and implemented (15, k) a BCH Encoder and decoder using VHDL for reliable data transfer in AWGN channel with multiple error correction control. The digital logic implementation of binary encoding of multiple error correcting BCH code (15, k) of length n=15 over GF (2 4 ) with irreducible primitive polynomial x 4 +x+1 is organized into shift register circuits. Using the cyclic codes, the reminder b(x) can be obtained in a linear (15-k) stage shift register with feedback connections corresponding to the coefficients of the generated polynomial. Three encoders are designed using VHDL to encode the single, double and triple error correcting BCH code (15, k) corresponding to the coefficient of generated polynomial. Information bit is transmitted in unchanged form up to K clock cycles and during this period parity bits are calculated in the LFSR then the parity bits are transmitted from k+1 to 15 clock cycles. Total 15-k numbers of parity bits with k information bits are transmitted in 15 code word. In multiple error correction method, we have implemented (15, 5 ,3 ) ,(15,7, 2) and (15, 11, 1) BCH encoder and decoder using VHDL and the simulation is done using Xilinx ISE 14.2. KeywordsBCH, BER, SNR, BCH Encoder, Decoder VHDL, Error Correction, AWGN, LFSR", "title": "" }, { "docid": "e17a1429f4ca9de808caaa842ee5a441", "text": "Large scale visual understanding is challenging, as it requires a model to handle the widely-spread and imbalanced distribution of 〈subject, relation, object〉 triples. 
In real-world scenarios with large numbers of objects and relations, some are seen very commonly while others are barely seen. We develop a new relationship detection model that embeds objects and relations into two vector spaces where both discriminative capability and semantic affinity are preserved. We learn a visual and a semantic module that map features from the two modalities into a shared space, where matched pairs of features have to discriminate against those unmatched, but also maintain close distances to semantically similar ones. Benefiting from that, our model can achieve superior performance even when the visual entity categories scale up to more than 80, 000, with extremely skewed class distribution. We demonstrate the efficacy of our model on a large and imbalanced benchmark based of Visual Genome that comprises 53, 000+ objects and 29, 000+ relations, a scale at which no previous work has been evaluated at. We show superiority of our model over competitive baselines on the original Visual Genome dataset with 80, 000+ categories. We also show state-of-the-art performance on the VRD dataset and the scene graph dataset which is a subset of Visual Genome with 200 categories.", "title": "" }, { "docid": "a17cf9c0d9be4f25b605b986b368445a", "text": "The amyloid-β peptide (Aβ) is a key protein in Alzheimer’s disease (AD) pathology. We previously reported in vitro evidence suggesting that Aβ is an antimicrobial peptide. We present in vivo data showing that Aβ expression protects against fungal and bacterial infections in mouse, nematode, and cell culture models of AD. We show that Aβ oligomerization, a behavior traditionally viewed as intrinsically pathological, may be necessary for the antimicrobial activities of the peptide. Collectively, our data are consistent with a model in which soluble Aβ oligomers first bind to microbial cell wall carbohydrates via a heparin-binding domain. Developing protofibrils inhibited pathogen adhesion to host cells. Propagating β-amyloid fibrils mediate agglutination and eventual entrapment of unatttached microbes. Consistent with our model, Salmonella Typhimurium bacterial infection of the brains of transgenic 5XFAD mice resulted in rapid seeding and accelerated β-amyloid deposition, which closely colocalized with the invading bacteria. Our findings raise the intriguing possibility that β-amyloid may play a protective role in innate immunity and infectious or sterile inflammatory stimuli may drive amyloidosis. These data suggest a dual protective/damaging role for Aβ, as has been described for other antimicrobial peptides.", "title": "" }, { "docid": "4c23cf2b996e8c2cf1adf915174d70a8", "text": "Our goal was to establish a quantitative real-time PCR (QRT-PCR) method to detect Bacteroides fragilis group and related organisms from clinical specimens. Compared to conventional anaerobic culture, QRT-PCR can provide accurate and more rapid detection and identification of B. fragilis group and similar species. B. fragilis group and related organisms are the most frequently isolated anaerobic pathogens from clinical samples. However, culture and phenotypic identification is quite time-consuming. We designed specific primers and probes based on the 16S rRNA gene sequences of Bacteroides caccae, Bacteroides eggerthii, B. 
fragilis, Bacteroides ovatus, Bacteroides stercoris, Bacteroides thetaiotaomicron, Bacteroides uniformis, Bacteroides vulgatus, Odoribacter splanchnicus (Bacteroides splanchnicus), Parabacteroides distasonis (Bacteroides distasonis) and Parabacteroides merdae (Bacteroides merdae), and detected these species by means of QRT-PCR in 400 human surgical wound infection samples or closed abscesses. The target bacteria were detected from 31 samples (8%) by culture, but from 132 samples (33%) by QRT-PCR (p-value < 0.001). B. uniformis was the most common species (44 positive samples) according to QRT-PCR while culture showed it to be B. fragilis (16 positive samples). Additionally, for each species QRT-PCR detected higher counts than culture did; this may reflect detecting DNA of dead organisms by QRT-PCR. QRT-PCR is a rapid and sensitive method which has great potential for detection of B. fragilis group and related organisms in wound samples.", "title": "" }, { "docid": "cff9a7f38ca6699b235c774232a56f54", "text": "This paper presents a Miniature Aerial Vehicle (MAV) capable of handsoff autonomous operation within indoor environments. Our prototype is a Quadrotor weighing approximately 600g, with a diameter of 550mm, which carries the necessary electronics for stability control, altitude control, collision avoidance and anti-drift control. This MAV is equipped with three rate gyroscopes, three accelerometers, one ultrasonic sensor, four infrared sensors, a high-speed motor controller and a flight computer. Autonomous flight tests have been carried out in a 7x6-m room.", "title": "" }, { "docid": "2cacc319693079eb420c51f602dc45ec", "text": "We provide code that produces beautiful poetry. Our sonnet-generation algorithm includes several novel elements that improve over the state-of-the-art, leading to rhythmic and inspiring poems. The work discussed here is the winner of the 2018 PoetiX Literary Turing Test Award for computer-generated poetry.", "title": "" } ]
scidocsrr
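A record like the one above (a query plus judged positive and negative passages) maps naturally onto pairwise training or evaluation data for a reranker. The sketch below is an illustrative flattening under that assumption; `row` stands for one parsed record with the fields shown above, and the helper name is hypothetical.

```python
# Illustrative helper (hypothetical name): flatten one record into
# (query, passage_text, label) triples, where label 1 = relevant, 0 = non-relevant.
from typing import Dict, List, Tuple

def row_to_pairs(row: Dict) -> List[Tuple[str, str, int]]:
    pairs: List[Tuple[str, str, int]] = []
    for passage in row["positive_passages"]:
        pairs.append((row["query"], passage["text"], 1))
    for passage in row["negative_passages"]:
        pairs.append((row["query"], passage["text"], 0))
    return pairs

# For the record above, this yields two positive pairs (the color-indexing and
# Zernike-moments abstracts) plus one negative pair per listed negative passage.
```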
e1d059ca39bac9ddb31f11d22c934cfe
RaBit EscAPE: a board game for computational thinking
[ { "docid": "31954ceaa223884fa27a9c446288b8a9", "text": "Computational thinking (CT) has been described as the use of abstraction, automation, and analysis in problem-solving [3]. We examine how these ways of thinking take shape for middle and high school youth in a set of NSF-supported programs. We discuss opportunities and challenges in both in-school and after-school contexts. Based on these observations, we present a \"use-modify-create\" framework, representing three phases of students' cognitive and practical activity in computational thinking. We recommend continued investment in the development of CT-rich learning environments, in educators who can facilitate their use, and in research on the broader value of computational thinking.", "title": "" } ]
[ { "docid": "6d4ba8028f71da5205351be3cff61d6e", "text": "Training robots to perceive, act and communicate using multiple modalities still represents a challenging problem, particularly if robots are expected to learn efficiently from small sets of example interactions. We describe a learning approach as a step in this direction, where we teach a humanoid robot how to play the game of noughts and crosses. Given that multiple multimodal skills can be trained to play this game, we focus our attention to training the robot to perceive the game, and to interact in this game. Our multimodal deep reinforcement learning agent perceives multimodal features and exhibits verbal and non-verbal actions while playing. Experimental results using simulations show that the robot can learn to win or draw up to 98% of the games. A pilot test of the proposed multimodal system for the targeted game—integrating speech, vision and gestures—reports that reasonable and fluent interactions can be achieved using the proposed approach.", "title": "" }, { "docid": "66fc8ff7073579314c50832a6f06c10d", "text": "Endodontic management of the permanent immature tooth continues to be a challenge for both clinicians and researchers. Clinical concerns are primarily related to achieving adequate levels of disinfection as 'aggressive' instrumentation is contraindicated and hence there exists a much greater reliance on endodontic irrigants and medicaments. The open apex has also presented obturation difficulties, notably in controlling length. Long-term apexification procedures with calcium hydroxide have proven to be successful in retaining many of these immature infected teeth but due to their thin dentinal walls and perceived problems associated with long-term placement of calcium hydroxide, they have been found to be prone to cervical fracture and subsequent tooth loss. In recent years there has developed an increasing interest in the possibility of 'regenerating' pulp tissue in an infected immature tooth. It is apparent that although the philosophy and hope of 'regeneration' is commendable, recent histologic studies appear to suggest that the calcified material deposited on the canal wall is bone/cementum rather than dentine, hence the absence of pulp tissue with or without an odontoblast layer.", "title": "" }, { "docid": "6871d514bca855a9f948939a3e8a02f7", "text": "The problem of tracking targets in the presence of reflections from sea or ground is addressed. Both types of reflections (specular and diffuse) are considered. Specular reflection causes large peak errors followed by an approximately constant bias in the monopulse ratio, while diffuse reflection has random variations which on the average generate a bias in the monopulse ratio. Expressions for the average error (bias) in the monopulse ratio due to specular and diffuse reflections and the corresponding variance in the presence of noise in the radar channels are derived. A maximum maneuver-based filter and a multiple model estimator are used for tracking. Simulation results for five scenarios, typical of sea skimmers, with Swerling III fluctuating radar cross sections (RCSs) indicate the significance and efficiency of the technique developed in this paper-a 65% reduction of the rms error in the target height estimate.", "title": "" }, { "docid": "2de75d4b75d2215a55538d71cc618dde", "text": "Experimental prediction of drug-target interactions is expensive, time-consuming and tedious. 
Fortunately, computational methods help narrow down the search space for interaction candidates to be further examined via wet-lab techniques. Nowadays, the number of attributes/features for drugs and targets, as well as the amount of their interactions, are increasing, making these computational methods inefficient or occasionally prohibitive. This motivates us to derive a reduced feature set for prediction. In addition, since ensemble learning techniques are widely used to improve the classification performance, it is also worthwhile to design an ensemble learning framework to enhance the performance for drug-target interaction prediction. In this paper, we propose a framework for drug-target interaction prediction leveraging both feature dimensionality reduction and ensemble learning. First, we conducted feature subspacing to inject diversity into the classifier ensemble. Second, we applied three different dimensionality reduction methods to the subspaced features. Third, we trained homogeneous base learners with the reduced features and then aggregated their scores to derive the final predictions. For base learners, we selected two classifiers, namely Decision Tree and Kernel Ridge Regression, resulting in two variants of ensemble models, EnsemDT and EnsemKRR, respectively. In our experiments, we utilized AUC (Area under ROC Curve) as an evaluation metric. We compared our proposed methods with various state-of-the-art methods under 5-fold cross validation. Experimental results showed EnsemKRR achieving the highest AUC (94.3%) for predicting drug-target interactions. In addition, dimensionality reduction helped improve the performance of EnsemDT. In conclusion, our proposed methods produced significant improvements for drug-target interaction prediction.", "title": "" }, { "docid": "70c1364ab09793493cd552d45b5a2602", "text": "INTRODUCTION\nFor safe and effective neonatal antibiotic therapy, knowledge of the pharmacokinetic parameters of antibacterial agents in neonates is a prerequisite. Fast maturational changes during the neonatal period influence pharmacokinetic and pharmacodynamic parameters and their variability. Consequently, the need for applying quantitative clinical pharmacology and determining optimal drug dosing regimens in neonates has become increasingly recognized.\n\n\nAREAS COVERED\nModern quantitative approaches, such as pharmacometrics, are increasingly utilized to characterize, understand and predict the pharmacokinetics of a drug and its effect, and to quantify the variability in the neonatal population. Individual factors, called covariates in modeling, are integrated in such approaches to explain inter-individual pharmacokinetic variability. Pharmacometrics has been shown to be a relevant tool to evaluate, optimize and individualize drug dosing regimens.\n\n\nEXPERT OPINION\nChallenges for optimal use of antibiotics in neonates can largely be overcome with quantitative clinical pharmacology practice. Clinicians should be aware that there is a next step to support the clinical decision-making based on clinical characteristics and therapeutic drug monitoring, through Bayesian-based modeling and simulation methods. 
Pharmacometric modeling and simulation approaches permit us to characterize population average, inter-subject and intra-subject variability of pharmacokinetic parameters such as clearance and volume of distribution, and to identify and quantify key factors that influence the pharmacokinetic behavior of antibiotics during the neonatal period.", "title": "" }, { "docid": "ab49abf9905090789e08beece7a98d1d", "text": "Inferring dense depth from stereo is crucial for several computer vision applications and Semi Global Matching (SGM) is often the preferred choice due to its good tradeoff between accuracy and computation requirements. Nevertheless, it suffers of two major issues: streaking artifacts caused by the Scanline Optimization (SO) approach, at the core of this algorithm, may lead to inaccurate results and the high memory footprint that may become prohibitive with high resolution images or devices with constrained resources. In this paper, we propose a smart scanline aggregation approach for SGM aimed at dealing with both issues. In particular, the contribution of this paper is threefold: i) leveraging on machine learning, proposes a novel generalpurpose confidence measure suited for any for stereo algorithm, based on O(1) features, that outperforms state of-the-art ii) taking advantage of this confidence measure proposes a smart aggregation strategy for SGM enabling significant improvements with a very small overhead iii) the overall strategy drastically reduces the memory footprint of SGM and, at the same time, improves its effectiveness and execution time. We provide extensive experimental results, including a cross-validation with multiple datasets (KITTI 2012, KITTI 2015 and Middlebury 2014).", "title": "" }, { "docid": "1bc4aabbc8aed4f3034358912d9728d5", "text": "Anjali Mishra1, Amit Mishra2 1 Master’s Degree Student, Electronics and Communication Engineering 2 Assistant Professor, Electronics and Communication Engineering 1,2 Vindhya Institute of Technology & Science, Jabalpur, Madhya Pradesh, India PIN – 482021 Email: 10309anjali@gmail.com , 2 amit12488@gmail.com ---------------------------------------------------------------------***--------------------------------------------------------------------Abstract Cognitive Radio presents a new opportunity area to explore for better utilization of a scarce natural resource like spectrum which is under focus due to increased presence of new communication devices, density of users and development of new data intensive applications. Cognitive Radio utilizes dynamic utilization of spectrum and is positioned as a promising solution to spectrum underutilization problem. However, reliability of a CR system in a noisy environment remains a challenge area. Especially manmade impulsive noise makes spectrum sensing difficult. In this paper we have presented a simulation model to analyze the effect of impulsive noise in Cognitive Radio system. Primary user detection in presence of impulsive noise is investigated for different noise thresholds and other signal parameters of interest using the unconventional power spectral density based detection approach. Also, possible alternatives for accurate primary user detection which are of interest for future research in this area are discussed for practical implementation.", "title": "" }, { "docid": "30740e33cdb2c274dbd4423e8f56405e", "text": "A conspicuous ability of the brain is to seamlessly assimilate and process spatial and temporal features of sensory stimuli. 
This ability is indispensable for the recognition of natural stimuli. Yet, a general computational framework for processing spatiotemporal stimuli remains elusive. Recent theoretical and experimental work suggests that spatiotemporal processing emerges from the interaction between incoming stimuli and the internal dynamic state of neural networks, including not only their ongoing spiking activity but also their 'hidden' neuronal states, such as short-term synaptic plasticity.", "title": "" }, { "docid": "abf6f1218543ce69b0095bba24f40ced", "text": "Evolution of cooperation and competition can appear when multiple adaptive agents share a biological, social, or technological niche. In the present work we study how cooperation and competition emerge between autonomous agents that learn by reinforcement while using only their raw visual input as the state representation. In particular, we extend the Deep Q-Learning framework to multiagent environments to investigate the interaction between two learning agents in the well-known video game Pong. By manipulating the classical rewarding scheme of Pong we show how competitive and collaborative behaviors emerge. We also describe the progression from competitive to collaborative behavior when the incentive to cooperate is increased. Finally we show how learning by playing against another adaptive agent, instead of against a hard-wired algorithm, results in more robust strategies. The present work shows that Deep Q-Networks can become a useful tool for studying decentralized learning of multiagent systems coping with high-dimensional environments.", "title": "" }, { "docid": "0583b36c9dfa3080ab94b16a7410b7cd", "text": "In this paper we present a simple yet effective approach to automatic OCR error detection and correction on a corpus of French clinical reports of variable OCR quality within the domain of foetopathology. While traditional OCR error detection and correction systems rely heavily on external information such as domain-specific lexicons, OCR process information or manually corrected training material, these are not always available given the constraints placed on using medical corpora. We therefore propose a novel method that only needs a representative corpus of acceptable OCR quality in order to train models. Our method uses recurrent neural networks (RNNs) to model sequential information on character level for a given medical text corpus. By inserting noise during the training process we can simultaneously learn the underlying (character-level) language model and as well as learning to detect and eliminate random noise from the textual input. The resulting models are robust to the variability of OCR quality but do not require additional, external information such as lexicons. We compare two different ways of injecting noise into the training process and evaluate our models on a manually corrected data set. We find that the best performing system achieves a 73% accuracy.", "title": "" }, { "docid": "86feba94dcc3e89097af2e50e5b7e908", "text": "Concerned about the Turing test’s ability to correctly evaluate if a system exhibits human-like intelligence, the Winograd Schema Challenge (WSC) has been proposed as an alternative. A Winograd Schema consists of a sentence and a question. The answers to the questions are intuitive for humans but are designed to be difficult for machines, as they require various forms of commonsense knowledge about the sentence. In this paper we demonstrate our progress towards addressing the WSC. 
We present an approach that identifies the knowledge needed to answer a challenge question, hunts down that knowledge from text repositories, and then reasons with them to come up with the answer. In the process we develop a semantic parser (www.kparser.org). We show that our approach works well with respect to a subset of Winograd schemas.", "title": "" }, { "docid": "472f2d8adb1c35fa7d4195323e53a8c2", "text": "Serverless computing promises to provide applications with cost savings and extreme elasticity. Unfortunately, slow application and container initialization can hurt common-case latency on serverless platforms. In this work, we analyze Linux container primitives, identifying scalability bottlenecks related to storage and network isolation. We also analyze Python applications from GitHub and show that importing many popular libraries adds about 100 ms to startup. Based on these findings, we implement SOCK, a container system optimized for serverless workloads. Careful avoidance of kernel scalability bottlenecks gives SOCK an 18× speedup over Docker. A generalized-Zygote provisioning strategy yields an additional 3× speedup. A more sophisticated three-tier caching strategy based on Zygotes provides a 45× speedup over SOCK without Zygotes. Relative to AWS Lambda and OpenWhisk, OpenLambda with SOCK reduces platform overheads by 2.8× and 5.3× respectively in an image processing case study.", "title": "" }, { "docid": "e60ff761b0acca53dcdad8fbf92f21a2", "text": "In this paper, we present a new, efficient displacement sensor using core-less planar coils that are magnetically coupled. The sensor consists of two planar stationary coils and one moving coil. The mutual inductance between the stationary coils and the moving coils are measured, and the displacement is computed. The sensor design was validated using numerical computation. Two prototype sensors of different dimensions were fabricated and tested. The first prototype sensor developed has a measurement range of 70 mm and an R.M.S. error of 0.8% and the second sensor has a measurement range of 56 mm and an R.M.S. error in measurement of 0.9%. The signal output from the sensor is made tolerant to errors due to variations in the vertical position of the moving coil. The new sensor is low in cost, easy to manufacture, and can be used in a number of industrial displacement sensing applications.", "title": "" }, { "docid": "5dce9610b3985fb7d9628d4c201ef66e", "text": "The recent advances in state estimation, perception, and navigation algorithms have significantly contributed to the ubiquitous use of quadrotors for inspection, mapping, and aerial imaging. To further increase the versatility of quadrotors, recent works investigated the use of an adaptive morphology, which consists of modifying the shape of the vehicle during flight to suit a specific task or environment. However, these works either increase the complexity of the platform or decrease its controllability. In this letter, we propose a novel, simpler, yet effective morphing design for quadrotors consisting of a frame with four independently rotating arms that fold around the main frame. To guarantee stable flight at all times, we exploit an optimal control strategy that adapts on the fly to the drone morphology. We demonstrate the versatility of the proposed adaptive morphology in different tasks, such as negotiation of narrow gaps, close inspection of vertical surfaces, and object grasping and transportation. 
The experiments are performed on an actual, fully autonomous quadrotor relying solely on onboard visual-inertial sensors and compute. No external motion tracking systems and computers are used. This is the first work showing stable flight without requiring any symmetry of the morphology.", "title": "" }, { "docid": "e870f2fe9a26b241bdeca882b6186169", "text": "Some people may be laughing when looking at you reading in your spare time. Some may be admired of you. And some may want be like you who have reading hobby. What about your own feel? Have you felt right? Reading is a need and a hobby at once. This condition is the on that will make you feel that you must read. If you know are looking for the book enPDFd recommender systems handbook as the choice of reading, you can find here.", "title": "" }, { "docid": "3473417f1701c82a4a06c00545437a3c", "text": "The eXtensible Markup Language (XML) and related technologies offer promise for (among other things) applying data management technology to documents, and also for providing a neutral syntax for interoperability among disparate systems. But like many new technologies, it has raised unrealistic expectations. We give an overview of XML and related standards, and offer opinions to help separate vaporware (with a chance of solidifying) from hype. In some areas, XML technologies may offer revolutionary improvements, such as in processing databases' outputs and extending data management to semi-structured data. For some goals, either a new class of DBMSs is required, or new standards must be built. For such tasks, progress will occur, but may be measured in ordinary years rather than Web time. For hierarchical formatted messages that do not need maximum compression (e.g., many military messages), XML may have considerable benefit. For interoperability among enterprise systems, XML's impact may be moderate as an improved basis for software, but great in generating enthusiasm for standardizing concepts and schemas.", "title": "" }, { "docid": "19c7311bd71763ff246ac598c174c379", "text": "Contextual word representations derived from large-scale neural language models are successful across a diverse set of NLP tasks, suggesting that they encode useful and transferable features of language. To shed light on the linguistic knowledge they capture, we study the representations produced by several recent pretrained contextualizers (variants of ELMo, the OpenAI transformer LM, and BERT) with a suite of sixteen diverse probing tasks. We find that linear models trained on top of frozen contextual representations are competitive with state-of-the-art task-specific models in many cases, but fail on tasks requiring fine-grained linguistic knowledge (e.g., conjunct identification). To investigate the transferability of contextual word representations, we quantify differences in the transferability of individual layers within contextualizers, especially between RNNs and transformers. For instance, higher layers of RNNs are more task-specific, while transformer layers do not exhibit the same monotonic trend. In addition, to better understand what makes contextual word representations transferable, we compare language model pretraining with eleven supervised pretraining tasks. For any given task, pretraining on a closely related task yields better performance than language model pretraining (which is better on average) when the pretraining dataset is fixed. 
However, language model pretraining on more data gives the best results.", "title": "" }, { "docid": "e36bc2b20c8fb5ba6d03672f7896a92c", "text": "We study the adaptation of convolutional neural networks to the complex temporal radio signal domain. We compare the efficacy of radio modulation classification using naively learned features against using expert features, which are currently used widely and well regarded in the field and we show significant performance improvements. We show that blind temporal learning on large and densely encoded time series using deep convolutional neural networks is viable and a strong candidate approach for this task.", "title": "" }, { "docid": "40f56ea7cb0894dde09729c98a038c93", "text": "Software Defined Networking (SDN) provides an environment to test and use custom ideas in networking. One of the areas that needs this flexibility is routing in networking. In this study we design and implement a custom intra-domain routing approach in an SDN environment. In SDN routing can be implemented as part of a controller or as an application on top of a controller. In this study we implemented a module in Floodlight controller v1.1 with OpenFlow 1.3 support. This module interacts with another custom module that monitors active bandwidth use of inter-switch links inside a network. Using the information provided by monitoring module, routing module uses available capacity in inter-switch links to determine widest path between any given two points. We tested and evaluated the developed system to show its efficiency. Newly developed module can be used in traffic engineering with additional control options.", "title": "" } ]
scidocsrr
ad1a7fdf546cafdb3e3f6cc6e98d7194
Unconscious Emotion
[ { "docid": "a52ac0402ca65a4e7a239c343f79df44", "text": "How does the brain cause positive affective reactions to sensory pleasure? An answer to pleasure causation requires knowing not only which brain systems are activated by pleasant stimuli, but also which systems actually cause their positive affective properties. This paper focuses on brain causation of behavioral positive affective reactions to pleasant sensations, such as sweet tastes. Its goal is to understand how brain systems generate 'liking,' the core process that underlies sensory pleasure and causes positive affective reactions. Evidence suggests activity in a subcortical network involving portions of the nucleus accumbens shell, ventral pallidum, and brainstem causes 'liking' and positive affective reactions to sweet tastes. Lesions of ventral pallidum also impair normal sensory pleasure. Recent findings regarding this subcortical network's causation of core 'liking' reactions help clarify how the essence of a pleasure gloss gets added to mere sensation. The same subcortical 'liking' network, via connection to brain systems involved in explicit cognitive representations, may also in turn cause conscious experiences of sensory pleasure.", "title": "" } ]
[ { "docid": "ee3b9382afc9455e53dd41d3725eb74a", "text": "Deep convolutional neural networks have liberated its extraordinary power on various tasks. However, it is still very challenging to deploy stateof-the-art models into real-world applications due to their high computational complexity. How can we design a compact and effective network without massive experiments and expert knowledge? In this paper, we propose a simple and effective framework to learn and prune deep models in an end-to-end manner. In our framework, a new type of parameter – scaling factor is first introduced to scale the outputs of specific structures, such as neurons, groups or residual blocks. Then we add sparsity regularizations on these factors, and solve this optimization problem by a modified stochastic Accelerated Proximal Gradient (APG) method. By forcing some of the factors to zero, we can safely remove the corresponding structures, thus prune the unimportant parts of a CNN. Comparing with other structure selection methods that may need thousands of trials or iterative fine-tuning, our method is trained fully end-to-end in one training pass without bells and whistles. We evaluate our method, Sparse Structure Selection with several state-of-the-art CNNs, and demonstrate very promising results with adaptive depth and width selection. Code is available at: https://github.com/huangzehao/ sparse-structure-selection.", "title": "" }, { "docid": "1fcaa9ebde2922c13ce42f8f90c9c6ba", "text": "Despite advances in HIV treatment, there continues to be great variability in the progression of this disease. This paper reviews the evidence that depression, stressful life events, and trauma account for some of the variation in HIV disease course. Longitudinal studies both before and after the advent of highly active antiretroviral therapies (HAART) are reviewed. To ensure a complete review, PubMed was searched for all English language articles from January 1990 to July 2007. We found substantial and consistent evidence that chronic depression, stressful events, and trauma may negatively affect HIV disease progression in terms of decreases in CD4 T lymphocytes, increases in viral load, and greater risk for clinical decline and mortality. More research is warranted to investigate biological and behavioral mediators of these psychoimmune relationships, and the types of interventions that might mitigate the negative health impact of chronic depression and trauma. Given the high rates of depression and past trauma in persons living with HIV/AIDS, it is important for healthcare providers to address these problems as part of standard HIV care.", "title": "" }, { "docid": "82d4b2aa3e3d3ec10425c6250268861c", "text": "Deep Neural Networks (DNNs) are typically trained by backpropagation in a batch learning setting, which requires the entire training data to be made available prior to the learning task. This is not scalable for many real-world scenarios where new data arrives sequentially in a stream form. We aim to address an open challenge of “Online Deep Learning” (ODL) for learning DNNs on the fly in an online setting. Unlike traditional online learning that often optimizes some convex objective function with respect to a shallow model (e.g., a linear/kernel-based hypothesis), ODL is significantly more challenging since the optimization of the DNN objective function is non-convex, and regular backpropagation does not work well in practice, especially for online learning settings. 
In this paper, we present a new online deep learning framework that attempts to tackle the challenges by learning DNN models of adaptive depth from a sequence of training data in an online learning setting. In particular, we propose a novel Hedge Backpropagation (HBP) method for online updating the parameters of DNN effectively, and validate the efficacy of our method on large-scale data sets, including both stationary and concept drifting scenarios.", "title": "" }, { "docid": "cf9b2356f2f4d6500aea3225fab0011e", "text": "Received Dec 9, 2016 Revised Jun 17, 2017 Accepted Sep 17, 2017 Digital understanding of Indian classical dance is least studied work, though it has been a part of Indian Culture from around 200BC. This work explores the possibilities of recognizing classical dance mudras in various dance forms in India. The images of hand mudras of various classical dances are collected form the internet and a database is created for this job. Histogram of oriented (HOG) features of hand mudras input the classifier. Support vector machine (SVM) classifies the HOG features into mudras as text messages. The mudra recognition frequency (MRF) is calculated for each mudra using graphical user interface (GUI) developed from the model. Popular feature vectors such as SIFT, SURF, LBP and HAAR are tested against HOG for precision and swiftness. This work helps new learners and dance enthusiastic people to learn and understand dance forms and related information on their mobile devices. Keyword:", "title": "" }, { "docid": "73beec89ce06abfe10edb9e446b8b2f8", "text": "Pinching is an important capability for mobile robots handling small items or tools. Successful pinching requires force-closure and, in underwater applications, gentle suction flow at the fingertips can dramatically improve the handling of light objects by counteracting the negative effects of water lubrication and enhancing friction. In addition, monitoring the flow gives a measure of suction-engagement and can act as a binary tactile sensor. Although a suction system adds complexity, elastic tubes can double as passive spring elements for desired finger kinematics.", "title": "" }, { "docid": "84470a2a19c09a3c5d898f37f196dddf", "text": "Breast cancer is the leading type of malignant tumor observed in women and the effective treatment depends on its early diagnosis. Diagnosis from histopathological images remains the \"gold standard\" for breast cancer. The complexity of breast cell histopathology (BCH) images makes reliable segmentation and classification hard. In this paper, an automatic quantitative image analysis technique of BCH images is proposed. For the nuclei segmentation, top-bottom hat transform is applied to enhance image quality. Wavelet decomposition and multi-scale region-growing (WDMR) are combined to obtain regions of interest (ROIs) thereby realizing precise location. A double-strategy splitting model (DSSM) containing adaptive mathematical morphology and Curvature Scale Space (CSS) corner detection method is applied to split overlapped cells for better accuracy and robustness. For the classification of cell nuclei, 4 shape-based features and 138 textural features based on color spaces are extracted. Optimal feature set is obtained by support vector machine (SVM) with chain-like agent genetic algorithm (CAGA). The proposed method was tested on 68 BCH images containing more than 3600 cells. Experimental results show that the mean segmentation sensitivity was 91.53% (74.05%) and specificity was 91.64% (74.07%). 
The classification performance of normal and malignant cell images can achieve 96.19% (70.31%) for accuracy, 99.05% (70.27%) for sensitivity and 93.33% (70.81%) for specificity. & 2015 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "b9b267cc96e2cb8b31ac63a278757dec", "text": "Evolutionary considerations suggest aging is caused not by active gene programming but by evolved limitations in somatic maintenance, resulting in a build-up of damage. Ecological factors such as hazard rates and food availability influence the trade-offs between investing in growth, reproduction, and somatic survival, explaining why species evolved different life spans and why aging rate can sometimes be altered, for example, by dietary restriction. To understand the cell and molecular basis of aging is to unravel the multiplicity of mechanisms causing damage to accumulate and the complex array of systems working to keep damage at bay.", "title": "" }, { "docid": "a094869c9f79d0fccbc6892a345fec8b", "text": "Recent years have seen an exploration of data volumes from a myriad of IoT devices, such as various sensors and ubiquitous cameras. The deluge of IoT data creates enormous opportunities for us to explore the physical world, especially with the help of deep learning techniques. Traditionally, the Cloud is the option for deploying deep learning based applications. However, the challenges of Cloud-centric IoT systems are increasing due to significant data movement overhead, escalating energy needs, and privacy issues. Rather than constantly moving a tremendous amount of raw data to the Cloud, it would be beneficial to leverage the emerging powerful IoT devices to perform the inference task. Nevertheless, the statically trained model could not efficiently handle the dynamic data in the real in-situ environments, which leads to low accuracy. Moreover, the big raw IoT data challenges the traditional supervised training method in the Cloud. To tackle the above challenges, we propose In-situ AI, the first Autonomous and Incremental computing framework and architecture for deep learning based IoT applications. We equip deep learning based IoT system with autonomous IoT data diagnosis (minimize data movement), and incremental and unsupervised training method (tackle the big raw IoT data generated in ever-changing in-situ environments). To provide efficient architectural support for this new computing paradigm, we first characterize the two In-situ AI tasks (i.e. inference and diagnosis tasks) on two popular IoT devices (i.e. mobile GPU and FPGA) and explore the design space and tradeoffs. Based on the characterization results, we propose two working modes for the In-situ AI tasks, including Single-running and Co-running modes. Moreover, we craft analytical models for these two modes to guide the best configuration selection. We also develop a novel two-level weight shared In-situ AI architecture to efficiently deploy In-situ tasks to IoT node. Compared with traditional IoT systems, our In-situ AI can reduce data movement by 28-71%, which further yields 1.4X-3.3X speedup on model update and contributes to 30-70% energy saving.", "title": "" }, { "docid": "81a45cb4ca02c38839a81ad567eb1491", "text": "Big data is often mined using clustering algorithms. Density-Based Spatial Clustering of Applications with Noise (DBSCAN) is a popular spatial clustering algorithm. However, it is computationally expensive and thus for clustering big data, parallel processing is required. 
The two prevalent paradigms for parallel processing are High-Performance Computing (HPC) based on Message Passing Interface (MPI) or Open Multi-Processing (OpenMP) and the newer big data frameworks such as Apache Spark or Hadoop. This report surveys for these two different paradigms publicly available implementations that aim at parallelizing DBSCAN and compares their performance. As a result, it is found that the big data implementations are not yet mature and in particular for skewed data, the implementation’s decomposition of the input data into parallel tasks has a huge influence on the performance in terms of running time.", "title": "" }, { "docid": "9d08b5e74b62a66c8521f2c6dc254920", "text": "A recognition with a large-scale network is simulated on a PDP-11/34 minicomputer and is shown to have a great capability for visual pattern recognition. The model consists of nine layers of cells. The authors demonstrate that the model can be trained to recognize handwritten Arabic numerals even with considerable deformations in shape. A learning-with-a-teacher process is used for the reinforcement of the modifiable synapses in the new large-scale model, instead of the learning-without-a-teacher process applied to a previous model. The authors focus on the mechanism for pattern recognition rather than that for self-organization.", "title": "" }, { "docid": "bd6757398f7e612efa66bf60f81d4fa7", "text": "In this paper we consider the problem of human pose estimation from a single still image. We propose a novel approach where each location in the image votes for the position of each keypoint using a convolutional neural net. The voting scheme allows us to utilize information from the whole image, rather than rely on a sparse set of keypoint locations. Using dense, multi-target votes, not only produces good keypoint predictions, but also enables us to compute image-dependent joint keypoint probabilities by looking at consensus voting. This differs from most previous methods where joint probabilities are learned from relative keypoint locations and are independent of the image. We finally combine the keypoints votes and joint probabilities in order to identify the optimal pose configuration. We show our competitive performance on the MPII Human Pose and Leeds Sports Pose datasets.", "title": "" }, { "docid": "062f6ecc9d26310de82572f500cb5f05", "text": "The processes underlying environmental, economic, and social unsustainability derive in part from the food system. Building sustainable food systems has become a predominating endeavor aiming to redirect our food systems and policies towards better-adjusted goals and improved societal welfare. Food systems are complex social-ecological systems involving multiple interactions between human and natural components. Policy needs to encourage public perception of humanity and nature as interdependent and interacting. The systemic nature of these interdependencies and interactions calls for systems approaches and integrated assessment tools. Identifying and modeling the intrinsic properties of the food system that will ensure its essential outcomes are maintained or enhanced over time and across generations, will help organizations and governmental institutions to track progress towards sustainability, and set policies that encourage positive transformations. 
This paper proposes a conceptual model that articulates crucial vulnerability and resilience factors to global environmental and socio-economic changes, postulating specific food and nutrition security issues as priority outcomes of food systems. By acknowledging the systemic nature of sustainability, this approach allows consideration of causal factor dynamics. In a stepwise approach, a logical application is schematized for three Mediterranean countries, namely Spain, France, and Italy.", "title": "" }, { "docid": "ca1d5c5da03fb9c3b6f7c023dc8f9e9c", "text": "Recent introduction of all-oral direct-acting antiviral (DAA) treatment has revolutionized care of patients with chronic hepatitis C virus infection. Because patients with different liver disease stages have been treated with great success including those awaiting liver transplantation, therapy has been extended to patients with hepatocellular carcinoma as well. From observational studies among compensated cirrhotic hepatitis C patients treated with interferon-containing regimens, it would have been expected that the rate of hepatocellular carcinoma occurrence is markedly decreased after a sustained virological response. However, recently 2 studies have been published reporting markedly increased rates of tumor recurrence and occurrence after viral clearance with DAA agents. Over the last decades, it has been established that chronic antigen stimulation during persistent infection with hepatitis C virus is associated with continuous activation and impaired function of several immune cell populations, such as natural killer cells and virus-specific T cells. This review therefore focuses on recent studies evaluating the restoration of adaptive and innate immune cell populations after DAA therapy in patients with chronic hepatitis C virus infection in the context of the immune responses in hepatocarcinogenesis.", "title": "" }, { "docid": "0950052c92b4526c253acc0d4f0f45a0", "text": "Pictogram communication is successful when participants at both end of the communication channel share a common pictogram interpretation. Not all pictograms carry universal interpretation, however; the issue of ambiguous pictogram interpretation must be addressed to assist pictogram communication. To unveil the ambiguity possible in pictogram interpretation, we conduct a human subject experiment to identify culture-specific criteria employed by humans by detecting cultural differences in pictogram interpretations. Based on the findings, we propose a categorical semantic relevance measure which calculates how relevant a pictogram is to a given interpretation in terms of a given pictogram category. The proposed measure is applied to categorized pictogram interpretations to enhance pictogram retrieval performance. The WordNet, the ChaSen, and the EDR Electronic Dictionary registered to the Language Grid are utilized to merge synonymous pictogram interpretations and to categorize pictogram interpretations into super-concept categories. We show how the Language Grid can assist the crosscultural research process.", "title": "" }, { "docid": "9eee83bc5d6a9918a003d48351df04db", "text": "Buffer overflow attacks are known to be the most common type of attacks that allow attackers to hijack a remote system by sending a specially crafted packet to a vulnerable network application running on it. 
A comprehensive defense strategy against such attacks should include (1) an attack detection component that determines the fact that a program is compromised and prevents the attack from further propagation, (2) an attack identification component that identifies attack packets so that one can block such packets in the future, and (3) an attack repair component that restores the compromised application’s state to that before the attack and allows it to continue running normally. Over the last decade, a significant amount of research has been vested in the systems that can detect buffer overflow attacks either statically at compile time or dynamically at run time. However, not much effort is spent on automated attack packet identification or attack repair. In this paper we present a unified solution to the three problems mentioned above. We implemented this solution as a GCC compiler extension called DIRA that transforms a program’s source code so that the resulting program can automatically detect any buffer overflow attack against it, repair the memory damage left by the attack, and identify the actual attack packet(s). We used DIRA to compile several network applications with known vulnerabilities and tested DIRA’s effectiveness by attacking the transformed programs with publicly available exploit code. The DIRA-compiled programs were always able to detect the attacks, identify the attack packets and most often repair themselves to continue normal execution. The average run-time performance overhead for attack detection and attack repair/identification is 4% and 25% respectively.", "title": "" }, { "docid": "32d79366936e301c44ae4ac11784e9d8", "text": "A vast literature describes transformational leadership in terms of leader having charismatic and inspiring personality, stimulating followers, and providing them with individualized consideration. A considerable empirical support exists for transformation leadership in terms of its positive effect on followers with respect to criteria like effectiveness, extra role behaviour and organizational learning. This study aims to explore the effect of transformational leadership characteristics on followers’ job satisfaction. Survey method was utilized to collect the data from the respondents. The study reveals that individualized consideration and intellectual stimulation affect followers’ job satisfaction. However, intellectual stimulation is positively related with job satisfaction and individualized consideration is negatively related with job satisfaction. Leader’s charisma or inspiration was found to be having no affect on the job satisfaction. The three aspects of transformational leadership were tested against job satisfaction through structural equation modeling using Amos.", "title": "" }, { "docid": "eafa6403e38d2ceb63ef7c00f84efe77", "text": "We propose a novel approach to learning distributed representations of variable-length text sequences in multiple languages simultaneously. Unlike previous work which often derive representations of multi-word sequences as weighted sums of individual word vectors, our model learns distributed representations for phrases and sentences as a whole. Our work is similar in spirit to the recent paragraph vector approach but extends to the bilingual context so as to efficiently encode meaning-equivalent text sequences of multiple languages in the same semantic space. 
Our learned embeddings achieve state-of-the-art performance in the often-used cross-lingual document classification task (CLDC) with an accuracy of 92.7 for English to German and 91.5 for German to English. By learning text sequence representations as a whole, our model performs equally well in both classification directions in the CLDC task, which past work did not achieve.", "title": "" }, { "docid": "051b819eeb22e71eff526f1aa7248db6", "text": "Technical studies on automated driving of passenger cars were started in the 1950s, but those on heavy trucks were started in the mid-1990s, and only a few projects have dealt with truck automation, which include “Chauffeur” within the EU project T-TAP from the mid-1990s, truck automation by California PATH from around 2000, “KONVOI” in Germany from 2005, and “Energy ITS” by Japan from 2008. The objectives of truck automation are energy saving and enhanced transportation capacity by platooning, and eventually possible reduction of personnel cost by unmanned operation of following vehicles. The sensing technologies for automated vehicle control are computer vision, radar, lidar, laser scanners, localization by GNSS, and vehicle to vehicle communications. Experiments of platooning of three or four heavy trucks have shown the effectiveness of platooning in achieving energy saving due to short gaps between vehicles.", "title": "" }, { "docid": "3eb8a99236905f59af8a32e281189925", "text": "F2FS is a Linux file system designed to perform well on modern flash storage devices. The file system builds on append-only logging and its key design decisions were made with the characteristics of flash storage in mind. This paper describes the main design ideas, data structures, algorithms and the resulting performance of F2FS. Experimental results highlight the desirable performance of F2FS; on a state-of-the-art mobile system, it outperforms EXT4 under synthetic workloads by up to 3.1× (iozone) and 2× (SQLite). It reduces elapsed time of several realistic workloads by up to 40%. On a server system, F2FS is shown to perform better than EXT4 by up to 2.5× (SATA SSD) and 1.8× (PCIe SSD).", "title": "" }, { "docid": "8cb73e631ab6957bb9866ead9670441b", "text": "This paper explores a robust μ-synthesis control scheme for structural resonance vibration suppression of high-speed rotor systems supported by active magnetic bearings (AMBs) in the magnetically suspended double-gimbal control moment gyro (MSDGCMG). The derivation of a nominal linearized model about an operating point was presented. Sine sweep test was conducted on each component of AMB control system to obtain parameter variations and high-frequency unmodeled dynamics, including the structural resonance modes. A fictitious uncertainty block was introduced to represent the performance requirements for the augmented system. Finally, D-K iteration procedure was employed to solve the robust μ-controller. Rotor run-up experiments on the originally developed MSDGCMG prototype show that the designed μ-controller has a good performance for vibration rejection of structural resonance mode with the excitation of coupling torques. Further investigations indicate that the proposed method can also ensure the robust stability and performance of high-speed rotor system subject to the reaction of a large gyro torque.", "title": "" } ]
scidocsrr
9bc8ad922ad52676fbad7d2ba49b09d3
The Yale cTAKES extensions for document classification: architecture and application
[ { "docid": "8c308305b4a04934126c4746c8333b52", "text": "The authors report on the development of the Cancer Tissue Information Extraction System (caTIES)--an application that supports collaborative tissue banking and text mining by leveraging existing natural language processing methods and algorithms, grid communication and security frameworks, and query visualization methods. The system fills an important need for text-derived clinical data in translational research such as tissue-banking and clinical trials. The design of caTIES addresses three critical issues for informatics support of translational research: (1) federation of research data sources derived from clinical systems; (2) expressive graphical interfaces for concept-based text mining; and (3) regulatory and security model for supporting multi-center collaborative research. Implementation of the system at several Cancer Centers across the country is creating a potential network of caTIES repositories that could provide millions of de-identified clinical reports to users. The system provides an end-to-end application of medical natural language processing to support multi-institutional translational research programs.", "title": "" } ]
[ { "docid": "f918ffb39733fa8d3b360187d922161d", "text": "BACKGROUND\nRoux-en-Y gastric bypass (RYGB) causes extensive changes in gastrointestinal anatomy and leads to reduced appetite and large weight loss, which partly is due to an exaggerated release of anorexigenic gut hormones.\n\n\nMETHODS\nTo examine whether the altered passage of foods through the gastrointestinal tract after RYGB could be responsible for the changes in gut hormone release, we studied gastrointestinal motility with a scintigraphic technique as well as the secretion of the gut hormones glucagon-like peptide (GLP)-1 and peptide YY3-36 (PYY3-36) in 17 patients >1 year after RYGB and in nine healthy control subjects.\n\n\nKEY RESULTS\nAt meal completion, a smaller fraction of liquid and solid radiolabeled marker was retained in the pouch of RYGB patients than in the stomach of control subjects (P = 0.002 and P < 0.001, respectively). Accordingly, pouch emptying in patients was faster than gastric emptying in control subjects (P < 0.001 and P = 0.004, respectively, for liquid and solid markers). For the solid marker, small intestinal transit was slower in patients than control subjects (P = 0.034). Colonic transit rate did not differ between the groups. GLP-1 and PYY3-36 secretion was increased in patients compared to control subjects and fast pouch emptying of the liquid marker was associated with high gut hormone secretion.\n\n\nCONCLUSIONS & INFERENCES\nAfter RYGB, the bulk of foods pass without hindrance into the small intestine, while the small intestinal transit is prolonged. The rapid exposure of the gut epithelium contributes to the exaggerated release of GLP-1 and PYY3-36 after RYGB.", "title": "" }, { "docid": "23d8b5456eb169d24b58b76d8af42c82", "text": "Learning interpretable features from complex multilayer networks is a challenging and important problem. The need for such representations is particularly evident in multilayer networks of the brain, where nodal characteristics may help model and differentiate regions of the brain according to individual, cognitive task, or disease. Motivated by this problem, we introduce the multi-node2vec algorithm, an efficient and scalable feature engineering method that automatically learns continuous node feature representations from multilayer networks. Multi-node2vec relies upon a second-order random walk sampling procedure that efficiently explores the inner- and intra-layer ties of the observed multilayer network and is utilized to identify multilayer neighborhoods. Maximum likelihood estimators of the nodal features are identified through the use of the Skip-gram neural network model on the collection of sampled neighborhoods. We investigate the conditions under which multi-node2vec is an approximation of a closed-form matrix factorization problem. We demonstrate the efficacy of multi-node2vec on a multilayer functional brain network from resting state fMRI scans over a group of 74 healthy individuals. We find that multi-node2vec outperforms contemporary methods on complex networks, and that multi-node2vec identifies nodal characteristics that closely associate with the functional organization of the brain.", "title": "" }, { "docid": "f437bc0ca447dd771bec547e28530415", "text": "We focus on learning open-vocabulary visual classifiers, which scale up to a large portion of natural language vocabulary (e.g., over tens of thousands of classes). 
In particular, the training data are large-scale weakly labeled Web images since it is difficult to acquire sufficient well-labeled data at this category scale. In this paper, we propose a novel online learning paradigm towards this challenging task. Different from traditional N-way independent classifiers that generally fail to handle the extremely sparse and inter-related labels, our classifiers learn from continuous label embeddings discovered by collaboratively decomposing the sparse image-label matrix. Leveraging on the structure of the proposed collaborative learning formulation, we develop an efficient online algorithm that can jointly learn the label embeddings and visual classifiers. The algorithm can learn over 30,000 classes of 1,000 training images within 1 second on a standard GPU. Extensive experimental results on four benchmarks demonstrate the effectiveness of our method.", "title": "" }, { "docid": "59a4471695fff7d42f49d94fc9755772", "text": "We introduce a computationally efficient algorithm for multi-object tracking by detection that addresses four main challenges: appearance similarity among targets, missing data due to targets being out of the field of view or occluded behind other objects, crossing trajectories, and camera motion. The proposed method uses motion dynamics as a cue to distinguish targets with similar appearance, minimize target mis-identification and recover missing data. Computational efficiency is achieved by using a Generalized Linear Assignment (GLA) coupled with efficient procedures to recover missing data and estimate the complexity of the underlying dynamics. The proposed approach works with tracklets of arbitrary length and does not assume a dynamical model a priori, yet it captures the overall motion dynamics of the targets. Experiments using challenging videos show that this framework can handle complex target motions, non-stationary cameras and long occlusions, on scenarios where appearance cues are not available or poor.", "title": "" }, { "docid": "db31a8887bfc1b24c2d2c2177d4ef519", "text": "The equilibrium microstructure of a fluid may only be described exactly in terms of a complete set of n-body atomic distribution functions, where n is 1, 2, 3, . . . , N, and N is the total number of particles in the system. The higher order functions, i.e. n > 2, are complex and practically inaccessible, but considerable qualitative information can already be derived from studies of the mean radial occupation function n(r), defined as the average number of atoms in a sphere of radius r centred on a particular atom. The function for a perfect gas of non-interacting particles is", "title": "" }, { "docid": "86dc15207ddb57fb6f247017c9ea6abd", "text": "The distribution of microfilaments and microtubules was examined in pleopod tegumental glands of male and female lobsters (Homarus americanus). Glands were labeled with rhodamine-phalloidin or antibodies to tubulin, the antitubulin antibodies being demonstrated with secondary antibodies conjugated to fluorescein. The labeled glands were then examined using either a Zeiss epifluorescence microscope or a Bio-Rad confocal scanning microscope. Some glands were examined using transmission electron microscopy. Glands from males and females showed the same distribution of microfilaments and microtubules, which appeared most abundantly in the common locus and around the main duct of each rosette. 
F-actin was specifically found in the central lobe of the central cell, around the ductules of the common locus, surrounding the finger-like projection of secretory cells, and encircling the main duct draining the rosette. Microtubules were most abundant in the finger-like projections of the secretory cells, in the cytoplasm of the central cell and canal cell, and around the main duct of the canal cell. Contraction of the microfilaments may facilitate movement of secretory product from the rosette, while the microtubules may provide structural support for the attenuated finger-like projections and the main duct. Electron micrographs suggest there is some interaction between these two elements of the cytoskeleton.", "title": "" }, { "docid": "9244acef01812d757639bd4f09631c22", "text": "This paper describes the results of the first shared task on Multilingual Emoji Prediction, organized as part of SemEval 2018. Given the text of a tweet, the task consists of predicting the most likely emoji to be used along such tweet. Two subtasks were proposed, one for English and one for Spanish, and participants were allowed to submit a system run to one or both subtasks. In total, 49 teams participated in the English subtask and 22 teams submitted a system run to the Spanish subtask. Evaluation was carried out emoji-wise, and the final ranking was based on macro F-Score. Data and further information about this task can be found at https://competitions. codalab.org/competitions/17344.", "title": "" }, { "docid": "f16b013db80ad448ab31040f75b8bcb2", "text": "In the world of recommender systems, it is a common practice to use public available datasets from different application environments (e.g. MovieLens, Book-Crossing, or Each-Movie) in order to evaluate recommendation algorithms. These datasets are used as benchmarks to develop new recommendation algorithms and to compare them to other algorithms in given settings. In this paper, we explore datasets that capture learner interactions with tools and resources. We use the datasets to evaluate and compare the performance of different recommendation algorithms for learning. We present an experimental comparison of the accuracy of several collaborative filtering algorithms applied to these TEL datasets and elaborate on implicit relevance data, such as downloads and tags, that can be used to improve the performance of recommendation algorithms.", "title": "" }, { "docid": "c6cb6b1cb964d0e2eb8ad344ee4a62b3", "text": "Associative classifiers have proven to be very effective in classification problems. Unfortunately, the algorithms used for learning these classifiers are not able to adequately manage big data because of time complexity and memory constraints. To overcome such drawbacks, we propose a distributed association rule-based classification scheme shaped according to the MapReduce programming model. The scheme mines classification association rules (CARs) using a properly enhanced, distributed version of the well-known FP-Growth algorithm. Once CARs have been mined, the proposed scheme performs a distributed rule pruning. The set of survived CARs is used to classify unlabeled patterns. The memory usage and time complexity for each phase of the learning process are discussed, and the scheme is evaluated on seven real-world big datasets on the Hadoop framework, characterizing its scalability and achievable speedup on small computer clusters. 
The proposed solution for associative classifiers turns out to be suitable to practically address", "title": "" }, { "docid": "6325188ee21b6baf65dbce6855c19bc2", "text": "A knowledgeable observer of a game of football (soccer) can make a subjective evaluation of the quality of passes made between players during the game, such as rating them as Good, OK, or Bad. In this article, we consider the problem of producing an automated system to make the same evaluation of passes and present a model to solve this problem.\n Recently, many professional football leagues have installed object tracking systems in their stadiums that generate high-resolution and high-frequency spatiotemporal trajectories of the players and the ball. Beginning with the thesis that much of the information required to make the pass ratings is available in the trajectory signal, we further postulated that using complex data structures derived from computational geometry would enable domain football knowledge to be included in the model by computing metric variables in a principled and efficient manner. We designed a model that computes a vector of predictor variables for each pass made and uses machine learning techniques to determine a classification function that can accurately rate passes based only on the predictor variable vector.\n Experimental results show that the learned classification functions can rate passes with 90.2% accuracy. The agreement between the classifier ratings and the ratings made by a human observer is comparable to the agreement between the ratings made by human observers, and suggests that significantly higher accuracy is unlikely to be achieved. Furthermore, we show that the predictor variables computed using methods from computational geometry are among the most important to the learned classifiers.", "title": "" }, { "docid": "78007b3276e795d76b692b40c4808c51", "text": "The construct of trait emotional intelligence (trait EI or trait emotional self-efficacy) provides a comprehensive operationalization of emotion-related self-perceptions and dispositions. In the first part of the present study (N=274, 92 males), we performed two joint factor analyses to determine the location of trait EI in Eysenckian and Big Five factor space. The results showed that trait EI is a compound personality construct located at the lower levels of the two taxonomies. In the second part of the study, we performed six two-step hierarchical regressions to investigate the incremental validity of trait EI in predicting, over and above the Giant Three and Big Five personality dimensions, six distinct criteria (life satisfaction, rumination, two adaptive and two maladaptive coping styles). Trait EI incrementally predicted four criteria over the Giant Three and five criteria over the Big Five. The discussion addresses common questions about the operationalization of emotional intelligence as a personality trait.", "title": "" }, { "docid": "46c8336f395d04d49369d406f41b0602", "text": "Several RGB-D datasets have been publicized over the past few years for facilitating research in computer vision and robotics. However, the lack of comprehensive and fine-grained annotation in these RGB-D datasets has posed challenges to their widespread usage. In this paper, we introduce SceneNN, an RGB-D scene dataset consisting of 100 scenes. All scenes are reconstructed into triangle meshes and have per-vertex and per-pixel annotation. 
We further enriched the dataset with fine-grained information such as axis-aligned bounding boxes, oriented bounding boxes, and object poses. We used the dataset as a benchmark to evaluate the state-of-the-art methods on relevant research problems such as intrinsic decomposition and shape completion. Our dataset and annotation tools are available at http://www.scenenn.net.", "title": "" }, { "docid": "06909d0ffbc52e14e0f6f1c9ffe29147", "text": "DistributedLog is a high performance, strictly ordered, durably replicated log. It is multi-tenant, designed with a layered architecture that allows reads and writes to be scaled independently and supports OLTP, stream processing and batch workloads. It also supports a globally synchronous consistent replicated log spanning multiple geographically separated regions. This paper describes how DistributedLog is structured, its components and the rationale underlying various design decisions. We have been using DistributedLog in production for several years, supporting applications ranging from transactional database journaling, real-time data ingestion, and analytics to general publish-subscribe messaging.", "title": "" }, { "docid": "457ea53f0a303e8eba8847422ef61e5a", "text": "Tele-operated hydraulic underwater manipulators are commonly used to perform remote underwater intervention tasks such as weld inspection or mating of connectors. Automation of these tasks to use tele-assistance requires a suitable hybrid position/force control scheme, to specify simultaneously the robot motion and contact forces. Classical linear control does not allow for the highly non-linear and time varying robot dynamics in this situation. Adequate control performance requires more advanced controllers. This paper presents and compares two different advanced hybrid control algorithms. The first is based on a modified Variable Structure Control (VSC-HF) with a virtual environment, and the second uses a multivariable self-tuning adaptive controller. A direct comparison of the two proposed control schemes is performed in simulation, using a model of the dynamics of a hydraulic underwater manipulator (a Slingsby TA9) in contact with a surface. These comparisons look at the performance of the controllers under a wide variety of operating conditions, including different environment stiffnesses, positions of the robot and", "title": "" }, { "docid": "c817e872fa02f93ae967168a5aa15d20", "text": "We introduce an SIR particle filter for tracking civilian targets including vehicles and pedestrians in dual-band midwave/longwave infrared imagery as well as a novel dual-band track consistency check for triggering appearance model updates. Because of the paucity of available dual-band data, we constructed a custom sensor to acquire the test sequences. The proposed algorithm is robust against magnification changes, aspect changes, and clutter and successfully tracked all 17 cases tested, including two partial occlusions. Future work is needed to comprehensively evaluate performance of the algorithm against state-of-the-art video trackers, especially considering the relatively small number of previous dual-band tracking results that have appeared.", "title": "" }, { "docid": "b1958bbb9348a05186da6db649490cdd", "text": "Fourier ptychography (FP) utilizes illumination control and computational post-processing to increase the resolution of bright-field microscopes. In effect, FP extends the fixed numerical aperture (NA) of an objective lens to form a larger synthetic system NA. 
Here, we build an FP microscope (FPM) using a 40X 0.75NA objective lens to synthesize a system NA of 1.45. This system achieved a two-slit resolution of 335 nm at a wavelength of 632 nm. This resolution closely adheres to theoretical prediction and is comparable to the measured resolution (315 nm) associated with a standard, commercially available 1.25 NA oil immersion microscope. Our work indicates that Fourier ptychography is an attractive method to improve the resolution-versus-NA performance, increase the working distance, and enlarge the field-of-view of high-resolution bright-field microscopes by employing lower NA objectives.", "title": "" }, { "docid": "7c254a96816b8ad1aa68a9a4927b3764", "text": "The purpose of this study is to explore cost and management accounting practices utilized by manufacturing companies operating in Istanbul, Turkey. The sample of the study consists of 61 companies, containing both small and medium-sized enterprises, and large companies. The data collection methodology of the study is questionnaire survey. The content of the questionnaire survey is based on several previous studies. The major findings of the study are as follows: the most widely used product costing method is job costing; the complexity in production poses as the highest ranking difficulty in product costing; the most widely used three overhead allocation bases are prime costs, units produced, and direct labor cost; pricing decisions is the most important area where costing information is used; overall mean of the ratio of overhead to total cost is 34.48 percent for all industries; and the most important three management accounting practices are budgeting, planning and control, and cost-volume-profit analysis. Furthermore, decreasing profitability, increasing costs and competition, and economic crises are the factors, which increase the perceived importance of cost accounting. The findings indicate that companies perceive traditional management accounting tools still important. However, new management accounting practices such as strategic planning, and transfer pricing are perceived less important than traditional ones. Therefore, companies need to improve themselves in this aspect.", "title": "" }, { "docid": "8e130ed8d69cc677c6b44e830b16f101", "text": "We present a simple proof of the Littlewood-Richardson rule using a sign-reversing involution, and show that a similar involution provides a com-binatorial proof of the SXP algorithm of Chen, Garsia, and Remmel 2] which computes the Schur function expansion of the plethysm of a Schur function and a power sum symmetric function. The methods of this paper have also been applied to prove combinatorial formulas for the characters of coordinate rings of nilpotent conjugacy classes of matrices 14].", "title": "" }, { "docid": "1e9c7c97256e7778dbb1ef4f09c1b28e", "text": "A new neural paradigm called diagonal recurrent neural network (DRNN) is presented. The architecture of DRNN is a modified model of the fully connected recurrent neural network with one hidden layer, and the hidden layer comprises self-recurrent neurons. Two DRNN's are utilized in a control system, one as an identifier called diagonal recurrent neuroidentifier (DRNI) and the other as a controller called diagonal recurrent neurocontroller (DRNC). A controlled plant is identified by the DRNI, which then provides the sensitivity information of the plant to the DRNC. A generalized dynamic backpropagation algorithm (DBP) is developed and used to train both DRNC and DRNI. 
Due to the recurrence, the DRNN can capture the dynamic behavior of a system. To guarantee convergence and for faster learning, an approach that uses adaptive learning rates is developed by introducing a Lyapunov function. Convergence theorems for the adaptive backpropagation algorithms are developed for both DRNI and DRNC. The proposed DRNN paradigm is applied to numerical problems and the simulation results are included.", "title": "" } ]
scidocsrr
c4eedc71b62029bcf2f2c6bd4bfdd969
The evolutionary psychology of facial beauty.
[ { "docid": "0e74994211d0e3c1e85ba0c85aba3df5", "text": "Images of faces manipulated to make their shapes closer to the average are perceived as more attractive. The influences of symmetry and averageness are often confounded in studies based on full-face views of faces. Two experiments are reported that compared the effect of manipulating the averageness of female faces in profile and full-face views. Use of a profile view allows a face to be \"morphed\" toward an average shape without creating an image that becomes more symmetrical. Faces morphed toward the average were perceived as more attractive in both views, but the effect was significantly stronger for full-face views. Both full-face and profile views morphed away from the average shape were perceived as less attractive. It is concluded that the effect of averageness is independent of any effect of symmetry on the perceived attractiveness of female faces.", "title": "" }, { "docid": "1fc10d626c7a06112a613f223391de26", "text": "The question of what makes a face attractive, and whether our preferences come from culture or biology, has fascinated scholars for centuries. Variation in the ideals of beauty across societies and historical periods suggests that standards of beauty are set by cultural convention. Recent evidence challenges this view, however, with infants as young as 2 months of age preferring to look at faces that adults find attractive (Langlois et al., 1987), and people from different cultures showing considerable agreement about which faces are attractive (Cun-for a review). These findings raise the possibility that some standards of beauty may be set by nature rather than culture. Consistent with this view, specific preferences have been identified that appear to be part of our biological rather than cultural heritage. Such a preference would be adaptive if stabilizing selection operates on facial traits (Symons, 1979), or if averageness is associated with resistance to pathogens, as some have suggested. Evolutionary biologists have proposed that a preference for symmetry would also be adaptive because symmetry is a signal of health and genetic quality. Only high-quality individuals can maintain symmetric development in the face of environmental and genetic stresses. Symmetric bodies are certainly attractive to humans and many other animals, but what about symmetric faces? Biologists suggest that facial symmetry should be attractive because it may signal mate quality. High levels of facial asymmetry in individuals with chromosomal abnormalities (e.g., Down's syndrome and Trisomy 14; for a review, see Thornhill & Møller, 1997) are consistent with this view, as is recent evidence that facial symmetry levels correlate with emotional and psychological health (Shackelford & Larsen, 1997). In this paper, we investigate whether people can detect subtle differences in facial symmetry and whether these differences are associated with differences in perceived attractiveness. Recently, Kowner (1996) has reported that faces with normal levels of asymmetry are more attractive than perfectly symmetric versions of the same faces. Similar results have been reported by Langlois et al. 
Evolutionary, as well as cultural, pressures may contribute to our perceptions of facial attractiveness. Biologists predict that facial symmetry should be attractive, because it may signal …", "title": "" }, { "docid": "6b6943e2b263fa0d4de934e563a6cc39", "text": "Average faces are attractive, but what is average depends on experience. We examined the effect of brief exposure to consistent facial distortions on what looks normal (average) and what looks attractive. Adaptation to a consistent distortion shifted what looked most normal, and what looked most attractive, toward that distortion. These normality and attractiveness aftereffects occurred when the adapting and test faces differed in orientation by 90 degrees (+45 degrees vs. -45 degrees ), suggesting adaptation of high-level neurons whose coding is not strictly retino- topic. Our results suggest that perceptual adaptation can rapidly recalibrate people's preferences to fit the faces they see. The results also suggest that average faces are attractive because of their central location in a distribution of faces (i.e., prototypicality), rather than because of any intrinsic appeal of particular physical characteristics. Recalibration of preferences may have important consequences, given the powerful effects of perceived attractiveness on person perception, mate choice, social interactions, and social outcomes for individuals.", "title": "" } ]
[ { "docid": "3b988fe1c91096f67461dc9fc7bb6fae", "text": "The paper analyzes the test setup required by the International Electrotechnical Commission (IEC) 61000-4-4 to evaluate the immunity of electronic equipment to electrical fast transients (EFTs), and proposes an electrical model of the capacitive coupling clamp, which is employed to add disturbances to nominal signals. The study points out limits on accuracy of this model, and shows how it can be fruitfully employed to predict the interference waveform affecting nominal system signals through computer simulations.", "title": "" }, { "docid": "85eb1b34bf15c6b5dcd8778146bfcfca", "text": "A novel face recognition algorithm is presented in this paper. Histogram of Oriented Gradient features are extracted both for the test image and also for the training images and given to the Support Vector Machine classifier. The detailed steps of HOG feature extraction and the classification using SVM is presented. The algorithm is compared with the Eigen feature based face recognition algorithm. The proposed algorithm and PCA are verified using 8 different datasets. Results show that in all the face datasets the proposed algorithm shows higher face recognition rate when compared with the traditional Eigen feature based face recognition algorithm. There is an improvement of 8.75% face recognition rate when compared with PCA based face recognition algorithm. The experiment is conducted on ORL database with 2 face images for testing and 8 face images for training for each person. Three performance curves namely CMC, EPC and ROC are considered. The curves show that the proposed algorithm outperforms when compared with PCA algorithm. IndexTerms: Facial features, Histogram of Oriented Gradients, Support Vector Machine, Principle Component Analysis.", "title": "" }, { "docid": "ebedc7f86c7a424091777f360f979122", "text": "Synaptic plasticity is thought to be the principal neuronal mechanism underlying learning. Models of plastic networks typically combine point neurons with spike-timing-dependent plasticity (STDP) as the learning rule. However, a point neuron does not capture the local non-linear processing of synaptic inputs allowed for by dendrites. Furthermore, experimental evidence suggests that STDP is not the only learning rule available to neurons. By implementing biophysically realistic neuron models, we study how dendrites enable multiple synaptic plasticity mechanisms to coexist in a single cell. In these models, we compare the conditions for STDP and for synaptic strengthening by local dendritic spikes. We also explore how the connectivity between two cells is affected by these plasticity rules and by different synaptic distributions. Finally, we show that how memory retention during associative learning can be prolonged in networks of neurons by including dendrites. Synaptic plasticity is the neuronal mechanism underlying learning. Here the authors construct biophysical models of pyramidal neurons that reproduce observed plasticity gradients along the dendrite and show that dendritic spike dependent LTP which is predominant in distal sections can prolong memory retention.", "title": "" }, { "docid": "1df39d26ed1d156c1c093d7ffd1bb5bf", "text": "Contemporary advances in addiction neuroscience have paralleled increasing interest in the ancient mental training practice of mindfulness meditation as a potential therapy for addiction. 
In the past decade, mindfulness-based interventions (MBIs) have been studied as a treatment for an array addictive behaviors, including drinking, smoking, opioid misuse, and use of illicit substances like cocaine and heroin. This article reviews current research evaluating MBIs as a treatment for addiction, with a focus on findings pertaining to clinical outcomes and biobehavioral mechanisms. Studies indicate that MBIs reduce substance misuse and craving by modulating cognitive, affective, and psychophysiological processes integral to self-regulation and reward processing. This integrative review provides the basis for manifold recommendations regarding the next wave of research needed to firmly establish the efficacy of MBIs and elucidate the mechanistic pathways by which these therapies ameliorate addiction. Issues pertaining to MBI treatment optimization and sequencing, dissemination and implementation, dose-response relationships, and research rigor and reproducibility are discussed.", "title": "" }, { "docid": "fb809c5e2a15a49a449a818a1b0d59a5", "text": "Neural responses are modulated by brain state, which varies with arousal, attention, and behavior. In mice, running and whisking desynchronize the cortex and enhance sensory responses, but the quiescent periods between bouts of exploratory behaviors have not been well studied. We found that these periods of \"quiet wakefulness\" were characterized by state fluctuations on a timescale of 1-2 s. Small fluctuations in pupil diameter tracked these state transitions in multiple cortical areas. During dilation, the intracellular membrane potential was desynchronized, sensory responses were enhanced, and population activity was less correlated. In contrast, constriction was characterized by increased low-frequency oscillations and higher ensemble correlations. Specific subtypes of cortical interneurons were differentially activated during dilation and constriction, consistent with their participation in the observed state changes. Pupillometry has been used to index attention and mental effort in humans, but the intracellular dynamics and differences in population activity underlying this phenomenon were previously unknown.", "title": "" }, { "docid": "39c597ee9c9d9392e803aedeeeb28de9", "text": "BACKGROUND\nApalutamide, a competitive inhibitor of the androgen receptor, is under development for the treatment of prostate cancer. We evaluated the efficacy of apalutamide in men with nonmetastatic castration-resistant prostate cancer who were at high risk for the development of metastasis.\n\n\nMETHODS\nWe conducted a double-blind, placebo-controlled, phase 3 trial involving men with nonmetastatic castration-resistant prostate cancer and a prostate-specific antigen doubling time of 10 months or less. Patients were randomly assigned, in a 2:1 ratio, to receive apalutamide (240 mg per day) or placebo. All the patients continued to receive androgen-deprivation therapy. The primary end point was metastasis-free survival, which was defined as the time from randomization to the first detection of distant metastasis on imaging or death.\n\n\nRESULTS\nA total of 1207 men underwent randomization (806 to the apalutamide group and 401 to the placebo group). In the planned primary analysis, which was performed after 378 events had occurred, median metastasis-free survival was 40.5 months in the apalutamide group as compared with 16.2 months in the placebo group (hazard ratio for metastasis or death, 0.28; 95% confidence interval [CI], 0.23 to 0.35; P<0.001). 
Time to symptomatic progression was significantly longer with apalutamide than with placebo (hazard ratio, 0.45; 95% CI, 0.32 to 0.63; P<0.001). The rate of adverse events leading to discontinuation of the trial regimen was 10.6% in the apalutamide group and 7.0% in the placebo group. The following adverse events occurred at a higher rate with apalutamide than with placebo: rash (23.8% vs. 5.5%), hypothyroidism (8.1% vs. 2.0%), and fracture (11.7% vs. 6.5%).\n\n\nCONCLUSIONS\nAmong men with nonmetastatic castration-resistant prostate cancer, metastasis-free survival and time to symptomatic progression were significantly longer with apalutamide than with placebo. (Funded by Janssen Research and Development; SPARTAN ClinicalTrials.gov number, NCT01946204 .).", "title": "" }, { "docid": "68612f23057840e01bec9673c5d31865", "text": "The current status of studies of online shopping attitudes and behavior is investigated through an analysis of 35 empirical articles found in nine primary Information Systems (IS) journals and three major IS conference proceedings. A taxonomy is developed based on our analysis. A conceptual model of online shopping is presented and discussed in light of existing empirical studies. Areas for further research are discussed.", "title": "" }, { "docid": "dd66e07814419e3c2515d882d662df93", "text": "Excess body weight (adiposity) and physical inactivity are increasingly being recognized as major nutritional risk factors for cancer, and especially for many of those cancer types that have increased incidence rates in affluent, industrialized parts of the world. In this review, an overview is presented of some key biological mechanisms that may provide important metabolic links between nutrition, physical activity and cancer, including insulin resistance and reduced glucose tolerance, increased activation of the growth hormone/IGF-I axis, alterations in sex-steroid synthesis and/or bioavailability, and low-grade chronic inflammation through the effects of adipokines and cytokines.", "title": "" }, { "docid": "46c8336f395d04d49369d406f41b0602", "text": "Several RGB-D datasets have been publicized over the past few years for facilitating research in computer vision and robotics. However, the lack of comprehensive and fine-grained annotation in these RGB-D datasets has posed challenges to their widespread usage. In this paper, we introduce SceneNN, an RGB-D scene dataset consisting of 100 scenes. All scenes are reconstructed into triangle meshes and have per-vertex and per-pixel annotation. We further enriched the dataset with fine-grained information such as axis-aligned bounding boxes, oriented bounding boxes, and object poses. We used the dataset as a benchmark to evaluate the state-of-the-art methods on relevant research problems such as intrinsic decomposition and shape completion. Our dataset and annotation tools are available at http://www.scenenn.net.", "title": "" }, { "docid": "3d56d2c4b3b326bc676536d35b4bd77f", "text": "In this work an experimental study about the capability of the LBP, HOG descriptors and color for clothing attribute classification is presented. Two different variants of the LBP descriptor are considered, the original LBP and the uniform LBP. Two classifiers, Linear SVM and Random Forest, have been included in the comparison because they have been frequently used in clothing attributes classification. The experiments are carried out with a public available dataset, the clothing attribute dataset, that has 26 attributes in total. 
The obtained accuracies are over 75% in most cases, reaching 80% for the necktie or sleeve length attributes.", "title": "" }, { "docid": "f7a1624a4827e95b961eb164022aa2a2", "text": "Mitotic chromosome condensation, sister chromatid cohesion, and higher order folding of interphase chromatin are mediated by condensin and cohesin, eukaryotic members of the SMC (structural maintenance of chromosomes)-kleisin protein family. Other members facilitate chromosome segregation in bacteria [1]. A hallmark of these complexes is the binding of the two ends of a kleisin subunit to the apices of V-shaped Smc dimers, creating a tripartite ring capable of entrapping DNA (Figure 1A). In addition to creating rings, kleisins recruit regulatory subunits. One family of regulators, namely Kite dimers (Kleisin interacting winged-helix tandem elements), interact with Smc-kleisin rings from bacteria, archaea and the eukaryotic Smc5-6 complex, but not with either condensin or cohesin [2]. These instead possess proteins containing HEAT (Huntingtin/EF3/PP2A/Tor1) repeat domains whose origin and distribution have not yet been characterized. Using a combination of profile Hidden Markov Model (HMM)-based homology searches, network analysis and structural alignments, we identify a common origin for these regulators, for which we propose the name Hawks, i.e. HEAT proteins associated with kleisins.", "title": "" }, { "docid": "3f88c453eab8b2fbfffbf98fee34d086", "text": "Face recognition become one of the most important and fastest growing area during the last several years and become the most successful application of image analysis and broadly used in security system. It has been a challenging, interesting, and fast growing area in real time applications. The propose method is tested using a benchmark ORL database that contains 400 images of 40 persons. Pre-Processing technique are applied on the ORL database to increase the recognition rate. The best recognition rate is 97.5% when tested using 9 training images and 1 testing image. Increasing image database brightness is efficient and will increase the recognition rate. Resizing images using 0.3 scale is also efficient and will increase the recognition rate. PCA is used for feature extraction and dimension reduction. Euclidean distance is used for matching process.", "title": "" }, { "docid": "785a6d08ef585302d692864d09b026fe", "text": "Linear Discriminant Analysis (LDA) is a well-known method for dimensionality reduction and classification. LDA in the binaryclass case has been shown to be equivalent to linear regression with the class label as the output. This implies that LDA for binary-class classifications can be formulated as a least squares problem. Previous studies have shown certain relationship between multivariate linear regression and LDA for the multi-class case. Many of these studies show that multivariate linear regression with a specific class indicator matrix as the output can be applied as a preprocessing step for LDA. However, directly casting LDA as a least squares problem is challenging for the multi-class case. In this paper, a novel formulation for multivariate linear regression is proposed. The equivalence relationship between the proposed least squares formulation and LDA for multi-class classifications is rigorously established under a mild condition, which is shown empirically to hold in many applications involving high-dimensional data. 
Several LDA extensions based on the equivalence relationship are discussed.", "title": "" }, { "docid": "b3b050c35a1517dc52351cd917d0665a", "text": "The amount of information shared via social media is rapidly increasing amid growing concerns over online privacy. This study investigates the effect of controversiality and social endorsement of media content on sharing behavior when choosing between sharing publicly or anonymously. Anonymous sharing is found to be a popular choice (59% of shares), especially for controversial content which is 3.2x more likely to be shard anonymously. Social endorsement was not found to affect sharing behavior, except for sports-related content. Implications for social media interface design are dis-", "title": "" }, { "docid": "5724b84f9c00c503066bd6a178664c3c", "text": "A simple quantitative model is presented that is consistent with the available evidence about the British economy during the early phase of the Industrial Revolution. The basic model is a variant of a standard growth model, calibrated to data from Great Britain for the period 1780-1850. The model is used to study the importance of foreign trade and the role of the declining cost of power during this period. The British Industrial Revolution was an amazing episode, with economic consequences that changed the world. But our understanding of the economic events of this ¤Research Department, Federal Reserve Bank of Minneapolis, and Department of Economics, University of Chicago. I am grateful to Matthias Doepke for many stimulating conversations, as well as several useful leads on data sources. I also owe more than the ususal thanks to Joel Mokyr for many helpful comments, including several that changed the direction of the paper in a fundamental way. Finally, I am grateful to the Research Division of Federal Reserve Bank of Minneapolis for support while much of this work was done. This paper is being prepared for the Carnegie-Rochester conference in November, 2000.", "title": "" }, { "docid": "567d165eb9ad5f9860f3e0602cbe3e03", "text": "This paper presents new image sensors with multi- bucket pixels that enable time-multiplexed exposure, an alter- native imaging approach. This approach deals nicely with scene motion, and greatly improves high dynamic range imaging, structured light illumination, motion corrected photography, etc. To implement an in-pixel memory or a bucket, the new image sensors incorporate the virtual phase CCD concept into a standard 4-transistor CMOS imager pixel. This design allows us to create a multi-bucket pixel which is compact, scalable, and supports true correlated double sampling to cancel kTC noise. Two image sensors with dual and quad-bucket pixels have been designed and fabricated. The dual-bucket sensor consists of a 640H × 576V array of 5.0 μm pixel in 0.11 μm CMOS technology while the quad-bucket sensor comprises 640H × 512V array of 5.6 μm pixel in 0.13 μm CMOS technology. Some computational photography applications were implemented using the two sensors to demonstrate their values in eliminating artifacts that currently plague computational photography.", "title": "" }, { "docid": "4706f9e8d9892543aaeb441c45816b24", "text": "The mood of a text and the intention of the writer can be reflected in the typeface. However, in designing a typeface, it is difficult to keep the style of various characters consistent, especially for languages with lots of morphological variations such as Chinese. 
In this paper, we propose a Typeface Completion Network (TCN) which takes one character as an input, and automatically completes the entire set of characters in the same style as the input characters. Unlike existing models proposed for image-to-image translation, TCN embeds a character image into two separate vectors representing typeface and content. Combined with a reconstruction loss from the latent space, and with other various losses, TCN overcomes the inherent difficulty in designing a typeface. Also, compared to previous image-to-image translation models, TCN generates high quality character images of the same typeface with a much smaller number of model parameters. We validate our proposed model on the Chinese and English character datasets, which is paired data, and the CelebA dataset, which is unpaired data. In these datasets, TCN outperforms recently proposed state-of-the-art models for image-to-image translation. The source code of our model is available at https://github.com/yongqyu/TCN.", "title": "" }, { "docid": "b49e61ecb2afbaa8c3b469238181ec26", "text": "Stylistic variations of language, such as formality, carry speakers’ intention beyond literal meaning and should be conveyed adequately in translation. We propose to use lexical formality models to control the formality level of machine translation output. We demonstrate the effectiveness of our approach in empirical evaluations, as measured by automatic metrics and human assessments.", "title": "" }, { "docid": "ef49eeb766313743edb77f8505e491a0", "text": "In 1998, a clinical classification of pulmonary hypertension (PH) was established, categorizing PH into groups which share similar pathological and hemodynamic characteristics and therapeutic approaches. During the 5th World Symposium held in Nice, France, in 2013, the consensus was reached to maintain the general scheme of previous clinical classifications. However, modifications and updates especially for Group 1 patients (pulmonary arterial hypertension [PAH]) were proposed. The main change was to withdraw persistent pulmonary hypertension of the newborn (PPHN) from Group 1 because this entity carries more differences than similarities with other PAH subgroups. In the current classification, PPHN is now designated number 1. Pulmonary hypertension associated with chronic hemolytic anemia has been moved from Group 1 PAH to Group 5, unclear/multifactorial mechanism. In addition, it was decided to add specific items related to pediatric pulmonary hypertension in order to create a comprehensive, common classification for both adults and children. Therefore, congenital or acquired left-heart inflow/outflow obstructive lesions and congenital cardiomyopathies have been added to Group 2, and segmental pulmonary hypertension has been added to Group 5. Last, there were no changes for Groups 2, 3, and 4.", "title": "" }, { "docid": "36fa816c5e738ea6171851fb3200f68d", "text": "Vehicle speed prediction provides important information for many intelligent vehicular and transportation applications. Accurate on-road vehicle speed prediction is challenging, because an individual vehicle speed is affected by many factors, e.g., the traffic condition, vehicle type, and driver’s behavior, in either deterministic or stochastic way. This paper proposes a novel data-driven vehicle speed prediction method in the context of vehicular networks, in which the real-time traffic information is accessible and utilized for vehicle speed prediction. 
It first predicts the average traffic speeds of road segments by using neural network models based on historical traffic data. Hidden Markov models (HMMs) are then utilized to present the statistical relationship between individual vehicle speeds and the traffic speed. Prediction for individual vehicle speeds is realized by applying the forward–backward algorithm on HMMs. To evaluate the prediction performance, simulations are set up in the SUMO microscopic traffic simulator with the application of a real Luxembourg motorway network and traffic count data. The vehicle speed prediction result shows that our proposed method outperforms other ones in terms of prediction accuracy.", "title": "" } ]
scidocsrr
4872b8b5e098dddf801348cab41a96f0
Learning Color Constancy
[ { "docid": "3647b5e0185c0120500fff8061265abd", "text": "Human and machine visual sensing is enhanced when surface properties of objects in scenes, including color, can be reliably estimated despite changes in the ambient lighting conditions. We describe a computational method for estimating surface spectral reflectance when the spectral power distribution of the ambient light is not known.", "title": "" } ]
[ { "docid": "c2b0dfb06f82541fca0d2700969cf0d9", "text": "Magnetic resonance is an exceptionally powerful and versatile measurement technique. The basic structure of a magnetic resonance experiment has remained largely unchanged for almost 50 years, being mainly restricted to the qualitative probing of only a limited set of the properties that can in principle be accessed by this technique. Here we introduce an approach to data acquisition, post-processing and visualization—which we term ‘magnetic resonance fingerprinting’ (MRF)—that permits the simultaneous non-invasive quantification of multiple important properties of a material or tissue. MRF thus provides an alternative way to quantitatively detect and analyse complex changes that can represent physical alterations of a substance or early indicators of disease. MRF can also be used to identify the presence of a specific target material or tissue, which will increase the sensitivity, specificity and speed of a magnetic resonance study, and potentially lead to new diagnostic testing methodologies. When paired with an appropriate pattern-recognition algorithm, MRF inherently suppresses measurement errors and can thus improve measurement accuracy.", "title": "" }, { "docid": "95bb07e57d9bd2b7e9a9a59c29806b66", "text": "Breast cancer is one of the most common cancers and the second most responsible for cancer mortality worldwide. In 2014, in Portugal approximately 27,200 people died of cancer, of which 1,791 were women with breast cancer. Flaxseed has been one of the most studied foods, regarding possible relations to breast cancer, though mainly in experimental studies in animals, yet in few clinical trials. It is rich in omega-3 fatty acids, α-linolenic acid, lignan, and fibers. One of the main components of flaxseed is the lignans, of which 95% are made of the predominant secoisolariciresinol diglucoside (SDG). SDG is converted into enterolactone and enterodiol, both with antiestrogen activity and structurally similar to estrogen; they can bind to cell receptors, decreasing cell growth. Some studies have shown that the intake of omega-3 fatty acids is related to the reduction of breast cancer risk. In animal studies, α-linolenic acids have been shown to be able to suppress growth, size, and proliferation of cancer cells and also to promote breast cancer cell death. Other animal studies found that the intake of flaxseed combined with tamoxifen can reduce tumor size to a greater extent than taking tamoxifen alone. Additionally, some clinical trials showed that flaxseed can have an important role in decreasing breast cancer risk, mainly in postmenopausal women. Further studies are needed, specifically clinical trials that may demonstrate the potential benefits of flaxseed in breast cancer.", "title": "" }, { "docid": "2b1caf45164e7453453eaaf006dc3827", "text": "This paper presents an estimation of the longitudinal movement of an aircraft using the STM32 microcontroller F1 Family. The focus of this paper is on developing code to implement the famous Luenberger Observer and using the different devices existing in STM32 F1 micro-controllers. The suggested Luenberger observer was achieved using the Keil development tools designed for devices microcontrollers based on the ARM processor and labor with C / C ++ language. The Characteristics that show variations in time of the state variables and step responses prove that the identification of the longitudinal movement of an aircraft were performed with minor errors in the right conditions. 
These results lead to easily develop predictive algorithms for programmable hardware in the industry.", "title": "" }, { "docid": "6f16ccc24022c4fc46f8b0b106b0f3c4", "text": "We reviewed 25 patients ascertained through the finding of trigonocephaly/metopic synostosis as a prominent manifestation. In 16 patients, trigonocephaly/metopic synostosis was the only significant finding (64%); 2 patients had metopic/sagittal synostosis (8%) and in 7 patients the trigonocephaly was part of a syndrome (28%). Among the nonsyndromic cases, 12 were males and 6 were females and the sex ratio was 2 M:1 F. Only one patient with isolated trigonocephaly had an affected parent (5.6%). All nonsyndromic patients had normal psychomotor development. In 2 patients with isolated metopic/sagittal synostosis, FGFR2 and FGFR3 mutations were studied and none were detected. Among the syndromic cases, two had Jacobsen syndrome associated with deletion of chromosome 11q 23 (28.5%). Of the remaining five syndromic cases, different conditions were found including Say-Meyer syndrome, multiple congenital anomalies and bilateral retinoblastoma with no detectable deletion in chromosome 13q14.2 by G-banding chromosomal analysis and FISH, I-cell disease, a new acrocraniofacial dysostosis syndrome, and Opitz C trigonocephaly syndrome. The last two patients were studied for cryptic chromosomal rearrangements, with SKY and subtelomeric FISH probes. Also FGFR2 and FGFR3 mutations were studied in two syndromic cases, but none were found. This study demonstrates that the majority of cases with nonsyndromic trigonocephaly are sporadic and benign, apart from the associated cosmetic implications. Syndromic trigonocephaly cases are causally heterogeneous and associated with chromosomal as well as single gene disorders. An investigation to delineate the underlying cause of trigonocephaly is indicated because of its important implications on medical management for the patient and the reproductive plans for the family.", "title": "" }, { "docid": "457a23b087e59c6076ef6f9da7214fea", "text": "Supervised learning is widely used in training autonomous driving vehicle. However, it is trained with large amount of supervised labeled data. Reinforcement learning can be trained without abundant labeled data, but we cannot train it in reality because it would involve many unpredictable accidents. Nevertheless, training an agent with good performance in virtual environment is relatively much easier. Because of the huge difference between virtual and real, how to fill the gap between virtual and real is challenging. In this paper, we proposed a novel framework of reinforcement learning with image semantic segmentation network to make the whole model adaptable to reality. The agent is trained in TORCS, a car racing simulator.", "title": "" }, { "docid": "71c81eb75f55ad6efaf8977b93e6dbef", "text": "Autonomous vehicle navigation is challenging since various types of road scenarios in real urban environments have to be considered, particularly when only perception sensors are used, without position information. This paper presents a novel real-time optimal-drivable-region and lane detection system for autonomous driving based on the fusion of light detection and ranging (LIDAR) and vision data. Our system uses a multisensory scheme to cover the most drivable areas in front of a vehicle. We propose a feature-level fusion method for the LIDAR and vision data and an optimal selection strategy for detecting the best drivable region. 
Then, a conditional lane detection algorithm is selectively executed depending on the automatic classification of the optimal drivable region. Our system successfully handles both structured and unstructured roads. The results of several experiments are provided to demonstrate the reliability, effectiveness, and robustness of the system.", "title": "" }, { "docid": "4ffc94f329b404b89b86df07f8503866", "text": "A new isolated push-pull very high frequency (VHF) resonant DC-DC converter is proposed. The primary side of the converter is a push-pull topology derived from the Class EF2 inverter. The secondary side is a class E based low dv/dt full-wave rectifier. A two-channel multi-stage resonant gate driver is applied to provide two complementary drive signals. The advantages of the converter are as follows: 1) the power isolation is achieved; 2) the MOSFETs and diodes are under soft-switching condition for high efficiency; 3) the voltage stress of the MOSFET is much reduced; 4) the parasitic inductance and capacitance can be absorbed. A 30~36 VDC input, 50-W/ 24-VDC output, 30-MHz prototype has been built to verify the functionality.", "title": "" }, { "docid": "35e33ddfa05149dea9b0aef4983c8cc1", "text": "We propose a fast approximation method of a softmax function with a very large vocabulary using singular value decomposition (SVD). SVD-softmax targets fast and accurate probability estimation of the topmost probable words during inference of neural network language models. The proposed method transforms the weight matrix used in the calculation of the output vector by using SVD. The approximate probability of each word can be estimated with only a small part of the weight matrix by using a few large singular values and the corresponding elements for most of the words. We applied the technique to language modeling and neural machine translation and present a guideline for good approximation. The algorithm requires only approximately 20% of arithmetic operations for an 800K vocabulary case and shows more than a three-fold speedup on a GPU.", "title": "" }, { "docid": "3f6f191d3d60cd68238545f4b809d4b4", "text": "This paper examines the dependence of the healthcare waste (HCW) generation rate on several social-economic and environmental parameters. Correlations were calculated between the quantities of healthcare waste generated (expressed in kg/bed/day) versus economic indices (GDP, healthcare expenditure per capita), social indices (HDI, IHDI, MPI, life expectancy, mean years of schooling, HIV prevalence, deaths due to tuberculosis and malaria, and under five mortality rate), and an environmental sustainability index (total CO2 emissions) from 42 countries worldwide. The statistical analysis included the examination of the normality of the data and the formation of linear multiple regression models to further investigate the correlation between those indices and HCW generation rates. Pearson and Spearman correlation coefficients were also calculated for all pairwise comparisons. Results showed that the life expectancy, the HDI, the mean years of schooling and the CO2 emissions positively affect the HCW generation rates and can be used as statistical predictors of those rates. 
The resulting best reduced regression model included the life expectancy and the CO2 emissions and explained 85% of the variability of the response.", "title": "" },
    { "docid": "34bbc3054be98f2cc0edc25a00fe835d", "text": "The increasing prevalence of co-processors such as the Intel Xeon Phi, has been reshaping the high performance computing (HPC) landscape. The Xeon Phi comes with a large number of power efficient CPU cores, but at the same time, it's a highly memory constraint environment leaving the task of memory management entirely up to application developers. To reduce programming complexity, we are focusing on application transparent, operating system (OS) level hierarchical memory management.\n In particular, we first show that state of the art page replacement policies, such as approximations of the least recently used (LRU) policy, are not good candidates for massive many-cores due to their inherent cost of remote translation lookaside buffer (TLB) invalidations, which are inevitable for collecting page usage statistics. The price of concurrent remote TLB invalidations grows rapidly with the number of CPU cores in many-core systems and outpace the benefits of the page replacement algorithm itself. Building upon our previous proposal, per-core Partially Separated Page Tables (PSPT), in this paper we propose Core-Map Count based Priority (CMCP) page replacement policy, which exploits the auxiliary knowledge of the number of mapping CPU cores of each page and prioritizes them accordingly. In turn, it can avoid TLB invalidations for page usage statistic purposes altogether. Additionally, we describe and provide an implementation of the experimental 64kB page support of the Intel Xeon Phi and reveal some intriguing insights regarding its performance. We evaluate our proposal on various applications and find that CMCP can outperform state of the art page replacement policies by up to 38%. We also show that the choice of appropriate page size depends primarily on the degree of memory constraint in the system.", "title": "" },
    { "docid": "085ef3104f22263be11f3a2b5f16ff34", "text": "Tumor is one of the most common brain diseases and this is the reason that the diagnosis & treatment of the brain tumor has vital importance. MRI is the technique used to produce computerised image of internal body tissues. Cells growing in an uncontrollable manner result in a mass of unwanted tissue, which is called a tumor. CT-Scan and MRI images, which are diagnostic techniques, are used to detect brain tumors and classify them into malignant & benign types. This is difficult due to variations, hence techniques like image preprocessing and feature extraction are used; many methods have been developed, but they give different results. In this paper we are going to discuss the methods for detection of brain tumor and evaluate them.", "title": "" },
    { "docid": "f408bbec7f44c1df81fca447f3a022e0", "text": "D-Flow is a software system designed for the development of interactive and immersive virtual reality applications, for the purpose of clinical research and rehabilitation. Key concept of the D-Flow software system is that the subject is regarded as an integral part of a real-time feedback loop, in which multi-sensory input devices measure the behavior of the subject, while output devices return motor-sensory, visual and auditory feedback to the subject.
The D-Flow software system allows an operator to define feedback strategies through a flexible and extensible application development framework, based on visual programming. We describe the requirements, architecture and design considerations of the D-Flow software system, as well as a number of applications that have been developed using D-Flow, both for clinical research and rehabilitation.", "title": "" },
    { "docid": "1339549c6e013e6e573fc5bdbe077d12", "text": "Auction-style pricing policies can effectively reflect the underlying trends in demand and supply for the cloud resources, and thereby attracted a research interest recently. In particular, a desirable cloud auction design should be (1) online to timely reflect the fluctuation of supply-demand relations, (2) expressive to support the heterogeneous user demands, and (3) truthful to discourage users from cheating behaviors. Meeting these requirements simultaneously is non-trivial, and most existing auction mechanism designs do not directly apply. To meet these goals, this paper conducts the first work on a framework for truthful online cloud auctions where users with heterogeneous demands could come and leave on the fly. Concretely speaking, we first design a novel bidding language, wherein users' heterogeneous requirement on their desired allocation time, application type, and even how they value among different possible allocations can be flexibly and concisely expressed. Besides, building on top of our bidding language we propose COCA, an incentive-Compatible (truthful) Online Cloud Auction mechanism. To ensure truthfulness with heterogenous and online user demand, the design of COCA is driven by a monotonic payment rule and a utility-maximizing allocation rule. Moreover, our theoretical analysis shows that the worst-case performance of COCA can be well-bounded, and our further discussion shows that COCA performs well when some other important factors in online auction design are taken into consideration. Finally, in simulations the performance of COCA is seen to be comparable to the well-known off-line Vickrey-Clarke-Groves (VCG) mechanism [19].", "title": "" },
    { "docid": "5b50e84437dc27f5b38b53d8613ae2c7", "text": "We present a practical vision-based robotic bin-picking system that performs detection and 3D pose estimation of objects in an unstructured bin using a novel camera design, picks up parts from the bin, and performs error detection and pose correction while the part is in the gripper. Two main innovations enable our system to achieve real-time robust and accurate operation. First, we use a multi-flash camera that extracts robust depth edges. Second, we introduce an efficient shape-matching algorithm called fast directional chamfer matching (FDCM), which is used to reliably detect objects and estimate their poses. FDCM improves the accuracy of chamfer matching by including edge orientation. It also achieves massive improvements in matching speed using line-segment approximations of edges, a 3D distance transform, and directional integral images. We empirically show that these speedups, combined with the use of bounds in the spatial and hypothesis domains, give the algorithm sublinear computational complexity. We also apply our FDCM method to other applications in the context of deformable and articulated shape matching.
In addition to significantly improving upon the accuracy of previous chamfer matching methods in all of the evaluated applications, FDCM is up to two orders of magnitude faster than the previous methods.", "title": "" },
    { "docid": "e93c5395f350d44b59f549a29e65d75c", "text": "Software Defined Networking (SDN) is an exciting technology that enables innovation in how we design and manage networks. Although this technology seems to have appeared suddenly, SDN is part of a long history of efforts to make computer networks more programmable. In this paper, we trace the intellectual history of programmable networks, including active networks, early efforts to separate the control and data plane, and more recent work on OpenFlow and network operating systems. We highlight key concepts, as well as the technology pushes and application pulls that spurred each innovation. Along the way, we debunk common myths and misconceptions about the technologies and clarify the relationship between SDN and related technologies such as network virtualization.", "title": "" },
    { "docid": "7072c7b94fc6376b13649ec748612705", "text": "Performing link prediction in Knowledge Bases (KBs) with embedding-based models, like with the model TransE (Bordes et al., 2013) which represents relationships as translations in the embedding space, have shown promising results in recent years. Most of these works are focused on modeling single relationships and hence do not take full advantage of the graph structure of KBs. In this paper, we propose an extension of TransE that learns to explicitly model composition of relationships via the addition of their corresponding translation vectors. We show empirically that this allows to improve performance for predicting single relationships as well as compositions of pairs of them.", "title": "" },
    { "docid": "57c66291a54ae565e087ffe2ee0d6b7b", "text": "We develop the relational topic model (RTM), a hierarchical model of both network structure and node attributes. We focus on document networks, where the attributes of each document are its words, i.e., discrete observations taken from a fixed vocabulary. For each pair of documents, the RTM models their link as a binary random variable that is conditioned on their contents. The model can be used to summarize a network of documents, predict links between them, and predict words within them. We derive efficient inference and estimation algorithms based on variational methods that take advantage of sparsity and scale with the number of links. We evaluate the predictive performance of the RTM for large networks of scientific abstracts, web documents, and geographically tagged news.", "title": "" },
    { "docid": "ecdc10e9337a54e2d4dd820c8f99bdfa", "text": "This study examines the relationships of spiritually and physically related variables to well-being among homeless adults. A convenience sample of 61 sheltered homeless persons completed the Spiritual Perspective Scale, the Self-Transcendence Scale, the Index of Well-Being, and items measuring fatigue and health status. The data were subjected to correlational and multiple regression analysis. Positive, significant correlations were found among spiritual perspective, self-transcendence, health status, and well-being. Fatigue was inversely correlated with health status and well-being. Self-transcendence and health status together explained 59% of the variance in well-being.
The findings support Reed's theory of self-transcendence, in which there is the basic assumption that human beings have the potential to integrate difficult life situations. This study contributes to the growing body of evidence that conceptualizes homeless persons as having spiritual, emotional, and physical capacities that can be used by health care professionals to promote well-being in this vulnerable population.", "title": "" }, { "docid": "34a593d0d79550d2c93ed462e5f5fc4e", "text": "BACKGROUND\nTo evaluate the antibacterial activity of 21 plant essential oils against six bacterial species.\n\n\nMETHODS\nThe selected essential oils were screened against four gram-negative bacteria (Escherichia coli, Klebsiella pneumoniae, Pseudomonas aeruginosa, Proteus vulgaris) and two gram-positive bacteria Bacillus subtilis and Staphylococcus aureus at four different concentrations (1:1, 1:5, 1:10 and 1:20) using disc diffusion method. The MIC of the active essential oils were tested using two fold agar dilution method at concentrations ranging from 0.2 to 25.6 mg/ml.\n\n\nRESULTS\nOut of 21 essential oils tested, 19 oils showed antibacterial activity against one or more strains. Cinnamon, clove, geranium, lemon, lime, orange and rosemary oils exhibited significant inhibitory effect. Cinnamon oil showed promising inhibitory activity even at low concentration, whereas aniseed, eucalyptus and camphor oils were least active against the tested bacteria. In general, B. subtilis was the most susceptible. On the other hand, K. pneumoniae exhibited low degree of sensitivity.\n\n\nCONCLUSION\nMajority of the oils showed antibacterial activity against the tested strains. However Cinnamon, clove and lime oils were found to be inhibiting both gram-positive and gram-negative bacteria. Cinnamon oil can be a good source of antibacterial agents.", "title": "" }, { "docid": "a181d2f3ccafda3975342122071132c5", "text": "A small-size tablet device antenna formed by a simple folded metal plate excited by a low-band feed (direct feed with matching network) and a high-band feed (gap-coupled feed with matching network) for the LTE/WWAN operation in the 698-960 and 1710-2690 MHz bands is presented. The folded metal plate has a size of 3 × 5 × 30 mm3 and is disposed in a clearance area of 10 × 30 mm2 (300 mm2) above the top edge of the device ground plane. The matching network in the low-band feed compensates for additional capacitance for the folded metal plate with a resonant length much less than 0.25 wavelength in the lower band (698-960 MHz) and enhances the antenna's impedance matching to achieve a wide lower band. The matching network in the high-band feed also enhances the antenna's impedance matching to achieve a wide higher band (1710-2690 MHz). In addition, the coupling gap in the high-band feed can reject the low-band frequency. Also, the matching network in the low-band (high-band) feed includes a bandpass circuit which can help reject the high-band (low-band) frequency. Good isolation between the two hybrid feeds can, hence, be obtained, which makes the antenna with simple structure and small size have good radiation performances in both lower and higher bands. Details of the proposed antenna are presented.", "title": "" } ]
scidocsrr
68e2eb9cc4929b15b0e7a9c13ebe57d7
Touchscreens vs. traditional controllers in handheld gaming
[ { "docid": "e7ca25b51dc50a65450911802ec67fb9", "text": "This paper describes a two-phase study conducted to determine optimal target sizes for one-handed thumb use of mobile handheld devices equipped with a touch-sensitive screen. Similar studies have provided recommendations for target sizes when using a mobile device with two hands plus a stylus, and interacting with a desktop-sized display with an index finger, but never for thumbs when holding a small device in a single hand. The first phase explored the required target size for single-target (discrete) pointing tasks, such as activating buttons, radio buttons or checkboxes. The second phase investigated optimal sizes for widgets used for tasks that involve a sequence of taps (serial), such as text entry. Since holding a device in one hand constrains thumb movement, we varied target positions to determine if performance depended on screen location. The results showed that while speed generally improved as targets grew, there were no significant differences in error rate between target sizes =9.6 mm in discrete tasks and targets =7.7 mm in serial tasks. Along with subjective ratings and the findings on hit response variability, we found that target size of 9.2 mm for discrete tasks and targets of 9.6 mm for serial tasks should be sufficiently large for one-handed thumb use on touchscreen-based handhelds without degrading performance and preference.", "title": "" } ]
[ { "docid": "567f3840296c29e4982603f0cae4d91a", "text": "This paper is focused on the analysis of coplanar waveguides (CPWs) loaded with circularly shaped electric-LC (ELC) resonators, the latter consisting of two coplanar loops connected in parallel through a common gap. Specifically, the resonator axis is aligned with the CPW axis, and a dynamic loading with ELC rotation is considered. Since the ELC resonator is bisymmetric, i.e., it exhibits two orthogonal symmetry planes, the angular orientation range is limited to 90°. It is shown that the transmission and reflection coefficients of the structure depend on the angular orientation of the ELC. In particular, the loaded CPW behaves as a transmission line-type (i.e., all-pass) structure for a certain ELC orientation (0°) since the resonator is not excited. However, by rotating the ELC, magnetic coupling to the line arises, and a notch in the transmission coefficient (with orientation dependent depth and bandwidth) appears. This feature is exploited to implement angular displacement sensors by measuring the notch depth in the transmission coefficient. To gain more insight on sensor design, the lumped element equivalent-circuit model for ELC-loaded CPWs with arbitrary ELC orientation is proposed and validated. Based on this approach, a prototype displacement sensor is designed and characterized. It is shown that by introducing additional elements (a circulator and an envelope detector), novel and high precision angular velocity sensors can also be implemented. An angular velocity sensor is thus proposed, characterized, and satisfactorily validated. The proposed solution for angular sensing is robust against environmental variations since it is based on the geometrical alignment/misalignment between the symmetry planes of the coupled elements.", "title": "" }, { "docid": "0cfa7e43d557fa6d68349b637367341a", "text": "This paper proposes a model of selective attention for visual search tasks, based on a framework for sequential decision-making. The model is implemented using a fixed pan-tilt-zoom camera in a visually cluttered lab environment, which samples the environment at discrete time steps. The agent has to decide where to fixate next based purely on visual information, in order to reach the region where a target object is most likely to be found. The model consists of two interacting modules. A reinforcement learning module learns a policy on a set of regions in the room for reaching the target object, using as objective function the expected value of the sum of discounted rewards. By selecting an appropriate gaze direction at each step, this module provides top-down control in the selection of the next fixation point. The second module performs “within fixation” processing, based exclusively on visual information. Its purpose is twofold: to provide the agent with a set of locations of interest in the current image, and to perform the detection and identification of the target object. Detailed experimental results show that the number of saccades to a target object significantly decreases with the number of training epochs. The results also show the learned policy to find the target object is invariant to small physical displacements as well as object inversion.", "title": "" }, { "docid": "486e15d89ea8d0f6da3b5133c9811ee1", "text": "Frequency-modulated continuous wave radar systems suffer from permanent leakage of the transmit signal into the receive path. 
Besides leakage within the radar device itself, an unwanted object placed in front of the antennas causes so-called short-range (SR) leakage. In an automotive application, for instance, it originates from signal reflections of the car’s own bumper. Particularly the residual phase noise of the downconverted SR leakage signal causes a severe degradation of the achievable sensitivity. In an earlier work, we proposed an SR leakage cancellation concept that is feasible for integration in a monolithic microwave integrated circuit. In this brief, we present a hardware prototype that holistically proves our concept with discrete components. The fundamental theory and properties of the concept are proven with measurements. Further, we propose a digital design for real-time operation of the cancellation algorithm on a field programmable gate array. Ultimately, by employing measurements with a bumper mounted in front of the antennas, we show that the leakage canceller significantly improves the sensitivity of the radar.", "title": "" }, { "docid": "91b116c4b2e19096b2ae55e40c4946e3", "text": "Nanotechnology is expected to open some new aspects to fight and prevent diseases using atomic scale tailoring of materials. The ability to uncover the structure and function of biosystems at the nanoscale, stimulates research leading to improvement in biology, biotechnology, medicine and healthcare. The size of nanomaterials is similar to that of most biological molecules and structures; therefore, nanomaterials can be useful for both in vivo and in vitro biomedical research and applications. The integration of nanomaterials with biology has led to the development of diagnostic devices, contrast agents, analytical tools, physical therapy applications, and drug delivery vehicles. In all the nanomaterials with antibacterial properties, metallic nanoparticles are the best. Nanoparticles increase chemical activity due to crystallographic surface structure with their large surface to volume ratio. The importance of bactericidal nanomaterials study is because of the increase in new resistant strains of bacteria against most potent antibiotics. This has promoted research in the well known activity of silver ions and silver-based compounds, including silver nanoparticles. This effect was size and dose dependent and was more pronounced against gram-negative bacteria than gram-positive organisms.", "title": "" }, { "docid": "5cc1f15c45f57d1206e9181dc601ee4a", "text": "In an end-to-end dialog system, the aim of dialog state tracking is to accurately estimate a compact representation of the current dialog status from a sequence of noisy observations produced by the speech recognition and the natural language understanding modules. This paper introduces a novel method of dialog state tracking based on the general paradigm of machine reading and proposes to solve it using an End-to-End Memory Network, MemN2N, a memory-enhanced neural network architecture. We evaluate the proposed approach on the second Dialog State Tracking Challenge (DSTC-2) dataset. The corpus has been converted for the occasion in order to frame the hidden state variable inference as a questionanswering task based on a sequence of utterances extracted from a dialog. We show that the proposed tracker gives encouraging results. Then, we propose to extend the DSTC-2 dataset and the definition of this dialog state task with specific reasoning capabilities like counting, list maintenance, yes-no question answering and indefinite knowledge management. 
Finally, we present encouraging results using our proposed MemN2N based tracking model.", "title": "" },
    { "docid": "66f46290a9194d4e982b8d1b59a73090", "text": "Sensor to body calibration is a key requirement for capturing accurate body movements in applications based on wearable systems. In this paper, we consider the specific problem of estimating the positions of multiple inertial measurement units (IMUs) relative to the adjacent body joints. To derive an efficient, robust and precise method based on a practical procedure is a crucial as well as challenging task when developing a wearable system with multiple embedded IMUs. In this work, first, we perform a theoretical analysis of an existing position calibration method, showing its limited applicability for the hip and knee joint. Based on this, we propose a method for simultaneously estimating the positions of three IMUs (mounted on pelvis, upper leg, lower leg) relative to these joints. The latter are here considered as an ensemble. Finally, we perform an experimental evaluation based on simulated and real data, showing the improvements of our calibration method as well as lines of future work.", "title": "" },
    { "docid": "d7a75e98a1faa39262c50ef03edc8708", "text": "Executive Overview The strategic leadership of ethical behavior in business can no longer be ignored. Executives must accept the fact that the moral impact of their leadership presence and behaviors will rarely, if ever, be neutral. In the leadership capacity, executives have great power to shift the ethics mindfulness of organizational members in positive as well as negative directions. Rather than being left to chance, this power to serve as ethics leaders must be used to establish a social context within which positive self-regulation of ethical behavior becomes a clear and compelling organizational norm and in which people act ethically as a matter of routine. This article frames the responsibility for strategic leadership of ethical behavior on three premises: (1) It must be done—a stakeholder analysis of the total costs of ethical failures confirms the urgency for ethics change; (2) It can be done—exemplars show that a compelling majority of an organization’s membership can be influenced to make ethical choices; (3) It is sustainable—integrity programs help build and confirm corporate cultures in which principled actions and ethics norms predominate.", "title": "" },
    { "docid": "a532355548d9937c496555b868181d06", "text": "• We have modeled an opportunistic ad library that aggressively collects targeted data from Android devices. • We demonstrate that the access channels considered are realistic. • We have designed a reliable and extensible framework that can be leveraged to assess user data exposure by an app to a library. Classifier results (Random Forest): Age: P(%) 88.6, R(%) 88.6; Marital Status: P(%) 95.0, R(%) 93.8; Sex: P(%) 93.8, R(%) 92.9.", "title": "" },
    { "docid": "330438e58f75c21605cde6c4df1c8802", "text": "Visual surveillance from low-altitude airborne platforms has been widely addressed in recent years. Moving vehicle detection is an important component of such a system, which is a very challenging task due to illumination variance and scene complexity. Therefore, a boosting Histogram Orientation Gradients (boosting HOG) feature is proposed in this paper.
This feature is not sensitive to illumination change and shows better performance in characterizing object shape and appearance. Each of the boosting HOG feature is an output of an adaboost classifier, which is trained using all bins upon a cell in traditional HOG features. All boosting HOG features are combined to establish the final feature vector to train a linear SVM classifier for vehicle classification. Compared with classical approaches, the proposed method achieved better performance in higher detection rate, lower false positive rate and faster detection speed.", "title": "" }, { "docid": "a0f24500f3729b0a2b6e562114eb2a45", "text": "In this work, the smallest reported inkjet-printed UWB antenna is proposed that utilizes a fractal matching network to increase the performance of a UWB microstrip monopole. The antenna is inkjet-printed on a paper substrate to demonstrate the ability to produce small and low-cost UWB antennas with inkjet-printing technology which can enable compact, low-cost, and environmentally friendly wireless sensor network.", "title": "" }, { "docid": "a06dd136a8b8df05c1cf97ef16e6cc2b", "text": "The performance of most conventional classification systems relies on appropriate data representation and much of the efforts are dedicated to feature engineering, a difficult and time-consuming process that uses prior expert domain knowledge of the data to create useful features. On the other hand, deep learning can extract and organize the discriminative information from the data, not requiring the design of feature extractors by a domain expert. Convolutional Neural Networks (CNNs) are a particular type of deep, feedforward network that have gained attention from research community and industry, achieving empirical successes in tasks such as speech recognition, signal processing, object recognition, natural language processing and transfer learning. In this paper, we conduct some preliminary experiments using the deep learning approach to classify breast cancer histopathological images from BreaKHis, a publicly dataset available at http://web.inf.ufpr.br/vri/breast-cancer-database. We propose a method based on the extraction of image patches for training the CNN and the combination of these patches for final classification. This method aims to allow using the high-resolution histopathological images from BreaKHis as input to existing CNN, avoiding adaptations of the model that can lead to a more complex and computationally costly architecture. The CNN performance is better when compared to previously reported results obtained by other machine learning models trained with hand-crafted textural descriptors. Finally, we also investigate the combination of different CNNs using simple fusion rules, achieving some improvement in recognition rates.", "title": "" }, { "docid": "a1c9553dbe9d4f9f9b5d81feb9ece9d5", "text": "Knowledge tracing is a sequence prediction problem where the goal is to predict the outcomes of students over questions as they are interacting with a learning platform. By tracking the evolution of the knowledge of some student, one can optimize instruction. Existing methods are either based on temporal latent variable models, or factor analysis with temporal features. We here show that factorization machines (FMs), a model for regression or classification, encompasses several existing models in the educational literature as special cases, notably additive factor model, performance factor model, and multidimensional item response theory. 
We show, using several real datasets of tens of thousands of users and items, that FMs can estimate student knowledge accurately and fast even when student data is sparsely observed, and handle side information such as multiple knowledge components and number of attempts at item or skill level. Our approach allows to fit student models of higher dimension than existing models, and provides a testbed to try new combinations of features in order to improve existing models. Modeling student learning is key to be able to detect students that need further attention, or recommend automatically relevant learning resources. Initially, models were developed for students sitting for standardized tests, where students could read every problem statement, and missing answers could be treated as incorrect. However, in online platforms such as MOOCs, students attempt some exercises, but do not even look at other ones. Also, they may learn between different attempts. How to measure knowledge when students have attempted different questions? We want to predict the performance of a set I of students, say users, over a set J of questions, say items (we will interchangeably refer to questions as items, problems, or tasks). Each student can attempt a question multiple times, and may learn between successive attempts. We assume we observe ordered triplets (i, j, o) ∈ I × J × {0, 1} which encode the fact that student i attempted question j and got it either correct (o = 1) or incorrect (o = 0). Triplets are sorted chronologically. Then, given a new pair (i′, j′), we need to predict whether student i′ will get question j′ correct or incorrect. We can also assume extra knowledge about users, or items. So far, various models have been designed for student Copyright c © 2019, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. modeling, either based on prediction of sequences (Piech et al. 2015), or factor analysis (Thai-Nghe et al. 2011; Lavoué et al. 2018). Most of existing techniques model students or questions with unidimensional parameters. In this paper, we generalize these models to higher dimensions and manage to train efficiently student models of dimension up to 20. Our family of models is particularly convenient when observations from students are sparse, e.g. when some students attempted few questions, or some questions were answered by few students, which is most of the data usually encountered in online platforms such as MOOCs. When fitting student models, it is better to rely on all the information available at hand. In order to get information about questions, one can identify the knowledge components (KCs) involved in each question. This side information is usually encoded under the form of a q-matrix, that maps items to knowledge components: qjk is 1 if item j involves KC k, 0 otherwise. In this paper, we will also note KC(j) the sets of skills involved by question j, i.e. KC(j) = {k|qjk = 1}. In order to model different attempts, one can keep track of how many times a student has attempted a question, or how many times a student has had the opportunity to acquire a skill, while interacting with the learning material. Our experiments show, in particular, that: • It is better to estimate a bias for each item (not only skill), which popular educational data mining (EDM) models do not. • Most existing models in EDM cannot handle side information such as multiple skills for one item, but the proposed approach does. 
• Side information improves performance more than increasing the latent dimension. To the best of our knowledge, this is the most generic framework that incorporates side information into a student model. For the sake of reproducibility, our implementation is available on GitHub (https://github.com/jilljenn/ktm). The interested reader can check our code and reuse it in order to try new combinations and devise new models. In Section 2, we show related work. In Section 3, we present a family of models, knowledge tracing machines, and recover famous models of the EDM literature as special cases. Then, in Section 4 we conduct experiments and show our results in Section 5. We conclude with further work in Section 6.", "title": "" },
    { "docid": "96c4b307391d049924cb6f06191d3bae", "text": "Theory and research on media violence provides evidence that aggressive youth seek out media violence and that media violence prospectively predicts aggression in youth. The authors argue that both relationships, when modeled over time, should be mutually reinforcing, in what they call a downward spiral model. This study uses multilevel modeling to examine individual growth curves in aggressiveness and violent media use. The measure of use of media violence included viewing action films, playing violent computer and video games, and visiting violence-oriented Internet sites by students from 20 middle schools in 10 different regions in the United States.
The findings appear largely consistent with the proposed model. In particular, concurrent effects of aggressiveness on violent-media use and concurrent and lagged effects of violent media use on aggressiveness were found. The implications of this model for theorizing about media effects on youth, and for bridging active audience with media effects perspectives, are discussed.", "title": "" }, { "docid": "791f889ddb18c375d38f809805bc66cd", "text": "INTRODUCTION\nNasopharyngeal cysts are uncommon, and are mostly asymptomatic. However, these lesions are infrequently found during routine endoscopies and imaging studies. In even more rare cases, they may be the source for unexplained sinonasal symptoms, such as CSF rhinorrhea, visual disturbances and nasal obstruction.\n\n\nPURPOSE OF REVIEW\nThis presentation systematically reviews the different nasopharyngeal cysts encountered in children, emphasizing the current knowledge on pathophysiology, recent advances in molecular biology and prenatal diagnosis, clinical presentation, imaging and treatment options.\n\n\nSUMMARY\nWith the advent of flexible and rigid fiber-optic technology and modern imaging techniques, and in particularly prenatal diagnostic techniques, nasopharyngeal cysts recognition is more common than previous times and requires an appropriate consideration. Familiarity with these lesions is essential for the pediatric otolaryngologist.", "title": "" }, { "docid": "ba974ef3b1724a0b31331f558ed13e8e", "text": "The paper presents a simple and effective sketch-based algorithm for large scale image retrieval. One of the main challenges in image retrieval is to localize a region in an image which would be matched with the query image in contour. To tackle this problem, we use the human perception mechanism to identify two types of regions in one image: the first type of region (the main region) is defined by a weighted center of image features, suggesting that we could retrieve objects in images regardless of their sizes and positions. The second type of region, called region of interests (ROI), is to find the most salient part of an image, and is helpful to retrieve images with objects similar to the query in a complicated scene. So using the two types of regions as candidate regions for feature extraction, our algorithm could increase the retrieval rate dramatically. Besides, to accelerate the retrieval speed, we first extract orientation features and then organize them in a hierarchal way to generate global-to-local features. Based on this characteristic, a hierarchical database index structure could be built which makes it possible to retrieve images on a very large scale image database online. Finally a real-time image retrieval system on 4.5 million database is developed to verify the proposed algorithm. The experiment results show excellent retrieval performance of the proposed algorithm and comparisons with other algorithms are also given.", "title": "" }, { "docid": "b0155714f0b0c8c24ee7d30b6fc62ace", "text": "It has become almost routine practice to incorporate balance exercises into training programs for athletes from different sports. However, the type of training that is most efficient remains unclear, as well as the frequency, intensity and duration of the exercise that would be most beneficial have not yet been determined. The following review is based on papers that were found through computerized searches of PubMed and SportDiscus from 2000 to 2016. 
Articles related to balance training, testing, and injury prevention in young healthy athletes were considered. Based on a Boolean search strategy the independent researchers performed a literature review. A total of 2395 articles were evaluated, yet only 50 studies met the inclusion criteria. In most of the reviewed articles, balance training has proven to be an effective tool for the improvement of postural control. It is difficult to establish one model of training that would be appropriate for each sport discipline, including its characteristics and demands. The main aim of this review was to identify a training protocol based on most commonly used interventions that led to improvements in balance. Our choice was specifically established on the assessment of the effects of balance training on postural control and injury prevention as well as balance training methods. The analyses including papers in which training protocols demonstrated positive effects on balance performance suggest that an efficient training protocol should last for 8 weeks, with a frequency of two training sessions per week, and a single training session of 45 min. This standard was established based on 36 reviewed studies.", "title": "" }, { "docid": "074624b6db03cca1e83e9c40679ce62b", "text": "In this project a human robot interaction system was developed in order to let people naturally play rock-paper-scissors games against a smart robotic opponent. The robot does not perform random choices, the system is able to analyze the previous rounds trying to forecast the next move. A Machine Learning algorithm based on Gaussian Mixture Model (GMM) allows us to increase the percentage of robot victories. This is a very important aspect in the natural interaction between human and robot, in fact, people do not like playing against “stupid” machines, while they are stimulated in confronting with a skilled opponent.", "title": "" } ]
scidocsrr
4b13c346f6d7b42327fdffddc6c4bcb8
Computer-aided system for defect inspection in the PCB manufacturing process
[ { "docid": "a433ebaeeb5dc5b68976b3ecb770c0cd", "text": "1 abstract The importance of the inspection process has been magniied by the requirements of the modern manufacturing environment. In electronics mass-production manufacturing facilities, an attempt is often made to achieve 100 % quality assurance of all parts, subassemblies, and nished goods. A variety of approaches for automated visual inspection of printed circuits have been reported over the last two decades. In this survey, algorithms and techniques for the automated inspection of printed circuit boards are examined. A classiication tree for these algorithms is presented and the algorithms are grouped according to this classiication. This survey concentrates mainly on image analysis and fault detection strategies, these also include the state-of-the-art techniques. A summary of the commercial PCB inspection systems is also presented. 2 Introduction Many important applications of vision are found in the manufacturing and defense industries. In particular, the areas in manufacturing where vision plays a major role are inspection, measurements , and some assembly tasks. The order among these topics closely reeects the manufacturing needs. In most mass-production manufacturing facilities, an attempt is made to achieve 100% quality assurance of all parts, subassemblies, and nished products. One of the most diicult tasks in this process is that of inspecting for visual appearance-an inspection that seeks to identify both functional and cosmetic defects. With the advances in computers (including high speed, large memory and low cost) image processing, pattern recognition, and artiicial intelligence have resulted in better and cheaper equipment for industrial image analysis. This development has made the electronics industry active in applying automated visual inspection to manufacturing/fabricating processes that include printed circuit boards, IC chips, photomasks, etc. Nello 1] gives a summary of the machine vision inspection applications in electronics industry. 01", "title": "" } ]
[ { "docid": "b9538c45fc55caff8b423f6ecc1fe416", "text": " Summary. The Probabilistic I/O Automaton model of [31] is used as the basis for a formal presentation and proof of the randomized consensus algorithm of Aspnes and Herlihy. The algorithm guarantees termination within expected polynomial time. The Aspnes-Herlihy algorithm is a rather complex algorithm. Processes move through a succession of asynchronous rounds, attempting to agree at each round. At each round, the agreement attempt involves a distributed random walk. The algorithm is hard to analyze because of its use of nontrivial results of probability theory (specifically, random walk theory which is based on infinitely many coin flips rather than on finitely many coin flips), because of its complex setting, including asynchrony and both nondeterministic and probabilistic choice, and because of the interplay among several different sub-protocols. We formalize the Aspnes-Herlihy algorithm using probabilistic I/O automata. In doing so, we decompose it formally into three subprotocols: one to carry out the agreement attempts, one to conduct the random walks, and one to implement a shared counter needed by the random walks. Properties of all three subprotocols are proved separately, and combined using general results about automaton composition. It turns out that most of the work involves proving non-probabilistic properties (invariants, simulation mappings, non-probabilistic progress properties, etc.). The probabilistic reasoning is isolated to a few small sections of the proof. The task of carrying out this proof has led us to develop several general proof techniques for probabilistic I/O automata. These include ways to combine expectations for different complexity measures, to compose expected complexity properties, to convert probabilistic claims to deterministic claims, to use abstraction mappings to prove probabilistic properties, and to apply random walk theory in a distributed computational setting. We apply all of these techniques to analyze the expected complexity of the algorithm.", "title": "" }, { "docid": "072b17732d8b628d3536e7045cd0047d", "text": "In this paper, we propose a high-speed parallel 128-bit multiplier for the Ghash function in conjunction with its FPGA implementation. Through the use of Verilog, the designs are evaluated on a Xilinx Virtex-5 with 65 nm technology and 30,000 logic cells. The highest throughput of 30.764 Gbps can be achieved on the Virtex-5 with the consumption of 8864 slice LUTs. The proposed design of the multiplier can be utilized as a design IP core for the implementation of the Ghash function. The architecture of the multiplier can also be applied to a more general polynomial basis. Moreover, it can be used as an arithmetic module in other encryption fields.", "title": "" }, { "docid": "d9980c59c79374c5b1ee107d6a5c978f", "text": "A software module named flash translation layer (FTL) running in the controller of a flash SSD exposes the linear flash memory to the system as a block storage device. The effectiveness of an FTL significantly impacts the performance and durability of a flash SSD. In this research, we propose a new FTL called PCFTL (Plane-Centric FTL), which fully exploits plane-level parallelism supported by modern flash SSDs. Its basic idea is to allocate updates onto the same plane where their associated original data resides, so that the write distribution among planes is balanced. 
Furthermore, it utilizes fast intra-plane copy-back operations to transfer valid pages of a victim block when a garbage collection occurs. We largely extend a validated simulation environment called SSDsim to implement PCFTL. Comprehensive experiments using realistic enterprise-scale workloads are performed to evaluate its performance with respect to mean response time and durability in terms of standard deviation of writes per plane. Experimental results demonstrate that compared with the well-known DFTL, PCFTL improves performance and durability by up to 47 and 80 percent, respectively. Compared with its earlier version (called DLOOP), PCFTL enhances durability by up to 74 percent while delivering a similar I/O performance.", "title": "" }, { "docid": "617881631aa33d3a942882d632386782", "text": "Single-core architectures are rapidly on the decline, and even the most common computational devices now contain multiple cores. With this easy access to parallelism, the machine learning community needs to go beyond treating the running time as the only computational resource and needs to study approaches that take this additional form of flexibility into account. In this work, we study inference in tree-shaped models. Specifically, we focus on balancing accuracy and efficient use of multiple cores when faced with running time constraints.", "title": "" }, { "docid": "fb43cec4064dfad44d54d1f2a4981262", "text": "Knowledge representation is a major topic in AI, and many studies attempt to represent entities and relations of knowledge bases in a continuous vector space. Among these attempts, translation-based methods build entity and relation vectors by minimizing the translation loss from a head entity to a tail one. In spite of the success of these methods, translation-based methods also suffer from the oversimplified loss metric, and are not competitive enough to model various and complex entities/relations in knowledge bases. To address this issue, we propose TransA, an adaptive metric approach for embedding, utilizing the metric learning ideas to provide a more flexible embedding method. Experiments are conducted on the benchmark datasets and our proposed method makes significant and consistent improvements over the state-of-the-art baselines.", "title": "" }, { "docid": "047c486e94c217a9ce84cdd57fc647fe", "text": "There has recently been a surge of work in explanatory artificial intelligence (XAI). This research area tackles the important problem that complex machines and algorithms often cannot provide insights into their behavior and thought processes. XAI allows users and parts of the internal system to be more transparent, providing explanations of their decisions in some level of detail. These explanations are important to ensure algorithmic fairness, identify potential bias/problems in the training data, and to ensure that the algorithms perform as expected. However, explanations produced by these systems are neither standardized nor systematically assessed. In an effort to create best practices and identify open challenges, we describe foundational concepts of explainability and show how they can be used to classify existing literature. We discuss why current approaches to explanatory methods, especially for deep neural networks, are insufficient. 
Finally, based on our survey, we conclude with suggested future research directions for explanatory artificial intelligence.", "title": "" }, { "docid": "53f28f66d99f5e706218447e226cf7cc", "text": "The Connectionist Inductive Learning and Logic Programming System, C-IL2P, integrates the symbolic and connectionist paradigms of Artificial Intelligence through neural networks that perform massively parallel Logic Programming and inductive learning from examples and background knowledge. This work presents an extension of C-IL2P that allows the implementation of Extended Logic Programs in Neural Networks. This extension makes C-IL2P applicable to problems where the background knowledge is represented in a Default Logic. As a case example, we have applied the system for fault diagnosis of a simplified power system generation plant, obtaining good preliminary results.", "title": "" }, { "docid": "b207f2efab5abaf254ec34a8c1559d49", "text": "Image processing algorithms used in surveillance systems are designed to work under good weather conditions. For example, in a rainy day, raindrops are adhered to camera lenses and windshields, resulting in partial occlusions in acquired images, and making performance of image processing algorithms significantly degraded. To improve performance of surveillance systems in a rainy day, raindrops have to be automatically detected and removed from images. Addressing this problem, this paper proposes an adherent raindrop detection method from a single image which does not need training data and special devices. The proposed method employs image segmentation using Maximally Stable Extremal Regions (MSER) and qualitative metrics to detect adherent raindrops from the result of MSER-based image segmentation. Through a set of experiments, we demonstrate that the proposed method exhibits efficient performance of adherent raindrop detection compared with conventional methods.", "title": "" }, { "docid": "1d161bf47ac2efd6597d20fdb100291e", "text": "Amphetamine (AMPH) and its derivatives are regularly used in the treatment of a wide array of disorders such as attention-deficit hyperactivity disorder (ADHD), obesity, traumatic brain injury, and narcolepsy (Prog Neurobiol 75:406–433, 2005; J Am Med Assoc 105:2051–2054, 1935; J Am Acad Child Adolesc Psychiatry 41:514–521, 2002; Neuron 43:261–269, 2004; Annu Rev Pharmacol Toxicol 47:681–698, 2007; Drugs Aging 21:67–79, 2004). Despite the important medicinal role for AMPH, it is more widely known for its psychostimulant and addictive properties as a drug of abuse. The primary molecular targets of AMPH are both the vesicular monoamine transporters (VMATs) and plasma membrane monoamine—dopamine (DA), norepinephrine (NE), and serotonin (5-HT)—transporters. The rewarding and addicting properties of AMPH rely on its ability to act as a substrate for these transporters and ultimately increase extracellular levels of monoamines. AMPH achieves this elevation in extracellular levels of neurotransmitter by inducing synaptic vesicle depletion, which increases intracellular monoamine levels, and also by promoting reverse transport (efflux) through plasma membrane monoamine transporters (J Biol Chem 237:2311–2317, 1962; Med Exp Int J Exp Med 6:47–53, 1962; Neuron 19:1271–1283, 1997; J Physiol 144:314–336, 1958; J Neurosci 18:1979–1986, 1998; Science 237:1219–1223, 1987; J Neurosc 15:4102–4108, 1995). 
This review will focus on two important aspects of AMPH-induced regulation of the plasma membrane monoamine transporters—transporter mediated monoamine efflux and transporter trafficking.", "title": "" }, { "docid": "2232f81da81ced942da548d0669bafc6", "text": "Quantitative prediction of quality properties (i.e. extra-functional properties such as performance, reliability, and cost) of software architectures during design supports a systematic software engineering approach. Designing architectures that exhibit a good trade-off between multiple quality criteria is hard, because even after a functional design has been created, many remaining degrees of freedom in the software architecture span a large, discontinuous design space. In current practice, software architects try to find solutions manually, which is time-consuming, can be error-prone and can lead to suboptimal designs. We propose an automated approach to search the design space for good solutions. Starting with a given initial architectural model, the approach iteratively modifies and evaluates architectural models. Our approach applies a multi-criteria genetic algorithm to software architectures modelled with the Palladio Component Model. It supports quantitative performance, reliability, and cost prediction and can be extended to other quantitative quality criteria of software architectures. We validate the applicability of our approach by applying it to an architecture model of a component-based business information system and analyse its quality criteria trade-offs by automatically investigating more than 1200 alternative design candidates.", "title": "" }, { "docid": "e82df2786524c8a427c8aecfc5ab817a", "text": "This paper presents 2×2 patch array antenna for 2.45 GHz industrial, scientific and medical (ISM) band application. In this design, four array radiating elements interconnected with a transmission line and excited by 50Ω subminiature (SMA). The proposed antenna structure is combined with a reflector in order to investigate the effect of air gap between radiating element and reflector in terms of reflection coefficient (S11) bandwidth and realized gain. The analysis on the effect of air gap has significantly achieved maximum reflection coefficient and realized gain of -16 dB and 19.29 dBi respectively at 2.45 GHz.", "title": "" }, { "docid": "5d48cd6c8cc00aec5f7f299c346405c9", "text": ".................................................................................................................................... iii Acknowledgments..................................................................................................................... iv Table of", "title": "" }, { "docid": "e863555c127f673fa0e57d918dc17414", "text": "Recent studies show that the state-of-the-art deep neural networks (DNNs) are vulnerable to adversarial examples, resulting from small-magnitude perturbations added to the input. Given that that emerging physical systems are using DNNs in safety-critical situations, adversarial examples could mislead these systems and cause dangerous situations. Therefore, understanding adversarial examples in the physical world is an important step towards developing resilient learning algorithms. We propose a general attack algorithm, Robust Physical Perturbations (RP2), to generate robust visual adversarial perturbations under different physical conditions. 
Using the real-world case of road sign classification, we show that adversarial examples generated using RP2 achieve high targeted misclassification rates against standard-architecture road sign classifiers in the physical world under various environmental conditions, including viewpoints. Due to the current lack of a standardized testing method, we propose a two-stage evaluation methodology for robust physical adversarial examples consisting of lab and field tests. Using this methodology, we evaluate the efficacy of physical adversarial manipulations on real objects. With a perturbation in the form of only black and white stickers, we attack a real stop sign, causing targeted misclassification in 100% of the images obtained in lab settings, and in 84.8% of the captured video frames obtained on a moving vehicle (field test) for the target classifier.", "title": "" }, { "docid": "95db9ce9faaf13e8ff8d5888a6737683", "text": "Measurements of pH, acidity, and alkalinity are commonly used to describe water quality. The three variables are interrelated and can sometimes be confused. The pH of water is an intensity factor, while the acidity and alkalinity of water are capacity factors. More precisely, acidity and alkalinity are defined as a water’s capacity to neutralize strong bases or acids, respectively. The term “acidic” for pH values below 7 does not imply that the water has no alkalinity; likewise, the term “alkaline” for pH values above 7 does not imply that the water has no acidity. Water with a pH value between 4.5 and 8.3 has both total acidity and total alkalinity. The definition of pH, which is based on logarithmic transformation of the hydrogen ion concentration ([H+]), has caused considerable disagreement regarding the appropriate method of describing average pH. The opinion that pH values must be transformed to [H+] values before averaging appears to be based on the concept of mixing solutions of different pH. In practice, however, the averaging of [H+] values will not provide the correct average pH because buffers present in natural waters have a greater effect on final pH than does dilution alone. For nearly all uses of pH in fisheries and aquaculture, pH values may be averaged directly. When pH data sets are transformed to [H+] to estimate average pH, extreme pH values will distort the average pH. Values of pH conform more closely to a normal distribution than do values of [H+], making the pH values more acceptable for use in statistical analysis. Moreover, electrochemical measurements of pH and many biological responses to [H+] are described by the Nernst equation, which states that the measured or observed response is linearly related to 10-fold changes in [H+]. Based on these considerations, pH rather than [H+] is usually the most appropriate variable for use in statistical analysis. *Corresponding author: boydce1@auburn.edu Received November 2, 2010; accepted February 7, 2011 Published online September 27, 2011 Temperature, salinity, hardness, pH, acidity, and alkalinity are fundamental variables that define the quality of water. Although all six variables have precise, unambiguous definitions, the last three variables are often misinterpreted in aquaculture and fisheries studies. In this paper, we explain the concepts of pH, acidity, and alkalinity, and we discuss practical relationships among those variables. We also discuss the concept of pH averaging as an expression of the central tendency of pH measurements. 
The concept of pH averaging is poorly understood, if not controversial, because many believe that pH values, which are log-transformed numbers, cannot be averaged directly. We argue that direct averaging of pH values is the simplest and most logical approach for most uses and that direct averaging is based on sound practical and statistical principles. THE pH CONCEPT The pH is an index of the hydrogen ion concentration ([H+]) in water. The [H+] affects most chemical and biological processes; thus, pH is an important variable in water quality endeavors. Water temperature probably is the only water quality variable that is measured more commonly than pH. The pH concept has its basis in the ionization of water:", "title": "" }, { "docid": "82c327ecd5402e7319ecaa416dc8e008", "text": "The rapidly expanding field of big data analytics has started to play a pivotal role in the evolution of healthcare practices and research. It has provided tools to accumulate, manage, analyze, and assimilate large volumes of disparate, structured, and unstructured data produced by current healthcare systems. Big data analytics has been recently applied towards aiding the process of care delivery and disease exploration. However, the adoption rate and research development in this space is still hindered by some fundamental problems inherent within the big data paradigm. In this paper, we discuss some of these major challenges with a focus on three upcoming and promising areas of medical research: image, signal, and genomics based analytics. Recent research which targets utilization of large volumes of medical data while combining multimodal data from disparate sources is discussed. Potential areas of research within this field which have the ability to provide meaningful impact on healthcare delivery are also examined.", "title": "" }, { "docid": "fbdb8df8bfb46db664723cd255c56a5a", "text": "In this paper we present an analysis of a 280 GB AltaVista Sear ch Engine query log consisting of approximately 1 billion entries for search requests over a period of six weeks. This represents approximately 28 5 million user sessions, each an attempt to fill a single information need. We present an analysis of individual queries, query duplica tion, and query sessions. Furthermore we present results of a correlation a nalysis of the log entries, studying the interaction of terms within queries. Our data supports the conjecture that web users differ signi fica tly from the user assumed in the standard information retrieval lite rature. Specifically, we show that web users type in short queries, mostly look at th e first 10 results only, and seldom modify the query. This suggests that t raditional information retrieval techniques might not work well for answeri ng web search requests. The correlation analysis showed that the most highly correl ated items are constituents of phrases. This result indicates it may be useful for search engines to consider search terms as parts of phrases even if t he user did not explicitly specify them as such.", "title": "" }, { "docid": "60a9030ddf88347f9a75ce24f52f9768", "text": "The phenotype of patients with a chromosome 1q43q44 microdeletion (OMIM; 612337) is characterized by intellectual disability with no or very limited speech, microcephaly, growth retardation, a recognizable facial phenotype, seizures, and agenesis of the corpus callosum. 
Comparison of patients with different microdeletions has previously identified ZBTB18 (ZNF238) as a candidate gene for the 1q43q44 microdeletion syndrome. Mutations in this gene have not yet been described. We performed exome sequencing in a patient with features of the 1q43q44 microdeletion syndrome that included short stature, microcephaly, global developmental delay, pronounced speech delay, and dysmorphic facial features. A single de novo non-sense mutation was detected, which was located in ZBTB18. This finding is consistent with an important role for haploinsufficiency of ZBTB18 in the phenotype of chromosome 1q43q44 microdeletions. The corpus callosum is abnormal in mice with a brain-specific knock-out of ZBTB18. Similarly, most (but not all) patients with the 1q43q44 microdeletion syndrome have agenesis or hypoplasia of the corpus callosum. In contrast, the patient with a ZBTB18 point mutation reported here had a structurally normal corpus callosum on brain MRI. Incomplete penetrance or haploinsufficiency of other genes from the critical region may explain the absence of corpus callosum agenesis in this patient with a ZBTB18 point mutation. The findings in this patient with a mutation in ZBTB18 will contribute to our understanding of the 1q43q44 microdeletion syndrome.", "title": "" }, { "docid": "944521c30d94122fa1dfe69105db71cd", "text": "The head related impulse response (HRIR) characterizes the auditory cues created by scattering of sound off a person's anatomy. The experimentally measured HRIR depends on several factors such as reflections from body parts (torso, shoulder, and knees), head diffraction, and reflection/ diffraction effects due to the pinna. Structural models (Algazi et al., 2002; Brown and Duda, 1998) seek to establish direct relationships between the features in the HRIR and the anatomy. While there is evidence that particular features in the HRIR can be explained by anthropometry, the creation of such models from experimental data is hampered by the fact that the extraction of the features in the HRIR is not automatic. One of the prominent features observed in the HRIR, and one that has been shown to be important for elevation perception, are the deep spectral notches attributed to the pinna. In this paper we propose a method to robustly extract the frequencies of the pinna spectral notches from the measured HRIR, distinguishing them from other confounding features. The method also extracts the resonances described by Shaw (1997). The techniques are applied to the publicly available CIPIC HRIR database (Algazi et al., 2001c). The extracted notch frequencies are related to the physical dimensions and shape of the pinna.", "title": "" }, { "docid": "a6247333d00afb3b79cb93c2a036062b", "text": "Privacy decision making can be surprising or even appear contradictory: we feel entitled to protection of information about ourselves that we do not control, yet willingly trade away the same information for small rewards; we worry about privacy invasions of little significance, yet overlook those that may cause significant damages. Dichotomies between attitudes and behaviors, inconsistencies in discounting future costs or rewards, and other systematic behavioral biases have long been studied in the psychology and behavioral economics literatures. In this paper we draw from those literatures to discuss the role of uncertainty, ambiguity, and behavioral biases in privacy decision making.", "title": "" } ]
scidocsrr
a4fed8b3c8cd87d441f99f105565201d
An investigation of imitation learning algorithms for structured prediction
[ { "docid": "61ae61d0950610ee2ad5e07f64f9b983", "text": "We present Searn, an algorithm for integrating search and learning to solve complex structured prediction problems such as those that occur in natural language, speech, computational biology, and vision. Searn is a meta-algorithm that transforms these complex problems into simple classification problems to which any binary classifier may be applied. Unlike current algorithms for structured learning that require decomposition of both the loss function and the feature functions over the predicted structure, Searn is able to learn prediction functions for any loss function and any class of features. Moreover, Searn comes with a strong, natural theoretical guarantee: good performance on the derived classification problems implies good performance on the structured prediction problem.", "title": "" }, { "docid": "26a599c22c173f061b5d9579f90fd888", "text": "Markov logic: an interface layer for artificial intelligence, by Pedro Domingos and Daniel Lowd.", "title": "" } ]
[ { "docid": "25c815f5fc0cf87bdef5e069cbee23a8", "text": "This paper presents a 9-bit subrange analog-to-digital converter (ADC) consisting of a 3.5-bit flash coarse ADC, a 6-bit successive-approximation-register (SAR) fine ADC, and a differential segmented capacitive digital-to-analog converter (DAC). The flash ADC controls the thermometer coarse capacitors of the DAC and the SAR ADC controls the binary fine ones. Both theoretical analysis and behavioral simulations show that the differential non-linearity (DNL) of a SAR ADC with a segmented DAC is better than that of a binary ADC. The merged switching of the coarse capacitors significantly enhances overall operation speed. At 150 MS/s, the ADC consumes 1.53 mW from a 1.2-V supply. The effective number of bits (ENOB) is 8.69 bits and the effective resolution bandwidth (ERBW) is 100 MHz. With a 1.3-V supply voltage, the sampling rate is 200 MS/s with 2.2-mW power consumption. The ENOB is 8.66 bits and the ERBW is 100 MHz. The FOMs at 1.3 V and 200 MS/s, 1.2 V and 150 MS/s and 1 V and 100 MS/s are 27.2, 24.7, and 17.7 fJ/conversion-step, respectively.", "title": "" }, { "docid": "24957794ed251c2e970d787df6d87064", "text": "Glyph as a powerful multivariate visualization technique is used to visualize data through its visual channels. To visualize 3D volumetric dataset, glyphs are usually placed on 2D surface, such as the slicing plane or the feature surface, to avoid occluding each other. However, the 3D spatial structure of some features may be missing. On the other hand, placing large number of glyphs over the entire 3D space results in occlusion and visual clutter that make the visualization ineffective. To avoid the occlusion, we propose a view-dependent interactive 3D lens that removes the occluding glyphs by pulling the glyphs aside through the animation. We provide two space deformation models and two lens shape models to displace the glyphs based on their spatial distributions. After the displacement, the glyphs around the user-interested region are still visible as the context information, and their spatial structures are preserved. Besides, we attenuate the brightness of the glyphs inside the lens based on their depths to provide more depth cue. Furthermore, we developed an interactive glyph visualization system to explore different glyph-based visualization applications. In the system, we provide a few lens utilities that allows users to pick a glyph or a feature and look at it from different view directions. We compare different display/interaction techniques to visualize/manipulate our lens and glyphs.", "title": "" }, { "docid": "2e9d0bf42b8bb6eb8752e89eb46f2fc5", "text": "What is the growth pattern of social networks, like Facebook and WeChat? Does it truly exhibit exponential early growth, as predicted by textbook models like the Bass model, SI, or the Branching Process? How about the count of links, over time, for which there are few published models?\n We examine the growth of several real networks, including one of the world's largest online social network, ``WeChat'', with 300 million nodes and 4.75 billion links by 2013; and we observe power law growth for both nodes and links, a fact that completely breaks the sigmoid models (like SI, and Bass). In its place, we propose NETTIDE, along with differential equations for the growth of the count of nodes, as well as links. 
Our model accurately fits the growth patterns of real graphs; it is general, encompassing as special cases all the known, traditional models (including Bass, SI, log-logistic growth); while still remaining parsimonious, requiring only a handful of parameters. Moreover, our NETTIDE for link growth is the first one of its kind, accurately fitting real data, and naturally leading to the densification phenomenon. We validate our model with four real, time-evolving social networks, where NETTIDE gives good fitting accuracy, and, more importantly, applied on the WeChat data, our NETTIDE forecasted more than 730 days into the future, with 3% error.", "title": "" }, { "docid": "4301af5b0c7910480af37f01847fb1fe", "text": "Cross-modal retrieval is a very hot research topic that is imperative to many applications involving multi-modal data. Discovering an appropriate representation for multi-modal data and learning a ranking function are essential to boost the cross-media retrieval. Motivated by the assumption that a compositional cross-modal semantic representation (pairs of images and text) is more attractive for cross-modal ranking, this paper exploits the existing image-text databases to optimize a ranking function for cross-modal retrieval, called deep compositional cross-modal learning to rank (C2MLR). In this paper, C2MLR considers learning a multi-modal embedding from the perspective of optimizing a pairwise ranking problem while enhancing both local alignment and global alignment. In particular, the local alignment (i.e., the alignment of visual objects and textual words) and the global alignment (i.e., the image-level and sentence-level alignment) are collaboratively utilized to learn the multi-modal embedding common space in a max-margin learning to rank manner. The experiments demonstrate the superiority of our proposed C2MLR due to its nature of multi-modal compositional embedding.", "title": "" }, { "docid": "01cb25375745cd8fdc6d2a546910acb4", "text": "Digital technology innovations have led to significant changes in everyday life, made possible by the widespread use of computers and continuous developments in information technology (IT). Based on the utilization of systems applying 3D(three-dimensional) technology, as well as virtual and augmented reality techniques, IT has become the basis for a new fashion industry model, featuring consumer-centered service and production methods. Because of rising wages and production costs, the fashion industry’s international market power has been significantly weakened in recent years. To overcome this situation, new markets must be established by building a new knowledge and technology-intensive fashion industry. Development of virtual clothing simulation software, which has played an important role in the fashion industry’s IT-based digitalization, has led to continuous technological improvements for systems that can virtually adapt existing 2D(two-dimensional) design work to 3D design work. Such adaptions have greatly influenced the fashion industry by increasing profits. Both here and abroad, studies have been conducted to support the development of consumercentered, high value-added clothing and fashion products by employing digital technology. This study proposes a system that uses a depth camera to capture the figure of a user standing in front of a large display screen. The display can show fashion concepts and various outfits to the user, coordinated to his or her body. Thus, a “magic mirror” effect is produced. 
Magic mirror-based fashion apparel simulation can support total fashion coordination for accessories and outfits automatically, and does not require computer or fashion expertise. This system can provide convenience for users by assuming the role of a professional fashion coordinator giving an appearance presentation. It can also be widely used to support a customized method for clothes shopping.", "title": "" }, { "docid": "c8767f1fbcd84b1973b0007110a77d2c", "text": "OBJECTIVES\nThe purpose of present article was to review the classifications suggested for assessment of the jawbone anatomy, to evaluate the diagnostic possibilities of mandibular canal identification and risk of inferior alveolar nerve injury, aesthetic considerations in aesthetic zone, as well as to suggest new classification system of the jawbone anatomy in endosseous dental implant treatment.\n\n\nMATERIAL AND METHODS\nLiterature was selected through a search of PubMed, Embase and Cochrane electronic databases. The keywords used for search were mandible; mandibular canal; alveolar nerve, inferior; anatomy, cross-sectional; dental implants; classification. The search was restricted to English language articles, published from 1972 to March 2013. Additionally, a manual search in the major anatomy and oral surgery books were performed. The publications there selected by including clinical and human anatomy studies.\n\n\nRESULTS\nIn total 109 literature sources were obtained and reviewed. The classifications suggested for assessment of the jawbone anatomy, diagnostic possibilities of mandibular canal identification and risk of inferior alveolar nerve injury, aesthetic considerations in aesthetic zone were discussed. New classification system of the jawbone anatomy in endosseous dental implant treatment based on anatomical and radiologic findings and literature review results was suggested.\n\n\nCONCLUSIONS\nThe classification system proposed here based on anatomical and radiological jawbone quantity and quality evaluation is a helpful tool for planning of treatment strategy and collaboration among specialists. Further clinical studies should be conducted for new classification validation and reliability evaluation.", "title": "" }, { "docid": "ea5ff4f4060818d0f83cbc8314af2b9e", "text": "A winglet is a device attached at the wingtip, used to improve aircraft efficiency by lowering the induced drag caused by wingtip vortices. It is a vertical or angled extension at the tips of each wing. Winglets work by increasing the effective aspect ratio of a wing without adding greatly to the structural stress and hence necessary weight of the wing structure. This paper describes a CFD 3-dimensional winglets analysis that was performed on a rectangular wing of NACA65 3 218 cross sectional airfoil. The wing is of 660 mm span and 121 mm chord and was analyzed for two shape configurations, semicircle and elliptical. The objectives of the analysis were to compare the aerodynamic characteristics of the two winglet configurations and to investigate the performance of the two winglets shape simulated at selected cant angle of 0, 45 and 60 degrees.", "title": "" }, { "docid": "4cc3f3a5e166befe328b6e18bc836e89", "text": "Virtual human characters are found in a broad range of applications, from movies, games and networked virtual environments to teleconferencing and tutoring applications. Such applications are available on a variety of platforms, from desktop and web to mobile devices. 
High-quality animation is an essential prerequisite for realistic and believable virtual characters. Though researchers and application developers have ample animation techniques for virtual characters at their disposal, implementation of these techniques into an existing application tends to be a daunting and time-consuming task. In this paper we present visage|SDK, a versatile framework for real-time character animation based on the MPEG-4 FBA standard that offers a wide spectrum of features, including animation playback, lip synchronization and facial motion tracking, while facilitating rapid production of art assets and easy integration with existing graphics engines.", "title": "" }, { "docid": "681f36fde6ec060baa76a6722a62ccbc", "text": "This study determined if any of six endodontic solutions would have a softening effect on resorcinol-formalin paste in extracted teeth, and if there were any differences in the solvent action between these solutions. Forty-nine single-rooted extracted teeth were decoronated 2 mm coronal to the CEJ, and the roots sectioned apically to a standard length of 15 mm. Canals were prepared to a 12 mm WL and a uniform size with a #7 Parapost drill. Teeth were then mounted in a cylinder ring with acrylic. The resorcinol-formalin mixture was placed into the canals and was allowed to set for 60 days in a humidor. The solutions tested were 0.9% sodium chloride, 5.25% sodium hypochlorite, chloroform, Endosolv R, 3% hydrogen peroxide, and 70% isopropyl alcohol. Seven samples per solution were tested and seven samples using water served as controls. One drop of the solution was placed over the set mixture in the canal, and the depth of penetration of a 1.5-mm probe was measured at 2, 5, 10, and 20 min using a dial micrometer gauge. A repeated-measures ANOVA showed a difference in penetration between the solutions at 10 min (p = 0.04) and at 20 min (p = 0.0004). At 20 min, Endosolv R had significantly greater penetration than 5.25% sodium hypochlorite (p = 0.0033) and chloroform (p = 0.0018); however, it was not significantly better than the control (p = 0.0812). Although Endosolv R had statistically superior probe penetration at 20 min, the softening effect could not be detected clinically at this time.", "title": "" }, { "docid": "33edd1c2ad88c3693a96f7d3340b061c", "text": "The strength of diapycnal mixing by small-scale motions in a stratified fluid is investigated through changes to the mean buoyancy profile. We study the mixing in laboratory experiments in which an initially linearly stratified fluid is stirred with a rake of vertical bars. The flow evolution depends on the Richardson number (Ri), defined as the ratio of buoyancy forces to inertial forces. At low Ri, the buoyancy flux is a function of the local buoyancy gradient only, and may be modelled as gradient diffusion with a Ri-dependent eddy diffusivity. At high Ri, vertical vorticity shed in the wakes of the bars interacts with the stratification and produces well-mixed layers separated by interfaces. This process leads to layers with a thickness proportional to the ratio of grid velocity to buoyancy frequency for a wide range of Reynolds numbers (Re) and grid solidities. In this regime, the buoyancy flux is not a function of the local gradient alone, but also depends on the local structure of the buoyancy profile. Consequently, the layers are not formed by the Phillips/Posmentier mechanism, and we show that they result from vortical mixing previously thought to occur only at low Re. 
The initial mixing efficiency shows a maximum at a critical Ri which separates the two classes of behaviour. The mixing efficiency falls as the fluid mixes and as the layered structure intensifies and, therefore, the mixing efficiency depends not only on the overall Ri, but also on the dynamics of the structure in the buoyancy field. We discuss some implications of these results to the atmosphere and oceans. © 1999 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "078d3fde34bbcdbb3806d13c3e6cb2dd", "text": "This paper reviews the concept of adaptation of human communities to global changes, especially climate change, in the context of adaptive capacity and vulnerability. It focuses on scholarship that contributes to practical implementation of adaptations at the community scale. In numerous social science fields, adaptations are considered as responses to risks associated with the interaction of environmental hazards and human vulnerability or adaptive capacity. In the climate change field, adaptation analyses have been undertaken for several distinct purposes. Impact assessments assume adaptations to estimate damages to longer term climate scenarios with and without adjustments. Evaluations of specified adaptation options aim to identify preferred measures. Vulnerability indices seek to provide relative vulnerability scores for countries, regions or communities. The main purpose of participatory vulnerability assessments is to identify adaptation strategies that are feasible and practical in communities. The distinctive features of adaptation analyses with this purpose are outlined, and common elements of this approach are described. Practical adaptation initiatives tend to focus on risks that are already problematic, climate is considered together with other environmental and social stresses, and adaptations are mostly integrated or mainstreamed into other resource management, disaster preparedness and sustainable development programs. © 2006 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "5d557ecb67df253662e37d6ec030d055", "text": "Low-rank matrix approximation methods provide one of the simplest and most effective approaches to collaborative filtering. Such models are usually fitted to data by finding a MAP estimate of the model parameters, a procedure that can be performed efficiently even on very large datasets. However, unless the regularization parameters are tuned carefully, this approach is prone to overfitting because it finds a single point estimate of the parameters. In this paper we present a fully Bayesian treatment of the Probabilistic Matrix Factorization (PMF) model in which model capacity is controlled automatically by integrating over all model parameters and hyperparameters. We show that Bayesian PMF models can be efficiently trained using Markov chain Monte Carlo methods by applying them to the Netflix dataset, which consists of over 100 million movie ratings. The resulting models achieve significantly higher prediction accuracy than PMF models trained using MAP estimation.", "title": "" }, { "docid": "5f33fb32ac9a278f7184ac384dc367ab", "text": "The new technologies characterizing the Internet of Things (IoT) allow realizing real smart environments able to provide advanced services to the users. Recently, these smart environments are also being exploited to renovate the users' interest on the cultural heritage, by guaranteeing real interactive cultural experiences. 
In this paper, we design and validate an indoor location-aware architecture able to enhance the user experience in a museum. In particular, the proposed system relies on a wearable device that combines image recognition and localization capabilities to automatically provide the users with cultural contents related to the observed artworks. The localization information is obtained by a Bluetooth low energy (BLE) infrastructure installed in the museum. Moreover, the system interacts with the Cloud to store multimedia contents produced by the user and to share environment-generated events on his/her social networks. Finally, several location-aware services, running in the system, control the environment status also according to users' movements. These services interact with physical devices through a multiprotocol middleware. The system has been designed to be easily extensible to other IoT technologies and its effectiveness has been evaluated in the MUST museum, Lecce, Italy.", "title": "" }, { "docid": "44402fdc3c9f2c6efaf77a00035f38ad", "text": "A multi-objective optimization strategy to find optimal designs of composite multi-rim flywheel rotors is presented. Flywheel energy storage systems have been expanding into applications such as rail and automotive transportation, where the construction volume is limited. Common flywheel rotor optimization approaches for these applications are single-objective, aiming to increase the stored energy or stored energy density. The proposed multi-objective optimization offers more information for decision-makers optimizing three objectives separately: stored energy, cost and productivity. A novel approach to model the manufacturing of multi-rim composite rotors facilitates the consideration of manufacturing cost and time within the optimization. An analytical stress calculation for multi-rim rotors is used, which also takes interference fits and residual stresses into account. Constrained by a failure prediction based on the Maximum Strength, Maximum Strain and Tsai-Wu criterion, the discrete and nonlinear optimization was solved. A hybrid optimization strategy is presented that combines a genetic algorithm with a local improvement executed by a sequential quadratic program. The problem was solved for two rotor geometries used for light rail transit applications showing similar design results as in industry.", "title": "" }, { "docid": "d98f60a2a0453954543da840076e388a", "text": "The back-propagation algorithm is the cornerstone of deep learning. Despite its importance, few variations of the algorithm have been attempted. This work presents an approach to discover new variations of the back-propagation equation. We use a domain specific language to describe update equations as a list of primitive functions. An evolution-based method is used to discover new propagation rules that maximize the generalization performance after a few epochs of training. We find several update equations that can train faster with short training times than standard back-propagation, and perform similar as standard back-propagation at convergence.", "title": "" }, { "docid": "505a150ad558f60a57d7f708a05288f3", "text": "Probiotic supplements in food industry have attracted a lot of attention and shown a remarkable growth in this field. Metabolic engineering (ME) approaches enable understanding their mechanism of action and increases possibility of designing probiotic strains with desired functions. 
Probiotic microorganisms are generally referred to as industrially important lactic acid bacteria (LAB), which are involved in fermenting dairy products, food, and beverages and produce lactic acid as the final product. A number of illustrations of metabolic engineering approaches in industrial probiotic bacteria have been described in this review, including transcriptomic studies of Lactobacillus reuteri and improvement in exopolysaccharide (EPS) biosynthesis yield in Lactobacillus casei LC2W. This review summarizes various metabolic engineering approaches for exploring metabolic pathways. These approaches enable evaluation of the cellular metabolic state and effective editing of the microbial genome or introduction of novel enzymes to redirect the carbon fluxes. In addition, various systems biology tools, such as in silico design, commonly used for improving strain performance are also discussed. Finally, we discuss the integration of metabolic engineering and genome profiling, which offers a new way to explore metabolic interactions, fluxomics and probiogenomics using probiotic bacteria like Bifidobacterium spp. and Lactobacillus spp.", "title": "" }, { "docid": "5d48b6fcc1d8f1050b5b5dc60354fedb", "text": "The latency of current neural-based dialogue state tracking models prohibits them from being used efficiently for deployment in production systems, despite their highly accurate performance. This paper proposes a new scalable and accurate neural dialogue state tracking model, based on the recently proposed Global-Local Self-Attention encoder (GLAD) model by Zhong et al. (2018), which uses global modules to share parameters between estimators for different types (called slots) of dialogue states, and uses local modules to learn slot-specific features. By using only one recurrent network with global conditioning, compared to (1 + # slots) recurrent networks with global and local conditioning used in the GLAD model, our proposed model reduces the latency in training and inference times by 35% on average, while preserving the performance of belief state tracking, with 97.38% on turn request and 88.51% on joint goal accuracy. Evaluation on the multi-domain dataset (Multi-WoZ) also demonstrates that our model outperforms GLAD on turn inform and joint goal accuracy.", "title": "" }, { "docid": "707b75a5fa5e796c18bcaf17cd43075d", "text": "This paper presents a new feedback control strategy for balancing individual DC capacitor voltages in a three-phase cascade multilevel inverter-based static synchronous compensator. The design of the control strategy is based on the detailed small-signal model. The key part of the proposed controller is a compensator to cancel the variation parts in the model. The controller can balance individual DC capacitor voltages when H-bridges run with different switching patterns and have parameter variations. It has two advantages: 1) the controller can work well in all operation modes (the capacitive mode, the inductive mode, and the standby mode) and 2) the impact of the individual DC voltage controller on the voltage quality is small. Simulation results and experimental results verify the performance of the controller.", "title": "" }, { "docid": "e6d5781d32e76d9c5f7c4ea985568986", "text": "We present a baseline convolutional neural network (CNN) structure and image preprocessing methodology to improve the facial expression recognition algorithm using CNN. 
To analyze the most efficient network structure, we investigated four network structures that are known to show good performance in facial expression recognition. Moreover, we also investigated the effect of input image preprocessing methods. Five types of data input (raw, histogram equalization, isotropic smoothing, diffusion-based normalization, difference of Gaussian) were tested, and the accuracy was compared. We trained 20 different CNN models (4 networks × 5 data input types) and verified the performance of each network with test images from five different databases. The experiment result showed that a three-layer structure consisting of a simple convolutional and a max pooling layer with histogram equalization image input was the most efficient. We describe the detailed training procedure and analyze the result of the test accuracy based on considerable observation.", "title": "" }, { "docid": "cac0de9be06166653af16275a9b54878", "text": "Community-based question answering(CQA) services have arisen as a popular knowledge sharing pattern for netizens. With abundant interactions among users, individuals are capable of obtaining satisfactory information. However, it is not effective for users to attain answers within minutes. Users have to check the progress over time until the satisfying answers submitted. We address this problem as a user personalized satisfaction prediction task. Existing methods usually exploit manual feature selection. It is not desirable as it requires careful design and is labor intensive. In this paper, we settle this issue by developing a new multiple instance deep learning framework. Specifically, in our settings, each question follows a weakly supervised learning (multiple instance learning) assumption, where its obtained answers can be regarded as instance sets and we define the question resolved with at least one satisfactory answer. We thus design an efficient framework exploiting multiple instance learning property with deep learning tactic to model the question-answer pairs relevance and rank the asker’s satisfaction possibility. Extensive experiments on large-scale datasets from Stack Exchange demonstrate the feasibility of our proposed framework in predicting askers personalized satisfaction. Our framework can be extended to numerous applications such as UI satisfaction Prediction, multi-armed bandit problem, expert finding and so on.", "title": "" } ]
scidocsrr
151e6bb887437c7b8641864b6b120166
Secure and Efficient Cloud Data Deduplication With Randomized Tag
[ { "docid": "78a6af6e87f82ac483b213f04b1ce405", "text": "Data deduplication is one of the important data compression techniques for eliminating duplicate copies of repeating data, and has been widely used in cloud storage to reduce the amount of storage space and save bandwidth. To protect the confidentiality of sensitive data while supporting deduplication, the convergent encryption technique has been proposed to encrypt the data before outsourcing. To better protect data security, this paper makes the first attempt to formally address the problem of authorized data deduplication. Different from traditional deduplication systems, the differential privileges of users are further considered in duplicate check besides the data itself. We also present several new deduplication constructions supporting authorized duplicate check in a hybrid cloud architecture. Security analysis demonstrates that our scheme is secure in terms of the definitions specified in the proposed security model. As a proof of concept, we implement a prototype of our proposed authorized duplicate check scheme and conduct testbed experiments using our prototype. We show that our proposed authorized duplicate check scheme incurs minimal overhead compared to normal operations.", "title": "" } ]
[ { "docid": "0f7420282b9e16ef6fd26b87fe40eae2", "text": "This paper presents a robot localization system for indoor environments using WiFi signal strength measure. We analyse the main causes of the WiFi signal strength variation and we experimentally demonstrate that a localization technique based on a propagation model doesn’t work properly in our test-bed. We have carried out a localization system based on a priori radio-map obtained automatically from a robot navigation in the environment in a semi-autonomous way. We analyse the effect of reducing calibration effort in order to diminish practical barriers to wider adoption of this type of location measurement technique. Experimental results using a real robot moving are shown. Finally, the conclusions and future works are", "title": "" }, { "docid": "e849812c12446d78885c0f0dc9e4b318", "text": "OBJECTIVES\nTo differentiate the porphyrias by clinical and biochemical methods.\n\n\nDESIGN AND METHODS\nWe describe levels of blood, urine, and fecal porphyrins and their precursors in the porphyrias and present an algorithm for their biochemical differentiation. Diagnoses were established using clinical and biochemical data. Porphyrin analyses were performed by high performance liquid chromatography.\n\n\nRESULTS AND CONCLUSIONS\nPlasma and urine porphyrin patterns were useful for diagnosis of porphyria cutanea tarda, but not the acute porphyrias. Erythropoietic protoporphyria was confirmed by erythrocyte protoporphyrin assay and erythrocyte fluorescence. Acute intermittent porphyria was diagnosed by increases in urine delta-aminolevulinic acid and porphobilinogen and confirmed by reduced erythrocyte porphobilinogen deaminase activity and normal or near-normal stool porphyrins. Variegate porphyria and hereditary coproporphyria were diagnosed by their characteristic stool porphyrin patterns. This appears to be the most convenient diagnostic approach until molecular abnormalities become more extensively defined and more widely available.", "title": "" }, { "docid": "c9631debac218b948df580dcad548390", "text": "We propose and empirically evaluate a method for the extraction of expertcomprehensible rules from trained neural networks. Our method operates in the context of a three-step process for learning that uses rule-based domain knowledge in combination with neural networks. Empirical tests using realworlds problems from molecular biology show that the rules our method extracts from trained neural networks: closely reproduce the accuracy of the network from which they came, are superior to the rules derived by a learning system that directly refines symbolic rules, and are expert-comprehensible.", "title": "" }, { "docid": "fc29f8e0d932140b5f48b35e4175b51a", "text": "A three-dimensional (3D) geometric model obtained from a 3D device or other approaches is not necessarily watertight due to the presence of geometric deficiencies. These inadequacies must be repaired to create a valid surface mesh on the model as a pre-process of computational engineering analyses. This procedure has been a tedious and labor-intensive step, as there are many kinds of deficiencies that can make the geometry to be nonwatertight, such as gaps and holes. It is still challenging to repair discrete surface models based on available geometric information. The focus of this paper is to develop a new automated method for patching holes on the surface models in order to achieve watertightness. 
It describes a numerical algorithm utilizing Non-Uniform Rational B-Splines (NURBS) surfaces to generate smooth triangulated surface patches for topologically simple holes on discrete surface models. The Delaunay criterion for point insertion and edge swapping is used in this algorithm to improve the outcome. Surface patches are generated based on existing points surrounding the holes without altering them. The watertight geometry produced can be used in a wide range of engineering applications in the field of computational engineering simulation studies.", "title": "" }, { "docid": "10b8aa3bc47a05d2e0eddc83f6922005", "text": "Bluetooth Low Energy (BLE), a low-power wireless protocol, is widely used in industrial automation for monitoring field devices. Although the BLE standard defines advanced security mechanisms, there are known security attacks against BLE, and BLE-enabled field devices must be tested thoroughly against these attacks. This article identifies the possible attacks on BLE-enabled field devices relevant to industrial automation. It also presents a framework for defining and executing BLE security attacks and evaluates it on three BLE devices. All tested devices are vulnerable, which confirms that there is a need for better security testing tools as well as for additional defense mechanisms for BLE devices.", "title": "" }, { "docid": "d8d102c3d6ac7d937bb864c69b4d3cd9", "text": "Question Answering (QA) systems are becoming the inspiring model for the future of search engines. Although the underlying datasets for QA systems have recently been promoted from unstructured datasets to structured datasets with highly semantically enriched metadata, question answering systems still face serious challenges that keep them far from desired expectations. In this paper, we raise the challenges of building a Question Answering (QA) system, with a particular focus on employing structured data (i.e., a knowledge graph). This paper provides an exhaustive insight into the challenges known so far, and thus helps researchers to easily spot open rooms for the future research agenda.", "title": "" }, { "docid": "a97f71e0d5501add1ae08eeee5378045", "text": "Machine learning is being implemented in bioinformatics and computational biology to solve challenging problems that have emerged in the analysis and modeling of biological data such as DNA, RNA, and protein. The major problems in classifying protein sequences into existing families/superfamilies are the following: the selection of a suitable sequence encoding method, the extraction of an optimized subset of features that possesses significant discriminatory information, and the adaptation of an appropriate learning algorithm that classifies protein sequences with higher classification accuracy. The accurate classification of protein sequences would be helpful in determining the structure and function of novel protein sequences. In this article, we have proposed a distance-based sequence encoding algorithm that captures the sequence’s statistical characteristics along with amino acid sequence order information. A statistical metric-based feature selection algorithm is then adopted to identify the reduced set of features to represent the original feature space. The performance of the proposed technique is validated using some of the best performing classifiers implemented previously for protein sequence classification. 
An average classification accuracy of 92% was achieved on the yeast protein sequence data set downloaded from the benchmark UniProtKB database.", "title": "" }, { "docid": "522e384f4533ca656210561be9afbdab", "text": "Every software program that interacts with a user requires a user interface. Model-View-Controller (MVC) is a common design pattern to integrate a user interface with the application domain logic. MVC separates the representation of the application domain (Model) from the display of the application's state (View) and user interaction control (Controller). However, studying the literature reveals that a variety of other related patterns exists, which we denote with Model-View- (MV) design patterns. This paper discusses existing MV patterns classified in three main families: Model-View-Controller (MVC), Model-View-View Model (MVVM), and Model-View-Presenter (MVP). We take a practitioners' point of view and emphasize the essentials of each family as well as the differences. The study shows that the selection of patterns should take into account the use cases and quality requirements at hand, and chosen technology. We illustrate the selection of a pattern with an example of our practice. The study results aim to bring more clarity in the variety of MV design patterns and help practitioners to make better grounded decisions when selecting patterns.", "title": "" }, { "docid": "6e26ec8dc5024b2b64da355c9f30d478", "text": "With each eye fixation, we experience a richly detailed visual world. Yet recent work on visual integration and change direction reveals that we are surprisingly unaware of the details of our environment from one view to the next: we often do not detect large changes to objects and scenes ('change blindness'). Furthermore, without attention, we may not even perceive objects ('inattentional blindness'). Taken together, these findings suggest that we perceive and remember only those objects and details that receive focused attention. In this paper, we briefly review and discuss evidence for these cognitive forms of 'blindness'. We then present a new study that builds on classic studies of divided visual attention to examine inattentional blindness for complex objects and events in dynamic scenes. Our results suggest that the likelihood of noticing an unexpected object depends on the similarity of that object to other objects in the display and on how difficult the priming monitoring task is. Interestingly, spatial proximity of the critical unattended object to attended locations does not appear to affect detection, suggesting that observers attend to objects and events, not spatial positions. We discuss the implications of these results for visual representations and awareness of our visual environment.", "title": "" }, { "docid": "f672df401b24571f81648066b3181890", "text": "We consider the general problem of modeling temporal data with long-range dependencies, wherein new observations are fully or partially predictable based on temporally-distant, past observations. A sufficiently powerful temporal model should separate predictable elements of the sequence from unpredictable elements, express uncertainty about those unpredictable elements, and rapidly identify novel elements that may help to predict the future. To create such models, we introduce Generative Temporal Models augmented with external memory systems. 
They are developed within the variational inference framework, which provides both a practical training methodology and methods to gain insight into the models’ operation. We show, on a range of problems with sparse, long-term temporal dependencies, that these models store information from early in a sequence, and reuse this stored information efficiently. This allows them to perform substantially better than existing models based on well-known recurrent neural networks, like LSTMs.", "title": "" }, { "docid": "3ae6440666a5ea56dee2000991a50444", "text": "Flexible medical robots can improve surgical procedures by decreasing invasiveness and increasing accessibility within the body. Using preoperative images, these robots can be designed to optimize a procedure for a particular patient. To minimize invasiveness and maximize biocompatibility, the actuation units of flexible medical robots should be placed fully outside the patient's body. In this letter, we present a novel, compact, lightweight, modular actuation, and control system for driving a class of these flexible robots, known as concentric tube robots. A key feature of the design is the use of three-dimensional printed waffle gears to enable compact control of two degrees of freedom within each module. We measure the precision and accuracy of a single actuation module and demonstrate the ability of an integrated set of three actuation modules to control six degrees of freedom. The integrated system drives a three-tube concentric tube robot to reach a final tip position that is on average less than 2 mm from a given target. In addition, we show a handheld manifestation of the device and present its potential applications.", "title": "" }, { "docid": "c7f944e3c31fbb45dcd83252b43f73ff", "text": "The moderation of content in many social media systems, such as Twitter and Facebook, motivated the emergence of a new social network system that promotes free speech, named Gab. Soon after that, Gab has been removed from Google Play Store for violating the company's hate speech policy and it has been rejected by Apple for similar reasons. In this paper we characterize Gab, aiming at understanding who are the users who joined it and what kind of content they share in this system. Our findings show that Gab is a very politically oriented system that hosts banned users from other social networks, some of them due to possible cases of hate speech and association with extremism. We provide the first measurement of news dissemination inside a right-leaning echo chamber, investigating a social media where readers are rarely exposed to content that cuts across ideological lines, but rather are fed with content that reinforces their current political or social views.", "title": "" }, { "docid": "ce04dd56c71acc8752b1965fd89d5c35", "text": "Existing methods for 3D scene flow estimation often fail in the presence of large displacement or local ambiguities, e.g., at texture-less or reflective surfaces. However, these challenges are omnipresent in dynamic road scenes, which is the focus of this work. Our main contribution is to overcome these 3D motion estimation problems by exploiting recognition. In particular, we investigate the importance of recognition granularity, from coarse 2D bounding box estimates over 2D instance segmentations to fine-grained 3D object part predictions. 
We compute these cues using CNNs trained on a newly annotated dataset of stereo images and integrate them into a CRF-based model for robust 3D scene flow estimation - an approach we term Instance Scene Flow. We analyze the importance of each recognition cue in an ablation study and observe that the instance segmentation cue is by far strongest, in our setting. We demonstrate the effectiveness of our method on the challenging KITTI 2015 scene flow benchmark where we achieve state-of-the-art performance at the time of submission.", "title": "" }, { "docid": "1314ecb495ed518d875e3bad4a285dcd", "text": "Gaze-tracking technology is highly valuable in many interactive and diagnostic applications. For many gaze estimation systems, calibration is an unavoidable procedure necessary to determine certain person-specific parameters, either explicitly or implicitly. Recently, several offline implicit calibration methods have been proposed to ease the calibration burden. However, the calibration procedure is still cumbersome, and gaze estimation accuracy needs further improvement. In this article, the authors present a novel 3D gaze estimation system with online calibration. The proposed system is based on a new 3D model-based gaze estimation method using a single consumer depth camera sensor (via Kinect). Unlike previous gaze estimation methods using explicit offline calibration with fixed number of calibration points or implicit calibration, their approach constantly improves person-specific eye parameters through online calibration, which enables the system to adapt gradually to a new user. The experimental results and the human-computer interaction (HCI) application show that the proposed system can work in real time with superior gaze estimation accuracy and minimal calibration burden.", "title": "" }, { "docid": "454f59bbfb1bacc245b5aeb1eacf3d3a", "text": "John Bowlby's ( 1973, 1980, 1982) attachment theory is one of the most influential theories in personality and developmental psychology and provides insights into adjustment and psychopathology across the lifespan. The theory is also helpful in defining the target of change in psychotherapy, understanding the processes by which change occurs, and conceptualizing cases and planning treatment (Daniel, 2006; Obegi & Berant, 2008; Sable, 2004 ; Wallin, 2007). Here, we propose a model of Animal-Assisted Therapy (AAT) based on attachment theory and on the unique characteristics of human-pet relationships. The model includes clients' unmet attachment needs, individual differences in attachment insecurity, coping, and responsiveness to therapy. It also suggests ways to foster the development of more adaptive patterns of attachment and healthier modes of relating to others.", "title": "" }, { "docid": "8a4436b621021ca7553a408749e722fb", "text": "The main contribution of the paper is to develop a wearable arm band for safety and protection of women and girls. This objective is achieved by the analysis of physiological signal in conjunction with body position. The physiological signals that are analyzed are pulse rate sensor, vibration sensor and if there is any fault it additionally uses a fault detection sensor. Acquisition of raw data makes the Arduino controller function by activating the GPS to send alert messages via GSM and the wireless camera captures images and videos and sends images to the pre-decided contacts and also shares video calling to the family contact. 
The alarm is employed to alert the surroundings by its sound and meanwhile, she can also use a TAZER as a self-defense mechanism.", "title": "" }, { "docid": "42a81e39b411ba4613ff22090097548c", "text": "We present a neural network method for review rating prediction in this paper. Existing neural network methods for sentiment prediction typically only capture the semantics of texts, but ignore the user who expresses the sentiment. This is not desirable for review rating prediction as each user has an influence on how to interpret the textual content of a review. For example, the same word (e.g. “good”) might indicate different sentiment strengths when written by different users. We address this issue by developing a new neural network that takes user information into account. The intuition is to factor in user-specific modification to the meaning of a certain word. Specifically, we extend the lexical semantic composition models and introduce a userword composition vector model (UWCVM), which effectively captures how user acts as a function affecting the continuous word representation. We integrate UWCVM into a supervised learning framework for review rating prediction, and conduct experiments on two benchmark review datasets. Experimental results demonstrate the effectiveness of our method. It shows superior performances over several strong baseline methods.", "title": "" }, { "docid": "982ebb6c33a1675d3073896e3768212a", "text": "Morphometric analysis of nuclei play an essential role in cytological diagnostics. Cytological samples contain hundreds or thousands of nuclei that need to be examined for cancer. The process is tedious and time-consuming but can be automated. Unfortunately, segmentation of cytological samples is very challenging due to the complexity of cellular structures. To deal with this problem, we are proposing an approach, which combines convolutional neural network and ellipse fitting algorithm to segment nuclei in cytological images of breast cancer. Images are preprocessed by the colour deconvolution procedure to extract hematoxylin-stained objects (nuclei). Next, convolutional neural network is performing semantic segmentation of preprocessed image to extract nuclei silhouettes. To find the exact location of nuclei and to separate touching and overlapping nuclei, we approximate them using ellipses of various sizes and orientations. They are fitted using the Bayesian object recognition approach. The accuracy of the proposed approach is evaluated with the help of reference nuclei segmented manually. Tests carried out on breast cancer images have shown that the proposed method can accurately segment elliptic-shaped objects.", "title": "" }, { "docid": "1e7721225d84896a72f2ea790570ecbd", "text": "We have developed a Blumlein line pulse generator which utilizes the superposition of electrical pulses launched from two individually switched pulse forming lines. By using a fast power MOSFET as a switch on each end of the Blumlein line, we were able to generate pulses with amplitudes of 1 kV across a 100-Omega load. Pulse duration and polarity can be controlled by the temporal delay in the triggering of the two switches. In addition, the use of identical switches allows us to overcome pulse distortions arising from the use of non-ideal switches in the traditional Blumlein configuration. With this pulse generator, pulses with durations between 8 and 300 ns were applied to Jurkat cells (a leukemia cell line) to investigate the pulse dependent increase in calcium levels. 
The development of the calcium levels in individual cells was studied by spinning-disc confocal fluorescent microscopy with the calcium indicator, fluo-4. With this fast imaging system, fluorescence changes, representing calcium mobilization, could be resolved with an exposure of 5 ms every 18 ms. For a 60-ns pulse duration, each rise in intracellular calcium was greater as the electric field strength was increased from 25 kV/cm to 100 kV/cm. Only for the highest electric field strength is the response dependent on the presence of extracellular calcium. The results complement ion-exchange mechanisms previously observed during the charging of cellular membranes, which were suggested by observations of membrane potential changes during exposure.", "title": "" }, { "docid": "795b64fb3ebead2b565f66558a7be063", "text": "Agent-based computing represents an exciting new synthesis both for Artificial Intelligence (AI) and, more generally, Computer Science. It has the potential to significantly improve the theory and the practice of modeling, designing, and implementing computer systems. Yet, to date, there has been little systematic analysis of what makes the agent-based approach such an appealing and powerful computational model. Moreover, even less effort has been devoted to discussing the inherent disadvantages that stem from adopting an agent-oriented view. Here both sets of issues are explored. The standpoint of this analysis is the role of agent-based software in solving complex, real-world problems. In particular, it will be argued that the development of robust and scalable software systems requires autonomous agents that can complete their objectives while situated in a dynamic and uncertain environment, that can engage in rich, high-level social interactions, and that can operate within flexible organisational structures.  2000 Elsevier Science B.V. All rights reserved.", "title": "" } ]
scidocsrr
4adcc06cc2aff58f7a40d1a83f876330
Vein Detection Using Infrared for Venepuncture
[ { "docid": "1f9940ff3e31267cfeb62b2a7915aba9", "text": "Infrared vein detection is one of the newest biomedical techniques researched today. Basic principal behind this is, when IR light transmitted on palm it passes through tissue and veins absorbs that light and the vein appears darker than surrounding tissue. This paper presents vein detection system using strong IR light source, webcam, Matlab based image processing algorithm. Using the Strong IR light source consisting of high intensity led and webcam camera we captured transillumination image of palm. Image processing algorithm is used to separate out the veins from palm.", "title": "" } ]
[ { "docid": "ef44e3456962ed4a857614b0782ed4d2", "text": "A sketching system for spline-based free-form surfaces on the Responsive Workbench is presented. We propose 3D tools for curve drawing and deformation techniques for curves and surfaces, adapted to the needs of designers. The user directly draws curves in the virtual environment, using a tracked stylus as an input device. A curve network can be formed, describing the skeleton of a virtual model. The non-dominant hand positions and orients the model while the dominant hand uses the editing tools. The curves and the resulting skinning surfaces can interactively be deformed.", "title": "" }, { "docid": "4bee15ff014c7f7e43079ac0e17f6ce8", "text": "Injuries to the lateral ligament complex of the ankle are common problems in acute care practice. We believe that a well-developed knowledge of the anatomy provides a foundation for understanding the basic mechanism of injury, diagnosis, and treatment, especially surgical treatment, of lateral collateral ankle ligament injury. To address this issue we performed this review with regard to the anatomy of the lateral collateral ankle ligaments.", "title": "" }, { "docid": "2c597e49524d641ddfe1ec552bee2014", "text": "This paper presents a fully integrated CMOS start-up circuit for a low voltage battery-less harvesting application. The proposed topology is based on a step-up charge pump using depletion transistors instead of enhancement transistors. With this architecture, we can obtain a self-starting voltage below the enhancement transistor's threshold due to its normally-on operation. The key advantages are the CMOS compatibility, inductor-less solution and no extra post-fabrication processing. The topology has been simulated in 0.18μm technology using a transistor-level model and has been compared to the traditional charge pump structure. The depletion-based voltage doubler charge pump enables operation from an input voltage as low as 250mV compared to 400mV in an enhancement-based one. The proposed topology can also achieve other conversion ratios such as 1:-1 inverter or 1:N step-up.", "title": "" }, { "docid": "705b9a7a6bc5ad364fe3b14c85570896", "text": "Cytokines are small, short-lived proteins secreted by many different cell types. As signaling molecules, cytokines provide communication between cells and play a crucial role in modulating innate and adaptive immune response. The family of cytokines includes interferons, interleukins, chemokines, mesenchymal growth factors, tumor necrosis factor family and adipokines. Interferons (IFNs) are a multigene family of inducible cytokines with antiviral, antiproliferative, and immunomodulatory function. Recombinant DNA technology can be useful in the production of human IFNs. This process includes fermentation, purification, and formation of the final product. Interleukins are classified in families based on sequence homology, receptor-binding properties, biological function, and cellular sources. TNF and IL-1 are considered to be key mediators of inflammatory response, while IL-6 plays a key role in the transition from acute to chronic inflammation. The inhibition of TNF includes administration of anti-TNF antibody and TNF receptor (TNFR). The reduction of IL-1 level can be achieved by the administration of anti-IL-1 antibody or IL-1 receptor antagonist (IL-1Ra), and the reduction of IL-6 level in the treatment of chronic inflammatory diseases can be achieved by the administration of anti-IL-6 antibody and anti-IL-6 receptor antibody. 
Recombinant cytokines and cytokine antagonists (antibodies and receptors) can be used in treating many different diseases.", "title": "" }, { "docid": "676445f43b7b8fa44afaa47ff74b176c", "text": "The study of light at the nanoscale has become a vibrant field of research, as researchers now master the flow of light at length scales far below the optical wavelength, largely surpassing the classical limits imposed by diffraction. Using metallic and dielectric nanostructures precisely sculpted into two-dimensional (2D) and 3D nanoarchitectures, light can be scattered, refracted, confined, filtered, and processed in fascinating new ways that are impossible to achieve with natural materials and in conventional geometries. This control over light at the nanoscale has not only unveiled a plethora of new phenomena but has also led to a variety of relevant applications, including new venues for integrated circuitry, optical computing, solar, and medical technologies, setting high expectations for many novel discoveries in the years to come.", "title": "" }, { "docid": "abea38a143932cc7372fa19f0c494908", "text": "Applications of reinforcement learning for robotic manipulation often assume an episodic setting. However, controllers trained with reinforcement learning are often situated in the context of a more complex compound task, where multiple controllers might be invoked in sequence to accomplish a higher-level goal. Furthermore, training such controllers typically requires resetting the environment between episodes, which is typically handled manually. We describe an approach for training chains of controllers with reinforcement learning. This requires taking into account the state distributions induced by preceding controllers in the chain, as well as automatically training reset controllers that can reset the task between episodes. The initial state of each controller is determined by the controller that precedes it, resulting in a non-stationary learning problem. We demonstrate that a recently developed method that optimizes linear-Gaussian controllers under learned local linear models can tackle this sort of non-stationary problem, and that training controllers concurrently with a corresponding reset controller only minimally increases training time. We also demonstrate this method on a complex tool use task that consists of seven stages and requires using a toy wrench to screw in a bolt. This compound task requires grasping and handling complex contact dynamics. After training, the controllers can execute the entire task quickly and efficiently. Finally, we show that this method can be combined with guided policy search to automatically train nonlinear neural network controllers for a grasping task with considerable variation in target position.", "title": "" }, { "docid": "3dc800707ecbbf0fed60e445cfe02fcc", "text": "We extend the method introduced by Cinzano et al. (2000a) to map the artificial sky brightness in large territories from DMSP satellite data, in order to map the naked eye star visibility and telescopic limiting magnitudes. For these purposes we take into account the altitude of each land area from GTOPO30 world elevation data, the natural sky brightness in the chosen sky direction, based on Garstang modelling, the eye capability with naked eye or a telescope, based on the Schaefer (1990) and Garstang (2000b) approach, and the stellar extinction in the visual photometric band. For near zenith sky directions we also take into account screening by terrain elevation. 
Maps of naked eye star visibility and telescopic limiting magnitudes are useful to quantify the capability of the population to perceive our Universe, to evaluate the future evolution, to make cross correlations with statistical parameters and to recognize areas where astronomical observations or popularisation can still acceptably be made. We present, as an application, maps of naked eye star visibility and total sky brightness in V band in Europe at the zenith with a resolution of approximately 1 km.", "title": "" }, { "docid": "f028bf7bbaa4d182013771e9079b5e21", "text": "Hepatoblastoma (HB), a primary liver tumor in childhood, is often accompanied by alpha-fetoprotein (AFP) secretion, and sometimes by β-human chorionic gonadotropin hormone (β-hCG) secretion, and this can cause peripheral precocious puberty (PPP). We describe a case of PPP associated with HB. Laboratory tests showed an increase in AFP, β-hCG and testosterone values, and suppression of follicle-stimulating hormone and luteinizing hormone levels. After chemotherapy and surgery, AFP, β-hCG and testosterone levels normalized and signs of virilization did not progress further. The child did not show evidence for tumor recurrence after 16 months of follow-up. New therapeutic approaches and early diagnosis may ensure a better prognosis of virilizing HB, than reported in the past. Assessment of PPP should always take into account the possibility of a tumoral source.", "title": "" }, { "docid": "91cb8726930e39db53814ceab69b7a50", "text": "Traditional methods for processing large images are extremely time intensive. Also, conventional image processing methods do not take advantage of available computing resources such as multicore central processing unit (CPU) and manycore general purpose graphics processing unit (GP-GPU). Studies suggest that applying parallel programming techniques to various image filters should improve the overall performance without compromising the existing resources. Recent studies also suggest that parallel implementation of image processing on compute unified device architecture (CUDA)-accelerated CPU/GPU system has potential to process the image very fast. In this paper, we introduce a CUDA-accelerated image processing method suitable for multicore/manycore systems. Using a bitmap file, we implement image processing and filtering through traditional sequential C and newly introduced parallel CUDA/C programs. A key step of the proposed algorithm is to load the pixel's bytes in a one dimensional array with length equal to matrix width * matrix height * bytes per pixel. This is done to process the image concurrently in parallel. According to experimental results, the proposed CUDA-accelerated parallel image processing algorithm provides benefit with a speedup factor up to 365 for an image with 8,192×8,192 pixels.", "title": "" }, { "docid": "c36bfde4e2f1cd3a5d6d8c0bcb8806d8", "text": "A 20/20 vision in ophthalmology implies a perfect view of things that are in front of you. The term is also used to mean a perfect sight of the things to come. Here we focus on a speculative vision of the VLDB in the year 2020. This panel is the follow-up of the one I organised (with S. Navathe) at the Kyoto VLDB in 1986, with the title: \"Anyone for a VLDB in the Year 2000?\". 
In that panel, the members discussed the major advances made in the database area and conjectured on its future, following a concern of many researchers that the database area was running out of interesting research topics and therefore it might disappear into other research topics, such as software engineering, operating systems and distributed systems. That did not happen.", "title": "" }, { "docid": "9dab240226eee04ae78dc3e2b98cd00d", "text": "The use of whole plants for the synthesis of recombinant proteins has received a great deal of attention recently because of advantages in economy, scalability and safety compared with traditional microbial and mammalian production systems. However, production systems that use whole plants lack several of the intrinsic benefits of cultured cells, including the precise control over growth conditions, batch-to-batch product consistency, a high level of containment and the ability to produce recombinant proteins in compliance with good manufacturing practice. Plant cell cultures combine the merits of whole-plant systems with those of microbial and animal cell cultures, and already have an established track record for the production of valuable therapeutic secondary metabolites. Although no recombinant proteins have yet been produced commercially using plant cell cultures, there have been many proof-of-principle studies and several companies are investigating the commercial feasibility of such production systems.", "title": "" }, { "docid": "0084faef0e08c4025ccb3f8fd50892f1", "text": "Steganography is a method of hiding secret messages in a cover object while communication takes place between sender and receiver. Security of confidential information has always been a major issue from the past times to the present time. It has always been the interested topic for researchers to develop secure techniques to send data without revealing it to anyone other than the receiver. Therefore from time to time researchers have developed many techniques to fulfill secure transfer of data and steganography is one of them. In this paper we have proposed a new technique of image steganography i.e. Hash-LSB with RSA algorithm for providing more security to data as well as our data hiding method. The proposed technique uses a hash function to generate a pattern for hiding data bits into LSB of RGB pixel values of the cover image. This technique makes sure that the message has been encrypted before hiding it into a cover image. If in any case the cipher text got revealed from the cover image, the intermediate person other than receiver can't access the message as it is in encrypted form.", "title": "" }, { "docid": "a8cdb14a123f12788b5a8a8ca0f5f415", "text": "Medical image data is naturally distributed among clinical institutions. This partitioning, combined with security and privacy restrictions on medical data, imposes limitations on machine learning algorithms in clinical applications, especially for small and newly established institutions. We present InsuLearn: an intuitive and robust open-source (open-source code available at: https://github.com/ DistributedML/InsuLearn) platform designed to facilitate distributed learning (classification and regression) on medical image data, while preserving data security and privacy. InsuLearn is built on ensemble learning, in which statistical models are developed at each institution independently and combined at secure coordinator nodes. 
InsuLearn protocols are designed such that the liveness of the system is guaranteed as institutions join and leave the network. Coordination is implemented as a cluster of replicated state machines, making it tolerant to individual node failures. We demonstrate that InsuLearn successfully integrates accurate models for horizontally partitioned data while preserving privacy.", "title": "" }, { "docid": "760403bb332465093386859841a62a5d", "text": "Learning to rank is a new statistical learning technology on creating a ranking model for sorting objects. The technology has been successfully applied to web search, and is becoming one of the key machineries for building search engines. Existing approaches to learning to rank, however, did not consider the cases in which there exists relationship between the objects to be ranked, despite of the fact that such situations are very common in practice. For example, in web search, given a query certain relationships usually exist among the the retrieved documents, e.g., URL hierarchy, similarity, etc., and sometimes it is necessary to utilize the information in ranking of the documents. This paper addresses the issue and formulates it as a novel learning problem, referred to as, 'learning to rank relational objects'. In the new learning task, the ranking model is defined as a function of not only the contents (features) of objects but also the relations between objects. The paper further focuses on one setting of the learning problem in which the way of using relation information is predetermined. It formalizes the learning task as an optimization problem in the setting. The paper then proposes a new method to perform the optimization task, particularly an implementation based on SVM. Experimental results show that the proposed method outperforms the baseline methods for two ranking tasks (Pseudo Relevance Feedback and Topic Distillation) in web search, indicating that the proposed method can indeed make effective use of relation information and content information in ranking.", "title": "" }, { "docid": "3604763dd721f4bb3f46b65556a50563", "text": "-Information extraction encountered a new challenge while the spatial resolution is increasing quickly. People suppose that the higher the spatial resolution is, the better the result of classification is. To prove this guess we use two approaches: pixel-based classification and object-oriented classification. The former site test shows one class has different accuracy from various resolution images. Object-oriented approach is an advanced solution for image analysis. The accuracy of objectoriented approach is much higher than those of based-pixel approach. The site result shows that each class has its optimal image segmentation scale. Keywords--Feature, Scale; Resolution; Image analysis.", "title": "" }, { "docid": "0867fcbf4c6a9d30e27ae8c3328e643e", "text": "Although much is known about the representation and processing of concrete concepts, knowledge of what abstract semantics might be is severely limited. In this article we first address the adequacy of the 2 dominant accounts (dual coding theory and the context availability model) put forward in order to explain representation and processing differences between concrete and abstract words. We find that neither proposal can account for experimental findings and that this is, at least partly, because abstract words are considered to be unrelated to experiential information in both of these accounts. 
We then address a particular type of experiential information, emotional content, and demonstrate that it plays a crucial role in the processing and representation of abstract concepts: Statistically, abstract words are more emotionally valenced than are concrete words, and this accounts for a residual latency advantage for abstract words, when variables such as imageability (a construct derived from dual coding theory) and rated context availability are held constant. We conclude with a discussion of our novel hypothesis for embodied abstract semantics.", "title": "" }, { "docid": "c16499b3945603d04cf88fec7a2c0a85", "text": "Recovering structure and motion parameters given a image pair or a sequence of images is a well studied problem in computer vision. This is often achieved by employing Structure from Motion (SfM) or Simultaneous Localization and Mapping (SLAM) algorithms based on the real-time requirements. Recently, with the advent of Convolutional Neural Networks (CNNs) researchers have explored the possibility of using machine learning techniques to reconstruct the 3D structure of a scene and jointly predict the camera pose. In this work, we present a framework that achieves state-of-the-art performance on single image depth prediction for both indoor and outdoor scenes. The depth prediction system is then extended to predict optical flow and ultimately the camera pose and trained end-to-end. Our framework outperforms previous deep-learning based motion prediction approaches, and we also demonstrate that the state-of-the-art metric depths can be further improved using the knowledge of pose.", "title": "" }, { "docid": "81c8b1e9c54d089bc63166866e88bb17", "text": "Performing literature survey for scholarly activities has become a challenging and time consuming task due to the rapid growth in the number of scientific articles. Thus, automatic recommendation of high quality citations for a given scientific query topic is immensely valuable. The state-of-the-art on the problem of citation recommendation suffers with the following three limitations. First, most of the existing approaches for citation recommendation require input in the form of either the full article or a seed set of citations, or both. Nevertheless, obtaining the recommendation for citations given a set of keywords is extremely useful for many scientific purposes. Second, the existing techniques for citation recommendation aim at suggesting prestigious and well-cited articles. However, we often need recommendation of diversified citations of the given query topic for many scientific purposes; for instance, it helps authors to write survey papers on a topic and it helps scholars to get a broad view of key problems on a topic. Third, one of the problems in the keyword based citation recommendation is that the search results typically would not include the semantically correlated articles if these articles do not use exactly the same keywords. To the best of our knowledge, there is no known citation recommendation system in the literature that addresses the above three limitations simultaneously. In this paper, we propose a novel citation recommendation system called DiSCern to precisely address the above research gap. DiSCern finds relevant and diversified citations in response to a search query, in terms of keyword(s) to describe the query topic, while using only the citation graph and the keywords associated with the articles, and no latent information. 
We use a novel keyword expansion step, inspired by community finding in social network analysis, in DiSCern to ensure that the semantically correlated articles are also included in the results. Our proposed approach primarily builds on the Vertex Reinforced Random Walk (VRRW) to balance prestige and diversity in the recommended citations. We demonstrate the efficacy of DiSCern empirically on two datasets: a large publication dataset of more than 1.7 million articles in computer science domain and a dataset of more than 29,000 articles in theoretical high-energy physics domain. The experimental results show that our proposed approach is quite efficient and it outperforms the state-of-the-art algorithms in terms of both relevance and diversity.", "title": "" }, { "docid": "ab168b9599975ee3fe41aa72df6cda0a", "text": "BACKGROUND\nThe United Kingdom has had a significant increase in addiction to and use of cocaine among 16-29-year olds from 6% in 1998 to 10% in 2000. In 2000, the United Kingdom had the highest recorded consumption of \"recent use\" cocaine in Europe, with 3.3% of young adults. Acupuncture is quick, inexpensive, and relatively safe, and may establish itself as an important addiction service in the future.\n\n\nAIM\nTo select investigations that meet the inclusion criteria and critically appraise them in order to answer the question: \"Is acupuncture effective in the treatment of cocaine addiction?\" The focus shall then be directed toward the use of the National Acupuncture Detoxification Association (NADA) protocol as the intervention and the selection of sham points for the control group.\n\n\nDATA SOURCES\nThe ARRC database was accessed from Trina Ward (M. Phil. student) at Thames Valley University. AMED, MEDLINE and Embase were also accessed along with \"hand\" searching methods at the British Library.\n\n\nINCLUSION AND EXCLUSION CRITERIA\nPeople addicted to either cocaine or crack cocaine as their main addiction, needle-acupuncture, single-double-blinded process, randomized subjects, a reference group incorporating a form of sham points.\n\n\nEXCLUSION CRITERIA\nuse of moxibustion, laser acupuncture, transcutaneous electrical nerve stimulation (TENS) electroacupuncture or conditions that did not meet the inclusion criteria.\n\n\nQUALITY ASSESSMENT\nThe criteria set by ter Riet, Kleijnen and Knipschild (in 1990); Hammerschlag and Morris (in 1990); Koes, Bouter and van der Heijden (in 1995), were modified into one set of criteria consisting of 27 different values.\n\n\nRESULTS\nSix randomized controlled trials (RCTs) met the inclusion criteria and were included in this review. All studies scored over 60 points indicating a relatively adequate methodology quality. The mean was 75 and the standard deviation was 6.80. A linear regression analysis did not yield a statistically significant association (n = 6, p = 0.11).\n\n\nCONCLUSIONS\nThis review could not confirm that acupuncture was an effective treatment for cocaine abuse. The NADA protocol of five treatment points still offers the acupuncturist the best possible combination of acupuncture points based upon Traditional Chinese Medicine. Throughout all the clinical trials reviewed, no side-effects of acupuncture were noted. This paper calls for the full set of 5 treatment points as laid out by the NADA to be included as the treatment intervention. Points on the helix, other than the liver yang points, should be selected as sham points for the control group.", "title": "" } ]
scidocsrr
78a34a8483d20f4fedfa30dd43b44af0
From Data Fusion to Knowledge Fusion
[ { "docid": "d88ce9c09fdfa0c1ea023ce08183f39b", "text": "The development of the Internet in recent years has made it possible and useful to access many different information systems anywhere in the world to obtain information. While there is much research on the integration of heterogeneous information systems, most commercial systems stop short of the actual integration of available data. Data fusion is the process of fusing multiple records representing the same real-world object into a single, consistent, and clean representation.\n This article places data fusion into the greater context of data integration, precisely defines the goals of data fusion, namely, complete, concise, and consistent data, and highlights the challenges of data fusion, namely, uncertain and conflicting data values. We give an overview and classification of different ways of fusing data and present several techniques based on standard and advanced operators of the relational algebra and SQL. Finally, the article features a comprehensive survey of data integration systems from academia and industry, showing if and how data fusion is performed in each.", "title": "" }, { "docid": "0922dd64155a9ae468e82f36e5f8f098", "text": "Many applications rely on Web data and extraction systems to accomplish knowledge-driven tasks. Web information is not curated, so many sources provide inaccurate, or conflicting information. Moreover, extraction systems introduce additional noise to the data. We wish to automatically distinguish correct data and erroneous data for creating a cleaner set of integrated data. Previous work has shown that a naive voting strategy that trusts data provided by the majority or at least a certain number of sources may not work well in the presence of copying between the sources. However, correlation between sources can be much broader than copying: sources may provide data from complementary domains (negative correlation), extractors may focus on different types of information (negative correlation), and extractors may apply common rules in extraction (positive correlation, without copying). In this paper we present novel techniques modeling correlations between sources and applying it in truth finding. We provide a comprehensive evaluation of our approach on three real-world datasets with different characteristics, as well as on synthetic data, showing that our algorithms outperform the existing state-of-the-art techniques.", "title": "" }, { "docid": "d51408ad40bdc9a3a846aaf7da907cef", "text": "Accessing online information from various data sources has become a necessary part of our everyday life. Unfortunately such information is not always trustworthy, as different sources are of very different qualities and often provide inaccurate and conflicting information. Existing approaches attack this problem using unsupervised learning methods, and try to infer the confidence of the data value and trustworthiness of each source from each other by assuming values provided by more sources are more accurate. However, because false values can be widespread through copying among different sources and out-of-date data often overwhelm up-to-date data, such bootstrapping methods are often ineffective.\n In this paper we propose a semi-supervised approach that finds true values with the help of ground truth data. Such ground truth data, even in very small amount, can greatly help us identify trustworthy data sources. 
Unlike existing studies that only provide iterative algorithms, we derive the optimal solution to our problem and provide an iterative algorithm that converges to it. Experiments show our method achieves higher accuracy than existing approaches, and it can be applied to very large data sets when implemented with MapReduce.", "title": "" } ]
[ { "docid": "bbb592c079f1cb2248ded2e249dcc943", "text": "A family of super deep networks, referred to as residual networks or ResNet [14], achieved record-beating performance in various visual tasks such as image recognition, object detection, and semantic segmentation. The ability to train very deep networks naturally pushed the researchers to use enormous resources to achieve the best performance. Consequently, in many applications super deep residual networks were employed for just a marginal improvement in performance. In this paper, we propose ∊-ResNet that allows us to automatically discard redundant layers, which produces responses that are smaller than a threshold ∊, without any loss in performance. The ∊-ResNet architecture can be achieved using a few additional rectified linear units in the original ResNet. Our method does not use any additional variables nor numerous trials like other hyperparameter optimization techniques. The layer selection is achieved using a single training process and the evaluation is performed on CIFAR-10, CIFAR-100, SVHN, and ImageNet datasets. In some instances, we achieve about 80% reduction in the number of parameters.", "title": "" }, { "docid": "596949afaabdbcc68cd8bda175400f30", "text": "We propose improved Deep Neural Network (DNN) training loss functions for more accurate single keyword spotting on resource-constrained embedded devices. The loss function modifications consist of a combination of multi-task training and weighted cross entropy. In the multi-task architecture, the keyword DNN acoustic model is trained with two tasks in parallel the main task of predicting the keyword-specific phone states, and an auxiliary task of predicting LVCSR senones. We show that multi-task learning leads to comparable accuracy over a previously proposed transfer learning approach where the keyword DNN training is initialized by an LVCSR DNN of the same input and hidden layer sizes. The combination of LVCSRinitialization and Multi-task training gives improved keyword detection accuracy compared to either technique alone. We also propose modifying the loss function to give a higher weight on input frames corresponding to keyword phone targets, with a motivation to balance the keyword and background training data. We show that weighted cross-entropy results in additional accuracy improvements. Finally, we show that the combination of 3 techniques LVCSR-initialization, multi-task training and weighted cross-entropy gives the best results, with significantly lower False Alarm Rate than the LVCSR-initialization technique alone, across a wide range of Miss Rates.", "title": "" }, { "docid": "c460179cbdb40b9d89b3cc02276d54e1", "text": "In recent years the sport of climbing has seen consistent increase in popularity. Climbing requires a complex skill set for successful and safe exercising. While elite climbers receive intensive expert coaching to refine this skill set, this progression approach is not viable for the amateur population. We have developed ClimbAX - a climbing performance analysis system that aims for replicating expert assessments and thus represents a first step towards an automatic coaching system for climbing enthusiasts. Through an accelerometer based wearable sensing platform, climber's movements are captured. An automatic analysis procedure detects climbing sessions and moves, which form the basis for subsequent performance assessment. 
The assessment parameters are derived from sports science literature and include: power, control, stability, speed. ClimbAX was evaluated in a large case study with 53 climbers under competition settings. We report a strong correlation between predicted scores and official competition results, which demonstrates the effectiveness of our automatic skill assessment system.", "title": "" }, { "docid": "d4f6e6f29ca34cdedb0bc71c8b3bd663", "text": "We propose a diagnostic method for probing specific information captured in vector representations of sentence meaning, via simple classification tasks with strategically constructed sentence sets. We identify some key types of semantic information that we might expect to be captured in sentence composition, and illustrate example classification tasks for targeting this information.", "title": "" }, { "docid": "d5665efd0e4a91e9be4c84fecd5fd4ad", "text": "Hardware accelerators are being increasingly deployed to boost the performance and energy efficiency of deep neural network (DNN) inference. In this paper we propose Thundervolt, a new framework that enables aggressive voltage underscaling of high-performance DNN accelerators without compromising classification accuracy even in the presence of high timing error rates. Using post-synthesis timing simulations of a DNN accelerator modeled on the Google TPU, we show that Thundervolt enables between 34% and 57% energy savings on state-of-the-art speech and image recognition benchmarks with less than 1% loss in classification accuracy and no performance loss. Further, we show that Thundervolt is synergistic with and can further increase the energy efficiency of commonly used run-time DNN pruning techniques like Zero-Skip.", "title": "" }, { "docid": "2c33713709afcb3d903945aff096a7f2", "text": "This study investigates the relationship of strategic leadership behaviors with executive innovation influence and the moderating effects of top management team (TMT) tenure heterogeneity and social culture on that relationship. Using survey data from six countries comprising three social cultures, strategic leadership behaviors were found to have a strong positive relationship with executive influence on both product–market and administrative innovations. In addition, TMT tenure heterogeneity moderated the relationship of strategic leadership behaviors with executive innovation influence for both types of innovation, while social culture moderated that relationship only in the case of administrative innovation. Copyright © 2005 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "37d063b4774b6d5916cd91bd2c49ac17", "text": "Realization of the Semantic Web requires structuring of web data using domain ontologies. Most data-intensive websites are powered by relational databases whose design process involves developing a conceptual model using E/R or Extended E/R diagrams. This paper discusses the implementation details of a tool that builds domain ontologies in OWL (Ontology Web Language) from Extended E/R diagrams. Ontology development being a knowledge-intensive task, our tool would be helpful in reducing the development effort by automating the process. We bring out the differences and the similarities between the expressive capabilities of the two conceptual modeling methods, namely OWL and Extended E/R diagrams.", "title": "" }, { "docid": "b6f3dab3391a594712fdad3b31be2062", "text": "Social media has become a part of our daily life and we use it for many reasons. One of its uses is to get our questions answered. 
Given a multitude of social media sites, however, one immediate challenge is to pick the most relevant site for a question. This is a challenging problem because (1) questions are usually short, and (2) social media sites evolve. In this work, we propose to utilize topic specialization to find the most relevant social media site for a given question. In particular, semantic knowledge is considered for topic specialization as it can not only make a question more specific, but also dynamically represent the content of social sites, which relates a given question to a social media site. Thus, we propose to rank social media sites based on combined search engine query results. Our algorithm yields compelling results for providing a meaningful and consistent site recommendation. This work helps further understand the innate characteristics of major social media platforms for the design of social Q&A systems.", "title": "" }, { "docid": "31404322fb03246ba2efe451191e29fa", "text": "OBJECTIVES\nThe aim of this study is to report an unusual form of penile cancer presentation associated with myiasis infestation, treatment options and outcomes.\n\n\nMATERIALS AND METHODS\nWe studied 10 patients with suspected malignant neoplasm of the penis associated with genital myiasis infestation. Diagnostic assessment was conducted through clinical history, physical examination, penile biopsy, larvae identification and computerized tomography scan of the chest, abdomen and pelvis. Clinical and pathological staging was done according to 2002 TNM classification system. Radical inguinal lymphadenectomy was conducted according to the primary penile tumor pathology and clinical lymph nodes status.\n\n\nRESULTS\nPatients age ranged from 41 to 77 years (mean=62.4). All patients presented squamous cell carcinoma of the penis in association with myiasis infestation caused by Psychoda albipennis. Tumor size ranged from 4cm to 12cm (mean=5.3). Circumcision was conducted in 1 (10%) patient, while penile partial penectomy was performed in 5 (50%). Total penectomy was conducted in 2 (20%) patients, while emasculation was the treatment option for 2 (20%). All patients underwent radical inguinal lymphadenectomy. Prophylactic lymphadenectomy was performed on 3 (30%) patients, therapeutic on 5 (50%), and palliative lymphadenectomy on 2 (20%) patients. Time elapsed from primary tumor treatment to radical inguinal lymphadenectomy was 2 to 6 weeks. The mean follow-up was 34.3 months.\n\n\nCONCLUSION\nThe occurrence of myiasis in the genitalia is more common in patients with precarious hygienic practices and low socio-economic level. The treatment option varied according to the primary tumor presentation and clinical lymph node status.", "title": "" }, { "docid": "da4b970f53ec46a6d2e3ca03086e110d", "text": "In this communication, a novel filtering antenna is proposed by utilizing active frequency selective surface (FSS), which can simultaneously achieve filtering and beam steering function. The FSS unit is composed of a metallic rectangular ring and a patch, with a pair of microwave varactor diodes inserted in between along incident electric field polarization direction. Transmission phase of the emitted wave can be tuned by changing the bias voltage applied to the varactor diodes. Through different configurations of the bias voltages, we can obtain the gradient phase distribution of the emitted wave along E- and H-plane. 
This active FSS is then fabricated and utilized as a radome above a conventional horn antenna to demonstrate its ability of steering the beam radiated from the horn. The experimental results agree well with the simulated ones, which show that the horn antenna with the active FSS can realize beam steering in both E- and H-plane in a range of ±30° at 5.3 GHz with a bandwidth of 180 MHz.", "title": "" }, { "docid": "e37a93ff39840e1d6df589b415848a85", "text": "In this paper we propose a stacked generalization (or stacking) model for event extraction in bio-medical text. Event extraction deals with the process of extracting detailed biological phenomenon, which is more challenging compared to the traditional binary relation extraction such as protein-protein interaction. The overall process consists of mainly three steps: event trigger detection, argument extraction by edge detection and finding correct combination of arguments. In stacking, we use Linear Support Vector Classification (Linear SVC), Logistic Regression (LR) and Stochastic Gradient Descent (SGD) as base-level learning algorithms. As meta-level learner we use Linear SVC. In edge detection step, we find out the arguments of triggers detected in trigger detection step using a SVM classifier. To find correct combination of arguments, we use rules generated by studying the properties of bio-molecular event expressions, and form an event expression consisting of event trigger, its class and arguments. The output of trigger detection is fed to edge detection for argument extraction. Experiments on benchmark datasets of BioNLP2011 show the recall, precision and Fscore of 48.96%, 66.46% and 56.38%, respectively. Comparisons with the existing systems show that our proposed model attains state-of-the-art performance.", "title": "" }, { "docid": "81ec86a4e13c4a7fb7f0352ac08938ab", "text": "Although experimental studies support that men generally respond more to visual sexual stimuli than do women, there is substantial variability in this effect. One potential source of variability is the type of stimuli used that may not be of equal interest to both men and women whose preferences may be dependent upon the activities and situations depicted. The current study investigated whether men and women had preferences for certain types of stimuli. We measured the subjective evaluations and viewing times of 15 men and 30 women (15 using hormonal contraception) to sexually explicit photos. Heterosexual participants viewed 216 pictures that were controlled for the sexual activity depicted, gaze of the female actor, and the proportion of the image that the genital region occupied. Men and women did not differ in their overall interest in the stimuli, indicated by equal subjective ratings and viewing times, although there were preferences for specific types of pictures. Pictures of the opposite sex receiving oral sex were rated as least sexually attractive by all participants and they looked longer at pictures showing the female actor's body. Women rated pictures in which the female actor was looking indirectly at the camera as more attractive, while men did not discriminate by female gaze. Participants did not look as long at close-ups of genitals, and men and women on oral contraceptives rated genital images as less sexually attractive. 
Together, these data demonstrate sex-specific preferences for specific types of stimuli even when, across stimuli, overall interest was comparable.", "title": "" }, { "docid": "09d22e636e4651db27d6687d65a8de54", "text": "There is currently no standard or widely accepted subset of features to effectively classify different emotions based on electroencephalogram (EEG) signals. While combining all possible EEG features may improve the classification performance, it can lead to high dimensionality and worse performance due to redundancy and inefficiency. To solve the high-dimensionality problem, this paper proposes a new framework to automatically search for the optimal subset of EEG features using evolutionary computation (EC) algorithms. The proposed framework has been extensively evaluated using two public datasets (MAHNOB, DEAP) and a new dataset acquired with a mobile EEG sensor. The results confirm that EC algorithms can effectively support feature selection to identify the best EEG features and the best channels to maximize performance over a four-quadrant emotion classification problem. These findings are significant for informing future development of EEG-based emotion classification because low-cost mobile EEG sensors with fewer electrodes are becoming popular for many new applications.", "title": "" }, { "docid": "de0c3f4d5cbad1ce78e324666937c232", "text": "We propose an unsupervised method for learning multi-stage hierarchies of sparse convolutional features. While sparse coding has become an in creasingly popular method for learning visual features, it is most often traine d at the patch level. Applying the resulting filters convolutionally results in h ig ly redundant codes because overlapping patches are encoded in isolation. By tr aining convolutionally over large image windows, our method reduces the redudancy b etween feature vectors at neighboring locations and improves the efficienc y of the overall representation. In addition to a linear decoder that reconstruct s the image from sparse features, our method trains an efficient feed-forward encod er that predicts quasisparse features from the input. While patch-based training r arely produces anything but oriented edge detectors, we show that convolution al training produces highly diverse filters, including center-surround filters, corner detectors, cross detectors, and oriented grating detectors. We show that using these filters in multistage convolutional network architecture improves perfor mance on a number of visual recognition and detection tasks.", "title": "" }, { "docid": "be3ffc29a165b37b47d3ea28285a86a1", "text": "(11.1) Here we describe a mathematical model in the field of cellular biology. It is a model for two similar cells which interact via diffusion past a membrane. Each cell by itself is inert or dead in the sense that the concentrations of its enzymes achieve a constant equilibrium. In interaction however, the cellular system pulses (or expressed perhaps over dramatically, becomes alive:) in the sense that the concentrations of the enzymes in each cell will oscillate indefinitely. Of course we are using an extremely simplified picture of actual cells. The model is an example of Turing's equations of cellular biology [1] which are described in the next section. I would like to thank H. 
Hartman for bringing this to my attention.", "title": "" },
{ "docid": "4b851545036c059cc1056411fe8bb96d", "text": "Candida infections are a major cause of fungal septicemia in neonates and are associated with marked morbidity and mortality. Despite the spectrum of antifungal drugs being dramatically extended during the last decade, invasive fungal infections remain a serious challenge for neonatologists. Amphotericin B and its lipid formulations are the drugs of choice for the treatment of systemic candidiasis in neonates. The combination of antifungal drugs with different sites of action, like caspofungin and amphotericin B, may improve antifungal efficacy. Severe congenital ichthyosis often leads to death within the neonatal period. Main causes of death are dehydration, electrolyte disturbances, and respiratory or systemic infections. We report the case of a preterm infant with severe congenital ichthyosis and sepsis caused by Candida albicans. The infection did not improve despite proper liposomal amphotericin B treatment. After addition of caspofungin, the baby recovered. To the best of our knowledge, a case of a preterm infant suffering from severe congenital ichthyosis and Candida albicans sepsis, who survived, has not been previously described.", "title": "" },
{ "docid": "b90d5b4a1ebf2dc2b93e2650db8515db", "text": "Most approaches for instance-aware semantic labeling traditionally focus on accuracy. Other aspects like runtime and memory footprint are arguably as important for realtime applications such as autonomous driving. Motivated by this observation and inspired by recent works that tackle multiple tasks with a single integrated architecture [13], [20], [22], in this paper we present a real-time efficient implementation based on ENet [18] that solves three autonomous driving related tasks at once: semantic scene segmentation, instance segmentation and monocular depth estimation. Our approach builds upon a branched ENet architecture with a shared encoder but different decoder branches for each of the three tasks. The presented method can run at 21 fps at a resolution of 1024x512 on the Cityscapes dataset without sacrificing accuracy compared to running each task separately.", "title": "" },
{ "docid": "1cfdb3a9d6da2e421991b4e5d526a83c", "text": "Scenario-based training exemplifies the learning-by-doing approach to human performance improvement. In this paper, we enumerate the advantages of incorporating automated scenario generation technologies into the traditional scenario development pipeline. An automated scenario generator is a system that creates training scenarios from scratch, augmenting human authoring to rapidly develop new scenarios, providing a richer diversity of tailored training opportunities, and delivering training scenarios on demand. We introduce a combinatorial optimization approach to scenario generation to deliver the requisite diversity and quality of scenarios while tailoring the scenarios to a particular learner's needs and abilities. 
We propose a set of evaluation metrics appropriate to scenario generation technologies and present preliminary evidence for the suitability of our approach compared to other scenario generation approaches.", "title": "" }, { "docid": "e6ac100eb695e089e22defcba01fae41", "text": "Recent advances in video super-resolution have shown that convolutional neural networks combined with motion compensation are able to merge information from multiple low-resolution (LR) frames to generate high-quality images. Current state-of-the-art methods process a batch of LR frames to generate a single high-resolution (HR) frame and run this scheme in a sliding window fashion over the entire video, effectively treating the problem as a large number of separate multi-frame super-resolution tasks. This approach has two main weaknesses: 1) Each input frame is processed and warped multiple times, increasing the computational cost, and 2) each output frame is estimated independently conditioned on the input frames, limiting the system's ability to produce temporally consistent results. In this work, we propose an end-to-end trainable frame-recurrent video super-resolution framework that uses the previously inferred HR estimate to super-resolve the subsequent frame. This naturally encourages temporally consistent results and reduces the computational cost by warping only one image in each step. Furthermore, due to its recurrent nature, the proposed method has the ability to assimilate a large number of previous frames without increased computational demands. Extensive evaluations and comparisons with previous methods validate the strengths of our approach and demonstrate that the proposed framework is able to significantly outperform the current state of the art.", "title": "" } ]
scidocsrr
0ba6e234f225c910164e4c06c4eea2ff
Effects of Teachers’ Experience and Training on Implementation of Information Communication Technology in Public Secondary Schools in Nyeri, Central District, Kenya
[ { "docid": "2ea626f0e1c4dfa3d5a23c80d8fbf70c", "text": "Although research studies in education show that use of technology can help student learning, its use is generally affected by certain barriers. In this paper, we first identify the general barriers typically faced by K-12 schools, both in the United States as well as other countries, when integrating technology into the curriculum for instructional purposes, namely: (a) resources, (b) institution, (c) subject culture, (d) attitudes and beliefs, (e) knowledge and skills, and (f) assessment. We then describe the strategies to overcome such barriers: (a) having a shared vision and technology integration plan, (b) overcoming the scarcity of resources, (c) changing attitudes and beliefs, (d) conducting professional development, and (e) reconsidering assessments. Finally, we identify several current knowledge gaps pertaining to the barriers and strategies of technology integration, and offer pertinent recommendations for future research.", "title": "" }, { "docid": "edbbf1491e552346d42d39ebf90fc9fc", "text": "The use of ICT in the classroom is very important for providing opportunities for students to learn to operate in an information age. Studying the obstacles to the use of ICT in education may assist educators to overcome these barriers and become successful technology adopters in the future. This paper provides a meta-analysis of the relevant literature that aims to present the perceived barriers to technology integration in science education. The findings indicate that teachers had a strong desire for to integrate ICT into education; but that, they encountered many barriers. The major barriers were lack of confidence, lack of competence, and lack of access to resources. Since confidence, competence and accessibility have been found to be the critical components of technology integration in schools, ICT resources including software and hardware, effective professional development, sufficient time, and technical support need to be provided to teachers. No one component in itself is sufficient to provide good teaching. However, the presence of all components increases the possibility of excellent integration of ICT in learning and teaching opportunities. Generally, this paper provides information and recommendation to those responsible for the integration of new technologies into science education.", "title": "" }, { "docid": "c29f8aeed7f7ccfe3687d300da310c25", "text": "Global investment in ICT to improve teaching and learning in schools have been initiated by many governments. Despite all these investments on ICT infrastructure, equipments and professional development to improve education in many countries, ICT adoption and integration in teaching and learning have been limited. This article reviews personal, institutional and technological factors that encourage teachers’ use of computer technology in teaching and learning processes. Also teacher-level, school-level and system-level factors that prevent teachers from ICT use are reviewed. These barriers include lack of teacher ICT skills; lack of teacher confidence; lack of pedagogical teacher training; l lack of suitable educational software; limited access to ICT; rigid structure of traditional education systems; restrictive curricula, etc. The article concluded that knowing the extent to which these barriers affect individuals and institutions may help in taking a decision on how to tackle them.", "title": "" } ]
[ { "docid": "76d297fe81d50d9efa170fb033f3e0df", "text": "In recent years, many companies have developed various distributed computation frameworks for processing machine learning (ML) jobs in clusters. Networking is a well-known bottleneck for ML systems and the cluster demands efficient scheduling for huge traffic (up to 1GB per flow) generated by ML jobs. Coflow has been proven an effective abstraction to schedule flows of such data-parallel applications. However, the implementation of coflow scheduling policy is constrained when coflow characteristics are unknown a prior, and when TCP congestion control misinterprets the congestion signal leading to low throughput. Fortunately, traffic patterns experienced by some ML jobs support to speculate the complete coflow characteristic with limited information. Hence this paper summarizes coflow from these ML jobs as self-similar coflow and proposes a decentralized self-similar coflow scheduler Cicada. Cicada assigns each coflow a probe flow to speculate its characteristics during the transportation and employs the Shortest Job First (SJF) to separate coflow into strict priority queues based on the speculation result. To achieve full bandwidth for throughput- sensitive ML jobs, and to guarantee the scheduling policy implementation, Cicada promotes the elastic transport-layer rate control that outperforms prior works. Large-scale simulations show that Cicada completes coflow 2.08x faster than the state-of-the-art schemes in the information-agnostic scenario.", "title": "" }, { "docid": "3564941b9e2bcbd43a464bd8a2385311", "text": "Adult patients seeking orthodontic treatment are increasingly motivated by esthetic considerations. The majority of these patients reject wearing labial fixed appliances and are looking instead to more esthetic treatment options, including lingual orthodontics and Invisalign appliances. Since Align Technology introduced the Invisalign appliance in 1999 in an extensive public campaign, the appliance has gained tremendous attention from adult patients and dental professionals. The transparency of the Invisalign appliance enhances its esthetic appeal for those adult patients who are averse to wearing conventional labial fixed orthodontic appliances. Although guidelines about the types of malocclusions that this technique can treat exist, few clinical studies have assessed the effectiveness of the appliance. A few recent studies have outlined some of the limitations associated with this technique that clinicians should recognize early before choosing treatment options.", "title": "" }, { "docid": "e3218926a5a32d2c44d5aea3171085e2", "text": "The present study sought to determine the effects of Mindful Sport Performance Enhancement (MSPE) on runners. Participants were 25 recreational long-distance runners openly assigned to either the 4-week intervention or to a waiting-list control group, which later received the same program. Results indicate that the MSPE group showed significantly more improvement in organizational demands (an aspect of perfectionism) compared with controls. Analyses of preto postworkshop change found a significant increase in state mindfulness and trait awareness and decreases in sport-related worries, personal standards perfectionism, and parental criticism. No improvements in actual running performance were found. 
Regression analyses revealed that higher ratings of expectations and credibility of the workshop were associated with lower postworkshop perfectionism, more years running predicted higher ratings of perfectionism, and more life stressors predicted lower levels of worry. Findings suggest that MSPE may be a useful mental training intervention for improving mindfulness, sport-anxiety related worry, and aspects of perfectionism in long-distance runners.", "title": "" }, { "docid": "fd39d5dcf6a9781929bdee2508fccd57", "text": "Twitter has become the de facto information sharing and communication platform. Given the factors that influence language on Twitter – size limitation as well as communication and content-sharing mechanisms – there is a continuing debate about the position of Twitter’s language in the spectrum of language on various established mediums. These include SMS and chat on the one hand (size limitations) and email (communication), blogs and newspapers (content sharing) on the other. To provide a way of determining this, we propose a computational framework that offers insights into the linguistic style of all these mediums. Our framework consists of two parts. The first part builds upon a set of linguistic features to quantify the language of a given medium. The second part introduces a flexible factorization framework, SOCLIN, which conducts a psycholinguistic analysis of a given medium with the help of an external cognitive and affective knowledge base. Applying this analytical framework to various corpora from several major mediums, we gather statistics in order to compare the linguistics of Twitter with these other mediums via a quantitative comparative study. We present several key insights: (1) Twitter’s language is surprisingly more conservative, and less informal than SMS and online chat; (2) Twitter users appear to be developing linguistically unique styles; (3) Twitter’s usage of temporal references is similar to SMS and chat; and (4) Twitter has less variation of affect than other more formal mediums. The language of Twitter can thus be seen as a projection of a more formal register into a size-restricted space.", "title": "" }, { "docid": "fa2c3c8946ebb97e119ba25cab52ff5c", "text": "The digital era arrives with a whole set of disruptive technologies that creates both risk and opportunity for open sources analysis. Although the sheer quantity of online conversations makes social media a huge source of information, their analysis is still a challenging task and many of traditional methods and research methodologies for data mining are not fit for purpose. Social data mining revolves around subjective content analysis, which deals with the computational processing of texts conveying people's evaluations, beliefs, attitudes and emotions. Opinion mining and sentiment analysis are the main paradigm of social media exploration and both concepts are often interchangeable. This paper investigates the use of appraisal categories to explore data gleaned for social media, going beyond the limitations of traditional sentiment and opinion-oriented approaches. Categories of appraisal are grounded on cognitive foundations of the appraisal theory, according to which people's emotional response are based on their own evaluative judgments or appraisals of situations, events or objects. 
A formal model is developed to describe and explain the way language is used in the cyberspace to evaluate, express mood and subjective states, construct personal standpoints and manage interpersonal interactions and relationships. A general processing framework is implemented to illustrate how the model is used to analyze a collection of tweets related to extremist attitudes.", "title": "" }, { "docid": "7bda4b1ef78a70e651f74995b01c3c1e", "text": "Given a graph, how can we extract good features for the nodes? For example, given two large graphs from the same domain, how can we use information in one to do classification in the other (i.e., perform across-network classification or transfer learning on graphs)? Also, if one of the graphs is anonymized, how can we use information in one to de-anonymize the other? The key step in all such graph mining tasks is to find effective node features. We propose ReFeX (Recursive Feature eXtraction), a novel algorithm, that recursively combines local (node-based) features with neighborhood (egonet-based) features; and outputs regional features -- capturing \"behavioral\" information. We demonstrate how these powerful regional features can be used in within-network and across-network classification and de-anonymization tasks -- without relying on homophily, or the availability of class labels. The contributions of our work are as follows: (a) ReFeX is scalable and (b) it is effective, capturing regional (\"behavioral\") information in large graphs. We report experiments on real graphs from various domains with over 1M edges, where ReFeX outperforms its competitors on typical graph mining tasks like network classification and de-anonymization.", "title": "" }, { "docid": "4324a73e1d771e927632f3089cad3911", "text": "Generating polygonal maps from RGB-D data is an active field of research in robotic mapping. Kinect Fusion and related algorithms provide means to generate reconstructions of large environments. However, most available implementations generate topological artifacts like redundant vertices and triangles. In this paper we present a novel data structure that allows to generate topologically consistent triangle meshes from RGB-D data without additional filtering.", "title": "" }, { "docid": "cebdedb344f2ba7efb95c2933470e738", "text": "To address this shortcoming, we propose a method for training binary neural networks with a mixture of bits, yielding effectively fractional bitwidths. We demonstrate that our method is not only effective in allowing finer tuning of the speed to accuracy trade-off, but also has inherent representational advantages. Middle-Out Algorithm Heterogeneous Bitwidth Binarization in Convolutional Neural Networks", "title": "" }, { "docid": "a8cdb14a123f12788b5a8a8ca0f5f415", "text": "Medical image data is naturally distributed among clinical institutions. This partitioning, combined with security and privacy restrictions on medical data, imposes limitations on machine learning algorithms in clinical applications, especially for small and newly established institutions. We present InsuLearn: an intuitive and robust open-source (open-source code available at: https://github.com/ DistributedML/InsuLearn) platform designed to facilitate distributed learning (classification and regression) on medical image data, while preserving data security and privacy. InsuLearn is built on ensemble learning, in which statistical models are developed at each institution independently and combined at secure coordinator nodes. 
InsuLearn protocols are designed such that the liveness of the system is guaranteed as institutions join and leave the network. Coordination is implemented as a cluster of replicated state machines, making it tolerant to individual node failures. We demonstrate that InsuLearn successfully integrates accurate models for horizontally partitioned data while preserving privacy.", "title": "" }, { "docid": "32ddd232359cb59868f77da29f9dace8", "text": "There is limited information on the anthropometry, strength, endurance and flexibility of female rock climbers. The aim of this study was to compare these characteristics in three groups of females: Group 1 comprised 10 elite climbers aged 31.3 +/- 5.0 years (mean +/- s) who had led to a standard of 'hard very severe'; Group 2 consisted of 10 recreational climbers aged 24.1 +/- 4.0 years who had led to a standard of 'severe'; and Group 3 comprised 10 physically active individuals aged 28.5 +/- 5.0 years who had not previously rock-climbed. The tests included finger strength (grip strength, finger strength measured on climbing-specific apparatus), flexibility, bent arm hang and pull-ups. Regression procedures (analysis of covariance) were used to examine the influence of body mass, leg length, height and age. For finger strength, the elite climbers recorded significantly higher values (P < 0.05) than the recreational climbers and non-climbers (four fingers, right hand: elite 321 +/- 18 N, recreational 251 +/- 14 N, non-climbers 256 +/- 15 N; four fingers, left hand: elite 307 +/- 14 N, recreational 248 +/- 12 N, non-climbers 243 +/- 11 N). For grip strength of the right hand, the elite climbers recorded significantly higher values than the recreational climbers only (elite 338 +/- 12 N, recreational 289 +/- 10 N, non-climbers 307 +/- 11 N). The results suggest that elite climbers have greater finger strength than recreational climbers and non-climbers.", "title": "" }, { "docid": "3fbfe42f48e380c648647f77e0d9a5c1", "text": "Defining the rules governing synaptic connectivity is key to formulating theories of neural circuit function. Interneurons can be connected by both electrical and chemical synapses, but the organization and interaction of these two complementary microcircuits is unknown. By recording from multiple molecular layer interneurons in the cerebellar cortex, we reveal specific, nonrandom connectivity patterns in both GABAergic chemical and electrical interneuron networks. Both networks contain clustered motifs and show specific overlap between them. Chemical connections exhibit a preference for transitive patterns, such as feedforward triplet motifs. This structured connectivity is supported by a characteristic spatial organization: transitivity of chemical connectivity is directed vertically in the sagittal plane, and electrical synapses appear strictly confined to the sagittal plane. The specific, highly structured connectivity rules suggest that these motifs are essential for the function of the cerebellar network.", "title": "" }, { "docid": "2ce789863ff0d3359f741adddb09b9f1", "text": "The largest source of sound events is web videos. Most videos lack sound event labels at segment level, however, a significant number of them do respond to text queries, from a match found using metadata by search engines. In this paper we explore the extent to which a search query can be used as the true label for detection of sound events in videos. We present a framework for large-scale sound event recognition on web videos. 
The framework crawls videos using search queries corresponding to 78 sound event labels drawn from three datasets. The datasets are used to train three classifiers, and we obtain a prediction on 3.7 million web video segments. We evaluated performance using the search query as true label and compare it with human labeling. Both types of ground truth exhibited close performance, to within 10%, and similar performance trend with increasing number of evaluated segments. Hence, our experiments show potential for using search query as a preliminary true label for sound event recognition in web videos.", "title": "" }, { "docid": "dcb06055c5494384c27f0e76415023fd", "text": "Robotic exoskeleton systems are one of the highly active areas in recent robotic research. These systems have been developed significantly to be used for the human power augmentation, robotic rehabilitation, human power assist, and haptic interaction in virtual reality. Unlike the robots used in industry, the robotic exoskeleton systems should be designed with special consideration since they directly interact with human user. In the mechanical design of these systems, movable ranges, safety, comfort wearing, low inertia, and adaptability should be especially considered. Controllability, responsiveness, flexible and smooth motion generation, and safety should especially be considered in the controllers of exoskeleton systems. Furthermore, the controller should generate the motions in accordance with the human motion intention. This paper briefly reviews the upper extremity robotic exoskeleton systems. In the short review, it is focused to identify the brief history, basic concept, challenges, and future development of the robotic exoskeleton systems. Furthermore, key technologies of upper extremity exoskeleton systems are reviewed by taking state-of-the-art robot as examples.", "title": "" }, { "docid": "6936462dee2424b92c7476faed5b5a23", "text": "A significant challenge in scene text detection is the large variation in text sizes. In particular, small text are usually hard to detect. This paper presents an accurate oriented text detector based on Faster R-CNN. We observe that Faster R-CNN is suitable for general object detection but inadequate for scene text detection due to the large variation in text size. We apply feature fusion both in RPN and Fast R-CNN to alleviate this problem and furthermore, enhance model's ability to detect relatively small text. Our text detector achieves comparable results to those state of the art methods on ICDAR 2015 and MSRA-TD500, showing its advantage and applicability.", "title": "" }, { "docid": "2ab1f2d0ca28851dcc36721686a06fa2", "text": "A quarter-century ago visual neuroscientists had little information about the number and organization of retinotopic maps in human visual cortex. The advent of functional magnetic resonance imaging (MRI), a non-invasive, spatially-resolved technique for measuring brain activity, provided a wealth of data about human retinotopic maps. Just as there are differences amongst non-human primate maps, the human maps have their own unique properties. Many human maps can be measured reliably in individual subjects during experimental sessions lasting less than an hour. The efficiency of the measurements and the relatively large amplitude of functional MRI signals in visual cortex make it possible to develop quantitative models of functional responses within specific maps in individual subjects. 
During this last quarter-century, there has also been significant progress in measuring properties of the human brain at a range of length and time scales, including white matter pathways, macroscopic properties of gray and white matter, and cellular and molecular tissue properties. We hope the next 25years will see a great deal of work that aims to integrate these data by modeling the network of visual signals. We do not know what such theories will look like, but the characterization of human retinotopic maps from the last 25years is likely to be an important part of future ideas about visual computations.", "title": "" }, { "docid": "6bc611936d412dde15999b2eb179c9e2", "text": "Smith-Lemli-Opitz syndrome, a severe developmental disorder associated with multiple congenital anomalies, is caused by a defect of cholesterol biosynthesis. Low cholesterol and high concentrations of its direct precursor, 7-dehydrocholesterol, in plasma and tissues are the diagnostic biochemical hallmarks of the syndrome. The plasma sterol concentrations correlate with severity and disease outcome. Mutations in the DHCR7 gene lead to deficient activity of 7-dehydrocholesterol reductase (DHCR7), the final enzyme of the cholesterol biosynthetic pathway. The human DHCR7 gene is localised on chromosome 11q13 and its structure has been characterized. Ninetyone different mutations in the DHCR7 gene have been published to date. This paper is a review of the clinical, biochemical and molecular genetic aspects.", "title": "" }, { "docid": "a117e006785ab63ef391d882a097593f", "text": "An increasing interest in understanding human perception in social media has led to the study of the processes of personality self-presentation and impression formation based on user profiles and text blogs. However, despite the popularity of online video, we do not know of any attempt to study personality impressions that go beyond the use of text and still photos. In this paper, we analyze one facet of YouTube as a repository of brief behavioral slices in the form of personal conversational vlogs, which are a unique medium for selfpresentation and interpersonal perception. We investigate the use of nonverbal cues as descriptors of vloggers’ behavior and find significant associations between automatically extracted nonverbal cues for several personality judgments. As one notable result, audio and visual cues together can be used to predict 34% of the variance of the Extraversion trait of the Big Five model. In addition, we explore the associations between vloggers’ personality scores and the level of social attention that their videos received in YouTube. Our study is conducted on a dataset of 442 YouTube vlogs and 2,210 annotations collected using Amazon’s Mechanical Turk.", "title": "" }, { "docid": "4630ade03760cb8ec1da11b16703b3f1", "text": "Dengue infection is a major cause of morbidity and mortality in Malaysia. To date, much research on dengue infection conducted in Malaysia have been published. One hundred and sixty six articles related to dengue in Malaysia were found from a search through a database dedicated to indexing all original data relevant to medicine published between the years 2000-2013. Ninety articles with clinical relevance and future research implications were selected and reviewed. These papers showed evidence of an exponential increase in the disease epidemic and a varying pattern of prevalent dengue serotypes at different times. The early febrile phase of dengue infection consist of an undifferentiated fever. 
Clinical suspicion and ability to identify patients at risk of severe dengue infection is important. Treatment of dengue infection involves judicious use of volume expander and supportive care. Potential future research areas are discussed to narrow our current knowledge gaps on dengue infection.", "title": "" }, { "docid": "2b89021776b9c2be56a624ea401be99e", "text": "Massive open online courses (MOOCs) are now being used across the world to provide millions of learners with access to education. Many learners complete these courses successfully, or to their own satisfaction, but the high numbers who do not finish remain a subject of concern for platform providers and educators. In 2013, a team from Stanford University analysed engagement patterns on three MOOCs run on the Coursera platform. They found four distinct patterns of engagement that emerged from MOOCs based on videos and assessments. However, not all platforms take this approach to learning design. Courses on the FutureLearn platform are underpinned by a social-constructivist pedagogy, which includes discussion as an important element. In this paper, we analyse engagement patterns on four FutureLearn MOOCs and find that only two clusters identified previously apply in this case. Instead, we see seven distinct patterns of engagement: Samplers, Strong Starters, Returners, Mid-way Dropouts, Nearly There, Late Completers and Keen Completers. This suggests that patterns of engagement in these massive learning environments are influenced by decisions about pedagogy. We also make some observations about approaches to clustering in this context.", "title": "" }, { "docid": "a86bc96645722e4c3f555700e99a1352", "text": "Consumer studies demonstrate that online users value personalized content. At the same time, providing personalization on websites seems quite profitable for web vendors. This win-win situation is however marred by privacy concerns since personalizing people's interaction entails gathering considerable amounts of data about them. As numerous recent surveys have consistently demonstrated, computer users are very concerned about their privacy on the Internet. Moreover, the collection of personal data is also subject to legal regulations in many countries and states. Both user concerns and privacy regulations impact frequently used personalization methods. This article analyzes the tension between personalization and privacy, and presents approaches to reconcile the both.", "title": "" } ]
scidocsrr
6494be1de979c50f4f013401310f86a0
Dependency Exploitation: A Unified CNN-RNN Approach for Visual Emotion Recognition
[ { "docid": "3ac89f0f4573510942996ae66ef8184c", "text": "Deep convolutional neural networks have been successfully applied to image classification tasks. When these same networks have been applied to image retrieval, the assumption has been made that the last layers would give the best performance, as they do in classification. We show that for instance-level image retrieval, lower layers often perform better than the last layers in convolutional neural networks. We present an approach for extracting convolutional features from different layers of the networks, and adopt VLAD encoding to encode features into a single vector for each image. We investigate the effect of different layers and scales of input images on the performance of convolutional features using the recent deep networks OxfordNet and GoogLeNet. Experiments demonstrate that intermediate layers or higher layers with finer scales produce better results for image retrieval, compared to the last layer. When using compressed 128-D VLAD descriptors, our method obtains state-of-the-art results and outperforms other VLAD and CNN based approaches on two out of three test datasets. Our work provides guidance for transferring deep networks trained on image classification to image retrieval tasks.", "title": "" }, { "docid": "840555a134e7606f1f3caa24786c6550", "text": "Psychological research results have confirmed that people can have different emotional reactions to different visual stimuli. Several papers have been published on the problem of visual emotion analysis. In particular, attempts have been made to analyze and predict people’s emotional reaction towards images. To this end, different kinds of hand-tuned features are proposed. The results reported on several carefully selected and labeled small image data sets have confirmed the promise of such features. While the recent successes of many computer vision related tasks are due to the adoption of Convolutional Neural Networks (CNNs), visual emotion analysis has not achieved the same level of success. This may be primarily due to the unavailability of confidently labeled and relatively large image data sets for visual emotion analysis. In this work, we introduce a new data set, which started from 3+ million weakly labeled images of different emotions and ended up 30 times as large as the current largest publicly available visual emotion data set. We hope that this data set encourages further research on visual emotion analysis. We also perform extensive benchmarking analyses on this large data set using the state of the art methods including CNNs.", "title": "" }, { "docid": "fb87648c3bb77b1d9b162a8e9dbc5e86", "text": "With the success of new computational architectures for visual processing, such as convolutional neural networks (CNN) and access to image databases with millions of labeled examples (e.g., ImageNet, Places), the state of the art in computer vision is advancing rapidly. One important factor for continued progress is to understand the representations that are learned by the inner layers of these deep architectures. Here we show that object detectors emerge from training CNNs to perform scene classification. As scenes are composed of objects, the CNN for scene classification automatically discovers meaningful objects detectors, representative of the learned scene categories. 
With object detectors emerging as a result of learning to recognize scenes, our work demonstrates that the same network can perform both scene recognition and object localization in a single forward-pass, without ever having been explicitly taught the notion of objects.", "title": "" },
{ "docid": "c8a9919a2df2cfd730816cd0171f08dd", "text": "In this paper, we propose a new deep network that learns multi-level deep representations for image emotion classification (MldrNet). Image emotion can be recognized through image semantics, image aesthetics and low-level visual features from both global and local views. Existing image emotion classification works using hand-crafted features or deep features mainly focus on either low-level visual features or semantic-level image representations without taking all factors into consideration. Our proposed MldrNet unifies deep representations of three levels, i.e. image semantics, image aesthetics and low-level visual features through multiple instance learning (MIL) in order to effectively cope with noisy labeled data, such as images collected from the Internet. Extensive experiments on both Internet images and abstract paintings demonstrate the proposed method outperforms the state-of-the-art methods using deep features or hand-crafted features. The proposed approach also outperforms the state-of-the-art methods with at least 6% performance improvement in terms of overall classification accuracy.", "title": "" } ]
[ { "docid": "721a64c9a5523ba836318edcdb8de021", "text": "Highly-produced audio stories often include musical scores that reflect the emotions of the speech. Yet, creating effective musical scores requires deep expertise in sound production and is time-consuming even for experts. We present a system and algorithm for re-sequencing music tracks to generate emotionally relevant music scores for audio stories. The user provides a speech track and music tracks and our system gathers emotion labels on the speech through hand-labeling, crowdsourcing, and automatic methods. We develop a constraint-based dynamic programming algorithm that uses these emotion labels to generate emotionally relevant musical scores. We demonstrate the effectiveness of our algorithm by generating 20 musical scores for audio stories and showing that crowd workers rank their overall quality significantly higher than stories without music.", "title": "" }, { "docid": "2ab2280b7821ae6ad27fff995fd36fe0", "text": "Recent years have seen the development of a satellite communication system called a high-throughput satellite (HTS), which enables large-capacity communication to cope with various communication demands. Current HTSs have a fixed allocation of communication resources and cannot flexibly change this allocation during operation. Thus, effectively allocating communication resources for communication demands with a bias is not possible. Therefore, technology is being developed to add flexibility to satellite communication systems, but there is no system analysis model available to quantitatively evaluate the flexibility performance. In this study, we constructed a system analysis model to quantitatively evaluate the flexibility of a satellite communication system and used it to analyze a satellite communication system equipped with a digital channelizer.", "title": "" }, { "docid": "cbde86d9b73371332a924392ae1f10d0", "text": "The difficulty to solve multiple objective combinatorial optimization problems with traditional techniques has urged researchers to look for alternative, better performing approaches for them. Recently, several algorithms have been proposed which are based on the Ant Colony Optimization metaheuristic. In this contribution, the existing algorithms of this kind are reviewed and experimentally tested in several instances of the bi-objective traveling salesman problem, comparing their performance with that of two well-known multi-objective genetic algorithms.", "title": "" }, { "docid": "ce26ed1e21efb97903ecb09bb58c938c", "text": "Since its development, ingestible wireless endoscopy is considered to be a painless diagnostic method to detect a number of diseases inside GI tract. Medical related engineering companies have made significant improvements in this technology in last decade; however, some major limitations still residue. Localization of the next generation steerable endoscopic capsule robot in six degreeof-freedom (DoF) and active motion control are some of these limitations. The significance of localization capability concerns with the doctors correct diagnosis of the disease area. This paper presents a very robust 6-DoF localization method based on supervised training of an architecture consisting of recurrent networks (RNN) embedded into a convolutional neural network (CNN) to make use of both just-in-moment information obtained by CNN and correlative information across frames obtained by RNN. 
To our knowledge, our idea of embedding RNNs into a CNN architecture is for the first time proposed in literature. The experimental results show that the proposed RNN-in-CNN architecture performs very well for endoscopic capsule robot localization in cases vignetting, reflection distortions, noise, sudden camera movements and lack of distinguishable features.", "title": "" }, { "docid": "808960323ac755bd8755cb365d63efa0", "text": "We present a method that is suitable for clustering of vehicle trajectories obtained by an automated vision system. We combine ideas from two spectral clustering methods and propose a trajectory-similarity measure based on the Hausdorff distance, with modifications to improve its robustness and account for the fact that trajectories are ordered collections of points. We compare the proposed method with two well-known trajectory-clustering methods on a few real-world data sets.", "title": "" }, { "docid": "05f25a2de55907773c9ff13b8a2fe5f6", "text": "Deep learning (DL) creates impactful advances following a virtuous recipe: model architecture search, creating large training data sets, and scaling computation. It is widely believed that growing training sets and models should improve accuracy and result in better products. As DL application domains grow, we would like a deeper understanding of the relationships between training set size, computational scale, and model accuracy improvements to advance the state-of-the-art. This paper presents a large scale empirical characterization of generalization error and model size growth as training sets grow. We introduce a methodology for this measurement and test four machine learning domains: machine translation, language modeling, image processing, and speech recognition. Our empirical results show power-law generalization error scaling across a breadth of factors, resulting in power-law exponents—the \"steepness\" of the learning curve—yet to be explained by theoretical work. Further, model improvements only shift the error but do not appear to affect the power-law exponent. We also show that model size scales sublinearly with data size. These scaling relationships have significant implications on deep learning research, practice, and systems. They can assist model debugging, setting accuracy targets, and decisions about data set growth. They can also guide computing system design and underscore the importance of continued computational scaling.", "title": "" }, { "docid": "0190bdc5eafae72620f7fabbcdcc223c", "text": "Breast cancer is regarded as one of the most frequent mortality causes among women. As early detection of breast cancer increases the survival chance, creation of a system to diagnose suspicious masses in mammograms is important. In this paper, two automated methods are presented to diagnose mass types of benign and malignant in mammograms. In the first proposed method, segmentation is done using an automated region growing whose threshold is obtained by a trained artificial neural network (ANN). In the second proposed method, segmentation is performed by a cellular neural network (CNN) whose parameters are determined by a genetic algorithm (GA). Intensity, textural, and shape features are extracted from segmented tumors. GA is used to select appropriate features from the set of extracted features. In the next stage, ANNs are used to classify the mammograms as benign or malignant. 
To evaluate the performance of the proposed methods different classifiers (such as random forest, naïve Bayes, SVM, and KNN) are used. Results of the proposed techniques performed on MIAS and DDSM databases are promising. The obtained sensitivity, specificity, and accuracy rates are 96.87%, 95.94%, and 96.47%, respectively. 2014 Published by Elsevier Ltd.", "title": "" }, { "docid": "71dd012b54ae081933bddaa60612240e", "text": "This paper analyzes & compares four adders with different logic styles (Conventional, transmission gate, 14 transistors & GDI based technique) for transistor count, power dissipation, delay and power delay product. It is performed in virtuoso platform, using Cadence tool with available GPDK - 90nm kit. The width of NMOS and PMOS is set at 120nm and 240nm respectively. Transmission gate full adder has sheer advantage of high speed but consumes more power. GDI full adder gives reduced voltage swing not being able to pass logic 1 and logic 0 completely showing degraded output. Transmission gate full adder shows better performance in terms of delay (0.417530 ns), whereas 14T full adder shows better performance in terms of all three aspects.", "title": "" }, { "docid": "6b5a7e58a8407fa5cda402d4996a3a10", "text": "In the last few years, Hadoop become a \"de facto\" standard to process large scale data as an open source distributed system. With combination of data mining techniques, Hadoop improve data analysis utility. That is why, there are amount of research is studied to apply data mining technique to mapreduce framework in Hadoop. However, data mining have a possibility to cause a privacy violation and this threat is a huge obstacle for data mining using Hadoop. To solve this problem, numerous studies have been conducted. However, existing studies were insufficient and had several drawbacks. In this paper, we propose the privacy preserving data mining technique in Hadoop that is solve privacy violation without utility degradation. We focus on association rule mining algorithm that is representative data mining algorithm. We validate the proposed technique to satisfy performance and preserve data privacy through the experimental results.", "title": "" }, { "docid": "aaff9bc2844f2631e11944e049190ba4", "text": "There has been little work on examining how deep neural networks may be adapted to speakers for improved speech recognition accuracy. Past work has examined using a discriminatively trained affine transformation of the input features applied at a frame level or the re-training of the entire shallow network for a specific speaker. This work explores how deep neural networks may be adapted to speakers by re-training the input layer, the output layer or the entire network. We look at how L2 regularization using weight decay to the speaker independent model improves generalization. Other training factors are examined including the role momentum plays and stochastic mini-batch versus batch training. While improvements are significant for smaller networks, the largest show little gain from adaptation on a large vocabulary mobile speech recognition task.", "title": "" }, { "docid": "af928cd35b6b33ce1cddbf566f63e607", "text": "Machine Learning has been the quintessential solution for many AI problems, but learning is still heavily dependent on the specific training data. Some learning models can be incorporated with a prior knowledge in the Bayesian set up, but these learning models do not have the ability to access any organised world knowledge on demand. 
In this work, we propose to enhance learning models with world knowledge in the form of Knowledge Graph (KG) fact triples for Natural Language Processing (NLP) tasks. Our aim is to develop a deep learning model that can extract relevant prior support facts from knowledge graphs depending on the task using attention mechanism. We introduce a convolution-based model for learning representations of knowledge graph entity and relation clusters in order to reduce the attention space. We show that the proposed method is highly scalable to the amount of prior information that has to be processed and can be applied to any generic NLP task. Using this method we show significant improvement in performance for text classification with News20, DBPedia datasets and natural language inference with Stanford Natural Language Inference (SNLI) dataset. We also demonstrate that a deep learning model can be trained well with substantially less amount of labeled training data, when it has access to organised world knowledge in the form of knowledge graph.", "title": "" }, { "docid": "d9ff0ec2c3d3a2f8f271f9378f7310c2", "text": "In this paper, we propose a recurrent neural network (RNN) with residual attention (RRA) to learn long-range dependencies from sequential data. We propose to add residual connections across timesteps to RNN, which explicitly enhances the interaction between current state and hidden states that are several timesteps apart. This also allows training errors to be directly back-propagated through residual connections and effectively alleviates the gradient vanishing problem. We further reformulate an attention mechanism over residual connections. An attention gate is defined to summarize the individual contribution from multiple previous hidden states in computing the current state. We evaluate RRA on three tasks: the adding problem, pixel-by-pixel MNIST classification and sentiment analysis on the IMDB dataset. Our experiments demonstrate that RRA yields better performance, faster convergence and more stable training compared to a standard LSTM network. Furthermore, RRA shows highly competitive performance to the state-of-the-art methods.", "title": "" }, { "docid": "066eef8e511fac1f842c699f8efccd6b", "text": "In this paper, we propose a new model that is capable of recognizing overlapping mentions. We introduce a novel notion of mention separators that can be effectively used to capture how mentions overlap with one another. On top of a novel multigraph representation that we introduce, we show that efficient and exact inference can still be performed. We present some theoretical analysis on the differences between our model and a recently proposed model for recognizing overlapping mentions, and discuss the possible implications of the differences. Through extensive empirical analysis on standard datasets, we demonstrate the effectiveness of our approach.", "title": "" }, { "docid": "901ff68d346e67b812fa03a66d64f9c2", "text": "A typed lambda calculus with categorical type constructors is introduced. It has a uniform category theoretic mechanism to declare new types. Its type structure includes categorical objects like products and coproducts as well as recursive types like natural numbers and lists. It also allows duals of recursive types, i.e. lazy types, like infinite lists. It has generalized iterators for recursive types and duals of iterators for lazy types. 
We will give reduction rules for this simply typed lambda calculus and show that they are strongly normalizing even though it has infinite things like infinite lists.", "title": "" }, { "docid": "b32218abeff9a34c3e89eac76b8c6a45", "text": "The reliability and availability of distributed services can be ensured using replication. We present an architecture and an algorithm for Byzantine fault-tolerant state machine replication. We explore the benefits of virtualization to reliably detect and tolerate faulty replicas, allowing the transformation of Byzantine faults into omission faults. Our approach reduces the total number of physical replicas from 3f+1 to 2f+1. It is based on the concept of twin virtual machines, which involves having two virtual machines in each physical host, each one acting as failure detector of the other.", "title": "" }, { "docid": "815dedf9a4bd8c2a5955224078ec63a4", "text": "Developing personalized Web-based learning systems has been an important research issue in the e-learning field because no fixed learning pathway will be appropriate for all learners. However, the current most Web-based learning platforms with personalized curriculum sequencing tend to emphasize the learnerspsila preferences and interests for the personalized learning services, but they fail to consider difficulty levels of course materials, learning order of prior and posterior knowledge, and learnerspsila abilities while constructing a personalized learning path. As a result, these ignored factors easily lead to generating poor quality learning paths. Generally, learners could generate cognitive overload or fall into cognitive disorientation due to inappropriate curriculum sequencing during learning processes, thus reducing learning effect. With advancement of the artificial intelligence technologies, ontology technologies enable a linguistic infrastructure to represent concept relationships between courseware. Ontology can be served as a structured knowledge representation scheme, which can assist the construction of personalized learning path. Therefore, this study proposes a novel genetic-based curriculum sequencing scheme based on a generated ontology-based concept map, which can be automatically constructed by a large amount of learnerspsila pre-test results, to plan appropriate learning paths for individual learners. The experimental results indicated that the proposed approach is indeed capable of creating learning paths with high quality for individual learners. This will be helpful to learners to learn more effectively and to likely reduce learnerspsila cognitive overloads during learning processes.", "title": "" }, { "docid": "e14cd8d955d80591f905b3858c9b5d09", "text": "With the advent of the Internet of Things (IoT), security has emerged as a major design goal for smart connected devices. This explosion in connectivity created a larger attack surface area. Software-based approaches have been applied for security purposes; however, these methods must be extended with security-oriented technologies that promote hardware as the root of trust. The ARM TrustZone can enable trusted execution environments (TEEs), but existing solutions disregard real-time needs. 
Here, the authors demonstrate why TrustZone is becoming a reference technology for securing IoT edge devices, and how enhanced TEEs can help meet industrial IoT applications real-time requirements.", "title": "" }, { "docid": "4ea68c4cb9250418853084b60a45582b", "text": "Facial reconstruction is employed in the context of forensic investigation and for creating three-dimensional portraits of people from the past, from ancient Egyptian mummies and bog bodies to digital animations of J. S. Bach. This paper considers a facial reconstruction method (commonly known as the Manchester method) associated with the depiction and identification of the deceased from skeletal remains. Issues of artistic licence and scientific rigour, in relation to soft tissue reconstruction, anatomical variation and skeletal assessment, are discussed. The need for artistic interpretation is greatest where only skeletal material is available, particularly for the morphology of the ears and mouth, and with the skin for an ageing adult. The greatest accuracy is possible when information is available from preserved soft tissue, from a portrait, or from a pathological condition or healed injury.", "title": "" }, { "docid": "0eb5edb9d225c2efc8710d07fc602cdf", "text": "Recently, Electric Vehicles (EVs) have required high power density and high efficiency systems in order to save energy and costs. Specifically, in the DC-DC converter that feeds the non-propulsive loads in these vehicles, where the output voltage is much lower than the one of the energy storage unit. Therefore, the output current becomes quite high, and the efficiency and power density are reduced due to the high current ratings. Furthermore, magnetic components usually are the biggest contributors to the mass and volume in these converters. This paper proposes a Three-phase LLC resonant converter with one integrated transformer where all the windings of the three independent transformers are installed into only one core. Using this technique, a high reduction in the core size and thereby an increment in the power density and a reduction of the production cost are obtained. In addition, this integrated transformer is intended to be applied in the novel Three-phase LLC resonant converter with Star connection that is expected to offer reduction of the imbalanced output current, which is produced by tolerances between the phase components. Finally, the proposed converter with the novel integrated transformer is discussed and evaluated from the experimental point of view. As a result, a 70% reduction in the mass of the magnetic cores was achieved.", "title": "" } ]
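As an aside to the three-phase LLC resonant converter passage above, the short sketch below computes the series resonant frequency of a single LLC tank and its voltage gain under the commonly used first-harmonic approximation (FHA). The component values and reflected load resistance are illustrative assumptions, not figures from the paper, and the FHA expression is the standard textbook form rather than the authors' own model.

```python
# Hedged sketch: generic LLC tank quantities, not the paper's design values.
import math

Lr, Cr, Lm = 12e-6, 100e-9, 60e-6   # series inductance, series capacitance, magnetising inductance (assumed)
Rac = 20.0                           # reflected AC-equivalent load resistance (assumed)

fr = 1 / (2 * math.pi * math.sqrt(Lr * Cr))   # series resonant frequency
Ln = Lm / Lr                                   # inductance ratio
Q = math.sqrt(Lr / Cr) / Rac                   # quality factor of the loaded tank

def fha_gain(fs):
    # First-harmonic-approximation voltage gain magnitude at switching frequency fs.
    fn = fs / fr
    real = 1 + 1 / Ln - 1 / (Ln * fn ** 2)
    imag = Q * (fn - 1 / fn)
    return 1 / math.hypot(real, imag)

# At the series resonant frequency the gain is unity, the well-known LLC operating point.
print(f"fr = {fr / 1e3:.1f} kHz, gain at fr = {fha_gain(fr):.2f}")
```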
scidocsrr
096b930e363950bec307262390c8cef4
Implicit Discourse Relation Recognition with Context-aware Character-enhanced Embeddings
[ { "docid": "56d0609fe4e68abbce27124dd5291033", "text": "Existing works indicate that the absence of explicit discourse connectives makes it difficult to recognize implicit discourse relations. In this paper, we attempt to overcome this difficulty for implicit relation recognition by automatically inserting discourse connectives between arguments with the use of a language model. Then we propose two algorithms to leverage the information of these predicted connectives. One is to use these predicted implicit connectives as additional features in a supervised model. The other is to perform implicit relation recognition based only on these predicted connectives. Results on the Penn Discourse Treebank 2.0 show that predicted discourse connectives help implicit relation recognition and that the first algorithm can achieve an absolute average f-score improvement of 3% over a state-of-the-art baseline system.", "title": "" } ]
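A minimal sketch of the connective-prediction idea in the passage above: candidate connectives are inserted between the two arguments, each completed sentence is scored with a language model, and the best-scoring connective is exposed as an extra feature for a supervised relation classifier. The candidate list and the toy scorer here are stand-ins (the paper relies on a real language model), so this only illustrates the control flow, not the actual system.

```python
# Hypothetical sketch of the approach summarised above; the scorer is a placeholder.
from collections import Counter

CANDIDATE_CONNECTIVES = ["because", "however", "then", "although", "for example"]

# Stand-in for a trained language model: unigram counts from a tiny reference text.
REFERENCE_TEXT = "it rained heavily because the storm arrived although forecasts said otherwise"
UNIGRAMS = Counter(REFERENCE_TEXT.split())

def lm_score(sentence):
    # Placeholder score: sum of unigram counts. A real system would return the
    # log-probability of the sentence under a trained language model.
    return sum(UNIGRAMS[w] for w in sentence.lower().split())

def predict_connective(arg1, arg2):
    # Insert each candidate connective between the arguments and keep the best-scoring one.
    candidates = [(lm_score(f"{arg1} {c} {arg2}"), c) for c in CANDIDATE_CONNECTIVES]
    return max(candidates)[1]

def relation_features(arg1, arg2):
    # First algorithm: the predicted connective becomes an additional classifier feature.
    # Second algorithm: the predicted connective alone would be mapped to a relation.
    return {"predicted_connective": predict_connective(arg1, arg2)}

print(relation_features("The match was cancelled", "it rained heavily"))
```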
[ { "docid": "357bf4403684149577f7110810046a94", "text": "With the development of high-speed integrated circuit, the Ultra Wideband (UWB) communication system has been developed toward the direction of miniaturization, integration, which will necessarily promote UWB antenna also be developed toward the direction of miniaturization, integration, etc. This paper proposes an improved structure of Vivaldi antenna, which loads resistance at the bottom of exponential type antenna to improve Voltage Standing Wave Ratio (VSWR) at low frequency, and which opens three symmetrical unequal rectangular slots in the antenna radiation part to increase the gain. The improved Vivaldi antenna size is 150 mm * 150 mm, and the working frequency is 0.8-3.8 GHz (measured VSWR<;2).The experimental results show that the antenna has good directional radiation characteristics within the scope of bandwidth.", "title": "" }, { "docid": "9b2b04acbbf5c847885c37c448fb99c8", "text": "We address the problem of substring searchable encryption. A single user produces a big stream of data and later on wants to learn the positions in the string that some patterns occur. Although current techniques exploit auxiliary data structures to achieve efficient substring search on the server side, the cost at the user side may be prohibitive. We revisit the work of substring searchable encryption in order to reduce the storage cost of auxiliary data structures. Our solution entails a suffix array based index design, which allows optimal storage cost $O(n)$ with small hidden factor at the size of the string n. Moreover, we implemented our scheme and the state of the art protocol \\citeChase to demonstrate the performance advantage of our solution with precise benchmark results.", "title": "" }, { "docid": "885938f7aec53d020bd4948c8a0bd233", "text": "Eighty-five samples from fifteen different legume seed lines generally available in the UK were examined by measurements of their net protein utilization by rats and by haemagglutination tests with erythrocytes from a number of different animal species. From these results the seeds were classified into four broad groups. Group a seeds from most varieties of kidney (Phaseolus vulgaris), runner (Phaseolus coccineus) and tepary (Phaseolus acutifolius) beans showed high reactivity with all cell types and were also highly toxic. Group b, which contained seeds from lima or butter beans (Phaseolus lunatus) and winged bean (Psophocarpus tetragonolobus), agglutinated only human and pronase-treated rat erythrocytes. These seeds did not support proper growth of the rats although the animals survived the 10 d experimental period. Group c consisted of seeds from lentils (Lens culinaris), peas (Pisum sativum), chick-peas (Cicer arietinum), blackeyed peas (Vigna sinensis), pigeon peas (Cajanus cajan), mung beans (Phaseolus aureus), field or broad beans (Vicia faba) and aduki beans (Phaseolus angularis). These generally had low reactivity with all cells and were non-toxic. Group d, represented by soya (Glycine max) and pinto (Phaseolus vulgaris) beans, generally had low reactivity with all cells but caused growth depression at certain dietary concentrations. This growth depression was probably mainly due to antinutritional factors other than lectins. Lectins from group a seeds showed many structural and immunological similarities. 
However the subunit composition of the lectin from the tepary bean samples was different from that of the other bean lectins in this or any other groups.", "title": "" }, { "docid": "f3b40a0e11847afa19f1109b7532264b", "text": "In his 2012 book How to Create a Mind, Ray Kurzweil defines a \"Pattern Recognition Theory of Mind\" that states that the brain uses millions of pattern recognizers, plus modules to check, organize, and augment them. In this article, I further the theory to go beyond pattern recognition and include also pattern activation, thus encompassing both sensory and motor functions. In addition, I treat checking, organizing, and augmentation as patterns of patterns instead of separate modules, therefore handling them the same as patterns in general. Henceforth I put forward a unified theory I call \"Pattern Activation/Recognition Theory of Mind.\" While the original theory was based on hierarchical hidden Markov models, this evolution is based on their precursor: stochastic grammars. I demonstrate that a class of self-describing stochastic grammars allows for unifying pattern activation, recognition, organization, consistency checking, metaphor, and learning, into a single theory that expresses patterns throughout. I have implemented the model as a probabilistic programming language specialized in activation/recognition grammatical and neural operations. I use this prototype to compute and present diagrams for each stochastic grammar and corresponding neural circuit. I then discuss the theory as it relates to artificial network developments, common coding, neural reuse, and unity of mind, concluding by proposing potential paths to validation.", "title": "" }, { "docid": "4d56e63ea8d3b4325aa5c7f9baa9eaeb", "text": "In this paper, the concepts of input/output-to-state stability (IOSS) and state-norm estimators are considered for switched nonlinear systems under average dwell-time switching signals. We show that when the average dwell-time is large enough, a switched system is IOSS if all of its constituent subsystems are IOSS. Moreover, under the same conditions, a non-switched state-norm estimator exists for the switched system. Furthermore, if some of the constituent subsystems are not IOSS, we show that still IOSS can be established for the switched system, if the activation time of the non-IOSS subsystems is not too big. Again, under the same conditions, a state-norm estimator exists for the switched system. However, in this case, the state-norm estimator is a switched system itself, consisting of two subsystems. We show that this state-norm estimator can be constructed such that its switching times are independent of the switching times of the switched system it is designed for. © 2012 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "fb1f467ab11bb4c01a9e410bf84ac258", "text": "The modular arrangement of the neocortex is based on the cell minicolumn: a self-contained ecosystem of neurons and their afferent, efferent, and interneuronal connections. The authors' preliminary studies indicate that minicolumns in the brains of autistic patients are narrower, with an altered internal organization. More specifically, their minicolumns reveal less peripheral neuropil space and increased spacing among their constituent cells. The peripheral neuropil space of the minicolumn is the conduit, among other things, for inhibitory local circuit projections. A defect in these GABAergic fibers may correlate with the increased prevalence of seizures among autistic patients. 
This article expands on our initial findings by arguing for the specificity of GABAergic inhibition in the neocortex as being focused around its mini- and macrocolumnar organization. The authors conclude that GABAergic interneurons are vital to proper minicolumnar differentiation and signal processing (e.g., filtering capacity of the neocortex), thus providing a putative correlate to autistic symptomatology.", "title": "" }, { "docid": "a3608704922ca3303b8113783ed92bbe", "text": "Extracting and understanding community structure in complex networks is one of the most intensively investigated problems in recent years. In this paper we propose a genetic based approach to discover overlapping communities. The algorithm optimizes a fitness function able to identify densely connected groups of nodes by employing it on the line graph corresponding to the graph modelling the network. The method generates a division of the network in a number of groups in an unsupervised way. This number is automatically determined by the optimal value of the fitness function. Experiments on synthetic and real life networks show the capability of the method to successfully detect the network structure.", "title": "" }, { "docid": "5e503aaee94e2dc58f9311959d5a142e", "text": "The use of the fast Fourier transform in power spectrum analysis is described. Principal advantages of this method are a reduction in the number of computations and in required core storage, and convenient application in nonstationarity tests. The method involves sectioning the record and averaging modified periodograms of the sections. T INTRODLCTION HIS PAPER outlines a method for the application of the fast Fourier transform algorithm to the estimation of power spectra, which involves sectioning the record, taking modified periodograms of these sections, and averaging these modified periodo-grams. In many instances this method involves fewer computations than other methods. Moreover, it involves the transformation of sequences which are shorter than the whole record which is an advantage when computations are to be performed on a machine with limited core storage. Finally, it directly yields a potential resolution in the time dimension which is useful for testing and measuring nonstationarity. As will be pointed out, it is closely related to the method of complex demodulation described Let X(j), j= 0, N-1 be a sample from a stationary , second-order stochastic sequence. Assume for simplicity that E(X) 0. Let X(j) have spectral density Pcf), I f \\ 5%. We take segments, possibly overlapping, of length L with the starting points of these segments D units apart. Let X,(j),j=O, L 1 be the first such segment. Then Xdj) X($ and finally X&) X(j+ (K 1)D) j 0, ,L-1. We suppose we have K such segments; Xl(j), X,($, and that they cover the entire record, Le., that (K-1)DfL N. This segmenting is illustrated in Fig. 1. The method of estimation is as follows. For each segment of length L we calculate a modified periodo-gram. That is, we select a data window W(j), j= 0, L-1, and form the sequences Xl(j)W(j), X,(j) W(j). We then take the finite Fourier transforms A1(n), AK(~) of these sequences. Here ~k(n) xk(j) w(j)e-z~cijnlL 1 L-1 L j-0 and i= Finally, we obtain the K modified periodograms L U Ik(fn) I Ah(%) k 1, 2, K, where f n 0 , o-,L/2 n \" L and 1 Wyj). 
L j=o The spectral estimate is the average of these periodo", "title": "" }, { "docid": "ef898f8ae69263fea2519d9224aeb9a3", "text": "In public venues, crowd size is a key indicator of crowd safety and stability. Crowding levels can be detected using holistic image features, however this requires a large amount of training data to capture the wide variations in crowd distribution. If a crowd counting algorithm is to be deployed across a large number of cameras, such a large and burdensome training requirement is far from ideal. In this paper we propose an approach that uses local features to count the number of people in each foreground blob segment, so that the total crowd estimate is the sum of the group sizes. This results in an approach that is scalable to crowd volumes not seen in the training data, and can be trained on a very small data set. As a local approach is used, the proposed algorithm can easily be used to estimate crowd density throughout different regions of the scene and be used in a multi-camera environment. A unique localised approach to ground truth annotation reduces the required training data is also presented, as a localised approach to crowd counting has different training requirements to a holistic one. Testing on a large pedestrian database compares the proposed technique to existing holistic techniques and demonstrates improved accuracy, and superior performance when test conditions are unseen in the training set, or a minimal training set is used.", "title": "" }, { "docid": "2891f8baee4ab21d793d0832ce54c24f", "text": "This paper is concerned with the task of bilingual lexicon induction using imagebased features. By applying features from a convolutional neural network (CNN), we obtain state-of-the-art performance on a standard dataset, obtaining a 79% relative improvement over previous work which uses bags of visual words based on SIFT features. The CNN image-based approach is also compared with state-of-the-art linguistic approaches to bilingual lexicon induction, even outperforming these for one of three language pairs on another standard dataset. Furthermore, we shed new light on the type of visual similarity metric to use for genuine similarity versus relatedness tasks, and experiment with using multiple layers from the same network in an attempt to improve performance.", "title": "" }, { "docid": "e2abe7d1ceba4a71b0713eb5eda795d3", "text": "Lossy image and video compression algorithms yield visually annoying artifacts including blocking, blurring, and ringing, especially at low bit-rates. To reduce these artifacts, post-processing techniques have been extensively studied. Recently, inspired by the great success of convolutional neural network (CNN) in computer vision, some researches were performed on adopting CNN in post-processing, mostly for JPEG compressed images. In this paper, we present a CNN-based post-processing algorithm for High Efficiency Video Coding (HEVC), the state-of-theart video coding standard. We redesign a Variable-filter-size Residuelearning CNN (VRCNN) to improve the performance and to accelerate network training. Experimental results show that using our VRCNN as post-processing leads to on average 4.6% bit-rate reduction compared to HEVC baseline. 
The VRCNN outperforms previously studied networks in achieving higher bit-rate reduction, lower memory cost, and multiplied computational speedup.", "title": "" }, { "docid": "c3b217c7492d1febe2249cf2544d3f58", "text": "Online user reviews are increasingly becoming the de-facto standard for measuring the quality of electronics, restaurants, merchants, etc. The sheer volume of online reviews makes it difficult for a human to process and extract all meaningful information in order to make an educated purchase. As a result, there has been a trend toward systems that can automatically summarize opinions from a set of reviews and display them in an easy to process manner [1, 9]. In this paper, we present a system that summarizes the sentiment of reviews for a local service such as a restaurant or hotel. In particular we focus on aspect-based summarization models [8], where a summary is built by extracting relevant aspects of a service, such as service or value, aggregating the sentiment per aspect, and selecting aspect-relevant text. We describe the details of both the aspect extraction and sentiment detection modules of our system. A novel aspect of these models is that they exploit user provided labels and domain specific characteristics of service reviews to increase quality.", "title": "" }, { "docid": "504f3406db4465c10ffad27f2674f232", "text": "In this paper we show the possibility of using FAUST (a programming language for function based block oriented programming) to create a fast audio processor in a single chip FPGA environment. The produced VHDL code is embedded in the on-chip processor system and utilizes the FPGA fabric for parallel processing. For the purpose of implementing and testing the code a complete System-On-Chip framework has been created. We use a Digilent board with a XILINX Virtex 2 Pro FPGA. The chip has a PowerPC 405 core and the framework uses the on chip peripheral bus to interface the core. The content of this paper presents a proof-of-concept implementation using a simple two pole IIR filter. The produced code is working, although more work has to be done for implementing complex arithmetic operations support.", "title": "" }, { "docid": "b47f6272e110928a8d0db8d450e539e9", "text": "This paper presents an ocean energy power take-off system using paddle like wave energy converter (WEC), magnetic gear and efficient power converter architecture. As the WEC oscillates at a low speed of about 5-25 rpm, the direct drive generator is not an efficient design. To increase the generator speed a cost effective flux focusing magnetic gear is proposed. Power converter architecture is discussed and integration of energy storage in the system to smooth the power output is elaborated. Super-capacitor is chosen as energy storage for its better oscillatory power absorbing capability than battery. WEC is emulated in hardware using motor generator set-up and energy storage integration in the system is demonstrated.", "title": "" }, { "docid": "fe5f011e6ee10913eb9ef00478af41df", "text": "This paper highlights problems of semantic metadata interoperability in digital libraries. The prevalence of a plethora of standards and a lack of semantic interoperability can partly be attributed to the absence of theoretical foundations to underpin current metadata approaches and solutions. 
Contemporary metadata standards and interoperability approaches are mainly top-down and hierarchical, and, hence, fail to take into account the diversity of cultural, linguistic and local perspectives that abound. To overcome this, it is proposed that a social constructivist approach should be adopted by libraries and other cultural heritage institutions when archiving information objects that need to be enriched with metadata, thereby reflecting the diversity of views and perspectives that can be held by their users. Following on Charmaz [1], a constructivist grounded theory method is employed to investigate how library professionals and library users view metadata standards, collaborative metadata approaches and semantic web technologies in relation to semantic metadata interoperability. This method allows an active interplay between the researcher and the participants who can be either Library and Information Science researchers, librarians or library users. Following the completion first phase of data collection, preliminary reflections are presented, with emphasis on how Library and Information Science professionals view current metadata practices, especially as used in academic library contexts. However, as the study is ongoing one, it is too early to generate theoretical categories and conclusions.", "title": "" }, { "docid": "82985f584f51a5e103b29265878335e5", "text": "Orthodontic management for patients with single or bilateral congenitally missing permanent lateral incisors is a challenge to effective treatment planning. Over the last several decades, dentistry has focused on several treatment modalities for replacement of missing teeth. The two major alternative treatment options are orthodontic space closure or space opening for prosthetic replacements. For patients with high aesthetic expectations implants are one of the treatment of choices, especially when it comes to replacement of missing maxillary lateral incisors and mandibular incisors. Edentulous areas where the available bone is compromised to use conventional implants with 2,5 mm or more in diameter, narrow diameter implants with less than 2,5 mm diameter can be successfully used. This case report deals with managing a compromised situation in the region of maxillary lateral incisor using a narrow diameter implant.", "title": "" }, { "docid": "9072c5ad2fbba55bdd50b5969862f7c3", "text": "Parametricism has come to scene as an important style in both architectural design and construction where conventional Computer-Aided Design (CAD) tool has become substandard. Building Information Modeling (BIM) is a recent object-based parametric modeling tool for exploring the relationship between the geometric and non-geometric components of the model. The aim of this research is to explore the capabilities of BIM in achieving variety and flexibility in design extending from architectural to urban scale. This study proposes a method by using User Interface (UI) and Application Programming Interface (API) tools of BIM to generate a complex roof structure as a parametric family. This project demonstrates a dynamic variety in architectural scale. 
We hypothesized that if a function calculating the roof length is defined using a variety of inputs, it can later be applied to urban scale by utilizing a database of the inputs.", "title": "" }, { "docid": "677f2e8be01f1e2becda8efc720db85b", "text": "A snake is an energy-minimizing spline guided by external constraint forces and influenced by image forces that pull it toward features such as lines and edges. Snakes are active contour models: they lock onto nearby edges, localizing them accurately. Scale-space continuation can be used to enlarge the capture region surrounding a feature. Snakes provide a unified account of a number of visual problems, including detection of edges, lines, and subjective contours; motion tracking; and stereo matching. We have used snakes successfully for interactive interpretation, in which user-imposed constraint forces guide the snake near features of interest.", "title": "" }, { "docid": "b45aae55cc4e7bdb13463eff7aaf6c60", "text": "Text retrieval systems typically produce a ranking of documents and let a user decide how far down that ranking to go. In contrast, programs that filter text streams, software that categorizes documents, agents which alert users, and many other IR systems must make decisions without human input or supervision. It is important to define what constitutes good effectiveness for these autonomous systems, tune the systems to achieve the highest possible effectiveness, and estimate how the effectiveness changes as new data is processed. We show how to do this for binary text classification systems, emphasizing that different goals for the system lead to different optimal behaviors. Optimizing and estimating effectiveness is greatly aided if classifiers that explicitly estimate the probability of class membership are used.", "title": "" }, { "docid": "d0b16a75fb7b81c030ab5ce1b08d8236", "text": "It is unquestionable that successive hardware generations have significantly improved GPU computing workload performance over the last several years. Moore's law and DRAM scaling have respectively increased single-chip peak instruction throughput by 3X and off-chip bandwidth by 2.2X from NVIDIA's GeForce 8800 GTX in November 2006 to its GeForce GTX 580 in November 2010. However, raw capability numbers typically underestimate the improvements in real application performance over the same time period, due to significant architectural feature improvements. To demonstrate the effects of architecture features and optimizations over time, we conducted experiments on a set of benchmarks from diverse application domains for multiple GPU architecture generations to understand how much performance has truly been improving for those workloads. First, we demonstrate that certain architectural features make a huge difference in the performance of unoptimized code, such as the inclusion of a general cache which can improve performance by 2-4× in some situations. Second, we describe what optimization patterns have been most essential and widely applicable for improving performance for GPU computing workloads across all architecture generations. Some important optimization patterns included data layout transformation, converting scatter accesses to gather accesses, GPU workload regularization, and granularity coarsening, each of which improved performance on some benchmark by over 20%, sometimes by a factor of more than 5×. 
While hardware improvements to baseline unoptimized code can reduce the speedup magnitude, these patterns remain important for even the most recent GPUs. Finally, we identify which added architectural features created significant new optimization opportunities, such as increased register file capacity or reduced bandwidth penalties for misaligned accesses, which increase performance by 2× or more in the optimized versions of relevant benchmarks.", "title": "" } ]
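The data layout transformation named in the GPU passage above can be illustrated outside CUDA with a small NumPy example: converting an array-of-structures (AoS) layout into a structure-of-arrays (SoA) layout so that a kernel reading a single field touches contiguous memory, the access pattern GPUs can coalesce. The array sizes and field names are made up for the illustration and are not taken from the benchmarks discussed in the passage.

```python
# Illustrative AoS -> SoA transformation; generic NumPy, not the paper's benchmark code.
import numpy as np

n = 8
# AoS: each "particle" packs x, y, z consecutively, so per-field access is strided.
aos = np.arange(3 * n, dtype=np.float32).reshape(n, 3)

# SoA: one contiguous array per field, so per-field access is unit-stride (coalescable).
soa = {
    "x": np.ascontiguousarray(aos[:, 0]),
    "y": np.ascontiguousarray(aos[:, 1]),
    "z": np.ascontiguousarray(aos[:, 2]),
}

# A computation that only needs "x" now reads one contiguous block instead of
# every third element of the packed layout.
print(aos[:, 0].strides, soa["x"].strides)  # (12,) vs (4,)
```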
scidocsrr
4f1fb50fcb7b223d9eac49ce8b26a606
A Distributed Access Control System for Cloud Federations
[ { "docid": "a289775f693d6b37f54b13898c242a82", "text": "The large-scale, dynamic, and heterogeneous nature of cloud computing poses numerous security challenges. But the cloud's main challenge is to provide a robust authorization mechanism that incorporates multitenancy and virtualization aspects of resources. The authors present a distributed architecture that incorporates principles from security management and software engineering, and they propose key requirements and a design model for the architecture.", "title": "" }, { "docid": "30394ae468bc521e8e00db030f19e983", "text": "Enigma is a peer-to-peer network enabling different parties to jointly store and run computations on data while keeping the data completely private. Enigma's computational model is based on a highly optimized version of secure multi-party computation, guaranteed by a verifiable secret-sharing scheme. For storage, we use a modified distributed hash table for holding secret-shared data. An external blockchain is utilized as the controller of the network; it manages access control and identities and serves as a tamper-proof log of events. Security deposits and fees incentivize operation, correctness and fairness of the system. Similar to Bitcoin, Enigma removes the need for a trusted third party, enabling autonomous control of personal data. For the first time, users are able to share their data with cryptographic guarantees regarding their privacy.", "title": "" } ]
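To make the secret-sharing idea in the Enigma passage concrete, here is a minimal additive secret-sharing sketch over a prime field: each value is split into random shares that individually reveal nothing, and parties can add shared values without ever reconstructing the inputs. This is a simplified stand-in for the verifiable secret-sharing scheme the passage refers to; the modulus and party count are arbitrary choices for the example.

```python
# Hedged sketch: plain additive secret sharing, not Enigma's verifiable scheme.
import secrets

P = 2**61 - 1  # prime modulus chosen for the example

def share(value, n_parties):
    # Split a value into n random shares whose sum equals the value mod P.
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

a, b = 1234, 5678
sa, sb = share(a, 3), share(b, 3)

# Each party adds its own shares locally; no single party ever sees a or b.
sum_shares = [(x + y) % P for x, y in zip(sa, sb)]
assert reconstruct(sum_shares) == (a + b) % P
print(reconstruct(sum_shares))
```

In a verifiable scheme, each share would additionally carry commitments so that parties can check one another's contributions, which is what the "verifiable" qualifier in the passage points to.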
[ { "docid": "4f0846f7fdb8a1a4537fdc20d2e342a6", "text": "We present a connectionist model of event knowledge that is trained on examples of sequences of activities that are not explicitly labeled as events. The model learns co-occurrence patterns among the components of activities as they occur in the moment (entities, actions, and contexts), and also learns to predict sequential patterns of activities. In so doing, the model displays behaviors that in humans have been characterized as exemplifying inferencing of unmentioned event components, the prediction of upcoming components (which may or may not ever happen or be mentioned), reconstructive memory, and the ability to flexibly accommodate novel variations from previously encountered experiences. All of these behaviors emerge from what the model learns.", "title": "" }, { "docid": "1fc965670f71d9870a4eea93d129e285", "text": "The present study investigates the impact of the experience of role playing a violent character in a video game on attitudes towards violent crimes and criminals. People who played the violent game were found to be more acceptable of crimes and criminals compared to people who did not play the violent game. More importantly, interaction effects were found such that people were more acceptable of crimes and criminals outside the game if the criminals were matched with the role they played in the game and the criminal actions were similar to the activities they perpetrated during the game. The results indicate that people’s virtual experience through role-playing games can influence their attitudes and judgments of similar real-life crimes, especially if the crimes are similar to what they conducted while playing games. Theoretical and practical implications are discussed. 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "5ea123d6e93daf3c1bd3de8110cbf92f", "text": "Recent work in human cognitive neuroscience has linked self-consciousness to the processing of multisensory bodily signals (bodily self-consciousness [BSC]) in fronto-parietal cortex and more posterior temporo-parietal regions. We highlight the behavioral, neurophysiological, neuroimaging, and computational laws that subtend BSC in humans and non-human primates. We propose that BSC includes body-centered perception (hand, face, and trunk), based on the integration of proprioceptive, vestibular, and visual bodily inputs, and involves spatio-temporal mechanisms integrating multisensory bodily stimuli within peripersonal space (PPS). We develop four major constraints of BSC (proprioception, body-related visual information, PPS, and embodiment) and argue that the fronto-parietal and temporo-parietal processing of trunk-centered multisensory signals in PPS is of particular relevance for theoretical models and simulations of BSC and eventually of self-consciousness.", "title": "" }, { "docid": "1298ddbeea84f6299e865708fd9549a6", "text": "Since its invention in the early 1960s (Rotman and Turner, 1963), the Rotman Lens has proven itself to be a useful beamformer for designers of electronically scanned arrays. Inherent in its design is a true time delay phase shift capability that is independent of frequency and removes the need for costly phase shifters to steer a beam over wide angles. The Rotman Lens has a long history in military radar, but it has also been used in communication systems. This article uses the developed software to design and analyze a microstrip Rotman Lens for the Ku band. 
The initial lens design will come from a tool based on geometrical optics (GO). A second stage of analysis will be performed using a full wave finite difference time domain (FDTD) solver. The results between the first-cut design tool and the comprehensive FDTD solver will be compared, and some of the design trades will be explored to gauge their impact on the performance of the lens.", "title": "" }, { "docid": "1e3136f97585c985153b3ed43ac8db6c", "text": "In this report, we organize and reflect on recent advances and challenges in the field of sports data visualization. The exponentially-growing body of visualization research based on sports data is a prime indication of the importance and timeliness of this report. Sports data visualization research encompasses the breadth of visualization tasks and goals: exploring the design of new visualization techniques; adapting existing visualizations to a novel domain; and conducting design studies and evaluations in close collaboration with experts, including practitioners, enthusiasts, and journalists. Frequently this research has impact beyond sports in both academia and in industry because it is i) grounded in realistic, highly heterogeneous data, ii) applied to real-world problems, and iii) designed in close collaboration with domain experts. In this report, we analyze current research contributions through the lens of three categories of sports data: box score data (data containing statistical summaries of a sport event such as a game), tracking data (data about in-game actions and trajectories), and meta-data (data about the sport and its participants but not necessarily a given game). We conclude this report with a high-level discussion of sports visualization research informed by our analysis—identifying critical research gaps and valuable opportunities for the visualization community. More information is available at the STAR’s website: https://sportsdataviz.github.io/.", "title": "" }, { "docid": "8132f2f2bc4f8a40dd2bcede4666ddda", "text": "We propose OntoGain, a system for unsupervised ontology acquisition from unstructured text which relies on multi-word term extraction. For the acquisition of taxonomic relations, we exploit inherent multi-word terms’ lexical information in a comparative implementation of agglomerative hierarchical clustering and formal concept analysis methods. For the detection of non-taxonomic relations, we comparatively investigate in OntoGain an association rules based algorithm and a probabilistic algorithm. The OntoGain system allows for transformation of the derived ontology into standard OWL statements. OntoGain results are compared to both hand-crafted ontologies, as well as to a state-of-the art system, in two different domains: the medical and computer science domains.", "title": "" }, { "docid": "2a8e6cf4d19f62147b92993c30cbfde8", "text": "Off-line recognition of text play a significant role in several application such as the automatic sorting of postal mail or editing old documents. It is the ability of the computer to distinguish characters and words. Automatic off-line recognition of text can be divided into the recognition of printed and handwritten characters. Off-line Arabic handwriting recognition still faces great challenges. This paper provides a survey of Arabic character recognition systems which are classified into the character recognition categories: printed and handwritten. 
Also, it examines the literature on the most significant work in handwritten text recognition without segmentation and discusses algorithms which split the words into characters.", "title": "" }, { "docid": "986ba06f4c9a9b5d53882fc4ebbcbb5c", "text": "With the fast development of big data technologies, IT spending on computer clusters is increasing rapidly as well. In order to minimize the cost, architects must plan big data clusters with careful evaluation of various design choices. Current capacity planning methods are mostly trial-and-error or high level estimation based. These approaches, however, are far from efficient, especially with the increasing hardware diversity and software stack complexity. In this paper, we present CSMethod, a novel cluster simulation methodology, to facilitate efficient cluster capacity planning, performance evaluation and optimization, before system provisioning. With our proposed methodology, software stacks are simulated by an abstract yet high fidelity model, Hardware activities derived from software operations are dynamically mapped onto architecture models for processors, memory, storage and networking devices. This hardware/software hybrid methodology allows low overhead, fast and accurate cluster simulation that can be easily carried out on a standard client platform (desktop or laptop). Our experimental results with six popular Hadoop workloads demonstrate that CSMethod can achieve an average error rate of less than six percent, across various software parameters and cluster hardware configurations. We also illustrate the application of the proposed methodology with two real-world use cases: Video-streaming service system planning and Terasort cluster optimization. All our experiments are run on a commodity laptop with execution speeds faster than native executions on a multi-node high-end cluster.", "title": "" }, { "docid": "76c6dea53623c831186afc202d260608", "text": "We present CitNetExplorer, a new software tool for analyzing and visualizing citation networks of scientific publications. CitNetExplorer can for instance be used to study the development of a research field, to delineate the literature on a research topic, and to support literature reviewing. We first introduce the main concepts that need to be understood when working with CitNetExplorer. We then demonstrate CitNetExplorer by using the tool to analyze the scientometric literature and the literature on community detection in networks. Finally, we discuss some technical details on the construction, visualization, and analysis of citation networks in CitNetExplorer.", "title": "" }, { "docid": "bdc3aca95784fa167b1118fedac9d3c5", "text": "This cross-sectional study compared somatic, endurance performance determinants and heart rate variability (HRV) profiles of professional soccer players divided into different age groups: GI (17-19.9 years; n = 23), GII (20-24.9 years; n = 45), GIII (25-29.9 years; n = 30), and GIV (30-39 years; n = 26). Players underwent somatic and HRV assessment and maximal exercise testing. HRV was analyzed by spectral analysis of HRV, and high (HF) and low (LF) frequency power was transformed by a natural logarithm (Ln). Players in GIV (83 ± 7 kg) were heavier (p < 0.05) compared to both GI (73 ± 6 kg), and GII (78 ± 6 kg). Significantly lower maximal oxygen uptake (VO2max, ml•kg-1•min-1) was observed for GIV (56.6 ± 3.8) compared to GI (59.6 ± 3.9), GII (59.4 ± 4.2) and GIV (59.7 ± 4.1). 
All agegroups, except for GII, demonstrated comparable relative maximal power output (Pmax). For supine HRV, significantly lower Ln HF (ms2) was identified in both GIII (7.1 ± 0.8) and GIV (6.9 ± 1.0) compared to GI (7.9 ± 0.6) and GII (7.7 ± 0.9). In conclusion, soccer players aged >25 years showed negligible differences in Pmax unlike the age group differences demonstrated in VO2max. A shift towards relative sympathetic dominance, particularly due to reduced vagal activity, was apparent after approximately 8 years of competing at the professional level.", "title": "" }, { "docid": "6d291a65658fff5db76df9d9d98855a6", "text": "This paper gives an overview about different failure mechanisms which limit the safe operating area of power devices. It is demonstrated how the device internal processes can be investigated by means of device simulation. For instance, the electrothermal simulation of high-voltage diode turn-off reveals how a backside filament transforms into a continuous filament connecting the anode and cathode and how this can be accompanied with a transition from avalanche-induced into thermally driven carrier generation. A similar current destabilization may occur during insulated-gate bipolar transistor turn-off with a high turn-off rate, when the channel is closed quickly leading to strong dynamic avalanche. It is explained how the current filamentation depends on substrate resistivity, device thickness, channel width, and switching conditions (gate resistor and overcurrent). Filamentation processes during short-circuit events are discussed, and possible countermeasures are suggested. A mechanism of a periodically emerging and vanishing filament near the edge of the chip is presented. Examples on current destabilizing effects in gate turn-off thyristors, integrated gate-commutated thyristors, and metal-oxide-semiconductor field-effect transistors are given, and limitations of current device simulation are discussed.", "title": "" }, { "docid": "85581f0e6db599dd914f6e84586d15c6", "text": "Automatic feature extraction in latent fingerprints is a challenging problem due to poor quality of most latents, such as unclear ridge structures, overlapped lines and letters, and overlapped fingerprints. We proposed a latent fingerprint enhancement algorithm which requires manually marked region of interest (ROI) and singular points. The core of the proposed enhancement algorithm is a novel orientation field estimation algorithm, which fits orientation field model to coarse orientation field estimated from skeleton outputted by a commercial fingerprint SDK. Experimental results on NIST SD27 latent fingerprint database indicate that by incorporating the proposed enhancement algorithm, the matching accuracy of the commercial matcher was significantly improved.", "title": "" }, { "docid": "89a16c0ced4820853fc75db8ab82e557", "text": "We present algorithm for Heart Rate detection based on Short-Term Autocorrelation Center Clipping method. This algorithm is dedicated for biological signal detection, electrocardiogram, in noisy environment with lot of artifacts. Using this algorithm is also possible detect the R pointers in the PQRST complex of the ECG signal. In this paper the new implementation of the heart rate variability estimation is also presented. 
HRV module is based on parametric and non-parametric methods of the power spectral density computation.", "title": "" }, { "docid": "efd7512694ed378cb111c94e53890c89", "text": "Recent years have seen a significant growth and increased usage of large-scale knowledge resources in both academic research and industry. We can distinguish two main types of knowledge resources: those that store factual information about entities in the form of semantic relations (e.g., Freebase), namely so-called knowledge graphs, and those that represent general linguistic knowledge (e.g., WordNet or UWN). In this article, we present a third type of knowledge resource which completes the picture by connecting the two first types. Instances of this resource are graphs of semantically-associated relations (sar-graphs), whose purpose is to link semantic relations from factual knowledge graphs with their linguistic representations in human language. We present a general method for constructing sar-graphs using a languageand relation-independent, distantly supervised approach which, apart from generic language processing tools, relies solely on the availability of a lexical semantic resource, providing sense information for words, as well as a knowledge base containing seed relation instances. Using these seeds, our method extracts, validates and merges relationspecific linguistic patterns from text to create sar-graphs. To cope with the noisily labeled data arising in a distantly supervised setting, we propose several automatic pattern confidence estimation strategies, and also show how manual supervision can be used to improve the quality of sar-graph instances. We demonstrate the applicability of our method by constructing sar-graphs for 25 semantic relations, of which we make a subset publicly available at http://sargraph.dfki.de. We believe sar-graphs will prove to be useful linguistic resources for a wide variety of natural language processing tasks, and in particular for information extraction and knowledge base population. We illustrate their usefulness with experiments in relation extraction and in computer assisted language learning.", "title": "" }, { "docid": "2e6b034cbb73d91b70e3574a06140621", "text": "ETHNOPHARMACOLOGICAL RELEVANCE\nBitter melon (Momordica charantia L.) has been widely used as an traditional medicine treatment for diabetic patients in Asia. In vitro and animal studies suggested its hypoglycemic activity, but limited human studies are available to support its use.\n\n\nAIM OF STUDY\nThis study was conducted to assess the efficacy and safety of three doses of bitter melon compared with metformin.\n\n\nMATERIALS AND METHODS\nThis is a 4-week, multicenter, randomized, double-blind, active-control trial. Patients were randomized into 4 groups to receive bitter melon 500 mg/day, 1,000 mg/day, and 2,000 mg/day or metformin 1,000 mg/day. All patients were followed for 4 weeks.\n\n\nRESULTS\nThere was a significant decline in fructosamine at week 4 of the metformin group (-16.8; 95% CI, -31.2, -2.4 μmol/L) and the bitter melon 2,000 mg/day group (-10.2; 95% CI, -19.1, -1.3 μmol/L). Bitter melon 500 and 1,000 mg/day did not significantly decrease fructosamine levels (-3.5; 95% CI -11.7, 4.6 and -10.3; 95% CI -22.7, 2.2 μmol/L, respectively).\n\n\nCONCLUSIONS\nBitter melon had a modest hypoglycemic effect and significantly reduced fructosamine levels from baseline among patients with type 2 diabetes who received 2,000 mg/day. 
However, the hypoglycemic effect of bitter melon was less than metformin 1,000 mg/day.", "title": "" }, { "docid": "4e368af438658472eb2d7e3db118f61b", "text": "Radiological diagnosis of acetabular retroversion is based on the presence of the cross-over sign (COS), the posterior wall sign (PWS), and prominence of the ischial spine (PRISS). The primary purpose of the study was to correlate the quantitative cross-over sign with the presence or absence of the PRISS and PWS signs. The hypothesis was that both, PRISS and PWS are associated with a higher cross-over sign ratio or higher amount of acetabular retroversion. A previous study identified 1417 patients with a positive acetabular cross-over sign. Among these, three radiological parameters were assessed: (1) the amount of acetabular retroversion, quantified as a cross-over sign ratio; (2) the presence of the PRISS sign; (3) the presence of the PWS sign. The relation of these three parameters was analysed using Fisher's exact test, ANOVA, and linear regression analysis. In hips with cross-over sign, the PRISS was present in 61.7%. A direct association between PRISS and the cross-over sign ratio (p < 0.001) was seen. The PWS was positive in 31% of the hips and was also significantly related with the cross-over sign ratio (p < 0.001). In hips with a PRISS, 39.7% had a PWS sign, which was a significant relation (p < 0.001). In patients with positive PWS, 78.8% of the cases also had a PRISS (p < 0.001). Both the PRISS and PWS signs were significantly associated with higher grade cross-over values. Both the PRISS and PWS signs as well as the coexistence of COS, PRISS, and PWS are significantly associated with higher grade of acetabular retroversion. In conjunction with the COS, the PRISS and PWS signs indicate severe acetabular retroversion. Presence and recognition of distinct radiological signs around the hip joint might raise the awareness of possible femoroacetabular impingement (FAI).", "title": "" }, { "docid": "7ca2d093da7646ff0d69fb3ba9d675ae", "text": "Advancements in deep learning over the years have attracted research into how deep artificial neural networks can be used in robotic systems. It is on this basis that the following research survey will present a discussion of the applications, gains, and obstacles to deep learning in comparison to physical robotic systems while using modern research as examples. The research survey will present a summarization of the current research with specific focus on the gains and obstacles in comparison to robotics. This will be followed by a primer on discussing how notable deep learning structures can be used in robotics with relevant examples. The next section will show the practical considerations robotics researchers desire to use in regard to deep learning neural networks. Finally, the research survey will show the shortcomings and solutions to mitigate them in addition to discussion of the future trends. The intention of this research is to show how recent advancements in the broader robotics field can inspire additional research in applying deep learning in robotics.", "title": "" }, { "docid": "2b53b125dc8c79322aabb083a9c991e4", "text": "Geographical location is vital to geospatial applications like local search and event detection. In this paper, we investigate and improve on the task of text-based geolocation prediction of Twitter users. 
Previous studies on this topic have typically assumed that geographical references (e.g., gazetteer terms, dialectal words) in a text are indicative of its author’s location. However, these references are often buried in informal, ungrammatical, and multilingual data, and are therefore non-trivial to identify and exploit. We present an integrated geolocation prediction framework and investigate what factors impact on prediction accuracy. First, we evaluate a range of feature selection methods to obtain “location indicative words”. We then evaluate the impact of nongeotagged tweets, language, and user-declared metadata on geolocation prediction. In addition, we evaluate the impact of temporal variance on model generalisation, and discuss how users differ in terms of their geolocatability. We achieve state-of-the-art results for the text-based Twitter user geolocation task, and also provide the most extensive exploration of the task to date. Our findings provide valuable insights into the design of robust, practical text-based geolocation prediction systems.", "title": "" }, { "docid": "a68cec6fd069499099c8bca264eb0982", "text": "The anti-saccade task has emerged as an important task for investigating the flexible control that we have over behaviour. In this task, participants must suppress the reflexive urge to look at a visual target that appears suddenly in the peripheral visual field and must instead look away from the target in the opposite direction. A crucial step involved in performing this task is the top-down inhibition of a reflexive, automatic saccade. Here, we describe recent neurophysiological evidence demonstrating the presence of this inhibitory function in single-cell activity in the frontal eye fields and superior colliculus. Patients diagnosed with various neurological and/or psychiatric disorders that affect the frontal lobes or basal ganglia find it difficult to suppress the automatic pro-saccade, revealing a deficit in top-down inhibition.", "title": "" }, { "docid": "80b041b8712436474a200c5b5ed3aeb2", "text": "Building a spatially consistent model is a key functionality to endow a mobile robot with autonomy. Without an initial map or an absolute localization means, it requires to concurrently solve the localization and mapping problems. For this purpose, vision is a powerful sensor, because it provides data from which stable features can be extracted and matched as the robot moves. But it does not directly provide 3D information, which is a difficulty for estimating the geometry of the environment. This article presents two approaches to the SLAM problem using vision: one with stereovision, and one with monocular images. Both approaches rely on a robust interest point matching algorithm that works in very diverse environments. The stereovision based approach is a classic SLAM implementation, whereas the monocular approach introduces a new way to initialize landmarks. Both approaches are analyzed and compared with extensive experimental results, with a rover and a blimp.", "title": "" } ]
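Looking back at the text-based geolocation passage above, the toy sketch below ranks "location indicative words" by how concentrated a word's usage is in a single city relative to the whole corpus. The tiny tweet sample and the concentration ratio are illustrative simplifications of the feature-selection methods that study actually compares.

```python
# Hedged sketch: a toy concentration ratio, not the evaluated feature-selection methods.
from collections import Counter, defaultdict

tweets = [
    ("melbourne", "off to the mcg for the footy"),
    ("london", "stuck on the tube again"),
    ("london", "the tube is packed tonight"),
    ("melbourne", "great footy weather today"),
]

city_counts = defaultdict(Counter)
global_counts = Counter()
for city, text in tweets:
    for w in text.split():
        city_counts[city][w] += 1
        global_counts[w] += 1

def indicativeness(word):
    # Highest per-city relative frequency divided by the global relative frequency:
    # words used mostly in one place score high, function words score near 1.
    best = max(c[word] / sum(c.values()) for c in city_counts.values())
    return best / (global_counts[word] / sum(global_counts.values()))

ranked = sorted(global_counts, key=indicativeness, reverse=True)
print(ranked[:5])  # words like "tube" or "footy" rank above "the"
```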
scidocsrr
b2ec4cad4ce473a24a5c95039576d46e
Optimal Microgrid Control and Power-Flow Study With Different Bidding Policies by Using PowerWorld Simulator
[ { "docid": "704c3e9a966cc3b59ba18a36e7b99ea0", "text": "The environmental and economic benefits of the microgrid, and consequently its acceptability and degree of proliferation in the utility power industry, are primarily determined by the envisioned controller capabilities and the operational features. Depending on the type and depth of penetration of distributed energy resource (DER) units, load characteristics and power quality constraints, and market participation strategies, the required control and operational strategies of a microgrid can be significantly, and even conceptually, different from those of conventional power systems.", "title": "" }, { "docid": "ed06226e548fac89cc06a798618622c6", "text": "Exciting yet challenging times lie ahead. The electrical power industry is undergoing rapid change. The rising cost of energy, the mass electrification of everyday life, and climate change are the major drivers that will determine the speed at which such transformations will occur. Regardless of how quickly various utilities embrace smart grid concepts, technologies, and systems, they all agree on the inevitability of this massive transformation. It is a move that will not only affect their business processes but also their organization and technologies.", "title": "" } ]
[ { "docid": "79564b938dde94306a2a142240bf30ea", "text": "Accurately counting maize tassels is important for monitoring the growth status of maize plants. This tedious task, however, is still mainly done by manual efforts. In the context of modern plant phenotyping, automating this task is required to meet the need of large-scale analysis of genotype and phenotype. In recent years, computer vision technologies have experienced a significant breakthrough due to the emergence of large-scale datasets and increased computational resources. Naturally image-based approaches have also received much attention in plant-related studies. Yet a fact is that most image-based systems for plant phenotyping are deployed under controlled laboratory environment. When transferring the application scenario to unconstrained in-field conditions, intrinsic and extrinsic variations in the wild pose great challenges for accurate counting of maize tassels, which goes beyond the ability of conventional image processing techniques. This calls for further robust computer vision approaches to address in-field variations. This paper studies the in-field counting problem of maize tassels. To our knowledge, this is the first time that a plant-related counting problem is considered using computer vision technologies under unconstrained field-based environment. With 361 field images collected in four experimental fields across China between 2010 and 2015 and corresponding manually-labelled dotted annotations, a novel Maize Tassels Counting (MTC) dataset is created and will be released with this paper. To alleviate the in-field challenges, a deep convolutional neural network-based approach termed TasselNet is proposed. TasselNet can achieve good adaptability to in-field variations via modelling the local visual characteristics of field images and regressing the local counts of maize tassels. Extensive results on the MTC dataset demonstrate that TasselNet outperforms other state-of-the-art approaches by large margins and achieves the overall best counting performance, with a mean absolute error of 6.6 and a mean squared error of 9.6 averaged over 8 test sequences. TasselNet can achieve robust in-field counting of maize tassels with a relatively high degree of accuracy. Our experimental evaluations also suggest several good practices for practitioners working on maize-tassel-like counting problems. It is worth noting that, though the counting errors have been greatly reduced by TasselNet, in-field counting of maize tassels remains an open and unsolved problem.", "title": "" }, { "docid": "d17fbb3d8ba36118b0bc20cb9d44bf90", "text": "Although voluntary individual adoption of information technology is well studied in the literature, further theoretical development is needed to account for the specific characteristics of the mobile commerce artifact. In this study, we augment the theory of planned behavior to include evaluative variables that are not fully captured in attitude and to enhance the specificity of the model to mobile commerce. More specifically, we develop, operationalize and empirically test a model for explaining the adoption intention of transactional B2C mobile commerce. The model is empirically tested with mobile device users who have not adopted mobile commerce yet. The empirical results provide strong support for the theoretical model, shedding light on the significance and relative importance of specific adoption factors. 
The theoretical and empirical implications of these results are discussed.", "title": "" }, { "docid": "5c6bdb80f470d7b9b0e2acd57cb23295", "text": "We present a novel sentence reduction system for automatically removing extraneous phrases from sentences that are extracted from a document for summarization purpose. The system uses multiple sources of knowledge to decide which phrases in an extracted sentence can be removed, including syntactic knowledge, context information, and statistics computed from a corpus which consists of examples written by human professionals. Reduction can significantly improve the conciseness of automatic summaries.", "title": "" }, { "docid": "98f704cf1ea1247c8c4087af23b6ebe5", "text": "We introduce BAG, the Berkeley Analog Generator, an integrated framework for the development of generators of Analog and Mixed Signal (AMS) circuits. Such generators are parameterized design procedures that produce sized schematics and correct layouts optimized to meet a set of input specifications. BAG extends previous work by implementing interfaces to integrate all steps of the design flow into a single environment and by providing helper classes -- both at the schematic and layout level -- to aid the designer in developing truly parameterized and technology-independent circuit generators. This simplifies the codification of common tasks including technology characterization, schematic and testbench translation, simulator interfacing, physical verification and extraction, and parameterized layout creation for common styles of layout. We believe that this approach will foster design reuse, ease technology migration, and shorten time-to-market, while remaining close to the classical design flow to ease adoption. We have used BAG to design generators for several circuits, including a Voltage Controlled Oscillator (VCO) and a Switched-Capacitor (SC) voltage regulator in a CMOS 65nm process. We also present results from automatic migration of our designs to a 40nm process.", "title": "" }, { "docid": "b3b02767cdf765b46a26f79c26730503", "text": "In the last decade, computational models that distinguish semantic relations have become crucial for many applications in Natural Language Processing (NLP), such as machine translation, question answering, sentiment analysis, and so on. These computational models typically distinguish semantic relations by either representing semantically related words as vector representations in the vector space, or using neural networks to classify semantic relations. In this thesis, we mainly focus on the improvement of such computational models. Specifically, the goal of this thesis is to address the tasks of distinguishing antonymy, synonymy, and hypernymy. For the task of distinguishing antonymy and synonymy, we propose two approaches. In the first approach, we focus on improving both families of word vector representations, which are distributional and distributed vector representations. Regarding the improvement of distributional vector representation, we propose a novel weighted feature for constructing word vectors by relying on distributional lexical contrast, a feature capable of differentiating between antonymy and synonymy. In terms of the improvement of distributed vector representations, we propose a neural model to learn word vectors by integrating distributional lexical contrast into the objective function of the neural model. The resulting word vectors can distinguish antonymy from synonymy and predict degrees of word similarity. 
In the second approach, we aim to use lexico-syntactic patterns to classify antonymy and synonymy. To do so, we propose two pattern-based neural networks to distinguish antonymy from synonymy. The lexico-syntactic patterns are induced from the syntactic parse trees and then encoded as vector representations by neural networks. As a result, the two pattern-based neural networks improve performance over prior pattern-based methods. For the tasks of distinguishing hypernymy, we propose a novel neural model to learn hierarchical embeddings for hypernymy detection and directionality. The hierarchical embeddings are learned according to two underlying aspects (i) that the similarity of hypernymy is higher than similarity of other relations, and (ii) that the distributional hierarchy is generated between hyponyms and hypernyms. The experimental results show that hierarchical embeddings significantly outperform state-of-the-art word embeddings. In order to improve word embeddings for measuring semantic similarity and relatedness, we propose two neural models to learn word denoising embeddings by filtering noise from original word embeddings without using any external resources. Two proposed neural models receive original word embeddings as inputs and learn denoising matrices to filter noise from original word embeddings. Word denoising embeddings achieve the improvement against original word embeddings over tasks of semantic similarity and relatedness. Furthermore, rather than using English, we also shift the focus on evaluating the performance of computational models to Vietnamese. To that effect, we introduce two novel datasets of (dis-)similarity and relatedness for Vietnamese. We then make use of computational models to verify the two datasets and to observe their performance in being adapted to Vietnamese. The results show that computational models exhibit similar behaviour in the two Vietnamese datasets as in the corresponding English datasets.", "title": "" }, { "docid": "8e695f37849eabe7133e22a7ebfe855a", "text": "OBJECTIVE\nThe purpose of this review is to provide guidance on the development, validation and use of food-frequency questionnaires (FFQs) for different study designs. It does not include any recommendations about the most appropriate method for dietary assessment (e.g. food-frequency questionnaire versus weighed record).\n\n\nMETHODS\nA comprehensive search of electronic databases was carried out for publications from 1980 to 1999. Findings from the review were then commented upon and added to by a group of international experts.\n\n\nRESULTS\nRecommendations have been developed to aid in the design, validation and use of FFQs. Specific details of each of these areas are discussed in the text.\n\n\nCONCLUSIONS\nFFQs are being used in a variety of ways and different study designs. There is no gold standard for directly assessing the validity of FFQs. Nevertheless, the outcome of this review should help those wishing to develop or adapt an FFQ to validate it for its intended use.", "title": "" }, { "docid": "63405a3fc4815e869fc872bb96bb8a33", "text": "We demonstrate how to learn efficient heuristics for automated reasoning algorithms through deep reinforcement learning. We consider search algorithms for quantified Boolean logics, that already can solve formulas of impressive size up to 100s of thousands of variables. The main challenge is to find a representation which lends to making predictions in a scalable way. 
The heuristics learned through our approach significantly improve over the handwritten heuristics for several sets of formulas.", "title": "" }, { "docid": "e361038adbd26439ec793327ac469942", "text": "Vehicular ad hoc networks are expected to significantly improve traffic safety and transportation efficiency while providing a comfortable driving experience. However, available communication, storage, and computation resources of the connected vehicles are not well utilized to meet the service requirements of intelligent transportation systems. Vehicular cloud computing (VCC) is a promising approach that makes use of the advantages of cloud computing and applies them to vehicular networks. In this paper, we propose an optimal computation resource allocation scheme to maximize the total long-term expected reward of the VCC system. The system reward is derived by taking into account both the income and cost of the VCC system as well as the variability feature of available resources. Then, the optimization problem is formulated as an infinite horizon semi-Markov decision process (SMDP) with the defined state space, action space, reward model, and transition probability distribution of the VCC system. We utilize the iteration algorithm to develop the optimal scheme that describes which action has to be taken under a certain state. Numerical results demonstrate that the significant performance gain can be obtained by the SMDP-based scheme within the acceptable complexity.", "title": "" }, { "docid": "e1b6de27518c1c17965a891a8d14a1e1", "text": "Mobile phones are becoming more and more widely used nowadays, and people do not use the phone only for communication: there is a wide variety of phone applications allowing users to select those that fit their needs. Aggregated over time, application usage patterns exhibit not only what people are consistently interested in but also the way in which they use their phones, and can help improving phone design and personalized services. This work aims at mining automatically usage patterns from apps data recorded continuously with smartphones. A new probabilistic framework for mining usage patterns is proposed. Our methodology involves the design of a bag-of-apps model that robustly represents level of phone usage over specific times of the day, and the use of a probabilistic topic model that jointly discovers patterns of usage over multiple applications and describes users as mixtures of such patterns. Our framework is evaluated using 230 000+ hours of real-life app phone log data, demonstrates that relevant patterns of usage can be extracted, and is objectively validated on a user retrieval task with competitive performance.", "title": "" }, { "docid": "f4ae3b7eb96660f59a7d38e29e5e9557", "text": "Music performance is both a natural human activity, present in all societies, and one of the most complex and demanding cognitive challenges that the human mind can undertake. Unlike most other sensory–motor activities, music performance requires precise timing of several hierarchically organized actions, as well as precise control over pitch interval production, implemented through diverse effectors according to the instrument involved. 
We review the cognitive neuroscience literature of both motor and auditory domains, highlighting the value of studying interactions between these systems in a musical context, and propose some ideas concerning the role of the premotor cortex in integration of higher order features of music with appropriately timed and organized actions.", "title": "" }, { "docid": "31e955e62361b6857b31d09398760830", "text": "Measuring “how much the human is in the interaction” - the level of engagement - is instrumental in building effective interactive robots. Engagement, however, is a complex, multi-faceted cognitive mechanism that is only indirectly observable. This article formalizes with-me-ness as one of such indirect measures. With-me-ness, a concept borrowed from the field of Computer-Supported Collaborative Learning, measures in a well-defined way to what extent the human is with the robot over the course of an interactive task. As such, it is a meaningful precursor of engagement. We expose in this paper the full methodology, from real-time estimation of the human's focus of attention (relying on a novel, open-source, vision-based head pose estimator), to on-line computation of with-me-ness. We report as well on the experimental validation of this approach, using a naturalistic setup involving children during a complex robot-teaching task.", "title": "" }, { "docid": "531e30bf9610b82f6fc650652e6fc836", "text": "A versatile microreactor platform featuring a novel chemical-resistant microvalve array has been developed using combined silicon/polymer micromachining and a special polymer membrane transfer process. The basic valve unit in the array has a typical ‘transistor’ structure and a PDMS/parylene double-layer valve membrane. A robust multiplexing algorithm is also proposed for individual addressing of a large array using a minimal number of signal inputs. The in-channel microvalve is leakproof upon pneumatic actuation. In open status it introduces small impedance to the fluidic flow, and allows a significantly larger dynamic range of flow rates (∼ml min−1) compared with most of the microvalves reported. Equivalent electronic circuits were established by modeling the microvalves as PMOS transistors and the fluidic channels as simple resistors to provide theoretical prediction of the device fluidic behavior. The presented microvalve/reactor array showed excellent chemical compatibility in the tests with several typical aggressive chemicals including those seriously degrading PDMS-based microfluidic devices. Combined with the multiplexing strategy, this versatile array platform can find a variety of lab-on-a-chip applications such as addressable multiplex biochemical synthesis/assays, and is particularly suitable for those requiring tough chemicals, large flow rates and/or high-throughput parallel processing. As an example, the device performance was examined through the addressed synthesis of 30-mer DNA oligonucleotides followed by sequence validation using on-chip hybridization. The results showed leakage-free valve array addressing and proper synthesis in target reactors, as well as uniform flow distribution and excellent regional reaction selectivity. 
", "title": "" }, { "docid": "4ee84cfdef31d4814837ad2811e59cd4", "text": "In this paper, we propose the MIML (Multi-Instance Multi-Label learning) framework where an example is described by multiple instances and associated with multiple class labels. Compared to traditional learning frameworks, the MIML framework is more convenient and natural for representing complicated objects which have multiple semantic meanings. To learn from MIML examples, we propose the MimlBoost and MimlSvm algorithms based on a simple degeneration strategy, and experiments show that solving problems involving complicated objects with multiple semantic meanings in the MIML framework can lead to good performance. Considering that the degeneration process may lose information, we propose the D-MimlSvm algorithm which tackles MIML problems directly in a regularization framework. Moreover, we show that even when we do not have access to the real objects and thus cannot capture more information from real objects by using the MIML representation, MIML is still useful. We propose the InsDif and SubCod algorithms. InsDif works by transforming single-instances into the MIML representation for learning, while SubCod works by transforming single-label examples into the MIML representation for learning. Experiments show that in some tasks they are able to achieve better performance than learning the single-instances or single-label examples directly.", "title": "" }, { "docid": "2b3de55ff1733fac5ee8c22af210658a", "text": "With faster connection speed, Internet users are now making social network a huge reservoir of texts, images and video clips (GIF). Sentiment analysis for such online platforms can be used to predict political elections, evaluate economic indicators and so on. However, GIF sentiment analysis is quite challenging, not only because it hinges on spatio-temporal visual content abstraction, but also because the relationship between such abstraction and final sentiment remains unknown. In this paper, we are dedicated to finding out such a relationship. We proposed a SentiPairSequence-based spatiotemporal visual sentiment ontology, which forms the mid-level representations for GIF sentiment. The establishment process of SentiPair contains two steps. First, we construct the Synset Forest to define the semantic tree structure of visual sentiment label elements. Then, through the Synset Forest, we organically select and combine sentiment label elements to form a mid-level visual sentiment representation. Our experiments indicate that SentiPair outperforms other competing mid-level attributes. Using SentiPair, our analysis framework can achieve satisfying prediction accuracy (72.6%). We also opened our dataset (GSO-2015) to the research community. GSO-2015 contains more than 6,000 manually annotated GIFs out of more than 40,000 candidates. Each is labeled with both sentiment and SentiPair Sequence.", "title": "" }, { "docid": "fb11348b48f65a4d3101727308a1f4fc", "text": "Spin-transfer torque random access memory (STT-RAM) has emerged as an attractive candidate for future nonvolatile memories. It combines the benefits of current state-of-the-art memories including high-speed read operation (of static RAM), high density (of dynamic RAM), and nonvolatility (of flash memories). 
However, the write operation in the 1T-1MTJ STT-RAM bitcell is asymmetric and stochastic, which leads to high energy consumption and long latency. In this paper, a new write assist technique is proposed to terminate the write operation immediately after switching takes place in the magnetic tunneling junction (MTJ). As a result, both the write time and write energy consumption of 1T-1MTJ bitcells improves. Moreover, the proposed write assist technique leads to an error-free write operation. The simulation results using a 65-nm CMOS access transistor and a 40-nm MTJ technology confirm that the proposed write assist technique results in three orders of magnitude improvement in bit error rate compared with the best existing techniques. Moreover, the proposed write assist technique leads to 81% energy saving compared with a cell without write assist and adds only 9.6% area overhead to a 16-kbit STT-RAM array.", "title": "" }, { "docid": "1bc1965682f757dcfa86936911855add", "text": "Software-Defined Networking (SDN) introduces a new communication network management paradigm and has gained much attention recently. In SDN, a network controller overlooks and manages the entire network by configuring routing mechanisms for underlying switches. The switches report their status to the controller periodically, such as port statistics and flow statistics, according to their communication protocol. However, switches may contain vulnerabilities that can be exploited by attackers. A compromised switch may not only lose its normal functionality, but it may also maliciously paralyze the network by creating network congestions or packet loss. Therefore, it is important for the system to be able to detect and isolate malicious switches. In this work, we investigate a methodology for an SDN controller to detect compromised switches through real-time analysis of the periodically collected reports. Two types of malicious behavior of compromised switches are investigated: packet dropping and packet swapping. We proposed two anomaly detection algorithms to detect packet droppers and packet swappers. Our simulation results show that our proposed methods can effectively detect packet droppers and swappers. To the best of our knowledge, our work is the first to address malicious switches detection using statistics reports in SDN.", "title": "" }, { "docid": "8d20b2a4d205684f6353fe710f989fde", "text": "Financial institutions manage numerous portfolios whose risk must be managed continuously, and the large amounts of data that has to be processed renders this a considerable effort. As such, a system that autonomously detects anomalies in the risk measures of financial portfolios, would be of great value. To this end, the two econometric models ARMA-GARCH and EWMA, and the two machine learning based algorithms LSTM and HTM, were evaluated for the task of performing unsupervised anomaly detection on the streaming time series of portfolio risk measures. Three datasets of returns and Value-at-Risk series were synthesized and one dataset of real-world Value-at-Risk series had labels handcrafted for the experiments in this thesis. The results revealed that the LSTM has great potential in this domain, due to an ability to adapt to different types of time series and for being effective at finding a wide range of anomalies. However, the EWMA had the benefit of being faster and more interpretable, but lacked the ability to capture anomalous trends. 
The ARMA-GARCH was found to have difficulties in finding a good fit to the time series of risk measures, resulting in poor performance, and the HTM was outperformed by the other algorithms in every regard, due to an inability to learn the autoregressive behaviour of the time series.", "title": "" }, { "docid": "45b1629ad93315e389e3f78d9697e8e0", "text": "Loudspeakers and amplifiers of mobile communication receivers may cause significant nonlinear distortion in the acoustic echo path, resulting in a limitation of the performance of linear echo cancelers. In this contribution, we present a nonlinear acoustic echo suppressor in order to increase the attenuation of the nonlinearly distorted residual echo. The proposed approach is based on a power filter model of the acoustic echo path which is applied to the estimation of the power spectral density of the nonlinear residual echo. These time-variant estimates are used to appropriately adjust the frequency-dependent gain values of the echo suppressor. The performance of the proposed approach is evaluated in realistic experimental set-ups.", "title": "" }, { "docid": "58fbd637f7c044aeb0d55ba015c70f61", "text": "This paper outlines an innovative software development that utilizes Quality of Service (QoS) and parallel technologies in Cisco Catalyst Switches to increase the analytical performance of a Network Intrusion Detection and Protection System (NIDPS) when deployed in highspeed networks. We have designed a real network to present experiments that use a Snort NIDPS. Our experiments demonstrate the weaknesses of NIDPSes, such as inability to process multiple packets and propensity to drop packets in heavy traffic and high-speed networks without analysing them. We tested Snort’s analysis performance, gauging the number of packets sent, analysed, dropped, filtered, injected, and outstanding. We suggest using QoS configuration technologies in a Cisco Catalyst 3560 Series Switch and parallel Snorts to improve NIDPS performance and to reduce the number of dropped packets. Our results show that our novel configuration improves performance.", "title": "" }, { "docid": "3c9be4272d57966660c74857a68a70d3", "text": "Due to the recent surge in end-users demands, value-added video services (e.g. in-stream video advertisements) need to be provisioned in a cost-efficient and agile manner in Content Delivery Networks (CDNs). Network Function Virtualization (NFV) is an emerging technology that aims to reduce costs and bring agility by decoupling network functions from the underlying hardware. It is often used in combination with Software Defined Network (SDN), a technology to decouple control and data planes. This paper proposes an NFV and SDN-based architecture for a cost-efficient and agile provisioning of value-added video services in CDNs. In the proposed architecture, the application-level middleboxes that enable value-added video services (e.g. mixer, compressor) are provisioned as Virtual Network Functions (VNFs) and chained using application-level SDN switches. HTTP technology is used as the pillar of the implementation architecture. We have built a prototype and deployed it in an OPNFV test lab and in SAVI, a Canadian distributed test bed for future Internet applications. The performance is also evaluated.", "title": "" } ]
scidocsrr
01347f095bba102c22475914a023366c
Deep Semantic Architecture with discriminative feature visualization for neuroimage analysis
[ { "docid": "14e5874d0916a293eed6489130925098", "text": "Deep learning methods have recently made notable advances in the tasks of classification and representation learning. These tasks are important for brain imaging and neuroscience discovery, making the methods attractive for porting to a neuroimager's toolbox. Success of these methods is, in part, explained by the flexibility of deep learning models. However, this flexibility makes the process of porting to new areas a difficult parameter optimization problem. In this work we demonstrate our results (and feasible parameter ranges) in application of deep learning methods to structural and functional brain imaging data. These methods include deep belief networks and their building block the restricted Boltzmann machine. We also describe a novel constraint-based approach to visualizing high dimensional data. We use it to analyze the effect of parameter choices on data transformations. Our results show that deep learning methods are able to learn physiologically important representations and detect latent relations in neuroimaging data.", "title": "" } ]
[ { "docid": "8899dc843831f592a89d0f6cf9688dfc", "text": "Deep neural networks have yielded immense success in speech recognition, computer vision and natural language processing. However, the exploration of deep neural networks for recommender systems has received a relatively little introspection. Also, different recommendation scenarios have their own issues which creates the need for different approaches for recommendation. Specifically in news recommendation a major problem is that of varying user interests. In this work, we use deep neural networks with attention to tackle the problem of news recommendation. The key factor in user-item based collaborative filtering is to identify the interaction between user and item features. Matrix factorization is one of the most common approaches for identifying this interaction. It maps both the users and the items into a joint latent factor space such that user-item interactions in that space can be modeled as inner products in that space. Some recent work has used deep neural networks with the motive to learn an arbitrary function instead of the inner product that is used for capturing the user-item interaction. However, directly adapting it for the news domain does not seem to be very suitable. This is because of the dynamic nature of news readership where the interests of the users keep changing with time. Hence, it becomes challenging for recommendation systems to model both user preferences as well as account for the interests which keep changing over time. We present a deep neural model, where a non-linear mapping of users and item features are learnt first. For learning a non-linear mapping for the users we use an attention-based recurrent layer in combination with fully connected layers. For learning the mappings for the items we use only fully connected layers. We then use a ranking based objective function to learn the parameters of the network. We also use the content of the news articles as features for our model. Extensive experiments on a real-world dataset show a significant improvement of our proposed model over the state-of-the-art by 4.7% (Hit Ratio@10). Along with this, we also show the effectiveness of our model to handle the user cold-start and item cold-start problems. ? Vaibhav Kumar and Dhruv Khattar are the corresponding authors", "title": "" }, { "docid": "9e3d3783aa566b50a0e56c71703da32b", "text": "Heterogeneous networks are widely used to model real-world semi-structured data. The key challenge of learning over such networks is the modeling of node similarity under both network structures and contents. To deal with network structures, most existing works assume a given or enumerable set of meta-paths and then leverage them for the computation of meta-path-based proximities or network embeddings. However, expert knowledge for given meta-paths is not always available, and as the length of considered meta-paths increases, the number of possible paths grows exponentially, which makes the path searching process very costly. On the other hand, while there are often rich contents around network nodes, they have hardly been leveraged to further improve similarity modeling. In this work, to properly model node similarity in content-rich heterogeneous networks, we propose to automatically discover useful paths for pairs of nodes under both structural and content information. To this end, we combine continuous reinforcement learning and deep content embedding into a novel semi-supervised joint learning framework. 
Specifically, the supervised reinforcement learning component explores useful paths between a small set of example similar pairs of nodes, while the unsupervised deep embedding component captures node contents and enables inductive learning on the whole network. The two components are jointly trained in a closed loop to mutually enhance each other. Extensive experiments on three real-world heterogeneous networks demonstrate the supreme advantages of our algorithm.", "title": "" }, { "docid": "e1066f3b7ff82667dbc7186f357dd406", "text": "Generative adversarial networks (GANs) are becoming increasingly popular for image processing tasks. Researchers have started using GANs for speech enhancement, but the advantage of using the GAN framework has not been established for speech enhancement. For example, a recent study reports encouraging enhancement results, but we find that the architecture of the generator used in the GAN gives better performance when it is trained alone using the $L_1$ loss. This work presents a new GAN for speech enhancement, and obtains performance improvement with the help of adversarial training. A deep neural network (DNN) is used for time-frequency mask estimation, and it is trained in two ways: regular training with the $L_1$ loss and training using the GAN framework with the help of an adversary discriminator. Experimental results suggest that the GAN framework improves speech enhancement performance. Further exploration of loss functions, for speech enhancement, suggests that the $L_1$ loss is consistently better than the $L_2$ loss for improving the perceptual quality of noisy speech.", "title": "" }, { "docid": "5b340560406b99bcb383816accf45060", "text": "Modern global managers are required to possess a set of competencies or multiple intelligences in order to meet pressing business challenges. Hence, expanding global managers' competencies is becoming an important issue. Many scholars and specialists have proposed various competency models containing a list of required competencies. But it is hard for someone to master a broad set of competencies at the same time. Here arises an imperative issue on how to enrich global managers' competencies by way of segmenting a set of competencies into some portions in order to facilitate competency development with a stepwise mode. To solve this issue involving the vagueness of human judgments, we have proposed an effective method combining fuzzy logic and Decision Making Trial and Evaluation Laboratory (DEMATEL) to segment required competencies for better promoting the competency development of global managers. Additionally, an empirical study is presented to illustrate the proposed method.", "title": "" }, { "docid": "e9dcc0eb5894907142dffdf2aa233c35", "text": "The explosion of the web and the abundance of linked data demand effective and efficient methods for storage, management and querying. More specifically, the ever-increasing size and number of RDF data collections raise the need for efficient query answering, and dictate the usage of distributed data management systems for effectively partitioning and querying them. To this direction, Apache Spark is one of the most active big-data approaches, with more and more systems adopting it, for efficient, distributed data management. 
The purpose of this paper is to provide an overview of the existing works dealing with efficient query answering, in the area of RDF data, using Apache Spark. We discuss the characteristics and the key dimensions of such systems, we describe novel ideas in the area, and the corresponding drawbacks, and provide directions for future work.", "title": "" }, { "docid": "87cfc5cad31751fd89c68dc9557eb33f", "text": "This paper presents a low-voltage (LV) (1.0 V) and low-power (LP) (40 μW) inverter based operational transconductance amplifier (OTA) using FGMOS (Floating-Gate MOS) transistor and its application in Gm-C filters. The OTA was designed in a 0.18 μm CMOS process. The simulation results of the proposed OTA demonstrate an open loop gain of 30.2 dB and a unity gain frequency of 942 MHz. In this OTA, the relative tuning range of 50 is achieved. To demonstrate the use of the proposed OTA in practical circuits, the second-order filter was designed. The designed filter has a good tuning range from 100 kHz to 5.6 MHz which is suitable for the wireless specifications of Bluetooth (650 kHz), CDMA2000 (700 kHz) and Wideband CDMA (2.2 MHz). The active area occupied by the designed filter on the silicon is and the maximum power consumption of this filter is 160 μW.", "title": "" }, { "docid": "879cc991ec7353678cc22d6771684c3e", "text": "We demonstrate an x-axis Lorentz force sensor (LFS) for electronic compass applications. The sensor is based on a 30 μm thick torsional resonator fabricated in a process similar to that used in commercial MEMS gyroscopes. The sensor achieved a resolution of 210 nT/√Hz with a DC supply voltage of 2 V and driving power consumption of 1 mW. Bias instability was measured as 60 nT using the minimum Allan deviation. This mechanically-balanced torsional resonator also confers the advantage of low acceleration sensitivity; the measured response to acceleration is below the sensor's noise level.", "title": "" }, { "docid": "f9af6cca7d9ac18ace9bc6169b4393cc", "text": "Metric learning has become a widely used tool in machine learning. To reduce expensive costs brought in by increasing dimensionality, low-rank metric learning arises as it can be more economical in storage and computation. However, existing low-rank metric learning algorithms usually adopt nonconvex objectives, and are hence sensitive to the choice of a heuristic low-rank basis. In this paper, we propose a novel low-rank metric learning algorithm to yield bilinear similarity functions. This algorithm scales linearly with input dimensionality in both space and time, therefore applicable to high-dimensional data domains. A convex objective free of heuristics is formulated by leveraging trace norm regularization to promote low-rankness. Crucially, we prove that all globally optimal metric solutions must retain a certain low-rank structure, which enables our algorithm to decompose the high-dimensional learning task into two steps: an SVD-based projection and a metric learning problem with reduced dimensionality. The latter step can be tackled efficiently through employing a linearized Alternating Direction Method of Multipliers. The efficacy of the proposed algorithm is demonstrated through experiments performed on four benchmark datasets with tens of thousands of dimensions.", "title": "" }, { "docid": "ae9bdb80a60dd6820c1c9d9557a73ffc", "text": "We propose a novel method for predicting image labels by fusing image content descriptors with the social media context of each image. 
An image uploaded to a social media site such as Flickr often has meaningful, associated information, such as comments and other images the user has uploaded, that is complementary to pixel content and helpful in predicting labels. Prediction challenges such as ImageNet [6]and MSCOCO [19] use only pixels, while other methods make predictions purely from social media context [21]. Our method is based on a novel fully connected Conditional Random Field (CRF) framework, where each node is an image, and consists of two deep Convolutional Neural Networks (CNN) and one Recurrent Neural Network (RNN) that model both textual and visual node/image information. The edge weights of the CRF graph represent textual similarity and link-based metadata such as user sets and image groups. We model the CRF as an RNN for both learning and inference, and incorporate the weighted ranking loss and cross entropy loss into the CRF parameter optimization to handle the training data imbalance issue. Our proposed approach is evaluated on the MIR-9K dataset and experimentally outperforms current state-of-the-art approaches.", "title": "" }, { "docid": "5e86f40cfc3b2e9664ea1f7cc5bf730c", "text": "Due to a wide range of applications, wireless sensor networks (WSNs) have recently attracted a lot of interest to the researchers. Limited computational capacity and power usage are two major challenges to ensure security in WSNs. Recently, more secure communication or data aggregation techniques have discovered. So, familiarity with the current research in WSN security will benefit researchers greatly. In this paper, security related issues and challenges in WSNs are investigated. We identify the security threats and review proposed security mechanisms for WSNs. Moreover, we provide a brief discussion on the future research direction in WSN security.", "title": "" }, { "docid": "0efa756a15219d8383ca296860f7433a", "text": "Chronic inflammation plays a multifaceted role in carcinogenesis. Mounting evidence from preclinical and clinical studies suggests that persistent inflammation functions as a driving force in the journey to cancer. The possible mechanisms by which inflammation can contribute to carcinogenesis include induction of genomic instability, alterations in epigenetic events and subsequent inappropriate gene expression, enhanced proliferation of initiated cells, resistance to apoptosis, aggressive tumor neovascularization, invasion through tumor-associated basement membrane and metastasis, etc. Inflammation-induced reactive oxygen and nitrogen species cause damage to important cellular components (e.g., DNA, proteins and lipids), which can directly or indirectly contribute to malignant cell transformation. Overexpression, elevated secretion, or abnormal activation of proinflammatory mediators, such as cytokines, chemokines, cyclooxygenase-2, prostaglandins, inducible nitric oxide synthase, and nitric oxide, and a distinct network of intracellular signaling molecules including upstream kinases and transcription factors facilitate tumor promotion and progression. While inflammation promotes development of cancer, components of the tumor microenvironment, such as tumor cells, stromal cells in surrounding tissue and infiltrated inflammatory/immune cells generate an intratumoral inflammatory state by aberrant expression or activation of some proinflammatory molecules. 
Many of proinflammatory mediators, especially cytokines, chemokines and prostaglandins, turn on the angiogenic switches mainly controlled by vascular endothelial growth factor, thereby inducing inflammatory angiogenesis and tumor cell-stroma communication. This will end up with tumor angiogenesis, metastasis and invasion. Moreover, cellular microRNAs are emerging as a potential link between inflammation and cancer. The present article highlights the role of various proinflammatory mediators in carcinogenesis and their promise as potential targets for chemoprevention of inflammation-associated carcinogenesis.", "title": "" }, { "docid": "c5bcc3434495d10627d05ed032661f94", "text": "An important part of textual information around the world contains some kind of geographic features. User queries with geographic references are becoming very common and human expectations from a search engine are even higher. Although several works have been focused on this area, the interpretation of the geographic information in order to better satisfy the user needs continues being a challenge. This work proposes different techniques which are involved in the process of identifying and analyzing the geographic information in textual documents and queries in natural languages. A geographic ontology GeoNW has been built by combining GeoNames, WordNet and Wikipedia resources. Based on the information stored in GeoNW, geographic terms are identified and an algorithm for solving the toponym disambiguation problem is proposed. Once the geographic information is processed, we obtain a geographic ranking list of documents which is combined with a standard textual ranking list of documents for producing the final results. GeoCLEF test collection is used for evaluating the accuracy of the result.", "title": "" }, { "docid": "e2b98c529a0175758b2edafe284d0dc7", "text": "This paper is concerned with the problem of fuzzy-filter design for discrete-time nonlinear systems in the Takagi-Sugeno (T-S) form. Different from existing fuzzy filters, the proposed ones are designed in finite-frequency domain. First, a so-called finite-frequency l2 gain is defined that extends the standard l2 gain. Then, a sufficient condition for the filtering-error system with a finite-frequency l2 gain is derived. Based on the obtained condition, three fuzzy filters are designed to deal with noises in the low-, middle-, and high-frequency domain, respectively. The proposed fuzzy-filtering method can get a better noise-attenuation performance when frequency ranges of noises are known beforehand. An example about a tunnel-diode circuit is given to illustrate its effectiveness.", "title": "" }, { "docid": "47e11b1d734b1dcacc182e55d378f2a2", "text": "Experience replay plays an important role in the success of deep reinforcement learning (RL) by helping stabilize the neural networks. It has become a new norm in deep RL algorithms. In this paper, however, we showcase that varying the size of the experience replay buffer can hurt the performance even in very simple tasks. The size of the replay buffer is actually a hyper-parameter which needs careful tuning. Moreover, our study of experience replay leads to the formulation of the Combined DQN algorithm, which can significantly outperform primitive DQN in some tasks.", "title": "" }, { "docid": "88cb8c2f7f4fd5cdc95cc8e48faa3cb7", "text": "Prediction or prognostication is at the core of modern evidence-based medicine. 
Prediction of overall mortality and cardiovascular disease can be improved by a systematic evaluation of measurements from large-scale epidemiological studies or by using nested sampling designs to discover new markers from omics technologies. In study I, we investigated if prediction measures such as calibration, discrimination and reclassification could be calculated within traditional sampling designs and which of these designs were the most efficient. We found that it is possible to calculate prediction measures by using a proper weighting system and that a stratified case-cohort design is a reasonable choice both in terms of efficiency and simplicity. In study II, we investigated the clinical utility of several genetic scores for incident coronary heart disease. We found that genetic information could be of clinical value in improving the allocation of patients to correct risk strata and that the assessment of a genetic risk score among intermediate risk subjects could help to prevent about one coronary heart disease event every 318 people screened. In study III, we explored the association between circulating metabolites and incident coronary heart disease. We found four new metabolites associated with coronary heart disease independently of established cardiovascular risk factors and with evidence of clinical utility. By using genetic information we determined a potential causal effect on coronary heart disease of one of these novel metabolites. In study IV, we compared a large number of demographics, health and lifestyle measurements for association with all-cause and cause-specific mortality. By ranking measurements in terms of their predictive abilities we could provide new insights about their relative importance, as well as reveal some unexpected associations. Moreover we developed and validated a prediction score for five-year mortality with good discrimination ability and calibrated it for the entire UK population. In conclusion, we applied a translational approach spanning from the discovery of novel biomarkers to their evaluation in terms of clinical utility. We combined this effort with methodological improvements aimed to expand prediction measures in settings that were not previously explored. We identified promising novel metabolomics markers for cardiovascular disease and supported the potential clinical utility of a genetic score in primary prevention. Our results might fuel future studies aimed to implement these findings in clinical practice.", "title": "" }, { "docid": "774f4189181c6cdf666ecb5402969a5a", "text": "INTRODUCTION\nOsteopathic Manipulative Treatment (OMT) is effective in improving function, movement and restoring pain conditions. Despite clinical results, the mechanisms of how OMT achieves its effects remain unclear. The fascial system is described as a tensional network that envelops the human body. 
Direct or indirect manipulations of the fascial system are a distinctive part of OMT.\n\n\nOBJECTIVE\nThis review describes the biological effects of direct and indirect manipulation of the fascial system.\n\n\nMATERIAL AND METHODS\nLiterature search was performed in February 2016 in the electronic databases: Cochrane, Medline, Scopus, Ostmed, Pedro and authors' publications relative to Fascia Research Congress Website.\n\n\nRESULTS\nManipulation of the fascial system seems to interfere with some cellular processes providing various pro-inflammatory and anti-inflammatory cells and molecules.\n\n\nDISCUSSION\nDespite growing research in the osteopathic field, biological effects of direct or indirect manipulation of the fascial system are not conclusive.\n\n\nCONCLUSION\nTo elevate manual medicine as a primary intervention in clinical settings, it's necessary to clarify how OMT modalities work in order to underpin their clinical efficacies.", "title": "" }, { "docid": "87f05972a93b2b432d0dad6d55e97502", "text": "The daunting volumes of community-contributed media contents on the Internet have become one of the primary sources for online advertising. However, conventional advertising treats image and video advertising as general text advertising by displaying relevant ads based on the contents of the Web page, without considering the inherent characteristics of visual contents. This article presents a contextual advertising system driven by images, which automatically associates relevant ads with an image rather than the entire text in a Web page and seamlessly inserts the ads in the nonintrusive areas within each individual image. The proposed system, called ImageSense, supports scalable advertising of, from root to node, Web sites, pages, and images. In ImageSense, the ads are selected based on not only textual relevance but also visual similarity, so that the ads yield contextual relevance to both the text in the Web page and the image content. The ad insertion positions are detected based on image salience, as well as face and text detection, to minimize intrusiveness to the user. We evaluate ImageSense on a large-scale real-world images and Web pages, and demonstrate the effectiveness of ImageSense for online image advertising.", "title": "" }, { "docid": "699c6a7b4f938d6a45d65878f08335e4", "text": "Fuzzing is a popular dynamic program analysis technique used to find vulnerabilities in complex software. Fuzzing involves presenting a target program with crafted malicious input designed to cause crashes, buffer overflows, memory errors, and exceptions. Crafting malicious inputs in an efficient manner is a difficult open problem and often the best approach to generating such inputs is through applying uniform random mutations to pre-existing valid inputs (seed files). We present a learning technique that uses neural networks to learn patterns in the input files from past fuzzing explorations to guide future fuzzing explorations. In particular, the neural models learn a function to predict good (and bad) locations in input files to perform fuzzing mutations based on the past mutations and corresponding code coverage information. We implement several neural models including LSTMs and sequence-to-sequence models that can encode variable length input files. 
We incorporate our models in the state-of-the-art AFL (American Fuzzy Lop) fuzzer and show significant improvements in terms of code coverage, unique code paths, and crashes for various input formats including ELF, PNG, PDF, and XML.", "title": "" }, { "docid": "b1c00b7801a51d11a8384e5977d7e041", "text": "In this article, we report the results of 2 studies that were conducted to investigate whether adult attachment theory explains employee behavior at work. In the first study, we examined the structure of a measure of adult attachment and its relations with measures of trait affectivity and the Big Five. In the second study, we examined the relations between dimensions of attachment and emotion regulation behaviors, turnover intentions, and supervisory reports of counterproductive work behavior and organizational citizenship behavior. Results showed that anxiety and avoidance represent 2 higher order dimensions of attachment that predicted these criteria (except for counterproductive work behavior) after controlling for individual difference variables and organizational commitment. The implications of these results for the study of attachment at work are discussed.", "title": "" }, { "docid": "160d488f12fa1db16756df36c649a76a", "text": "Cutaneous metastases are a rare event, representing 0.7% to 2.0% of all cutaneous malignant neoplasms. They may be the first sign of a previously undiagnosed visceral malignancy or the initial presentation of a recurrent neoplasm. The frequency of cutaneous metastases according to the type of underlying malignancies varies with sex. In men, the most common internal malignancies leading to cutaneous metastases are lung cancer, colon cancer, melanoma, squamous cell carcinoma of the oral cavity, and renal cell carcinoma. In women, breast cancer, colon cancer, melanoma, lung cancer, and ovarian cancer are the most common malignancies leading to cutaneous metastases.", "title": "" } ]
scidocsrr
68ab5cce56a5d1352e1e211e33aec611
Memory Warps for Learning Long-Term Online Video Representations
[ { "docid": "76ad212ccd103c93d45c1ffa0e208b45", "text": "The highest accuracy object detectors to date are based on a two-stage approach popularized by R-CNN, where a classifier is applied to a sparse set of candidate object locations. In contrast, one-stage detectors that are applied over a regular, dense sampling of possible object locations have the potential to be faster and simpler, but have trailed the accuracy of two-stage detectors thus far. In this paper, we investigate why this is the case. We discover that the extreme foreground-background class imbalance encountered during training of dense detectors is the central cause. We propose to address this class imbalance by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples. Our novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. To evaluate the effectiveness of our loss, we design and train a simple dense detector we call RetinaNet. Our results show that when trained with the focal loss, RetinaNet is able to match the speed of previous one-stage detectors while surpassing the accuracy of all existing state-of-the-art two-stage detectors.", "title": "" }, { "docid": "7240d65e0bc849a569d840a461157b2c", "text": "Deep convolutional neutral networks have achieved great success on image recognition tasks. Yet, it is non-trivial to transfer the state-of-the-art image recognition networks to videos as per-frame evaluation is too slow and unaffordable. We present deep feature flow, a fast and accurate framework for video recognition. It runs the expensive convolutional sub-network only on sparse key frames and propagates their deep feature maps to other frames via a flow field. It achieves significant speedup as flow computation is relatively fast. The end-to-end training of the whole architecture significantly boosts the recognition accuracy. Deep feature flow is flexible and general. It is validated on two recent large scale video datasets. It makes a large step towards practical video recognition. Code would be released.", "title": "" }, { "docid": "a5aff68d94b1fcd5fef109f8685b8b4a", "text": "We propose a novel method for temporally pooling frames in a video for the task of human action recognition. The method is motivated by the observation that there are only a small number of frames which, together, contain sufficient information to discriminate an action class present in a video, from the rest. The proposed method learns to pool such discriminative and informative frames, while discarding a majority of the non-informative frames in a single temporal scan of the video. Our algorithm does so by continuously predicting the discriminative importance of each video frame and subsequently pooling them in a deep learning framework. We show the effectiveness of our proposed pooling method on standard benchmarks where it consistently improves on baseline pooling methods, with both RGB and optical flow based Convolutional networks. Further, in combination with complementary video representations, we show results that are competitive with respect to the state-of-the-art results on two challenging and publicly available benchmark datasets.", "title": "" }, { "docid": "5a2dcebfadb2e52d1f506b5e8e6547d8", "text": "The ability to predict and therefore to anticipate the future is an important attribute of intelligence. It is also of utmost importance in real-time systems, e.g. 
in robotics or autonomous driving, which depend on visual scene understanding for decision making. While prediction of the raw RGB pixel values in future video frames has been studied in previous work, here we introduce the novel task of predicting semantic segmentations of future frames. Given a sequence of video frames, our goal is to predict segmentation maps of not yet observed video frames that lie up to a second or further in the future. We develop an autoregressive convolutional neural network that learns to iteratively generate multiple frames. Our results on the Cityscapes dataset show that directly predicting future segmentations is substantially better than predicting and then segmenting future RGB frames. Prediction results up to half a second in the future are visually convincing and are much more accurate than those of a baseline based on warping semantic segmentations using optical flow.", "title": "" } ]
[ { "docid": "115d3bc01e9b7fe41bdd9fc987c8676c", "text": "A novel switching median filter incorporating with a powerful impulse noise detection method, called the boundary discriminative noise detection (BDND), is proposed in this paper for effectively denoising extremely corrupted images. To determine whether the current pixel is corrupted, the proposed BDND algorithm first classifies the pixels of a localized window, centering on the current pixel, into three groups-lower intensity impulse noise, uncorrupted pixels, and higher intensity impulse noise. The center pixel will then be considered as \"uncorrupted,\" provided that it belongs to the \"uncorrupted\" pixel group, or \"corrupted.\" For that, two boundaries that discriminate these three groups require to be accurately determined for yielding a very high noise detection accuracy-in our case, achieving zero miss-detection rate while maintaining a fairly low false-alarm rate, even up to 70% noise corruption. Four noise models are considered for performance evaluation. Extensive simulation results conducted on both monochrome and color images under a wide range (from 10% to 90%) of noise corruption clearly show that our proposed switching median filter substantially outperforms all existing median-based filters, in terms of suppressing impulse noise while preserving image details, and yet, the proposed BDND is algorithmically simple, suitable for real-time implementation and application.", "title": "" }, { "docid": "cbc0e3dff1d86d88c416b1119fd3da82", "text": "One of the most challenging tasks for a flying robot is to autonomously navigate between target locations quickly and reliably while avoiding obstacles in its path, and with little to no a-priori knowledge of the operating environment. This challenge is addressed in the present paper. We describe the system design and software architecture of our proposed solution, ar X iv :1 71 2. 02 05 2v 1 [ cs .R O ] 6 D ec 2 01 7 and showcase how all the distinct components can be integrated to enable smooth robot operation. We provide critical insight on hardware and software component selection and development, and present results from extensive experimental testing in real-world warehouse environments. Experimental testing reveals that our proposed solution can deliver fast and robust aerial robot autonomous navigation in cluttered, GPS-denied environments.", "title": "" }, { "docid": "e9779af1233484b2ce9cc23d03c9beec", "text": "A number of pixel-based image fusion algorithms (using averaging, contrast pyramids, the discrete wavelet transform and the dualtree complex wavelet transform (DT-CWT) to perform fusion) are reviewed and compared with a novel region-based image fusion method which facilitates increased flexibility with the definition of a variety of fusion rules. The DT-CWT method could dissolve an image into simpler data so we could analyze the characteristic which contained within the image and then fused it with other image that had already been decomposed and DT-CWT could reconstruct the image into its original form without losing its original data. The pixel-based and region-based rules are compared to know each of their capability and performance. Region-based methods have a number of advantages over pixel-based methods. 
These include: the ability to use more intelligent semantic fusion rules; and for regions with certain properties to be attenuated or accentuated.", "title": "" }, { "docid": "c8b382852f445c6f05c905371330dd07", "text": "Novelty and surprise play significant roles in animal behavior and in attempts to understand the neural mechanisms underlying it. They also play important roles in technology, where detecting observations that are novel or surprising is central to many applications, such as medical diagnosis, text processing, surveillance, and security. Theories of motivation, particularly of intrinsic motivation, place novelty and surprise among the primary factors that arouse interest, motivate exploratory or avoidance behavior, and drive learning. In many of these studies, novelty and surprise are not distinguished from one another: the words are used more-or-less interchangeably. However, while undeniably closely related, novelty and surprise are very different. The purpose of this article is first to highlight the differences between novelty and surprise and to discuss how they are related by presenting an extensive review of mathematical and computational proposals related to them, and then to explore the implications of this for understanding behavioral and neuroscience data. We argue that opportunities for improved understanding of behavior and its neural basis are likely being missed by failing to distinguish between novelty and surprise.", "title": "" }, { "docid": "d15add461f0ca58de13b3dc975f7fef7", "text": "A frequency compensation technique improving characteristic of power supply rejection ratio (PSRR) for two-stage operational amplifiers is presented. This technique is applicable to most known two-stage amplifier configurations. The detailed small-signal analysis of an exemplary amplifier with the proposed compensation and a comparison to its basic version reveal several benefits of the technique which can be effectively exploited in continuous-time filter designs. This comparison shows the possibility of PSRR bandwidth broadening of more than a decade, significant reduction of chip area, the unity-gain bandwidth and power consumption improvement. These benefits are gained at the cost of a non-monotonic phase characteristic of the open-loop differential voltage gain and limitation of a close-loop voltage gain. A prototype-integrated circuit, fabricated based on 0.35 mm complementary metal-oxide semiconductor technology, was used for the technique verification. Two pairs of amplifiers with the classical Miller compensation and a cascoded input stage were measured and compared to their improved counterparts. The measurement data fully confirm the theoretically predicted advantages of the proposed compensation technique.", "title": "" }, { "docid": "2cff047c4b2577c99aa66df211b0beda", "text": "Image denoising is an important pre-processing step in medical image analysis. Different algorithms have been proposed in past three decades with varying denoising performances. More recently, having outperformed all conventional methods, deep learning based models have shown a great promise. These methods are however limited for requirement of large training sample size and high computational costs. In this paper we show that using small sample size, denoising autoencoders constructed using convolutional layers can be used for efficient denoising of medical images. Heterogeneous images can be combined to boost sample size for increased denoising performance. 
Simplest of networks can reconstruct images with corruption levels so high that noise and signal are not differentiable to human eye.", "title": "" }, { "docid": "173c0124ac81cfe8fa10fbdc20a1a094", "text": "This paper presents a new approach to compare fuzzy numbers using α-distance. Initially, the metric distance on the interval numbers based on the convex hull of the endpoints is proposed and it is extended to fuzzy numbers. All the properties of the α-distance are proved in details. Finally, the ranking of fuzzy numbers by the α-distance is discussed. In addition, the proposed method is compared with some known ones, the validity of the new method is illustrated by applying its to several group of fuzzy numbers.", "title": "" }, { "docid": "e8ff86bd701792e6eb5f2fa8fcc2e028", "text": "Memory layout transformations via data reorganization are very common operations, which occur as a part of the computation or as a performance optimization in data-intensive applications. These operations require inefficient memory access patterns and roundtrip data movement through the memory hierarchy, failing to utilize the performance and energy-efficiency potentials of the memory subsystem. This paper proposes a high-bandwidth and energy-efficient hardware accelerated memory layout transform (HAMLeT) system integrated within a 3D-stacked DRAM. HAMLeT uses a low-overhead hardware that exploits the existing infrastructure in the logic layer of 3D-stacked DRAMs, and does not require any changes to the DRAM layers, yet it can fully exploit the locality and parallelism within the stack by implementing efficient layout transform algorithms. We analyze matrix layout transform operations (such as matrix transpose, matrix blocking and 3D matrix rotation) and demonstrate that HAMLeT can achieve close to peak system utilization, offering up to an order of magnitude performance improvement compared to the CPU and GPU memory subsystems which does not employ HAMLeT.", "title": "" }, { "docid": "6b0294315128234ccdbec4532e6c4f7a", "text": "Carrying out similarity and analogy comparisons can be modeled as the alignment and mapping of structured representations. In this article we focus on three aspects of comparison that are central in structure-mapping theory. All three are controversial. First, comparison involves structured representations. Second, the comparison process is driven by a preference for connected relational structure. Third, the mapping between domains is rooted in semantic similarity between the relations that characterize the domains. For each of these points, we review supporting evidence and discuss some challenges raised by other researchers. We end with a discussion of the role of structure mapping in other cognitive processes.", "title": "" }, { "docid": "888efce805d5271f0b6571748793c4c6", "text": "Pedagogical changes and new models of delivering educational content should be considered in the effort to address the recommendations of the 2007 Institute of Medicine report and Benner's recommendations on the radical transformation of nursing. Transition to the nurse anesthesia practice doctorate addresses the importance of these recommendations, but educational models and specific strategies on how to implement changes in educational models and systems are still emerging. The flipped classroom (FC) is generating a considerable amount of buzz in academic circles. 
The FC is a pedagogical model that employs asynchronous video lectures, reading assignments, practice problems, and other digital, technology-based resources outside the classroom, and interactive, group-based, problem-solving activities in the classroom. This FC represents a unique combination of constructivist ideology and behaviorist principles, which can be used to address the gap between didactic education and clinical practice performance. This article reviews recent evidence supporting use of the FC in health profession education and suggests ways to implement the FC in nurse anesthesia educational programs.", "title": "" }, { "docid": "06b99205e1dc53e5120a22dc4f927aa0", "text": "The last 2 decades witnessed a surge in empirical studies on the variables associated with achievement in higher education. A number of meta-analyses synthesized these findings. In our systematic literature review, we included 38 meta-analyses investigating 105 correlates of achievement, based on 3,330 effect sizes from almost 2 million students. We provide a list of the 105 variables, ordered by the effect size, and summary statistics for central research topics. The results highlight the close relation between social interaction in courses and achievement. Achievement is also strongly associated with the stimulation of meaningful learning by presenting information in a clear way, relating it to the students, and using conceptually demanding learning tasks. Instruction and communication technology has comparably weak effect sizes, which did not increase over time. Strong moderator effects are found for almost all instructional methods, indicating that how a method is implemented in detail strongly affects achievement. Teachers with high-achieving students invest time and effort in designing the microstructure of their courses, establish clear learning goals, and employ feedback practices. This emphasizes the importance of teacher training in higher education. Students with high achievement are characterized by high self-efficacy, high prior achievement and intelligence, conscientiousness, and the goal-directed use of learning strategies. Barring the paucity of controlled experiments and the lack of meta-analyses on recent educational innovations, the variables associated with achievement in higher education are generally well investigated and well understood. By using these findings, teachers, university administrators, and policymakers can increase the effectivity of higher education. (PsycINFO Database Record", "title": "" }, { "docid": "5a08b007fbe1a424f9788ea68ec47d80", "text": "We introduce a novel ensemble model based on random projections. The contribution of using random projections is two-fold. First, the randomness provides the diversity which is required for the construction of an ensemble model. Second, random projections embed the original set into a space of lower dimension while preserving the dataset’s geometrical structure to a given distortion. This reduces the computational complexity of the model construction as well as the complexity of the classification. Furthermore, dimensionality reduction removes noisy features from the data and also represents the information which is inherent in the raw data by using a small number of features. The noise removal increases the accuracy of the classifier. 
The proposed scheme was tested using WEKA based procedures that were applied to 16 benchmark dataset from the UCI repository.", "title": "" }, { "docid": "752cf1c7cefa870c01053d87ff4f445c", "text": "Cannabidiol (CBD) represents a new promising drug due to a wide spectrum of pharmacological actions. In order to relate CBD clinical efficacy to its pharmacological mechanisms of action, we performed a bibliographic search on PUBMED about all clinical studies investigating the use of CBD as a treatment of psychiatric symptoms. Findings to date suggest that (a) CBD may exert antipsychotic effects in schizophrenia mainly through facilitation of endocannabinoid signalling and cannabinoid receptor type 1 antagonism; (b) CBD administration may exhibit acute anxiolytic effects in patients with generalised social anxiety disorder through modification of cerebral blood flow in specific brain sites and serotonin 1A receptor agonism; (c) CBD may reduce withdrawal symptoms and cannabis/tobacco dependence through modulation of endocannabinoid, serotoninergic and glutamatergic systems; (d) the preclinical pro-cognitive effects of CBD still lack significant results in psychiatric disorders. In conclusion, current evidences suggest that CBD has the ability to reduce psychotic, anxiety and withdrawal symptoms by means of several hypothesised pharmacological properties. However, further studies should include larger randomised controlled samples and investigate the impact of CBD on biological measures in order to correlate CBD's clinical effects to potential modifications of neurotransmitters signalling and structural and functional cerebral changes.", "title": "" }, { "docid": "f6df414f8f61dbdab32be2f05d921cb8", "text": "The task of discriminating one object from another is almost trivial for a human being. However, this task is computationally taxing for most modern machine learning methods, whereas, we perform this task at ease given very few examples for learning. It has been proposed that the quick grasp of concept may come from the shared knowledge between the new example and examples previously learned. We believe that the key to one-shot learning is the sharing of common parts as each part holds immense amounts of information on how a visual concept is constructed. We propose an unsupervised method for learning a compact dictionary of image patches representing meaningful components of an objects. Using those patches as features, we build a compositional model that outperforms a number of popular algorithms on a one-shot learning task. We demonstrate the effectiveness of this approach on hand-written digits and show that this model generalizes to multiple datasets.", "title": "" }, { "docid": "dfba47fd3b84d6346052b559568a0c21", "text": "Understanding gaming motivations is important given the growing trend of incorporating game-based mechanisms in non-gaming applications. In this paper, we describe the development and validation of an online gaming motivations scale based on a 3-factor model. Data from 2,071 US participants and 645 Hong Kong and Taiwan participants is used to provide a cross-cultural validation of the developed scale. Analysis of actual in-game behavioral metrics is also provided to demonstrate predictive validity of the scale.", "title": "" }, { "docid": "b4c25df52a0a5f6ab23743d3ca9a3af2", "text": "Measuring similarity between texts is an important task for several applications. 
Available approaches to measure document similarity are inadequate for document pairs that have non-comparable lengths, such as a long document and its summary. This is because of the lexical, contextual and the abstraction gaps between a long document of rich details and its concise summary of abstract information. In this paper, we present a document matching approach to bridge this gap, by comparing the texts in a common space of hidden topics. We evaluate the matching algorithm on two matching tasks and find that it consistently and widely outperforms strong baselines. We also highlight the benefits of incorporating domain knowledge to text matching.", "title": "" }, { "docid": "cf8bf65059568ca717289d8f23b25b38", "text": "AIM\nThis paper aims to systematically review studies investigating the strength of association between FMS composite scores and subsequent risk of injury, taking into account both methodological quality and clinical and methodological diversity.\n\n\nDESIGN\nSystematic review with meta-analysis.\n\n\nDATA SOURCES\nA systematic search of electronic databases was conducted for the period between their inception and 3 March 2016 using PubMed, Medline, Google Scholar, Scopus, Academic Search Complete, AMED (Allied and Complementary Medicine Database), CINAHL (Cumulative Index to Nursing and Allied Health Literature), Health Source and SPORTDiscus.\n\n\nELIGIBILITY CRITERIA FOR SELECTING STUDIES\nInclusion criteria: (1) English language, (2) observational prospective cohort design, (3) original and peer-reviewed data, (4) composite FMS score, used to define exposure and non-exposure groups and (5) musculoskeletal injury, reported as the outcome.\n\n\nEXCLUSION CRITERIA\n(1) data reported in conference abstracts or non-peer-reviewed literature, including theses, and (2) studies employing cross-sectional or retrospective study designs.\n\n\nRESULTS\n24 studies were appraised using the Quality of Cohort Studies assessment tool. In male military personnel, there was 'strong' evidence that the strength of association between FMS composite score (cut-point ≤14/21) and subsequent injury was 'small' (pooled risk ratio=1.47, 95% CI 1.22 to 1.77, p<0.0001, I2=57%). There was 'moderate' evidence to recommend against the use of FMS composite score as an injury prediction test in football (soccer). For other populations (including American football, college athletes, basketball, ice hockey, running, police and firefighters), the evidence was 'limited' or 'conflicting'.\n\n\nCONCLUSION\nThe strength of association between FMS composite scores and subsequent injury does not support its use as an injury prediction tool.\n\n\nTRIAL REGISTRATION NUMBER\nPROSPERO registration number CRD42015025575.", "title": "" }, { "docid": "bf294a4c3af59162b2f401e2cdcb060b", "text": "We present MCTest, a freely available set of stories and associated questions intended for research on the machine comprehension of text. Previous work on machine comprehension (e.g., semantic modeling) has made great strides, but primarily focuses either on limited-domain datasets, or on solving a more restricted goal (e.g., open-domain relation extraction). In contrast, MCTest requires machines to answer multiple-choice reading comprehension questions about fictional stories, directly tackling the high-level goal of open-domain machine comprehension. Reading comprehension can test advanced abilities such as causal reasoning and understanding the world, yet, by being multiple-choice, still provide a clear metric. 
By being fictional, the answer typically can be found only in the story itself. The stories and questions are also carefully limited to those a young child would understand, reducing the world knowledge that is required for the task. We present the scalable crowd-sourcing methods that allow us to cheaply construct a dataset of 500 stories and 2000 questions. By screening workers (with grammar tests) and stories (with grading), we have ensured that the data is the same quality as another set that we manually edited, but at one tenth the editing cost. By being open-domain, yet carefully restricted, we hope MCTest will serve to encourage research and provide a clear metric for advancement on the machine comprehension of text. 1 Reading Comprehension A major goal for NLP is for machines to be able to understand text as well as people. Several research disciplines are focused on this problem: for example, information extraction, relation extraction, semantic role labeling, and recognizing textual entailment. Yet these techniques are necessarily evaluated individually, rather than by how much they advance us towards the end goal. On the other hand, the goal of semantic parsing is the machine comprehension of text (MCT), yet its evaluation requires adherence to a specific knowledge representation, and it is currently unclear what the best representation is, for open-domain text. We believe that it is useful to directly tackle the top-level task of MCT. For this, we need a way to measure progress. One common method for evaluating someone’s understanding of text is by giving them a multiple-choice reading comprehension test. This has the advantage that it is objectively gradable (vs. essays) yet may test a range of abilities such as causal or counterfactual reasoning, inference among relations, or just basic understanding of the world in which the passage is set. Therefore, we propose a multiple-choice reading comprehension task as a way to evaluate progress on MCT. We have built a reading comprehension dataset containing 500 fictional stories, with 4 multiple choice questions per story. It was built using methods which can easily scale to at least 5000 stories, since the stories were created, and the curation was done, using crowd sourcing almost entirely, at a total of $4.00 per story. We plan to periodically update the dataset to ensure that methods are not overfitting to the existing data. The dataset is open-domain, yet restricted to concepts and words that a 7 year old is expected to understand. This task is still beyond the capability of today’s computers and algorithms.", "title": "" }, { "docid": "5e2eee141595ae58ca69ee694dc51c8a", "text": "Evidence-based dietary information represented as unstructured text is a crucial information that needs to be accessed in order to help dietitians follow the new knowledge arrives daily with newly published scientific reports. Different named-entity recognition (NER) methods have been introduced previously to extract useful information from the biomedical literature. They are focused on, for example extracting gene mentions, proteins mentions, relationships between genes and proteins, chemical concepts and relationships between drugs and diseases. In this paper, we present a novel NER method, called drNER, for knowledge extraction of evidence-based dietary information. To the best of our knowledge this is the first attempt at extracting dietary concepts. DrNER is a rule-based NER that consists of two phases. 
The first one involves the detection and determination of the entities mention, and the second one involves the selection and extraction of the entities. We evaluate the method by using text corpora from heterogeneous sources, including text from several scientifically validated web sites and text from scientific publications. Evaluation of the method showed that drNER gives good results and can be used for knowledge extraction of evidence-based dietary recommendations.", "title": "" } ]
scidocsrr
0a5d07a8836a78ae4ab14ba1cb9b5778
Cold-start, warm-start and everything in between: An autoencoder based approach to recommendation
[ { "docid": "f415b38e6d43c8ed81ce97fd924def1b", "text": "Collaborative filtering is one of the most successful and widely used methods of automated product recommendation in online stores. The most critical component of the method is the mechanism of finding similarities among users using product ratings data so that products can be recommended based on the similarities. The calculation of similarities has relied on traditional distance and vector similarity measures such as Pearson’s correlation and cosine which, however, have been seldom questioned in terms of their effectiveness in the recommendation problem domain. This paper presents a new heuristic similarity measure that focuses on improving recommendation performance under cold-start conditions where only a small number of ratings are available for similarity calculation for each user. Experiments using three different datasets show the superiority of the measure in new user cold-start conditions. 2007 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "917154ffa5d9108fd07782d1c9a183ba", "text": "Recommender systems for automatically suggested items of interest to users have become increasingly essential in fields where mass personalization is highly valued. The popular core techniques of such systems are collaborative filtering, content-based filtering and combinations of these. In this paper, we discuss hybrid approaches, using collaborative and also content data to address cold-start - that is, giving recommendations to novel users who have no preference on any items, or recommending items that no user of the community has seen yet. While there have been lots of studies on solving the item-side problems, solution for user-side problems has not been seen public. So we develop a hybrid model based on the analysis of two probabilistic aspect models using pure collaborative filtering to combine with users' information. The experiments with MovieLen data indicate substantial and consistent improvements of this model in overcoming the cold-start user-side problem.", "title": "" } ]
[ { "docid": "179675ecf9ef119fcb0bc512995e2920", "text": "There is little evidence available on the use of robot-assisted therapy in subacute stroke patients. A randomized controlled trial was carried out to evaluate the short-time efficacy of intensive robot-assisted therapy compared to usual physical therapy performed in the early phase after stroke onset. Fifty-three subacute stroke patients at their first-ever stroke were enrolled 30 ± 7 days after the acute event and randomized into two groups, both exposed to standard therapy. Additional 30 sessions of robot-assisted therapy were provided to the Experimental Group. Additional 30 sessions of usual therapy were provided to the Control Group. The following impairment evaluations were performed at the beginning (T0), after 15 sessions (T1), and at the end of the treatment (T2): Fugl-Meyer Assessment Scale (FM), Modified Ashworth Scale-Shoulder (MAS-S), Modified Ashworth Scale-Elbow (MAS-E), Total Passive Range of Motion-Shoulder/Elbow (pROM), and Motricity Index (MI). Evidence of significant improvements in MAS-S (p = 0.004), MAS-E (p = 0.018) and pROM (p < 0.0001) was found in the Experimental Group. Significant improvement was demonstrated in both Experimental and Control Group in FM (EG: p < 0.0001, CG: p < 0.0001) and MI (EG: p < 0.0001, CG: p < 0.0001), with an higher improvement in the Experimental Group. Robot-assisted upper limb rehabilitation treatment can contribute to increasing motor recovery in subacute stroke patients. Focusing on the early phase of stroke recovery has a high potential impact in clinical practice.", "title": "" }, { "docid": "bc6877a5a83531a794ac1c8f7a4c7362", "text": "A number of times when using cross-validation (CV) while trying to do classification/probability estimation we have observed surprisingly low AUC's on real data with very few positive examples. AUC is the area under the ROC and measures the ranking ability and corresponds to the probability that a positive example receives a higher model score than a negative example. Intuition seems to suggest that no reasonable methodology should ever result in a model with an AUC significantly below 0.5. The focus of this paper is not on the estimator properties of CV (bias/variance/significance), but rather on the properties of the 'holdout' predictions based on which the CV performance of a model is calculated. We show that CV creates predictions that have an 'inverse' ranking with AUC well below 0.25 using features that were initially entirely unpredictive and models that can only perform monotonic transformations. In the extreme, combining CV with bagging (repeated averaging of out-of-sample predictions) generates 'holdout' predictions with perfectly opposite rankings on random data. While this would raise immediate suspicion upon inspection, we would like to caution the data mining community against using CV for stacking or in currently popular ensemble methods. They can reverse the predictions by assigning negative weights and produce in the end a model that appears to have close to perfect predictability while in reality the data was random.", "title": "" }, { "docid": "3e012db58ce7b25866a7c95b90b1aace", "text": "The goal of graph representation learning is to embed each vertex in a graph into a low-dimensional vector space. 
Existing graph representation learning methods can be classified into two categories: generative models that learn the underlying connectivity distribution in the graph, and discriminative models that predict the probability of edge existence between a pair of vertices. In this paper, we propose GraphGAN, an innovative graph representation learning framework unifying above two classes of methods, in which the generative model and discriminative model play a game-theoretical minimax game. Specifically, for a given vertex, the generative model tries to fit its underlying true connectivity distribution over all other vertices and produces “fake” samples to fool the discriminative model, while the discriminative model tries to detect whether the sampled vertex is from ground truth or generated by the generative model. With the competition between these two models, both of them can alternately and iteratively boost their performance. Moreover, when considering the implementation of generative model, we propose a novel graph softmax to overcome the limitations of traditional softmax function, which can be proven satisfying desirable properties of normalization, graph structure awareness, and computational efficiency. Through extensive experiments on real-world datasets, we demonstrate that GraphGAN achieves substantial gains in a variety of applications, including link prediction, node classification, and recommendation, over state-of-the-art baselines.", "title": "" }, { "docid": "9a03c5ff214a1a41280e6f4b335c87f1", "text": "In this paper, we present an automatic abstractive summarization system of meeting conversations. Our system extends a novel multi-sentence fusion algorithm in order to generate abstract templates. It also leverages the relationship between summaries and their source meeting transcripts to select the best templates for generating abstractive summaries of meetings. Our manual and automatic evaluation results demonstrate the success of our system in achieving higher scores both in readability and informativeness.", "title": "" }, { "docid": "6a7839b42c549e31740f70aa0079ad46", "text": "Deep learning has improved performance on many natural language processing (NLP) tasks individually. However, general NLP models cannot emerge within a paradigm that focuses on the particularities of a single metric, dataset, and task. We introduce the Natural Language Decathlon (decaNLP), a challenge that spans ten tasks: question answering, machine translation, summarization, natural language inference, sentiment analysis, semantic role labeling, relation extraction, goal-oriented dialogue, semantic parsing, and commonsense pronoun resolution. We cast all tasks as question answering over a context. Furthermore, we present a new multitask question answering network (MQAN) that jointly learns all tasks in decaNLP without any task-specific modules or parameters more effectively than sequence-to-sequence and reading comprehension baselines. MQAN shows improvements in transfer learning for machine translation and named entity recognition, domain adaptation for sentiment analysis and natural language inference, and zero-shot capabilities for text classification. We demonstrate that the MQAN’s multi-pointer-generator decoder is key to this success and that performance further improves with an anti-curriculum training strategy. Though designed for decaNLP, MQAN also achieves state of the art results on the WikiSQL semantic parsing task in the single-task setting. 
We also release code for procuring and processing data, training and evaluating models, and reproducing all experiments for decaNLP.", "title": "" }, { "docid": "60718ad958d65eb60a520d516f1dd4ea", "text": "With the advent of the Internet, more and more public universities in Malaysia are putting in effort to introduce e-learning in their respective universities. Using a structured questionnaire derived from the literature, data was collected from 250 undergraduate students from a public university in Penang, Malaysia. Data was analyzed using AMOS version 16. The results of the structural equation model indicated that service quality (β = 0.20, p < 0.01), information quality (β = 0.37, p < 0.01) and system quality (β = 0.20, p < 0.01) were positively related to user satisfaction explaining a total of 45% variance. The second regression analysis was to examine the impact of user satisfaction on continuance intention. The results showed that satisfaction (β = 0.31, p < 0.01), system quality (β = 0.18, p < 0.01) and service quality (β = 0.30, p < 0.01) were positively related to continuance intention explaining 44% of the variance. Implications from these findings to e-learning system developers and implementers were further elaborated.", "title": "" }, { "docid": "4d7616ce77bd32bcb6bc140279aefea8", "text": "We argue that living systems process information such that functionality emerges in them on a continuous basis. We then provide a framework that can explain and model the normativity of biological functionality. In addition we offer an explanation of the anticipatory nature of functionality within our overall approach. We adopt a Peircean approach to Biosemiotics, and a dynamical approach to Digital-Analog relations and to the interplay between different levels of functionality in autonomous systems, taking an integrative approach. We then apply the underlying biosemiotic logic to a particular biological system, giving a model of the B-Cell Receptor signaling system, in order to demonstrate how biosemiotic concepts can be used to build an account of biological information and functionality. Next we show how this framework can be used to explain and model more complex aspects of biological normativity, for example, how cross-talk between different signaling pathways can be avoided. Overall, we describe an integrated theoretical framework for the emergence of normative functions and, consequently, for the way information is transduced across several interconnected organizational levels in an autonomous system, and we demonstrate how this can be applied in real biological phenomena. Our aim is to open the way towards realistic tools for the modeling of information and normativity in autonomous biological agents.", "title": "" }, { "docid": "350495750961199ae746ee17eb0ba819", "text": "Gynecologic emergencies are relatively common and include ectopic pregnancies, adnexal torsion, tubo-ovarian abscess, hemorrhagic ovarian cysts, gynecologic hemorrhage, and vulvovaginal trauma. The purpose of this article is to provide a concise review of these emergencies, focusing on the evaluation and treatment options for the patient. In many cases, other causes of an acute abdomen are in the differential diagnosis. Understanding the tenets of diagnosis helps the surgeon narrow the etiology and guide appropriate treatment.", "title": "" }, { "docid": "f3bed3a3234fd61a168c9653a82b2f04", "text": "Digital libraries such as the NASA Astrophysics Data System (Kurtz et al. 
2004) permit the easy accumulation of a new type of bibliometric measure, the number of electronic accesses (\"reads\") of individual articles. We explore various aspects of this new measure. We examine the obsolescence function as measured by actual reads, and show that it can be well fit by the sum of four exponentials with very different time constants. We compare the obsolescence function as measured by readership with the obsolescence function as measured by citations. We find that the citation function is proportional to the sum of two of the components of the readership function. This proves that the normative theory of citation is true in the mean. We further examine in detail the similarities and differences between the citation rate, the readership rate and the total citations for individual articles, and discuss some of the causes. Using the number of reads as a bibliometric measure for individuals, we introduce the read-cite diagram to provide a two-dimensional view of an individual's scientific productivity. We develop a simple model to account for an individual's reads and cites and use it to show that the position of a person in the read-cite diagram is a function of age, innate productivity, and work history. We show the age biases of both reads and cites, and develop two new bibliometric measures which have substantially less age bias than citations: SumProd, a weighted sum of total citations and the readership rate, intended to show the total productivity of an individual; and Read10, the readership rate for papers published in the last ten years, intended to show an individual's current productivity. We also discuss the effect of normalization (dividing by the number of authors on a paper) on these statistics. We apply SumProd and Read10 using new, non-parametric techniques to rank and compare different astronomical research organizations. Subject headings: digital libraries; bibliometrics; sociology of science; information retrieval", "title": "" }, { "docid": "50c961c8b229c7a4b31ca6a67e06112c", "text": "The emerging three-dimensional (3D) chip architectures, with their intrinsic capability of reducing the wire length, is one of the promising solutions to mitigate the interconnect problem in modern microprocessor designs. 3D memory stacking also enables much higher memory bandwidth for future chip-multiprocessor design, mitigating the ``memory wall\" problem. In addition, heterogenous integration enabled by 3D technology can also result in innovation designs for future microprocessors. This paper serves as a survey of various approaches to design future 3D microprocessors, leveraging the benefits of fast latency, higher bandwidth, and heterogeneous integration capability that are offered by 3D technology.", "title": "" }, { "docid": "c443ca07add67d6fc0c4901e407c68f2", "text": "This paper proposes a compiler-based programming framework that automatically translates user-written structured grid code into scalable parallel implementation code for GPU-equipped clusters. To enable such automatic translations, we design a small set of declarative constructs that allow the user to express stencil computations in a portable and implicitly parallel manner. Our framework translates the user-written code into actual implementation code in CUDA for GPU acceleration and MPI for node-level parallelization with automatic optimizations such as computation and communication overlapping. 
We demonstrate the feasibility of such automatic translations by implementing several structured grid applications in our framework. Experimental results on the TSUBAME2.0 GPU-based supercomputer show that the performance is comparable as hand-written code and good strong and weak scalability up to 256 GPUs.", "title": "" }, { "docid": "fcd649a870c697edca602203d1198d88", "text": "The present work investigates the feasibility of a non invasive blood glucose monitoring system using an antenna placed in a cuff and wrapped around the wrist. The technique is based on the fact that glucose levels affect the dielectric properties of blood. Using a realistic tissue model of a human hand, simulations have been performed with HFSS to obtain the antenna input impedance. The resonant frequency is seen to shift with changes in blood glucose levels. Based on our previous work of tissue characterization, an analytical technique can be developed to relate this frequency shift to the permittivity and conductivity of blood, from which the glucose levels are determined.", "title": "" }, { "docid": "3dee885a896e9864ff06b546d64f6df1", "text": "BACKGROUND\nThe 12-item Short Form Health Survey (SF-12) as a shorter alternative of the SF-36 is largely used in health outcomes surveys. The aim of this study was to validate the SF-12 in Iran.\n\n\nMETHODS\nA random sample of the general population aged 15 years and over living in Tehran, Iran completed the SF-12. Reliability was estimated using internal consistency and validity was assessed using known groups comparison and convergent validity. In addition, the factor structure of the questionnaire was extracted by performing both exploratory factor analysis (EFA) and confirmatory factor analysis (CFA).\n\n\nRESULTS\nIn all, 5587 individuals were studied (2721 male and 2866 female). The mean age and formal education of the respondents were 35.1 (SD = 15.4) and 10.2 (SD = 4.4) years respectively. The results showed satisfactory internal consistency for both summary measures, that are the Physical Component Summary (PCS) and the Mental Component Summary (MCS); Cronbach's alpha for PCS-12 and MCS-12 was 0.73 and 0.72, respectively. Known-groups comparison showed that the SF-12 discriminated well between men and women and those who differed in age and educational status (P < 0.001). In addition, correlations between the SF-12 scales and single items showed that the physical functioning, role physical, bodily pain and general health subscales correlated higher with the PCS-12 score, while the vitality, social functioning, role emotional and mental health subscales more correlated with the MCS-12 score lending support to its good convergent validity. Finally the principal component analysis indicated a two-factor structure (physical and mental health) that jointly accounted for 57.8% of the variance. The confirmatory factory analysis also indicated a good fit to the data for the two-latent structure (physical and mental health).\n\n\nCONCLUSION\nIn general the findings suggest that the SF-12 is a reliable and valid measure of health related quality of life among Iranian population. However, further studies are needed to establish stronger psychometric properties for this alternative form of the SF-36 Health Survey in Iran.", "title": "" }, { "docid": "46bc17ab45e11b5c9c07200a60db399f", "text": "Locality-sensitive hashing (LSH) is a basic primitive in several large-scale data processing applications, including nearest-neighbor search, de-duplication, clustering, etc. 
In this paper we propose a new and simple method to speed up the widely-used Euclidean realization of LSH. At the heart of our method is a fast way to estimate the Euclidean distance between two d-dimensional vectors; this is achieved by the use of randomized Hadamard transforms in a non-linear setting. This decreases the running time of a (k, L)-parameterized LSH from O(dkL) to O(dlog d + kL). Our experiments show that using the new LSH in nearest-neighbor applications can improve their running times by significant amounts. To the best of our knowledge, this is the first running time improvement to LSH that is both provable and practical.", "title": "" }, { "docid": "6cd7775052d3493199ee05ecb754db92", "text": "The deployment of Internet of Things (IoT) smart devices, including sensors, is progressing at a rapid pace. However, the volume of data they generate is becoming difficult to store and process on local platforms. The scalability offered by Cloud computing provides a solution to this problem. Cloud computing provides resources at low cost for its users. However, platform-independent methods of gathering and transmitting sensor data to Clouds are not widely available. This paper presents a Cloud-based smart device data monitoring, gathering and processing platform. It first reviews the state-of-the-art embodied by the existing solutions and discusses their strengths and weaknesses. Informed by the survey analysis, a generic architecture is presented that addresses the identified challenges facing data gathering and processing in this area. Based on use case scenarios, we evaluate and demonstrate the novelty of our proposed platform.", "title": "" }, { "docid": "0d119388cedb05317ac6aa5705622520", "text": "Detecting whether a song is favorite for a user is an important but also challenging task in music recommendation. One of critical steps to do this task is to select important features for the detection. This paper presents two methods to evaluate feature importance, in which we compared nine available features based on a large user log in the real world. The set of features includes song metadata, acoustic feature, and user preference used by Collaborative Filtering techniques. The evaluation methods are designed from two views: i) the correlation between the estimated scores by song similarity in respect of a feature and the scores estimated by real play count, ii) feature selection methods over a binary classification problem, i.e., “like” or “dislike”. The experimental results show the user preference is the most important feature and artist similarity is of the second importance among these nine features.", "title": "" }, { "docid": "bbec3bad19aceb7dffb61eecbd49ac85", "text": "Mobile applications (apps) have long invaded the realm of desktop apps, and hybrid apps become a promising solution for supporting multiple mobile platforms. Providing both platform-specific functionalities via native code like native apps and user interactions via JavaScript code like web apps, hybrid apps help developers build multiple apps for different platforms without much duplicated efforts. However, most hybrid apps are developed in multiple programming languages with different semantics, which may be vulnerable to programmer errors. Moreover, because untrusted JavaScript code may access device-specific features via native code, hybrid apps may be vulnerable to various security attacks. Unfortunately, no existing tools can help hybrid app developers by detecting errors or security holes. 
In this paper, we present HybriDroid, the first static analysis framework for Android hybrid apps. We investigate the semantics of Android hybrid apps especially for the interoperation mechanism of Android Java and JavaScript. Then, we design and implement a static analysis framework that analyzes inter-communication between Android Java and JavaScript. As example analyses supported by HybriDroid, we implement a bug detector that identifies programmer errors due to the hybrid semantics, and a taint analyzer that finds information leaks cross language boundaries. Our empirical evaluation shows that the tools are practically usable in that they found previously uncovered bugs in real-world Android hybrid apps and possible information leaks via a widely-used advertising platform.", "title": "" }, { "docid": "c306dae94503f9e0ba07af753198173d", "text": "BACKGROUND\nTropicamide is an antimuscarinic drug usually prescribed as an ophthalmic solution to induce short-term mydriasis and cycloplegia. Over the last 2 years, tropicamide has been reported in both Russia and Italy to be self-administered intravenously (IV) for recreational purposes.\n\n\nMETHODS\nThe literature on tropicamide was searched in PsycInfo and Pubmed databases. Considering the absence of peer-reviewed data, results were integrated with a multilingual qualitative assessment of a range of Web sites, drug fora and other online resources (i.e., e-newsgroups, chat rooms, mailing lists, e-newsletters and bulletin boards): between January 2012 and January 2013, exploratory qualitative searches of more than 100 Web sites have been carried out in English and Italian using generic and specific keywords such as \"legal highs,\" \"research chemicals,\" \"online pharmacy,\" \"tropicamide,\" \"mydriacil,\" \"tropicacyl,\" \"visumidriatic,\" \"online pharmacies\" and \"tropicamide recreational abuse\" in the Google search engine.\n\n\nRESULTS\nMisuse of tropicamide typically occurs through IV injection; its effects last from 30 min to 6 h, and it is often taken in combination with other psychoactive compounds, most typically alcohol, marijuana and opiates. Medical effects of tropicamide misuse include slurred speech, persistent mydriasis, unconsciousness/unresponsiveness, hallucinations, kidney pain, dysphoria, \"open eye dreams,\" hyperthermia, tremors, suicidal feelings, convulsions, psychomotor agitation, tachycardia and headache.\n\n\nDISCUSSION/CONCLUSIONS\nMore large-scale studies need to be carried out to confirm and better describe the extent of tropicamide misuse in the European Union and elsewhere. Health and other professionals should be rapidly informed about this new and alerting trend of misuse.", "title": "" }, { "docid": "6f13503bf65ff58b7f0d4f3282f60dec", "text": "Body centric wireless communication is now accepted as an important part of 4th generation (and beyond) mobile communications systems, taking the form of human to human networking incorporating wearable sensors and communications. There are also a number of body centric communication systems for specialized occupations, such as paramedics and fire-fighters, military personnel and medical sensing and support. 
To support these developments there is considerable ongoing research into antennas and propagation for body centric communications systems, and this paper will summarise some of it, including the characterisation of the channel on the body, the optimisation of antennas for these channels, and communications to medical implants where advanced antenna design and characterisation and modelling of the internal body channel are important research needs. In all of these areas both measurement and simulation pose very different and challenging issues to be faced by the researcher.", "title": "" }, { "docid": "c0f41dfa5f0525f3189c96b28480b583", "text": "This paper contains the design of a three stage solar battery charge controller and a comparative study of this charge control technique with three conventional solar battery charge control techniques such as 1. Constant Current (CC) charging, 2. Two stage constant current constant voltage (CC-CV) charging technique. The analysis and the comparative study of the aforesaid charging techniques are done in MATLAB/SIMULINK environment. Here the practical data used to simulate the charge control algorithms are based on a 12Volts 7Ah Sealed", "title": "" } ]
scidocsrr
fa15e99b36d62993f4c17bca86b3d029
Applications of Structural Balance in Signed Social Networks
[ { "docid": "87ab746df486a15b895cc0a4706db6c7", "text": "Many complex systems in the real world can be modeled as signed social networks that contain both positive and negative relations. Algorithms for mining social networks have been developed in the past; however, most of them were designed primarily for networks containing only positive relations and, thus, are not suitable for signed networks. In this work, we propose a new algorithm, called FEC, to mine signed social networks where both positive within-group relations and negative between-group relations are dense. FEC considers both the sign and the density of relations as the clustering attributes, making it effective for not only signed networks but also conventional social networks including only positive relations. Also, FEC adopts an agent-based heuristic that makes the algorithm efficient (in linear time with respect to the size of a network) and capable of giving nearly optimal solutions. FEC depends on only one parameter whose value can easily be set and requires no prior knowledge on hidden community structures. The effectiveness and efficacy of FEC have been demonstrated through a set of rigorous experiments involving both benchmark and randomly generated signed networks.", "title": "" } ]
[ { "docid": "2a914d703108f165aecbb7ad1a2dde2c", "text": "The general objective of our work is to investigate the area and power-delay performances of low-voltage full adder cells in different CMOS logic styles for the predominating tree structured arithmetic circuits. A new hybrid style full adder circuit is also presented. The sum and carry generation circuits of the proposed full adder are designed with hybrid logic styles. To operate at ultra-low supply voltage, the pass logic circuit that cogenerates the intermediate XOR and XNOR outputs has been improved to overcome the switching delay problem. As full adders are frequently employed in a tree structured configuration for high-performance arithmetic circuits, a cascaded simulation structure is introduced to evaluate the full adders in a realistic application environment. A systematic and elegant procedure to scale the transistor for minimal power-delay product is proposed. The circuits being studied are optimized for energy efficiency at 0.18-/spl mu/m CMOS process technology. With the proposed simulation environment, it is shown that some survival cells in stand alone operation at low voltage may fail when cascaded in a larger circuit, either due to the lack of drivability or unsatisfactory speed of operation. The proposed hybrid full adder exhibits not only the full swing logic and balanced outputs but also strong output drivability. The increase in the transistor count of its complementary CMOS output stage is compensated by its area efficient layout. Therefore, it remains one of the best contenders for designing large tree structured arithmetic circuits with reduced energy consumption while keeping the increase in area to a minimum.", "title": "" }, { "docid": "e776c87ec35d67c6acbdf79d8a5cac0a", "text": "Continuous deployment speeds up the process of existing agile methods, such as Scrum, and Extreme Programming (XP) through the automatic deployment of software changes to end-users upon passing of automated tests. Continuous deployment has become an emerging software engineering process amongst numerous software companies, such as Facebook, Github, Netflix, and Rally Software. A systematic analysis of software practices used in continuous deployment can facilitate a better understanding of continuous deployment as a software engineering process. Such analysis can also help software practitioners in having a shared vocabulary of practices and in choosing the software practices that they can use to implement continuous deployment. The goal of this paper is to aid software practitioners in implementing continuous deployment through a systematic analysis of software practices that are used by software companies. We studied the continuous deployment practices of 19 software companies by performing a qualitative analysis of Internet artifacts and by conducting follow-up inquiries. In total, we found 11 software practices that are used by 19 software companies. We also found that in terms of use, eight of the 11 software practices are common across 14 software companies. We observe that continuous deployment necessitates the consistent use of sound software engineering practices such as automated testing, automated deployment, and code review.", "title": "" }, { "docid": "3d4e3bf458145e5e32ef2a6c81e55218", "text": "The arrival of the internet caused a large decline in both the pecuniary and non-pecuniary costs of accessing pornography. 
Using state-level panel data from 1998-2003, I find that the arrival of the internet was associated with a reduction in rape incidence. However, growth in internet usage had no apparent effect on other crimes. Moreover, when I disaggregate the rape data by offender age, I find that the effect of the internet on rape is concentrated among those for whom the internet-induced fall in the non-pecuniary price of pornography was the largest – men ages 15-19, who typically live with their parents. These results, which suggest that pornography and rape are substitutes, are in contrast with most previous literature. However, earlier population-level studies do not control adequately for many omitted variables, including the age distribution of the population, and most laboratory studies simply do not allow for potential substitutability between pornography and rape.", "title": "" }, { "docid": "64ba4467dc4495c6828f2322e8f415f2", "text": "Due to the advancement of microoptoelectromechanical systems and microelectromechanical systems (MEMS) technologies, novel display architectures have emerged. One of the most successful and well-known examples is the Digital Micromirror Device from Texas Instruments, a 2-D array of bistable MEMS mirrors, which function as spatial light modulators for the projection display. This concept of employing an array of modulators is also seen in the grating light valve and the interferometric modulator display, where the modulation mechanism is based on optical diffraction and interference, respectively. Along with this trend comes the laser scanning display, which requires a single scanning device with a large scan angle and a high scan frequency. A special example in this category is the retinal scanning display, which is a head-up wearable module that laser-scans the image directly onto the retina. MEMS technologies are also found in other display-related research, such as stereoscopic (3-D) displays and plastic thin-film displays.", "title": "" }, { "docid": "61f0e20762a8ce5c3c40ea200a32dd43", "text": "Online distance e-learning systems allow introducing innovative methods in pedagogy, along with studying their effectiveness. Assessing the system effectiveness is based on analyzing the log files to track the studying time, the number of connections, and earned game bonus points. This study is based on an example of the online application for practical foreign language speaking skills training between random users, which select the role of a teacher or a student on their own. The main features of the developed system include pre-defined synchronized teaching and learning materials displayed for both participants, along with user motivation by means of gamification. The actual percentage of successful connects between specifically unmotivated and unfamiliar with each other users was measured. The obtained result can be used for gauging the developed system success and the proposed teaching methodology in general. Keywords—elearning; gamification; marketing; monetization; viral marketing; virality", "title": "" }, { "docid": "55861c73dda7c01f12a8a6f756a74e29", "text": "Strategies for extracting the three-phase reference currents for shunt active power filters are compared, evaluating their performance under different source and load conditions with the new IEEE Standard 1459 power definitions. The study was applied to a three-phase four-wire system in order to include imbalance. 
Under balanced and sinusoidal voltages, harmonic cancellation and reactive power compensation can be attained in all the methods. However, when the voltages are distorted and/or unbalanced, the compensation capabilities are not equivalent, with some strategies unable to yield an adequate solution when the mains voltages are not ideal. Simulation and experimental results are included", "title": "" }, { "docid": "6be6e28cf4a4a044122901fad0d2bf40", "text": "ÐAutomatic transformation of paper documents into electronic documents requires geometric document layout analysis at the first stage. However, variations in character font sizes, text line spacing, and document layout structures have made it difficult to design a general-purpose document layout analysis algorithm for many years. The use of some parameters has therefore been unavoidable in previous methods. In this paper, we propose a parameter-free method for segmenting the document images into maximal homogeneous regions and identifying them as texts, images, tables, and ruling lines. A pyramidal quadtree structure is constructed for multiscale analysis and a periodicity measure is suggested to find a periodical attribute of text regions for page segmentation. To obtain robust page segmentation results, a confirmation procedure using texture analysis is applied to only ambiguous regions. Based on the proposed periodicity measure, multiscale analysis, and confirmation procedure, we could develop a robust method for geometric document layout analysis independent of character font sizes, text line spacing, and document layout structures. The proposed method was experimented with the document database from the University of Washington and the MediaTeam Document Database. The results of these tests have shown that the proposed method provides more accurate results than the previous ones. Index TermsÐGeometric document layout analysis, parameter-free method, periodicity estimation, multiscale analysis, page segmentation.", "title": "" }, { "docid": "549f8fe6d456a818c36976c7e47e4033", "text": "Given the rapid proliferation of trajectory-based approaches to study clinical consequences to stress and potentially traumatic events (PTEs), there is a need to evaluate emerging findings. This review examined convergence/divergences across 54 studies in the nature and prevalence of response trajectories, and determined potential sources of bias to improve future research. Of the 67 cases that emerged from the 54 studies, the most consistently observed trajectories following PTEs were resilience (observed in: n = 63 cases), recovery (n = 49), chronic (n = 47), and delayed onset (n = 22). The resilience trajectory was the modal response across studies (average of 65.7% across populations, 95% CI [0.616, 0.698]), followed in prevalence by recovery (20.8% [0.162, 0.258]), chronicity (10.6%, [0.086, 0.127]), and delayed onset (8.9% [0.053, 0.133]). Sources of heterogeneity in estimates primarily resulted from substantive population differences rather than bias, which was observed when prospective data is lacking. Overall, prototypical trajectories have been identified across independent studies in relatively consistent proportions, with resilience being the modal response to adversity. 
Thus, trajectory models robustly identify clinically relevant patterns of response to potential trauma, and are important for studying determinants, consequences, and modifiers of course following potential trauma.", "title": "" }, { "docid": "6b064b9f4c90a60fab788f9d5aee8b58", "text": "Extracorporeal photopheresis (ECP) is a technique that was developed > 20 years ago to treat erythrodermic cutaneous T-cell lymphoma (CTCL). The technique involves removal of peripheral blood, separation of the buffy coat, and photoactivation with a photosensitizer and ultraviolet A irradiation before re-infusion of cells. More than 1000 patients with CTCL have been treated with ECP, with response rates of 31-100%. ECP has been used in a number of other conditions, most widely in the treatment of chronic graft-versus-host disease (cGvHD) with response rates of 29-100%. ECP has also been used in several other autoimmune diseases including acute GVHD, solid organ transplant rejection and Crohn's disease, with some success. ECP is a relatively safe procedure, and side-effects are typically mild and transient. Severe reactions including vasovagal syncope or infections are uncommon. This is very valuable in conditions for which alternative treatments are highly toxic. The mechanism of action of ECP remains elusive. ECP produces a number of immunological changes and in some patients produces immune homeostasis with resultant clinical improvement. ECP is available in seven centres in the UK. Experts from all these centres formed an Expert Photopheresis Group and published the UK consensus statement for ECP in 2008. All centres consider patients with erythrodermic CTCL and steroid-refractory cGvHD for treatment. The National Institute for Health and Clinical Excellence endorsed the use of ECP for CTCL and suggested a need for expansion while recommending its use in specialist centres. ECP is safe, effective, and improves quality of life in erythrodermic CTCL and cGvHD, and should be more widely available for these patients.", "title": "" }, { "docid": "385fc1f02645d4d636869317cde6d35e", "text": "Events and their coreference offer useful semantic and discourse resources. We show that the semantic and discourse aspects of events interact with each other. However, traditional approaches addressed event extraction and event coreference resolution either separately or sequentially, which limits their interactions. This paper proposes a document-level structured learning model that simultaneously identifies event triggers and resolves event coreference. We demonstrate that the joint model outperforms a pipelined model by 6.9 BLANC F1 and 1.8 CoNLL F1 points in event coreference resolution using a corpus in the biology domain.", "title": "" }, { "docid": "1dc8b67323637afe08e7004d462bb793", "text": "With the WEBSOM method a textual document collection may be organized onto a graphical map display that provides an overview of the collection and facilitates interactive browsing. Interesting documents can be located on the map using a content-directed search. Each document is encoded as a histogram of word categories which are formed by the self-organizing map (SOM) algorithm based on the similarities in the contexts of the words. The encoded documents are organized on another self-organizing map, a document map, on which nearby locations contain similar documents. 
Special consideration is given to the computation of very large document maps which is possible with general-purpose computers if the dimensionality of the word category histograms is first reduced with a random mapping method and if computationally efficient algorithms are used in computing the SOMs. ( 1998 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "2b1eda1c5a0bb050b82f5fa42893466b", "text": "In recent years researchers have achieved considerable success applying neural network methods to question answering (QA). These approaches have achieved state of the art results in simplified closed-domain settings such as the SQuAD (Rajpurkar et al. 2016) dataset, which provides a preselected passage, from which the answer to a given question may be extracted. More recently, researchers have begun to tackle open-domain QA, in which the model is given a question and access to a large corpus (e.g., wikipedia) instead of a pre-selected passage (Chen et al. 2017a). This setting is more complex as it requires large-scale search for relevant passages by an information retrieval component, combined with a reading comprehension model that “reads” the passages to generate an answer to the question. Performance in this setting lags well behind closed-domain performance. In this paper, we present a novel open-domain QA system called Reinforced Ranker-Reader (R), based on two algorithmic innovations. First, we propose a new pipeline for open-domain QA with a Ranker component, which learns to rank retrieved passages in terms of likelihood of extracting the ground-truth answer to a given question. Second, we propose a novel method that jointly trains the Ranker along with an answer-extraction Reader model, based on reinforcement learning. We report extensive experimental results showing that our method significantly improves on the state of the art for multiple open-domain QA datasets. 2", "title": "" }, { "docid": "4c3b4a6c173a40327c2db17772cbd242", "text": "We reproduce four Twitter sentiment classification approaches that participated in previous SemEval editions with diverse feature sets. The reproduced approaches are combined in an ensemble, averaging the individual classifiers’ confidence scores for the three classes (positive, neutral, negative) and deciding sentiment polarity based on these averages. The experimental evaluation on SemEval data shows our re-implementations to slightly outperform their respective originals. Moreover, not too surprisingly, the ensemble of the reproduced approaches serves as a strong baseline in the current edition where it is top-ranked on the 2015 test set.", "title": "" }, { "docid": "2a61df18f9d3340d47073cda41da5822", "text": "Link prediction is one of the fundamental problems in network analysis. In many applications, notably in genetics, a partially observed network may not contain any negative examples of absent edges, which creates a difficulty for many existing supervised learning approaches. We develop a new method which treats the observed network as a sample of the true network with different sampling rates for positive and negative examples. We obtain a relative ranking of potential links by their probabilities, utilizing information on node covariates as well as on network topology. Empirically, the method performs well under many settings, including when the observed network is sparse. 
We apply the method to a protein-protein interaction network and a school friendship network.", "title": "" }, { "docid": "59f022a6e943f46e7b87213f651065d8", "text": "This paper presents a procedure to design a robust switching strategy for the basic Buck-Boost DC-DC converter utilizing switched systems' theory. The converter dynamic is described in the framework of linear switched systems and then sliding-mode controller is developed to ensure the asymptotic stability of the desired equilibrium point for the switched system with constant external input. The inherent robustness of the sliding-mode switching rule leads to efficient regulation of the output voltage under load variations. Simulation results are presented to demonstrate the outperformance of the proposed method compared to a rival scheme in the literature.", "title": "" }, { "docid": "2ffca8ee12f4266f42dc27ad430e4b62", "text": "The growing concern over environmental degradation resulting from combustion of fossil fuels and depleting fossil fuel reserves has raised awareness about alternative energy options. Renewable energy system is perfect solution of this problem. This paper presents a mathematical model of single diode solar photovoltaic (SPV) module. SPV cell generates electricity when exposed to sunlight but this generation depends on whether condition like temperature and irradiance, for better accuracy all the parameters are considered including shunt, series resistance and simulated in MATLAB/Simulink. The output is analyzed by varying the temperature and irradiance and effect of change in shunt and series resistance is also observed.", "title": "" }, { "docid": "0b79fc06afe7782e7bdcdbd96cc1c1a0", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use. Please contact the publisher regarding any further use of this work. Publisher contact information may be obtained at http://www.jstor.org/journals/annals.html. Each copy of any part of a JSTOR transmission must contain the same copyright notice that appears on the screen or printed page of such transmission.", "title": "" }, { "docid": "b3947afb7856b0ffd5983f293ca508b9", "text": "High gain low profile slotted cavity with substrate integrated waveguide (SIW) is presented using TE440 high order mode. The proposed antenna is implemented to achieve 16.4 dBi high gain at 28 GHz with high radiation efficiency of 98%. Furthermore, the proposed antenna has a good radiation pattern. Simulated results using CST and HFSS software are presented and discussed. 
Several advantages such as low profile, low cost, light weight, small size, and easy implementation make the proposed antenna suitable for millimeter-wave wireless communications.", "title": "" }, { "docid": "8c710f24ed7f940c604388bd4109f8e2", "text": "In one of the most frequent empirical scenarios in applied linguistics, a researcher's empirical results can be summarized in a two-dimensional table, in which − the rows list the levels of a nominal/categorical variable; − the columns list the levels of another nominal/categorical variable; − the cells in the table defined by these row and column levels provide the frequencies with which combinations of row and column levels were observed in some data set. An example of data from a study of disfluencies in speech is shown in Table 1, which shows the parts of speech of 335 words following three types of disfluencies. Both the part of speech and the disfluency markers represent categorical variables. Noun Verb Conjunction Totals uh 30 70 90 190 uhm 50 20 40 110 silence 20 5 10 35 Totals 100 95 140 335 Table 1 shows that 30 uh's were followed by a noun, 20 uhm's were followed by a verb, etc. One question a researcher may be interested in exploring is whether there is a correlation between the kind of disfluency produced – the variable in the rows – and the part of speech of the word following the disfluency – the variable in the columns. An exploratory glance at the data suggests that uh mostly precedes conjunctions while silences most precede nouns, but an actual statistical test is required to determine (i) whether the distribution of the parts of speech after the disfluencies is in fact significantly different from chance and (ii) what preferences and dispreferences this data set reflects. The most frequent statistical test to analyze two-dimensional frequency tables such as Table 1 is the chi-square test for independence [A] The chi-square test for independence", "title": "" }, { "docid": "11212d5474184c1dc549c8cadc023e43", "text": "Videoconferencing is going to become attractive for geo-graphically distributed team collaboration, specifically to avoid travelling and to increase flexibility. Against this background this paper presents a next generation system - a 3D videoconference providing immersive tele-presence and natural representation of all participants in a shared virtual meeting space to enhance quality of human-centred communication. This system is based on the principle of a shared virtual table environment, which guarantees correct eye contact and gesture reproduction. The key features of our system are presented and compared to other approaches like tele-cubicles. Furthermore the current system design and details of the real-time hardware and software concept are explained.", "title": "" } ]
scidocsrr
3f8346c6ecf9d51f40d130895c1cc0fb
High Resolution CCD Polarization Imaging Sensor
[ { "docid": "130d16a19757ed0f2b049ff954dc5a2a", "text": "Vision in scattering media is important but challenging. Images suffer from poor visibility due to backscattering and attenuation. Most prior methods for scene recovery use active illumination scanners (structured and gated), which can be slow and cumbersome, while natural illumination is inapplicable to dark environments. The current paper addresses the need for a non-scanning recovery method, that uses active scene irradiance. We study the formation of images under widefield artificial illumination. Based on the formation model, the paper presents an approach for recovering the object signal. It also yields rough information about the 3D scene structure. The approach can work with compact, simple hardware, having active widefield, polychromatic polarized illumination. The camera is fitted with a polarization analyzer. Two frames of the scene are taken, with different states of the analyzer or polarizer. A recovery algorithm follows the acquisition. It allows both the backscatter and the object reflection to be partially polarized. It thus unifies and generalizes prior polarization-based methods, which had assumed exclusive polarization of either of these components. The approach is limited to an effective range, due to image noise and illumination falloff. Thus, the limits and noise sensitivity are analyzed. We demonstrate the approach in underwater field experiments.", "title": "" } ]
[ { "docid": "ae0cd5f9060fdc4247d4338023022355", "text": "Modeling disease spread and distribution using social media data has become an increasingly popular research area. While Twitter data has recently been investigated for estimating disease spread, the extent to which it is representative of disease spread and distribution in a macro perspective is still an open question. In this paper, we focus on macroscale modeling of influenza-like illnesses (ILI) using a large dataset containing 8,961,932 tweets from Australia collected in 2015. We first propose modifications of the state-of-theart ILI-related tweet detection approaches to acquire a more refined dataset. We normalize the number of detected ILIrelated tweets with Internet access and Twitter penetration rates in each state. Then, we establish a state-level linear regression model between the number of ILI-related tweets and the number of real influenza notifications. The Pearson correlation coefficient of the model is 0.93. Our results indicate that: 1) a strong positive linear correlation exists between the number of ILI-related tweets and the number of recorded influenza notifications at state scale; 2) Twitter data has promising ability in helping detect influenza outbreaks; 3) taking into account the population, Internet access and Twitter penetration rates in each state enhances the prevalence modeling analysis.", "title": "" }, { "docid": "e766e5a45936c53767898c591e6126f8", "text": "Video completion is a computer vision technique to recover the missing values in video sequences by filling the unknown regions with the known information. In recent research, tensor completion, a generalization of matrix completion for higher order data, emerges as a new solution to estimate the missing information in video with the assumption that the video frames are homogenous and correlated. However, each video clip often stores the heterogeneous episodes and the correlations among all video frames are not high. Thus, the regular tenor completion methods are not suitable to recover the video missing values in practical applications. To solve this problem, we propose a novel spatiallytemporally consistent tensor completion method for recovering the video missing data. Instead of minimizing the average of the trace norms of all matrices unfolded along each mode of a tensor data, we introduce a new smoothness regularization along video time direction to utilize the temporal information between consecutive video frames. Meanwhile, we also minimize the trace norm of each individual video frame to employ the spatial correlations among pixels. Different to previous tensor completion approaches, our new method can keep the spatio-temporal consistency in video and do not assume the global correlation in video frames. Thus, the proposed method can be applied to the general and practical video completion applications. Our method shows promising results in all evaluations on both 3D biomedical image sequence and video benchmark data sets. Video completion is the process of filling in missing pixels or replacing undesirable pixels in a video. The missing values in a video can be caused by many situations, e.g., the natural noise in video capture equipment, the occlusion from the obstacles in environment, segmenting or removing interested objects from videos. Video completion is of great importance to many applications such as video repairing and editing, movie post-production (e.g., remove unwanted objects), etc. 
Missing information recovery in images is called inpainting, which is usually accomplished by inferring or guessing the missing information from the surrounding regions, i.e. the spatial information. Video completion can be considered as an extension of 2D image inpainting to 3D. Video completion uses the information from the past and the future frames to fill the pixels in the missing region, i.e. the spatiotemporal information, which has been getting increasing attention in recent years. In computer vision, an important application area of artificial intelligence, there are many video completion algorithms. The most representative approaches include video inpainting, analogous to image inpainting (Bertalmio, Bertozzi, and Sapiro 2001), motion layer video completion, which splits the video sequence into different motion layers and completes each motion layer separately (Shiratori et al. 2006), space-time video completion, which is based on texture synthesis and is good but slow (Wexler, Shechtman, and Irani 2004), and video repairing, which repairs static background with motion layers and repairs moving foreground using model alignment (Jia et al. 2004). Many video completion methods are less effective because the video is often treated as a set of independent 2D images. Although the temporal independence assumption simplifies the problem, losing temporal consistency in recovered pixels leads to the unsatisfactory performance. On the other hand, temporal information can improve the video completion results (Wexler, Shechtman, and Irani 2004; Matsushita et al. 2005), but to exploit it the computational speeds of most methods are significantly reduced. Thus, how to efficiently and effectively utilize both spatial and temporal information is a challenging problem in video completion. In most recent work, Liu et. al. (Liu et al. 2013) estimated the missing data in video via tensor completion which was generalized from matrix completion methods. In these methods, the rank or rank approximation (trace norm) is used, as a powerful tool, to capture the global information. The tensor completion method (Liu et al. 2013) minimizes the trace norm of a tensor, i.e. the average of the trace norms of all matrices unfolded along each mode. Thus, it assumes the video frames are highly correlated in the temporal direction. If the video records homogenous episodes and all frames describe the similar information, this assumption has no problem. However, one video clip usually includes multiple different episodes and the frames from different episodes", "title": "" }, { "docid": "676593ce8a3be454a276b23e4fce331b", "text": "In this paper, we propose a novel formulation for distance-based outliers that is based on the distance of a point from its kth nearest neighbor. We rank each point on the basis of its distance to its kth nearest neighbor and declare the top n points in this ranking to be outliers. In addition to developing relatively straightforward solutions to finding such outliers based on the classical nested-loop join and index join algorithms, we develop a highly efficient partition-based algorithm for mining outliers.
This algorithm first partitions the input data set into disjoint subsets, and then prunes entire partitions as soon as it is determined that they cannot contain outliers. This results in substantial savings in computation. We present the results of an extensive experimental study on real-life and synthetic data sets. The results from a real-life NBA database highlight and reveal several expected and unexpected aspects of the database. The results from a study on synthetic data sets demonstrate that the partition-based algorithm scales well with respect to both data set size and data set dimensionality.", "title": "" }, { "docid": "e1c5199830d2de7c7f8f2ae28d84090b", "text": "Once generated, neurons are thought to permanently exit the cell cycle and become irreversibly differentiated. However, neither the precise point at which this post-mitotic state is attained nor the extent of its irreversibility is clearly defined. Here we report that newly born neurons from the upper layers of the mouse cortex, despite initiating axon and dendrite elongation, continue to drive gene expression from the neural progenitor tubulin α1 promoter (Tα1p). These observations suggest an ambiguous post-mitotic neuronal state. Whole transcriptome analysis of sorted upper cortical neurons further revealed that neurons continue to express genes related to cell cycle progression long after mitotic exit until at least post-natal day 3 (P3). These genes are however down-regulated thereafter, associated with a concomitant up-regulation of tumor suppressors at P5. Interestingly, newly born neurons located in the cortical plate (CP) at embryonic day 18-19 (E18-E19) and P3 challenged with calcium influx are found in S/G2/M phases of the cell cycle, and still able to undergo division at E18-E19 but not at P3. At P5 however, calcium influx becomes neurotoxic and leads instead to neuronal loss. Our data delineate an unexpected flexibility of cell cycle control in early born neurons, and describe how neurons transit to a post-mitotic state.", "title": "" }, { "docid": "3b61571cef1afad7a1aad2f9f8e586c4", "text": "A practical limitation of deep neural networks is their high degree of specialization to a single task and visual domain. Recently, inspired by the successes of transfer learning, several authors have proposed to learn instead universal feature extractors that, used as the first stage of any deep network, work well for several tasks and domains simultaneously. Nevertheless, such universal features are still somewhat inferior to specialized networks. To overcome this limitation, in this paper we propose to consider instead universal parametric families of neural networks, which still contain specialized problem-specific models, but differing only by a small number of parameters. We study different designs for such parametrizations, including series and parallel residual adapters, joint adapter compression, and parameter allocations, and empirically identify the ones that yield the highest compression. We show that, in order to maximize performance, it is necessary to adapt both shallow and deep layers of a deep network, but the required changes are very small. 
We also show that these universal parametrizations are very effective for transfer learning, where they outperform traditional fine-tuning techniques.", "title": "" }, { "docid": "b084482c7dcffc70e307f60cb9bd3409", "text": "The evolution of the endoscopic endonasal transsphenoidal technique, which was initially reserved only for sellar lesions through the sphenoid sinus cavity, has led in the last decades to a progressive possibility to access the skull base from the nose. This route allows midline access and visibility to the suprasellar, retrosellar and parasellar space while obviating brain retraction, and makes possible to treat transsphenoidally a variety of relatively small midline skull base and parasellar lesions traditionally approached transcranially. We report our current knowledge of the endoscopic anatomy of the midline skull base as seen from the endonasal perspective, in order to describe the surgical path and structures whose knowledge is useful during the operation. Besides, we describe the step-by-step surgical technique to access the different compartments, the \"dangerous landmarks\" to avoid in order to minimize the risks of complications and how to manage them, and our paradigm and techniques for dural and bony reconstruction. Furthermore, we report a brief description of the useful instruments and tools for the extended endoscopic approaches. Between January 2004 and April 2006 we performed 33 extended endonasal approaches for lesions arising from or involving the sellar region and the surrounding areas. The most representative pathologies of this series were the ten craniopharyngiomas, the six giant adenomas and the five meningiomas; we also used this procedure in three cases of chordomas, three of Rathke's cleft cysts and three of meningo-encephaloceles, one case of optic nerve glioma, one olfactory groove neuroendocrine tumor and one case of fibro-osseous dysplasia. Tumor removal, as assessed by post-operative MRI, revealed complete removal of the lesion in 2/6 pituitary adenomas, 7/10 craniopharyngiomas, 4/5 meningiomas, 3/3 Rathke's cleft cyst, 3/3 meningo-encephalocele. Surgical complications have been observed in 3 patients, two with a craniopharyngioma, one with a clival meningioma and one with a recurrent giant pituitary macroadenoma involving the entire left cavernous sinus, who developed a CSF leak and a second operation was necessary in order to review the cranial base reconstruction and seal the leak. One of them developed a bacterial meningitis, which resolved after a cycle of intravenous antibiotic therapy with no permanent neurological deficits. One patient with an intra-suprasellar non-functioning adenoma presented with a generalized epileptic seizure a few hours after the surgical procedure, due to the intraoperative massive CSF loss and consequent presence of intracranial air. We registered one surgical mortality. In three cases of craniopharyngioma and in one case of meningioma a new permanent diabetes insipidus was observed. One patient developed a sphenoid sinus mycosis, cured with antimycotic therapy. Epistaxis and airway difficulties were never observed. It is difficult today to define the boundaries and the future limits of the extended approaches because the work is still in progress.
Such extended endoscopic approaches, although at a first glance might be considered something that everyone can do, require an advanced and specialized training.", "title": "" }, { "docid": "f8058d7c6fa5d7b442e3ca0a445e2c6d", "text": "The second generation of the Digital Video Broadcasting standard for Satellite transmission, DVB-S2, is the evolution of the highly successful DVB-S satellite distribution technology. DVB-S2 has benefited from the latest progress in channel coding and modulation such as Low Density Parity Check Codes and higher order constellations to achieve performance that approaches Shannon¿s theoretical limit. We present a cross-layer design for Quality-of-Service (QoS) provision of interactive services, which is not specified in the standard. Our cross-layer approach exploits the satellite channel characteristics of space-time correlation via a cross-layer queueing architecture and an adaptive cross-layer scheduling policy. We show that our approach not only allows system load control but also rate adaptation to channel conditions and traffic demands on the coverage area. We also present the extension of our cross-layer design for mobile gateways focusing on the railway scenario. We illustrate the trade-off between system-wide and individual throughput by means of simulation, and that this trade-off could be a key metric in measuring the service level of DVB-S2 Broadband Service.", "title": "" }, { "docid": "47785d2cbbc5456c0a2c32c329498425", "text": "Are there important cyclical fluctuations in bond market premiums and, if so, with what macroeconomic aggregates do these premiums vary? We use the methodology of dynamic factor analysis for large datasets to investigate possible empirical linkages between forecastable variation in excess bond returns and macroeconomic fundamentals. We find that “real” and “inflation” factors have important forecasting power for future excess returns on U.S. government bonds, above and beyond the predictive power contained in forward rates and yield spreads. This behavior is ruled out by commonly employed affine term structure models where the forecastability of bond returns and bond yields is completely summarized by the cross-section of yields or forward rates. An important implication of these findings is that the cyclical behavior of estimated risk premia in both returns and long-term yields depends importantly on whether the information in macroeconomic factors is included in forecasts of excess bond returns. Without the macro factors, risk premia appear virtually acyclical, whereas with the estimated factors risk premia have a marked countercyclical component, consistent with theories that imply investors must be compensated for risks associated with macroeconomic activity. ( JEL E0, E4, G10, G12)", "title": "" }, { "docid": "41294dabf38a9d9887c70699e54c67b1", "text": "We propose an \"Enhanced Perceptual Functioning\" model encompassing the main differences between autistic and non-autistic social and non-social perceptual processing: locally oriented visual and auditory perception, enhanced low-level discrimination, use of a more posterior network in \"complex\" visual tasks, enhanced perception of first order static stimuli, diminished perception of complex movement, autonomy of low-level information processing toward higher-order operations, and differential relation between perception and general intelligence. 
Increased perceptual expertise may be implicated in the choice of special ability in savant autistics, and in the variability of apparent presentations within PDD (autism with and without typical speech, Asperger syndrome) in non-savant autistics. The overfunctioning of brain regions typically involved in primary perceptual functions may explain the autistic perceptual endophenotype.", "title": "" }, { "docid": "b882d6bc42e34506ba7ab26ed44d9265", "text": "Production datacenters operate under various uncertainties such as traffic dynamics, topology asymmetry, and failures. Therefore, datacenter load balancing schemes must be resilient to these uncertainties; i.e., they should accurately sense path conditions and timely react to mitigate the fallouts. Despite significant efforts, prior solutions have important drawbacks. On the one hand, solutions such as Presto and DRB are oblivious to path conditions and blindly reroute at fixed granularity. On the other hand, solutions such as CONGA and CLOVE can sense congestion, but they can only reroute when flowlets emerge; thus, they cannot always react timely to uncertainties. To make things worse, these solutions fail to detect/handle failures such as blackholes and random packet drops, which greatly degrades their performance. In this paper, we introduce Hermes, a datacenter load balancer that is resilient to the aforementioned uncertainties. At its heart, Hermes leverages comprehensive sensing to detect path conditions including failures unattended before, and it reacts using timely yet cautious rerouting. Hermes is a practical edge-based solution with no switch modification. We have implemented Hermes with commodity switches and evaluated it through both testbed experiments and large-scale simulations. Our results show that Hermes achieves comparable performance to CONGA and Presto in normal cases, and well handles uncertainties: under asymmetries, Hermes achieves up to 10% and 20% better flow completion time (FCT) than CONGA and CLOVE; under switch failures, it outperforms all other schemes by over 32%.", "title": "" }, { "docid": "dcd815ce7fe21d05679e9145a70609ce", "text": "In recent years, machine learning techniques have been widely used to solve many problems for fault diagnosis. However, in many real-world fault diagnosis applications, the distribution of the source domain data (on which the model is trained) is different from the distribution of the target domain data (where the learned model is actually deployed), which leads to performance degradation. In this paper, we introduce domain adaptation, which can find the solution to this problem by adapting the classifier or the regression model trained in a source domain for use in a different but related target domain. In particular, we proposed a novel deep neural network model with domain adaptation for fault diagnosis. Two main contributions are concluded by comparing to the previous works: first, the proposed model can utilize domain adaptation meanwhile strengthening the representative information of the original data, so that a high classification accuracy in the target domain can be achieved, and second, we proposed several strategies to explore the optimal hyperparameters of the model.
Experimental results, on several real-world datasets, demonstrate the effectiveness and the reliability of both the proposed model and the exploring strategies for the parameters.", "title": "" }, { "docid": "68b2608c91525f3147f74b41612a9064", "text": "Protective effects of sweet orange (Citrus sinensis) peel and their bioactive compounds on oxidative stress were investigated. According to HPLC-DAD and HPLC-MS/MS analysis, hesperidin (HD), hesperetin (HT), nobiletin (NT), and tangeretin (TT) were present in water extracts of sweet orange peel (WESP). The cytotoxic effect in 0.2mM t-BHP-induced HepG2 cells was inhibited by WESP and their bioactive compounds. The protective effect of WESP and their bioactive compounds in 0.2mM t-BHP-induced HepG2 cells may be associated with positive regulation of GSH levels and antioxidant enzymes, decrease in ROS formation and TBARS generation, increase in the mitochondria membrane potential and Bcl-2/Bax ratio, as well as decrease in caspase-3 activation. Overall, WESP displayed a significant cytoprotective effect against oxidative stress, which may be most likely because of the phenolics-related bioactive compounds in WESP, leading to maintenance of the normal redox status of cells.", "title": "" }, { "docid": "b81c0d819f2afb0a0ff79b7c6aeb8ff7", "text": "This paper proposes a framework to identify and evaluate companies from the technological perspective to support merger and acquisition (M&A) target selection decision-making. This employed a text mining-based patent map approach to identify companies which can fulfill a specific strategic purpose of M&A for enhancing technological capabilities. The patent map is the visualized technological landscape of a technology industry by using technological proximities among patents, so companies which closely related to the strategic purpose can be identified. To evaluate the technological aspects of the identified companies, we provide the patent indexes that evaluate both current and future technological capabilities and potential technology synergies between acquiring and acquired companies. Furthermore, because the proposed method evaluates potential targets from the overall corporate perspective and the specific strategic perspectives simultaneously, more robust and meaningful result can be obtained than when only one perspective is considered. Thus, the proposed framework can suggest the appropriate target companies that fulfill the strategic purpose of M&A for enhancing technological capabilities. For the verification of the framework, we provide an empirical study using patent data related to flexible display technology.", "title": "" }, { "docid": "49388f99a08a41d713b701cf063a71be", "text": "In this paper, we present the first-of-its-kind machine learning (ML) system, called AI Programmer, that can automatically generate full software programs requiring only minimal human guidance. At its core, AI Programmer uses genetic algorithms (GA) coupled with a tightly constrained programming language that minimizes the overhead of its ML search space. Part of AI Programmer’s novelty stems from (i) its unique system design, including an embedded, hand-crafted interpreter for efficiency and security and (ii) its augmentation of GAs to include instruction-gene randomization bindings and programming language-specific genome construction and elimination techniques. 
We provide a detailed examination of AI Programmer’s system design, several examples detailing how the system works, and experimental data demonstrating its software generation capabilities and performance using only mainstream CPUs.", "title": "" }, { "docid": "daaf1a43b9d3972122932f2bcd2d0f8f", "text": "We present in this paper the specifications and design of a multi-legged robot with manipulation abilities for extraterrestrial surface exploration activities. The paper contains a brief description of the project LIMES as well as the major goals therein. The underlying methods of identifying system requirements are then described , including the envisaged mission scenario, resulting requirement lists and semantic graphs depicting dependencies. Included in this is the evaluation of different concepts with respect to varying criteria. A description of the functional principles and components is provided with regard to the merits of chosen forms of actuation, planned sensors, electronic devices and structural materials. Finally, a brief overview of how the system is to be realized is given as well as a description of the most important key technologies and components.", "title": "" }, { "docid": "bfeff1e1ef24d0cb92d1844188f87cc8", "text": "While user attribute extraction on social media has received considerable attention, existing approaches, mostly supervised, encounter great difficulty in obtaining gold standard data and are therefore limited to predicting unary predicates (e.g., gender). In this paper, we present a weaklysupervised approach to user profile extraction from Twitter. Users’ profiles from social media websites such as Facebook or Google Plus are used as a distant source of supervision for extraction of their attributes from user-generated text. In addition to traditional linguistic features used in distant supervision for information extraction, our approach also takes into account network information, a unique opportunity offered by social media. We test our algorithm on three attribute domains: spouse, education and job; experimental results demonstrate our approach is able to make accurate predictions for users’ attributes based on their tweets.1", "title": "" }, { "docid": "9ecd46e90ccd1db7daef14dd63fea8ee", "text": "HISTORY AND EXAMINATION — A 13-year-old Caucasian boy (BMI 26.4 kg/m) presented with 3 weeks’ history of polyuria, polydipsia, and weight loss. His serum glucose (26.8 mmol/l), HbA1c (9.4%, normal 3.2–5.5) and fructosamine (628 mol/l, normal 205–285) levels were highly elevated (Fig. 1), and urinalysis showed glucosuria ( ) and ketonuria ( ) . He was HLA-DRB1* 0101,*0901, DRB4*01, DQA1*0101,03, and DQB1*0303,0501. Plasma Cpeptide, determined at a blood glucose of 17.0 mmol/l, was low (0.18 nmol/l). His previous history was unremarkable, and he did not take any medication. The patient received standard treatment with insulin, fluid, and electrolyte replacement and diabetes education. After an uneventful clinical course he was discharged on multiple-injection insulin therapy (total 0.9 units kg 1 day ) after 10 days. Subsequently, insulin doses were gradually reduced to 0.3 units kg 1 day , and insulin treatment was completely stopped after 11 months. Without further treatment, HbA1c and fasting glucose levels remained normal throughout the entire follow-up of currently 4.5 years. 
During oral glucose tolerance testing performed 48 months after diagnosis, he had normal fasting and 2-h levels of glucose (3.7 and 5.6 mmol/l, respectively), insulin (60.5 and 217.9 pmol/l, respectively), and C-peptide (0.36 and 0.99 nmol/l, respectively). His insulin sensitivity, as determined by insulin sensitivity index (composite) and homeostasis model assessment, was normal, and BMI remained unchanged. Serum autoantibodies to GAD65, insulin autoantibody-2, insulin, and islet cell antibodies were initially positive but showed a progressive decline or loss during follow-up. INVESTIGATION — T-cell antigen recognition and cytokine profiles were studied using a library of 21 preproinsulin (PPI) peptides (2). In the patient's peripheral blood mononuclear cells (PBMCs), a high cumulative interleukin (IL)-10 secretion (201 pg/ml) was observed in response to PPI peptides, with predominant recognition of PPI44–60 and PPI49–65, while interferon (IFN) secretion was undetectable. In contrast, in PBMCs from a cohort of 12 type 1 diabetic patients without long-term remission (2), there was a dominant IFN response but low IL-10 secretion to PPI. Analysis of CD4 T-helper cell subsets revealed that IL-10 secretion was mostly attributable to the patient's naïve/recently activated CD45RA cells, while a strong IFN response was observed in CD45RA cells. CD45RA T-cells have been associated with regulatory T-cell function in diabetes, potentially capable of suppressing", "title": "" }, { "docid": "af3e8e26ec6f56a8cd40e731894f5993", "text": "Probiotic bacteria are sold mainly in fermented foods, and dairy products play a predominant role as carriers of probiotics. These foods are well suited to promoting the positive health image of probiotics for several reasons: 1) fermented foods, and dairy products in particular, already have a positive health image; 2) consumers are familiar with the fact that fermented foods contain living microorganisms (bacteria); and 3) probiotics used as starter organisms combine the positive images of fermentation and probiotic cultures. When probiotics are added to fermented foods, several factors must be considered that may influence the ability of the probiotics to survive in the product and become active when entering the consumer's gastrointestinal tract. These factors include 1) the physiologic state of the probiotic organisms added (whether the cells are from the logarithmic or the stationary growth phase), 2) the physical conditions of product storage (eg, temperature), 3) the chemical composition of the product to which the probiotics are added (eg, acidity, available carbohydrate content, nitrogen sources, mineral content, water activity, and oxygen content), and 4) possible interactions of the probiotics with the starter cultures (eg, bacteriocin production, antagonism, and synergism). The interactions of probiotics with either the food matrix or the starter culture may be even more intensive when probiotics are used as a component of the starter culture. Some of these aspects are discussed in this article, with an emphasis on dairy products such as milk, yogurt, and cheese.", "title": "" }, { "docid": "bc0530b0dc56b4e4b4186a11742c9b5b", "text": "A dual-polarized aperture-coupled magneto-electric (ME) dipole antenna is proposed. Two separate substrate-integrated waveguides (SIWs) implemented in two printed circuit board (PCB) laminates are used to feed the antenna.
The simulated -10-dB impedance bandwidth of the antenna is 21% together with an isolation of over 45 dB between the two input ports. Good radiation characteristics, including almost identical unidirectional radiation patterns in two orthogonal planes, front-to-back ratio larger than 20 dB, cross-polarization levels less than -23 dB, and a stable gain around 8 dBi over the operating band, are achieved. By employing the proposed radiating element, a 2 × 2 wideband antenna array working at the 60 GHz band is designed, fabricated, and tested, which can generate two-dimensional (2-D) multiple beams with dual polarization. A measured -10 dB impedance bandwidth wider than 22% and a gain up to 12.5 dBi are obtained. Owing to the superiority of the ME dipole, the radiation pattern of the array is also stable over the operating frequencies and nearly identical in two orthogonal planes for both of the polarizations. With advantages of desirable performance, convenience of fabrication and integration, and low cost, the proposed antenna and array are attractive for millimeter-wave wireless communication systems.", "title": "" } ]
scidocsrr
0cc7abbf74751d954cd52d960a85ec7d
Explicit Inductive Bias for Transfer Learning with Convolutional Networks
[ { "docid": "f7c63fefb050b0e1f4e68267c04d2c42", "text": "Image-based food recognition pose new challenges for mainstream computer vision algorithms. Recent works in the field focused either on hand-crafted representations or on learning these by exploiting deep neural networks (DNN). Despite the success of DNN-based works, these exploit off-the-shelf deep architectures which are not cast to the specific food classification problem. We believe that better results can be obtained if the architecture is defined with respect to an analysis of the food composition. Following such an intuition, this work introduces a new deep scheme that is designed to handle the food structure. In particular, we focus on the vertical food traits that are common to a large number of categories (i.e., 15% of the whole data in current datasets). Towards the final objective, we first introduce a slice convolution block to capture such specific information. Then, we leverage on the recent success of deep residual blocks and combine those with the sliced convolution to produce the classification score. Extensive evaluations on three benchmark datasets demonstrated that our solution has better performance than existing approaches (e.g., a top–1 accuracy of 90.27% on the Food-101 dataset).", "title": "" }, { "docid": "225e7b608d06d218144853b900d40fd1", "text": "Deep neural networks require a large amount of labeled training data during supervised learning. However, collecting and labeling so much data might be infeasible in many cases. In this paper, we introduce a deep transfer learning scheme, called selective joint fine-tuning, for improving the performance of deep learning tasks with insufficient training data. In this scheme, a target learning task with insufficient training data is carried out simultaneously with another source learning task with abundant training data. However, the source learning task does not use all existing training data. Our core idea is to identify and use a subset of training images from the original source learning task whose low-level characteristics are similar to those from the target learning task, and jointly fine-tune shared convolutional layers for both tasks. Specifically, we compute descriptors from linear or nonlinear filter bank responses on training images from both tasks, and use such descriptors to search for a desired subset of training samples for the source learning task. Experiments demonstrate that our deep transfer learning scheme achieves state-of-the-art performance on multiple visual classification tasks with insufficient training data for deep learning. Such tasks include Caltech 256, MIT Indoor 67, and fine-grained classification problems (Oxford Flowers 102 and Stanford Dogs 120). In comparison to fine-tuning without a source domain, the proposed method can improve the classification accuracy by 2% - 10% using a single model. Codes and models are available at https://github.com/ZYYSzj/Selective-Joint-Fine-tuning.", "title": "" } ]
[ { "docid": "1a101ae3faeaa775737799c2324ef603", "text": "in recent years, greenhouse technology in agriculture is to automation, information technology direction with the IOT (internet of things) technology rapid development and wide application. In the paper, control networks and information networks integration of IOT technology has been studied based on the actual situation of agricultural production. Remote monitoring system with internet and wireless communications combined is proposed. At the same time, taking into account the system, information management system is designed. The collected data by the system provided for agricultural research facilities.", "title": "" }, { "docid": "7b5be6623ad250bea3b84c86c6fb0000", "text": "HTTP video streaming, employed by most of the video-sharing websites, allows users to control the video playback using, for example, pausing and switching the bit rate. These user-viewing activities can be used to mitigate the temporal structure impairments of the video quality. On the other hand, other activities, such as mouse movement, do not help reduce the impairment level. In this paper, we have performed subjective experiments to analyze user-viewing activities and correlate them with network path performance and user quality of experience. The results show that network measurement alone may miss important information about user dissatisfaction with the video quality. Moreover, video impairments can trigger user-viewing activities, notably pausing and reducing the screen size. By including the pause events into the prediction model, we can increase its explanatory power.", "title": "" }, { "docid": "0e7359ed52b616d209e904cd20d91d2b", "text": "This paper describes a benchmark for evaluation of 3D mesh segmentation salgorithms. The benchmark comprises a data set with 4,300 manually generated segmentations for 380 surface meshes of 19 different object categories, and it includes software for analyzing 11 geometric properties of segmentations and producing 4 quantitative metrics for comparison of segmentations. The paper investigates the design decisions made in building the benchmark, analyzes properties of human-generated and computer-generated segmentations, and provides quantitative comparisons of 7 recently published mesh segmentation algorithms. Our results suggest that people are remarkably consistent in the way that they segment most 3D surface meshes, that no one automatic segmentation algorithm is better than the others for all types of objects, and that algorithms based on non-local shape features seem to produce segmentations that most closely resemble ones made by humans.", "title": "" }, { "docid": "5f21a1348ad836ded2fd3d3264455139", "text": "To date, brain imaging has largely relied on X-ray computed tomography and magnetic resonance angiography with limited spatial resolution and long scanning times. Fluorescence-based brain imaging in the visible and traditional near-infrared regions (400-900 nm) is an alternative but currently requires craniotomy, cranial windows and skull thinning techniques, and the penetration depth is limited to 1-2 mm due to light scattering. Here, we report through-scalp and through-skull fluorescence imaging of mouse cerebral vasculature without craniotomy utilizing the intrinsic photoluminescence of single-walled carbon nanotubes in the 1.3-1.4 micrometre near-infrared window. 
Reduced photon scattering in this spectral region allows fluorescence imaging reaching a depth of >2 mm in mouse brain with sub-10 micrometre resolution. An imaging rate of ~5.3 frames/s allows for dynamic recording of blood perfusion in the cerebral vessels with sufficient temporal resolution, providing real-time assessment of blood flow anomaly in a mouse middle cerebral artery occlusion stroke model.", "title": "" }, { "docid": "0c7512ac95d72436e31b9b05199eefdd", "text": "Usable security has unique usability challenges because the need for security often means that standard human-computer-interaction approaches cannot be directly applied. An important usability goal for authentication systems is to support users in selecting better passwords, thus increasing security by expanding the effective password space. In click-based graphical passwords, poorly chosen passwords lead to the emergence of hotspots – portions of the image where users are more likely to select click-points, allowing attackers to mount more successful dictionary attacks. We use persuasion to influence user choice in click-based graphical passwords, encouraging users to select more random, and hence more secure, click-points. Our approach is to introduce persuasion to the Cued Click-Points graphical password scheme (Chiasson, van Oorschot, Biddle, 2007). Our resulting scheme significantly reduces hotspots while still maintaining its usability.", "title": "" }, { "docid": "6967e7b45585a1428e8109ec29152e1a", "text": "Mobile AR has evolved from the bulkiness of head-mounted device and backpack device to smart device (smartphone, tablet etc.). To date, the current implementation has made what AR is today. However, the advancement of AR technology has met with limitation and challenges on its own, which resulted in not able to reach mass-market. This paper in turn presents current limitations and challenges that need to overcome. We have done a review based on past research papers on limitation in technical (hardware, algorithms and interaction technique) and non-technical (social acceptance, privacy and usefulness) aspects of developing and implementing mobile augmented reality applications. We also presented some future opportunities in mobile AR applications.", "title": "" }, { "docid": "3fa21ebc002a40b4558b3b0820d5cde9", "text": "We present the first ontology-based Vietnamese QA system KbQAS where a new knowledge acquisition approach for analyzing English and Vietnamese questions is integrated.", "title": "" }, { "docid": "ea0d8179a9e0a89c1d2d5cf5d808ebc2", "text": "We present a new security technology called the Multilayer Firewall. We argue that it is useful in some situations for which other approaches, such as cryptographically protected communications, present operational or economic difficulties. In other circumstances a Multilayer Firewall can compliment such security technology by providing additional protection against intruder attacks. We first present the operational theory behind the Multilayer Firewall and then describe a prototype that we designed and", "title": "" }, { "docid": "570e6b3f853c4e774c2ffce3b2122479", "text": "Given a repeatedly issued query and a document with a not-yet-confirmed potential to satisfy the users' needs, a search system should place this document on a high position in order to gather user feedback and obtain a more confident estimate of the document utility. 
On the other hand, the main objective of the search system is to maximize expected user satisfaction over a rather long period, what requires showing more relevant documents on average. The state-of-the-art approaches to solving this exploration-exploitation dilemma rely on strongly simplified settings making these approaches infeasible in practice. We improve the most flexible and pragmatic of them to handle some actual practical issues. The first one is utilizing prior information about queries and documents, the second is combining bandit-based learning approaches with a default production ranking algorithm. We show experimentally that our framework enables to significantly improve the ranking of a leading commercial search engine.", "title": "" }, { "docid": "ea7a15c9bc6a343dde5f665fd4e85cf5", "text": "Emotion detection in conversations is a necessary step for a number of applications, including opinion mining over chat history, social media threads, debates, argumentation mining, understanding consumer feedback in live conversations, etc. Currently systems do not treat the parties in the conversation individually by adapting to the speaker of each utterance. In this paper, we describe a new method based on recurrent neural networks that keeps track of the individual party states throughout the conversation and uses this information for emotion classification. Our model outperforms the state of the art by a significant margin on two different datasets.", "title": "" }, { "docid": "07464c5d5ea8be7b3d36ddccef5f78d5", "text": "Agricultural price forecasting is one of the challenging areas of time series forecasting. The feed-forward time-delay neural network (TDNN) is one of the promising and potential methods for time series prediction. However, empirical evaluations of TDNN with autoregressive integrated moving average (ARIMA) model often yield mixed results in terms of the superiority in forecasting performance. In this paper, the price forecasting capabilities of TDNN model, which can model nonlinear relationship, are compared with ARIMA model using monthly wholesale price series of oilseed crops traded in different markets in India. Most earlier studies of forecast accuracy for TDNN versus ARIMA do not consider pretesting for nonlinearity. This study shows that the nonlinearity test of price series provides reliable guide to post-sample forecast accuracy for neural network model. The TDNN model in general provides better forecast accuracy in terms of conventional root mean square error values as compared to ARIMA model for nonlinear patterns. The study also reveals that the neural network models have clear advantage over linear models for predicting the direction of monthly price change for different series. Such direction of change forecasts is particularly important in economics for capturing the business cycle movements relating to the turning points.", "title": "" }, { "docid": "5594fc8fec483698265abfe41b3776c9", "text": "This paper is an abridgement and update of numerous IEEE papers dealing with Squirrel Cage Induction Motor failure analysis. They are the result of a taxonomic study and research conducted by the author during a 40 year career in the motor industry. As the Petrochemical Industry is revolving to reliability based maintenance, increased attention should be given to preventing repeated failures. The Root Cause Failure methodology presented in this paper will assist in this transition. 
The scope of the product includes Squirrel Cage Induction Motors up to 3000 hp, however, much of this methodology has application to larger sizes and types.", "title": "" }, { "docid": "265bf26646113a56101c594f563cb6dc", "text": "A system, particularly a decision-making concept, that facilitates highly automated driving on freeways in real traffic is presented. The system is capable of conducting fully automated lane change (LC) maneuvers with no need for driver approval. Due to the application in real traffic, a robust functionality and the general safety of all traffic participants are among the main requirements. Regarding these requirements, the consideration of measurement uncertainties demonstrates a major challenge. For this reason, a fully integrated probabilistic concept is developed. By means of this approach, uncertainties are regarded in the entire process of determining driving maneuvers. While this also includes perception tasks, this contribution puts a focus on the driving strategy and the decision-making process for the execution of driving maneuvers. With this approach, the BMW Group Research and Technology managed to drive 100% automated in real traffic on the freeway A9 from Munich to Ingolstadt, showing a robust, comfortable, and safe driving behavior, even during multiple automated LC maneuvers.", "title": "" }, { "docid": "61051ddfb877064e477bea0131bddef4", "text": "Portfolio diversification in capital markets is an accepted investment strategy. On the other hand corporate diversification has drawn many opponents especially the agency theorists who argue that executives must not diversify on behalf of share holders. Diversification is a strategic option used by many managers to improve their firm’s performance. While extensive literature investigates the diversification performance linkage, little agreements exist concerning the nature of this relationship. Both theoretical and empirical disagreements abound as the extensive research has neither reached a consensus nor any interpretable and acceptable findings. This paper looked at diversification as a corporate strategy and its effect on firm performance using Conglomerates in the Food and Beverages Sector listed on the ZSE. The study used a combination of primary and secondary data. Primary data was collected through interviews while secondary data were gathered from financial statements and management accounts. Data was analyzed using SPSS computer package. Three competing models were derived from literature (the linear model, Inverted U model and Intermediate model) and these were empirically assessed and tested.", "title": "" }, { "docid": "f11aa75465f087bcd059e2af1dc963d4", "text": "The process of translation is ambiguous, in that there are typically many valid translations for a given sentence. This gives rise to significant variation in parallel corpora, however, most current models of machine translation do not account for this variation, instead treating the problem as a deterministic process. To this end, we present a deep generative model of machine translation which incorporates a chain of latent variables, in order to account for local lexical and syntactic variation in parallel corpora. We provide an indepth analysis of the pitfalls encountered in variational inference for training deep generative models. 
Experiments on several different language pairs demonstrate that the model consistently improves over strong baselines.", "title": "" }, { "docid": "9e4b7e87229dfb02c2600350899049be", "text": "This paper presents an efficient and reliable swarm intelligence-based approach, namely elitist-mutated particle swarm optimization EMPSO technique, to derive reservoir operation policies for multipurpose reservoir systems. Particle swarm optimizers are inherently distributed algorithms, in which the solution for a problem emerges from the interactions between many simple individuals called particles. In this study the standard particle swarm optimization PSO algorithm is further improved by incorporating a new strategic mechanism called elitist-mutation to improve its performance. The proposed approach is first tested on a hypothetical multireservoir system, used by earlier researchers. EMPSO showed promising results, when compared with other techniques. To show practical utility, EMPSO is then applied to a realistic case study, the Bhadra reservoir system in India, which serves multiple purposes, namely irrigation and hydropower generation. To handle multiple objectives of the problem, a weighted approach is adopted. The results obtained demonstrate that EMPSO is consistently performing better than the standard PSO and genetic algorithm techniques. It is seen that EMPSO is yielding better quality solutions with less number of function evaluations. DOI: 10.1061/ ASCE 0733-9496 2007 133:3 192 CE Database subject headings: Reservoir operation; Optimization; Irrigation; Hydroelectric power generation.", "title": "" }, { "docid": "685e6338727b4ab899cffe2bbc1a20fc", "text": "Existing code similarity comparison methods, whether source or binary code based, are mostly not resilient to obfuscations. In the case of software plagiarism, emerging obfuscation techniques have made automated detection increasingly difficult. In this paper, we propose a binary-oriented, obfuscation-resilient method based on a new concept, longest common subsequence of semantically equivalent basic blocks, which combines rigorous program semantics with longest common subsequence based fuzzy matching. We model the semantics of a basic block by a set of symbolic formulas representing the input-output relations of the block. This way, the semantics equivalence (and similarity) of two blocks can be checked by a theorem prover. We then model the semantics similarity of two paths using the longest common subsequence with basic blocks as elements. This novel combination has resulted in strong resiliency to code obfuscation. We have developed a prototype and our experimental results show that our method is effective and practical when applied to real-world software.", "title": "" }, { "docid": "9072c5ad2fbba55bdd50b5969862f7c3", "text": "Parametricism has come to scene as an important style in both architectural design and construction where conventional Computer-Aided Design (CAD) tool has become substandard. Building Information Modeling (BIM) is a recent object-based parametric modeling tool for exploring the relationship between the geometric and non-geometric components of the model. The aim of this research is to explore the capabilities of BIM in achieving variety and flexibility in design extending from architectural to urban scale. This study proposes a method by using User Interface (UI) and Application Programming Interface (API) tools of BIM to generate a complex roof structure as a parametric family. 
This project demonstrates a dynamic variety in architectural scale. We hypothesized that if a function calculating the roof length is defined using a variety of inputs, it can later be applied to urban scale by utilizing a database of the inputs.", "title": "" }, { "docid": "e16d89d3a6b3d38b5823fae977087156", "text": "The payoff of a barrier option depends on whether or not a specified asset price, index, or rate reaches a specified level during the life of the option. Most models for pricing barrier options assume continuous monitoring of the barrier; under this assumption, the option can often be priced in closed form. Many (if not most) real contracts with barrier provisions specify discrete monitoring instants; there are essentially no formulas for pricing these options, and even numerical pricing is difficult. We show, however, that discrete barrier options can be priced with remarkable accuracy using continuous barrier formulas by applying a simple continuity correction to the barrier. The correction shifts the barrier away from the underlying by a factor of exp(βσ√Δt), where β ≈ 0.5826, σ is the underlying volatility, and Δt is the time between monitoring instants. The correction is justified both theoretically and experimentally.", "title": "" }, { "docid": "e29d3ab3d3b9bd6cbff1c2a79a6c3070", "text": "This paper presents a study of passive Dickson based envelope detectors operating in the quadratic small signal regime, specifically intended to be used in RF front end of sensing units of IoE sensor nodes. Critical parameters such as open-circuit voltage sensitivity (OCVS), charge time, input impedance, and output noise are studied and simplified circuit models are proposed to predict the behavior of the detector, resulting in practical design intuitions. There is strong agreement between model predictions, simulation results and measurements of 15 representative test structures that were fabricated in a 130 nm RF CMOS process.", "title": "" } ]
scidocsrr
2398eb8423daf5bcdd1ea7e733399da7
LineScout Technology Opens the Way to Robotic Inspection and Maintenance of High-Voltage Power Lines
[ { "docid": "41e714ba7f26bfab161863b8033d8ffe", "text": "Power line inspection and maintenance is a slowly but surely emerging field for robotics. This paper describes the control scheme implemented in LineScout technology, one of the first teleoperated obstacle crossing systems that has progressed to the stage of actually performing very-high-voltage power line jobs. Following a brief overview of the hardware and software architecture, key challenges associated with the objectives of achieving reliability, robustness and ease of operation are presented. The coordinated control through visual feedback of all motors needed for obstacle crossing calls for a coherent strategy, an effective graphical user interface and rules to ensure safe, predictable operation. Other features such as automatic weight balancing are introduced to lighten the workload and let the operator concentrate on inspecting power line components. Open architecture was considered for progressive improvements. The features required to succeed in making power line robots fully autonomous are also discussed.", "title": "" } ]
[ { "docid": "4afe6e46fb1a0eb825e7485c73edd75e", "text": "Cough is a reflex action of the respiratory tract that is used to clear the upper airways. Chronic cough lasting for more than 8 weeks is common in the community. The causes include cigarette smoking, exposure to cigarette smoke, and exposure to environmental pollution, especially particulates. Diseases causing chronic cough include asthma, eosinophilic bronchitis, gastro-oesophageal reflux disease, postnasal drip syndrome or rhinosinusitis, chronic obstructive pulmonary disease, pulmonary fibrosis, and bronchiectasis. Doctors should always work towards a clear diagnosis, considering common and rare illnesses. In some patients, no cause is identified, leading to the diagnosis of idiopathic cough. Chronic cough is often associated with an increased response to tussive agents such as capsaicin. Plastic changes in intrinsic and synaptic excitability in the brainstem, spine, or airway nerves can enhance the cough reflex, and can persist in the absence of the initiating cough event. Structural and inflammatory airway mucosal changes in non-asthmatic chronic cough could represent the cause or the traumatic response to repetitive coughing. Effective control of cough requires not only controlling the disease causing the cough but also desensitisation of cough pathways.", "title": "" }, { "docid": "6ad8da8198b1f61dfe0dc337781322d9", "text": "A model of human speech quality perception has been developed to provide an objective measure for predicting subjective quality assessments. The Virtual Speech Quality Objective Listener (ViSQOL) model is a signal based full reference metric that uses a spectro-temporal measure of similarity between a reference and a test speech signal. This paper describes the algorithm and compares the results with PESQ for common problems in VoIP: clock drift, associated time warping and jitter. The results indicate that ViSQOL is less prone to underestimation of speech quality in both scenarios than the ITU standard.", "title": "" }, { "docid": "0197bfeb753c9be004a1a091a12fa1dc", "text": "Correlation filter (CF) based trackers generally include two modules, i.e., feature representation and on-line model adaptation. In existing off-line deep learning models for CF trackers, the model adaptation usually is either abandoned or has closed-form solution to make it feasible to learn deep representation in an end-to-end manner. However, such solutions fail to exploit the advances in CF models, and cannot achieve competitive accuracy in comparison with the state-of-the-art CF trackers. In this paper, we investigate the joint learning of deep representation and model adaptation, where an updater network is introduced for better tracking on future frame by taking current frame representation, tracking result, and last CF tracker as input. By modeling the representor as convolutional neural network (CNN), we truncate the alternating direction method of multipliers (ADMM) and interpret it as a deep network of updater, resulting in our model for learning representation and truncated inference (RTINet). Experiments demonstrate that our RTINet tracker achieves favorable tracking accuracy against the state-of-the-art trackers and its rapid version can run at a real-time speed of 24 fps. The code and pre-trained models will be publicly available at https://github.com/tourmaline612/RTINet.", "title": "" }, { "docid": "99d612ac042f1c2f930b7310e6308946", "text": "Live streaming services are a growing form of social media. 
Most live streaming platforms allow viewers to communicate with each other and the broadcaster via a text chat. However, interaction in a text chat does not work well with too many users. Existing techniques to make text chat work with a larger number of participants often limit who can participate or how much users can participate. In this paper, we describe a new design for a text chat system that allows more people to participate without overwhelming users with too many messages. Our design strategically limits the number of messages a user sees based on the concept of neighborhoods, and emphasizes important messages through upvoting. We present a study comparing our system to a chat system similar to those found in commercial streaming services. Results of the study indicate that the Conversational Circle system is easier to understand and interact with, while supporting community among viewers and highlighting important content for the streamer.", "title": "" }, { "docid": "c09256d7daaff6e2fc369df0857a3829", "text": "Violence is a serious problem for cities like Chicago and has been exacerbated by the use of social media by gang-involved youths for taunting rival gangs. We present a corpus of tweets from a young and powerful female gang member and her communicators, which we have annotated with discourse intention, using a deep read to understand how and what triggered conversations to escalate into aggression. We use this corpus to develop a part-of-speech tagger and phrase table for the variant of English that is used, as well as a classifier for identifying tweets that express grieving and aggression.", "title": "" }, { "docid": "82d62feaa0c88789c44bbdc745ab21dc", "text": "This paper proposes a new approach to solve the problem of real-time vision-based hand gesture recognition with the combination of statistical and syntactic analyses. The fundamental idea is to divide the recognition problem into two levels according to the hierarchical property of hand gestures. The lower level of the approach implements the posture detection with a statistical method based on Haar-like features and the AdaBoost learning algorithm. With this method, a group of hand postures can be detected in real time with high recognition accuracy. The higher level of the approach implements the hand gesture recognition using the syntactic analysis based on a stochastic context-free grammar. The postures that are detected by the lower level are converted into a sequence of terminal strings according to the grammar. Based on the probability that is associated with each production rule, given an input string, the corresponding gesture can be identified by looking for the production rule that has the highest probability of generating the input string.", "title": "" }, { "docid": "7f51bdc05c4a1bf610f77b629d8602f7", "text": "Users' perceptions of risks have important implications for information security because individual users' actions can compromise entire systems. Therefore, there is a critical need to understand how users perceive and respond to information security risks. Previous research on perceptions of information security risk has chiefly relied on self-reported measures. 
Although these studies are valuable, risk perceptions are often associated with feelings—such as fear or doubt—that are difficult to measure accurately using survey instruments. Additionally, it is unclear how these self-reported measures map to actual security behavior. This paper contributes to this topic by demonstrating that risk-taking behavior is effectively predicted using electroencephalography (EEG) via event-related potentials (ERPs). Using the Iowa Gambling Task, a widely used technique shown to be correlated with real-world risky behaviors, we show that the differences in neural responses to positive and negative feedback strongly predict users’ information security behavior in a separate laboratory-based computing task. In addition, we compare the predictive validity of EEG measures to that of self-reported measures of information security risk perceptions. Our experiments show that self-reported measures are ineffective in predicting security behaviors under a condition in which information security is not salient. However, we show that, when security concerns become salient, self-reported measures do predict security behavior. Interestingly, EEG measures significantly predict behavior in both salient and non-salient conditions, which indicates that EEG measures are a robust predictor of security behavior.", "title": "" }, { "docid": "a5ce24236867a513a19d98bd46bf99d2", "text": "The mandala thangka, as a religious art in Tibetan Buddhism, is an invaluable cultural and artistic heritage. However, drawing a mandala is both time and effort consuming and requires mastery skills due to its intricate details. Retaining and digitizing this heritage is an unresolved research challenge to date. In this paper, we propose a computer-aided generation approach of mandala thangka patterns to address this issue. Specifically, we construct parameterized models of three stylistic patterns used in the interior mandalas of Nyingma school in Tibetan Buddhism according to their geometric features, namely the star, crescent and lotus flower patterns. Varieties of interior mandalas are successfully generated using these proposed patterns based on the hierarchical structures observed from hand drawn mandalas. The experimental results show that our approach can efficiently generate beautifully-layered colorful interior mandalas, which significantly reduces the time and efforts in manual production and, more importantly, contributes to the digitization of this great heritage.", "title": "" }, { "docid": "57aaa47e45e8542767e327cf683288cf", "text": "Mobile edge computing usually uses caching to support multimedia contents in 5G mobile Internet to reduce the computing overhead and latency. Mobile edge caching (MEC) systems are vulnerable to various attacks such as denial of service attacks and rogue edge attacks. This article investigates the attack models in MEC systems, focusing on both the mobile offloading and the caching procedures. In this article, we propose security solutions that apply reinforcement learning (RL) techniques to provide secure offloading to the edge nodes against jamming attacks. We also present lightweight authentication and secure collaborative caching schemes to protect data privacy. 
We evaluate the performance of the RL-based security solution for mobile edge caching and discuss the challenges that need to be addressed in the future.", "title": "" }, { "docid": "9faf99899f00e1f0cad97bb27be026d4", "text": "The High-Temperature Winkler (HTW) process developed by Rheinbraun is a fluidised-bed gasification process particularly suitable for various types of lignite, other reactive and ballast-rich coal types, biomass and different types of pre-treated residual waste. Depending on the application, the HTW process can be used for efficient conversion of these feedstocks to produce fuel gas, reduction gas or synthesis gas. The co-gasification of pre-treated municipal solid waste (of differing origins) and lignite was demonstrated in a commercial scale during normal production in the HTW demonstration plant at Berrenrath, Germany. Approx. 1,000 metric tons of pre-treated municipal solid waste was gasified without any problems together with dried lignite in the three test campaigns. The gasifier operated to the full satisfaction of all partners throughout the test campaigns. The demonstration project yielded useful operating experience and process engineering data (e.g. energy and mass balances, gas composition, emission estimates) and provided engineering reliability for the design of future plants and an important reference for further applications, most recently for MSW gasification in Japan. The Krupp Uhde PreCon process applies the High-Temperature-Winkler (HTW) gasification as a core technology for processing solid wastes, e.g. municipal solid waste, sewage sludge, auto shredder residue or residues from plastic recycling processes. The modules used are based on those used in mechanical pre-treatment and coal gasification being tested successfully in commercial plants for several years.", "title": "" }, { "docid": "8dfa105c269696ac8ef38feb5337396f", "text": "The use of multisensory approaches to reading and literacy instruction has proven not only beneficial but also pleasantly stimulating for students as well. The approach is especially valuable for students that are underachieving or have special needs; in which these types of students may have more learning ability obstacles than their peers. Multisensory lessons will prove useful to any population in order to help achieve the desired goal of any unit. Moreover, educators can also gain positive experiences from using multisensory methods with their students to insure an interactive, fun and beneficial alternative to traditional teaching of reading and literacy. Using Multisensory Methods in Reading and Literacy Instruction Learning how to read is the foundation of elementary education in which all young children will either learn with ease, or with difficulty and hesitation. Reading requires the memorization of phonemes, sight words and high frequency words in order to decode texts; and through active experiences, children construct their understanding of the world (Gunning, 2009). Being active learners in the classroom can come from many methods such as hands on, musical or a kinesthetic approach to instruction. According to Smolkin and Donovan (2003), comprehension-related activities need not wait until children are fluently decoding but may be used during comprehension acquisition. This means that in this stage, students can use multisensory methods to begin decoding grade appropriate texts even before they begin to read. 
This literature review examines the use of multisensory methods on students that are beginning to read and learn from literacy instruction. Learning Through The Senses: Below Grade Level Students In most cases, beginning readers will be taught different strategies using body movements, songs and rhymes in order to memorize the alphabet or learn phonics. Using a multisensory teaching approach means helping a child to learn through more than one of the senses (Bradford, 2008). Teachers unknowingly have always used methods to teach initial readers that require the different senses including, sight, hearing, touch, taste and even smell (Greenwell & Zygouris-Coe, 2012). Therefore, rather than offer more reading strategy instruction, teachers must offer a different kind of instruction—instruction that defines reading strategies as a set of resources for exploring both written texts and the texts of students’ lived realities (Park, 2012). Different approaches to reading instruction that include multisensory instructional approaches can be used on all types of students including under or over achieving students, special needs and English language learner students. A recent study conducted by Folakemi and Adebayo (2012) investigated the effects of multisensory in comparison to metacognitive instructional approaches on vocabulary of underachieving Nigerian secondary school students. The multisensory approach was tested against the metacognitive instruction approach on vocabulary amongst one hundred and twenty students, sixty male and sixty female. The investigation took place in an Ilorin, Nigeria secondary school in which only underachieving students who consistently scored below 40% in English language were selected for the study (Folakemi & Adebayo, 2012). The researches hypothesized students that underachieve will need more attention compared to their overachieving counterparts. They noticed throughout the experiment that although the less able students are still fully capable of learning, they have difficulties and all too often give up easily and soon become disillusioned. The interest in using a multisensory approach to combat underachieving students stems from noticing not only the teacher’s dull attitude, but in the student’s attitude toward traditional instructional approaches. Most teachers have failed to see the importance of using teaching aids, which can be used for presentation, practice, revision, and testing in the ESL classroom. Students’ interest is killed because they are bored with the traditional ‘talk and board’ teaching approach (Folakemi & Adebayo, 2012). Teaching efforts needed to be directed towards this set of students in which multisensory methods can have the potential to give students the tools needed to learn through the different senses. In the study, the students were separated into four levels of independent and dependent variables of treatment and control (Folakemi & Adebayo, 2012). Different control groups in which one group was taught vocabulary using the multisensory approach and another group was taught using metacognition instruction approaches were investigated in order to come to a conclusion. The researchers hypothesized that for the under achiever students, English language teachers would need an explicit and distinctive multisensory approach to teach them (Folakemi & Adebayo, 2012). They included textbooks, video, audiotapes, computer software and visual aids to provide support for the underachieving students. 
These manipulatives were used during class instruction time when teaching English language arts and most exclusively, vocabulary lessons. In order to test their findings, the researches used a variety of tests to collect data for the investigation; the study was conducted into several stages. Stage one is the pretest and stage two is the administration of the test while stage three included a posttest. All the 120 subjects selected for the study are divided into the three experimental and one control group, they all took part in the two tests. The test consisted of one hundred questions, twenty questions for each vocabulary dimension while each of the experimental teachers was attached to a particular group of underachievers. The results indicated that: “MSIA (Multisensory Instructional Approach) is the most effective, followed by MCIA (Metacognitive Instructional Approach) and MSIA+MCIA. This means that the three approaches are more effective than the conventional approach. Therefore, significant difference exists between the three instructional approaches and the conventional instructional approach. This result indicates that the multisensory instructional approaches had significant effect on students spelling achievement of the underachieving students” (Folakemi & Adebayo, 2012, p. 21). The significant difference in the overall achievement in English vocabulary of the underachieving students using the four instructional approaches concluded that the three experimental groups performed significantly better than the control group with the multisensory instructional approach group performing best (Folakemi & Adebayo, 2012). These results in regards to multisensory instruction positively affect how a student learns and is becoming a more widely used tactic within the classroom. In 2007, Wendy Johnson Donnell also conducted an experiment in which she tested the effects of multisensory instructional methods in underachieving third grade students. According Bowey (1995), children from lower socioeconomic groups and minority groups tend to be further behind their peers in early literacy skills on kindergarten entry and that this gap increases over time. This gap sets up these students to be behind in their schooling and potentially become underachieving as the curriculum becomes more rigorous. Donnell’s (2012) study focuses on students coming from a low-income area to test the effects of multisensory lessons within the classroom. Before the study was conducted, she studied students at several elementary schools in the Kansas City, Kansas area. Reading records and written work of the third grade students were analyzed to come to a conclusion that, “an obstacle to reading success for many children in the third grade was automaticity in the application of the alphabetic principle, specifically vowels” (Donnell, 2007, p. 469). After reaching this pre-research conclusion, Donnell decided to research a multisensory instruction in a whole-class setting. The study consisted of using 60 whole-class multisensory word study lessons for third grade students; each of the lessons took approximately 20 minutes for a total of 20 hours instruction inside the classroom. The lessons varied from children’s oral language, to phonological and phonemic awareness, to phonics, to specific vowel-spelling patterns. Because the district the research was being conducted in already adapted the Animal Literacy program, the lessons were built to incorporate Animal Literacy. 
The multisensory features of the word-study lessons are both receptive and productive, with auditory, visual, and kinesthetic components (Donnell, 2007). With each lesson requiring these components, individual lesson plans were developed to target a specific purpose such as phonics or phonemic awareness to insure that a level of commitment to memory was supported. During the experiment, the study required that all 450 participating third graders all stemmed from the same district where the socioeconomic status was similar in all participating elementary schools. A uniformed population was a key component in researching the multisensory lesson plans within the classrooms. Another key component in the research was providing all the contributing teachers that were going to incorporate these independent variable multisensory lesson plans with preparation and guidance during the research as well as being taught how to distribute tests. The dependent variables, tests used within the research, included the Names Test, Elementary Spelling Inventory, Dynamic Indicators of Basic Early Literacy Skills and Oral Reading Fluency assessments. To test reading comprehension, the Scholastic Reading Inventory Interactive was used as well. After the all dependent variable tests were given, teachers collected the assessments in order to compare student results. (Donnell, 2007). The results that developed from the research indicate that", "title": "" }, { "docid": "2c442933c4729e56e5f4f46b5b8071d6", "text": "Wireless body area networks consist of several devices placed on the human body, sensing vital signs and providing remote recognition of health disorders. Low power consumption is crucial in these networks. A new energy-efficient topology is provided in this paper, considering relay and sensor nodes' energy consumption and network maintenance costs. In this topology design, relay nodes, placed on the cloth, are used to help the sensor nodes forwarding data to the sink. Relay nodes' situation is determined such that the relay nodes' energy consumption merges the uniform distribution. Simulation results show that the proposed method increases the lifetime of the network with nearly uniform distribution of the relay nodes' energy consumption. Furthermore, this technique simultaneously reduces network maintenance costs and continuous replacements of the designer clothing. The proposed method also determines the way by which the network traffic is split and multipath routed to the sink.", "title": "" }, { "docid": "fa086058ad67602b9b4429f950e70c0f", "text": "The Telecare Medicine Information System (TMIS) has brought us a lot of conveniences. However, it may also reveal patients’ privacies and other important information. So the security of TMIS can be paid much attention to, in which identity authentication plays a very important role in protecting TMIS from being illegally used. To improve the situation, TMIS needs a more secure and more efficient authentication scheme. Recently, Yan and Li et al. have proposed a secure authentication scheme for the TMIS based on biometrics, claiming that it can withstand various attacks. In this paper, we present several security problems in their scheme as follows: (a) it cannot really achieve three-factor authentication; (b) it has design flaws at the password change phase; (c) users’ biometric may be locked out; (d) it fails to achieve users’ anonymous identity. To solve these problems, a new scheme using the theory of Secure Sketch is proposed. 
The thorough analysis shows that our scheme can provide stronger security than Yan-Li's protocol, despite the slightly higher computation cost at the client. What's more, the proposed scheme can achieve not only anonymity preservation but also session key agreement.", "title": "" }, { "docid": "b322d03c7f1fc90f03dd9c76047c5a32", "text": "We develop a probabilistic technique for colorizing grayscale natural images. In light of the intrinsic uncertainty of this task, the proposed probabilistic framework has numerous desirable properties. In particular, our model is able to produce multiple plausible and vivid colorizations for a given grayscale image and is one of the first colorization models to provide a proper stochastic sampling scheme. Moreover, our training procedure is supported by a rigorous theoretical framework that does not require any ad hoc heuristics and allows for efficient modeling and learning of the joint pixel color distribution. We demonstrate strong quantitative and qualitative experimental results on the CIFAR-10 dataset and the challenging ILSVRC 2012 dataset.", "title": "" }, { "docid": "bcdb0e6dcbab8fcccfea15edad00a761", "text": "This article presents the 1:4 wideband balun based on transmission lines that was awarded the first prize in the Wideband Baluns Student Design Competition. The competition was held during the 2014 IEEE Microwave Theory and Techniques Society (MTT-S) International Microwave Symposium (IMS2014). It was initiated in 2011 and is sponsored by the MTT-17 Technical Coordinating Committee. The winner must implement and measure a wideband balun of his or her own design and achieve the highest possible operational frequency from at least 1 MHz (or below) while meeting the following conditions: female subminiature version A (SMA) connectors are used to terminate all ports; a minimum impedance transformation ratio of two; a maximum voltage standing wave ratio (VSWR) of 2:1 at all ports; an insertion loss of less than 1 dB; a common-mode rejection ratio (CMRR) of more than 25 dB; and imbalance of less than 1 dB and 2.5°.", "title": "" }, { "docid": "e380fee1d044c15a5e5ba12436b8f511", "text": "Modern resolver-to-digital converters (RDCs) are typically implemented using DSP techniques to reduce hardware footprint and enhance system accuracy. However, in such implementations, both resolver sensor and ADC channel unbalances introduce significant errors, particularly in the speed output of the tracking loop. The frequency spectrum of the output error is variable depending on the resolver mechanical velocity. This paper presents the design of an autotuning output filter based on the interpolation of precomputed filters for a DSP-based RDC with a type-II tracking loop. A fourth-order peak and a second-order high-pass filter are designed and tested for an experimental RDC. The experimental results demonstrate significant reduction of the peak-to-peak error in the estimated speed.", "title": "" }, { "docid": "bbbbe3f926de28d04328f1de9bf39d1a", "text": "The detection of fraudulent financial statements (FFS) is an important and challenging issue that has served as the impetus for many academic studies over the past three decades. Although nonfinancial ratios are generally acknowledged as the key factor contributing to the FFS of a corporation, they are usually excluded from early detection models. 
The objective of this study is to increase the accuracy of FFS detection by integrating the rough set theory (RST) and support vector machines (SVM) approaches, while adopting both financial and nonfinancial ratios as predictive variables. The results showed that the proposed hybrid approach (RST+SVM) has the best classification rate as well as the lowest occurrence of Types I and II errors, and that nonfinancial ratios are indeed valuable information in FFS detection.", "title": "" }, { "docid": "e9497a16e9d12ea837c7a0ec44d71860", "text": "This article surveys existing and emerging disaggregation techniques for energy-consumption data and highlights signal features that might be used to sense disaggregated data in an easily installed and cost-effective manner.", "title": "" }, { "docid": "e8d102a7b00f81cefc4b1db043a041f8", "text": "Microelectrode measurements can be used to investigate both the intracellular pools of ions and membrane transport processes of single living cells. Microelectrodes can report these processes in the surface layers of root and leaf cells of intact plants. By careful manipulation of the plant, a minimum of disruption is produced and therefore the information obtained from these measurements most probably represents the 'in vivo' situation. Microelectrodes can be used to assay for the activity of particular transport systems in the plasma membrane of cells. Compartmental concentrations of inorganic metabolite ions have been measured by several different methods and the results obtained for the cytosol are compared. Ion-selective microelectrodes have been used to measure the activities of ions in the apoplast, cytosol and vacuole of single cells. New sensors for these microelectrodes are being produced which offer lower detection limits and the opportunity to measure other previously unmeasured ions. Measurements can be used to determine the intracellular steady-state activities or report the response of cells to environmental changes.", "title": "" }, { "docid": "d79db7b7ca4e54fe3aa768669f5ba705", "text": "Customers can participate in open innovation communities posting innovation ideas, which in turn can receive comments and votes from the rest of the community, highlighting user preferences. However, the final decision about implementing innovations corresponds to the company. This paper is focused on the customers' activity in open innovation communities. The aim is to identify the main topics of customers' interests in order to compare these topics with managerial decision-making. The results obtained reveal first that both votes and comments can be used to predict user preferences; and second, that customers tend to promote those innovations by reporting more comfort and benefits. In contrast, managerial decisions are more focused on the distinctive features associated with the brand image.", "title": "" } ]
scidocsrr
8037941ca0ae544a972c24e9b4ca9403
Robust Lexical Features for Improved Neural Network Named-Entity Recognition
[ { "docid": "ebc8966779ba3b9e6a768f4c462093f5", "text": "Most state-of-the-art approaches for named-entity recognition (NER) use semi supervised information in the form of word clusters and lexicons. Recently neural network-based language models have been explored, as they as a byproduct generate highly informative vector representations for words, known as word embeddings. In this paper we present two contributions: a new form of learning word embeddings that can leverage information from relevant lexicons to improve the representations, and the first system to use neural word embeddings to achieve state-of-the-art results on named-entity recognition in both CoNLL and Ontonotes NER. Our system achieves an F1 score of 90.90 on the test set for CoNLL 2003—significantly better than any previous system trained on public data, and matching a system employing massive private industrial query-log data.", "title": "" }, { "docid": "2afbb4e8963b9e6953fd6f7f8c595c06", "text": "Large-scale linguistically annotated corpora have played a crucial role in advancing the state of the art of key natural language technologies such as syntactic, semantic and discourse analyzers, and they serve as training data as well as evaluation benchmarks. Up till now, however, most of the evaluation has been done on monolithic corpora such as the Penn Treebank, the Proposition Bank. As a result, it is still unclear how the state-of-the-art analyzers perform in general on data from a variety of genres or domains. The completion of the OntoNotes corpus, a large-scale, multi-genre, multilingual corpus manually annotated with syntactic, semantic and discourse information, makes it possible to perform such an evaluation. This paper presents an analysis of the performance of publicly available, state-of-the-art tools on all layers and languages in the OntoNotes v5.0 corpus. This should set the benchmark for future development of various NLP components in syntax and semantics, and possibly encourage research towards an integrated system that makes use of the various layers jointly to improve overall performance.", "title": "" }, { "docid": "7ce314babce8509724f05beb4c3e5cdd", "text": "This paper presents WikiCoref, an English corpus annotated for anaphoric relations, where all documents are from the English version of Wikipedia. Our annotation scheme follows the one of OntoNotes with a few disparities. We annotated each markable with coreference type, mention type and the equivalent Freebase topic. Since most similar annotation efforts concentrate on very specific types of written text, mainly newswire, there is a lack of resources for otherwise over-used Wikipedia texts. The corpus described in this paper addresses this issue. We present a freely available resource we initially devised for improving coreference resolution algorithms dedicated to Wikipedia texts. Our corpus has no restriction on the topics of the documents being annotated, and documents of various sizes have been considered for annotation.", "title": "" } ]
[ { "docid": "2b77ac8576a02ddf79e5a447c3586215", "text": "A new scheme to sample signals defined on the nodes of a graph is proposed. The underlying assumption is that such signals admit a sparse representation in a frequency domain related to the structure of the graph, which is captured by the so-called graph-shift operator. Instead of using the value of the signal observed at a subset of nodes to recover the signal in the entire graph, the sampling scheme proposed here uses as input observations taken at a single node. The observations correspond to sequential applications of the graph-shift operator, which are linear combinations of the information gathered by the neighbors of the node. When the graph corresponds to a directed cycle (which is the support of time-varying signals), our method is equivalent to the classical sampling in the time domain. When the graph is more general, we show that the Vandermonde structure of the sampling matrix, critical when sampling time-varying signals, is preserved. Sampling and interpolation are analyzed first in the absence of noise, and then noise is considered. We then study the recovery of the sampled signal when the specific set of frequencies that is active is not known. Moreover, we present a more general sampling scheme, under which, either our aggregation approach or the alternative approach of sampling a graph signal by observing the value of the signal at a subset of nodes can be both viewed as particular cases. Numerical experiments illustrating the results in both synthetic and real-world graphs close the paper.", "title": "" }, { "docid": "abedd6f0896340a190750666b1d28d91", "text": "This study aimed to characterize the neural generators of the early components of the visual evoked potential (VEP) to isoluminant checkerboard stimuli. Multichannel scalp recordings, retinotopic mapping and dipole modeling techniques were used to estimate the locations of the cortical sources giving rise to the early C1, P1, and N1 components. Dipole locations were matched to anatomical brain regions visualized in structural magnetic resonance imaging (MRI) and to functional MRI (fMRI) activations elicited by the same stimuli. These converging methods confirmed previous reports that the C1 component (onset latency 55 msec; peak latency 90-92 msec) was generated in the primary visual area (striate cortex; area 17). The early phase of the P1 component (onset latency 72-80 msec; peak latency 98-110 msec) was localized to sources in dorsal extrastriate cortex of the middle occipital gyrus, while the late phase of the P1 component (onset latency 110-120 msec; peak latency 136-146 msec) was localized to ventral extrastriate cortex of the fusiform gyrus. Among the N1 subcomponents, the posterior N150 could be accounted for by the same dipolar source as the early P1, while the anterior N155 was localized to a deep source in the parietal lobe. These findings clarify the anatomical origin of these VEP components, which have been studied extensively in relation to visual-perceptual processes.", "title": "" }, { "docid": "d40a1b72029bdc8e00737ef84fdf5681", "text": "— Ability of deep networks to extract high level features and of recurrent networks to perform time-series inference have been studied. In view of universality of one hidden layer network at approximating functions under weak constraints, the benefit of multiple layers is to enlarge the space of dynamical systems approximated or, given the space, reduce the number of units required for a certain error. 
Traditionally shallow networks with manually engineered features are used, back-propagation extent is limited to one and attempt to choose a large number of hidden units to satisfy the Markov condition is made. In case of Markov models, it has been shown that many systems need to be modeled as higher order. In the present work, we present deep recurrent networks with longer back-propagation through time extent as a solution to modeling systems that are high order and to predicting ahead. We study epileptic seizure suppression electro-stimulator. Extraction of manually engineered complex features and prediction employing them has not allowed small low-power implementations as, to avoid possibility of surgery, extraction of any features that may be required has to be included. In this solution, a recurrent neural network performs both feature extraction and prediction. We prove analytically that adding hidden layers or increasing backpropagation extent increases the rate of decrease of approximation error. A Dynamic Programming (DP) training procedure employing matrix operations is derived. DP and use of matrix operations makes the procedure efficient particularly when using data-parallel computing. The simulation studies show the geometry of the parameter space, that the network learns the temporal structure, that parameters converge while model output displays same dynamic behavior as the system and greater than .99 Average Detection Rate on all real seizure data tried.", "title": "" }, { "docid": "ec5148e728e1cce8058638d500f3804e", "text": "Identifying extremist-associated conversations on Twitter is an open problem. Extremist groups have been leveraging Twitter (1) to spread their message and (2) to gain recruits. In this paper, we investigate the problem of determining whether a particular Twitter user engages in extremist conversation. We explore different Twitter metrics as proxies for misbehavior, including the sentiment of the user's published tweets, the polarity of the user's ego-network, and user mentions. We compare different known classifiers using these different features on manually annotated tweets involving the ISIS extremist group and find that combining all these features leads to the highest accuracy for detecting extremism on Twitter.", "title": "" }, { "docid": "a441c8669fa094658e95aeddfe88f86d", "text": "It has been claimed that recent developments in the research on the efficiency of code generation and on graphical input/output interfacing have made it possible to use a functional language to write efficient programs that can compete with industrial applications written in a traditional imperative language. As one of the early steps in verifying this claim, this paper describes a first attempt to implement a spreadsheet in a lazy, purely functional language. An interesting aspect of the design is that the language with which the user specifies the relations between the cells of the spreadsheet is itself a lazy, purely functional and higher order language as well, and not some special dedicated spreadsheet language. Another interesting aspect of the design is that the spreadsheet incorporates symbolic reduction and normalisation of symbolic expressions (including equations). This introduces the possibility of asking the system to prove equality of symbolic cell expressions: a property which can greatly enhance the reliability of a particular user-defined spreadsheet. The resulting application is by no means a fully mature product. 
It is not intended as a competitor to commercially available spreadsheets. However, with its higher order lazy functional language and its symbolic capabilities it may serve as an interesting candidate to fill the gap between calculators with purely functional expressions and full-featured spreadsheets with dedicated non-functional spreadsheet languages. This paper describes the global design and important implementation issues in the development of the application. The experience gained and lessons learnt during this project are treated. Performance and use of the resulting application are compared with related work.", "title": "" }, { "docid": "653f7e6f8aac3464eeac88a5c2f21f2e", "text": "The decentralized electronic currency system Bitcoin gives the possibility to execute transactions via direct communication between users, without the need to resort to third parties entrusted with legitimizing the concerned monetary value. In its current state of development a recent, fast-changing, volatile and highly mediatized technology the discourses that unfold within spaces of information and discussion related to Bitcoin can be analysed in light of their ability to produce at once the representations of value, the practices according to which it is transformed and evolves, and the devices allowing for its implementation. The literature on the system is a testament to how the Bitcoin debates do not merely spread, communicate and diffuse representation of this currency, but are closely intertwined with the practice of the money itself. By focusing its attention on a specific corpus, that of expert discourse, the article shows how, introducing and discussing a specific device, dynamic or operation as being in some way related to trust, this expert knowledge contributes to the very definition and shaping of this trust within the Bitcoin system ultimately contributing to perform the shared definition of its value as a currency.", "title": "" }, { "docid": "9826dcd8970429b1f3398128eec4335b", "text": "This article provides an overview of recent contributions to the debate on the ethical use of previously collected biobank samples, as well as a country report about how this issue has been regulated in Spain by means of the new Biomedical Research Act, enacted in the summer of 2007. By contrasting the Spanish legal situation with the wider discourse of international bioethics, we identify and discuss a general trend moving from the traditional requirements of informed consent towards new models more favourable to research in a post-genomic context.", "title": "" }, { "docid": "bc5c008b5e443b83b2a66775c849fffb", "text": "Continuous glucose monitoring (CGM) sensors are portable devices that allow measuring and visualizing the glucose concentration in real time almost continuously for several days and are provided with hypo/hyperglycemic alerts and glucose trend information. CGM sensors have revolutionized Type 1 diabetes (T1D) management, improving glucose control when used adjunctively to self-monitoring blood glucose systems. Furthermore, CGM devices have stimulated the development of applications that were impossible to create without a continuous-time glucose signal, e.g., real-time predictive alerts of hypo/hyperglycemic episodes based on the prediction of future glucose concentration, automatic basal insulin attenuation methods for hypoglycemia prevention, and the artificial pancreas. 
However, CGM sensors' lack of accuracy and reliability limited their usability in the clinical practice, calling upon the academic community for the development of suitable signal processing methods to improve CGM performance. The aim of this paper is to review the past and present algorithmic challenges of CGM sensors, to show how they have been tackled by our research group, and to identify the possible future ones.", "title": "" }, { "docid": "ff5700d97ad00fcfb908d90b56f6033f", "text": "How to design a secure steganography method is the problem that researchers have always been concerned about. Traditionally, the steganography method is designed in a heuristic way which does not take into account the detection side (steganalysis) fully and automatically. In this paper, we propose a new strategy that generates more suitable and secure covers for steganography with adversarial learning scheme, named SSGAN. The proposed architecture has one generative network called G, and two discriminative networks called D and S, among which the former evaluates the visual quality of the generated images for steganography and the latter assesses their suitableness for information hiding. Different from the existing work, we use WGAN instead of GAN for the sake of faster convergence speed, more stable training, and higher quality images, and also re-design the S net with more sophisticated steganalysis network. The experimental results prove the effectiveness of the proposed method.", "title": "" }, { "docid": "42d3f666325c3c9e2d61fcbad3c6659a", "text": "Supernumerary or accessory nostrils are a very rare type of congenital nasal anomaly, with only a few cases reported in the literature. They can be associated with such malformations as facial clefts and they can be unilateral or bilateral, with most cases reported being unilateral. The accessory nostril may or may not communicate with the ipsilateral nasal cavity, probably depending on the degree of embryological progression of the anomaly. A case of simple supernumerary left nostril with no nasal cavity communication and with a normally developed nose is presented. The surgical treatment is described and the different speculative theories related to the embryogenesis of supernumerary nostrils are also reviewed.", "title": "" }, { "docid": "062ef386998d3c47e1f3845dec55499c", "text": "The purpose of this study was to examine the effectiveness of the Brain Breaks® Physical Activity Solutions in changing attitudes toward physical activity of school children in a community in Poland. In 2015, a sample of 326 pupils aged 9-11 years old from 19 classes at three selected primary schools were randomly assigned to control and experimental groups within the study. During the classes, children in the experimental group performed physical activities two times per day in three to five minutes using Brain Breaks® videos for four months, while the control group did not use the videos during the test period. Students' attitudes toward physical activities were assessed before and after the intervention using the \"Attitudes toward Physical Activity Scale\". Repeated measures of ANOVA were used to examine the change from pre- to post-intervention. Overall, a repeated measures ANOVA indicated time-by-group interaction effects in 'Self-efficacy on learning with video exercises', F(1.32) = 75.28, p = 0.00, η2 = 0.19. Although the changes are minor, there were benefits of the intervention. 
It may be concluded that HOPSports Brain Breaks® Physical Activity Program contributes to better self-efficacy on learning while using video exercise of primary school children.", "title": "" }, { "docid": "3ca04efcb370e8a30ab5ad42d1d2d047", "text": "The exceptionally adhesive foot of the gecko remains clean in dirty environments by shedding contaminants with each step. Synthetic gecko-inspired adhesives have achieved similar attachment strengths to the gecko on smooth surfaces, but the process of contact self-cleaning has yet to be effectively demonstrated. Here, we present the first gecko-inspired adhesive that has matched both the attachment strength and the contact self-cleaning performance of the gecko's foot on a smooth surface. Contact self-cleaning experiments were performed with three different sizes of mushroom-shaped elastomer microfibres and five different sizes of spherical silica contaminants. Using a load-drag-unload dry contact cleaning process similar to the loads acting on the gecko foot during locomotion, our fully contaminated synthetic gecko adhesives could recover lost adhesion at a rate comparable to that of the gecko. We observed that the relative size of contaminants to the characteristic size of the microfibres in the synthetic adhesive strongly determined how and to what degree the adhesive recovered from contamination. Our approximate model and experimental results show that the dominant mechanism of contact self-cleaning is particle rolling during the drag process. Embedding of particles between adjacent fibres was observed for particles with diameter smaller than the fibre tips, and further studied as a temporary cleaning mechanism. By incorporating contact self-cleaning capabilities, real-world applications of synthetic gecko adhesives, such as reusable tapes, clothing closures and medical adhesives, would become feasible.", "title": "" }, { "docid": "7350c0433fe1330803403e6aa03a2f26", "text": "An introduction is provided to Multi-Entity Bayesian Networks (MEBN), a logic system that integrates First Order Logic (FOL) with Bayesian probability theory. MEBN extends ordinary Bayesian networks to allow representation of graphical models with repeated sub-structures. Knowledge is encoded as a collection of Bayesian network fragments (MFrags) that can be instantiated and combined to form highly complex situation-specific Bayesian networks. A MEBN theory (MTheory) implicitly represents a joint probability distribution over possibly unbounded numbers of hypotheses, and uses Bayesian learning to refine a knowledge base as observations accrue. MEBN provides a logical foundation for the emerging collection of highly expressive probability-based languages. A running example illustrates the representation and reasoning power of the MEBN formalism.", "title": "" }, { "docid": "ef99799bf977ba69a63c9f030fc65c7f", "text": "In this paper, we propose a novel transductive learning framework named manifold-ranking based image retrieval (MRBIR). Given a query image, MRBIR first makes use of a manifold ranking algorithm to explore the relationship among all the data points in the feature space, and then measures relevance between the query and all the images in the database accordingly, which is different from traditional similarity metrics based on pair-wise distance. 
In relevance feedback, if only positive examples are available, they are added to the query set to improve the retrieval result; if examples of both labels can be obtained, MRBIR discriminately spreads the ranking scores of positive and negative examples, considering the asymmetry between these two types of images. Furthermore, three active learning methods are incorporated into MRBIR, which select images in each round of relevance feedback according to different principles, aiming to maximally improve the ranking result. Experimental results on a general-purpose image database show that MRBIR attains a significant improvement over existing systems from all aspects.", "title": "" }, { "docid": "b75dd43655a70eaf0aaef43826de4337", "text": "Plagiarism detection has been considered as a classification problem which can be approximated with intrinsic strategies, considering self-based information from a given document, and external strategies, considering comparison techniques between a suspicious document and different sources. In this work, both intrinsic and external approaches for plagiarism detection are presented. First, the main contribution for intrinsic plagiarism detection is associated to the outlier detection approach for detecting changes in the author's style. Then, the main contribution for the proposed external plagiarism detection is the space reduction technique to reduce the complexity of this plagiarism detection task. Results shows that our approach is highly competitive with respect to the leading research teams in plagiarism detection.", "title": "" }, { "docid": "4cd605375f5d27c754e4a21b81b39f1a", "text": "The dominant paradigm in drug discovery is the concept of designing maximally selective ligands to act on individual drug targets. However, many effective drugs act via modulation of multiple proteins rather than single targets. Advances in systems biology are revealing a phenotypic robustness and a network structure that strongly suggests that exquisitely selective compounds, compared with multitarget drugs, may exhibit lower than desired clinical efficacy. This new appreciation of the role of polypharmacology has significant implications for tackling the two major sources of attrition in drug development--efficacy and toxicity. Integrating network biology and polypharmacology holds the promise of expanding the current opportunity space for druggable targets. However, the rational design of polypharmacology faces considerable challenges in the need for new methods to validate target combinations and optimize multiple structure-activity relationships while maintaining drug-like properties. Advances in these areas are creating the foundation of the next paradigm in drug discovery: network pharmacology.", "title": "" }, { "docid": "73ec43c5ed8e245d0a1ff012a6a67f76", "text": "There is much signal processing devoted to detection and estimation. Detection is the task of determining if a specific signal set is present in an observation, while estimation is the task of obtaining the values of the parameters describing the signal. Often the signal is complicated or is corrupted by interfering signals or noise. To facilitate the detection and estimation of signal sets, the observation is decomposed by a basis set which spans the signal space [1]. For many problems of engineering interest, the class of signals being sought are periodic, which leads quite naturally to a decomposition by a basis consisting of simple periodic functions, the sines and cosines. 
The classic Fourier transform is the mechanism by which we are able to perform this decomposition. By necessity, every observed signal we process must be of finite extent. The extent may be adjustable and selectable, but it must be finite. Processing a finite-duration observation imposes interesting and interacting considerations on the harmonic analysis. These considerations include detectability of tones in the presence of nearby strong tones, resolvability of similar-strength nearby tones, resolvability of shifting tones, and biases in estimating the parameters of any of the aforementioned signals. For practicality, the data we process are N uniformly spaced samples of the observed signal. For convenience, N is highly composite, and we will assume N is even. The harmonic estimates we obtain through the discrete Fourier transform (DFT) are N uniformly spaced samples of the associated periodic spectra. This approach is elegant and attractive when the processing scheme is cast as a spectral decomposition in an N-dimensional orthogonal vector space [2]. Unfortunately, in many practical situations, to obtain meaningful results this elegance must be compromised. One such t = 0, 1, ..., N-1, N, N+1.", "title": "" }, { "docid": "f3820e94a204cd07b04e905a9b1e4834", "text": "Successful analysis of player skills in video games has important impacts on the process of enhancing player experience without undermining their continuous skill development. Moreover, player skill analysis becomes more intriguing in team-based video games because such form of study can help discover useful factors in effective team formation. In this paper, we consider the problem of skill decomposition in MOBA (MultiPlayer Online Battle Arena) games, with the goal to understand what player skill factors are essential for the outcome of a game match. To understand the construct of MOBA player skills, we utilize various skill-based predictive models to decompose player skills into interpretative parts, the impact of which are assessed in statistical terms. We apply this analysis approach on two widely known MOBAs, namely League of Legends (LoL) and Defense of the Ancients 2 (DOTA2). The finding is that base skills of in-game avatars, base skills of players, and players' champion-specific skills are three prominent skill components influencing LoL's match outcomes, while those of DOTA2 are mainly impacted by in-game avatars' base skills but not much by the other two.", "title": "" }, { "docid": "8eee03189f757493797ed5be5f72c0fa", "text": "The long-term memory of most connectionist systems lies entirely in the weights of the system. Since the number of weights is typically fixed, this bounds the total amount of knowledge that can be learned and stored. Though this is not normally a problem for a neural network designed for a specific task, such a bound is undesirable for a system that continually learns over an open range of domains. To address this, we describe a lifelong learning system that leverages a fast, though non-differentiable, content-addressable memory which can be exploited to encode both a long history of sequential episodic knowledge and semantic knowledge over many episodes for an unbounded number of domains. 
This opens the door for investigation into transfer learning, and leveraging prior knowledge that has been learned over a lifetime of experiences to new domains.", "title": "" }, { "docid": "b95e6cc4d0e30e0f14ecc757e583502e", "text": "Over the last decade, it has become well-established that a captcha’s ability to withstand automated solving lies in the difficulty of segmenting the image into individual characters. The standard approach to solving captchas automatically has been a sequential process wherein a segmentation algorithm splits the image into segments that contain individual characters, followed by a character recognition step that uses machine learning. While this approach has been effective against particular captcha schemes, its generality is limited by the segmentation step, which is hand-crafted to defeat the distortion at hand. No general algorithm is known for the character collapsing anti-segmentation technique used by most prominent real world captcha schemes. This paper introduces a novel approach to solving captchas in a single step that uses machine learning to attack the segmentation and the recognition problems simultaneously. Performing both operations jointly allows our algorithm to exploit information and context that is not available when they are done sequentially. At the same time, it removes the need for any hand-crafted component, making our approach generalize to new captcha schemes where the previous approach can not. We were able to solve all the real world captcha schemes we evaluated accurately enough to consider the scheme insecure in practice, including Yahoo (5.33%) and ReCaptcha (33.34%), without any adjustments to the algorithm or its parameters. Our success against the Baidu (38.68%) and CNN (51.09%) schemes that use occluding lines as well as character collapsing leads us to believe that our approach is able to defeat occluding lines in an equally general manner. The effectiveness and universality of our results suggests that combining segmentation and recognition is the next evolution of catpcha solving, and that it supersedes the sequential approach used in earlier works. More generally, our approach raises questions about how to develop sufficiently secure captchas in the future.", "title": "" } ]
scidocsrr
3c6a1b106bbfa44e6e01fd7e6308e884
An Internet-Wide View into DNS Lookup Patterns
[ { "docid": "6b72622a404dc824475e9cd62d509d5c", "text": "The Domain Name System (DNS) is an essential protocol used by both legitimate Internet applications and cyber attacks. For example, botnets rely on DNS to support agile command and control infrastructures. An effective way to disrupt these attacks is to place malicious domains on a “blocklist” (or “blacklist”) or to add a filtering rule in a firewall or network intrusion detection system. To evade such security countermeasures, attackers have used DNS agility, e.g., by using new domains daily to evade static blacklists and firewalls. In this paper we propose Notos, a dynamic reputation system for DNS. The premise of this system is that malicious, agile use of DNS has unique characteristics and can be distinguished from legitimate, professionally provisioned DNS services. Notos uses passive DNS query data and analyzes the network and zone features of domains. It builds models of known legitimate domains and malicious domains, and uses these models to compute a reputation score for a new domain indicative of whether the domain is malicious or legitimate. We have evaluated Notos in a large ISP’s network with DNS traffic from 1.4 million users. Our results show that Notos can identify malicious domains with high accuracy (true positive rate of 96.8%) and low false positive rate (0.38%), and can identify these domains weeks or even months before they appear in", "title": "" } ]
[ { "docid": "dbb6a635df0cae8d1994c94947e235db", "text": "We study the problem of allocating indivisible goods among n agents in a fair manner. For this problem, maximin share (MMS) is a well-studied solution concept which provides a fairness threshold. Specifically, maximin share is defined as the minimum utility that an agent can guarantee for herself when asked to partition the set of goods into n bundles such that the remaining (n−1) agents pick their bundles adversarially. An allocation is deemed to be fair if every agent gets a bundle whose valuation is at least her maximin share. Even though maximin shares provide a natural benchmark for fairness, it has its own drawbacks and, in particular, it is not sufficient to rule out unsatisfactory allocations. Motivated by these considerations, in this work we define a stronger notion of fairness, called groupwise maximin share guarantee (GMMS). In GMMS, we require that the maximin share guarantee is achieved not just with respect to the grand bundle, but also among all the subgroups of agents. Hence, this solution concept strengthens MMS and provides an ex-post fairness guarantee. We show that in specific settings, GMMS allocations always exist. We also establish the existence of approximate GMMS allocations under additive valuations, and develop a polynomial-time algorithm to find such allocations. Moreover, we establish a scale of fairness wherein we show that GMMS implies approximate envy freeness. Finally, we empirically demonstrate the existence of GMMS allocations in a large set of randomly generated instances. For the same set of instances, we additionally show that our algorithm achieves an approximation factor better than the established, worst-case bound.", "title": "" }, { "docid": "095ea6721c07be32db3c34da986ab6a9", "text": "The skin is often viewed as a static barrier that protects the body from the outside world. Emphasis on studying the skin's architecture and biomechanics in the context of restoring skin movement and function is often ignored. It is fundamentally important that if skin is to be modelled or developed, we do not only focus on the biology of skin but also aim to understand its mechanical properties and structure in living dynamic tissue. In this review, we describe the architecture of skin and patterning seen in skin as viewed from a surgical perspective and highlight aspects of the microanatomy that have never fully been realized and provide evidence or concepts that support the importance of studying living skin's dynamic behaviour. We highlight how the structure of the skin has evolved to allow the body dynamic form and function, and how injury, disease or ageing results in a dramatic changes to the microarchitecture and changes physical characteristics of skin. Therefore, appreciating the dynamic microanatomy of skin from the deep fascia through to the skin surface is vitally important from a dermatological and surgical perspective. This focus provides an alternative perspective and approach to addressing skin pathologies and skin ageing.", "title": "" }, { "docid": "0a67326bde22c1a9b2d6407d141d2a7a", "text": "BACKGROUND\nReducing fruit and vegetable (F&V) prices is a frequently considered policy to improve dietary habits in the context of health promotion. 
However, evidence on the effectiveness of this intervention is limited.\n\n\nOBJECTIVE\nThe objective was to examine the effects of a 50% price discount on F&Vs or nutrition education or a combination of both on supermarket purchases.\n\n\nDESIGN\nA 6-mo randomized controlled trial within Dutch supermarkets was conducted. Regular supermarket shoppers were randomly assigned to 1 of 4 conditions: 50% price discounts on F&Vs, nutrition education, 50% price discounts plus nutrition education, or no intervention. A total of 199 participants provided baseline data; 151 (76%) were included in the final analysis. F&V purchases were measured by using supermarket register receipts at baseline, at 1 mo after the start of the intervention, at 3 mo, at 6 mo (end of the intervention period), and 3 mo after the intervention ended (9 mo).\n\n\nRESULTS\nAdjusted multilevel models showed significantly higher F&V purchases (per household/2 wk) as a result of the price discount (+3.9 kg; 95% CI: 1.5, 6.3 kg) and the discount plus education intervention (+5.6 kg; 95% CI: 3.2, 7.9 kg) at 6 mo compared with control. Moreover, the percentage of participants who consumed recommended amounts of F&Vs (≥400 g/d) increased from 42.5% at baseline to 61.3% at 6 mo in both discount groups (P = 0.03). Education alone had no significant effect.\n\n\nCONCLUSIONS\nDiscounting F&Vs is a promising intervention strategy because it resulted in substantially higher F&V purchases, and no adverse effects were observed. Therefore, pricing strategies form an important focus for future interventions or policy. However, the long-term effects and the ultimate health outcomes require further investigation. This trial was registered at the ISRCTN Trial Register as number ISRCTN56596945 and at the Dutch Trial Register (http://www.trialregister.nl/trialreg/index.asp) as number NL22568.029.08.", "title": "" }, { "docid": "e7c9330a2c454a49508748554b70af6b", "text": "The question of whether Tourette's syndrome (TS) and trichotillomania (TTM) are best conceptualized as obsessive-compulsive spectrum disorders was raised by family studies demonstrating a close relationship between TS and obsessive-compulsive disorder (OCD), and by psychopharmacological research indicating that both TTM and OCD respond more robustly to clomipramine than to desipramine. A range of studies have subsequently allowed comparison of the phenomenology, psychobiology, and management of TS and TTM, with that of OCD. Here we briefly review this literature. The data indicate that there is significant psychobiological overlap between TS and OCD, supporting the idea that TS can be conceptualized as an OCD spectrum disorder. TTM and OCD have only partial overlap in their phenomenology and psychobiology, but there are a number of reasons for why it may be useful to classify TTM and other habit disorders as part of the obsessive-compulsive spectrum of disorders.", "title": "" }, { "docid": "f9b11e55be907175d969cd7e76803caf", "text": "In this paper, we consider the multivariate Bernoulli distribution as a model to estimate the structure of graphs with binary nodes. This distribution is discussed in the framework of the exponential family, and its statistical properties regarding independence of the nodes are demonstrated. Importantly the model can estimate not only the main effects and pairwise interactions among the nodes but also is capable of modeling higher order interactions, allowing for the existence of complex clique effects. 
We compare the multivariate Bernoulli model with existing graphical inference models – the Ising model and the multivariate Gaussian model, where only the pairwise interactions are considered. On the other hand, the multivariate Bernoulli distribution has an interesting property in that independence and uncorrelatedness of the component random variables are equivalent. Both the marginal and conditional distributions of a subset of variables in the multivariate Bernoulli distribution still follow the multivariate Bernoulli distribution. Furthermore, the multivariate Bernoulli logistic model is developed under generalized linear model theory by utilizing the canonical link function in order to include covariate information on the nodes, edges and cliques. We also consider variable selection techniques such as LASSO in the logistic model to impose sparsity structure on the graph. Finally, we discuss extending the smoothing spline ANOVA approach to the multivariate Bernoulli logistic model to enable estimation of non-linear effects of the predictor variables.", "title": "" }, { "docid": "1f5bcb6bc3fde7bc294240ce652ae4ab", "text": "Rock climbing has increased in popularity as both a recreational physical activity and a competitive sport. Climbing is physiologically unique in requiring sustained and intermittent isometric forearm muscle contractions for upward propulsion. The determinants of climbing performance are not clear but may be attributed to trainable variables rather than specific anthropometric characteristics.", "title": "" }, { "docid": "3734b4a9e6fe0d031a24e3cb92f22f95", "text": "In this work we address joint object category and instance recognition in the context of RGB-D (depth) cameras. Motivated by local distance learning, where a novel view of an object is compared to individual views of previously seen objects, we define a view-to-object distance where a novel view is compared simultaneously to all views of a previous object. This novel distance is based on a weighted combination of feature differences between views. We show, through jointly learning per-view weights, that this measure leads to superior classification performance on object category and instance recognition. More importantly, the proposed distance allows us to find a sparse solution via Group-Lasso regularization, where a small subset of representative views of an object is identified and used, with the rest discarded. This significantly reduces computational cost without compromising recognition accuracy. We evaluate the proposed technique, Instance Distance Learning (IDL), on the RGB-D Object Dataset, which consists of 300 object instances in 51 everyday categories and about 250,000 views of objects with both RGB color and depth. We empirically compare IDL to several alternative state-of-the-art approaches and also validate the use of visual and shape cues and their combination.", "title": "" }, { "docid": "d5a2fa9be5bbce163de803a7583503f8", "text": "We compared the possibility of detecting hidden objects covered with various types of clothing by using passive imagers operating in a terahertz (THz) range at 1.2 mm (250 GHz) and a mid-wavelength infrared at 3-6 μm (50-100 THz). We investigated theoretical limitations, performance of imagers, and physical properties of fabrics in both the regions. In order to investigate the time stability of detection, we performed measurements in sessions each lasting 30 min. We present a theoretical comparison of two spectra, as well as the results of experiments. 
In order to compare the capabilities of passive imaging of hidden objects, we combined the properties of textiles, performance of imagers, and properties of radiation in both spectral ranges. The paper presents the comparison of the original results of measurement sessions for the two spectrums with analysis.", "title": "" }, { "docid": "5e64e36e76f4c0577ae3608b6e715a1f", "text": "Deep learning has recently become very popular on account of its incredible success in many complex datadriven applications, including image classification and speech recognition. The database community has worked on data-driven applications for many years, and therefore should be playing a lead role in supporting this new wave. However, databases and deep learning are different in terms of both techniques and applications. In this paper, we discuss research problems at the intersection of the two fields. In particular, we discuss possible improvements for deep learning systems from a database perspective, and analyze database applications that may benefit from deep learning techniques.", "title": "" }, { "docid": "a2105c4e70bf29dba01e30066667a22c", "text": "Background: Acne vulgaris is a common skin disease that affects not only teenagers but also the general population. Although acne is not physically disabling, its psychological impact can be striking, contributing to low self-esteem, depression, and anxiety. As a result, there is a significant demand for effective acne therapies. Antihistamine is a widely used medication to treat several allergic skin conditions and yet it also has been found to decrease complications of acne and improve acne symptoms. For the severe cystic acne vulgaris, oral retinoids such as isotretinoin is the primary treatment; however, health care providers hesitate to prescribe isotretinoin due to its adverse drug reactions. On the other hand, antihistamine is well known by its safe and minimal side effects. Can an antihistamine intervention in standardized treatment of acne vulgaris significantly impact the improvement of acne symptoms and reduce sebum production? Methods: An exhaustive search was conducted by MEDLINE-OVID, CINAHL, UptoDate, Web of Science, Google scholar, MEDLINE-PubMed, Clinicalkey, and ProQuest by using keywords: acne vulgaris and antihistamine. Relevant articles were assessed for quality using GRADE. Results: After the exhaustive search, two studies met the inclusion criteria and eligibility criteria. Effect of antihistamine as an adjuvant treatment of isotretinoin in acne:a randomized, controlled comparative study contains the comparison of 20 patients with moderate acne are treated with isotretinoin and another 20 patients with moderate acne are treated with additional antihistamine. Identification of Histamine Receptors and Reduction of Squalene Levels by an Antihistamine in Sebocytes was conducted on human tissue to verify the decrease of sebum production by the antihistamine’s effect. Conclusion: Both studies demonstrate the usefulness of an histamine antagonist in reducing sebum production and improving acne symptoms. Due to its low cost and safety, a recommendation can be made for antihistamine to treat acne vulgaris as an adjuvant therapy to standardized treatment. 
", "title": "" }, { "docid": "6cd5b8ef199d926bccc583b7e058d9ee", "text": "Over the last three decades, a large number of evolutionary algorithms have been developed for solving multi-objective optimization problems. However, there lacks an upto-date and comprehensive software platform for researchers to properly benchmark existing algorithms and for practitioners to apply selected algorithms to solve their real-world problems. The demand of such a common tool becomes even more urgent, when the source code of many proposed algorithms has not been made publicly available. To address these issues, we have developed a MATLAB platform for evolutionary multi-objective optimization in this paper, called PlatEMO, which includes more than 50 multiobjective evolutionary algorithms and more than 100 multi-objective test problems, along with several widely used performance indicators. With a user-friendly graphical user interface, PlatEMO enables users to easily compare several evolutionary algorithms at one time and collect statistical results in Excel or LaTeX files. More importantly, PlatEMO is completely open source, such that users are able to develop new algorithms on the basis of it. This paper introduces the main features of PlatEMO and illustrates how to use it for performing comparative experiments, embedding new algorithms, creating new test problems, and developing performance indicators. Source code of PlatEMO is now available at: http://bimk.ahu.edu.cn/index.php?s=/Index/Software/index.html.", "title": "" }, { "docid": "2c40abd8bd489c1cd3d4c80efdfbaf10", "text": "In the recent past, the computer vision community has developed centralized benchmarks for the performance evaluation of a variety of tasks, including generic object and pedestrian detection, 3D reconstruction, optical flow, single-object short-term tracking, and stereo estimation. Despite potential pitfalls of such benchmarks, they have proved to be extremely helpful to advance the state of the art in the respective area. Interestingly, there has been rather limited work on the standardization of quantitative benchmarks for multiple target tracking. One of the few exceptions is the well-known PETS dataset [20], targeted primarily at surveillance applications. Despite being widely used, it is often applied inconsistently, for example involving using different subsets of the available data, different ways of training the models, or differing evaluation scripts. This paper describes our work toward a novel multiple object tracking benchmark aimed to address such issues. We discuss the challenges of creating such a framework, collecting existing and new data, gathering state-of-the-art methods to be tested on the datasets, and finally creating a unified evaluation system. With MOTChallenge we aim to pave the way toward a unified evaluation framework for a more meaningful quantification of multi-target tracking.", "title": "" }, { "docid": "d6f235abee285021a733b79b6d9c4411", "text": "We address the problem of inverse reinforcement learning in Markov decision processes where the agent is risk-sensitive. In particular, we model risk-sensitivity in a reinforcement learning framework by making use of models of human decision-making having their origins in behavioral psychology, behavioral economics, and neuroscience. 
We propose a gradient-based inverse reinforcement learning algorithm that minimizes a loss function defined on the observed behavior. We demonstrate the performance of the proposed technique on two examples, the first of which is the canonical Grid World example and the second of which is a Markov decision process modeling passengers decisions regarding ride-sharing. In the latter, we use pricing and travel time data from a ride-sharing company to construct the transition probabilities and rewards of the Markov decision process.", "title": "" }, { "docid": "5d040acb82a1c4707d0f9acc47bb85bd", "text": "Humanoid robots offer many physical design choices such as voice frequency and head dimensions. We used hierarchical statistical mediation analysis to trace differences in people's mental model of robots from these choices. In an experiment, a humanoid robot gave participants online advice about their health. We used mediation analysis to identify the causal path from the robot's voice and head dimensions to the participants' mental model, and to their willingness to follow the robot's advice. The male robot voice predicted impressions of a knowledgeable robot, whose advice participants said they would follow. Increasing the voice's fundamental frequency reduced this effect. The robot's short chin length (but not its forehead dimensions) predicted impressions of a sociable robot, which also predicted intentions to take the robot's advice. We discuss the use of this approach for designing robots for different roles, when people's mental model of the robot matters.", "title": "" }, { "docid": "69c0b722d5492415046ac28d55f0914b", "text": "BACKGROUND\nAllergic contact dermatitis caused by (meth)acrylates is well known, both in occupational and in non-occupational settings. Contact hypersensitivity to electrocardiogram (ECG) electrodes containing (meth)acrylates is rarely reported.\n\n\nOBJECTIVE\nTo report the first case of contact dermatitis caused by acrylic acid impurity in ECG electrodes.\n\n\nMATERIALS AND METHODS\nPatch tests were performed with separate components of electrodes and some (meth)acrylates. This was followed by high-performance liquid chromatography of electrode hydrogel.\n\n\nRESULTS\nThe patient was contact-allergic to electrode hydrogel but not to its separate constituents. Positive reactions were observed to 2-hydroxyethyl methacrylate (2-HEMA), 2-hydroxypropyl methacrylate (2-HPMA) and ethyleneglycol dimethacrylate (EGDMA). Subsequent analysis showed that the electrode hydrogel contained acrylic acid as an impurity. The latter was subsequently patch tested, with a positive result.\n\n\nCONCLUSION\nThe sensitization resulting from direct contact with ECG electrodes was caused by acrylic acid, present as an impurity in ECG electrodes. Positive reactions to 2-HEMA, 2-HPMA and EGDMA are considered to be cross-reactions.", "title": "" }, { "docid": "0ddcfd27e5eedfdbc9454d7eba80a81e", "text": "AI is a science, not merely technology, engineering. It cannot find an identity (ubi consistam) in a technology, or set of technologies, and we know that such an identification is quite dangerous. AI is the science of possible forms of intelligence, both individual and collective. To rephrase Doyle's claim, AI is the discipline aimed at understanding intelligent beings by constructing intelligent systems. 
Since intelligence is mainly a social phenomenon and is due to the necessity of social life, we have to construct socially intelligent systems to understand it, and we have to build social entities to have intelligent systems. If we want that the computer is not \"just a glorified pencil\" [Popper, BBC interview), that it is not a simple tool but a collaborator [Grosz, 1995], an assistant, we need to model social intelligence in the computer. If we want to embed intelligent functions in both the virtual and physical environment (ubiquitous computing) in order to support human action, these distributed intelligences must be social to understand and help the users, and to coordinate, compete and collaborate with each other. In fact Social Intelligence is one of the ways AI responded to and went out of its crisis. It is one of the way it is \"back to the future\", trying to recover all the original challenges of the discipline, its strong scientific identity, its cultural role and influence, that in the '60s and 70s gave rise to the Cognitive Science, and now wil l strongly impact on the social sciences. This stream is part of the new AI of the '90s where systems and models are conceived for reasoning and acting in open unpredictable worlds, with limited and uncertain knowledge, in real time, with bounded (both cognitive and material) resources, with hybrid architectures, interfering -either cooperatively or competitivelywith other systems. The new password is interaction [Bobrow, 1991): interaction with an evolving environment; among several, distributed and heterogeneous artificial systems in a network; with human users; among humans through computers. Important work has been done in AI (in several domains from D A I to HCI , from Agents to logic for action, knowledge, and speech acts) for modeling social intelligence and behavior. In my talk I wi l l just attempt a principled systematization. On the one side, 1 wil l illustrate what I believe to be the basic ontological categories for social action, structure, and mind; letting, first, sociality (social action, social structure) emerge bottom-up from the action and intelligence of individual agents in a common world, and, second, examine some aspects of the way-down: how emergent collective phenomena shape the individual mind. In this paper I wil l focus on the bottom-up perspective. On the other side, I wi l l propose some critical reflections on current approaches and future directions. Doing this I wi l l in particular stress five points.", "title": "" }, { "docid": "a2a8f1011606de266c3b235f31f95bee", "text": "In this paper, we look at three different methods of extracting the argumentative structure from a piece of natural language text. These methods cover linguistic features, changes in the topic being discussed and a supervised machine learning approach to identify the components of argumentation schemes, patterns of human reasoning which have been detailed extensively in philosophy and psychology. For each of these approaches we achieve results comparable to those previously reported, whilst at the same time achieving a more detailed argument structure. 
Finally, we use the results from these individual techniques to apply them in combination, further improving the argument structure identification.", "title": "" }, { "docid": "7ebd355d65c8de8607da0363e8c86151", "text": "In this letter, we compare the scanning beams of two leaky-wave antennas (LWAs), respectively, loaded with capacitive and inductive radiation elements, which have not been fully discussed in previous publications. It is pointed out that an LWA with only one type of radiation element suffers from a significant gain fluctuation over its beam-scanning band. To remedy this problem, we propose an LWA alternately loaded with inductive and capacitive elements along the host transmission line. The proposed LWA is able to steer its beam continuously from backward to forward with constant gain. A microstrip-based LWA is designed on the basis of the proposed method, and the measurement of its fabricated prototype demonstrates and confirms the desired results. This design method can widely be used to obtain LWAs with constant gain based on a variety of TLs.", "title": "" }, { "docid": "d339f7d94334a2ccc256c29c63fd936f", "text": "The random waypoint model is a frequently used mobility model for simulation–based studies of wireless ad hoc networks. This paper investigates the spatial node distribution that results from using this model. We show and interpret simulation results on a square and circular system area, derive an analytical expression of the expected node distribution in one dimension, and give an approximation for the two–dimensional case. Finally, the concept of attraction areas and a modified random waypoint model, the random borderpoint model, is analyzed by simulation.", "title": "" }, { "docid": "bc37250f9421f6657252ce286703e85c", "text": "This paper introduces a method for producing high quality hand motion using a small number of markers. The proposed \"handover\" animation technique constructs joint angle trajectories with the help of a reference database. Utilizing principle component analysis (PCA) applied to the database, the system automatically determines the sparse marker set to record. Further, to produce hand animation, PCA is used along with a locally weighted regression (LWR) model to reconstruct joint angles. The resulting animation is a full-resolution hand which reflects the original motion without the need for capturing a full marker set. Comparing the technique to other methods reveals improvement over the state of the art in terms of the marker set selection. In addition, the results highlight the ability to generalize the motion synthesized, both by extending the use of a single reference database to new motions, and from distinct reference datasets, over a variety of freehand motions.", "title": "" } ]
scidocsrr
ec13af2025e5c42b675a78076816d588
People Counting Based on an IR-UWB Radar Sensor
[ { "docid": "c70e11160c90bd67caa2294c499be711", "text": "The vital sign monitoring through Impulse Radio Ultra-Wide Band (IR-UWB) radar provides continuous assessment of a patient's respiration and heart rates in a non-invasive manner. In this paper, IR UWB radar is used for monitoring respiration and the human heart rate. The breathing and heart rate frequencies are extracted from the signal reflected from the human body. A Kalman filter is applied to reduce the measurement noise from the vital signal. An algorithm is presented to separate the heart rate signal from the breathing harmonics. An auto-correlation based technique is applied for detecting random body movements (RBM) during the measurement process. Experiments were performed in different scenarios in order to show the validity of the algorithm. The vital signs were estimated for the signal reflected from the chest, as well as from the back side of the body in different experiments. The results from both scenarios are compared for respiration and heartbeat estimation accuracy.", "title": "" } ]
[ { "docid": "fb640b34365ebe208e3d1cbf393a217e", "text": "In recent years, interest has been focused to the study of the two major inositol stereoisomers: myo-inositol (MI) and d-chiro-inositol (DCI), because of their involvement, as second messengers of insulin, in several insulin-dependent processes, such as metabolic syndrome and polycystic ovary syndrome. Although these molecules have different functions, very often their roles have been confused, while the meaning of several observations still needs to be interpreted under a more rigorous physiological framework. With the aim of clarifying this issue, the 2013 International Consensus Conference on MI and DCI in Obstetrics and Gynecology identified opinion leaders in all fields related to this area of research. They examined seminal experimental papers and randomized clinical trials reporting the role and the use of inositol(s) in clinical practice. The main topics were the relation between inositol(s) and metabolic syndrome, polycystic ovary syndrome (with a focus on both metabolic and reproductive aspects), congenital anomalies, gestational diabetes. Clinical trials demonstrated that inositol(s) supplementation could fruitfully affect different pathophysiological aspects of disorders pertaining Obstetrics and Gynecology. The treatment of PCOS women as well as the prevention of GDM seem those clinical conditions which take more advantages from MI supplementation, when used at a dose of 2g twice/day. The clinical experience with MI is largely superior to the one with DCI. However, the existence of tissue-specific ratios, namely in the ovary, has prompted researchers to recently develop a treatment based on both molecules in the proportion of 40 (MI) to 1 (DCI).", "title": "" }, { "docid": "43ef67c897e7f998b1eb7d3524d514f4", "text": "This brief proposes a delta-sigma modulator that operates at extremely low voltage without using a clock boosting technique. To maintain the advantages of a discrete-time integrator in oversampled data converters, a mixed differential difference amplifier (DDA) integrator is developed that removes the input sampling switch in a switched-capacitor integrator. Conventionally, many low-voltage delta-sigma modulators have used high-voltage generating circuits to boost the clock voltage levels. A mixed DDA integrator with both a switched-resistor and a switched-capacitor technique is developed to implement a discrete-time integrator without clock boosted switches. The proposed mixed DDA integrator is demonstrated by a third-order delta-sigma modulator with a feedforward topology. The fabricated modulator shows a 68-dB signal-to-noise-plus-distortion ratio for a 20-kHz signal bandwidth with an oversampling ratio of 80. The chip consumes 140 μW of power at a true 0.4-V power supply, which is the lowest voltage without a clock boosting technique among the state-of-the-art modulators in this signal band.", "title": "" }, { "docid": "606e8d14b6c16775a71f44ec461c4c7d", "text": "Emotional concepts play a huge role in our daily life since they take part into many cognitive processes: from the perception of the environment around us to different learning processes and natural communication. Social robots need to communicate with humans, which increased also the popularity of affective embodied models that adopt different emotional concepts in many everyday tasks. 
However, there is still a gap between the development of these solutions and the integration and development of a complex emotion appraisal system, which is much necessary for true social robots. In this paper, we propose a deep neural model which is designed in the light of different aspects of developmental learning of emotional concepts to provide an integrated solution for internal and external emotion appraisal. We evaluate the performance of the proposed model with different challenging corpora and compare it with state-of-the-art models for external emotion appraisal. To extend the evaluation of the proposed model, we designed and collected a novel dataset based on a Human-Robot Interaction (HRI) scenario. We deployed the model in an iCub robot and evaluated the capability of the robot to learn and describe the affective behavior of different persons based on observation. The performed experiments demonstrate that the proposed model is competitive with the state of the art in describing emotion behavior in general. In addition, it is able to generate internal emotional concepts that evolve through time: it continuously forms and updates the formed emotional concepts, which is a step towards creating an emotional appraisal model grounded in the robot experiences.", "title": "" }, { "docid": "1f27caaaeae8c82db6a677f66f2dee74", "text": "State of the art visual SLAM systems have recently been presented which are capable of accurate, large-scale and real-time performance, but most of these require stereo vision. Important application areas in robotics and beyond open up if similar performance can be demonstrated using monocular vision, since a single camera will always be cheaper, more compact and easier to calibrate than a multi-camera rig. With high quality estimation, a single camera moving through a static scene of course effectively provides its own stereo geometry via frames distributed over time. However, a classic issue with monocular visual SLAM is that due to the purely projective nature of a single camera, motion estimates and map structure can only be recovered up to scale. Without the known inter-camera distance of a stereo rig to serve as an anchor, the scale of locally constructed map portions and the corresponding motion estimates is therefore liable to drift over time. In this paper we describe a new near real-time visual SLAM system which adopts the continuous keyframe optimisation approach of the best current stereo systems, but accounts for the additional challenges presented by monocular input. In particular, we present a new pose-graph optimisation technique which allows for the efficient correction of rotation, translation and scale drift at loop closures. Especially, we describe the Lie group of similarity transformations and its relation to the corresponding Lie algebra. We also present in detail the system’s new image processing front-end which is able accurately to track hundreds of features per frame, and a filter-based approach for feature initialisation within keyframe-based SLAM. Our approach is proven via large-scale simulation and real-world experiments where a camera completes large looped trajectories.", "title": "" }, { "docid": "d9d3b646f7d4d88b3999f5b431159afe", "text": "The main aim of this study was to characterize neural correlates of analogizing as a cognitive contributor to fluid and crystallized intelligence. 
In a previous fMRI study which employed fluid analogy letter strings as criteria in a multiple plausibility design (Geake and Hansen, 2005), two frontal ROIs associated with working memory (WM) load (within BA 9 and BA 45/46) were identified as regions in which BOLD increase correlated positively with a crystallized measure of (verbal) IQ. In this fMRI study we used fluid letter, number and polygon strings to further investigate the role of analogizing in fluid (transformation string completion) and non fluid or crystallized (unique symbol counting) cognitive tasks. The multi stimulus type (letter, number, polygon) design of the analogy strings enabled investigation of a secondary research question concerning the generalizability of fluid analogizing at a neural level. A selective psychometric battery, including the Raven's Progressive Matrices (RPM), measured individual cognitive abilities. Neural activations for the effect of task-fluid analogizing (string transformation plausibility) vs. crystallized analogizing (unique symbol counting)-included bilateral frontal and parietal areas associated with WM load and fronto parietal models of general intelligence. Neural activations for stimulus type differences were mainly confined to visually specific posterior regions. ROI covariate analyses of the psychometric measures failed to find consistent co-relationships between fluid analogizing and the RPM and other subtests, except for the WAIS Digit Symbol subtest in a group of bilateral frontal cortical regions associated with the maintenance of WM load. Together, these results support claims for separate developmental trajectories for fluid cognition and general intelligence as assessed by these psychometric subtests.", "title": "" }, { "docid": "4aef87a3fc35106b43a6296e3d581b94", "text": "A uniplanar leaky-wave antenna (LWA) suitable for operation at millimeter-wave frequencies is introduced. Both unidirectional and bidirectional versions of the antenna are presented. The proposed structure consists of a coplanar waveguide fed linear array of closely spaced capacitive transverse slots. This configuration results in a fast-wave structure in which the = 0 spatial harmonic radiates in the forward direction. Since the distance, , between adjacent elements of the array is small , the slot array essentially becomes a uniform LWA. A comprehensive transmission line model is developed based upon the theory of truncated periodic transmission lines to explain the operation of the antenna and provide a tool for its design. Measured and simulated radiation patterns, directivity, gain, and an associated loss budget are presented for a 32-element antenna operating at 30 GHz. The uniplanar nature of the structure makes the antenna appropriate for integration of shunt variable capacitors such as diode or micro-electromechanical system varactors for fixed frequency beam steering at low-bias voltages.", "title": "" }, { "docid": "cc00ad5270a325a9f35495eb45806a92", "text": "This paper offers a step towards research infrastructure, which makes data from experimental economics efficiently usable for analysis of web data. We believe that regularities of human behavior found in experimental data also emerge in real world web data. A format for data from experiments is suggested, which enables its publication as open data. Once standardized datasets of experiments are available on-line, web mining can take advantages from this data. 
Further, the questions about t he order of causalities arisen from web data analysis can inspire new experiment setups.", "title": "" }, { "docid": "15aeb4678ac538e994d2613504871508", "text": "In this paper, we present the integration of RESURF high-voltage lateral Power MOSFETs which achieve highly competitive figures of merit (such as Rsp, defined as Rdson*Area) in a Trench-Isolated 0.18 micron 100V-rated BCD technology. The new high-voltage LDMOS are rated for operation up to 45V, 60V, 80V and 100V and achieve BVdss & Rsp of 55V/32 mOhm*mm2, 75V/57 mOhm*mm2, 100V/91 mOhm*mm2 and 128V/206 mOhm*mm2 respectively. The HV devices are integrated with high-performance and fully isolated 8V to 30V LDMOS with low Rsp, 1.8V and 5V CMOS, as well as bipolar devices, diodes and other passive devices. The devices are isolated laterally using 20 micron deep trenches with air-gap, for low stress and minimized capacitance. Large 1mm2 Power MOSFETs were fabricated and characterized in order to determine the “effective Rsp“ including metallization, using a three metal back-end with 4um thick top aluminum metallization.", "title": "" }, { "docid": "73d31d63cfaeba5fa7c2d2acc4044ca0", "text": "Plastics in the marine environment have become a major concern because of their persistence at sea, and adverse consequences to marine life and potentially human health. Implementing mitigation strategies requires an understanding and quantification of marine plastic sources, taking spatial and temporal variability into account. Here we present a global model of plastic inputs from rivers into oceans based on waste management, population density and hydrological information. Our model is calibrated against measurements available in the literature. We estimate that between 1.15 and 2.41 million tonnes of plastic waste currently enters the ocean every year from rivers, with over 74% of emissions occurring between May and October. The top 20 polluting rivers, mostly located in Asia, account for 67% of the global total. The findings of this study provide baseline data for ocean plastic mass balance exercises, and assist in prioritizing future plastic debris monitoring and mitigation strategies.", "title": "" }, { "docid": "03dc2c32044a41715991d900bb7ec783", "text": "The analysis of large scale data logged from complex cyber-physical systems, such as microgrids, often entails the discovery of invariants capturing functional as well as operational relationships underlying such large systems. We describe a latent factor approach to infer invariants underlying system variables and how we can leverage these relationships to monitor a cyber-physical system. In particular we illustrate how this approach helps rapidly identify outliers during system operation.", "title": "" }, { "docid": "53049f1514bc03368b8c2a0b18518100", "text": "The Protein Data Bank (PDB; http://www.rcsb.org/pdb/ ) is the single worldwide archive of structural data of biological macromolecules. This paper describes the goals of the PDB, the systems in place for data deposition and access, how to obtain further information, and near-term plans for the future development of the resource.", "title": "" }, { "docid": "711ca6f01ed407e1026dd9958ae94cb2", "text": "The Internet of Things (IoT) is increasingly used for critical applications and securing the IoT has become a major concern. Among other issues it is important to ensure that tampering with IoT devices is detected. 
Many IoT devices use WiFi for communication and Channel State Information (CSI) based tamper detection is a valid option. Each 802.11n WiFi frame contains a preamble which allows a receiver to estimate the impact of the wireless channel, the transmitter and the receiver on the signal. The estimation result - the CSI - is used by a receiver to extract the transmitted information. However, as the CSI depends on the communication environment and the transmitter hardware, it can be used as well for security purposes. If an attacker tampers with a transmitter it will have an effect on the CSI measured at a receiver. Unfortunately not only tamper events lead to CSI fluctuations; movement of people in the communication environment has an impact too. We propose to analyse CSI values of a transmission simultaneously at multiple receivers to improve distinction of tamper and movement events. A moving person is expected to have an impact on some but not all communication links between transmitter and the receivers. A tamper event impacts on all links between transmitter and the receivers. The paper describes the necessary algorithms for the proposed tamper detection method. In particular we analyse the tamper detection capability in practical deployments with varying intensity of people movement. In our experiments the proposed system deployed in a busy office environment was capable to detect 53% of tamper events (TPR = 53%) while creating zero false alarms (FPR = 0%).", "title": "" }, { "docid": "5d040acb82a1c4707d0f9acc47bb85bd", "text": "Humanoid robots offer many physical design choices such as voice frequency and head dimensions. We used hierarchical statistical mediation analysis to trace differences in people's mental model of robots from these choices. In an experiment, a humanoid robot gave participants online advice about their health. We used mediation analysis to identify the causal path from the robot's voice and head dimensions to the participants' mental model, and to their willingness to follow the robot's advice. The male robot voice predicted impressions of a knowledgeable robot, whose advice participants said they would follow. Increasing the voice's fundamental frequency reduced this effect. The robot's short chin length (but not its forehead dimensions) predicted impressions of a sociable robot, which also predicted intentions to take the robot's advice. We discuss the use of this approach for designing robots for different roles, when people's mental model of the robot matters.", "title": "" }, { "docid": "b498cd00e8f6fa1bcf9b69c50151b63b", "text": "This case-study fits a variety of neural network (NN) models to the well-known airline data and compares the resulting forecasts with those obtained from the Box-Jenkins and Holt-Winters methods. Many potential problems in fitting NN models were revealed such as the possibility that the fitting routine may not converge or may converge to a local minimum. Moreover it was found that an NN model which fits well may give poor out-of-sample forecasts. Thus we think it is unwise to apply NN models blindly in 'black box' mode as has sometimes been suggested. Rather, the wise analyst needs to use traditional modelling skills to select a good NN model, e.g. to select appropriate lagged variables as the 'inputs'. The Bayesian information criterion is preferred to Akaike's information criterion for comparing different models. 
Methods of examining the response surface implied by an NN model are examined and compared with the results of alternative nonparametric procedures using generalized additive models and projection pursuit regression. The latter imposes less structure on the model and is arguably easier to understand.", "title": "" }, { "docid": "4cdc2612a1698388823bca833f04854f", "text": "In 2013, Royal Philips was two years into a daunting transformation. Following declining financial performance, CEO Frans van Houten aimed to turn the Dutch icon into a “high-performing company” by 2017. This case study examines the challenges of the business-driven IT transformation at Royal Philips, a diversified technology company. The case discusses three crucial issues. First, the case reflects on Philips’ aim at creating value from combining locally relevant products and services while also leveraging its global scale and scope. Rewarded and unrewarded business complexity is analyzed. Second, the case identifies the need to design and align multiple elements of an enterprise (organizational, cultural, technical) to balance local responsiveness with global scale. Third, the case explains the role of IT (as an asset instead of a liability) in Philips’ transformation and discusses the new IT landscape with its digital platforms, and the new practices to create effective business-IT partnerships.", "title": "" }, { "docid": "51379af3a80b1996d8ac97c94d32f695", "text": "In this paper the application of uncertainty modeling to convolutional neural networks is evaluated. A novel method for adjusting the network’s predictions based on uncertainty information is introduced. This allows the network to be either optimistic or pessimistic in its prediction scores. The proposed method builds on the idea of applying dropout at test time and sampling a predictive mean and variance from the network’s output. Besides the methodological aspects, implementation details allowing for a fast evaluation are presented. Furthermore, a multilabel network architecture is introduced that strongly benefits from the presented approach. In the evaluation it will be shown that modeling uncertainty allows for improving the performance of a given model purely at test time without any further training steps. The evaluation considers several applications in the field of computer vision, including object classification and detection as well as scene attribute recognition.", "title": "" }, { "docid": "e89db5214e5bea32b37539471fccb226", "text": "In this paper, we survey the basic paradigms and notions of secure multiparty computation and discuss their relevance to the field of privacy-preserving data mining. In addition to reviewing definitions and constructions for secure multiparty computation, we discuss the issue of efficiency and demonstrate the difficulties involved in constructing highly efficient protocols. We also present common errors that are prevalent in the literature when secure multiparty computation techniques are applied to privacy-preserving data mining. Finally, we discuss the relationship between secure multiparty computation and privacy-preserving data mining, and show which problems it solves and which problems it does not.", "title": "" }, { "docid": "df67da08931ed6d0d100ff857c2b1ced", "text": "Parallel computers with tens of thousands of processors are typically programmed in a data parallel style, as opposed to the control parallel style used in multiprocessing. 
The success of data parallel algorithms—even on problems that at first glance seem inherently serial—suggests that this style of programming has much wider applicability than was previously thought.", "title": "" }, { "docid": "e96789cbebad9f503e9bd51af720d4af", "text": "Game tree search algorithms, such as Monte Carlo Tree Search (MCTS), require access to a forward model (or “simulator”) of the game at hand. However, in some games such forward model is not readily available. In this paper we address the problem of automatically learning forward models (more specifically, combats models) for two-player attrition games. We report experiments comparing several approaches to learn such combat model from replay data to models generated by hand. We use StarCraft, a Real-Time Strategy (RTS) game, as our application domain. Specifically, we use a large collection of already collected replays, and focus on learning a combat model for tactical combats.", "title": "" }, { "docid": "631fc877674e9d01f3104e259147382f", "text": "Traditional engineering instruction is deductive, beginning with theories and progressing to the applications of those theories. Alternative teaching approaches are more inductive. Topics are introduced by presenting specific observations, case studies or problems, and theories are taught or the students are helped to discover them only after the need to know them has been established. This study reviews several of the most commonly used inductive teaching methods, including inquiry learning, problembased learning, project-based learning, case-based teaching, discovery learning, and just-in-time teaching. The paper defines each method, highlights commonalities and specific differences, and reviews research on the effectiveness of the methods. While the strength of the evidence varies from one method to another, inductive methods are consistently found to be at least equal to, and in general more effective than, traditional deductive methods for achieving a broad range of learning outcomes.", "title": "" } ]
scidocsrr
e55dbf083855a1237f2bb67875ade871
Jigsaw puzzles with pieces of unknown orientation
[ { "docid": "6db5f103fa479fc7c7c33ea67d7950f6", "text": "Problem statement: To design, implement, and test an algorithm for solving the square jigsaw puzzle problem, which has many applications in image processing, pattern recognition, and computer vision such as restoration of archeological artifacts and image descrambling. Approach: The algorithm used the gray level profiles of border pixels for local matching of the puzzle pieces, which was performed using dynamic programming to facilitate non-rigid alignment of pixels of two gray level profiles. Unlike the classical best-first search, the algorithm simultaneously located the neighbors of a puzzle piece during the search using the well-known Hungarian procedure, which is an optimal assignment procedure. To improve the search for a global solution, every puzzle piece was considered as starting piece at various starting locations. Results: Experiments using four well-known images demonstrated the effectiveness of the proposed approach over the classical piece-by-piece matching approach. The performance evaluation was based on a new precision performance measure. For all four test images, the proposed algorithm achieved 100% precision rate for puzzles up to 8×8. Conclusion: The proposed search mechanism based on simultaneous allocation of puzzle pieces using the Hungarian procedure provided better performance than piece-by-piece used in classical methods.", "title": "" } ]
[ { "docid": "64e0a1345e5a181191c54f6f9524c96d", "text": "Social media based brand communities are communities initiated on the platform of social media. In this article, we explore whether brand communities based on social media (a special type of online brand communities) have positive effects on the main community elements and value creation practices in the communities as well as on brand trust and brand loyalty. A survey based empirical study with 441 respondents was conducted. The results of structural equation modeling show that brand communities established on social media have positive effects on community markers (i.e., shared consciousness, shared rituals and traditions, and obligations to society), which have positive effects on value creation practices (i.e., social networking, community engagement, impressions management, and brand use). Such communities could enhance brand loyalty through brand use and impression management practices. We show that brand trust has a full mediating role in converting value creation practices into brand loyalty. Implications for practice and future research opportunities are discussed.", "title": "" }, { "docid": "bd0691351920e8fa74c8197b9a4e91e0", "text": "Mobile robot vision-based navigation has been the source of countless research contributions, from the domains of both vision and control. Vision is becoming more and more common in applications such as localization, automatic map construction, autonomous navigation, path following, inspection, monitoring or risky situation detection. This survey presents those pieces of work, from the nineties until nowadays, which constitute a wide progress in visual navigation techniques for land, aerial and autonomous underwater vehicles. The paper deals with two major approaches: map-based navigation and mapless navigation. Map-based navigation has been in turn subdivided in metric map-based navigation and topological map-based navigation. Our outline to mapless navigation includes reactive techniques based on qualitative characteristics extraction, appearance-based localization, optical flow, features tracking, plane ground detection/tracking, etc... The recent concept of visual sonar has also been revised.", "title": "" }, { "docid": "ca7afb87dae38ee0cf079f91dbd91d43", "text": "Diet is associated with the development of CHD. The incidence of CHD is lower in southern European countries than in northern European countries and it has been proposed that this difference may be a result of diet. The traditional Mediterranean diet emphasises a high intake of fruits, vegetables, bread, other forms of cereals, potatoes, beans, nuts and seeds. It includes olive oil as a major fat source and dairy products, fish and poultry are consumed in low to moderate amounts. Many observational studies have shown that the Mediterranean diet is associated with reduced risk of CHD, and this result has been confirmed by meta-analysis, while a single randomised controlled trial, the Lyon Diet Heart study, has shown a reduction in CHD risk in subjects following the Mediterranean diet in the secondary prevention setting. However, it is uncertain whether the benefits of the Mediterranean diet are transferable to other non-Mediterranean populations and whether the effects of the Mediterranean diet will still be feasible in light of the changes in pharmacological therapy seen in patients with CHD since the Lyon Diet Heart study was conducted. 
Further randomised controlled trials are required and if the risk-reducing effect is confirmed then the best methods to effectively deliver this public health message worldwide need to be considered.", "title": "" }, { "docid": "628c8b906e3db854ea92c021bb274a61", "text": "Taxi demand prediction is an important building block to enabling intelligent transportation systems in a smart city. An accurate prediction model can help the city pre-allocate resources to meet travel demand and to reduce empty taxis on streets which waste energy and worsen the traffic congestion. With the increasing popularity of taxi requesting services such as Uber and Didi Chuxing (in China), we are able to collect large-scale taxi demand data continuously. How to utilize such big data to improve the demand prediction is an interesting and critical real-world problem. Traditional demand prediction methods mostly rely on time series forecasting techniques, which fail to model the complex non-linear spatial and temporal relations. Recent advances in deep learning have shown superior performance on traditionally challenging tasks such as image classification by learning the complex features and correlations from largescale data. This breakthrough has inspired researchers to explore deep learning techniques on traffic prediction problems. However, existing methods on traffic prediction have only considered spatial relation (e.g., using CNN) or temporal relation (e.g., using LSTM) independently. We propose a Deep Multi-View Spatial-Temporal Network (DMVST-Net) framework to model both spatial and temporal relations. Specifically, our proposed model consists of three views: temporal view (modeling correlations between future demand values with near time points via LSTM), spatial view (modeling local spatial correlation via local CNN), and semantic view (modeling correlations among regions sharing similar temporal patterns). Experiments on large-scale real taxi demand data demonstrate effectiveness of our approach over state-ofthe-art methods.", "title": "" }, { "docid": "9ffd270f9b674d84403349346d662cf7", "text": "Predicting the fast-rising young researchers (Academic Rising Stars) in the future provides useful guidance to the research community, e.g., offering competitive candidates to university for young faculty hiring as they are expected to have success academic careers. In this work, given a set of young researchers who have published the first first-author paper recently, we solve the problem of how to effectively predict the top k% researchers who achieve the highest citation increment in ∆t years. We explore a series of factors that can drive an author to be fast-rising and design a novel impact increment ranking learning (IIRL) algorithm that leverages those factors to predict the academic rising stars. Experimental results on the large ArnetMiner dataset with over 1.7 million authors demonstrate the effectiveness of IIRL. Specifically, it outperforms all given benchmark methods, with over 8% average improvement. Further analysis demonstrates that the prediction models for different research topics follow the similar pattern. We also find that temporal features are the best indicators for rising stars prediction, while venue features are less relevant.", "title": "" }, { "docid": "9572809d8416cc7b78683e3686e83740", "text": "Lower-limb amputees have identified comfort and mobility as the two most important characteristics of a prosthesis. 
While these in turn depend on a multitude of factors, they are strongly influenced by the biomechanical performance of the prosthesis and the loading it imparts to the residual limb. Recent years have seen improvements in several prosthetic components that are designed to improve patient comfort and mobility. In this paper, we discuss two of these: VSAP and prosthetic foot-ankle systems; specifically, their mechanical properties and impact on amputee gait are presented.", "title": "" }, { "docid": "cc57e023628ec7ca1bfc91c40fc58341", "text": "The design of electromagnetic interference (EMI) input filters, needed for switched power converters to fulfill the regulatory standards, is typically associated with high development effort. This paper presents a guideline for a simplified differential-mode (DM) filter design. First, a procedure to estimate the required filter attenuation based on the total input rms current using only a few equations is given. Second, a volume optimization of the needed DM filter based on the previously calculated filter attenuation and volumetric component parameters is introduced. It is shown that a minimal volume can be found for a certain optimal number of filter stages. The considerations are exemplified for two single-phase power factor correction converters operated in continuous and discontinuous conduction modes, respectively. Finally, EMI measurements done with a 300-W power converter prototype prove the proposed filter design method.", "title": "" }, { "docid": "7597cdc6a1202560b5a3d6ca00df4269", "text": "Epithelial cells require attachment to extracellular matrix (ECM) to suppress an apoptotic cell death program termed anoikis. Here we describe a nonapoptotic cell death program in matrix-detached cells that is initiated by a previously unrecognized and unusual process involving the invasion of one cell into another, leading to a transient state in which a live cell is contained within a neighboring host cell. Live internalized cells are either degraded by lysosomal enzymes or released. We term this cell internalization process entosis and present evidence for entosis as a mechanism underlying the commonly observed \"cell-in-cell\" cytological feature in human cancers. Further we propose that entosis is driven by compaction force associated with adherens junction formation in the absence of integrin engagement and may represent an intrinsic tumor suppression mechanism for cells that are detached from ECM.", "title": "" }, { "docid": "867516a6a54105e4759338e407bafa5a", "text": "At the end of the criminal intelligence analysis process there are relatively well established and understood approaches to explicit externalisation and representation of thought that include theories of argumentation, narrative and hybrid approaches that include both of these. However the focus of this paper is on the little understood area of how to support users in the process of arriving at such representations from an initial starting point where little is given. The work is based on theoretical considerations and some initial studies with end users. In focusing on process we discuss the requirements of fluidity and rigor and how to gain traction in investigations, the processes of thinking involved including abductive, deductive and inductive reasoning, how users may use thematic sorting in early stages of investigation and how tactile reasoning may be used to externalize and facilitate reasoning in a productive way. 
In the conclusion section we discuss the issues raised in this work and directions for future work.", "title": "" }, { "docid": "4e5dd7032d9b3fc4563c893a26867e93", "text": "User interaction in visual analytic systems is critical to enabling visual data exploration. Through interacting with visualizations, users engage in sensemaking, a process of developing and understanding relationships within datasets through foraging and synthesis. For example, two-dimensional layouts of high-dimensional data can be generated by dimension reduction models, and provide users with an overview of the relationships between information. However, exploring such spatializations can require expertise with the internal mechanisms and parameters of these models. The core contribution of this work is semantic interaction, capable of steering such models without requiring expertise in dimension reduction models, but instead leveraging the domain expertise of the user. Semantic interaction infers the analytical reasoning of the user with model updates, steering the dimension reduction model for visual data exploration. As such, it is an approach to user interaction that leverages interactions designed for synthesis, and couples them with the underlying mathematical model to provide computational support for foraging. As a result, semantic interaction performs incremental model learning to enable synergy between the user’s insights and the mathematical model. The contributions of this work are organized by providing a description of the principles of semantic interaction, providing design guidelines through the development of a visual analytic prototype, ForceSPIRE, and the evaluation of the impact of semantic interaction on the analytic process. The positive results of semantic interaction open a fundamentally new design space for designing user interactions in visual analytic systems. This research was funded in part by the National Science Foundation, CCF-0937071 and CCF-0937133, the Institute for Critical Technology and Applied Science at Virginia Tech, and the National Geospatial-Intelligence Agency contract #HMI1582-05-1-2001.", "title": "" }, { "docid": "2271192b0f8455d3e2cadb86671fdaf4", "text": "INTRODUCTION\nWith the worldwide increase in penile augmentation procedures and claims of devices designed to elongate the penis, it becomes crucial to study the scientific basis of such procedures or devices, as well as the management of a complaint of a small penis in men with a normal penile size.\n\n\nAIM\nThe aim of this work is to study the scientific basis of opting to penile augmentation procedures and to develop guidelines based on the best available evidence for the management of men complaining of a small penis despite an actually normal size.\n\n\nMETHODS\nWe reviewed the literature and evaluated the evidence about what the normal penile size is, what patients complaining of a small penis usually suffer from, benefits vs. complications of surgery, penile stretching or traction devices, and outcome with patient education and counseling. 
Repeated presentation and detailed discussions within the Standard Committee of the International Society for Sexual Medicine were performed.\n\n\nMAIN OUTCOME MEASURE\nRecommendations are based on the evaluation of evidence-based medical literature, widespread standards committee discussion, public presentation, and debate.\n\n\nRESULTS\nWe propose a practical approach for evaluating and counseling patients complaining of a small-sized penis.\n\n\nCONCLUSIONS\nBased on the current status of science, penile lengthening procedure surgery is still considered experimental and should only be limited to special circumstances within research or university institutions with supervising ethics committees.", "title": "" }, { "docid": "69179341377477af8ebe9013c664828c", "text": "1. Intensive agricultural practices drive biodiversity loss with potentially drastic consequences for ecosystem services. To advance conservation and production goals, agricultural practices should be compatible with biodiversity. Traditional or less intensive systems (i.e. with fewer agrochemicals, less mechanisation, more crop species) such as shaded coffee and cacao agroforests are highlighted for their ability to provide a refuge for biodiversity and may also enhance certain ecosystem functions (i.e. predation). 2. Ants are an important predator group in tropical agroforestry systems. Generally, ant biodiversity declines with coffee and cacao intensification yet the literature lacks a summary of the known mechanisms for ant declines and how this diversity loss may affect the role of ants as predators. 3. Here, how shaded coffee and cacao agroforestry systems protect biodiversity and may preserve related ecosystem functions is discussed in the context of ants as predators. Specifically, the relationships between biodiversity and predation, links between agriculture and conservation, patterns and mechanisms for ant diversity loss with agricultural intensification, importance of ants as control agents of pests and fungal diseases, and whether ant diversity may influence the functional role of ants as predators are addressed. Furthermore, because of the importance of homopteran-tending by ants in the ecological and agricultural literature, as well as to the success of ants as predators, the costs and benefits of promoting ants in agroforests are discussed. 4. Especially where the diversity of ants and other predators is high, as in traditional agroforestry systems, both agroecosystem function and conservation goals will be advanced by biodiversity protection.", "title": "" }, { "docid": "82250d2c5a7aec9a9b33c35e8ca6ddc9", "text": "Examination Timetable Problem (ETP) is NP–Hard combinatorial optimization problem. It has received tremendous research attention during the past few years given its wide use in universities. ETP can be defined as assignment of courses to be examined, candidates to time periods and examination rooms while satisfying a set of constraints which may be either hard or soft. Several methods have been proposed most of which are based on heuristics like Search techniques, Evolutionary Computation etc. In this Paper, we develop three mathematical models for Netaji Subhas Open University, Kolkata, India using Fuzzy Integer Linear Programming (FILP) technique. In most real life situations, information available in is not exact, lacks precision and has an inherent degree of vagueness. 
To deal with this we model various allocation variables through fuzzy numbers expressing lack of precision the decision maker has. The solution to the problem is obtained using Fuzzy number ranking method. Each feasible solution has fuzzy number obtained by Fuzzy objective function. The different FILP technique performance are demonstrated by experimental data generated through extensive simulation from Netaji Subhas Open University, Kolkata, India in terms of its execution times. The proposed FILP models are compared with commonly used heuristic viz. Integer Linear Programming approach on experimental data which gives an idea about quality of heuristic. The techniques are also compared with different Artificial Intelligence based heuristics for ETP with respect to best and mean cost as well as execution time measures on Carter benchmark datasets to illustrate its effectiveness. FILP paradigm takes an appreciable amount of time to generate satisfactory solution in comparison to other heuristics. The formulation thus serves as good benchmark for other heuristics. The experimental study presented here focuses on producing a methodology that generalizes well over spectrum of techniques that generates significant results for one or more datasets. The performance of FILP model is finally compared to the best results cited in literature for Carter benchmarks to assess its potential. The problem can be further reduced by formulating with lesser number of allocation variables it without affecting optimality of solution obtained. FLIP model for ETP can also be adapted to solve other ETP as well as combinatorial optimization problems. To the best of our knowledge this is first work on ETP using FILP technique.", "title": "" }, { "docid": "d15f95742176e78080b14042be31bcac", "text": "\"In collaboration with researchers from academia, industry, and the community, GitHub designed a survey to gather high quality and novel data on open source software development practices and communities. We collected responses from 5,500 randomly sampled respondents sourced from over 3,800 open source repositories on GitHub.com, and over 500 responses from a non-random sample of communities that work on other platforms. The results are an open data set about the attitudes, experiences, and backgrounds of those who use, build, and maintain open source software.\" [Zlotnick et al., 2017b]", "title": "" }, { "docid": "a677c1d46b9d2ad2588841eea8e3856c", "text": "In evolutionary multiobjective optimization, maintaining a good balance between convergence and diversity is particularly crucial to the performance of the evolutionary algorithms (EAs). In addition, it becomes increasingly important to incorporate user preferences because it will be less likely to achieve a representative subset of the Pareto-optimal solutions using a limited population size as the number of objectives increases. This paper proposes a reference vector-guided EA for many-objective optimization. The reference vectors can be used not only to decompose the original multiobjective optimization problem into a number of single-objective subproblems, but also to elucidate user preferences to target a preferred subset of the whole Pareto front (PF). In the proposed algorithm, a scalarization approach, termed angle-penalized distance, is adopted to balance convergence and diversity of the solutions in the high-dimensional objective space. 
An adaptation strategy is proposed to dynamically adjust the distribution of the reference vectors according to the scales of the objective functions. Our experimental results on a variety of benchmark test problems show that the proposed algorithm is highly competitive in comparison with five state-of-the-art EAs for many-objective optimization. In addition, we show that reference vectors are effective and cost-efficient for preference articulation, which is particularly desirable for many-objective optimization. Furthermore, a reference vector regeneration strategy is proposed for handling irregular PFs. Finally, the proposed algorithm is extended for solving constrained many-objective optimization problems.", "title": "" }, { "docid": "fd1558532ec413e80260e82b5620f83c", "text": "In today's busy world time is a vital issue which can't be managed by noticing each and every phenomenon with our tight schedule. So now a day's Automatic systems are being preferred over manual system to make life simpler and easier in all aspects. To make it a grand success Internet of Things is the latest internet technology developed. The number of users of internet has grown so rapidly that it has become a necessary part of our daily life. Our matter of concern in this project is development of Internet of Things based Garbage Monitoring System. As the population of world is increasing day by day, the environment should be clean and hygienic for our better life leads. In most of the cities the overflowed garbage bins are creating an obnoxious smell and making an unhygienic environment. And this is leading to the rapid growth of bacteria and viruses which are causing different types of diseases. To overcome these situations efficient garbage collection systems are getting developed based on IoT. Various designs have already been proposed and have advantages as well as disadvantages. This paper is a review of Garbage Monitoring System based on IoT.", "title": "" }, { "docid": "ef925e9d448cf4ca9a889b5634b685cf", "text": "This paper proposes an ameliorated wheel-based cable inspection robot, which is able to climb up a vertical cylindrical cable on the cable-stayed bridge. The newly-designed robot in this paper is composed of two equally spaced modules, which are joined by connecting bars to form a closed hexagonal body to clasp on the cable. Another amelioration is the newly-designed electric circuit, which is employed to limit the descending speed of the robot during its sliding down along the cable. For the safe landing in case of electricity broken-down, a gas damper with a slider-crank mechanism is introduced to exhaust the energy generated by the gravity when the robot is slipping down. For the present design, with payloads below 3.5 kg, the robot can climb up a cable with diameters varying from 65 mm to 205 mm. The landing system is tested experimentally and a simplified mathematical model is analyzed. Several climbing experiments performed on real cables show the capability of the proposed robot.", "title": "" }, { "docid": "05a4ec72afcf9b724979802b22091fd4", "text": "Convolutional neural networks (CNNs) have greatly improved state-of-the-art performances in a number of fields, notably computer vision and natural language processing. 
In this work, we are interested in generalizing the formulation of CNNs from low-dimensional regular Euclidean domains, where images (2D), videos (3D) and audios (1D) are represented, to high-dimensional irregular domains such as social networks or biological networks represented by graphs. This paper introduces a formulation of CNNs on graphs in the context of spectral graph theory. We borrow the fundamental tools from the emerging field of signal processing on graphs, which provides the necessary mathematical background and efficient numerical schemes to design localized graph filters efficient to learn and evaluate. As a matter of fact, we introduce the first technique that offers the same computational complexity than standard CNNs, while being universal to any graph structure. Numerical experiments on MNIST and 20NEWS demonstrate the ability of this novel deep learning system to learn local, stationary, and compositional features on graphs, as long as the graph is well-constructed.", "title": "" }, { "docid": "13d8ce0c85befb38e6f2da583ac0295b", "text": "The addition of sensors to wearable computers allows them to adapt their functions to more suit the activities and situation of their wearers. A wearable sensor badge is described constructed from (hard) electronic components, which can sense perambulatory activities for context-awareness. A wearable sensor jacket is described that uses advanced knitting techniques to form (soft) fabric stretch sensors positioned to measure upper limb and body movement. Worn on-the-hip, or worn as clothing, these unobtrusive sensors supply abstract information about your current activity to your other wearable computers.", "title": "" } ]
scidocsrr
d54f20f03f1dfa7466487e320f6da167
Learning to Create Jazz Melodies Using Deep Belief Nets
[ { "docid": "2c28d01814e0732e59d493f0ea2eafcb", "text": "Victor Frankenstein sought to create an intelligent being imbued with the rules of civilized human conduct, who could further learn how to behave and possibly even evolve through successive generations into a more perfect form. Modern human composers similarly strive to create intelligent algorithmic music composition systems that can follow prespecified rules, learn appropriate patterns from a collection of melodies, or evolve to produce output more perfectly matched to some aesthetic criteria. Here we review recent efforts aimed at each of these three types of algorithmic composition. We focus particularly on evolutionary methods, and indicate how monstrous many of the results have been. We present a new method that uses coevolution to create linked artificial music critics and music composers, and describe how this method can attach the separate parts of rules, learning, and evolution together into one coherent body. “Invention, it must be humbly admitted, does not consist in creating out of void, but out of chaos; the materials must, in the first place, be afforded...” --Mary Shelley, Frankenstein (1831/1993, p. 299)", "title": "" } ]
[ { "docid": "702edfcf726076e6e8f2e02cb3996c27", "text": "WLAN RSS-based localization has been a hot research topic for the last years. To obtain high accuracy in the noisy wireless channel, WLAN location determination systems usually use a calibration phase, where a radio map, capturing the signal strength signatures at different locations in the area of interest, is built. The radio map construction process takes a lot of time and effort, reducing the value of WLAN localization systems. In this paper, we propose 3D ray tracing as a way for automatically generating a highly accurate radiomap. We compare this method to previously used propagation modeling-based methods like the Wall Attenuation Factor and 2D ray tracing models. We evaluate the performance of each method and its computational cost in a typical residential environment. We also examine the sensitivity of the localization accuracy to inaccurate material parameters. Our results quantify the accuracy- complexity trade-off of the different proposed techniques with 3D ray tracing giving the best localization accuracy compared to measurements with acceptable computational requirements on a typical PC.", "title": "" }, { "docid": "d63609f3850ceb80945ab72b242fcfe3", "text": "Code review is the manual assessment of source code by humans, mainly intended to identify defects and quality problems. Modern Code Review (MCR), a lightweight variant of the code inspections investigated since the 1970s, prevails today both in industry and open-source software (OSS) systems. The objective of this paper is to increase our understanding of the practical benefits that the MCR process produces on reviewed source code. To that end, we empirically explore the problems fixed through MCR in OSS systems. We manually classified over 1,400 changes taking place in reviewed code from two OSS projects into a validated categorization scheme. Surprisingly, results show that the types of changes due to the MCR process in OSS are strikingly similar to those in the industry and academic systems from literature, featuring the similar 75:25 ratio of maintainability-related to functional problems. We also reveal that 7–35% of review comments are discarded and that 10–22% of the changes are not triggered by an explicit review comment. Patterns emerged in the review data; we investigated them revealing the technical factors that influence the number of changes due to the MCR process. We found that bug-fixing tasks lead to fewer changes and tasks with more altered files and a higher code churn have more changes. Contrary to intuition, the person of the reviewer had no impact on the number of changes.", "title": "" }, { "docid": "243a3c43cc6d6b51d533e83b8683a1ad", "text": "Collective intelligence (CI), a group’s capacity to perform a wide variety of tasks, is a key factor in successful collaboration. Group composition, particularly diversity and member social perceptiveness, are consistent predictors of CI, but we have limited knowledge about the mechanisms underlying their effects. To address this gap, we examine how physiological synchrony, as an indicator of coordination and rapport, relates to CI in computer-mediated teams, and if synchrony might serve as a mechanism explaining the effect of group composition on CI. We present results from a laboratory experiment where 120 dyads completed the Test of Collective Intelligence (TCI) together online and rated their group satisfaction, while wearing physiological sensors. 
The first 60 dyads communicated via video and audio in study 1, while the next 60 dyads communicated via audio only in study 2. In study 1, we find that synchrony in facial expressions and synchrony in standard deviation of loudness in speech (both indicative of shared experience) was associated with CI and synchrony in electrodermal activity (indicative of shared arousal) with group satisfaction. Furthermore, various forms of synchrony mediated the effect of member diversity and social perceptiveness on CI and group satisfaction. In study 2, synchrony in facial expressions no longer had an effect on CI, but synchrony in standard deviation of loudness in speech continued to positively effect CI. Our results have important implications for online collaborations and distributed teams.", "title": "" }, { "docid": "38147fd01321b7d44cb47cc010f1a560", "text": "The Internet of Things is a paradigm where everyday objects can be equipped with identifying, sensing, networking and processing capabilities that will allow them to communicate with one another and with other devices and services over the Internet to accomplish some objective. Ultimately, IoT devices will be ubiquitous, context-aware and will enable ambient intelligence. This article reports on the current state of research on the Internet of Things by examining the literature, identifying current trends, describing challenges that threaten IoT diffusion, presenting open research questions and future directions and compiling a comprehensive reference list to assist researchers.", "title": "" }, { "docid": "c61e64ebef3ec28622732dd3a85f602d", "text": "BACKGROUND: Systematic Literature Reviews (SLRs) have gained significant popularity among software engineering (SE) researchers since 2004. Several researchers have also been working on improving the scientific and technological support for SLRs in SE. We argue that there is also an essential need for evidence-based body of knowledge about different aspects of the adoption of SLRs in SE. OBJECTIVE: The main objective of this research is to empirically investigate the adoption and use of SLRs in SE research from various perspectives. METHOD: We used multi-method approach as it is based on a combination of complementary research methods which are expected to compensate each others' limitations. RESULTS: A large majority of the participants are convinced of the value of using a rigorous and systematic methodology for literature reviews. However, there are concerns about the required time and resources for SLRs. One of the most important motivators for performing SLRs is new findings and inception of innovative ideas for further research. The reported SLRs are more influential compared to the traditional literature reviews in terms of number of citations. One of the main challenges of conducting SLRs is drawing a balance between rigor and required effort. CONCLUSIONS: SLR has become a popular research methodology for conducting literature review and evidence aggregation in SE. There is an overall positive perception about this methodology. The findings provide interesting insights into different aspects of SLRs. 
We expect that the findings can provide valuable information to readers on what can be expected from conducting SLRs and the potential impact of such reviews.", "title": "" }, { "docid": "423cba015a9cfc247943dd7d3c4ea1cf", "text": "No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or informa­ tion storage and retrieval) without permission in writing from the publisher. Preface Probability is common sense reduced to calculation Laplace This book is an outgrowth of our involvement in teaching an introductory prob­ ability course (\"Probabilistic Systems Analysis'�) at the Massachusetts Institute of Technology. The course is attended by a large number of students with diverse back­ grounds, and a broad range of interests. They span the entire spectrum from freshmen to beginning graduate students, and from the engineering school to the school of management. Accordingly, we have tried to strike a balance between simplicity in exposition and sophistication in analytical reasoning. Our key aim has been to develop the ability to construct and analyze probabilistic models in a manner that combines intuitive understanding and mathematical precision. In this spirit, some of the more mathematically rigorous analysis has been just sketched or intuitively explained in the text. so that complex proofs do not stand in the way of an otherwise simple exposition. At the same time, some of this analysis is developed (at the level of advanced calculus) in theoretical prob­ lems, that are included at the end of the corresponding chapter. FUrthermore, some of the subtler mathematical issues are hinted at in footnotes addressed to the more attentive reader. The book covers the fundamentals of probability theory (probabilistic mod­ els, discrete and continuous random variables, multiple random variables, and limit theorems), which are typically part of a first course on the subject. It also contains, in Chapters 4-6 a number of more advanced topics, from which an instructor can choose to match the goals of a particular course. In particular, in Chapter 4, we develop transforms, a more advanced view of conditioning, sums of random variables, least squares estimation, and the bivariate normal distribu-v vi Preface tion. Furthermore, in Chapters 5 and 6, we provide a fairly detailed introduction to Bernoulli, Poisson, and Markov processes. Our M.LT. course covers all seven chapters in a single semester, with the ex­ ception of the material on the bivariate normal (Section 4.7), and on continuous­ time Markov chains (Section 6.5). However, in an alternative course, the material on stochastic processes could be omitted, thereby allowing additional emphasis on foundational material, or coverage of other topics of the instructor's choice. Our …", "title": "" }, { "docid": "e83622a6c195b63f9a20306af8aade18", "text": "BACKGROUND\nPelvic floor muscle training is the most commonly recommended physical therapy treatment for women with stress leakage of urine. It is also used in the treatment of women with mixed incontinence, and less commonly for urge incontinence. Adjuncts, such as biofeedback or electrical stimulation, are also commonly used with pelvic floor muscle training. 
The content of pelvic floor muscle training programmes is highly variable.\n\n\nOBJECTIVES\nTo determine the effects of pelvic floor muscle training for women with symptoms or urodynamic diagnoses of stress, urge and mixed incontinence, in comparison to no treatment or other treatment options.\n\n\nSEARCH STRATEGY\nSearch strategy: We searched the Cochrane Incontinence Group trials register (May 2000), Medline (1980 to 1998), Embase (1980 to 1998), the database of the Dutch National Institute of Allied Health Professions (to 1998), the database of the Cochrane Rehabilitation and Related Therapies Field (to 1998), Physiotherapy Index (to 1998) and the reference lists of relevant articles. We handsearched the proceedings of the International Continence Society (1980 to 2000). We contacted investigators in the field to locate studies. Date of the most recent searches: May 2000.\n\n\nSELECTION CRITERIA\nRandomised trials in women with symptoms or urodynamic diagnoses of stress, urge or mixed incontinence that included pelvic floor muscle training in at least one arm of the trial.\n\n\nDATA COLLECTION AND ANALYSIS\nTwo reviewers assessed all trials for inclusion/exclusion and methodological quality. Data were extracted by the lead reviewer onto a standard form and cross checked by another. Disagreements were resolved by discussion. Data were processed as described in the Cochrane Handbook. Sensitivity analysis on the basis of diagnosis was planned and undertaken where appropriate.\n\n\nMAIN RESULTS\nForty-three trials met the inclusion criteria. The primary or only reference for 15 of these was a conference abstract. The pelvic floor muscle training programs, and comparison interventions, varied markedly. Outcome measures differed between trials, and methods of data reporting varied, making the data difficult to combine. Many of the trials were small. Allocation concealment was adequate in five trials, and nine trials used assessors masked to group allocation. Thirteen trials reported that there were no losses to follow up, seven trials had dropout rates of less than 10%, but in the remaining trials the proportion of dropouts ranged from 12% to 41%. Pelvic floor muscle training was better than no treatment or placebo treatments for women with stress or mixed incontinence. 'Intensive' appeared to be better than 'standard' pelvic floor muscle training. PFMT may be more effective than some types of electrical stimulation but there were problems in combining the data from these trials. There is insufficient evidence to determine if pelvic floor muscle training is better or worse than other treatments. The effect of adding pelvic floor muscle training to other treatments (e.g. electrical stimulation, behavioural training) is not clear due to the limited amount of evidence available. Evidence of the effect of adding other adjunctive treatments to PFMT (e.g. vaginal cones, intravaginal resistance) is equally limited. The effectiveness of biofeedback assisted PFMT is not clear, but on the basis of the evidence available there did not appear to be any benefit over PFMT alone at post treatment assessment. Long-term outcomes of pelvic floor muscle training are unclear. Side effects of pelvic floor muscle training were uncommon and reversible. 
A number of the formal comparisons should be viewed with caution due to statistical heterogeneity, lack of statistical independence, and the possibility of spurious confidence intervals in some instances.\n\n\nREVIEWER'S CONCLUSIONS\nPelvic floor muscle training appeared to be an effective treatment for adult women with stress or mixed incontinence. Pelvic floor muscle training was better than no treatment or placebo treatments. The limitations of the evidence available mean that is difficult to judge if pelvic floor muscle training was better or worse than other treatments. Most trials to date have studied the effect of treatment in younger, premenopausal women. The role of pelvic floor muscle training for women with urge incontinence alone remains unclear. Many of the trials were small with poor reporting of allocation concealment and masking of outcome assessors. In addition there was a lack of consistency in the choice and reporting of outcome measures that made data difficult to combine. Methodological problems limit the confidence that can be placed in the findings of the review. Further, large, high quality trials are necessary.", "title": "" }, { "docid": "f48ee93659a25bee9a49e8be6c789987", "text": "what design is from a theoretical point of view, which is a role of the descriptive model. However, descriptive models are not necessarily helpful in directly deriving either the architecture of intelligent CAD or the knowledge representation for intelligent CAD. For this purpose, we need a computable design process model that should coincide, at least to some extent, with a cognitive model that explains actual design activities. One of the major problems in developing so-called intelligent computer-aided design (CAD) systems (ten Hagen and Tomiyama 1987) is the representation of design knowledge, which is a two-part process: the representation of design objects and the representation of design processes. We believe that intelligent CAD systems will be fully realized only when these two types of representation are integrated. Progress has been made in the representation of design objects, as can be seen, for example, in geometric modeling; however, almost no significant results have been seen in the representation of design processes, which implies that we need a design theory to formalize them. According to Finger and Dixon (1989), design process models can be categorized into a descriptive model that explains how design is done, a cognitive model that explains the designer’s behavior, a prescriptive model that shows how design must be done, and a computable model that expresses a method by which a computer can accomplish a task. A design theory for intelligent CAD is not useful when it is merely descriptive or cognitive; it must also be computable. We need a general model of design Articles", "title": "" }, { "docid": "42a79b084dd18dafbe69aa3f0778158a", "text": "This paper introduces an approach for dense 3D reconstruction from unregistered Internet-scale photo collections with about 3 million of images within the span of a day on a single PC (“cloudless”). Our method advances image clustering, stereo, stereo fusion and structure from motion to achieve high computational performance. We leverage geometric and appearance constraints to obtain a highly parallel implementation on modern graphics processors and multi-core architectures. 
This leads to two orders of magnitude higher performance on an order of magnitude larger dataset than competing state-of-the-art approaches.", "title": "" }, { "docid": "95be4f5132cde3c637c5ee217b5c8405", "text": "In recent years, information communication and computation technologies are deeply converging, and various wireless access technologies have been successful in deployment. It can be predicted that the upcoming fifth-generation mobile communication technology (5G) can no longer be defined by a single business model or a typical technical characteristic. 5G is a multi-service and multi-technology integrated network, meeting the future needs of a wide range of big data and the rapid development of numerous businesses, and enhancing the user experience by providing smart and customized services. In this paper, we propose a cloud-based wireless network architecture with four components, i.e., mobile cloud, cloud-based radio access network (Cloud RAN), reconfigurable network and big data centre, which is capable of providing a virtualized, reconfigurable, smart wireless network.", "title": "" }, { "docid": "868df6c0c43dd49588cc0892b50e8079", "text": "Software bugs, such as concurrency, memory and semantic bugs, can significantly affect system reliability. Although much effort has been made to address this problem, there are still many bugs that cannot be detected, especially concurrency bugs due to the complexity of concurrent programs. Effective approaches for detecting these common bugs are therefore highly desired.\n This paper presents an invariant-based bug detection tool, DefUse, which can detect not only concurrency bugs (including the previously under-studied order violation bugs), but also memory and semantic bugs. Based on the observation that many bugs appear as violations to programmers' data flow intentions, we introduce three different types of definition-use invariants that commonly exist in both sequential and concurrent programs. We also design an algorithm to automatically extract such invariants from programs, which are then used to detect bugs. Moreover, DefUse uses various techniques to prune false positives and rank error reports.\n We evaluated DefUse using sixteen real-world applications with twenty real-world concurrency and sequential bugs. Our results show that DefUse can effectively detect 19 of these bugs, including 2 new bugs that were never reported before, with only a few false positives. Our training sensitivity results show that, with the benefit of the pruning and ranking algorithms, DefUse is accurate even with insufficient training.", "title": "" }, { "docid": "5638ba62bcbfd1bd5e46b4e0dccf0d94", "text": "Sentiment analysis aims to automatically uncover the underlying attitude that we hold towards an entity. The aggregation of these sentiments over a population represents opinion polling and has numerous applications. Current text-based sentiment analysis relies on the construction of dictionaries and machine learning models that learn sentiment from large text corpora. Sentiment analysis from text is currently widely used for customer satisfaction assessment and brand perception analysis, among others. With the proliferation of social media, multimodal sentiment analysis is set to bring new opportunities with the arrival of complementary data streams for improving and going beyond text-based sentiment analysis.
Since sentiment can be detected through affective traces it leaves, such as facial and vocal displays, multimodal sentiment analysis offers promising avenues for analyzing facial and vocal expressions in addition to the transcript or textual content. These approaches leverage emotion recognition and context inference to determine the underlying polarity and scope of an individual’s sentiment. In this survey, we define sentiment and the problem of multimodal sentiment analysis and review recent developments in multimodal sentiment analysis in different domains, including spoken reviews, images, video blogs, human-machine and human-human interaction. Challenges and opportunities of this emerging field are also discussed leading to our thesis that multimodal sentiment analysis holds a significant untapped potential.", "title": "" }, { "docid": "6664ed79a911247b401a4bd0b2cc619c", "text": "Extracting good representations from images is essential for many computer vision tasks. In this paper, we propose hierarchical matching pursuit (HMP), which builds a feature hierarchy layer-by-layer using an efficient matching pursuit encoder. It includes three modules: batch (tree) orthogonal matching pursuit, spatial pyramid max pooling, and contrast normalization. We investigate the architecture of HMP, and show that all three components are critical for good performance. To speed up the orthogonal matching pursuit, we propose a batch tree orthogonal matching pursuit that is particularly suitable to encode a large number of observations that share the same large dictionary. HMP is scalable and can efficiently handle full-size images. In addition, HMP enables linear support vector machines (SVM) to match the performance of nonlinear SVM while being scalable to large datasets. We compare HMP with many state-of-the-art algorithms including convolutional deep belief networks, SIFT based single layer sparse coding, and kernel based feature learning. HMP consistently yields superior accuracy on three types of image classification problems: object recognition (Caltech-101), scene recognition (MIT-Scene), and static event recognition (UIUC-Sports).", "title": "" }, { "docid": "d28ab4d2979872bf868ef9b7fe8487bb", "text": "We have developed an easy-to-use and cost-effective system to construct textured 3D animated face models from videos with minimal user interaction. This is a particularly challenging task for faces due to a lack of prominent textures. We develop a robust system by following a model-based approach: we make full use of generic knowledge of faces in head motion determination, head tracking, model fitting, and multiple-view bundle adjustment. Our system first takes, with an ordinary video camera, images of a face of a person sitting in front of the camera turning their head from one side to the other. After five manual clicks on two images to indicate the position of the eye corners, nose tip and mouth corners, the system automatically generates a realistic looking 3D human head model that can be animated immediately (different poses, facial expressions and talking). A user, with a PC and a video camera, can use our system to generate his/her face model in a few minutes. The face model can then be imported in his/her favorite game, and the user sees themselves and their friends take part in the game they are playing. We have demonstrated the system on a laptop computer live at many events, and constructed face models for hundreds of people. 
It works robustly under various environment settings.", "title": "" }, { "docid": "64094eef703f761aa82509326533c796", "text": "Grammatical error correction, like other machine learning tasks, greatly benefits from large quantities of high quality training data, which is typically expensive to produce. While writing a program to automatically generate realistic grammatical errors would be difficult, one could learn the distribution of naturallyoccurring errors and attempt to introduce them into other datasets. Initial work on inducing errors in this way using statistical machine translation has shown promise; we investigate cheaply constructing synthetic samples, given a small corpus of human-annotated data, using an off-the-rack attentive sequence-to-sequence model and a straight-forward post-processing procedure. Our approach yields error-filled artificial data that helps a vanilla bi-directional LSTM to outperform the previous state of the art at grammatical error detection, and a previously introduced model to gain further improvements of over 5% F0.5 score. When attempting to determine if a given sentence is synthetic, a human annotator at best achieves 39.39 F1 score, indicating that our model generates mostly human-like instances.", "title": "" }, { "docid": "026b46aee4653dcfb31d0041439ad3bf", "text": "In this paper, we develop a diagnosis model based on particle swarm optimization (PSO), support vector machines (SVMs) and association rules (ARs) to diagnose erythemato-squamous diseases. The proposed model consists of two stages: first, AR is used to select the optimal feature subset from the original feature set; then a PSO based approach for parameter determination of SVM is developed to find the best parameters of kernel function (based on the fact that kernel parameter setting in the SVM training procedure significantly influences the classification accuracy, and PSO is a promising tool for global searching). Experimental results show that the proposed AR_PSO–SVM model achieves 98.91% classification accuracy using 24 features of the erythemato-squamous diseases dataset taken from UCI (University of California at Irvine) machine learning database. Therefore, we can conclude that our proposed method is very promising compared to the previously reported results. & 2012 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "0e8cde83260d6ca4d8b3099628c25fc2", "text": "1Department of Molecular Virology, Immunology and Medical Genetics, The Ohio State University Medical Center, Columbus, Ohio, USA. 2Department of Physics, Pohang University of Science and Technology, Pohang, Korea. 3School of Interdisciplinary Bioscience and Bioengineering, Pohang, Korea. 4Physics Department, The Ohio State University, Columbus, Ohio, USA. 5These authors contributed equally to this work. e-mail: fishel.7@osu.edu", "title": "" }, { "docid": "b75c7e5c8badea76dc21c05901e32423", "text": "The need for autonomous navigation has increased in recent years due to the adoption of unmanned aerial vehicles (UAVs) and Micro UAVs (MAVs) for task, such as search and rescue, terrain mapping, and other missions, albeit by different means. MAVs have been less successful at fulfilling these missions as they are unable to carry complex sensors and camera systems for computer vision which larger UAVs routinely use. Monocular vision has been used previously to provide vision capabilities for MAVs. 
Monocular vision, however, has had less success at obstacle detection and avoidance compared to stereo vision and is more computationally expensive. The more expensive computations have therefore posed a problem in the past for on-board closed MAV systems for autonomous navigation using monocular vision. However, embedded GPUs, which have recently gained traction for small yet powerful parallel computations in small form factors, show promise for fully closed MAV systems. This paper discusses the future of autonomous navigation with an embedded GPU, NVIDIA’s Jetson TX1 board, and the AR.Drone 2.0 MAV, using a novel obstacle detection algorithm implementing goodFeaturesToTrack, Lucas-Kanade Optical Flow, image segmentation, and size expansion.", "title": "" }, { "docid": "336254888f3683d8dec038538bd704bb", "text": "In recent years, electric vehicles (EVs) have been receiving significant attention as an environmentally sustainable and cost-effective substitute for vehicles with internal combustion engines, reducing dependence on fossil fuels and saving greenhouse gas emissions. The present paper deals with an overview of different types of EV charging stations and a comparison between the related European and American standards. The work also includes a summary of possible types of Energy Storage Systems (ESSs), which are important for the integration of the latest generation of EV fast charging stations in smart grids. Finally, a brief analysis of the possible electrical layouts for ESS integration in EV charging systems, as proposed in the literature, is reported.", "title": "" }, { "docid": "2753c131bafcd392116383a04d3066b2", "text": "With the massive construction of the China high-speed railway, it is of great significance to propose an automatic approach to inspect the defects of the catenary support devices. Based on the obtained high resolution images, the detection and extraction of the components on the catenary support devices are the vital steps prior to their defect report. Inspired by the existing object detection Faster R-CNN framework, a cascaded convolutional neural network (CNN) architecture is built to successively detect the various components and the tiny fasteners in the complex catenary support device structures. Meanwhile, some missing states of the fasteners on the cantilever joints are directly reported via our proposed architecture. Experiments on the Wuhan-Guangzhou high-speed railway dataset demonstrate a practical performance of the component detection with good adaptation and robustness in complex environments, making it feasible to accurately inspect the extremely tiny defects on the various catenary components.", "title": "" } ]
scidocsrr
fc25e2217640f637c5b9c43def7dd8d1
Design and MinION testing of a nanopore targeted gene sequencing panel for chronic lymphocytic leukemia
[ { "docid": "ee785105669d58052ad3b3a3954ba9fb", "text": "Data visualization is an essential component of genomic data analysis. However, the size and diversity of the data sets produced by today's sequencing and array-based profiling methods present major challenges to visualization tools. The Integrative Genomics Viewer (IGV) is a high-performance viewer that efficiently handles large heterogeneous data sets, while providing a smooth and intuitive user experience at all levels of genome resolution. A key characteristic of IGV is its focus on the integrative nature of genomic studies, with support for both array-based and next-generation sequencing data, and the integration of clinical and phenotypic data. Although IGV is often used to view genomic data from public sources, its primary emphasis is to support researchers who wish to visualize and explore their own data sets or those from colleagues. To that end, IGV supports flexible loading of local and remote data sets, and is optimized to provide high-performance data visualization and exploration on standard desktop systems. IGV is freely available for download from http://www.broadinstitute.org/igv, under a GNU LGPL open-source license.", "title": "" } ]
[ { "docid": "30dffba83b24e835a083774aa91e6c59", "text": "Wikipedia is one of the most popular sites on the Web, with millions of users relying on it to satisfy a broad range of information needs every day. Although it is crucial to understand what exactly these needs are in order to be able to meet them, little is currently known about why users visit Wikipedia. The goal of this paper is to fill this gap by combining a survey of Wikipedia readers with a log-based analysis of user activity. Based on an initial series of user surveys, we build a taxonomy of Wikipedia use cases along several dimensions, capturing users’ motivations to visit Wikipedia, the depth of knowledge they are seeking, and their knowledge of the topic of interest prior to visiting Wikipedia. Then, we quantify the prevalence of these use cases via a large-scale user survey conducted on live Wikipedia with almost 30,000 responses. Our analyses highlight the variety of factors driving users to Wikipedia, such as current events, media coverage of a topic, personal curiosity, work or school assignments, or boredom. Finally, we match survey responses to the respondents’ digital traces in Wikipedia’s server logs, enabling the discovery of behavioral patterns associated with specific use cases. For instance, we observe long and fast-paced page sequences across topics for users who are bored or exploring randomly, whereas those using Wikipedia for work or school spend more time on individual articles focused on topics such as science. Our findings advance our understanding of reader motivations and behavior on Wikipedia and can have implications for developers aiming to improve Wikipedia’s user experience, editors striving to cater to their readers’ needs, third-party services (such as search engines) providing access to Wikipedia content, and researchers aiming to build tools such as recommendation engines.", "title": "" }, { "docid": "aa0f1910a52018d224dbe65b2be07a4f", "text": "We describe a system that uses automated planning to synthesize correct and efficient parallel graph programs from high-level algorithmic specifications. Automated planning allows us to use constraints to declaratively encode program transformations such as scheduling, implementation selection, and insertion of synchronization. Each plan emitted by the planner satisfies all constraints simultaneously, and corresponds to a composition of these transformations. In this way, we obtain an integrated compilation approach for a very challenging problem domain. We have used this system to synthesize parallel programs for four graph problems: triangle counting, maximal independent set computation, preflow-push maxflow, and connected components. Experiments on a variety of inputs show that the synthesized implementations perform competitively with hand-written, highly-tuned code.", "title": "" }, { "docid": "bce79146a0316fd10c6ee492ff0b5686", "text": "Recent advances in deep learning for object recognition in natural images has prompted a surge of interest in applying a similar set of techniques to medical images. Most of the initial attempts largely focused on replacing the input to such a deep convolutional neural network from a natural image to a medical image. This, however, does not take into consideration the fundamental differences between these two types of data. 
More specifically, detection or recognition of an anomaly in medical images depends significantly on fine details, unlike object recognition in natural images where coarser, more global structures matter more. This difference makes it inadequate to use the existing deep convolutional neural networks architectures, which were developed for natural images, because they rely on heavily downsampling an image to a much lower resolution to reduce the memory requirements. This hides details necessary to make accurate predictions for medical images. Furthermore, a single exam in medical imaging often comes with a set of different views which must be seamlessly fused in order to reach a correct conclusion. In our work, we propose to use a multi-view deep convolutional neural network that handles a set of more than one highresolution medical image. We evaluate this network on large-scale mammography-based breast cancer screening (BI-RADS prediction) using 103 thousand images. We focus on investigating the impact of training set sizes and image sizes on the prediction accuracy. Our results highlight that performance clearly increases with the size of training set, and that the best performance can only be achieved using the images in the original resolution. This suggests the future direction of medical imaging research using deep neural networks is to utilize as much data as possible with the least amount of potentially harmful preprocessing.", "title": "" }, { "docid": "351beace260a731aaf8dcf6e6870ad99", "text": "The field of Explainable Artificial Intelligence has taken steps towards increasing transparency in the decision-making process of machine learning models for classification tasks. Understanding the reasons behind the predictions of models increases our trust in them and lowers the risks of using them. In an effort to extend this to other tasks apart from classification, this thesis explores the interpretability aspect for sequence tagging models for the task of Named Entity Recognition (NER). This work proposes two approaches for adapting LIME, an interpretation method for classification, to sequence tagging and NER. The first approach is a direct adaptation of LIME to the task, while the second includes adaptations following the idea that entities are conceived as a group of words and we would like one explanation for the whole entity. Given the challenges in the evaluation of the interpretation method, this work proposes an extensive evaluation from different angles. It includes a quantitative analysis using the AOPC metric; a qualitative analysis that studies the explanations at instance and dataset levels as well as the semantic structure of the embeddings and the explanations; and a human evaluation to validate the model's behaviour. The evaluation has discovered patterns and characteristics to take into account when explaining NER models.", "title": "" }, { "docid": "03e5084a5e33205fc4deaeb69c66b460", "text": "In this paper we present a general convex optimization approach for solving highdimensional tensor regression problems under low-dimensional structural assumptions. We consider using convex and weakly decomposable regularizers assuming that the underlying tensor lies in an unknown low-dimensional subspace. Within our framework, we derive general risk bounds of the resulting estimate under fairly general dependence structure among covariates. 
Our framework leads to upper bounds in terms of two very simple quantities, the Gaussian width of a convex set in tensor space and the intrinsic dimension of the low-dimensional tensor subspace. These general bounds provide useful upper bounds on rates of convergence for a number of fundamental statistical models of interest including multi-response regression, vector auto-regressive models, low-rank tensor models and pairwise interaction models. Moreover, in many of these settings we prove that the resulting estimates are minimax optimal. Departments of Statistics and Computer Science, and Optimization Group at Wisconsin Institute for Discovery, University of Wisconsin-Madison, 1300 University Avenue, Madison, WI 53706. The research of Garvesh Raskutti is supported in part by NSF Grant DMS-1407028 Department of Statistics and Morgridge Institute for Research, University of Wisconsin-Madison, 1300 University Avenue, Madison, WI 53706. The research of Ming Yuan was supported in part by NSF FRG Grant DMS-1265202, and NIH Grant 1-U54AI117924-01.", "title": "" }, { "docid": "7437f0c8549cb8f73f352f8043a80d19", "text": "Graphene is considered as one of leading candidates for gas sensor applications in the Internet of Things owing to its unique properties such as high sensitivity to gas adsorption, transparency, and flexibility. We present self-activated operation of all graphene gas sensors with high transparency and flexibility. The all-graphene gas sensors which consist of graphene for both sensor electrodes and active sensing area exhibit highly sensitive, selective, and reversible responses to NO2 without external heating. The sensors show reliable operation under high humidity conditions and bending strain. In addition to these remarkable device performances, the significantly facile fabrication process enlarges the potential of the all-graphene gas sensors for use in the Internet of Things and wearable electronics.", "title": "" }, { "docid": "b759613b1eedd29d32fbbc118767b515", "text": "Deep learning has been shown successful in a number of domains, ranging from acoustics, images to natural language processing. However, applying deep learning to the ubiquitous graph data is non-trivial because of the unique characteristics of graphs. Recently, a significant amount of research efforts have been devoted to this area, greatly advancing graph analyzing techniques. In this survey, we comprehensively review different kinds of deep learning methods applied to graphs. We divide existing methods into three main categories: semi-supervised methods including Graph Neural Networks and Graph Convolutional Networks, unsupervised methods including Graph Autoencoders, and recent advancements including Graph Recurrent Neural Networks and Graph Reinforcement Learning. We then provide a comprehensive overview of these methods in a systematic manner following their history of developments. We also analyze the differences of these methods and how to composite different architectures. Finally, we briefly outline their applications and discuss potential future directions.", "title": "" }, { "docid": "d90a66cf63abdc1d0caed64812de7043", "text": "BACKGROUND/AIMS\nEnd-stage liver disease accounts for one in forty deaths worldwide. 
Chronic infections with hepatitis B virus (HBV) and hepatitis C virus (HCV) are well-recognized risk factors for cirrhosis and liver cancer, but estimates of their contributions to worldwide disease burden have been lacking.\n\n\nMETHODS\nThe prevalence of serologic markers of HBV and HCV infections among patients diagnosed with cirrhosis or hepatocellular carcinoma (HCC) was obtained from representative samples of published reports. Attributable fractions of cirrhosis and HCC due to these infections were estimated for 11 WHO-based regions.\n\n\nRESULTS\nGlobally, 57% of cirrhosis was attributable to either HBV (30%) or HCV (27%) and 78% of HCC was attributable to HBV (53%) or HCV (25%). Regionally, these infections usually accounted for >50% of HCC and cirrhosis. Applied to 2002 worldwide mortality estimates, these fractions represent 929,000 deaths due to chronic HBV and HCV infections, including 446,000 cirrhosis deaths (HBV: n=235,000; HCV: n=211,000) and 483,000 liver cancer deaths (HBV: n=328,000; HCV: n=155,000).\n\n\nCONCLUSIONS\nHBV and HCV infections account for the majority of cirrhosis and primary liver cancer throughout most of the world, highlighting the need for programs to prevent new infections and provide medical management and treatment for those already infected.", "title": "" }, { "docid": "10990c819cbc6dfb88b4c2de829f27f1", "text": "Building on the fraudulent foundation established by atheist Sigmund Freud, psychoanalyst Erik Erikson has proposed a series of eight \"life cycles,\" each with an accompanying \"life crisis,\" to explain both human behavior and man's religious tendencies. Erikson's extensive application of his theories to the life of Martin Luther reveals his contempt for the living God who has revealed Himself in Scripture. This paper will consider Erikson's view of man, sin, redemption, and religion, along with an analysis of his eight \"life cycles.\" Finally, we will critique his attempted psychoanalysis of Martin Luther.", "title": "" }, { "docid": "71ae8b4cc2f4e531be95cdbb147c75eb", "text": "This paper is to explore the possibility to use alternative data and artificial intelligence techniques to trade stocks. The efficacy of the daily Twitter sentiment on predicting the stock return is examined using machine learning methods. Reinforcement learning(Q-learning) is applied to generate the optimal trading policy based on the sentiment signal. The predicting power of the sentiment signal is more significant if the stock price is driven by the expectation on the company growth and when the company has a major event that draws the public attention. The optimal trading strategy based on reinforcement learning outperforms the trading strategy based on the machine learning prediction.", "title": "" }, { "docid": "9581c692787cfef1ce2916100add4c1e", "text": "Diabetes related eye disease is growing as a major health concern worldwide. Diabetic retinopathy is an infirmity due to higher level of glucose in the retinal capillaries, resulting in cloudy vision and blindness eventually. With regular screening, pathology can be detected in the instigating stage and if intervened with in time medication could prevent further deterioration. This paper develops an automated diagnosis system to recognize retinal blood vessels, and pathologies, such as exudates and microaneurysms together with certain texture properties using image processing techniques. 
These anatomical and texture features are then fed into a multiclass support vector machine (SVM) for classifying it into normal, mild, moderate, severe and proliferative categories. Advantages include, it processes quickly a large collection of fundus images obtained from mass screening which lessens cost and increases efficiency for ophthalmologists. Our method was evaluated on two publicly available databases and got encouraging results with a state of the art in this area.", "title": "" }, { "docid": "746b9e9e1fdacc76d3acb4f78d824901", "text": "This paper proposes a new method for the detection of glaucoma using fundus image which mainly affects the optic disc by increasing the cup size is proposed. The ratio of the optic cup to disc (CDR) in retinal fundus images is one of the primary physiological parameter for the diagnosis of glaucoma. The Kmeans clustering technique is recursively applied to extract the optic disc and optic cup region and an elliptical fitting technique is applied to find the CDR values. The blood vessels in the optic disc region are detected by using local entropy thresholding approach. The ratio of area of blood vessels in the inferiorsuperior side to area of blood vessels in the nasal-temporal side (ISNT) is combined with the CDR for the classification of fundus image as normal or glaucoma by using K-Nearest neighbor , Support Vector Machine and Bayes classifier. A batch of 36 retinal images obtained from the Aravind Eye Hospital, Madurai, Tamilnadu, India is used to assess the performance of the proposed system and a classification rate of 95% is achieved.", "title": "" }, { "docid": "fc3aeb32f617f7a186d41d56b559a2aa", "text": "Existing neural relation extraction (NRE) models rely on distant supervision and suffer from wrong labeling problems. In this paper, we propose a novel adversarial training mechanism over instances for relation extraction to alleviate the noise issue. As compared with previous denoising methods, our proposed method can better discriminate those informative instances from noisy ones. Our method is also efficient and flexible to be applied to various NRE architectures. As shown in the experiments on a large-scale benchmark dataset in relation extraction, our denoising method can effectively filter out noisy instances and achieve significant improvements as compared with the state-of-theart models.", "title": "" }, { "docid": "86aa313233bee3f040604ffa214af4bf", "text": "It is hypothesized that collective efficacy, defined as social cohesion among neighbors combined with their willingness to intervene on behalf of the common good, is linked to reduced violence. This hypothesis was tested on a 1995 survey of 8782 residents of 343 neighborhoods in Chicago, Illinois. Multilevel analyses showed that a measure of collective efficacy yields a high between-neighborhood reliability and is negatively associated with variations in violence, when individual-level characteristics, measurement error, and prior violence are controlled. Associations of concentrated disadvantage and residential instability with violence are largely mediated by collective efficacy.", "title": "" }, { "docid": "39bf7e3a8e75353a3025e2c0f18768f9", "text": "Ligament reconstruction is the current standard of care for active patients with an anterior cruciate ligament (ACL) rupture. Although the majority of ACL reconstruction (ACLR) surgeries successfully restore the mechanical stability of the injured knee, postsurgical outcomes remain widely varied. 
Less than half of athletes who undergo ACLR return to sport within the first year after surgery, and it is estimated that approximately 1 in 4 to 1 in 5 young, active athletes who undergo ACLR will go on to a second knee injury. The outcomes after a second knee injury and surgery are significantly less favorable than outcomes after primary injuries. As advances in graft reconstruction and fixation techniques have improved to consistently restore passive joint stability to the preinjury level, successful return to sport after ACLR appears to be predicated on numerous postsurgical factors. Importantly, a secondary ACL injury is most strongly related to modifiable postsurgical risk factors. Biomechanical abnormalities and movement asymmetries, which are more prevalent in this cohort than previously hypothesized, can persist despite high levels of functional performance, and also represent biomechanical and neuromuscular control deficits and imbalances that are strongly associated with secondary injury incidence. Decreased neuromuscular control and high-risk movement biomechanics, which appear to be heavily influenced by abnormal trunk and lower extremity movement patterns, not only predict first knee injury risk but also reinjury risk. These seminal findings indicate that abnormal movement biomechanics and neuromuscular control profiles are likely both residual to, and exacerbated by, the initial injury. Evidence-based medicine (EBM) strategies should be used to develop effective, efficacious interventions targeted to these impairments to optimize the safe return to high-risk activity. In this Current Concepts article, the authors present the latest evidence related to risk factors associated with ligament failure or a secondary (contralateral) injury in athletes who return to sport after ACLR. From these data, they propose an EBM paradigm shift in postoperative rehabilitation and return-to-sport training after ACLR that is focused on the resolution of neuromuscular deficits that commonly persist after surgical reconstruction and standard rehabilitation of athletes.", "title": "" }, { "docid": "31dbedbcdb930ead1f8274ff2c181fcb", "text": "This paper sums up lessons learned from a sequence of cooperative design workshops where end users were enabled to design mobile systems through scenario building, role playing, and low-fidelity prototyping. We present a resulting fixed workshop structure with well-chosen constraints that allows for end users to explore and design new technology and work practices. In these workshops, the systems developers get input to design from observing how users stage and act out current and future use scenarios and improvise new technology to fit their needs. A theoretical framework is presented to explain the creative processes involved and the workshop as a user-centered design method. Our findings encourage us to recommend the presented workshop structure for design projects involving mobility and computer-mediated communication, in particular project where the future use of the resulting products and services also needs to be designed.", "title": "" }, { "docid": "c25a62b5798e7c08579efb61c35f2c66", "text": "In this paper, we propose a new adaptive stochastic gradient Langevin dynamics (ASGLD) algorithmic framework and its two specialized versions, namely adaptive stochastic gradient (ASG) and adaptive gradient Langevin dynamics(AGLD), for non-convex optimization problems. 
All proposed algorithms can escape from saddle points with at most $O(\\log d)$ iterations, which is nearly dimension-free. Further, we show that ASGLD and ASG converge to a local minimum with at most $O(\\log d/\\epsilon^4)$ iterations. Also, ASGLD with full gradients or ASGLD with a slowly linearly increasing batch size converge to a local minimum with iterations bounded by $O(\\log d/\\epsilon^2)$, which outperforms existing first-order methods.", "title": "" }, { "docid": "4482faa886c3216bf35265da250633c4", "text": "Acidification of rain-water is identified as one of the most serious environmental problems of transboundary nature. Acid rain is mainly a mixture of sulphuric and nitric acids depending upon the relative quantities of oxides of sulphur and nitrogen emissions. Due to the interaction of these acids with other constituents of the atmosphere, protons are released causing increase in the soil acidity Lowering of soil pH mobilizes and leaches away nutrient cations and increases availability of toxic heavy metals. Such changes in the soil chemical characteristics reduce the soil fertility which ultimately causes the negative impact on growth and productivity of forest trees and crop plants. Acidification of water bodies causes large scale negative impact on aquatic organisms including fishes. Acidification has some indirect effects on human health also. Acid rain affects each and every components of ecosystem. Acid rain also damages man-made materials and structures. By reducing the emission of the precursors of acid rain and to some extent by liming, the problem of acidification of terrestrial and aquatic ecosystem has been reduced during last two decades.", "title": "" }, { "docid": "0ebc0724a8c966e93e05fb7fce80c1ab", "text": "Firms in the financial services industry have been faced with the dramatic and relatively recent emergence of new technology innovations, and process disruptions. The industry as a whole, and many new fintech start-ups are looking for new pathways to successful business models, the creation of enhanced customer experience, and new approaches that result in services transformation. Industry and academic observers believe this to be more of a revolution than a set of less impactful changes, with financial services as a whole due for major improvements in efficiency, in customer centricity and informedness. The long-standing dominance of leading firms that are not able to figure out how to effectively hook up with the “Fintech Revolution” is at stake. This article presents a new fintech innovation mapping approach that enables the assessment of the extent to which there are changes and transformations in four key areas of the financial services industry. We discuss: (1) operations management in financial services, and the changes that are occurring there; (2) technology innovations that have begun to leverage the execution and stakeholder value associated with payments settlement, cryptocurrencies, blockchain technologies, and cross-border payment services; (3) multiple fintech innovations that have impacted lending and deposit services, peer-to-peer (P2P) lending and the use of social media; (4) issues with respect to investments, financial markets, trading, risk management, robo-advisory and related services that are influenced by blockchain and fintech innovations.", "title": "" }, { "docid": "1e53e57544d6f4250396800b5792de5f", "text": "Several data mining algorithms use iterative optimization methods for learning predictive models. 
It is not easy to determine upfront which optimization method will perform best or converge fast for such tasks. In this paper, we analyze Meta Algorithms (MAs) which work by adaptively combining iterates from a pool of base optimization algorithms. We show that the performance of MAs are competitive with the best convex combination of the iterates from the base algorithms for online as well as batch convex optimization problems. We illustrate the effectiveness of MAs on the problem of portfolio selection in the stock market and use several existing ideas for portfolio selection as base algorithms. Using daily S\\&P500 data for the past 21 years and a benchmark NYSE dataset, we show that MAs outperform existing portfolio selection algorithms with provable guarantees by several orders of magnitude, and match the performance of the best heuristics in the pool.", "title": "" } ]
scidocsrr
8497670de42f22b2b3d4de50899958e4
CUDA vs OpenACC: Performance Case Studies with Kernel Benchmarks and a Memory-Bound CFD Application
[ { "docid": "6537921976c2779d1e7d921c939ec64d", "text": "Stencil computation sweeps over a spatial grid over multiple time steps to perform nearest-neighbor computations. The bandwidth-to-compute requirement for a large class of stencil kernels is very high, and their performance is bound by the available memory bandwidth. Since memory bandwidth grows slower than compute, the performance of stencil kernels will not scale with increasing compute density. We present a novel 3.5D-blocking algorithm that performs 2.5D-spatial and temporal blocking of the input grid into on-chip memory for both CPUs and GPUs. The resultant algorithm is amenable to both thread- level and data-level parallelism, and scales near-linearly with the SIMD width and multiple-cores. Our performance numbers are faster or comparable to state-of-the-art-stencil implementations on CPUs and GPUs. Our implementation of 7-point-stencil is 1.5X-faster on CPUs, and 1.8X faster on GPUs for single- precision floating point inputs than previously reported numbers. For Lattice Boltzmann methods, the corresponding speedup number on CPUs is 2.1X.", "title": "" } ]
[ { "docid": "15de232c8daf22cf1a1592a21e1d9df3", "text": "This survey discusses how recent developments in multimodal processing facilitate conceptual grounding of language. We categorize the information flow in multimodal processing with respect to cognitive models of human information processing and analyze different methods for combining multimodal representations. Based on this methodological inventory, we discuss the benefit of multimodal grounding for a variety of language processing tasks and the challenges that arise. We particularly focus on multimodal grounding of verbs which play a crucial role for the compositional power of language. Title and Abstract in German Multimodale konzeptuelle Verankerung für die automatische Sprachverarbeitung Dieser Überblick erörtert, wie aktuelle Entwicklungen in der automatischen Verarbeitung multimodaler Inhalte die konzeptuelle Verankerung sprachlicher Inhalte erleichtern können. Die automatischen Methoden zur Verarbeitung multimodaler Inhalte werden zunächst hinsichtlich der zugrundeliegenden kognitiven Modelle menschlicher Informationsverarbeitung kategorisiert. Daraus ergeben sich verschiedene Methoden um Repräsentationen unterschiedlicher Modalitäten miteinander zu kombinieren. Ausgehend von diesen methodischen Grundlagen wird diskutiert, wie verschiedene Forschungsprobleme in der automatischen Sprachverarbeitung von multimodaler Verankerung profitieren können und welche Herausforderungen sich dabei ergeben. Ein besonderer Schwerpunkt wird dabei auf die multimodale konzeptuelle Verankerung von Verben gelegt, da diese eine wichtige kompositorische Funktion erfüllen.", "title": "" }, { "docid": "4aa17982590e86fea90267e4386e2ef1", "text": "There are many promising psychological interventions on the horizon, but there is no clear methodology for preparing them to be scaled up. Drawing on design thinking, the present research formalizes a methodology for redesigning and tailoring initial interventions. We test the methodology using the case of fixed versus growth mindsets during the transition to high school. Qualitative inquiry and rapid, iterative, randomized \"A/B\" experiments were conducted with ~3,000 participants to inform intervention revisions for this population. Next, two experimental evaluations showed that the revised growth mindset intervention was an improvement over previous versions in terms of short-term proxy outcomes (Study 1, N=7,501), and it improved 9th grade core-course GPA and reduced D/F GPAs for lower achieving students when delivered via the Internet under routine conditions with ~95% of students at 10 schools (Study 2, N=3,676). Although the intervention could still be improved even further, the current research provides a model for how to improve and scale interventions that begin to address pressing educational problems. It also provides insight into how to teach a growth mindset more effectively.", "title": "" }, { "docid": "e2cd2edc74d932f1632a858ac124f902", "text": "Large writes are beneficial both on individual disks and on disk arrays, e.g., RAID-5. The presented design enables large writes of internal B-tree nodes and leaves. It supports both in-place updates and large append-only (“log-structured”) write operations within the same storage volume, within the same B-tree, and even at the same time. The essence of the proposal is to make page migration inexpensive, to migrate pages while writing them, and to make such migration optional rather than mandatory as in log-structured file systems. 
The inexpensive page migration also aids traditional defragmentation as well as consolidation of free space needed for future large writes. These advantages are achieved with a very limited modification to conventional B-trees that also simplifies other B-tree operations, e.g., key range locking and compression. Prior proposals and prototypes implemented transacted B-tree on top of log-structured file systems and added transaction support to log-structured file systems. Instead, the presented design adds techniques and performance characteristics of log-structured file systems to traditional B-trees and their standard transaction support, notably without adding a layer of indirection for locating B-tree nodes on disk. The result retains fine-granularity locking, full transactional ACID guarantees, fast search performance, etc. expected of a modern B-tree implementation, yet adds efficient transacted page relocation and large, high-bandwidth writes.", "title": "" }, { "docid": "29c62dce09752ce0eee4ec9d1840fad0", "text": "This paper focuses on a novel and challenging vision task, dense video captioning, which aims to automatically describe a video clip with multiple informative and diverse caption sentences. The proposed method is trained without explicit annotation of fine-grained sentence to video region-sequence correspondence, but is only based on weak video-level sentence annotations. It differs from existing video captioning systems in three technical aspects. First, we propose lexical fully convolutional neural networks (Lexical-FCN) with weakly supervised multi-instance multi-label learning to weakly link video regions with lexical labels. Second, we introduce a novel submodular maximization scheme to generate multiple informative and diverse region-sequences based on the Lexical-FCN outputs. A winner-takes-all scheme is adopted to weakly associate sentences to region-sequences in the training phase. Third, a sequence-to-sequence learning based language model is trained with the weakly supervised information obtained through the association process. We show that the proposed method can not only produce informative and diverse dense captions, but also outperform state-of-the-art single video captioning methods by a large margin.", "title": "" }, { "docid": "d1f3961959f11ce553237ef8941da86a", "text": "Inspired by recent successes of deep learning in computer vision and speech recognition, we propose a novel framework to encode time series data as different types of images, namely, Gramian Angular Fields (GAF) and Markov Transition Fields (MTF). This enables the use of techniques from computer vision for classification. Using a polar coordinate system, GAF images are represented as a Gramian matrix where each element is the trigonometric sum (i.e., superposition of directions) between different time intervals. MTF images represent the first order Markov transition probability along one dimension and temporal dependency along the other. We used Tiled Convolutional Neural Networks (tiled CNNs) on 12 standard datasets to learn high-level features from individual GAF, MTF, and GAF-MTF images that resulted from combining GAF and MTF representations into a single image. The classification results of our approach are competitive with five stateof-the-art approaches. 
An analysis of the features and weights learned via tiled CNNs explains why the approach works.", "title": "" }, { "docid": "5e5e2d038ae29b4c79c79abe3d20ae40", "text": "Article history: Received 28 February 2013 Accepted 26 July 2013 Available online 11 October 2013 Fault diagnosis of Discrete Event Systems has become an active research area in recent years. The research activity in this area is driven by the needs of many different application domains such as manufacturing, process control, control systems, transportation, communication networks, software engineering, and others. The aim of this paper is to review the state-of the art of methods and techniques for fault diagnosis of Discrete Event Systems based on models that include faulty behaviour. Theoretical and practical issues related to model description tools, diagnosis processing structure, sensor selection, fault representation and inference are discussed. 2013 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "99511c1267d396d3745f075a40a06507", "text": "Problem Description: It should be well known that processors are outstripping memory performance: specifically that memory latencies are not improving as fast as processor cycle time or IPC or memory bandwidth. Thought experiment: imagine that a cache miss takes 10000 cycles to execute. For such a processor instruction level parallelism is useless, because most of the time is spent waiting for memory. Branch prediction is also less effective, since most branches can be determined with data already in registers or in the cache; branch prediction only helps for branches which depend on outstanding cache misses. At the same time, pressures for reduced power consumption mount. Given such trends, some computer architects in industry (although not Intel EPIC) are talking seriously about retreating from out-of-order superscalar processor architecture, and instead building simpler, faster, dumber, 1-wide in-order processors with high degrees of speculation. Sometimes this is proposed in combination with multiprocessing and multithreading: tolerate long memory latencies by switching to other processes or threads. I propose something different: build narrow fast machines but use intelligent logic inside the CPU to increase the number of outstanding cache misses that can be generated from a single program. By MLP I mean simply the number of outstanding cache misses that can be generated (by a single thread, task, or program) and executed in an overlapped manner. It does not matter what sort of execution engine generates the multiple outstanding cache misses. An out-of-order superscalar ILP CPU may generate multiple outstanding cache misses, but 1-wide processors can be just as effective. Change the metrics: total execution time remains the overall goal, but instead of reporting IPC as an approximation to this, we must report MLP. Limit studies should be in terms of total number of non-overlapped cache misses on critical path. Now do the research: Many present-day hot topics in computer architecture help ILP, but do not help MLP. As mentioned above, predicting branch directions for branches that can be determined from data already in the cache or in registers does not help MLP for extremely long latencies. Similarly, prefetching of data cache misses for array processing codes does not help MLP – it just moves it around. Instead, investigate microarchitectures that help MLP: (0) Trivial case – explicit multithreading, like SMT. 
(1) Slightly less trivial case – implicitly multithread single programs, either by compiler software on an MT machine, or by a hybrid, such as …", "title": "" }, { "docid": "553eb49b292b5edb4b53953701410a7d", "text": "We review the most important mathematical models and algorithms developed for the exact solution of the one-dimensional bin packing and cutting stock problems, and experimentally evaluate, on state-of-the art computers, the performance of the main available software tools.", "title": "" }, { "docid": "f01d7df02efb2f4114d93adf0da8fbf1", "text": "This review summarizes the different methods of preparation of polymer nanoparticles including nanospheres and nanocapsules. The first part summarizes the basic principle of each method of nanoparticle preparation. It presents the most recent innovations and progresses obtained over the last decade and which were not included in previous reviews on the subject. Strategies for the obtaining of nanoparticles with controlled in vivo fate are described in the second part of the review. A paragraph summarizing scaling up of nanoparticle production and presenting corresponding pilot set-up is considered in the third part of the review. Treatments of nanoparticles, applied after the synthesis, are described in the next part including purification, sterilization, lyophilization and concentration. Finally, methods to obtain labelled nanoparticles for in vitro and in vivo investigations are described in the last part of this review.", "title": "" }, { "docid": "1e1706e1bd58a562a43cc7719f433f4f", "text": "In this paper, we present the use of D-higraphs to perform HAZOP studies. D-higraphs is a formalism that includes in a single model the functional as well as the structural (ontological) components of any given system. A tool to perform a semi-automatic guided HAZOP study on a process plant is presented. The diagnostic system uses an expert system to predict the behavior modeled using D-higraphs. This work is applied to the study of an industrial case and its results are compared with other similar approaches proposed in previous studies. The analysis shows that the proposed methodology fits its purpose enabling causal reasoning that explains causes and consequences derived from deviations, it also fills some of the gaps and drawbacks existing in previous reported HAZOP assistant tools.", "title": "" }, { "docid": "2fc7b4f4763d094462f13688b473d370", "text": "Software-product-line engineering has gained considerable momentum in recent years, both in industry and in academia. A software product line is a family of software products that share a common set of features. Software product lines challenge traditional analysis techniques, such as type checking, model checking, and theorem proving, in their quest of ensuring correctness and reliability of software. Simply creating and analyzing all products of a product line is usually not feasible, due to the potentially exponential number of valid feature combinations. Recently, researchers began to develop analysis techniques that take the distinguishing properties of software product lines into account, for example, by checking feature-related code in isolation or by exploiting variability information during analysis. The emerging field of product-line analyses is both broad and diverse, so it is difficult for researchers and practitioners to understand their similarities and differences. We propose a classification of product-line analyses to enable systematic research and application. 
Based on our insights with classifying and comparing a corpus of 123 research articles, we develop a research agenda to guide future research on product-line analyses.", "title": "" }, { "docid": "eb0a5d496dd9a427ab7d52416f70aab3", "text": "Progress in habit theory can be made by distinguishing habit from frequency of occurrence, and using independent measures for these constructs. This proposition was investigated in three studies using a longitudinal, cross-sectional and experimental design on eating, mental habits and word processing, respectively. In Study 1, snacking habit and past snacking frequency independently predicted later snacking behaviour, while controlling for the theory of planned behaviour variables. Habit fully mediated the effect of past on later behaviour. In Study 2, habitual negative self-thinking and past frequency of negative self-thoughts independently predicted self-esteem and the presence of depressive and anxiety symptoms. In Study 3, habit varied as a function of experimentally manipulated task complexity, while behavioural frequency was held constant. Taken together, while repetition is necessary for habits to develop, these studies demonstrate that habit should not be equated with frequency of occurrence, but rather should be considered as a mental construct involving features of automaticity, such as lack of awareness, difficulty to control and mental efficiency.", "title": "" }, { "docid": "de43054eb774df93034ffc1976a932b7", "text": "Recent experiments in programming natural language question-answering systems are reviewed to summarize the methods that have been developed for syntactic, semantic, and logical analysis of English strings. It is concluded that at least minimally effective techniques have been devised for answering questions from natural language subsets in small scale experimental systems and that a useful paradigm has evolved to guide research efforts in the field. Current approaches to semantic analysis and logical inference are seen to be effective beginnings but of questionable generality with respect either to subtle aspects of meaning or to applications over large subsets of English. Generalizing from current small-scale experiments to language-processing systems based on dictionaries with thousands of entries—with correspondingly large grammars and semantic systems—may entail a new order of complexity and require the invention and development of entirely different approaches to semantic analysis and question answering.", "title": "" }, { "docid": "4d1a448569c55f919d9ce4da0928c89a", "text": "The hit, break and cut classes of verbs are grammatically relevant in Kimaragang, as in English. The relevance of such classes for determining how arguments are expressed suggests that the meaning of a verb is composed of (a) systematic components of meaning (the EVENT TEMPLATE); and (b) idiosyncratic properties of the individual root. Assuming this approach to be essentially correct, we compare grammatical phenomena in Kimaragang which are sensitive to verb class membership with phenomena which are not class-sensitive. The tendency that emerges is that class-sensitive alternations do not seem to be affix-dependent, and are quite restricted in their ability to introduce new arguments into the argument structure. 1. Verbs of hitting and breaking in English This paper discusses the relationship between verbal semantics and clause structure in Kimaragang Dusun, an endangered Philippine-type language of northern Borneo. 
It builds on a classic paper by Charles Fillmore (1970), in which he distinguishes two classes of transitive verbs in English: “surface contact” verbs (e.g., hit, slap, strike, bump, stroke) vs. “change of state” verbs (e.g., break, bend, fold, shatter, crack). Fillmore shows that the members of each class share certain syntactic and semantic properties which distinguish them from members of the other class. He further argues that the correlation between these syntactic and semantic properties supports a view of lexical semantics under which the meaning of a verb is made up of two kinds of elements: (a) systematic components of meaning that are shared by an entire class; and (b) idiosyncratic components that are specific to the individual root. Only the former are assumed to be “grammatically relevant.” This basic insight has been foundational for a large body of subsequent work in the area of lexical semantics. One syntactic test that distinguishes hit verbs from break verbs in English is the “causative alternation”, which is systematically possible with break verbs (John broke the window vs. The 1 I would like to thank Jim Johansson, Farrell Ackerman and John Beavers for helpful discussion of these issues. Thanks also to Jim Johansson for giving me access to his field dictionary (Johansson, n.d.), the source of many of the Kimaragang examples in this paper. Special thanks are due to my primary language consultant, Janama Lantubon. Part of the research for this study was supported by NEH-NSF Documenting Endangered Languages fellowship no. FN-50027-07. The grammar of hitting, breaking and cutting in Kimaragang Dusun 2 window broke) but systematically impossible with hit verbs (John hit the window vs. *The window hit). A second test involves a kind of “possessor ascension”, a paraphrase in which the possessor of a body-part noun can be expressed as direct object. This paraphrase is grammatical with hit verbs (I hit his leg vs. I hit him on the leg) but not with break verbs (I broke his leg vs. *I broke him on the leg). A third diagnostic relates to the potential ambiguity of the passive participle. Participles of both classes take a verbal-eventive reading; but participles of break verbs also allow an adjectival-stative reading (the window is still broken) which is unavailable for participles of hit verbs (*the window is still hit). Semantically, the crucial difference between the two classes is that break verbs entail a result, specifically a “separation in [the] material integrity” of the patient (Hale and Keyser 1987). This entailment cannot be cancelled (e.g., I broke the window with a hammer; #it didn’t faze the window, but the hammer shattered). The hit verbs, in contrast, do not have this entailment (I hit the window with a hammer; it didn’t faze the window, but the hammer shattered). A second difference is that break verbs may impose selectional restrictions based on physical properties of the object (I {folded/?bent/ *broke/*shattered} the blanket) whereas hit verbs do not (I {hit/slapped/struck/beat} the blanket). Selectional restrictions of hit verbs are more likely to be based on physical properties of the instrument. In the years since 1970, these two classes of verbs have continued to be studied and discussed in numerous publications. Additional diagnostics have been identified, including the with/against alternation (examples 1–2; cf. Fillmore 1977:75); the CONATIVE alternation (Mary hit/broke the piñata vs. Mary hit/*broke at the piñata; Guerssel et al. 
1985); and the Middle alternation (This glass breaks/*hits easily; Fillmore 1977, Hale and Keyser 1987). These tests and others are summarized in Levin (1993). (1) a. I hit the fence with the stick. b. I hit the stick against the fence. (2) a. I broke the window with the stick. b. #I broke the stick against the window. (not the same meaning!!) Another verb class that has received considerable attention in recent years is the cut class (e.g., Guerssel et al. 1985, Bohnemeyer 2007, Asifa et al. 2007). In this paper I will show that these same three classes (hit, break, cut) are distinguished by a number of grammatical and semantic properties in Kimaragang as well. Section 2 briefly introduces some of the basic assumptions that we will adopt about the structure of verb meanings. Section 3 discusses criteria that distinguish hit verbs from break verbs, and section 4 discusses the properties of the cut verbs. Section 5 introduces another test, which I refer to as the instrumental alternation, which exhibits a different pattern for each of the three classes. Section 6 discusses the tests themselves, trying to identify characteristic properties of the constructions that are sensitive to verb classes, and which distinguish these constructions from those that are not class-sensitive. 2. What do verb classes tell us? Fillmore‟s approach to the study of verb meanings has inspired a large volume of subsequent research; see for example Levin (1993), Levin and Rappaport Hovav (1995, 1998, 2005; henceforth L&RH), and references cited in those works. Much of this research is concerned with exploring the following hypotheses, which were already at least partially articulated in Fillmore (1970): The grammar of hitting, breaking and cutting in Kimaragang Dusun 3 a. Verb meanings are composed of two kinds of information. Some components of meaning are systematic, forming a kind of “event template”, while others are idiosyncratic, specific to that particular root. b. Only systematic components of meaning are “grammatically relevant”, more specifically, relevant to argument realization. c. Grammatically determined verb classes are sets of verbs that share the same template. The systematic aspects of meaning distinguish one class from another, while roots belonging to the same class are distinguished by features of their idiosyncratic meaning. Levin (1993) states: “[T]here is a sense in which the notion of verb class is an artificial construct. Verb classes arise because a set of verbs with one or more shared meaning components show similar behavior... The important theoretical construct is the meaning component, not the verb class...” Identifying semantically determined sets of verbs is thus a first step in understanding what elements of meaning are relevant for determining how arguments will be expressed. Notice that the three prototypical verbs under consideration here (hit, beak, cut) are all transitive verbs, and all three select the same set of semantic roles: agent, patient, plus optional instrument. Thus the event template that defines each class, and allows us to account for the grammatical differences summarized above, must be more than a simple list of semantic roles. In addition to identifying grammatically relevant components of meaning, the study of verb classes is important as a means of addressing the following questions: (a) What is the nature of the “event template”, and how should it be represented? 
and (b) What morpho-syntactic processes or constructions are valid tests for “grammatical relevance” in the sense intended above? Clearly these three issues are closely inter-related, and cannot be fully addressed in isolation from each other. However, in this paper I will focus primarily on the third question, which I will re-state in the following way: What kinds of grammatical constructions or tests are relevant for identifying semantically-based verb classes? 3. Verbs of hitting and breaking in Kimaragang 3.1 Causative-inchoative alternation Kimaragang is structurally very similar to the languages of the central Philippines. In particular, Kimaragang exhibits the rich Philippine-type voice system in which the semantic role of the subject (i.e., the NP marked for nominative case) is indicated by the voice affixation of the verb. 2 In the Active Voice, an additional “transitivity” prefix occurs on transitive verbs; this prefix is lacking on intransitive verbs. 3 Many verbal roots occur in both transitive and intransitive forms, as illustrated in (3) with the root patay „die; kill‟. In the most productive pattern, and the one of interest to us here, the intransitive form has an inchoative (change of state) meaning while the transitive form has a causative meaning. However, it is important to note that there is no causative morpheme present in these forms (morphological causatives are marked by a different prefix, po-, as discussed in section 6.1). 2 See Kroeger (2005) for a more detailed summary with examples. 3 For details see Kroeger (1996); Kroeger & Johansson (2005). The grammar of hitting, breaking and cutting in Kimaragang Dusun 4 (3) a. Minamatay(<in>m-poN-patay) oku do tasu. 4 <PST>AV-TR-die 1sg.NOM ACC dog „I killed a dog.‟ b. Minatay(<in>m-patay) it tasu. <PST>AV-die NOM dog „The dog died.‟ Virtually all break-type roots allow both the causative and inchoative forms, as illustrated in (6– 7); but hit-type roots generally occur only in the transitive form. Thus just as in English, the causative alternation is highly productive with ", "title": "" }, { "docid": "7a17ff6cbc7fcbdb2c867a23dc1be591", "text": "Particle swarm optimization has become a common heuristic technique in the optimization community, with many researchers exploring the concepts, issues, and applications of the algorithm. In spite of this attention, there has as yet been no standard definition representing exactly what is involved in modern implementations of the technique. A standard is defined here which is designed to be a straightforward extension of the original algorithm while taking into account more recent developments that can be expected to improve performance on standard measures. This standard algorithm is intended for use both as a baseline for performance testing of improvements to the technique, as well as to represent PSO to the wider optimization community", "title": "" }, { "docid": "a0b55bafeac6f681c758ccb45d54f6e5", "text": "( 南京理工大学 计算机学院 江苏 南京 210094 ) E_mail:sj012328@163.com 摘要:基于学习分类器集成算法,设计了在动态环境下的适应度函数,在理论上推导并证明了集成算法的收敛性,为本文提出的路 径规划算法的收敛提供了理论保证。仿真实验结果也表明遗传算法和学习分类器结合用于机器人的路径规划是收敛的,遗传算法的 早熟收敛和收敛速度慢两大难题也得到很大改善。 关键词:路径规划 机器人 学习分类器 收敛性 Research on convergence of robot path planning based on LCS Jie Shao Jing yu Yang (School of computer Science Nanjing University of Science and Technology Nanjing 210094 ) Abstract: A path planning algorithm of robot is proposed based on ensemble algorithm of the learning classifier system, which design fitness function in dynamic environment. 
The paper derived and proved that the ensemble algorithm is convergent and provided a theoretical guarantee for the path planning algorithm. Simulation results also showed that the combination of genetic algorithms and a learning classifier system for robot path planning is effective. Two major problems of the GA, premature convergence and slow convergence, have been significantly alleviated. Keywords: Path Planning Robot Learning classifier system convergence", "title": "" }, { "docid": "8966f87b2441cc2c348e25e3503e766c", "text": "Fuzzing is a simple yet effective approach to discover software bugs utilizing randomly generated inputs. However, it is limited by coverage and cannot find bugs hidden in deep execution paths of the program because the randomly generated inputs fail complex sanity checks, e.g., checks on magic values, checksums, or hashes. To improve coverage, existing approaches rely on imprecise heuristics or complex input mutation techniques (e.g., symbolic execution or taint analysis) to bypass sanity checks. Our novel method tackles coverage from a different angle: by removing sanity checks in the target program. T-Fuzz leverages a coverage-guided fuzzer to generate inputs. Whenever the fuzzer can no longer trigger new code paths, a light-weight, dynamic tracing based technique detects the input checks that the fuzzer-generated inputs fail. These checks are then removed from the target program. Fuzzing then continues on the transformed program, allowing the code protected by the removed checks to be triggered and potential bugs discovered. Fuzzing transformed programs to find bugs poses two challenges: (1) removal of checks leads to over-approximation and false positives, and (2) even for true bugs, the crashing input on the transformed program may not trigger the bug in the original program. As an auxiliary post-processing step, T-Fuzz leverages a symbolic execution-based approach to filter out false positives and reproduce true bugs in the original program. By transforming the program as well as mutating the input, T-Fuzz covers more code and finds more true bugs than any existing technique. We have evaluated T-Fuzz on the DARPA Cyber Grand Challenge dataset, LAVA-M dataset and 4 real-world programs (pngfix, tiffinfo, magick and pdftohtml). For the CGC dataset, T-Fuzz finds bugs in 166 binaries, Driller in 121, and AFL in 105. In addition, T-Fuzz found 3 new bugs in previously-fuzzed programs and libraries.", "title": "" }, { "docid": "8296ce0143992c7513051c70758541be", "text": "This article introduces Adaptive Resonance Theory 2-A (ART 2-A), an efficient algorithm that emulates the self-organizing pattern recognition and hypothesis testing properties of the ART 2 neural network architecture, but at a speed two to three orders of magnitude faster. Analysis and simulations show how the ART 2-A systems correspond to ART 2 dynamics at both the fast-learn limit and at intermediate learning rates. Intermediate learning rates permit fast commitment of category nodes but slow recoding, analogous to properties of word frequency effects, encoding specificity effects, and episodic memory. Better noise tolerance is hereby achieved without a loss of learning stability. The ART 2 and ART 2-A systems are contrasted with the leader algorithm. The speed of ART 2-A makes practical the use of ART 2 modules in large scale neural computation. Keywords: Neural networks, Pattern recognition, Category formation, 
Fast learning, Adaptive resonance.", "title": "" }, { "docid": "65cdf37698a552944fca3b9f4cb2d6cc", "text": "The Wechsler Adult Intelligence Scale--Third Edition (WAIS-III; D. Wechsler, 1997) permits the calculation of both traditional IQ and index scores. However, if only the subtests constituting the index scores are administered, especially those yielding the Verbal Comprehension and Perceptual Organization Indexes, there is no equivalent measure of Full Scale IQ. Following the procedure for calculating a General Ability Index (GAI; A. Prifitera, L. G. Weiss, & D. H. Saklofske, 1998) for the Wechsler Intelligence Scale for Children--Third Edition (D. Wechsler, 1991), GAI normative tables for the WAIS-III standardization sample are reported here.", "title": "" }, { "docid": "16c522d458ed5df9d620e8255886e69e", "text": "Linked Stream Data has emerged as an effort to represent dynamic, time-dependent data streams following the principles of Linked Data. Given the increasing number of available stream data sources like sensors and social network services, Linked Stream Data allows an easy and seamless integration, not only among heterogenous stream data, but also between streams and Linked Data collections, enabling a new range of real-time applications. This tutorial gives an overview about Linked Stream Data processing. It describes the basic requirements for the processing, highlighting the challenges that are faced, such as managing the temporal aspects and memory overflow. It presents the different architectures for Linked Stream Data processing engines, their advantages and disadvantages. The tutorial also reviews the state of the art Linked Stream Data processing systems, and provide a comparison among them regarding the design choices and overall performance. A short discussion of the current challenges in open problems is given at the end.", "title": "" } ]
scidocsrr
d20444f2aeb0bcbc25835726b89a2fb1
Better cross company defect prediction
[ { "docid": "dc66c80a5031c203c41c7b2908c941a3", "text": "There has been a great deal of interest in defect prediction: using prediction models trained on historical data to help focus quality-control resources in ongoing development. Since most new projects don't have historical data, there is interest in cross-project prediction: using data from one project to predict defects in another. Sadly, results in this area have largely been disheartening. Most experiments in cross-project defect prediction report poor performance, using the standard measures of precision, recall and F-score. We argue that these IR-based measures, while broadly applicable, are not as well suited for the quality-control settings in which defect prediction models are used. Specifically, these measures are taken at specific threshold settings (typically thresholds of the predicted probability of defectiveness returned by a logistic regression model). However, in practice, software quality control processes choose from a range of time-and-cost vs quality tradeoffs: how many files shall we test? how many shall we inspect? Thus, we argue that measures based on a variety of tradeoffs, viz., 5%, 10% or 20% of files tested/inspected would be more suitable. We study cross-project defect prediction from this perspective. We find that cross-project prediction performance is no worse than within-project performance, and substantially better than random prediction!", "title": "" }, { "docid": "697580dda38c9847e9ad7c6a14ad6cd0", "text": "Background: This paper describes an analysis that was conducted on newly collected repository with 92 versions of 38 proprietary, open-source and academic projects. A preliminary study perfomed before showed the need for a further in-depth analysis in order to identify project clusters.\n Aims: The goal of this research is to perform clustering on software projects in order to identify groups of software projects with similar characteristic from the defect prediction point of view. One defect prediction model should work well for all projects that belong to such group. The existence of those groups was investigated with statistical tests and by comparing the mean value of prediction efficiency.\n Method: Hierarchical and k-means clustering, as well as Kohonen's neural network was used to find groups of similar projects. The obtained clusters were investigated with the discriminant analysis. For each of the identified group a statistical analysis has been conducted in order to distinguish whether this group really exists. Two defect prediction models were created for each of the identified groups. The first one was based on the projects that belong to a given group, and the second one - on all the projects. Then, both models were applied to all versions of projects from the investigated group. If the predictions from the model based on projects that belong to the identified group are significantly better than the all-projects model (the mean values were compared and statistical tests were used), we conclude that the group really exists.\n Results: Six different clusters were identified and the existence of two of them was statistically proven: 1) cluster proprietary B -- T=19, p=0.035, r=0.40; 2) cluster proprietary/open - t(17)=3.18, p=0.05, r=0.59. The obtained effect sizes (r) represent large effects according to Cohen's benchmark, which is a substantial finding.\n Conclusions: The two identified clusters were described and compared with results obtained by other researchers. 
The results of this work represent a further step towards defining formal methods for reusing defect prediction models by identifying groups of projects within which the same defect prediction model may be used. Furthermore, a method of clustering was suggested and applied.", "title": "" } ]
[ { "docid": "56d9b47d1860b5a80c62da9f75b6769d", "text": "Optical see-through head-mounted displays (OSTHMDs) have many advantages in augmented reality application, but their utility in practical applications has been limited by the complexity of calibration. Because the human subject is an inseparable part of the eye-display system, previous methods for OSTHMD calibration have required extensive manual data collection using either instrumentation or manual point correspondences and are highly dependent on operator skill. This paper describes display-relative calibration (DRC) for OSTHMDs, a new two phase calibration method that minimizes the human element in the calibration process and ensures reliable calibration. Phase I of the calibration captures the parameters of the display system relative to a normalized reference frame and is performed in a jig with no human factors issues. The second phase optimizes the display for a specific user and the placement of the display on the head. Several phase II alternatives provide flexibility in a variety of applications including applications involving untrained users.", "title": "" }, { "docid": "0488511dc0641993572945e98a561cc7", "text": "Deep learning (DL) defines a new data-driven programming paradigm that constructs the internal system logic of a crafted neuron network through a set of training data. We have seen wide adoption of DL in many safety-critical scenarios. However, a plethora of studies have shown that the state-of-the-art DL systems suffer from various vulnerabilities which can lead to severe consequences when applied to real-world applications. Currently, the testing adequacy of a DL system is usually measured by the accuracy of test data. Considering the limitation of accessible high quality test data, good accuracy performance on test data can hardly provide confidence to the testing adequacy and generality of DL systems. Unlike traditional software systems that have clear and controllable logic and functionality, the lack of interpretability in a DL system makes system analysis and defect detection difficult, which could potentially hinder its real-world deployment. In this paper, we propose DeepGauge, a set of multi-granularity testing criteria for DL systems, which aims at rendering a multi-faceted portrayal of the testbed. The in-depth evaluation of our proposed testing criteria is demonstrated on two well-known datasets, five DL systems, and with four state-of-the-art adversarial attack techniques against DL. The potential usefulness of DeepGauge sheds light on the construction of more generic and robust DL systems.", "title": "" }, { "docid": "c8977fe68b265b735ad4261f5fe1ec25", "text": "We present ACQUINE - Aesthetic Quality Inference Engine, a publicly accessible system which allows users to upload their photographs and have them rated automatically for aesthetic quality. The system integrates a support vector machine based classifier which extracts visual features on the fly and performs real-time classification and prediction. As the first publicly available tool for automatically determining the aesthetic value of an image, this work is a significant first step in recognizing human emotional reaction to visual stimulus. In this paper, we discuss fundamentals behind this system, and some of the challenges faced while creating it. We report statistics generated from over 140,000 images uploaded by Web users. 
The system is demonstrated at http://acquine.alipr.com.", "title": "" }, { "docid": "36357f48cbc3ed4679c679dcb77bdd81", "text": "In this paper, we review research and applications in the area of mediated or remote social touch. Whereas current communication media rely predominately on vision and hearing, mediated social touch allows people to touch each other over a distance by means of haptic feedback technology. Overall, the reviewed applications have interesting potential, such as the communication of simple ideas (e.g., through Hapticons), establishing a feeling of connectedness between distant lovers, or the recovery from stress. However, the beneficial effects of mediated social touch are usually only assumed and have not yet been submitted to empirical scrutiny. Based on social psychological literature on touch, communication, and the effects of media, we assess the current research and design efforts and propose future directions for the field of mediated social touch.", "title": "" }, { "docid": "fb8201417666d992d508538583c5713f", "text": "We analyze the I/O behavior of iBench, a new collection of productivity and multimedia application workloads. Our analysis reveals a number of differences between iBench and typical file-system workload studies, including the complex organization of modern files, the lack of pure sequential access, the influence of underlying frameworks on I/O patterns, the widespread use of file synchronization and atomic operations, and the prevalence of threads. Our results have strong ramifications for the design of next generation local and cloud-based storage systems.", "title": "" }, { "docid": "1dee93ec9e8de1cf365534581fb19623", "text": "The term “Business Model”started to gain momentum in the early rise of the new economy and it is currently used both in business practice and scientific research. Under a general point of view BMs are considered as a contact point among technology, organization and strategy used to describe how an organization gets value from technology and uses it as a source of competitive advantage. Recent contributions suggest to use ontologies to define a shareable conceptualization of BM. The aim of this study is to investigate the role of BM Ontologies as a conceptual tool for the cooperation of subjects interested in achieving a common goal and operating in complex and innovative environments. This is the case for example of those contexts characterized by the deployment of e-services from multiple service providers in cross border environments. Through an extensive literature review on BM we selected the most suitable conceptual tool and studied its application to the LD-CAST project during a participatory action research activity in order to analyse the BM design process of a new organisation based on the cooperation of service providers (the Chambers of Commerce from Italy, Romania, Poland and Bulgaria) with different needs, legal constraints and cultural background.", "title": "" }, { "docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. 
JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "a9069e2560b78e97bf8e76889041a201", "text": "We consider the setting of an agent with a fixed body interacting with an unknown and uncertain external world. We show that models trained to predict proprioceptive information about the agent’s body come to represent objects in the external world. In spite of being trained with only internally available signals, these dynamic body models come to represent external objects through the necessity of predicting their effects on the agent’s own body. That is, the model learns holistic persistent representations of objects in the world, even though the only training signals are body signals. Our dynamics model is able to successfully predict distributions over 132 sensor readings over 100 steps into the future and we demonstrate that even when the body is no longer in contact with an object, the latent variables of the dynamics model continue to represent its shape. We show that active data collection by maximizing the entropy of predictions about the body— touch sensors, proprioception and vestibular information—leads to learning of dynamic models that show superior performance when used for control. We also collect data from a real robotic hand and show that the same models can be used to answer questions about properties of objects in the real world. Videos with qualitative results of our models are available at https://goo.gl/mZuqAV.", "title": "" }, { "docid": "d04e975e48bd385a69fdf58c93103fd3", "text": "In this paper we will present a low-phase-noise wide-tuning-range oscillator suitable for scaled CMOS processes. It switches between the two resonant modes of a high-order LC resonator that consists of two identical LC tanks coupled by capacitor and transformer. The mode switching method does not add lossy switches to the resonator and thus doubles frequency tuning range without degrading phase noise performance. Moreover, the coupled resonator leads to 3 dB lower phase noise than a single LC tank, which provides a way of achieving low phase noise in scaled CMOS process. Finally, the novel way of using inductive and capacitive coupling jointly decouples frequency separation and tank impedances of the two resonant modes, and makes it possible to achieve balanced performance. The proposed structure is verified by a prototype in a low power 65 nm CMOS process, which covers all cellular bands with a continuous tuning range of 2.5-5.6 GHz and meets all stringent phase noise specifications of cellular standards. It uses a 0.6 V power supply and achieves excellent phase noise figure-of-merit (FoM) of 192.5 dB at 3.7 GHz and >; 188 dB across the entire tuning range. This demonstrates the possibility of achieving low phase noise and wide tuning range at the same time in scaled CMOS processes.", "title": "" }, { "docid": "5fd840b020b69c9588faf575f8079e83", "text": "We demonstrate that modern image recognition methods based on artificial neural networks can recover hidden information from images protected by various forms of obfuscation. 
The obfuscation techniques considered in this paper are mosaicing (also known as pixelation), blurring (as used by YouTube), and P3, a recently proposed system for privacy-preserving photo sharing that encrypts the significant JPEG coefficients to make images unrecognizable by humans. We empirically show how to train artificial neural networks to successfully identify faces and recognize objects and handwritten digits even if the images are protected using any of the above obfuscation techniques.", "title": "" }, { "docid": "4a1a1b3012f2ce941cc532a55b49f09b", "text": "Gamification informally refers to making a system more game-like. More specifically, gamification denotes applying game mechanics to a non-game system. We theorize that gamification success depends on the game mechanics employed and their effects on user motivation and immersion. The proposed theory may be tested using an experiment or questionnaire study.", "title": "" }, { "docid": "0c43c0dbeaff9afa0e73bddb31c7dac0", "text": "A compact dual-band dielectric resonator antenna (DRA) using a parasitic c-slot fed by a microstrip line is proposed. In this configuration, the DR performs the functions of an effective radiator and the feeding structure of the parasitic c-slot in the ground plane. By optimizing the proposed structure parameters, the structure resonates at two different frequencies. One is from the DRA with the broadside patterns and the other from the c-slot with the dipole-like patterns. In order to determine the performance of varying design parameters on bandwidth and resonance frequency, the parametric study is carried out using simulation software High-Frequency Structure Simulator and experimental results. The measured and simulated results show excellent agreement.", "title": "" }, { "docid": "46bc17ab45e11b5c9c07200a60db399f", "text": "Locality-sensitive hashing (LSH) is a basic primitive in several large-scale data processing applications, including nearest-neighbor search, de-duplication, clustering, etc. In this paper we propose a new and simple method to speed up the widely-used Euclidean realization of LSH. At the heart of our method is a fast way to estimate the Euclidean distance between two d-dimensional vectors; this is achieved by the use of randomized Hadamard transforms in a non-linear setting. This decreases the running time of a (k, L)-parameterized LSH from O(dkL) to O(dlog d + kL). Our experiments show that using the new LSH in nearest-neighbor applications can improve their running times by significant amounts. To the best of our knowledge, this is the first running time improvement to LSH that is both provable and practical.", "title": "" }, { "docid": "efcf84406a2218deeb4ca33cb8574172", "text": "Cross-site scripting attacks represent one of the major security threats in today’s Web applications. Current approaches to mitigate cross-site scripting vulnerabilities rely on either server-based or client-based defense mechanisms. Although effective for many attacks, server-side protection mechanisms may leave the client vulnerable if the server is not well patched. On the other hand, client-based mechanisms may incur a significant overhead on the client system. In this work, we present a hybrid client-server solution that combines the benefits of both architectures. Our Proxy-based solution leverages the strengths of both anomaly detection and control flow analysis to provide accurate detection. 
We demonstrate the feasibility and accuracy of our approach through extended testing using real-world cross-site scripting exploits.", "title": "" }, { "docid": "2f88356c3a1ab60e3dd084f7d9630c70", "text": "Recently, some E-commerce sites launch a new interaction box called Tips on their mobile apps. Users can express their experience and feelings or provide suggestions using short texts typically several words or one sentence. In essence, writing some tips and giving a numerical rating are two facets of a user's product assessment action, expressing the user experience and feelings. Jointly modeling these two facets is helpful for designing a better recommendation system. While some existing models integrate text information such as item specifications or user reviews into user and item latent factors for improving the rating prediction, no existing works consider tips for improving recommendation quality. We propose a deep learning based framework named NRT which can simultaneously predict precise ratings and generate abstractive tips with good linguistic quality simulating user experience and feelings. For abstractive tips generation, gated recurrent neural networks are employed to \"translate'' user and item latent representations into a concise sentence. Extensive experiments on benchmark datasets from different domains show that NRT achieves significant improvements over the state-of-the-art methods. Moreover, the generated tips can vividly predict the user experience and feelings.", "title": "" }, { "docid": "6341eaeb32d0e25660de6be6d3943e81", "text": "Theorists have speculated that primary psychopathy (or Factor 1 affective-interpersonal features) is prominently heritable whereas secondary psychopathy (or Factor 2 social deviance) is more environmentally determined. We tested this differential heritability hypothesis using a large adolescent twin sample. Trait-based proxies of primary and secondary psychopathic tendencies were assessed using Multidimensional Personality Questionnaire (MPQ) estimates of Fearless Dominance and Impulsive Antisociality, respectively. The environmental contexts of family, school, peers, and stressful life events were assessed using multiple raters and methods. Consistent with prior research, MPQ Impulsive Antisociality was robustly associated with each environmental risk factor, and these associations were significantly greater than those for MPQ Fearless Dominance. However, MPQ Fearless Dominance and Impulsive Antisociality exhibited similar heritability, and genetic effects mediated the associations between MPQ Impulsive Antisociality and the environmental measures. Results were largely consistent across male and female twins. We conclude that gene-environment correlations rather than main effects of genes and environments account for the differential environmental correlates of primary and secondary psychopathy.", "title": "" }, { "docid": "4bce473bb65dfc545d5895c7edb6cea6", "text": "mathematical framework of the population equations. It will turn out that the results are – of course – consistent with those derived from the population equation. We study a homogeneous network of N identical neurons which are mutually coupled with strength wij = J0/N where J0 > 0 is a positive constant. In other words, the (excitatory) interaction is scaled with one over N so that the total input to a neuron i is of order one even if the number of neurons is large (N →∞). 
Since we are interested in synchrony we suppose that all neurons have fired simultaneously at t̂ = 0. When will the neurons fire again? Since all neurons are identical we expect that the next firing time will also be synchronous. Let us calculate the period T between one synchronous pulse and the next. We start from the firing condition of SRM0 neurons θ = ui(t) = η(t− t̂i) + ∑", "title": "" }, { "docid": "17caec370a97af736d948123f9e7be73", "text": "Multiple-purpose forensics has been attracting increasing attention worldwide. However, most of the existing methods based on hand-crafted features often require domain knowledge and expensive human labour and their performances can be affected by factors such as image size and JPEG compression. Furthermore, many anti-forensic techniques have been applied in practice, making image authentication more difficult. Therefore, it is of great importance to develop methods that can automatically learn general and robust features for image operation detectors with the capability of countering anti-forensics. In this paper, we propose a new convolutional neural network (CNN) approach for multi-purpose detection of image manipulations under anti-forensic attacks. The dense connectivity pattern, which has better parameter efficiency than the traditional pattern, is explored to strengthen the propagation of general features related to image manipulation detection. When compared with three state-of-the-art methods, experiments demonstrate that the proposed CNN architecture can achieve a better performance (i.e., with a 11% improvement in terms of detection accuracy under anti-forensic attacks). The proposed method can also achieve better robustness against JPEG compression with maximum improvement of 13% on accuracy under low-quality JPEG compression.", "title": "" }, { "docid": "36e238fa3c85b41a062d08fd9844c9be", "text": "Building generalization is a difficult operation due to the complexity of the spatial distribution of buildings and for reasons of spatial recognition. In this study, building generalization is decomposed into two steps, i.e. building grouping and generalization execution. The neighbourhood model in urban morphology provides global constraints for guiding the global partitioning of building sets on the whole map by means of roads and rivers, by which enclaves, blocks, superblocks or neighbourhoods are formed; whereas the local constraints from Gestalt principles provide criteria for the further grouping of enclaves, blocks, superblocks and/or neighbourhoods. In the grouping process, graph theory, Delaunay triangulation and the Voronoi diagram are employed as supporting techniques. After grouping, some useful information, such as the sum of the building’s area, the mean separation and the standard deviation of the separation of buildings, is attached to each group. By means of the attached information, an appropriate operation is selected to generalize the corresponding groups. Indeed, the methodology described brings together a number of welldeveloped theories/techniques, including graph theory, Delaunay triangulation, the Voronoi diagram, urban morphology and Gestalt theory, in such a way that multiscale products can be derived.", "title": "" }, { "docid": "f4fb4638bb8bc6ae551dc729b6bcea2e", "text": "mark of facial attractiveness.1,2 Skeletal asymmetries generally require surgical intervention to improve facial esthetics and correct any associated malocclusions. 
The classic approach involves a presurgical phase of orthodontics, during which dental compensations are eliminated, and a postsurgical phase to refine the occlusion. The presurgical phase can be lengthy, involving tooth decompensations that often exaggerate the existing dentofacial deformities.3 Skeletal anchorage now makes it possible to eliminate the presurgical orthodontic phase and to correct minor surgical inaccuracies and relapse tendencies after surgery. In addition to a significant reduction in treatment time, this approach offers immediate gratification in the correction of facial deformities,2 which can translate into better patient compliance with elastic wear and appointments. Another reported advantage is the elimination of soft-tissue imbalances that might interfere with orthodontic tooth movements. This article describes a “surgery first” approach in a patient with complex dentofacial asymmetry and Class III malocclusion.", "title": "" } ]
scidocsrr
01706b96302e253b3ec0ab8e25b13449
Where you Instagram?: Associating Your Instagram Photos with Points of Interest
[ { "docid": "bd33ed4cde24e8ec16fb94cf543aad8e", "text": "Users' locations are important to many applications such as targeted advertisement and news recommendation. In this paper, we focus on the problem of profiling users' home locations in the context of social network (Twitter). The problem is nontrivial, because signals, which may help to identify a user's location, are scarce and noisy. We propose a unified discriminative influence model, named as UDI, to solve the problem. To overcome the challenge of scarce signals, UDI integrates signals observed from both social network (friends) and user-centric data (tweets) in a unified probabilistic framework. To overcome the challenge of noisy signals, UDI captures how likely a user connects to a signal with respect to 1) the distance between the user and the signal, and 2) the influence scope of the signal. Based on the model, we develop local and global location prediction methods. The experiments on a large scale data set show that our methods improve the state-of-the-art methods by 13%, and achieve the best performance.", "title": "" } ]
[ { "docid": "37efaf5cbd7fb400b713db6c7c980d76", "text": "Social media users who post bullying related tweets may later experience regret, potentially causing them to delete their posts. In this paper, we construct a corpus of bullying tweets and periodically check the existence of each tweet in order to infer if and when it becomes deleted. We then conduct exploratory analysis in order to isolate factors associated with deleted posts. Finally, we propose the construction of a regrettable posts predictor to warn users if a tweet might cause regret.", "title": "" }, { "docid": "0de9ea0f7ee162f1def6ee9b95ea9ba3", "text": "While much exciting progress is being made in mobile visual search, one important question has been left unexplored in all current systems. When the first query fails to find the right target (up to 50% likelihood), how should the user form his/her search strategy in the subsequent interaction? In this paper, we propose a novel Active Query Sensing system to suggest the best way for sensing the surrounding scenes while forming the second query for location search. We accomplish the goal by developing several unique components -- an offline process for analyzing the saliency of the views associated with each geographical location based on score distribution modeling, predicting the visual search precision of individual views and locations, estimating the view of an unseen query, and suggesting the best subsequent view change. Using a scalable visual search system implemented over a NYC street view data set (0.3 million images), we show a performance gain as high as two folds, reducing the failure rate of mobile location search to only 12% after the second query. This work may open up an exciting new direction for developing interactive mobile media applications through innovative exploitation of active sensing and query formulation.", "title": "" }, { "docid": "c6054c39b9b36b5d446ff8da3716ec30", "text": "The Web is a constantly expanding global information space that includes disparate types of data and resources. Recent trends demonstrate the urgent need to manage the large amounts of data stream, especially in specific domains of application such as critical infrastructure systems, sensor networks, log file analysis, search engines and more recently, social networks. All of these applications involve large-scale data-intensive tasks, often subject to time constraints and space complexity. Algorithms, data management and data retrieval techniques must be able to process data stream, i.e., process data as it becomes available and provide an accurate response, based solely on the data stream that has already been provided. Data retrieval techniques often require traditional data storage and processing approach, i.e., all data must be available in the storage space in order to be processed. For instance, a widely used relevance measure is Term Frequency–Inverse Document Frequency (TF–IDF), which can evaluate how important a word is in a collection of documents and requires to a priori know the whole dataset. To address this problem, we propose an approximate version of the TF–IDF measure suitable to work on continuous data stream (such as the exchange of messages, tweets and sensor-based log files). The algorithm for the calculation of this measure makes two assumptions: a fast response is required, and memory is both limited and infinitely smaller than the size of the data stream. 
In addition, to face the great computational power required to process massive data stream, we present also a parallel implementation of the approximate TF–IDF calculation using Graphical Processing Units (GPUs). This implementation of the algorithm was tested on generated and real data stream and was able to capture the most frequent terms. Our results demonstrate that the approximate version of the TF–IDF measure performs at a level that is comparable to the solution of the precise TF–IDF measure. 2014 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "fe35799be26543a90b4d834e41b492eb", "text": "Social Web stands for the culture of participation and collaboration on the Web. Structures emerge from social interactions: social tagging enables a community of users to assign freely chosen keywords to Web resources. The structure that evolves from social tagging is called folksonomy and recent research has shown that the exploitation of folksonomy structures is beneficial to information systems. In this thesis we propose models that better capture usage context of social tagging and develop two folksonomy systems that allow for the deduction of contextual information from tagging activities. We introduce a suite of ranking algorithms that exploit contextual information embedded in folksonomy structures and prove that these contextsensitive ranking algorithms significantly improve search in Social Web systems. We setup a framework of user modeling and personalization methods for the Social Web and evaluate this framework in the scope of personalized search and social recommender systems. Extensive evaluation reveals that our context-based user modeling techniques have significant impact on the personalization quality and clearly improve regular user modeling approaches. Finally, we analyze the nature of user profiles distributed on the Social Web, implement a service that supports cross-system user modeling and investigate the impact of cross-system user modeling methods on personalization. In different experiments we prove that our cross-system user modeling strategies solve cold-start problems in social recommender systems and that intelligent re-use of external profile information improves the recommendation quality also beyond the cold-start.", "title": "" }, { "docid": "a8164a657a247761147c6012fd5442c9", "text": "Latent variable models are a powerful tool for addressing several tasks in machine learning. However, the algorithms for learning the parameters of latent variable models are prone to getting stuck in a bad local optimum. To alleviate this problem, we build on the intuition that, rather than considering all samples simultaneously, the algorithm should be presented with the training data in a meaningful order that facilitates learning. The order of the samples is determined by how easy they are. The main challenge is that typically we are not provided with a readily computable measure of the easiness of samples. We address this issue by proposing a novel, iterative self-paced learning algorithm where each iteration simultaneously selects easy samples and learns a new parameter vector. The number of samples selected is governed by a weight that is annealed until the entire training data has been considered. 
We empirically demonstrate that the self-paced learning algorithm outperforms the state of the art method for learning a latent structural SVM on four applications: object localization, noun phrase coreference, motif finding and handwritten digit recognition.", "title": "" }, { "docid": "3a84567c28d6a59271334594307263a5", "text": "Comprehension difficulty was rated for metaphors of the form Noun1-is-aNoun2; in addition, participants completed frames of the form Noun1-is-________ with their literal interpretation of the metaphor. Metaphor comprehension was simulated with a computational model based on Latent Semantic Analysis. The model matched participants’ interpretations for both easy and difficult metaphors. When interpreting easy metaphors, both the participants and the model generated highly consistent responses. When interpreting difficult metaphors, both the participants and the model generated disparate responses.", "title": "" }, { "docid": "d9440b9ba13c1c5ccae80b0d513b5330", "text": "Endogenous cannabinoids play an important role in the physiology and behavioral expression of stress responses. Activation of the hypothalamic-pituitary-adrenal (HPA) axis, including the release of glucocorticoids, is the fundamental hormonal response to stress. Endocannabinoid (eCB) signaling serves to maintain HPA-axis homeostasis, by buffering basal activity as well as by mediating glucocorticoid fast feedback mechanisms. Following chronic stressor exposure, eCBs are also involved in physiological and behavioral habituation processes. Behavioral consequences of stress include fear and stress-induced anxiety as well as memory formation in the context of stress, involving contextual fear conditioning and inhibitory avoidance learning. Chronic stress can also lead to depression-like symptoms. Prominent in these behavioral stress responses is the interaction between eCBs and the HPA-axis. Future directions may differentiate among eCB signaling within various brain structures/neuronal subpopulations as well as between the distinct roles of the endogenous cannabinoid ligands. Investigation into the role of the eCB system in allostatic states and recovery processes may give insight into possible therapeutic manipulations of the system in treating chronic stress-related conditions in humans.", "title": "" }, { "docid": "3256b2050c603ca16659384a0e98a22c", "text": "In this paper, we propose a Hough transform-based method to identify low-contrast defects in unevenly illuminated images, and especially focus on the inspection of mura defects in liquid crystal display (LCD) panels. The proposed method works on 1-D gray-level profiles in the horizontal and vertical directions of the surface image. A point distinctly deviated from the ideal line of a profile can be identified as a defect one. A 1-D gray-level profile in the unevenly illuminated image results in a nonstationary line signal. The most commonly used technique for straight line detection in a noisy image is Hough transform (HT). The standard HT requires a sufficient number of points lie exactly on the same straight line at a given parameter resolution so that the accumulator will show a distinct peak in the parameter space. It fails to detect a line in a nonstationary signal. In the proposed HT scheme, the points that contribute to the vote do not have to lie on a line. Instead, a distance tolerance to the line sought is first given. 
Any point whose distance to the line falls within the tolerance will be accumulated by taking the distance as the voting weight. A fast search procedure to tighten the possible ranges of line parameters is also proposed for mura detection in LCD images.", "title": "" }, { "docid": "3d4afb9ed09fbb6200175e2440b56755", "text": "A brief account is given of the discovery of abscisic acid (ABA) in roots and root caps of higher plants as well as the techniques by which ABA may be demonstrated in these tissues. The remainder of the review is concerned with examining the rôle of ABA in the regulation of root growth. In this regard, it is well established that when ABA is supplied to roots their elongation is usually inhibited, although at low external concentrations a stimulation of growth may also be found. Fewer observations have been directed at exploring the connection between root growth and the level of naturally occurring, endogenous ABA. Nevertheless, the evidence here also suggests that ABA is an inhibitory regulator of root growth. Moreover, ABA appears to be involved in the differential growth that arises in response to a gravitational stimulus. Recent reports that deny a rôle for ABA in root gravitropism are considered inconclusive. The response of roots to osmotic stress and the changes in ABA levels which ensue, are summarised; so are the interrelations between ABA and other hormones, particularly auxin (e.g. indoleacetic acid); both are considered in the context of the root growth and development. Quantitative changes in auxin and ABA levels may together provide the root with a flexible means of regulating its growth.", "title": "" }, { "docid": "557b718f65e68f3571302e955ddb74d7", "text": "Synthetic aperture radar (SAR) has been an unparalleled tool in cloudy and rainy regions as it allows observations throughout the year because of its all-weather, all-day operation capability. In this paper, the influence of Wenchuan Earthquake on the Sichuan Giant Panda habitats was evaluated for the first time using SAR interferometry and combining data from C-band Envisat ASAR and L-band ALOS PALSAR data. Coherence analysis based on the zero-point shifting indicated that the deforestation process was significant, particularly in habitats along the Min River approaching the epicenter after the natural disaster, and as interpreted by the vegetation deterioration from landslides, avalanches and debris flows. Experiments demonstrated that C-band Envisat ASAR data were sensitive to vegetation, resulting in an underestimation of deforestation; in contrast, L-band PALSAR data were capable of evaluating the deforestation process owing to a better penetration and the significant coherence gain on damaged forest areas. The percentage of damaged forest estimated by PALSAR decreased from 20.66% to 17.34% during 2009–2010, implying an approximate 3% recovery rate of forests in the earthquake-impacted areas. This study proves that long-wavelength SAR interferometry is promising for rapid assessment of disaster-induced deforestation, particularly in regions where the optical acquisition is constrained.", "title": "" }, { "docid": "c0b30475f78acefae1c15f9f5d6dc57b", "text": "Traditionally, autonomous cars make predictions about other drivers’ future trajectories, and plan to stay out of their way. This tends to result in defensive and opaque behaviors. 
Our key insight is that an autonomous car’s actions will actually affect what other cars will do in response, whether the car is aware of it or not. Our thesis is that we can leverage these responses to plan more efficient and communicative behaviors. We model the interaction between an autonomous car and a human driver as a dynamical system, in which the robot’s actions have immediate consequences on the state of the car, but also on human actions. We model these consequences by approximating the human as an optimal planner, with a reward function that we acquire through Inverse Reinforcement Learning. When the robot plans with this reward function in this dynamical system, it comes up with actions that purposefully change human state: it merges in front of a human to get them to slow down or to reach its own goal faster; it blocks two lanes to get them to switch to a third lane; or it backs up slightly at an intersection to get them to proceed first. Such behaviors arise from the optimization, without relying on hand-coded signaling strategies and without ever explicitly modeling communication. Our user study results suggest that the robot is indeed capable of eliciting desired changes in human state by planning using this dynamical system.", "title": "" }, { "docid": "bbfc488e55fe2dfaff2af73a75c31edd", "text": "This overview covers a wide range of cannabis topics, initially examining issues in dispensaries and self-administration, plus regulatory requirements for production of cannabis-based medicines, particularly the Food and Drug Administration \"Botanical Guidance.\" The remainder pertains to various cannabis controversies that certainly require closer examination if the scientific, consumer, and governmental stakeholders are ever to reach consensus on safety issues, specifically: whether botanical cannabis displays herbal synergy of its components, pharmacokinetics of cannabis and dose titration, whether cannabis medicines produce cyclo-oxygenase inhibition, cannabis-drug interactions, and cytochrome P450 issues, whether cannabis randomized clinical trials are properly blinded, combatting the placebo effect in those trials via new approaches, the drug abuse liability (DAL) of cannabis-based medicines and their regulatory scheduling, their effects on cognitive function and psychiatric sequelae, immunological effects, cannabis and driving safety, youth usage, issues related to cannabis smoking and vaporization, cannabis concentrates and vape-pens, and laboratory analysis for contamination with bacteria and heavy metals. Finally, the issue of pesticide usage on cannabis crops is addressed. New and disturbing data on pesticide residues in legal cannabis products in Washington State are presented with the observation of an 84.6% contamination rate including potentially neurotoxic and carcinogenic agents. With ongoing developments in legalization of cannabis in medical and recreational settings, numerous scientific, safety, and public health issues remain.", "title": "" }, { "docid": "ceb4563a83fc49e5aceac7b56a8d63c0", "text": "PURPOSE\nThe literature has shown that anterior cruciate ligament (ACL) tear rates vary by gender, by sport, and in response to injury-reduction training programs. However, there is no consensus as to the magnitudes of these tear rates or their variations as a function of these variables. For example, the female-male ACL tear ratio has been reported to be as high as 9:1. 
Our purpose was to apply meta-analysis to the entire applicable literature to generate accurate estimates of the true incidences of ACL tear as a function of gender, sport, and injury-reduction training.\n\n\nMETHODS\nA PubMed literature search was done to identify all studies dealing with ACL tear incidence. Bibliographic cross-referencing was done to identify additional articles. Meta-analytic principles were applied to generate ACL incidences as a function of gender, sport, and prior injury-reduction training.\n\n\nRESULTS\nFemale-male ACL tear incidences ratios were as follows: basketball, 3.5; soccer, 2.67; lacrosse, 1.18; and Alpine skiing, 1.0. The collegiate soccer tear rate was 0.32 for female subjects and 0.12 for male subjects. For basketball, the rates were 0.29 and 0.08, respectively. The rate for recreational Alpine skiers was 0.63, and that for experts was 0.03, with no gender variance. The two volleyball studies had no ACL tears. Training reduced the ACL tear incidence in soccer by 0.24 but did not reduce it at all in basketball.\n\n\nCONCLUSIONS\nFemale subjects had a roughly 3 times greater incidence of ACL tears in soccer and basketball versus male subjects. Injury-reduction programs were effective for soccer but not basketball. Recreational Alpine skiers had the highest incidences of ACL tear, whereas expert Alpine skiers had the lowest incidences. Volleyball may in fact be a low-risk sport rather than a high-risk sport. Alpine skiers and lacrosse players had no gender difference for ACL tear rate. Year-round female athletes who play soccer and basketball have an ACL tear rate of approximately 5%.\n\n\nLEVEL OF EVIDENCE\nLevel IV, therapeutic case series.", "title": "" }, { "docid": "a6aa10b5adcf3241157919cb0e6863e9", "text": "Current neural networks are accumulating accolades for their performance on a variety of real-world computational tasks including recognition, classification, regression, and prediction, yet there are few scalable architectures that have emerged to address the challenges posed by their computation. This paper introduces Minitaur, an event-driven neural network accelerator, which is designed for low power and high performance. As an field-programmable gate array-based system, it can be integrated into existing robotics or it can offload computationally expensive neural network tasks from the CPU. The version presented here implements a spiking deep network which achieves 19 million postsynaptic currents per second on 1.5 W of power and supports up to 65 K neurons per board. The system records 92% accuracy on the MNIST handwritten digit classification and 71% accuracy on the 20 newsgroups classification data set. Due to its event-driven nature, it allows for trading off between accuracy and latency.", "title": "" }, { "docid": "eccd1b3b8acbf8426d7ccb7933e0bd0e", "text": "We consider an architecture for a serverless distributed file system that does not assume mutual trust among the client computers. The system provides security, availability, and reliability by distributing multiple encrypted replicas of each file among the client machines. To assess the feasibility of deploying this system on an existing desktop infrastructure, we measure and analyze a large set of client machines in a commercial environment. In particular, we measure and report results on disk usage and content; file activity; and machine uptimes, lifetimes, and loads. 
We conclude that the measured desktop infrastructure would passably support our proposed system, providing availability on the order of one unfilled file request per user per thousand days.", "title": "" }, { "docid": "5a416fb88c3f5980989f7556fb19755c", "text": "Cloud computing helps to share data and provide many resources to users. Users pay only for those resources as much they used. Cloud computing stores the data and distributed resources in the open environment. The amount of data storage increases quickly in open environment. So, load balancing is a main challenge in cloud environment. Load balancing is helped to distribute the dynamic workload across multiple nodes to ensure that no single node is overloaded. It helps in proper utilization of resources .It also improve the performance of the system. Many existing algorithms provide load balancing and better resource utilization. There are various types load are possible in cloud computing like memory, CPU and network load. Load balancing is the process of finding overloaded nodes and then transferring the extra load to other nodes.", "title": "" }, { "docid": "64e26b00bba3bba8d2ab77b44f049c58", "text": "The transmission properties of a folded corrugated substrate integrated waveguide (FCSIW) and a proposed half-mode FCSIW is investigated. For the same cut-off frequency, these structures have similar performance to CSIW and HMCSIW respectively, but with significantly reduced width. The top wall is isolated from the bottom wall at DC thereby permitting active devices to be connected directly to, and biased through them. Arrays of quarter-wave stubs above the top wall allow TE1,0 mode conduction currents to flow between the top and side walls. Measurements and simulations of waveguides designed to have a nominal cut-off frequency of 3 GHz demonstrate the feasibility of these compact waveguides.", "title": "" }, { "docid": "bd817e69a03da1a97e9c412b5e09eb33", "text": "The emergence of carbapenemase producing bacteria, especially New Delhi metallo-β-lactamase (NDM-1) and its variants, worldwide, has raised amajor public health concern. NDM-1 hydrolyzes a wide range of β-lactam antibiotics, including carbapenems, which are the last resort of antibiotics for the treatment of infections caused by resistant strain of bacteria. In this review, we have discussed bla NDM-1variants, its genetic analysis including type of specific mutation, origin of country and spread among several type of bacterial species. Wide members of enterobacteriaceae, most commonly Escherichia coli, Klebsiella pneumoniae, Enterobacter cloacae, and gram-negative non-fermenters Pseudomonas spp. and Acinetobacter baumannii were found to carry these markers. Moreover, at least seventeen variants of bla NDM-type gene differing into one or two residues of amino acids at distinct positions have been reported so far among different species of bacteria from different countries. The genetic and structural studies of these variants are important to understand the mechanism of antibiotic hydrolysis as well as to design new molecules with inhibitory activity against antibiotics. This review provides a comprehensive view of structural differences among NDM-1 variants, which are a driving force behind their spread across the globe.", "title": "" }, { "docid": "c346ddfd1247d335c1a45d094ae2bb60", "text": "In this paper we introduce a novel approach for stereoscopic rendering of virtual environments with a wide Field-of-View (FoV) up to 360°. 
Handling such a wide FoV implies the use of non-planar projections and raises specific problems for rasterization and clipping of primitives. We propose a novel pre-clip stage specifically adapted to geometric approaches for which problems occur with polygons spanning across the projection discontinuities. Our approach integrates seamlessly with immersive virtual reality systems as it is compatible with stereoscopy, head-tracking, and multi-surface projections. Benchmarking our approach on different hardware setups shows that it meets real-time constraints and can display a wide range of FoVs. Thus, our geometric approach could be used in various VR applications in which the user needs to extend the FoV and apprehend more visual information.", "title": "" },
    { "docid": "2595c67531f0da4449f5914cac3488a7", "text": "In this paper we present a novel interaction metaphor for handheld projectors we label MotionBeam. We detail a number of interaction techniques that utilize the physical movement of a handheld projector to better express the motion and physicality of projected objects. Finally we present the first iteration of a projected character design that uses the MotionBeam metaphor for user interaction.", "title": "" } ]
scidocsrr
25afdb1b2b378c785549be2a014bb21a
Hierarchical Aligned Cluster Analysis for Temporal Clustering of Human Motion
[ { "docid": "32598fba1f5e7507113d89ad1978e867", "text": "Good motion data is costly to create. Such an expense often makes the reuse of motion data through transformation and retargetting a more attractive option than creating new motion from scratch. Reuse requires the ability to search automatically and efficiently a growing corpus of motion data, which remains a difficult open problem. We present a method for quickly searching long, unsegmented motion clips for subregions that most closely match a short query clip. Our search algorithm is based on a weighted PCA-based pose representation that allows for flexible and efficient pose-to-pose distance calculations. We present our pose representation and the details of the search algorithm. We evaluate the performance of a prototype search application using both synthetic and captured motion data. Using these results, we propose ways to improve the application's performance. The results inform a discussion of the algorithm's good scalability characteristics.", "title": "" } ]
[ { "docid": "f03e6476b531ca1ffc2967158faabe58", "text": "This article may be used for research, teaching and private study purposes. Any substantial or systematic reproduction, redistribution , reselling , loan or sub-licensing, systematic supply or distribution in any form to anyone is expressly forbidden. The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material. To strive for sustainability under today's intense business competition, organisations apply technology roadmapping (TRM) as a strategic planning tool to align their technology strategies with business strategies. Many organisations desire to integrate TRM into an ongoing strategic planning process. The consequences of TRM implementation can lead to some changes in the business process, organisational structure, or even working culture. Applying a change management approach will help organisations to understand the basic elements that an individual needs so that some challenges can be addressed in advance before adopting the TRM process. This paper proposes a practical guideline to implement technology roadmapping along with a case example.", "title": "" }, { "docid": "ca75798a9090810682f99400f6a8ff4e", "text": "We present the first empirical analysis of Bitcoin-based scams: operations established with fraudulent intent. By amalgamating reports gathered by voluntary vigilantes and tracked in online forums, we identify 192 scams and categorize them into four groups: Ponzi schemes, mining scams, scam wallets and fraudulent exchanges. In 21% of the cases, we also found the associated Bitcoin addresses, which enables us to track payments into and out of the scams. We find that at least $11 million has been contributed to the scams from 13 000 distinct victims. Furthermore, we present evidence that the most successful scams depend on large contributions from a very small number of victims. Finally, we discuss ways in which the scams could be countered.", "title": "" }, { "docid": "a6e35b743c2cfd2cd764e5ad83decaa7", "text": "An e-vendor’s website inseparably embodies an interaction with the vendor and an interaction with the IT website interface. Accordingly, research has shown two sets of unrelated usage antecedents by customers: 1) customer trust in the e-vendor and 2) customer assessments of the IT itself, specifically the perceived usefulness and perceived ease-of-use of the website as depicted in the technology acceptance model (TAM). Research suggests, however, that the degree and impact of trust, perceived usefulness, and perceived ease of use change with experience. Using existing, validated scales, this study describes a free-simulation experiment that compares the degree and relative importance of customer trust in an e-vendor vis-à-vis TAM constructs of the website, between potential (i.e., new) customers and repeat (i.e., experienced) ones. The study found that repeat customers trusted the e-vendor more, perceived the website to be more useful and easier to use, and were more inclined to purchase from it. 
The data also show that while repeat customers’ purchase intentions were influenced by both their trust in the e-vendor and their perception that the website was useful, potential customers were not influenced by perceived usefulness, but only by their trust in the e-vendor. Implications of this apparent trust-barrier and guidelines for practice are discussed.", "title": "" }, { "docid": "cf8dfff6a026fc3bb4248cd813af9947", "text": "We consider a multi agent optimization problem where a network of agents collectively solves a global optimization problem with the objective function given by the sum of locally known convex functions. We propose a fully distributed broadcast-based Alternating Direction Method of Multipliers (ADMM), in which each agent broadcasts the outcome of his local processing to all his neighbors. We show that both the objective function values and the feasibility violation converge with rate O(1/T), where T is the number of iterations. This improves upon the O(1/√T) convergence rate of subgradient-based methods. We also characterize the effect of network structure and the choice of communication matrix on the convergence speed. Because of its broadcast nature, the storage requirements of our algorithm are much more modest compared to the distributed algorithms that use pairwise communication between agents.", "title": "" }, { "docid": "a96209a2f6774062537baff5d072f72f", "text": "In recent years, extensive research has been conducted in the area of Service Level Agreement (SLA) for utility computing systems. An SLA is a formal contract used to guarantee that consumers’ service quality expectation can be achieved. In utility computing systems, the level of customer satisfaction is crucial, making SLAs significantly important in these environments. Fundamental issue is the management of SLAs, including SLA autonomy management or trade off among multiple Quality of Service (QoS) parameters. Many SLA languages and frameworks have been developed as solutions; however, there is no overall classification for these extensive works. Therefore, the aim of this chapter is to present a comprehensive survey of how SLAs are created, managed and used in utility computing environment. We discuss existing use cases from Grid and Cloud computing systems to identify the level of SLA realization in state-of-art systems and emerging challenges for future research.", "title": "" }, { "docid": "e2718438e96defc0d6e07ccb50c5c089", "text": "In this paper, rotor slits on the rotor's outer circumference is adopted to reduce certain harmonic components of radial forces and, hence, acoustic noise and vibration in an interior permanent magnet synchronous machine (IPMSM). The 48th order of harmonic component for the radial force is found to be responsible for the noise and vibration problem in the studied motor. For this purpose, the influential natural frequencies, speed range, and order of harmonic components for radial force are analyzed in a systematic way. A set of design procedures is formulated to find the proper locations for the slits circumferentially. Two base designs have been identified in electromagnetic analysis to reduce the radial force component and, hence, vibration. The features for both base models are combined to create a hybridized model. Then, the operating conditions, such as speed, current, and excitation angle are investigated on the hybridized model, in the high-dimensional analysis. 
In the influential speed region, the hybridized model achieved a drop of up to 70% in the 48th-order harmonic of the radial force over a wide operating range, with the largest drop reaching 82.5%. The torque drop in the influential speed region ranges from 2.5% to 5%.", "title": "" },
    { "docid": "b1e039673d60defd9b8699074235cf1b", "text": "Sentiment classification has undergone significant development in recent years. However, most existing studies assume the balance between negative and positive samples, which may not be true in reality. In this paper, we investigate imbalanced sentiment classification instead. In particular, a novel clustering-based stratified under-sampling framework and a centroid-directed smoothing strategy are proposed to address the imbalanced class and feature distribution problems respectively. Evaluation across different datasets shows the effectiveness of both the under-sampling framework and the smoothing strategy in handling the imbalanced problems in real sentiment classification applications.", "title": "" },
    { "docid": "f700b168c98d235a7fb76581cc24717f", "text": "It is becoming increasingly easy to automatically replace a face of one person in a video with the face of another person by using a pre-trained generative adversarial network (GAN). Recent public scandals, e.g., the faces of celebrities being swapped onto pornographic videos, call for automated ways to detect these Deepfake videos. To help develop such methods, in this paper, we present the first publicly available set of Deepfake videos generated from videos of the VidTIMIT database. We used open source software based on GANs to create the Deepfakes, and we emphasize that training and blending parameters can significantly impact the quality of the resulting videos. To demonstrate this impact, we generated videos with low and high visual quality (320 videos each) using differently tuned parameter sets. We showed that state-of-the-art face recognition systems based on VGG and Facenet neural networks are vulnerable to Deepfake videos, with 85.62% and 95.00% false acceptance rates (on high quality versions) respectively, which means methods for detecting Deepfake videos are necessary. 
By considering several baseline approaches, we found that audio-visual approach based on lipsync inconsistency detection was not able to distinguish Deepfake videos. The best performing method, which is based on visual quality metrics and is often used in presentation attack detection domain, resulted in 8.97% equal error rate on high quality Deepfakes. Our experiments demonstrate that GAN-generated Deepfake videos are challenging for both face recognition systems and existing detection methods, and the further development of face swapping technology will make it even more so.", "title": "" }, { "docid": "8c221ad31eda07f1628c3003a8c12724", "text": "This paper presents a novel unsupervised domain adaptation method for cross-domain visual recognition. We propose a unified framework that reduces the shift between domains both statistically and geometrically, referred to as Joint Geometrical and Statistical Alignment (JGSA). Specifically, we learn two coupled projections that project the source domain and target domain data into low-dimensional subspaces where the geometrical shift and distribution shift are reduced simultaneously. The objective function can be solved efficiently in a closed form. Extensive experiments have verified that the proposed method significantly outperforms several state-of-the-art domain adaptation methods on a synthetic dataset and three different real world cross-domain visual recognition tasks.", "title": "" }, { "docid": "4e8c39eaa7444158a79573481b80a77f", "text": "Image patch classification is an important task in many different medical imaging applications. In this work, we have designed a customized Convolutional Neural Networks (CNN) with shallow convolution layer to classify lung image patches with interstitial lung disease (ILD). While many feature descriptors have been proposed over the past years, they can be quite complicated and domain-specific. Our customized CNN framework can, on the other hand, automatically and efficiently learn the intrinsic image features from lung image patches that are most suitable for the classification purpose. The same architecture can be generalized to perform other medical image or texture classification tasks.", "title": "" }, { "docid": "d18d4780cc259da28da90485bd3f0974", "text": "L'ostéogenèse imparfaite (OI) est un groupe hétérogène de maladies affectant le collagène de type I et caractérisées par une fragilité osseuse. Les formes létales sont rares et se caractérisent par une micromélie avec déformation des membres. Un diagnostic anténatal d'OI létale a été fait dans deux cas, par échographie à 17 et à 25 semaines d'aménorrhée, complélées par un scanner du squelette fœtal dans un cas. Une interruption thérapeutique de grossesse a été indiquée dans les deux cas. Pan African Medical Journal. 2016; 25:88 doi:10.11604/pamj.2016.25.88.5871 This article is available online at: http://www.panafrican-med-journal.com/content/article/25/88/full/ © Houda EL Mhabrech et al. The Pan African Medical Journal ISSN 1937-8688. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Pan African Medical Journal – ISSN: 19378688 (www.panafrican-med-journal.com) Published in partnership with the African Field Epidemiology Network (AFENET). 
(www.afenet.net) Case report Open Access", "title": "" }, { "docid": "32c405ebed87b4e1ca47cd15b7b9b61b", "text": "Video cameras are pervasively deployed for security and smart city scenarios, with millions of them in large cities worldwide. Achieving the potential of these cameras requires efficiently analyzing the live videos in realtime. We describe VideoStorm, a video analytics system that processes thousands of video analytics queries on live video streams over large clusters. Given the high costs of vision processing, resource management is crucial. We consider two key characteristics of video analytics: resource-quality tradeoff with multi-dimensional configurations, and variety in quality and lag goals. VideoStorm’s offline profiler generates query resourcequality profile, while its online scheduler allocates resources to queries to maximize performance on quality and lag, in contrast to the commonly used fair sharing of resources in clusters. Deployment on an Azure cluster of 101 machines shows improvement by as much as 80% in quality of real-world queries and 7× better lag, processing video from operational traffic cameras.", "title": "" }, { "docid": "ff345d732a273577ca0f965b92e1bbbd", "text": "Integrated circuit (IC) testing for quality assurance is approaching 50% of the manufacturing costs for some complex mixed-signal IC’s. For many years the market growth and technology advancements in digital IC’s were driving the developments in testing. The increasing trend to integrate information acquisition and digital processing on the same chip has spawned increasing attention to the test needs of mixed-signal IC’s. The recent advances in wireless communications indicate a trend toward the integration of the RF and baseband mixed signal technologies. In this paper we examine the developments in IC testing form the historic, current status and future view points. In separate sections we address the testing developments for digital, mixed signal and RF IC’s. With these reviews as context, we relate new test paradigms that have the potential to fundamentally alter the methods used to test mixed-signal and RF parts.", "title": "" }, { "docid": "a4957c88aee24ee9223afea8b01a8a62", "text": "This study examined smartphone user behaviors and their relation to self-reported smartphone addiction. Thirty-four users who did not own smartphones were given instrumented iPhones that logged all phone use over the course of the year-long study. At the conclusion of the study, users were asked to rate their level of addiction to the device. Sixty-two percent agreed or strongly agreed that they were addicted to their iPhones. These users showed differentiated smartphone use as compared to those users who did not indicate an addiction. Addicted users spent twice as much time on their phone and launched applications much more frequently (nearly twice as often) as compared to the non-addicted user. Mail, Messaging, Facebook and the Web drove this use. Surprisingly, Games did not show any difference between addicted and nonaddicted users. Addicted users showed significantly lower time-per-interaction than did non-addicted users for Mail, Facebook and Messaging applications. One addicted user reported that his addiction was problematic, and his use data was beyond three standard deviations from the upper hinge. 
The implications of the relationship between the logged and self-report data are discussed.", "title": "" }, { "docid": "b16992ec2416b420b2115037c78cfd4b", "text": "Dictionary learning algorithms or supervised deep convolution networks have considerably improved the efficiency of predefined feature representations such as SIFT. We introduce a deep scattering convolution network, with complex wavelet filters over spatial and angular variables. This representation brings an important improvement to results previously obtained with predefined features over object image databases such as Caltech and CIFAR. The resulting accuracy is comparable to results obtained with unsupervised deep learning and dictionary based representations. This shows that refining image representations by using geometric priors is a promising direction to improve image classification and its understanding.", "title": "" }, { "docid": "45719c2127204b4eb169fccd2af0bf82", "text": "A face hallucination algorithm is proposed to generate high-resolution images from JPEG compressed low-resolution inputs by decomposing a deblocked face image into structural regions such as facial components and non-structural regions like the background. For structural regions, landmarks are used to retrieve adequate high-resolution component exemplars in a large dataset based on the estimated head pose and illumination condition. For non-structural regions, an efficient generic super resolution algorithm is applied to generate high-resolution counterparts. Two sets of gradient maps extracted from these two regions are combined to guide an optimization process of generating the hallucination image. Numerous experimental results demonstrate that the proposed algorithm performs favorably against the state-of-the-art hallucination methods on JPEG compressed face images with different poses, expressions, and illumination conditions.", "title": "" }, { "docid": "fef5e04bf8ddb05dfd02f10c7862ce6b", "text": "With the rise of computer networks in the past decades, the sp read of distributed applications with components across multiple machines, and with new notions such as mobile code, there has been a need for formal methods to model and reason about concurrency and mobility. The study of sequ ential computations has been based on notions such as Turing machines, recursive functions, the -calculus, all equivalent formalisms capturing the essenc e of sequential computations. Unfortunately, for concurrent programs, th eories for sequential computation are not enough. Many programs are not simply programs that compute a result and re turn it to the user, but rather interact with other programs, and even move from machine to machine. Process calculi are an attempt at getting a formal foundatio n based on such ideas. They emerged from the work of Hoare [4] and Milner [6] on models of concurrency. These calc uli are meant to model systems made up of processes communicating by exchanging values across channels. They a llow for the dynamic creation and removal of processes, allowing the modelling of dynamic systems. A typical proces s calculus in that vein is CCS [6, 7]. The -calculus extends CCS with the ability to create and remove communicat ion links between processes, a new form of dynamic behaviour. By allowing links to be created and deleted, it is po sible to model a form of mobility, by identifying the position of a process by its communication links. 
This book, “The -calculus: A Theory of Mobile Processes”, by Davide Sangior gi and David Walker, is a in-depth study of the properties of the -calculus and its variants. In a sense, it is the logical foll owup to the recent introduction to concurrency and the -calculus by Milner [8], reviewed in SIGACT News, 31(4), Dec ember 2000. What follows is a whirlwind introduction to CCS and the -calculus. It is meant as a way to introduce the notions discussed in much more depth by the book under review. Let us s tart with the basics. CCS provides a syntax for writing processes. The syntax is minimalist, in the grand tradition of foundational calculi such as the -calculus. Processes perform actions, which can be of three forms: the sending of a message over channel x (written x), the receiving of a message over channel x (written x), and internal actions (written ), the details of which are unobservable. Send and receive actions are called synchronizationactions, since communication occurs when the correspondin g processes synchronize. Let stand for actions, including the internal action , while we reserve ; ; : : : for synchronization actions. 1 Processes are written using the following syntax: P ::= Ahx1; : : : ; xki jXi2I i:Pi j P1jP2 j x:P We write 0 for the empty summation (when I = ;). The idea behind process expressions is simple. The proces s 0 represents the process that does nothing and simply termina tes. A process of the form :P awaits to synchronize with a process of the form :Q, after which the processes continue as process P andQ respectively. A generalization 1In the literature, the actions of CCS are often given a much mo re abstract interpretation, as simply names and co-names. T he send/receive interpretation is useful when one moves to the -calculus.", "title": "" }, { "docid": "e9ba4e76a3232e25233a4f5fe206e8ba", "text": "Systems code is often written in low-level languages like C/C++, which offer many benefits but also delegate memory management to programmers. This invites memory safety bugs that attackers can exploit to divert control flow and compromise the system. Deployed defense mechanisms (e.g., ASLR, DEP) are incomplete, and stronger defense mechanisms (e.g., CFI) often have high overhead and limited guarantees [19, 15, 9]. We introduce code-pointer integrity (CPI), a new design point that guarantees the integrity of all code pointers in a program (e.g., function pointers, saved return addresses) and thereby prevents all control-flow hijack attacks, including return-oriented programming. We also introduce code-pointer separation (CPS), a relaxation of CPI with better performance properties. CPI and CPS offer substantially better security-to-overhead ratios than the state of the art, they are practical (we protect a complete FreeBSD system and over 100 packages like apache and postgresql), effective (prevent all attacks in the RIPE benchmark), and efficient: on SPEC CPU2006, CPS averages 1.2% overhead for C and 1.9% for C/C++, while CPI’s overhead is 2.9% for C and 8.4% for C/C++. A prototype implementation of CPI and CPS can be obtained from http://levee.epfl.ch.", "title": "" }, { "docid": "a0594bdeeafdbcc6e2e936cd025407e0", "text": "[Purpose] The aim of this study was to compare the effects of \"McGill stabilization exercises\" and \"conventional physiotherapy\" on pain, functional disability and active back flexion and extension range of motion in patients with chronic non-specific low back pain. 
[Subjects and Methods] Thirty four patients with chronic non-specific low back pain were randomly assigned to McGill stabilization exercises group (n=17) and conventional physiotherapy group (n=17). In both groups, patients performed the corresponding exercises for six weeks. The visual analog scale (VAS), Quebec Low Back Pain Disability Scale Questionnaire and inclinometer were used to measure pain, functional disability, and active back flexion and extension range of motion, respectively. [Results] Statistically significant improvements were observed in pain, functional disability, and active back extension range of motion in McGill stabilization exercises group. However, active back flexion range of motion was the only clinical symptom that statistically increased in patients who performed conventional physiotherapy. There was no significant difference between the clinical characteristics while compared these two groups of patients. [Conclusion] The results of this study indicated that McGill stabilization exercises and conventional physiotherapy provided approximately similar improvement in pain, functional disability, and active back range of motion in patients with chronic non-specific low back pain. However, it appears that McGill stabilization exercises provide an additional benefit to patients with chronic non-specific low back, especially in pain and functional disability improvement.", "title": "" } ]
scidocsrr
cf1573327854c91a71912e9f8b5a2366
Visual attention analysis and prediction on human faces with mole
[ { "docid": "0999a01e947019409c75150f85058728", "text": "We present a robot localization system using biologically inspired vision. Our system models two extensively studied human visual capabilities: (1) extracting the ldquogistrdquo of a scene to produce a coarse localization hypothesis and (2) refining it by locating salient landmark points in the scene. Gist is computed here as a holistic statistical signature of the image, thereby yielding abstract scene classification and layout. Saliency is computed as a measure of interest at every image location, which efficiently directs the time-consuming landmark-identification process toward the most likely candidate locations in the image. The gist features and salient regions are then further processed using a Monte Carlo localization algorithm to allow the robot to generate its position. We test the system in three different outdoor environments-building complex (38.4 m times 54.86 m area, 13 966 testing images), vegetation-filled park (82.3 m times 109.73 m area, 26 397 testing images), and open-field park (137.16 m times 178.31 m area, 34 711 testing images)-each with its own challenges. The system is able to localize, on average, within 0.98, 2.63, and 3.46 m, respectively, even with multiple kidnapped-robot instances.", "title": "" } ]
[ { "docid": "ee5c8e8c4f2964510604d1ef4a452372", "text": "Learning customer preferences from an observed behaviour is an important topic in the marketing literature. Structural models typically model forward-looking customers or firms as utility-maximizing agents whose utility is estimated using methods of Stochastic Optimal Control. We suggest an alternative approach to study dynamic consumer demand, based on Inverse Reinforcement Learning (IRL). We develop a version of the Maximum Entropy IRL that leads to a highly tractable model formulation that amounts to low-dimensional convex optimization in the search for optimal model parameters. Using simulations of consumer demand, we show that observational noise for identical customers can be easily confused with an apparent consumer heterogeneity.", "title": "" }, { "docid": "66d35e0f9d725475d9d1e61a724cf5ea", "text": "As data-driven methods are becoming pervasive in a wide variety of disciplines, there is an urgent need to develop scalable and sustainable tools to simplify the process of data science, to make it easier for the users to keep track of the analyses being performed and datasets being generated, and to enable the users to understand and analyze the workflows. In this paper, we describe our vision of a unified provenance and metadata management system to support lifecycle management of complex collaborative data science workflows. We argue that the information about the analysis processes and data artifacts can, and should be, captured in a semi-passive manner; and we show that querying and analyzing this information can not only simplify bookkeeping and debugging tasks but also enable a rich new set of capabilities like identifying flaws in the data science process itself. It can also significantly reduce the user time spent in fixing post-deployment problems through automated analysis and monitoring. We have implemented a prototype system, PROVDB, on top of git and Neo4j, and we describe its key features and capabilities.", "title": "" }, { "docid": "281c64b492a1aff7707dbbb5128799c8", "text": "Internet business models have been widely discussed in literature and applied within the last decade. Nevertheless, a clear understanding of some e-commerce concepts does not exist yet. The classification of business models in e-commerce is one of these areas. The current research tries to fill this gap through a conceptual and qualitative study. Nine main e-commerce business model types are selected from literature and analyzed to define the criteria and their sub-criteria (characteristics). As a result three different classifications for business models are determined. This study can be used to improve the understanding of essential functions, relations and mechanisms of existing e-commerce business models.", "title": "" }, { "docid": "88a15c0efdfeba3e791ea88862aee0c3", "text": "Logic-based approaches to legal problem solving model the rule-governed nature of legal argumentation, justification, and other legal discourse but suffer from two key obstacles: the absence of efficient, scalable techniques for creating authoritative representations of legal texts as logical expressions; and the difficulty of evaluating legal terms and concepts in terms of the language of ordinary discourse. Data-centric techniques can be used to finesse the challenges of formalizing legal rules and matching legal predicates with the language of ordinary parlance by exploiting knowledge latent in legal corpora. 
However, these techniques typically are opaque and unable to support the rule-governed discourse needed for persuasive argumentation and justification. This paper distinguishes representative legal tasks to which each approach appears to be particularly well suited and proposes a hybrid model that exploits the complementarity of each.", "title": "" }, { "docid": "55772e55adb83d4fd383ddebcf564a71", "text": "The generation of multi-functional drug delivery systems, namely solid dosage forms loaded with nano-sized carriers, remains little explored and is still a challenge for formulators. For the first time, the coupling of two important technologies, 3D printing and nanotechnology, to produce innovative solid dosage forms containing drug-loaded nanocapsules was evaluated here. Drug delivery devices were prepared by fused deposition modelling (FDM) from poly(ε-caprolactone) (PCL) and Eudragit® RL100 (ERL) filaments with or without a channelling agent (mannitol). They were soaked in deflazacort-loaded nanocapsules (particle size: 138nm) to produce 3D printed tablets (printlets) loaded with them, as observed by SEM. Drug loading was improved by the presence of the channelling agent and a linear correlation was obtained between the soaking time and the drug loading (r2=0.9739). Moreover, drug release profiles were dependent on the polymeric material of tablets and the presence of the channelling agent. In particular, tablets prepared with a partially hollow core (50% infill) had a higher drug loading (0.27% w/w) and faster drug release rate. This study represents an original approach to convert nanocapsules suspensions into solid dosage forms as well as an efficient 3D printing method to produce novel drug delivery systems, as personalised nanomedicines.", "title": "" }, { "docid": "52a6319c28c6c889101d9b2b6d4a76d3", "text": "A method is developed for imputing missing values when the probability of response depends upon the variable being imputed. The missing data problem is viewed as one of parameter estimation in a regression model with stochastic ensoring of the dependent variable. The prediction approach to imputation is used to solve this estimation problem. Wages and salaries are imputed to nonrespondents in the Current Population Survey and the results are compared to the nonrespondents' IRS wage and salary data. The stochastic ensoring approach gives improved results relative to a prediction approach that ignores the response mechanism.", "title": "" }, { "docid": "fc3d4b4ac0d13b34aeadf5806013689d", "text": "Internet of Things (IoT) is one of the emerging technologies of this century and its various aspects, such as the Infrastructure, Security, Architecture and Privacy, play an important role in shaping the future of the digitalised world. Internet of Things devices are connected through sensors which have significant impacts on the data and its security. In this research, we used IoT five layered architecture of the Internet of Things to address the security and private issues of IoT enabled services and applications. Furthermore, a detailed survey on Internet of Things infrastructure, architecture, security, and privacy of the heterogeneous objects were presented. The paper identifies the major challenge in the field of IoT; one of them is to secure the data while accessing the objects through sensing machines. 
This research advocates the importance of securing the IoT ecosystem at each layer resulting in an enhanced overall security of the connected devices as well as the data generated. Thus, this paper put forwards a security model to be utilised by the researchers, manufacturers and developers of IoT devices, applications and services.", "title": "" }, { "docid": "b36058bcfcb5f5f4084fe131c42b13d9", "text": "We present regular linear temporal logic (RLTL), a logic that generalizes linear temporal logic with the ability to use regular expressions arbitrarily as sub-expressions. Every LTL operator can be defined as a context in regular linear temporal logic. This implies that there is a (linear) translation from LTL to RLTL. Unlike LTL, regular linear temporal logic can define all ω-regular languages, while still keeping the satisfiability problem in PSPACE. Unlike the extended temporal logics ETL∗, RLTL is defined with an algebraic signature. In contrast to the linear time μ-calculus, RLTL does not depend on fix-points in its syntax.", "title": "" }, { "docid": "4ee84cfdef31d4814837ad2811e59cd4", "text": "In this paper, we propose the MIML (Multi-Instance Multi-Label learning) framework where an example is described by multiple instances and associated with multiple class labels. Compared to traditional learning frameworks, the MIML framework is more convenient and natural for representing complicated objects which have multiple semantic meanings. To learn from MIML examples, we propose the MimlBoost and MimlSvm algorithms based on a simple degeneration strategy, and experiments show that solving problems involving complicated objects with multiple semantic meanings in the MIML framework can lead to good performance. Considering that the degeneration process may lose information, we propose the D-MimlSvm algorithm which tackles MIML problems directly in a regularization framework. Moreover, we show that even when we do not have access to the real objects and thus cannot capture more information from real objects by using the MIML representation, MIML is still useful. We propose the InsDif and SubCod algorithms. InsDif works by transforming single-instances into the MIML representation for learning, while SubCod works by transforming single-label examples into the MIML representation for learning. Experiments show that in some tasks they are able to achieve better performance than learning the single-instances or single-label examples directly.", "title": "" }, { "docid": "7b7ae905f8695dcac4ed1231c76ced69", "text": "In this paper the intelligent control of full automatic car wash using a programmable logic controller (PLC) has been investigated and designed to do all steps of carwashing. The Intelligent control of full automatic carwash has the ability to identify and profile the geometrical dimensions of the vehicle chassis. Vehicle dimension identification is an important point in this control system to adjust the washing brushes position and time duration. The study also tries to design a control set for simulating and building the automatic carwash. The main purpose of the simulation is to develop criteria for designing and building this type of carwash in actual size to overcome challenges of automation. The results of this research indicate that the proposed method in process control not only increases productivity, speed, accuracy and safety but also reduce the time and cost of washing based on dynamic model of the vehicle. 
A laboratory prototype based on an advanced intelligent control has been built to study the validity of the design and simulation which it’s appropriate performance confirms the validity of this study. Keywords—Automatic Carwash, Dimension, PLC.", "title": "" }, { "docid": "c6485365e8ce550ea8c507aa963a00c2", "text": "Consensus molecular subtypes and the evolution of precision medicine in colorectal cancer Rodrigo Dienstmann, Louis Vermeulen, Justin Guinney, Scott Kopetz, Sabine Tejpar and Josep Tabernero Nature Reviews Cancer 17, 79–92 (2017) In this article a source of grant funding for one of the authors was omitted from the Acknowledgements section. The online version of the article has been corrected to include: “The work of R.D. was supported by the Grant for Oncology Innovation under the project ‘Next generation of clinical trials with matched targeted therapies in colorectal cancer’”. C O R R E C T I O N", "title": "" }, { "docid": "c219b930c571a7429dc5c4edc92022f2", "text": "Manually labeling documents for training a text classifier is expensive and time-consuming. Moreover, a classifier trained on labeled documents may suffer from overfitting and adaptability problems. Dataless text classification (DLTC) has been proposed as a solution to these problems, since it does not require labeled documents. Previous research in DLTC has used explicit semantic analysis of Wikipedia content to measure semantic distance between documents, which is in turn used to classify test documents based on nearest neighbours. The semantic-based DLTC method has a major drawback in that it relies on a large-scale, finely-compiled semantic knowledge base, which is difficult to obtain in many scenarios. In this paper we propose a novel kind of model, descriptive LDA (DescLDA), which performs DLTC with only category description words and unlabeled documents. In DescLDA, the LDA model is assembled with a describing device to infer Dirichlet priors from prior descriptive documents created with category description words. The Dirichlet priors are then used by LDA to induce category-aware latent topics from unlabeled documents. Experimental results with the 20Newsgroups and RCV1 datasets show that: (1) our DLTC method is more effective than the semantic-based DLTC baseline method; and (2) the accuracy of our DLTC method is very close to state-of-the-art supervised text classification methods. As neither external knowledge resources nor labeled documents are required, our DLTC method is applicable to a wider range of scenarios.", "title": "" }, { "docid": "e244cbd076ea62b4d720378c2adf4438", "text": "This paper introduces flash organizations: crowds structured like organizations to achieve complex and open-ended goals. Microtask workflows, the dominant crowdsourcing structures today, only enable goals that are so simple and modular that their path can be entirely pre-defined. We present a system that organizes crowd workers into computationally-represented structures inspired by those used in organizations - roles, teams, and hierarchies - which support emergent and adaptive coordination toward open-ended goals. 
Our system introduces two technical contributions: 1) encoding the crowd's division of labor into de-individualized roles, much as movie crews or disaster response teams use roles to support coordination between on-demand workers who have not worked together before; and 2) reconfiguring these structures through a model inspired by version control, enabling continuous adaptation of the work and the division of labor. We report a deployment in which flash organizations successfully carried out open-ended and complex goals previously out of reach for crowdsourcing, including product design, software development, and game production. This research demonstrates digitally networked organizations that flexibly assemble and reassemble themselves from a globally distributed online workforce to accomplish complex work.", "title": "" }, { "docid": "1f44c8d792b961649903eb1ab2612f3c", "text": "Teeth segmentation is an important step in human identification and Content Based Image Retrieval (CBIR) systems. This paper proposes a new approach for teeth segmentation using morphological operations and watershed algorithm. In Cone Beam Computer Tomography (CBCT) and Multi Slice Computer Tomography (MSCT) each tooth is an elliptic shape region that cannot be separated only by considering their pixels' intensity values. For segmenting a tooth from the image, some enhancement is necessary. We use morphological operators such as image filling and image opening to enhance the image. In the proposed algorithm, a Maximum Intensity Projection (MIP) mask is used to separate teeth regions from black and bony areas. Then each tooth is separated using the watershed algorithm. Anatomical constraints are used to overcome the over segmentation problem in watershed method. The results show a high accuracy for the proposed algorithm in segmenting teeth. Proposed method decreases time consuming by considering only one image of CBCT and MSCT for segmenting teeth instead of using all slices.", "title": "" }, { "docid": "f005ebceeac067ffae197fee603ed8c7", "text": "The extended Kalman filter (EKF) is one of the most widely used methods for state estimation with communication and aerospace applications based on its apparent simplicity and tractability (Shi et al., 2002; Bolognani et al., 2003; Wu et al., 2004). However, for an EKF to guarantee satisfactory performance, the system model should be known exactly. Unknown external disturbances may result in the inaccuracy of the state estimate, even cause divergence. This difficulty has been recognized in the literature (Reif & Unbehauen, 1999; Reif et al., 2000), and several schemes have been developed to overcome it. A traditional approach to improve the performance of the filter is the 'covariance setting' technique, where a positive definite estimation error covariance matrix is chosen by the filter designer (Einicke et al., 2003; Bolognani et al., 2003). As it is difficult to manually tune the covariance matrix for dynamic system, adaptive extended Kalman filter (AEKF) approaches for online estimation of the covariance matrix have been adopted (Kim & ILTIS, 2004; Yu et al., 2005; Ahn & Won, 2006). However, only in some special cases, the optimal estimation of the covariance matrix can be obtained. And inaccurate approximation of the covariance matrix may blur the state estimate. Recently, the robust H∞ filter has received considerable attention (Theodor et al., 1994; Shen & Deng, 1999; Zhang et al., 2005; Tseng & Chen, 2001). 
The robust filters take different forms depending on what kind of disturbances are accounted for, while the general performance criterion of the filters is to guarantee a bounded energy gain from the worst possible disturbance to the estimation error. Although the robust extended Kalman filter (REKF) has been deeply investigated (Einicke & White, 1999; Reif et al., 1999; Seo et al., 2006), how to prescribe the level of disturbances attenuation is still an open problem. In general, the selection of the attenuation level can be seen as a tradeoff between the optimality and the robustness. In other words, the robustness of the REKF is obtained at the expense of optimality. This chapter reviews the adaptive robust extended Kalman filter (AREKF), an effective algorithm which will remain stable in the presence of unknown disturbances, and yield accurate estimates in the absence of disturbances (Xiong et al., 2008). The key idea of the AREKF is to design the estimator based on the stability analysis, and determine whether the error covariance matrix should be reset according to the magnitude of the innovation. O pe n A cc es s D at ab as e w w w .in te ch w eb .o rg", "title": "" }, { "docid": "bde9e26746ddcc6e53f442a0e400a57e", "text": "Aljebreen, Mohammed, \"Implementing a dynamic scaling of web applications in a virtualized cloud computing environment\" (2013). Abstract Cloud computing is becoming more essential day by day. The allure of the cloud is the significant value and benefits that people gain from it, such as reduced costs, increased storage, flexibility, and more mobility. Flexibility is one of the major benefits that cloud computing can provide in terms of scaling up and down the infrastructure of a network. Once traffic has increased on one server within the network, a load balancer instance will route incoming requests to a healthy instance, which is less busy and less burdened. When the full complement of instances cannot handle any more requests, past research has been done by Chieu et. al. that presented a scaling algorithm to address a dynamic scalability of web applications on a virtualized cloud computing environment based on relevant indicators that can increase or decrease servers, as needed. In this project, I implemented the proposed algorithm, but based on CPU Utilization threshold. In addition, two tests were run exploring the capabilities of different metrics when faced with ideal or challenging conditions. The results did find a superior metric that was able to perform successfully under both tests. 3 Dedication I lovingly dedicate this thesis to my gracious and devoted mother for her unwavering love and for always believing in me. 4 Acknowledgments This thesis would not have been possible without the support of many people. My wish is to express humble gratitude to the committee chair, Prof. Sharon Mason, who was perpetually generous in offering her invaluable assistance, support, and guidance. Deepest gratitude is also due to the members of my supervisory committee, Prof. Lawrence Hill and Prof. Jim Leone, without whose knowledge and direction this study would not have been successful. Special thanks also to Prof. Charles Border for his financial support of this thesis and priceless assistance. Profound gratitude to my mother, Moneerah, who has been there from the very beginning, for her support and endless love. 
I would also like to convey thanks to my wife for her patient and unending encouragement and support throughout the duration of my studies; without my wife's encouragement, I would not have completed this degree. I wish to express my gratitude to my beloved sister and brothers for their kind understanding throughout my studies. Special thanks to my friend, Mohammed Almathami, for his …", "title": "" }, { "docid": "c85bd1c2ffb6b53bfeec1ec69f871360", "text": "In this paper, we present a new design of a compact power divider based on the modification of the conventional Wilkinson power divider. In this new configuration, length reduction of the high-impedance arms is achieved through capacitive loading using open stubs. Radial configuration was adopted for bandwidth enhancement. Additionally, by insertion of the complex isolation network between the high-impedance transmission lines at an arbitrary phase angle other than 90 degrees, both electrical and physical isolation were achieved. Design equations as well as the synthesis procedure of the isolation network are demonstrated using an example centred at 1 GHz. The measurement results revealed a reduction of 60% in electrical length compared to the conventional Wilkinson power divider with a total length of only 30 degrees at the centre frequency of operation.", "title": "" }, { "docid": "15f6b6be4eec813fb08cb3dd8b9c97f2", "text": "ACKNOWLEDGEMENTS First, I would like to thank my supervisor Professor H. Levent Akın for his guidance. This thesis would not have been possible without his encouragement and enthusiastic support. I would also like to thank all the staff at the Artificial Intelligence Laboratory for their encouragement throughout the year. Their success in RoboCup is always a good motivation. Sharing their precious ideas during the weekly seminars have always guided me to the right direction. Finally I am deeply grateful to my family and to my wife Derya. They always give me endless love and support, which has helped me to overcome the various challenges along the way. Thank you for your patience... The field of Intelligent Transport Systems (ITS) is improving rapidly in the world. Ultimate aim of such systems is to realize fully autonomous vehicle. The researches in the field offer the potential for significant enhancements in safety and operational efficiency. Lane tracking is an important topic in autonomous navigation because the navigable region usually stands between the lanes, especially in urban environments. Several approaches have been proposed, but Hough transform seems to be the dominant among all. A robust lane tracking method is also required for reducing the effect of the noise and achieving the required processing time. In this study, we present a new lane tracking method which uses a partitioning technique for obtaining Multiresolution Hough Transform (MHT) of the acquired vision data. After the detection process, a Hidden Markov Model (HMM) based method is proposed for tracking the detected lanes. Traffic signs are important instruments to indicate the rules on roads. This makes them an essential part of the ITS researches. It is clear that leaving traffic signs out of concern will cause serious consequences. Although the car manufacturers have started to deploy intelligent sign detection systems on their latest models, the road conditions and variations of actual signs on the roads require much more robust and fast detection and tracking methods. 
Localization of such systems is also necessary because traffic signs differ slightly between countries. This study also presents a fast and robust sign detection and tracking method based on geometric transformation and genetic algorithms (GA). Detection is done by a genetic algorithm (GA) approach supported by a radial symmetry check so that false alerts are considerably reduced. Classification is achieved by a combination of SURF features with NN or SVM classifiers. A heuristic …", "title": "" }, { "docid": "1733a6f167e7e13bc816b7fc546e19e3", "text": "As many other machine learning driven medical image analysis tasks, skin image analysis suffers from a chronic lack of labeled data and skewed class distributions, which poses problems for the training of robust and well-generalizing models. The ability to synthesize realistic looking images of skin lesions could act as a reliever for the aforementioned problems. Generative Adversarial Networks (GANs) have been successfully used to synthesize realistically looking medical images, however limited to low resolution, whereas machine learning models for challenging tasks such as skin lesion segmentation or classification benefit from much higher resolution data. In this work, we successfully synthesize realistically looking images of skin lesions with GANs at such high resolution. Therefore, we utilize the concept of progressive growing, which we both quantitatively and qualitatively compare to other GAN architectures such as the DCGAN and the LAPGAN. Our results show that with the help of progressive growing, we can synthesize highly realistic dermoscopic images of skin lesions that even expert dermatologists find hard to distinguish from real ones.", "title": "" }, { "docid": "27316b23e7a7cd163abd40f804caf61b", "text": "Attention based recurrent neural networks (RNN) have shown a great success for question answering (QA) in recent years. Although significant improvements have been achieved over the non-attentive models, the position information is not well studied within the attention-based framework. Motivated by the effectiveness of using the word positional context to enhance information retrieval, we assume that if a word in the question (i.e., question word) occurs in an answer sentence, the neighboring words should be given more attention since they intuitively contain more valuable information for question answering than those far away. Based on this assumption, we propose a positional attention based RNN model, which incorporates the positional context of the question words into the answers' attentive representations. Experiments on two benchmark datasets show the great advantages of our proposed model. Specifically, we achieve a maximum improvement of 8.83% over the classical attention based RNN model in terms of mean average precision. Furthermore, our model is comparable to if not better than the state-of-the-art approaches for question answering.", "title": "" } ]
scidocsrr
8b595113ad1fab654a06f9bb218b5da4
SentiGAN: Generating Sentimental Texts via Mixture Adversarial Networks
[ { "docid": "89f157fd5c42ba827b7d613f80770992", "text": "Generating emotional language is a key step towards building empathetic natural language processing agents. However, a major challenge for this line of research is the lack of large-scale labeled training data, and previous studies are limited to only small sets of human annotated sentiment labels. Additionally, explicitly controlling the emotion and sentiment of generated text is also difficult. In this paper, we take a more radical approach: we exploit the idea of leveraging Twitter data that are naturally labeled with emojis. We collect a large corpus of Twitter conversations that include emojis in the response and assume the emojis convey the underlying emotions of the sentence. We investigate several conditional variational autoencoders training on these conversations, which allow us to use emojis to control the emotion of the generated text. Experimentally, we show in our quantitative and qualitative analyses that the proposed models can successfully generate highquality abstractive conversation responses in accordance with designated emotions.", "title": "" } ]
[ { "docid": "ab6d4dbaf92c142dfce0c8133e7ae669", "text": "This paper presents a high-performance substrate-integrated-waveguide RF microelectromechanical systems (MEMS) tunable filter for 1.2-1.6-GHz frequency range. The proposed filter is developed using packaged RF MEMS switches and utilizes a two-layer structure that effectively isolates the cavity filter from the RF MEMS switch circuitry. The two-pole filter implemented on RT/Duroid 6010LM exhibits an insertion loss of 2.2-4.1 dB and a return loss better than 15 dB for all tuning states. The relative bandwidth of the filter is 3.7 ± 0.5% over the tuning range. The measured Qu of the filter is 93-132 over the tuning range, which is the best reported Q in filters using off-the-shelf RF MEMS switches on conventional printed circuit board substrates. In addition, an upper stopband rejection better than 28 dB is obtained up to 4.0 GHz by employing low-pass filters at the bandpass filter terminals at the cost of 0.7-1.0-dB increase in the insertion loss.", "title": "" }, { "docid": "6dc4e4949d4f37f884a23ac397624922", "text": "Research indicates that maladaptive patterns of Internet use constitute behavioral addiction. This article explores the research on the social effects of Internet addiction. There are four major sections. The Introduction section overviews the field and introduces definitions, terminology, and assessments. The second section reviews research findings and focuses on several key factors related to Internet addiction, including Internet use and time, identifiable problems, gender differences, psychosocial variables, and computer attitudes. The third section considers the addictive potential of the Internet in terms of the Internet, its users, and the interaction of the two. The fourth section addresses current and projected treatments of Internet addiction, suggests future research agendas, and provides implications for educational psychologists.", "title": "" }, { "docid": "1b4e3dcd8f94c3f6e3451ced417655e3", "text": "The serverless paradigm has been rapidly adopted by developers of cloud-native applications, mainly because it relieves them from the burden of provisioning, scaling and operating the underlying infrastructure. In this paper, we propose a novel computing paradigm - Deviceless Edge Computing that extends the serverless paradigm to the edge of the network, enabling IoT and Edge devices to be seamlessly integrated as application execution infrastructure. We also discuss open challenges to realize Deviceless Edge Computing, based on our experience in prototyping a deviceless platform.", "title": "" }, { "docid": "ca307225e8ab0e7876446cf17d659fc8", "text": "This paper presents a novel class of substrate integrated waveguide (SIW) filters, based on periodic perforations of the dielectric layer. The perforations allow to reduce the local effective dielectric permittivity, thus creating waveguide sections below cutoff. This effect is exploited to implement immittance inverters through analytical formulas, providing simple design rules for the direct synthesis of the filters. The proposed solution is demonstrated through the design and testing of several filters with different topologies (including half-mode SIW and folded structures). 
The comparison with classical iris-type SIW filters demonstrates that the proposed filters exhibit better performance in terms of sensitivity to fabrication inaccuracies and rejection bandwidth, at the cost of a slightly larger size.", "title": "" }, { "docid": "12866e003093bc7d89d751697f2be93c", "text": "We argue that the right way to understand distributed protocols is by considering how messages change the state of knowledge of a system. We present a hierarchy of knowledge states that a system may be in, and discuss how communication can move the system's state of knowledge of a fact up the hierarchy. Of special interest is the notion of common knowledge. Common knowledge is an essential state of knowledge for reaching agreements and coordinating action. We show that in practical distributed systems, common knowledge is not attainable. We introduce various relaxations of common knowledge that are attainable in many cases of interest. We describe in what sense these notions are appropriate, and discuss their relationship to each other. We conclude with a discussion of the role of knowledge in distributed systems.", "title": "" }, { "docid": "d94f4df63ac621d9a8dec1c22b720abb", "text": "Automatically selecting an appropriate set of materialized views and indexes for SQL databases is a non-trivial task. A judicious choice must be cost-driven and influenced by the workload experienced by the system. Although there has been work in materialized view selection in the context of multidimensional (OLAP) databases, no past work has looked at the problem of building an industry-strength tool for automated selection of materialized views and indexes for SQL workloads. In this paper, we present an end-to-end solution to the problem of selecting materialized views and indexes. We describe results of extensive experimental evaluation that demonstrate the effectiveness of our techniques. Our solution is implemented as part of a tuning wizard that ships with Microsoft SQL Server 2000.", "title": "" }, { "docid": "36b6c222587948357c275155b085ae6e", "text": "Deep Neural Networks (DNNs) require very large amounts of computation, and many different algorithms have been proposed to implement their most expensive layers, each of which has a large number of variants with different trade-offs of parallelism, locality, memory footprint, and execution time. In addition, specific algorithms operate much more efficiently on specialized data layouts. \n We state the problem of optimal primitive selection in the presence of data layout transformations, and show that it is NP-hard by demonstrating an embedding in the Partitioned Boolean Quadratic Assignment problem (PBQP). We propose an analytic solution via a PBQP solver, and evaluate our approach experimentally by optimizing several popular DNNs using a library of more than 70 DNN primitives, on an embedded platform and a general purpose platform. We show experimentally that significant gains are possible versus the state of the art vendor libraries by using a principled analytic solution to the problem of primitive selection in the presence of data layout transformations.", "title": "" }, { "docid": "d5d55ca4eaa5c4ee129ddfcd7b5ddf87", "text": "Person re-identification (re-id) aims to match pedestrians observed by disjoint camera views. It attracts increasing attention in computer vision due to its importance to surveillance system. 
To combat the major challenge of cross-view visual variations, deep embedding approaches are proposed by learning a compact feature space from images such that the Euclidean distances correspond to their cross-view similarity metric. However, the global Euclidean distance cannot faithfully characterize the ideal similarity in a complex visual feature space because features of pedestrian images exhibit unknown distributions due to large variations in poses, illumination and occlusion. Moreover, intra-personal training samples within a local range are robust to guide deep embedding against uncontrolled variations, which however, cannot be captured by a global Euclidean distance. In this paper, we study the problem of person re-id by proposing a novel sampling to mine suitable positives (i.e., intra-class) within a local range to improve the deep embedding in the context of large intra-class variations. Our method is capable of learning a deep similarity metric adaptive to local sample structure by minimizing each sample’s local distances while propagating through the relationship between samples to attain the whole intra-class minimization. To this end, a novel objective function is proposed to jointly optimize similarity metric learning, local positive mining and robust deep embedding. This yields local discriminations by selecting local-ranged positive samples, and the learned features are robust to dramatic intra-class variations. Experiments on benchmarks show state-of-the-art results achieved by our method.", "title": "" }, { "docid": "d9c189cbf2695fa9ac032b8c6210a070", "text": "The increasing of aspect ratio in DRAM capacitors causes structural instabilities and device failures as the generation evolves. Conventionally, two-dimensional and three-dimensional models are used to solve these problems by optimizing thin film thickness, material properties and structure parameters; however, it is not enough to analyze the latest failures associated with large-scale DRAM capacitor arrays. Therefore, beam-shell model based on classical beam and shell theories is developed in this study to simulate diverse failures. It enables us to solve multiple failure modes concurrently such as supporter crack, capacitor bending, and storage-poly fracture.", "title": "" }, { "docid": "43a57d9ad5a4ea7cb446adf8cb91f640", "text": "It is widely acknowledged that the value of a house is the mixture of a large number of characteristics. House price prediction thus presents a unique set of challenges in practice. While a large body of works are dedicated to this task, their performance and applications have been limited by the shortage of long time span of transaction data, the absence of real-world settings and the insufficiency of housing features. To this end, a time-aware latent hierarchical model is introduced to capture underlying spatiotemporal interactions behind the evolution of house prices. The hierarchical perspective obviates the need for historical transaction data of exactly same houses when temporal effects are considered. The proposed framework is examined on a large-scale dataset of the property transaction in Beijing. The whole experimental procedure strictly complies with the real-world scenario.
The empirical evaluation results demonstrate the outperformance of our approach over alternative competitive methods.", "title": "" }, { "docid": "9e5144241a78ad34045d23d137c84596", "text": "The conventional approach to sampling signals or images follows the celebrated Shannon sampling theorem: the sampling rate must be at least twice the maximum frequency present in the signal (the so-called Nyquist rate). In fact, this principle underlies nearly all signal acquisition protocols used in consumer audio and visual electronics, medical imaging devices, radio receivers, and so on. In the field of data conversion, for example, standard analog-to-digital converter (ADC) technology implements the usual quantized Shannon representation: the signal is uniformly sampled at or above the Nyquist rate. This paper surveys the theory of compressive sampling also known as compressed sensing, or CS, a novel sensing/sampling paradigm that goes against the common wisdom in data acquisition. The CS theory asserts that one can recover certain signals and images from far fewer samples or measurements than traditional methods use. To make this possible, compressive sampling relies on two tenets, namely, sparsity, which pertains to the signals of interest, and incoherence, which pertains to the sensing modality. • Sparsity expresses the idea that the “information rate” of a continuous time signal may be much smaller than suggested by its bandwidth, or that a discrete-time signal depends on a number of degrees of freedom which is comparably much smaller than its (finite) length. More precisely, compressive sampling exploits the fact that many natural signals are sparse or compressible in the sense that they have concise representations when expressed in the proper basis Ψ. • Incoherence extends the duality between time and frequency and expresses the idea that objects having a sparse representation in Ψ must be spread out in the domain in which they are acquired, just as a Dirac or a spike in the time domain is spread out in the frequency domain. Put differently, incoherence says that unlike the signal of interest, the sampling/sensing waveforms have an extremely dense representation in Ψ.", "title": "" }, { "docid": "46cc515d0d41e0027cc975f37d9e1f7b", "text": "A distributed data-stream architecture finds application in sensor networks for monitoring environment and activities. In such a network, large numbers of sensors deliver continuous data to a central server. The rate at which the data is sampled at each sensor affects the communication resource and the computational load at the central server. In this paper, we propose a novel adaptive sampling technique where the sampling rate at each sensor adapts to the streaming-data characteristics. Our approach employs a Kalman-Filter (KF)-based estimation technique wherein the sensor can use the KF estimation error to adaptively adjust its sampling rate within a given range, autonomously. When the desired sampling rate violates the range, a new sampling rate is requested from the server. The server allocates new sampling rates under the constraint of available resources such that KF estimation error over all the active streaming sensors is minimized. 
Through empirical studies, we demonstrate the flexibility and effectiveness of our model.", "title": "" }, { "docid": "6c41dc25f8d63da094732fd54a8497ff", "text": "Robotics systems are complex, often consisted of basic services including SLAM for localization and mapping, Convolution Neural Networks for scene understanding, and Speech Recognition for user interaction, etc. Meanwhile, robots are mobile and usually have tight energy constraints, integrating these services onto an embedded platform with around 10 W of power consumption is critical to the proliferation of mobile robots. In this paper, we present a case study on integrating real-time localization, vision, and speech recognition services on a mobile SoC, Nvidia Jetson TX1, within about 10 W of power envelope. In addition, we explore whether offloading some of the services to cloud platform can lead to further energy efficiency while meeting the real-time requirements.", "title": "" }, { "docid": "e6021e334415240dd813fa2baae36773", "text": "In this study, we propose a discriminative training algorithm to jointly minimize mispronunciation detection errors (i.e., false rejections and false acceptances) and diagnosis errors (i.e., correctly pinpointing mispronunciations but incorrectly stating how they are wrong). An optimization procedure, similar to Minimum Word Error (MWE) discriminative training, is developed to refine the ML-trained HMMs. The errors to be minimized are obtained by comparing transcribed training utterances (including mispronunciations) with Extended Recognition Networks [3] which contain both canonical pronunciations and explicitly modeled mispronunciations. The ERN is compiled by handcrafted rules, or data-driven rules. Several conclusions can be drawn from the experiments: (1) data-driven rules are more effective than hand-crafted ones in capturing mispronunciations; (2) compared with the ML training baseline, discriminative training can reduce false rejections and diagnostic errors, though false acceptances increase slightly due to a small number of false-acceptance samples in the training set.", "title": "" }, { "docid": "609651c6c87b634814a81f38d9bfbc67", "text": "Resistance training (RT) has shown the most promise in reducing/reversing effects of sarcopenia, although the optimum regime specific for older adults remains unclear. We hypothesized myofiber hypertrophy resulting from frequent (3 days/wk, 16 wk) RT would be impaired in older (O; 60-75 yr; 12 women, 13 men), sarcopenic adults compared with young (Y; 20-35 yr; 11 women, 13 men) due to slowed repair/regeneration processes. Myofiber-type distribution and cross-sectional area (CSA) were determined at 0 and 16 wk. Transcript and protein levels of myogenic regulatory factors (MRFs) were assessed as markers of regeneration at 0 and 24 h postexercise, and after 16 wk. Only Y increased type I CSA 18% (P < 0.001). O showed smaller type IIa (-16%) and type IIx (-24%) myofibers before training (P < 0.05), with differences most notable in women. Both age groups increased type IIa (O, 16%; Y, 25%) and mean type II (O, 23%; Y, 32%) size (P < 0.05). Growth was generally most favorable in young men. Percent change scores on fiber size revealed an age x gender interaction for type I fibers (P < 0.05) as growth among Y (25%) exceeded that of O (4%) men. Myogenin and myogenic differentiation factor D (MyoD) mRNAs increased (P < 0.05) in Y and O, whereas myogenic factor (myf)-5 mRNA increased in Y only (P < 0.05). Myf-6 protein increased (P < 0.05) in both Y and O. 
The results generally support our hypothesis as 3 days/wk training led to more robust hypertrophy in Y vs. O, particularly among men. However, this differential hypertrophy adaptation was not explained by age variation in MRF expression.", "title": "" }, { "docid": "d3107e466c5c8e84b578d0563f5c5644", "text": "The recent popularity of mobile camera phones allows for new opportunities to gather important metadata at the point of capture. This paper describes a method for generating metadata for photos using spatial, temporal, and social context. We describe a system we implemented for inferring location information for pictures taken with camera phones and its performance evaluation. We propose that leveraging contextual metadata at the point of capture can address the problems of the semantic and sensory gaps. In particular, combining and sharing spatial, temporal, and social contextual metadata from a given user and across users allows us to make inferences about media content.", "title": "" }, { "docid": "4d959fc84483618a1ea6648b16d2e4d2", "text": "In this themed issue of the Journal of Sport & Exercise Psychology, we bring together an eclectic mix of papers focusing on how expert performers learn the skills needed to compete at the highest level in sport. In the preface, we highlight the value of adopting the expert performance approach as a systematic framework for the evaluation and development of expertise and expert performance in sport. We then place each of the empirical papers published in this issue into context and briefly outline their unique contributions to knowledge in this area. Finally, we highlight several potential avenues for future research in the hope of encouraging others to scientifically study how experts acquire the mechanisms mediating superior performance in sport and how coaches can draw on this knowledge to guide their athletes toward the most effective training activities.", "title": "" }, { "docid": "19b5ec2f1347b458bccc79eb18b5bc39", "text": "Objective: Cyber bullying is a combination of the word cyber and bullying where cyber basically means the Internet or on-line. In this case, cyber bullying will focus on getting in action with bullying by using the Internet or modern technologies such as on-line chats, online media and short messaging texts through social media. The current review aims to compile and summarize the results of relevant publications related to “cyber bullying.\" The review also includes discussing on relevant variables related to cyber bullying. Methods: Information from relevant publications addresses the demographics, prevalence, differences between cyber bullying and traditional bullying, bullying motivation, avenues to overcome it, preventions, coping mechanisms in relation to “cyber bullying” were retrieved and summarized. Results: The prevalence of cyber bullying ranges from 30% 55% and the contributing risk factors include positive association with perpetration, non-supportive school environment, and Internet risky behaviors. Both males and females have been equal weigh on being perpetrators and victims. The older groups with more technology exposures are more prone to be exposed to cyber bullying. With respect to individual components of bullying, repetition is less evident in cyber bullying and power imbalance is not measured by physicality but in terms of popularity and technical knowledge of the perpetrator. 
Conclusion: Due to the limited efforts centralized on the intervention, future researchers should focus on testing the efficacy of possible interventional programs and the effects of different roles in the intervention in order to curb the problem and prevent more deleterious effects of cyber bullying. ASEAN Journal of Psychiatry, Vol. 17 (1): January – June 2016: XX XX.", "title": "" }, { "docid": "187127dd1ab5f97b1158a77a25ddce91", "text": "We introduce stochastic variational inference for Gaussian process models. This enables the application of Gaussian process (GP) models to data sets containing millions of data points. We show how GPs can be variationally decomposed to depend on a set of globally relevant inducing variables which factorize the model in the necessary manner to perform variational inference. Our approach is readily extended to models with non-Gaussian likelihoods and latent variable models based around Gaussian processes. We demonstrate the approach on a simple toy problem and two real world data sets.", "title": "" }, { "docid": "70bed43cdfd50586e803bf1a9c8b3c0a", "text": "We design a way to model apps as vectors, inspired by the recent deep learning approach to vectorization of words called word2vec. Our method relies on how users use apps. In particular, we visualize the time series of how each user uses mobile apps as a “document”, and apply the recent word2vec modeling on these documents, but the novelty is that the training context is carefully weighted by the time interval between the usage of successive apps. This gives us the app2vec vectorization of apps. We apply this to industrial scale data from Yahoo! and (a) show examples that app2vec captures semantic relationships between apps, much as word2vec does with words, (b) show using Yahoo!'s extensive human evaluation system that 82% of the retrieved top similar apps are semantically relevant, achieving 37% lift over bag-of-word approach and 140% lift over matrix factorization approach to vectorizing apps, and (c) finally, we use app2vec to predict app-install conversion and improve ad conversion prediction accuracy by almost 5%. This is the first industry scale design, training and use of app vectorization.", "title": "" } ]
scidocsrr
b5020ade07db21e3632fd1853a4be31a
Multi-dimensional trade-off considerations of the 750V micro pattern trench IGBT for electric drive train applications
[ { "docid": "8b6758fdd357384c2032afd405bf2c6a", "text": "A novel 1200 V Insulated Gate Bipolar Transistor (IGBT) for high-speed switching that combines Shorted Dummy-cell (SD) to control carrier extraction at the emitter side and P/P- collector to reduce hole injection from the backside is proposed. The SD-IGBT with P/P- collector has achieved 37 % reduction of turn-off power dissipation compared with a conventional Floating Dummy-cell (FD) IGBT. The SD-IGBT with P/P- collector also has high turn-off current capability because it extracts carriers uniformly from the dummy-cell. These results show the proposed device has a ideal carrier profile for high-speed switching.", "title": "" } ]
[ { "docid": "b150c18332645bf46e7f2e8ababbcfc4", "text": "Wilkinson Power Dividers/Combiners The in-phase power combiners and dividers are important components of the RF and microwave transmitters when it is necessary to deliver a high level of the output power to antenna, especially in phased-array systems. In this case, it is also required to provide a high degree of isolation between output ports over some frequency range for identical in-phase signals with equal amplitudes. Figure 19(a) shows a planar structure of the basic parallel beam N-way divider/combiner, which provides a combination of powers from the N signal sources. Here, the input impedance of the N transmission lines (connected in parallel) with the characteristic impedance of Z0 each is equal to Z0/N. Consequently, an additional quarterwave transmission line with the characteristic impedance", "title": "" }, { "docid": "ce3e480e50ffc7a79c3dbc71b07ec9f7", "text": "A relatively recent advance in cognitive neuroscience has been multi-voxel pattern analysis (MVPA), which enables researchers to decode brain states and/or the type of information represented in the brain during a cognitive operation. MVPA methods utilize machine learning algorithms to distinguish among types of information or cognitive states represented in the brain, based on distributed patterns of neural activity. In the current investigation, we propose a new approach for representation of neural data for pattern analysis, namely a Mesh Learning Model. In this approach, at each time instant, a star mesh is formed around each voxel, such that the voxel corresponding to the center node is surrounded by its p-nearest neighbors. The arc weights of each mesh are estimated from the voxel intensity values by least squares method. The estimated arc weights of all the meshes, called Mesh Arc Descriptors (MADs), are then used to train a classifier, such as Neural Networks, k-Nearest Neighbor, Naïve Bayes and Support Vector Machines. The proposed Mesh Model was tested on neuroimaging data acquired via functional magnetic resonance imaging (fMRI) during a recognition memory experiment using categorized word lists, employing a previously established experimental paradigm (Öztekin & Badre, 2011). Results suggest that the proposed Mesh Learning approach can provide an effective algorithm for pattern analysis of brain activity during cognitive processing.", "title": "" }, { "docid": "fb162c94248297f35825ff1022ad2c59", "text": "This article traces the evolution of ambulance location and relocation models proposed over the past 30 years. The models are classified in two main categories. Deterministic models are used at the planning stage and ignore stochastic considerations regarding the availability of ambulances. Probabilistic models reflect the fact that ambulances operate as servers in a queueing system and cannot always answer a call. In addition, dynamic models have been developed to repeatedly relocate ambulances throughout the day. 2002 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "1e8e4364427d18406594af9ad3a73a28", "text": "The Internet Addiction Scale (IAS) is a self-report instrument based on the 7 Diagnostic and Statistical Manual of Mental Disorders (4th ed.; American Psychiatric Association, 1994) substance dependence criteria and 2 additional criteria recommended by Griffiths (1998). The IAS was administered to 233 undergraduates along with 4 measures pertaining to loneliness and boredom proneness. 
An item reliability analysis reduced the initial scale from 36 to 31 items (with a Cronbach's alpha of .95). A principal-components analysis indicated that the IAS consisted mainly of one factor. Multiple regression analyses revealed that Family and Social Loneliness and Boredom Proneness were significantly correlated with the IAS; Family and Social Loneliness uniquely predicted IAS scores. No evidence for widespread Internet addiction was found.", "title": "" }, { "docid": "2547e6e8138c49b76062e241391dfc1d", "text": "Methods of deep neural networks (DNNs) have recently demonstrated superior performance on a number of natural language processing tasks. However, in most previous work, the models are learned based on either unsupervised objectives, which does not directly optimize the desired task, or singletask supervised objectives, which often suffer from insufficient training data. We develop a multi-task DNN for learning representations across multiple tasks, not only leveraging large amounts of cross-task data, but also benefiting from a regularization effect that leads to more general representations to help tasks in new domains. Our multi-task DNN approach combines tasks of multiple-domain classification (for query classification) and information retrieval (ranking for web search), and demonstrates significant gains over strong baselines in a comprehensive set of domain adaptation.", "title": "" }, { "docid": "b49cc6cc439e153650c858f65f97b3d7", "text": "The evolution of mobile malware poses a serious threat to smartphone security. Today, sophisticated attackers can adapt by maximally sabotaging machine-learning classifiers via polluting training data, rendering most recent machine learning-based malware detection tools (such as Drebin and DroidAPIMiner) ineffective. In this paper, we explore the feasibility of constructing crafted malware samples; examine how machine-learning classifiers can be misled under three different threat models; then conclude that injecting carefully crafted data into training data can significantly reduce detection accuracy. To tackle the problem, we propose KuafuDet, a two-phase learning enhancing approach that learns mobile malware by adversarial detection. KuafuDet includes an offline training phase that selects and extracts features from the training set, and an online detection phase that utilizes the classifier trained by the first phase. To further address the adversarial environment, these two phases are intertwined through a self-adaptive learning scheme, wherein an automated camouflage detector is introduced to filter the suspicious false negatives and feed them back into the training phase. We finally show KuafuDet significantly reduces false negatives and boosts the detection accuracy by at least 15%. Experiments on more than 250,000 mobile applications demonstrate that KuafuDet is scalable and can be highly effective as a standalone system.", "title": "" }, { "docid": "abbdc23d1c8833abda16f477dddb45fd", "text": "Recently introduced generative adversarial networks (GANs) have been shown numerous promising results to generate realistic samples. In the last couple of years, it has been studied to control features in synthetic samples generated by the GAN. Auxiliary classifier GAN (ACGAN), a conventional method to generate conditional samples, employs a classification layer in discriminator to solve the problem. 
However, in this paper, we demonstrate that the auxiliary classifier can hardly provide good guidance for training of the generator, where the classifier suffers from overfitting. Since the generator learns from classification loss, such a problem has a chance to hinder the training. To overcome this limitation, here, we propose a controllable GAN (ControlGAN) structure. By separating a feature classifier from the discriminator, the classifier can be trained with data augmentation technique, which can support to make a fine classifier. Evaluated with the CIFAR-10 dataset, ControlGAN outperforms AC-WGAN-GP which is an improved version of the ACGAN, where Inception score of the ControlGAN is 8.61 ± 0.10. Furthermore, we demonstrate that the ControlGAN can generate intermediate features and opposite features for interpolated input and extrapolated input labels that are not used in the training process. It implies that the ControlGAN can significantly contribute to the variety of generated samples.", "title": "" }, { "docid": "591327371e942690a88265233fefc548", "text": "The comb fingers of high aspect ratio structures fabricated by micromachining technology are usually not parallel. Effects of the inclination of the fingers and edge effect on the capacitance, driving electrostatic force, and electrostatic spring constant are studied. The complex nonlinear air damping in the 3-D resonators is also determined accurately. The governing equations are presented to describe the complex dynamic problem by taking both linear and nonlinear mechanical spring stiffness constants into account. The dynamic responses of the micro-resonator driven by electrostatic combs are investigated using the multiscale method. Stability analysis is presented using the maximum Lyapunov index map, and effects of vacuum pressure on the frequency tuning and stability are also discussed. The comparisons show that the numerical results agree well with the experimental data reported in the literature, and it verified the validity of the presented dynamic model. The results also demonstrate that the inclination of the fingers causes the resonance frequency to increase and the electrostatic spring to harden under applied dc voltage. Therefore, it can provide an effective approach to balance the traditional resonance frequency decreasing and stiffness softening from driving electrostatic force. The inclination of the fingers can be helpful for strengthening the stability of the MEMS resonators, and avoiding the occurrence of pull-in.", "title": "" }, { "docid": "48d1f79cd3b887cced3d3a2913a25db3", "text": "Children's use of electronic media, including Internet and video gaming, has increased dramatically to an average in the general population of roughly 3 h per day. Some children cannot control their Internet use leading to increasing research on \"internet addiction.\" The objective of this article is to review the research on ADHD as a risk factor for Internet addiction and gaming, its complications, and what research and methodological questions remain to be addressed. The literature search was done in PubMed and Psychinfo, as well as by hand. Previous research has demonstrated rates of Internet addiction as high as 25% in the population and that it is addiction more than time of use that is best correlated with psychopathology. Various studies confirm that psychiatric disorders, and ADHD in particular, are associated with overuse, with severity of ADHD specifically correlated with the amount of use. 
ADHD children may be vulnerable since these games operate in brief segments that are not attention demanding. In addition, they offer immediate rewards with a strong incentive to increase the reward by trying the next level. The time spent on these games may also exacerbate ADHD symptoms, if not directly then through the loss of time spent on more developmentally challenging tasks. While this is a major issue for many parents, there is no empirical research on effective treatment. Internet and off-line gaming overuse and addiction are serious concerns for ADHD youth. Research is limited by the lack of measures for youth or parents, studies of children at risk, and studies of impact and treatment.", "title": "" }, { "docid": "ae5497a11458851438d6cc86daec189a", "text": "Automated activity recognition enables a wide variety of applications related to child and elderly care, disease diagnosis and treatment, personal health or sports training, for which it is key to seamlessly determine and log the user’s motion. This work focuses on exploring the use of smartphones to perform activity recognition without interfering in the user’s lifestyle. Thus, we study how to build an activity recognition system to be continuously executed in a mobile device in background mode. The system relies on device’s sensing, processing and storing capabilities to estimate significant movements/postures (walking at different paces—slow, normal, rush, running, sitting, standing). In order to evaluate the combinations of sensors, features and algorithms, an activity dataset of 16 individuals has been gathered. The performance of a set of lightweight classifiers (Naïve Bayes, Decision Table and Decision Tree) working on different sensor data has been fully evaluated and optimized in terms of accuracy, computational cost and memory fingerprint. Results have pointed out that a priori information on the relative position of the mobile device with respect to the user’s body enhances the estimation accuracy. Results show that computational low-cost Decision Tables using the best set of features among mean and variance and considering all the sensors (acceleration, gravity, linear acceleration, magnetometer, gyroscope) may be enough to get an activity estimation accuracy of around 88 % (78 % is the accuracy of the Naïve Bayes algorithm with the same characteristics used as a baseline). 
To demonstrate its applicability, the activity recognition system has been used to enable a mobile application to promote active lifestyles.", "title": "" }, { "docid": "4b544bb34c55e663cdc5f0a05201e595", "text": "BACKGROUND\nThis study seeks to examine a multidimensional model of student motivation and engagement using within- and between-network construct validation approaches.\n\n\nAIMS\nThe study tests the first- and higher-order factor structure of the motivation and engagement wheel and its corresponding measurement tool, the Motivation and Engagement Scale - High School (MES-HS; formerly the Student Motivation and Engagement Scale).\n\n\nSAMPLE\nThe study draws upon data from 12,237 high school students from 38 Australian high schools.\n\n\nMETHODS\nThe hypothesized 11-factor first-order structure and the four-factor higher-order structure, their relationship with a set of between-network measures (class participation, enjoyment of school, educational aspirations), factor invariance across gender and year-level, and the effects of age and gender are examined using confirmatory factor analysis and structural equation modelling.\n\n\nRESULTS\nIn terms of within-network validity, (1) the data confirm that the 11-factor and higher-order factor models of motivation and engagement are good fitting and (2) multigroup tests showed invariance across gender and year levels. In terms of between-network validity, (3) correlations with enjoyment of school, class participation and educational aspirations are in the hypothesized directions, and (4) girls reflect a more adaptive pattern of motivation and engagement, and year-level findings broadly confirm hypotheses that middle high school students seem to reflect a less adaptive pattern of motivation and engagement.\n\n\nCONCLUSION\nThe first- and higher-order structures hold direct implications for educational practice and directions for future motivation and engagement research.", "title": "" }, { "docid": "af2ef011b7636d12a83003e32755f840", "text": "This paper deals with the impact of fault prediction techniques on checkpointing strategies. We extend the classical first-order analysis of Young and Daly in the presence of a fault prediction system, characterized by its recall and its precision. In this framework, we provide an optimal algorithm to decide when to take predictions into account, and we derive the optimal value of the checkpointing period. These results allow to analytically assess the key parameters that impact the performance of fault predictors at very large scale. Key-words: Fault-tolerance, checkpointing, prediction, algorithms, model, exascale. Résumé: This work considers the impact of fault prediction techniques on checkpoint/restart protocol strategies. We extend Young's classical analysis in the presence of a fault prediction system, which is characterized by its recall (the ratio of predicted faults to the total number of faults) and by its precision (the ratio of true faults among all announced faults).
In this work, we were able to obtain the optimal value of the checkpointing period (thereby minimizing the waste of resource usage due to the cost of taking these checkpoints) in different scenarios. This paper lays the theoretical foundations for future experiments and a validation of the model.", "title": "" }, { "docid": "ba4df2305d4f292a6ee0f033e58d7a16", "text": "Reliable and real-time 3D reconstruction and localization functionality is a crucial prerequisite for the navigation of actively controlled capsule endoscopic robots as an emerging, minimally invasive diagnostic and therapeutic technology for use in the gastrointestinal (GI) tract. In this study, we propose a fully dense, non-rigidly deformable, strictly real-time, intraoperative map fusion approach for actively controlled endoscopic capsule robot applications which combines magnetic and vision-based localization, with non-rigid deformations based frame-to-model map fusion. The performance of the proposed method is evaluated using four different ex-vivo porcine stomach models. Across different trajectories of varying speed and complexity, and four different endoscopic cameras, the root mean square surface reconstruction errors vary from 1.58 to 2.17 cm.", "title": "" }, { "docid": "16a384727d6a323437a0b6ed3cdcc230", "text": "The ability to learn from a small number of examples has been a difficult problem in machine learning since its inception. While methods have succeeded with large amounts of training data, research has been underway in how to accomplish similar performance with fewer examples, known as one-shot or more generally few-shot learning. This technique has been shown to have promising performance, but in practice requires fixed-size inputs making it impractical for production systems where class sizes can vary. This impedes training and the final utility of few-shot learning systems. This paper describes an approach to constructing and training a network that can handle arbitrary example sizes dynamically as the system is used.", "title": "" }, { "docid": "9327a13308cd713bcfb3b4717eaafef0", "text": "A review of both laboratory and field studies on the effects of setting goals when performing a task found that in 90% of the studies, specific and challenging goals lead to higher performance than easy goals, \"do your best\" goals, or no goals. Goals affect performance by directing attention, mobilizing effort, increasing persistence, and motivating strategy development. Goal setting is most likely to improve task performance when the goals are specific and sufficiently challenging, the subjects have sufficient ability (and ability differences are controlled), feedback is provided to show progress in relation to the goal, rewards such as money are given for goal attainment, the experimenter or manager is supportive, and assigned goals are accepted by the individual. No reliable individual differences have emerged in goal-setting studies, probably because the goals were typically assigned rather than self-set. Need for achievement and self-esteem may be the most promising individual difference variables.", "title": "" }, { "docid": "79f10f0b7da7710ce68d9df6212579b6", "text": "The Internet is probably the most successful distributed computing system ever.
However, our capabilities for data querying and manipulation on the internet are primordial at best. User expectations have been rising over time, along with the increasing amount of operational data over the past few decades. The data user expects deeper, more exact, and more detailed results. Result retrieval for the user query is always relative to the pattern of data storage and indexing. In information retrieval systems, tokenization is an integral part whose prime objective is to identify the tokens and their count. In this paper, we have proposed an effective tokenization approach based on a training vector, and the results show the efficiency/effectiveness of the proposed algorithm. Tokenization of documents, which helps to satisfy the user’s information need more precisely and sharply reduces the search space, is believed to be a part of information retrieval. Pre-processing of the input document is an integral part of tokenization: it generates the respective tokens, and on the basis of these tokens probabilistic IR generates its scoring and reduces the search space. The comparative analysis is based on two parameters: the number of tokens generated and the pre-processing time.", "title": "" }, { "docid": "de408de1915d43c4db35702b403d0602", "text": "Real-time population health assessment and monitoring. D. L. Buckeridge, M. Izadi, A. Shaban-Nejad, L. Mondor, C. Jauvin, L. Dubé, Y. Jang, R. Tamblyn. The fragmented nature of population health information is a barrier to public health practice. Despite repeated demands by policymakers, administrators, and practitioners to develop information systems that provide a coherent view of population health status, there has been limited progress toward developing such an infrastructure. We are creating an informatics platform for describing and monitoring the health status of a defined population by integrating multiple clinical and administrative data sources. This infrastructure, which involves a population health record, is designed to enable development of detailed portraits of population health, facilitate monitoring of population health indicators, enable evaluation of interventions, and provide clinicians and patients with population context to assist diagnostic and therapeutic decision-making. In addition to supporting public health professionals, clinicians, and the public, we are designing the infrastructure to provide a platform for public health informatics research. This early report presents the requirements and architecture for the infrastructure and describes the initial implementation of the population health record, focusing on indicators of chronic diseases related to obesity.", "title": "" }, { "docid": "bb12f0a1ecace2493b83c664bdfb7d9b", "text": "Information retrieval is concerned with representing content in a form that can be easily accessed by users with information needs [61, 65]. A definition at this level of generality applies equally well to any index-based retrieval system or database application; so let us focus the topic a little more carefully. Information retrieval, as a field, works primarily with highly unstructured content, such as text documents written in natural language; it deals with information needs that are generally not formulated according to precise specifications; and its criteria for success are based in large part on the demands of a diverse set of human users.
Our purpose in this short article is not to provide a survey of the field of information retrieval — for this we refer the reader to texts and surveys such as [25, 29, 51, 60, 61, 62, 63, 65, 70]. Rather, we wish to discuss some specific applications of techniques from linear algebra to information retrieval and hypertext analysis. In particular, we focus on spectral methods — the use of eigenvectors and singular vectors of matrices — and their role in these areas. After briefly introducing the use of vector-space models in information retrieval [52, 65], we describe the application of the singular value decomposition to dimensionreduction, through the Latent Semantic Indexing technique [14]. We contrast this with several other approaches to clustering and dimension-reduction based on vector-space models.", "title": "" }, { "docid": "17813a603f0c56c95c96f5b2e0229026", "text": "Geographic ranges are estimated for brachiopod and bivalve species during the late Middle (mid-Givetian) to the middle Late (terminal Frasnian) Devonian to investigate range changes during the time leading up to and including the Late Devonian biodiversity crisis. Species ranges were predicted using GARP (Genetic Algorithm using Rule-set Prediction), a modeling program developed to predict fundamental niches of modern species. This method was applied to fossil species to examine changing ranges during a critical period of Earth’s history. Comparisons of GARP species distribution predictions with historical understanding of species occurrences indicate that GARP models predict accurately the presence of common species in some depositional settings. In addition, comparison of GARP distribution predictions with species-range reconstructions from geographic information systems (GIS) analysis suggests that GARP modeling has the potential to predict species ranges more completely and tailor ranges more specifically to environmental parameters than GIS methods alone. Thus, GARP modeling is a potentially useful tool for predicting fossil species ranges and can be used to address a wide array of palaeontological problems. The use of GARP models allows a statistical examination of the relationship of geographic range size with species survival during the Late Devonian. Large geographic range was statistically associated with species survivorship across the crisis interval for species examined in the linguiformis Zone but not for species modeled in the preceding Lower varcus or punctata zones. The enhanced survival benefit of having a large geographic range, therefore, appears to be restricted to the biodiversity crisis interval.", "title": "" }, { "docid": "95974e6e910799e478a1d0c9cda86bcd", "text": "Recently, there has been an explosion of cloud-based services that enable developers to include a spectrum of recognition services, such as emotion recognition, in their applications. The recognition of emotions is a challenging problem, and research has been done on building classifiers to recognize emotion in the open world. Often, learned emotion models are trained on data sets that may not sufficiently represent a target population of interest. For example, many of these on-line services have focused on training and testing using a majority representation of adults and thus are tuned to the dynamics of mature faces. For applications designed to serve an older or younger age demographic, using the outputs from these pre-defined models may result in lower performance rates than when using a specialized classifier. 
Similar challenges with biases in performance arise in other situations where datasets in these large-scale on-line services have a non-representative ratio of the desired class of interest. We consider the challenge of providing application developers with the power to utilize pre-constructed cloud-based services in their applications while still ensuring satisfactory performance for their unique workload of cases. We focus on biases in emotion recognition as a representative scenario to evaluate an approach to improving recognition rates when an on-line pre-trained classifier is used for recognition of a class that may have a minority representation in the training set. We discuss a hierarchical classification approach to address this challenge and show that the average recognition rate associated with the most difficult emotion for the minority class increases by 41.5% and the overall recognition rate for all classes increases by 17.3% when using this approach.", "title": "" } ]
scidocsrr
a254189588a62d5bcead728bfa07c8bc
How the relationship between the crisis life cycle and mass media content can better inform crisis communication .
[ { "docid": "aaebd4defcc22d6b1e8e617ab7f3ec70", "text": "In the American political process, news discourse concerning public policy issues is carefully constructed. This occurs in part because both politicians and interest groups take an increasingly proactive approach to amplify their views of what an issue is about. However, news media also play an active role in framing public policy issues. Thus, in this article, news discourse is conceived as a sociocognitive process involving all three players: sources, journalists, and audience members operating in the universe of shared culture and on the basis of socially defined roles. Framing analysis is presented as a constructivist approach to examine news discourse with the primary focus on conceptualizing news texts into empirically operationalizable dimensions—syntactical, script, thematic, and rhetorical structures—so that evidence of the news media's framing of issues in news texts may be gathered. This is considered an initial step toward analyzing the news discourse process as a whole. Finally, an extended empirical example is provided to illustrate the applications of this conceptual framework of news texts.", "title": "" } ]
[ { "docid": "ff1f503123ce012b478a3772fa9568b5", "text": "Cementoblastoma is a rare odontogenic tumor that has distinct clinical and radiographical features normally suggesting the correct diagnosis. The clinicians and oral pathologists must have in mind several possible differential diagnoses that can lead to a misdiagnosed lesion, especially when unusual clinical features are present. A 21-year-old male presented with dull pain in lower jaw on right side. The clinical inspection of the region was non-contributory to the diagnosis but the lesion could be appreciated on palpation. A swelling was felt in the alveolar region of mandibular premolar-molar on right side. Radiographic examination was suggestive of benign cementoblastoma and the tumor was removed surgically along with tooth. The diagnosis was confirmed by histopathologic study. Although this neoplasm is rare, the dental practitioner should be aware of the clinical, radiographical and histopathological features that will lead to its early diagnosis and treatment.", "title": "" }, { "docid": "d4e22e73965bcd9fdb1628711d6beb44", "text": "This project is designed to measure heart beat (pulse count), by using embedded technology. In this project simultaneously it can measure and monitor the patient’s condition. This project describes the design of a simple, low-cost controller based wireless patient monitoring system. Heart rate of the patient is measured from the thumb finger using IRD (Infra Red Device sensor).Pulse counting sensor is arranged to check whether the heart rate is normal or not. So that a SMS is sent to the mobile number using GSM module interfaced to the controller in case of abnormal condition. A buzzer alert is also given. The heart rate can be measured by monitoring one's pulse using specialized medical devices such as an electrocardiograph (ECG), portable device e.g. The patient heart beat monitoring systems is one of the major wrist strap watch, or any other commercial heart rate monitors which normally consisting of a chest strap with electrodes. Despite of its accuracy, somehow it is costly, involve many clinical settings and patient must be attended by medical experts for continuous monitoring.", "title": "" }, { "docid": "02effa562af44c07076b4ab853642945", "text": "Purpose – The purpose of this paper is to explore the impact of corporate social responsibility (CSR) engagement on employee motivation, job satisfaction and organizational identification as well as employee citizenship in voluntary community activities. Design/methodology/approach – Employees (n 1⁄4 224) of a major airline carrier participated in the study based on a 54-item questionnaire, containing four different sets of items related to volunteering, motivation, job satisfaction and organizational identification. The employee sample consisted of two sub-samples drawn randomly from the company pool of employees, differentiating between active participants in the company’s CSR programs (APs) and non participants (NAPs). Findings – Significant differences were found between APs and NAPs on organizational identification and motivation, but not for job satisfaction. In addition, positive significant correlations between organizational identification, volunteering, job satisfaction, and motivation were obtained. These results are interpreted within the broader context that ties social identity theory (SIT) and organizational identification increase. 
Practical implications – The paper contributes to the understanding of the interrelations between CSR and other organizational behavior constructs. Practitioners can learn from this study how to increase job satisfaction and organizational identification. Both are extremely important for an organization’s sustainability. Originality/value – This is a first attempt to investigate the relationship between CSR, organizational identification and motivation, comparing two groups from the same organization. The paper discusses the questions: ‘‘Are there potential gains at the intra-organizational level in terms of enhanced motivation and organizational attitudes on the part of employees?’’ and ‘‘Does volunteering or active participation in CSR yield greater benefits for involved employees in terms of their motivation, job satisfaction and identification?’’.", "title": "" }, { "docid": "5cf444f83a8b4b3f9482e18cea796348", "text": "This paper investigates L-shaped iris (LSI) embedded in substrate integrated waveguide (SIW) structures. A lumped element equivalent circuit is utilized to thoroughly discuss the iris behavior in a wide frequency band. This structure has one more degree of freedom and design parameter compared with the conventional iris structures; therefore, it enables design flexibility with enhanced performance. The LSI is utilized to realize a two-pole evanescent-mode filter with an enhanced stopband and a dual-band filter combining evanescent and ordinary modes excitation. Moreover, a prescribed filtering function is demonstrated using the lumped element analysis not only including evanescent-mode pole, but also close-in transmission zero. The proposed LSI promises to substitute the conventional posts in (SIW) filter design.", "title": "" }, { "docid": "09c19ae7eea50f269ee767ac6e67827b", "text": "In the last years Python has gained more and more traction in the scientific community. Projects like NumPy, SciPy, and Matplotlib have created a strong foundation for scientific computing in Python and machine learning packages like scikit-learn or packages for data analysis like Pandas are building on top of it. In this paper we present Wyrm ( https://github.com/bbci/wyrm ), an open source BCI toolbox in Python. Wyrm is applicable to a broad range of neuroscientific problems. It can be used as a toolbox for analysis and visualization of neurophysiological data and in real-time settings, like an online BCI application. In order to prevent software defects, Wyrm makes extensive use of unit testing. We will explain the key aspects of Wyrm’s software architecture and design decisions for its data structure, and demonstrate and validate the use of our toolbox by presenting our approach to the classification tasks of two different data sets from the BCI Competition III. Furthermore, we will give a brief analysis of the data sets using our toolbox, and demonstrate how we implemented an online experiment using Wyrm. With Wyrm we add the final piece to our ongoing effort to provide a complete, free and open source BCI system in Python.", "title": "" }, { "docid": "a3b919ee9780c92668c0963f23983f82", "text": "A terrified woman called police because her ex-boyfriend was breaking into her home. Upon arrival, police heard screams coming from the basement. They stopped halfway down the stairs and found the ex-boyfriend pointing a rifle at the floor. Officers observed a strange look on the subject’s face as he slowly raised the rifle in their direction. 
Both officers fired their weapons, killing the suspect. The rifle was not loaded.", "title": "" }, { "docid": "b90ec3edc349a98c41d1106b3c6628ba", "text": "Conventional speech recognition system is constructed by unfolding the spectral-temporal input matrices into one-way vectors and using these vectors to estimate the affine parameters of neural network according to the vector-based error backpropagation algorithm. System performance is constrained because the contextual correlations in frequency and time horizons are disregarded and the spectral and temporal factors are excluded. This paper proposes a spectral-temporal factorized neural network (STFNN) to tackle this weakness. The spectral-temporal structure is preserved and factorized in hidden layers through two ways of factor matrices which are trained by using the factorized error backpropagation. Affine transformation in standard neural network is generalized to the spectro-temporal factorization in STFNN. The structural features or patterns are extracted and forwarded towards the softmax outputs. A deep neural factorization is built by cascading a number of factorization layers with fully-connected layers for speech recognition. An orthogonal constraint is imposed in factor matrices for redundancy reduction. Experimental results show the merit of integrating the factorized features in deep feedforward and recurrent neural networks for speech recognition.", "title": "" }, { "docid": "2802d66dfa1956bf83649614b76d470e", "text": "Given a classification task, what is the best way to teach the resulting boundary to a human? While machine learning techniques can provide excellent methods for finding the boundary, including the selection of examples in an online setting, they tell us little about how we would teach a human the same task. We propose to investigate the problem of example selection and presentation in the context of teaching humans, and explore a variety of mechanisms in the interests of finding what may work best. In particular, we begin with the baseline of random presentation and then examine combinations of several mechanisms: the indication of an example’s relative difficulty, the use of the shaping heuristic from the cognitive science literature (moving from easier examples to harder ones), and a novel kernel-based “coverage model” of the subject’s mastery of the task. From our experiments on 54 human subjects learning and performing a pair of synthetic classification tasks via our teaching system, we found that we can achieve the greatest gains with a combination of shaping and the coverage model.", "title": "" }, { "docid": "26bc2aa9b371e183500e9c979c1fff65", "text": "Complex regional pain syndrome (CRPS) is clinically characterized by pain, abnormal regulation of blood flow and sweating, edema of skin and subcutaneous tissues, trophic changes of skin, appendages of skin and subcutaneous tissues, and active and passive movement disorders. It is classified into type I (previously reflex sympathetic dystrophy) and type II (previously causalgia). Based on multiple evidence from clinical observations, experimentation on humans, and experimentation on animals, the hypothesis has been put forward that CRPS is primarily a disease of the central nervous system. CRPS patients exhibit changes which occur in somatosensory systems processing noxious, tactile and thermal information, in sympathetic systems innervating skin (blood vessels, sweat glands), and in the somatomotor system. 
This indicates that the central representations of these systems are changed and data show that CRPS, in particular type I, is a systemic disease involving these neuronal systems. This way of looking at CRPS shifts the attention away from interpreting the syndrome conceptually in a narrow manner and to reduce it to one system or to one mechanism only, e. g., to sympathetic-afferent coupling. It will further our understanding why CRPS type I may develop after a trivial trauma, after a trauma being remote from the affected extremity exhibiting CRPS, and possibly after immobilization of an extremity. It will explain why, in CRPS patients with sympathetically maintained pain, a few temporary blocks of the sympathetic innervation of the affected extremity sometimes lead to long-lasting (even permanent) pain relief and to resolution of the other changes observed in CRPS. This changed view will bring about a diagnostic reclassification and redefinition of CRPS and will have bearings on the therapeutic approaches. Finally it will shift the focus of research efforts.", "title": "" }, { "docid": "4c39ff8119ddc75213251e7321c7e795", "text": "Building and debugging distributed software remains extremely difficult. We conjecture that by adopting a data-centric approach to system design and by employing declarative programming languages, a broad range of distributed software can be recast naturally in a data-parallel programming model. Our hope is that this model can significantly raise the level of abstraction for programmers, improving code simplicity, speed of development, ease of software evolution, and program correctness.\n This paper presents our experience with an initial large-scale experiment in this direction. First, we used the Overlog language to implement a \"Big Data\" analytics stack that is API-compatible with Hadoop and HDFS and provides comparable performance. Second, we extended the system with complex distributed features not yet available in Hadoop, including high availability, scalability, and unique monitoring and debugging facilities. We present both quantitative and anecdotal results from our experience, providing some concrete evidence that both data-centric design and declarative languages can substantially simplify distributed systems programming.", "title": "" }, { "docid": "ccc70871f57f25da6141a7083bdf5174", "text": "This paper outlines and tests two agency models of dividends. According to the “outcome” model, dividends are the result of effective pressure by minority shareholders to force corporate insiders to disgorge cash. According to the “substitute” model, insiders interested in issuing equity in the future choose to pay dividends to establish a reputation for decent treatment of minority shareholders. The first model predicts that stronger minority shareholder rights should be associated with higher dividend payouts; the second model predicts the opposite. Tests on a cross-section of 4,000 companies from 33 countries with different levels of minority shareholder rights support the outcome agency model of dividends. The authors are from Harvard University, Harvard University, Harvard University and University of Chicago, respectively. They are grateful to Alexander Aganin for excellent research assistance, and to Lucian Bebchuk, Mihir Desai, Edward Glaeser, Denis Gromb, Oliver Hart, James Hines, Kose John, James Poterba, Roberta Romano, Raghu Rajan, Lemma Senbet, René Stulz, Daniel Wolfenzohn, Luigi Zingales, and two anonymous referees for helpful comments. 
2 The so-called dividend puzzle (Black 1976) has preoccupied the attention of financial economists at least since Modigliani and Miller’s (1958, 1961) seminal work. This work established that, in a frictionless world, when the investment policy of a firm is held constant, its dividend payout policy has no consequences for shareholder wealth. Higher dividend payouts lead to lower retained earnings and capital gains, and vice versa, leaving total wealth of the shareholders unchanged. Contrary to this prediction, however, corporations follow extremely deliberate dividend payout strategies (Lintner (1956)). This evidence raises a puzzle: how do firms choose their dividend policies? In the United States and other countries, the puzzle is even deeper since many shareholders are taxed more heavily on their dividend receipts than on capital gains. The actual magnitude of this tax burden is debated (see Poterba and Summers (1985) and Allen and Michaely (1997)), but taxes generally make it even harder to explain dividend policies of firms. Economists have proposed a number of explanations of the dividend puzzle. Of these, particularly popular is the idea that firms can signal future profitability by paying dividends (Bhattacharya (1979), John and Williams (1985), Miller and Rock (1985), Ambarish, John, and Williams (1987)). Empirically, this theory had considerable initial success, since firms that initiate (or raise) dividends experience share price increases, and the converse is true for firms that eliminate (or cut) dividends (Aharony and Swary (1980), Asquith and Mullins (1983)). Recent results are more mixed, since current dividend changes do not help predict firms’ future earnings growth (DeAngelo, DeAngelo, and Skinner (1996) and Benartzi, Michaely, and Thaler (1997)). Another idea, which has received only limited attention until recently (e.g., Easterbrook (1984), Jensen (1986), Fluck (1998a, 1998b), Myers (1998), Gomes (1998), Zwiebel (1996)), is 3 that dividend policies address agency problems between corporate insiders and outside shareholders. According to these theories, unless profits are paid out to shareholders, they may be diverted by the insiders for personal use or committed to unprofitable projects that provide private benefits for the insiders. As a consequence, outside shareholders have a preference for dividends over retained earnings. Theories differ on how outside shareholders actually get firms to disgorge cash. The key point, however, is that failure to disgorge cash leads to its diversion or waste, which is detrimental to outside shareholders’ interest. The agency approach moves away from the assumptions of the Modigliani-Miller theorem by recognizing two points. First, the investment policy of the firm cannot be taken as independent of its dividend policy, and, in particular, paying out dividends may reduce the inefficiency of marginal investments. Second, and more subtly, the allocation of all the profits of the firm to shareholders on a pro-rata basis cannot be taken for granted, and in particular the insiders may get preferential treatment through asset diversion, transfer prices and theft, even holding the investment policy constant. In so far as dividends are paid on a pro-rata basis, they benefit outside shareholders relative to the alternative of expropriation of retained earnings. 
In this paper, we attempt to identify some of the basic elements of the agency approach to dividends, to understand its key implications, and to evaluate them on a cross-section of over 4,000 firms from 33 countries around the world. The reason for looking around the world is that the severity of agency problems to which minority shareholders are exposed differs greatly across countries, in part because legal protection of these shareholders vary (La Porta et al. (1997, 1998)). Empirically, we find that dividend policies vary across legal regimes in ways consistent with a particular version of the agency theory of dividends. Specifically, firms in common law 4 countries, where investor protection is typically better, make higher dividend payouts than firms in civil law countries do. Moreover, in common but not civil law countries, high growth firms make lower dividend payouts than low growth firms. These results support the version of the agency theory in which investors in good legal protection countries use their legal powers to extract dividends from firms, especially when reinvestment opportunities are poor. Section I of the paper summarizes some of the theoretical arguments. Section II describes the data. Section III presents our empirical findings. Section IV concludes. I. Theoretical Issues. A. Agency Problems and Legal Regimes Conflicts of interest between corporate insiders, such as managers and controlling shareholders, on the one hand, and outside investors, such as minority shareholders, on the other hand, are central to the analysis of the modern corporation (Berle and Means (1932), Jensen and Meckling (1976)). The insiders who control corporate assets can use these assets for a range of purposes that are detrimental to the interests of the outside investors. Most simply, they can divert corporate assets to themselves, through outright theft, dilution of outside investors through share issues to the insiders, excessive salaries, asset sales to themselves or other corporations they control at favorable prices, or transfer pricing with other entities they control (see Shleifer and Vishny (1997) for a discussion). Alternatively, insiders can use corporate assets to pursue investment strategies that yield them personal benefits of control, such as growth or diversification, without benefitting outside investors (e.g., Baumol (1959), Jensen (1986)). What is meant by insiders varies from country to country. In the United States, U.K., 5 Canada, and Australia, where ownership in large corporations is relatively dispersed, most large corporations are to a significant extent controlled by their managers. In most other countries, large firms typically have shareholders that own a significant fraction of equity, such as the founding families (La Porta, Lopez-de-Silanes, and Shleifer (1999)). The controlling shareholders can effectively determine the decisions of the managers (indeed, managers typically come from the controlling family), and hence the problem of managerial control per se is not as severe as it is in the rich common law countries. On the other hand, the controlling shareholders can implement policies that benefit themselves at the expense of minority shareholders. Regardless of the identity of the insiders, the victims of insider control are minority shareholders. It is these minority shareholders that would typically have a taste for dividends. One of the principal remedies to agency problems is the law. 
Corporate and other law gives outside investors, including shareholders, certain powers to protect their investment against expropriation by insiders. These powers in the case of shareholders range from the right to receive the same per share dividends as the insiders, to the right to vote on important corporate matters, including the election of directors, to the right to sue the company for damages. The very fact that this legal protection exists probably explains why becoming a minority shareholder is a viable investment strategy, as opposed to just being an outright giveaway of money to strangers who are under few if any obligations to give it back. As pointed out by La Porta et al. (1998), the extent of legal protection of outside investors differs enormously across countries. Legal protection consists of both the content of the laws and the quality of their enforcement. Some countries, including most notably the wealthy common law countries such as the U.S. and the U.K., provide effective protection of minority shareholders 6 so that the outright expropriation of corporate assets by the insiders is rare. Agency problems manifest themselves primarily through non-value-maximizing investment choices. In many other countries, the condition of outside investors is a good deal more precarious, but even there some protection does exist. La Porta et al. (1998) show in particular that common law countries appear to have the best legal protection of minority shareholders, whereas civil law countries, and most conspicuously the French civil law countries, have the weakest protection. The quality of investor protection, viewed as a proxy for lower agency costs, has been shown to matter for a number of important issues in corporate finance. For example, corporate ownership is more concentrated in countries with inferior shareholder protection (La Porta et al. (1998), La Porta, Lopez-de-Silanes, and Shleifer (1999)). The valuation and breadth of cap", "title": "" }, { "docid": "a346607a5e2e6c48e07e3e34a2ec7b0d", "text": "The development and professionalization of a video game requires tools for analyzing the practice of the players and teams, their tactics and strategies. These games are very popular and by nature numerical, they provide many tracks that we analyzed in terms of team play. We studied Defense of the Ancients (DotA), a Multiplayer Online Battle Arena (MOBA), where two teams battle in a game very similar to rugby or American football. Through topological measures – area of polygon described by the players, inertia, diameter, distance to the base – that are independent of the exact nature of the game, we show that the outcome of the match can be relevantly predicted. Mining e-sport’s tracks is opening interest in further application of these tools for analyzing real time sport. © 2014. Published by Elsevier B.V. Selection and/or peer review under responsibility of American Applied Science Research Institute", "title": "" }, { "docid": "616b6db46d3a01730c3ea468b0a03fc5", "text": "We demonstrate the surprising strength of unimodal baselines in multimodal domains, and make concrete recommendations for best practices in future research. Where existing work often compares against random or majority class baselines, we argue that unimodal approaches better capture and reflect dataset biases and therefore provide an important comparison when assessing the performance of multimodal techniques. 
We present unimodal ablations on three recent datasets in visual navigation and QA, seeing an up to 29% absolute gain in performance over published baselines.", "title": "" }, { "docid": "119c20c537f833731965e0d8aeba0964", "text": "The literature on Inverse Reinforcement Learning (IRL) typically assumes that humans take actions in order to minimize the expected value of a cost function, i.e., that humans are risk neutral. Yet, in practice, humans are often far from being risk neutral. To fill this gap, the objective of this paper is to devise a framework for risk-sensitive IRL in order to explicitly account for a human’s risk sensitivity. To this end, we propose a flexible class of models based on coherent risk measures, which allow us to capture an entire spectrum of risk preferences from risk-neutral to worst-case. We propose efficient non-parametric algorithms based on linear programming and semi-parametric algorithms based on maximum likelihood for inferring a human’s underlying risk measure and cost function for a rich class of static and dynamic decision-making settings. The resulting approach is demonstrated on a simulated driving game with ten human participants. Our method is able to infer and mimic a wide range of qualitatively different driving styles from highly risk-averse to risk-neutral in a data-efficient manner. Moreover, comparisons of the Risk-Sensitive (RS) IRL approach with a risk-neutral model show that the RS-IRL framework more accurately captures observed participant behavior both qualitatively and quantitatively, especially in scenarios where catastrophic outcomes such as collisions can occur.", "title": "" }, { "docid": "bb815929889d93e19c6581c3f9a0b491", "text": "This paper presents an HMM-MLP hybrid system to recognize complex date images written on Brazilian bank cheques. The system first segments implicitly a date image into sub-fields through the recognition process based on an HMM-based approach. Afterwards, the three obligatory date sub-fields are processed by the system (day, month and year). A neural approach has been adopted to work with strings of digits and a Markovian strategy to recognize and verify words. We also introduce the concept of meta-classes of digits, which is used to reduce the lexicon size of the day and year and improve the precision of their segmentation and recognition. Experiments show interesting results on date recognition.", "title": "" }, { "docid": "2f2e5d62475918dc9cfd54522f480a11", "text": "In this paper, we propose a very simple and elegant patch-based, machine learning technique for image denoising using the higher order singular value decomposition (HOSVD). The technique simply groups together similar patches from a noisy image (with similarity defined by a statistically motivated criterion) into a 3D stack, computes the HOSVD coefficients of this stack, manipulates these coefficients by hard thresholding, and inverts the HOSVD transform to produce the final filtered image. Our technique chooses all required parameters in a principled way, relating them to the noise model. We also discuss our motivation for adopting the HOSVD as an appropriate transform for image denoising. We experimentally demonstrate the excellent performance of the technique on grayscale as well as color images. On color images, our method produces state-of-the-art results, outperforming other color image denoising algorithms at moderately high noise levels. 
A criterion for optimal patch-size selection and noise variance estimation from the residual images (after denoising) is also presented.", "title": "" }, { "docid": "b84d8b711738bbd889a3a88ba82f45c0", "text": "Transmission over wireless channel is challenging. As such, different application required different signal processing approach of radio system. So, a highly reconfigurable radio system is on great demand as the traditional fixed and embedded radio system are not viable to cater the needs for frequently change requirements of wireless communication. A software defined radio or better known as an SDR, is a software-based radio platform that offers flexibility to deliver the highly reconfigurable system requirements. This approach allows a different type of communication system requirements such as standard, protocol, or signal processing method, to be deployed by using the same set of hardware and software such as USRP and GNU Radio respectively. For researchers, this approach has opened the door to extend their studies in simulation domain into experimental domain. However, the realization of SDR concept is inherently limited by the analog components of the hardware being used. Despite that, the implementation of SDR is still new yet progressing, thus, this paper intends to provide an insight about its viability as a high re-configurable platform for communication system. This paper presents the SDR-based transceiver of common digital modulation system by means of GNU Radio and USRP.", "title": "" }, { "docid": "60a655d6b6d79f55151e871d2f0d4d34", "text": "The clinical characteristics of drug hypersensitivity reactions are very heterogeneous as drugs can actually elicit all types of immune reactions. The majority of allergic reactions involve either drug-specific IgE or T cells. Their stimulation leads to quite distinct immune responses, which are classified according to Gell and Coombs. Here, an extension of this subclassification, which considers the distinct T-cell functions and immunopathologies, is presented. These subclassifications are clinically useful, as they require different treatment and diagnostic steps. Copyright © 2007 S. Karger AG, Basel", "title": "" }, { "docid": "d80d52806cbbdd6148e3db094eabeed7", "text": "We decided to test a surprisingly simple hypothesis; namely, that the relationship between an image of a scene and the chromaticity of scene illumination could be learned by a neural network. The thought was that if this relationship could be extracted by a neural network, then the trained network would be able to determine a scene's illumination from its image, which would then allow correction of the image colors to those relative to a standard illuminant, thereby providing color constancy. Using a database of surface reflectances and illuminants, along with the spectral sensitivity functions of our camera, we generated thousands of images of randomly selected illuminants lighting `scenes' of 1 to 60 randomly selected reflectances. During the learning phase the network is provided the image data along with the chromaticity of its illuminant. After training, the network outputs (very quickly) the chromaticity of the illumination given only the image data. 
We obtained surprisingly good estimates of the ambient illumination lighting from the network even when applied to scenes in our lab that were completely unrelated to the training data.", "title": "" }, { "docid": "3e0f74c880165b5147864dfaa6a75c11", "text": "Traditional hollow metallic waveguide manufacturing techniques are readily capable of producing components with high-precision geometric tolerances, yet generally lack the ability to customize individual parts on demand or to deliver finished components with low lead times. This paper proposes a Rapid-Prototyping (RP) method for relatively low-loss millimeter-wave hollow waveguides produced using consumer-grade stereolithographic (SLA) Additive Manufacturing (AM) technology, in conjunction with an electroless metallization process optimized for acrylate-based photopolymer substrates. To demonstrate the capabilities of this particular AM process, waveguide prototypes are fabricated for the W- and D-bands. The measured insertion loss at W-band is between 0.12 dB/in and 0.25 dB/in, corresponding to a mean value of 0.16 dB/in. To our knowledge, this is the lowest insertion loss figure presented to date, when compared to other W-Band AM waveguide designs reported in the literature. Printed D-band waveguide prototypes exhibit a transducer loss of 0.26 dB/in to 1.01 dB/in, with a corresponding mean value of 0.65 dB/in, which is similar in performance to a commercial metal waveguide.", "title": "" } ]
scidocsrr
206bd53c2f28475975d72bf44504f279
Learning Clip Representations for Skeleton-Based 3D Action Recognition
[ { "docid": "afee419227629f8044b5eb0addd65ce3", "text": "Both Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) have shown improvements over Deep Neural Networks (DNNs) across a wide variety of speech recognition tasks. CNNs, LSTMs and DNNs are complementary in their modeling capabilities, as CNNs are good at reducing frequency variations, LSTMs are good at temporal modeling, and DNNs are appropriate for mapping features to a more separable space. In this paper, we take advantage of the complementarity of CNNs, LSTMs and DNNs by combining them into one unified architecture. We explore the proposed architecture, which we call CLDNN, on a variety of large vocabulary tasks, varying from 200 to 2,000 hours. We find that the CLDNN provides a 4-6% relative improvement in WER over an LSTM, the strongest of the three individual models.", "title": "" }, { "docid": "4d69284c25e1a9a503dd1c12fde23faa", "text": "Human pose estimation has been actively studied for decades. While traditional approaches rely on 2d data like images or videos, the development of Time-of-Flight cameras and other depth sensors created new opportunities to advance the field. We give an overview of recent approaches that perform human motion analysis which includes depthbased and skeleton-based activity recognition, head pose estimation, facial feature detection, facial performance capture, hand pose estimation and hand gesture recognition. While the focus is on approaches using depth data, we also discuss traditional image based methods to provide a broad overview of recent developments in these areas.", "title": "" }, { "docid": "0d16b2f41e4285a5b89b31ed16f378a8", "text": "Recognizing human actions from unknown and unseen (novel) views is a challenging problem. We propose a Robust Non-Linear Knowledge Transfer Model (R-NKTM) for human action recognition from novel views. The proposed R-NKTM is a deep fully-connected neural network that transfers knowledge of human actions from any unknown view to a shared high-level virtual view by finding a set of non-linear transformations that connects the views. The R-NKTM is learned from 2D projections of dense trajectories of synthetic 3D human models fitted to real motion capture data and generalizes to real videos of human actions. The strength of our technique is that we learn a single R-NKTM for all actions and all viewpoints for knowledge transfer of any real human action video without the need for re-training or fine-tuning the model. Thus, R-NKTM can efficiently scale to incorporate new action classes. R-NKTM is learned with dummy labels and does not require knowledge of the camera viewpoint at any stage. Experiments on three benchmark cross-view human action datasets show that our method outperforms existing state-of-the-art.", "title": "" } ]
[ { "docid": "743aeaa668ba32e6561e9e62015e24cd", "text": "A smart city enables the effective utilization of resources and better quality of services to the citizens. To provide services such as air quality management, weather monitoring and automation of homes and buildings in a smart city, the basic parameters are temperature, humidity and CO2. This paper presents a customised design of an Internet of Things (IoT) enabled environment monitoring system to monitor temperature, humidity and CO2. In developed system, data is sent from the transmitter node to the receiver node. The data received at the receiver node is monitored and recorded in an excel sheet in a personal computer (PC) through a Graphical User Interface (GUI), made in LabVIEW. An Android application has also been developed through which data is transferred from LabVIEW to a smartphone, for monitoring data remotely. The results and the performance of the proposed system is discussed.", "title": "" }, { "docid": "8d91b88e9f57181e9c5427b8578bc322", "text": "AIM\n This paper reports on a study that looked at the characteristics of exemplary nurse leaders in times of change from the perspective of frontline nurses.\n\n\nBACKGROUND\n Large-scale changes in the health care system and their associated challenges have highlighted the need for strong leadership at the front line.\n\n\nMETHODS\n In-depth personal interviews with open-ended questions were the primary means of data collection. The study identified and explored six frontline nurses' perceptions of the qualities of nursing leaders through qualitative content analysis. This study was validated by results from the current literature.\n\n\nRESULTS\n The frontline nurses described several common characteristics of exemplary nurse leaders, including: a passion for nursing; a sense of optimism; the ability to form personal connections with their staff; excellent role modelling and mentorship; and the ability to manage crisis while guided by a set of moral principles. All of these characteristics pervade the current literature regarding frontline nurses' perspectives on nurse leaders.\n\n\nCONCLUSION\n This study identified characteristics of nurse leaders that allowed them to effectively assist and support frontline nurses in the clinical setting.\n\n\nIMPLICATIONS FOR NURSING MANAGEMENT\n The findings are of significance to leaders in the health care system and in the nursing profession who are in a position to foster development of leaders to mentor and encourage frontline nurses.", "title": "" }, { "docid": "f4fc99eebfea1d5c899b956430ee896e", "text": "Searchable Encryption (SE) has been extensively examined by both academic and industry researchers. While many academic SE schemes show provable security, they usually expose some query information (e.g., search and access patterns) to achieve high efficiency. However, several inference attacks have exploited such leakage, e.g., a query recovery attack can convert opaque query trapdoors to their corresponding keywords based on some prior knowledge. On the other hand, many proposed SE schemes require significant modification of existing applications, which makes them less practical, weak in usability, and difficult to deploy. In this paper, we introduce a secure and practical searchable symmetric encryption scheme with provable security strength for cloud applications, called IDCrypt, which improves the search efficiency, and enhances the security strength of SE using symmetric cryptography. 
We further point out the main challenges in securely searching on multiple indexes and sharing encrypted data between multiple users. To address the above issues, we propose a token-adjustment search scheme to preserve the search functionality among multi-indexes, and a key sharing scheme which combines identity-based encryption and public-key encryption. Our experimental results show that the overhead of the key sharing scheme is fairly low.", "title": "" }, { "docid": "67995490350c68f286029d8b401d78d8", "text": "OBJECTIVE\nModifiable risk factors for dementia were recently identified and compiled in a systematic review. The 'Lifestyle for Brain Health' (LIBRA) score, reflecting someone's potential for dementia prevention, was studied in a large longitudinal population-based sample with respect to predicting cognitive change over an observation period of up to 16 years.\n\n\nMETHODS\nLifestyle for Brain Health was calculated at baseline for 949 participants aged 50-81 years from the Maastricht Ageing Study. The predictive value of LIBRA for incident dementia and cognitive impairment was examined by using Cox proportional hazard models and by testing its relation with cognitive decline.\n\n\nRESULTS\nLifestyle for Brain Health predicted future risk of dementia, as well as risk of cognitive impairment. A one-point increase in LIBRA score related to 19% higher risk for dementia and 9% higher risk for cognitive impairment. LIBRA predicted rate of decline in processing speed, but not memory or executive functioning.\n\n\nCONCLUSIONS\nLifestyle for Brain Health (LIBRA) may help in identifying and monitoring risk status in dementia-prevention programmes, by targeting modifiable, lifestyle-related risk factors. Copyright © 2017 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "22629b96f1172328e654ea6ed6dccd92", "text": "This paper uses the case of contract manufacturing in the electronics industry to illustrate an emergent American model of industrial organization, the modular production network. Lead firms in the modular production network concentrate on the creation, penetration, and defense of markets for end products—and increasingly the provision of services to go with them—while manufacturing capacity is shifted out-of-house to globally-operating turn-key suppliers. The modular production network relies on codified inter-firm links and the generic manufacturing capacity residing in turn-key suppliers to reduce transaction costs, build large external economies of scale, and reduce risk for network actors. I test the modular production network model against some of the key theoretical tools that have been developed to predict and explain industry structure: Joseph Schumpeter's notion of innovation in the giant firm, Alfred Chandler's ideas about economies of speed and the rise of the modern corporation, Oliver Williamson's transaction cost framework, and a range of other production network models that appear in the literature. I argue that the modular production network yields better economic performance in the context of globalization than more spatially and socially embedded network models. 
I view the emergence of the modular production network as part of a historical process of industrial transformation in which nationally-specific models of industrial organization co-evolve in intensifying rounds of competition, diffusion, and adaptation.", "title": "" }, { "docid": "6a1da115f887498370b400efa6e57ed0", "text": "Local search heuristics for non-convex optimizations are popular in applied machine learning. However, in general it is hard to guarantee that such algorithms even converge to a local minimum, due to the existence of complicated saddle point structures in high dimensions. Many functions have degenerate saddle points such that the first and second order derivatives cannot distinguish them with local optima. In this paper we use higher order derivatives to escape these saddle points: we design the first efficient algorithm guaranteed to converge to a third order local optimum (while existing techniques are at most second order). We also show that it is NP-hard to extend this further to finding fourth order local optima.", "title": "" }, { "docid": "9c799b4d771c724969be7b392697ebee", "text": "Search engines need to model user satisfaction to improve their services. Since it is not practical to request feedback on searchers' perceptions and search outcomes directly from users, search engines must estimate satisfaction from behavioral signals such as query refinement, result clicks, and dwell times. This analysis of behavior in the aggregate leads to the development of global metrics such as satisfied result clickthrough (typically operationalized as result-page clicks with dwell time exceeding a particular threshold) that are then applied to all searchers' behavior to estimate satisfac-tion levels. However, satisfaction is a personal belief and how users behave when they are satisfied can also differ. In this paper we verify that searcher behavior when satisfied and dissatisfied is indeed different among individual searchers along a number of dimensions. As a result, we introduce and evaluate learned models of satisfaction for individual searchers and searcher cohorts. Through experimentation via logs from a large commercial Web search engine, we show that our proposed models can predict search satisfaction more accurately than a global baseline that applies the same satisfaction model across all users. Our findings have implications for the study and application of user satisfaction in search systems.", "title": "" }, { "docid": "cb47cc2effac1404dd60a91a099699d1", "text": "We survey recent trends in practical algorithms for balanced graph partitioning, point to applications and discuss future research directions.", "title": "" }, { "docid": "040d39a7bf861a05cbd10fda9c0a1576", "text": "Skin laceration repair is an important skill in family medicine. Sutures, tissue adhesives, staples, and skin-closure tapes are options in the outpatient setting. Physicians should be familiar with various suturing techniques, including simple, running, and half-buried mattress (corner) sutures. Although suturing is the preferred method for laceration repair, tissue adhesives are similar in patient satisfaction, infection rates, and scarring risk in low skin-tension areas and may be more cost-effective. The tissue adhesive hair apposition technique also is effective in repairing scalp lacerations. The sting of local anesthesia injections can be lessened by using smaller gauge needles, administering the injection slowly, and warming or buffering the solution. 
Studies have shown that tap water is safe to use for irrigation, that white petrolatum ointment is as effective as antibiotic ointment in postprocedure care, and that wetting the wound as early as 12 hours after repair does not increase the risk of infection. Patient education and appropriate procedural coding are important after the repair.", "title": "" }, { "docid": "8c46f24d8e710c5fb4e25be76fc5b060", "text": "This paper presents the novel design of a wideband circularly polarized (CP) Radio Frequency Identification (RFID) reader microstrip patch antenna for worldwide Ultra High Frequency (UHF) band which covers 840–960 MHz. The proposed antenna, which consists of a microstrip patch with truncated corners and a cross slot, is placed on a foam substrate (εr = 1.06) above a ground plane and is fed through vias through ground plane holes that extend from the quadrature 3 dB branch line hybrid coupler placed below the ground plane. This helps to separate feed network radiation, from the patch antenna and keeping the CP purity. The prototype antenna was fabricated with a total size of 225 × 250 × 12.8 mm3 which shows a measured impedance matching band of 840–1150MHz (31.2%) as well as measured rotating linear based circularly polarized radiation patterns. The simulated and measured 3 dB Axial Ratio (AR) bandwidth is better than 23% from 840–1050 MHz meeting and exceeding the target worldwide RFID UHF band.", "title": "" }, { "docid": "245c02139f875fac756dc17d1a2fc6c2", "text": "This paper tries to answer two questions. First, how to infer real-time air quality of any arbitrary location given environmental data and historical air quality data from very sparse monitoring locations. Second, if one needs to establish few new monitoring stations to improve the inference quality, how to determine the best locations for such purpose? The problems are challenging since for most of the locations (>99%) in a city we do not have any air quality data to train a model from. We design a semi-supervised inference model utilizing existing monitoring data together with heterogeneous city dynamics, including meteorology, human mobility, structure of road networks, and point of interests (POIs). We also propose an entropy-minimization model to suggest the best locations to establish new monitoring stations. We evaluate the proposed approach using Beijing air quality data, resulting in clear advantages over a series of state-of-the-art and commonly used methods.", "title": "" }, { "docid": "989a16f498eaaa62d5578cc1bcc8bc04", "text": "UML activity diagram is widely used to describe the behavior of the software system. Unfortunately, there is still no practical tool to verify the UML diagrams automatically. This paper proposes an alternative to translate UML activity diagram into a colored petri nets with inscription. The model translation rules are proposed to guide the automatic translation of the activity diagram with atomic action into a CPN model. Moreover, the relevant basic arc inscriptions are generated without manual elaboration. The resulting CPN with inscription is correctly verified as expected.", "title": "" }, { "docid": "d394d5d1872bbb6a38c28ecdc0e24f06", "text": "An ever increasing number of configuration parameters are provided to system users. But many users have used one configuration setting across different workloads, leaving untapped the performance potential of systems. A good configuration setting can greatly improve the performance of a deployed system under certain workloads. 
But with tens or hundreds of parameters, it becomes a highly costly task to decide which configuration setting leads to the best performance. While such a task requires strong expertise in both the system and the application, users commonly lack such expertise.\n To help users tap the performance potential of systems, we present BestConfig, a system for automatically finding a best configuration setting within a resource limit for a deployed system under a given application workload. BestConfig is designed with an extensible architecture to automate the configuration tuning for general systems. To tune system configurations within a resource limit, we propose the divide-and-diverge sampling method and the recursive bound-and-search algorithm. BestConfig can improve the throughput of Tomcat by 75%, that of Cassandra by 63%, that of MySQL by 430%, and reduce the running time of Hive join job by about 50% and that of Spark join job by about 80%, solely by configuration adjustment.", "title": "" }, { "docid": "72ddcb7a55918a328576a811a89d245b", "text": "Among all new emerging RNA species, microRNAs (miRNAs) have attracted the interest of the scientific community due to their implications as biomarkers of prognostic value, disease progression, or diagnosis, because of defining features such as robust association with the disease, or stable presence in easily accessible human biofluids. This field of research was established twenty years ago, and the development has been considerable. The regulatory nature of miRNAs makes them great candidates for the treatment of infectious diseases, and a successful example in the field is currently being translated to clinical practice. This review will present a general outline of miRNA molecules, as well as successful stories of translational significance which are getting us closer from the basic bench studies into clinical practice.", "title": "" }, { "docid": "2d8f92f752bd1b4756e991a1f7e70926", "text": "We present a new method to auto-adjust camera exposure for outdoor robotics. In outdoor environments, scene dynamic range may be wider than the dynamic range of the cameras due to sunlight and skylight. This can result in failures of vision-based algorithms because important image features are missing due to under-/over-saturation. To solve the problem, we adjust camera exposure to maximize image features in the gradient domain. By exploiting the gradient domain, our method naturally determines the proper exposure needed to capture important image features in a manner that is robust against illumination conditions. The proposed method is implemented using an off-the-shelf machine vision camera and is evaluated using outdoor robotics applications. Experimental results demonstrate the effectiveness of our method, which improves the performance of robot vision algorithms.", "title": "" }, { "docid": "fedcb2bd51b9fd147681ae23e03c7336", "text": "Epidemiological studies have revealed the important role that foodstuffs of vegetable origin have to play in the prevention of numerous illnesses. The natural antioxidants present in such foodstuffs, among which the flavonoids are widely present, may be responsible for such an activity. Flavonoids are compounds that are low in molecular weight and widely distributed throughout the vegetable kingdom. 
They may be of great utility in states of acute or chronic diarrhoea through the inhibition of intestinal secretion and motility, and may also be beneficial in the reduction of chronic inflammatory damage in the intestine, by affording protection against oxidative stress and by preserving mucosal function. For this reason, the use of these agents is recommended in the treatment of inflammatory bowel disease, in which various factors are involved in extreme immunological reactions, which lead to chronic intestinal inflammation.", "title": "" }, { "docid": "3630c575bf7b5250930c7c54d8a1c6d0", "text": "The RCSB Protein Data Bank (RCSB PDB, http://www.rcsb.org) provides access to 3D structures of biological macromolecules and is one of the leading resources in biology and biomedicine worldwide. Our efforts over the past 2 years focused on enabling a deeper understanding of structural biology and providing new structural views of biology that support both basic and applied research and education. Herein, we describe recently introduced data annotations including integration with external biological resources, such as gene and drug databases, new visualization tools and improved support for the mobile web. We also describe access to data files, web services and open access software components to enable software developers to more effectively mine the PDB archive and related annotations. Our efforts are aimed at expanding the role of 3D structure in understanding biology and medicine.", "title": "" }, { "docid": "53598a996f31476b32871cf99f6b84f0", "text": "The CL-SciSumm 2016 Shared Task is the first medium-scale shared task on scientific document summarization in the computational linguistics (CL) domain. The task built off of the experience and training data set created in its namesake pilot task, which was conducted in 2014 by the same organizing committee. The track included three tasks involving: (1A) identifying relationships between citing documents and the referred document, (1B) classifying the discourse facets, and (2) generating the abstractive summary. The dataset comprised 30 annotated sets of citing and reference papers from the open access research papers in the CL domain. This overview paper describes the participation and the official results of the second CL-SciSumm Shared Task, organized as a part of the Joint Workshop on Bibliometric-enhanced Information Retrieval and Natural Language Processing for Digital Libraries (BIRNDL 2016), held in New Jersey, USA in June, 2016. The annotated dataset used for this shared task and the scripts used for evaluation can be accessed and used by the community at: https://github.com/WING-NUS/scisumm-corpus.", "title": "" }, { "docid": "daf2c30e059694007c2ba84cab916e07", "text": "The field of multi-agent systems (MAS) is an active area of research within artificial intelligence, with an increasingly important impact in industrial and other real-world applications. In a MAS, autonomous agents interact to pursue personal interests and/or to achieve common objectives. Distributed Constraint Optimization Problems (DCOPs) have emerged as a prominent agent model to govern the agents’ autonomous behavior, where both algorithms and communication models are driven by the structure of the specific problem. During the last decade, several extensions to the DCOP model have been proposed to enable support of MAS in complex, real-time, and uncertain environments. 
This survey provides an overview of the DCOP model, offering a classification of its multiple extensions and addressing both resolution methods and applications that find a natural mapping within each class of DCOPs. The proposed classification suggests several future perspectives for DCOP extensions, and identifies challenges in the design of efficient resolution algorithms, possibly through the adaptation of strategies from different areas.", "title": "" }, { "docid": "7b8fc21d27c9eb7c8e1df46eec7d6b6d", "text": "This paper examines two methods - magnet shifting and optimizing the magnet pole arc - for reducing cogging torque in permanent magnet machines. The methods were applied to existing machine designs and their performance was calculated using finite-element analysis (FEA). Prototypes of the machine designs were constructed and experimental results obtained. It is shown that the FEA predicted the cogging torque to be nearly eliminated using the two methods. However, there was some residual cogging in the prototypes due to manufacturing difficulties. In both methods, the back electromotive force was improved by reducing harmonics while preserving the magnitude.", "title": "" } ]
scidocsrr
654b317f6f6a4ed8f2ab415c90d71dac
Deep Compositional Cross-modal Learning to Rank via Local-Global Alignment
[ { "docid": "f2603a583b63c1c8f350b3ddabe16642", "text": "We consider the problem of modeling annotated data---data with multiple types where the instance of one type (such as a caption) serves as a description of the other type (such as an image). We describe three hierarchical probabilistic mixture models which aim to describe such data, culminating in correspondence latent Dirichlet allocation, a latent variable model that is effective at modeling the joint distribution of both types and the conditional distribution of the annotation given the primary type. We conduct experiments on the Corel database of images and captions, assessing performance in terms of held-out likelihood, automatic annotation, and text-based image retrieval.", "title": "" } ]
[ { "docid": "4aaea3737b3331f3e016018367c3040c", "text": "BACKGROUND\nPedestrian-friendly neighborhoods with proximal destinations and services encourage walking and decrease car dependence, thereby contributing to more active and healthier communities. Proximity to key destinations and services is an important aspect of the urban design decision making process, particularly in areas adopting a transit-oriented development (TOD) approach to urban planning, whereby densification occurs within walking distance of transit nodes. Modeling destination access within neighborhoods has been limited to circular catchment buffers or more sophisticated network-buffers generated using geoprocessing routines within geographical information systems (GIS). Both circular and network-buffer catchment methods are problematic. Circular catchment models do not account for street networks, thus do not allow exploratory 'what-if' scenario modeling; and network-buffering functionality typically exists within proprietary GIS software, which can be costly and requires a high level of expertise to operate.\n\n\nMETHODS\nThis study sought to overcome these limitations by developing an open-source simple agent-based walkable catchment tool that can be used by researchers, urban designers, planners, and policy makers to test scenarios for improving neighborhood walkable catchments. A simplified version of an agent-based model was ported to a vector-based open source GIS web tool using data derived from the Australian Urban Research Infrastructure Network (AURIN). The tool was developed and tested with end-user stakeholder working group input.\n\n\nRESULTS\nThe resulting model has proven to be effective and flexible, allowing stakeholders to assess and optimize the walkability of neighborhood catchments around actual or potential nodes of interest (e.g., schools, public transport stops). Users can derive a range of metrics to compare different scenarios modeled. These include: catchment area versus circular buffer ratios; mean number of streets crossed; and modeling of different walking speeds and wait time at intersections.\n\n\nCONCLUSIONS\nThe tool has the capacity to influence planning and public health advocacy and practice, and by using open-access source software, it is available for use locally and internationally. There is also scope to extend this version of the tool from a simple to a complex model, which includes agents (i.e., simulated pedestrians) 'learning' and incorporating other environmental attributes that enhance walkability (e.g., residential density, mixed land use, traffic volume).", "title": "" }, { "docid": "36cd44e476c59791acf37c7570232cfb", "text": "In this paper, we show that it is feasible for a mobile phone to be used as an SOS beacon in an aerial search and rescue operation. We show with various experiments that we can reliably detect WiFi-enabled mobile phones from the air at distances up to 200 m. By using a custom mobile application that triggers WiFi scanning with the display off, we can simultaneously extend battery life and increase WiFi scanning frequency, compared to keeping the phone in the default scanning mode. 
Even if an application is not installed or used, our measurement study suggests that it may be possible to detect mobile devices from their background WiFi emissions alone.", "title": "" }, { "docid": "82be3cafe24185b1f3c58199031e41ef", "text": "UNLABELLED\nFamily-based therapy (FBT) is regarded as best practice for the treatment of eating disorders in children and adolescents. In FBT, parents play a vital role in bringing their child or adolescent to health; however, a significant minority of families do not respond to this treatment. This paper introduces a new model whereby FBT is enhanced by integrating emotion-focused therapy (EFT) principles and techniques with the aims of helping parents to support their child's refeeding and interruption of symptoms. Parents are also supported to become their child's 'emotion coach'; and to process any emotional 'blocks' that may interfere with their ability to take charge of recovery. A parent testimonial is presented to illustrate the integration of the theory and techniques of EFT in the FBT model. EFFT (Emotion-Focused Family Therapy) is a promising model of therapy for those families who require a more intense treatment to bring about recovery of an eating disorder.\n\n\nKEY PRACTITIONER MESSAGE\nMore intense therapeutic models exist for treatment-resistant eating disorders in children and adolescents. Emotion is a powerful healing tool in families struggling with an eating disorder. Working with parent's emotions and emotional reactions to their child's struggles has the potential to improve child outcomes.", "title": "" }, { "docid": "adc310c02471d8be579b3bfd32c33225", "text": "In this work, we put forward the notion of Worry-Free Encryption. This allows Alice to encrypt confidential information under Bob's public key and send it to him, without having to worry about whether Bob has the authority to actually access this information. This is done by encrypting the message under a hidden access policy that only allows Bob to decrypt if his credentials satisfy the policy. Our notion can be seen as a functional encryption scheme but in a public-key setting. As such, we are able to insist that even if the credential authority is corrupted, it should not be able to compromise the security of any honest user.\n We put forward the notion of Worry-Free Encryption and show how to achieve it for any polynomial-time computable policy, under only the assumption that IND-CPA public-key encryption schemes exist. Furthermore, we construct CCA-secure Worry-Free Encryption, efficiently in the random oracle model, and generally (but inefficiently) using simulation-sound non-interactive zero-knowledge proofs.", "title": "" }, { "docid": "be89ea7764b6a22ce518bac03a8c7540", "text": "In remote, rugged or sensitive environments ground based mapping for condition assessment of species is both time consuming and potentially destructive. The application of photogrammetric methods to generate multispectral imagery and surface models based on UAV imagery at appropriate temporal and spatial resolutions is described. This paper describes a novel method to combine processing of NIR and visible image sets to produce multiband orthoimages and DEM models from UAV imagery with traditional image location and orientation uncertainties. This work extends the capabilities of recently developed commercial software (Pix4UAV from Pix4D) to show that image sets of different modalities (visible and NIR) can be automatically combined to generate a 4 band orthoimage. 
Reconstruction initially uses all imagery sets (NIR and visible) to ensure all images are in the same reference frame such that a 4-band orthoimage can be created. We analyse the accuracy of this automatic process by using ground control points and an evaluation on the matching performance between images of different modalities is shown. By combining sub-decimetre multispectral imagery with high spatial resolution surface models and ground based observation it is possible to generate detailed maps of vegetation assemblages at the species level. Potential uses with other conservation monitoring are discussed.", "title": "" }, { "docid": "2915218bc86d049d6b8e3a844a9768fd", "text": "Power and energy systems are on the verge of a profound change where Smart Grid solutions will enhance their efficiency and flexibility. Advanced ICT and control systems are key elements of the Smart Grid to enable efficient integration of a high amount of renewable energy resources which in turn are seen as key elements of the future energy system. The corresponding distribution grids have to become more flexible and adaptable as the current ones in order to cope with the upcoming high share of energy from distributed renewable sources. The complexity of Smart Grids requires to consider and imply many components when a new application is designed. However, a holistic ICT-based approach for modelling, designing and validating Smart Grid developments is missing today. The goal of this paper therefore is to discuss an advanced design approach and the corresponding information model, covering system, application, control and communication aspects of Smart Grids.", "title": "" }, { "docid": "b549ed594246ee9251488d73b8bf9b88", "text": "Web classification is used in many security devices for preventing users to access selected web sites that are not allowed by the current security policy, as well for improving web search and for implementing contextual advertising. There are many commercial web classification services available on the market and a few publicly available web directory services. Unfortunately they mostly focus on English-speaking web sites, making them unsuitable for other languages in terms of classification reliability and coverage. This paper covers the design and implementation of a web-based classification tool for TLDs (Top Level Domain). Each domain is classified by analysing the main domain web site, and classifying it in categories according to its content. The tool has been successfully validated by classifying all the registered it. Internet domains, whose results are presented in this paper.", "title": "" }, { "docid": "dc1c602709691d96edea1e64c4afa114", "text": "The authors propose an integration of person-centered therapy, with its focus on the here and now of client awareness of self, and solution-focused therapy, with its future-oriented techniques that also raise awareness of client potentials. Although the two theories hold different assumptions regarding the therapist's role in facilitating client change, it is suggested that solution-focused techniques are often compatible for use within a person-centered approach. Further, solution-focused activities may facilitate the journey of becoming self-aware within the person-centered tradition. This article reviews the two theories, clarifying the similarities and differences. To illustrate the potential integration of the approaches, several types of solution-focused strategies are offered through a clinical example. 
(PsycINFO Database Record (c) 2011 APA, all rights reserved).", "title": "" }, { "docid": "e21878a1409cf7cf031f85c6dd8d65fa", "text": "Human CYP1A2 is one of the major CYPs in human liver and metabolizes a number of clinical drugs (e.g., clozapine, tacrine, tizanidine, and theophylline; n > 110), a number of procarcinogens (e.g., benzo[a]pyrene and aromatic amines), and several important endogenous compounds (e.g., steroids). CYP1A2 is subject to reversible and/or irreversible inhibition by a number of drugs, natural substances, and other compounds. The CYP1A gene cluster has been mapped on to chromosome 15q24.1, with close link between CYP1A1 and 1A2 sharing a common 5'-flanking region. The human CYP1A2 gene spans almost 7.8 kb comprising seven exons and six introns and codes a 515-residue protein with a molecular mass of 58,294 Da. The recently resolved CYP1A2 structure has a relatively compact, planar active site cavity that is highly adapted for the size and shape of its substrates. The architecture of the active site of 1A2 is characterized by multiple residues on helices F and I that constitutes two parallel substrate binding platforms on either side of the cavity. A large interindividual variability in the expression and activity of CYP1A2 has been observed, which is largely caused by genetic, epigenetic and environmental factors (e.g., smoking). CYP1A2 is primarily regulated by the aromatic hydrocarbon receptor (AhR) and CYP1A2 is induced through AhR-mediated transactivation following ligand binding and nuclear translocation. Induction or inhibition of CYP1A2 may provide partial explanation for some clinical drug interactions. To date, more than 15 variant alleles and a series of subvariants of the CYP1A2 gene have been identified and some of them have been associated with altered drug clearance and response and disease susceptibility. Further studies are warranted to explore the clinical and toxicological significance of altered CYP1A2 expression and activity caused by genetic, epigenetic, and environmental factors.", "title": "" }, { "docid": "c24bd4156e65d57eda0add458304988c", "text": "Graphene is enabling a plethora of applications in a wide range of fields due to its unique electrical, mechanical, and optical properties. Among them, graphene-based plasmonic miniaturized antennas (or shortly named, graphennas) are garnering growing interest in the field of communications. In light of their reduced size, in the micrometric range, and an expected radiation frequency of a few terahertz, graphennas offer means for the implementation of ultra-short-range wireless communications. Motivated by their high radiation frequency and potentially wideband nature, this paper presents a methodology for the time-domain characterization and evaluation of graphennas. The proposed framework is highly vertical, as it aims to build a bridge between technological aspects, antenna design, and communications. Using this approach, qualitative and quantitative analyses of a particular case of graphenna are carried out as a function of two critical design parameters, namely, chemical potential and carrier mobility. The results are then compared to the performance of equivalent metallic antennas. 
Finally, the suitability of graphennas for ultra-short-range communications is briefly discussed.", "title": "" }, { "docid": "fa0f02cde08a3cee4b691788815cb757", "text": "Control strategies for these contaminants will require a better understanding of how they move around the globe.", "title": "" }, { "docid": "e39a7208e32c23164601ec608362de53", "text": "We address the problem of describing people based on fine-grained clothing attributes. This is an important problem for many practical applications, such as identifying target suspects or finding missing people based on detailed clothing descriptions in surveillance videos or consumer photos. We approach this problem by first mining clothing images with fine-grained attribute labels from online shopping stores. A large-scale dataset is built with about one million images and fine-detailed attribute sub-categories, such as various shades of color (e.g., watermelon red, rosy red, purplish red), clothing types (e.g., down jacket, denim jacket), and patterns (e.g., thin horizontal stripes, houndstooth). As these images are taken in ideal pose/lighting/background conditions, it is unreliable to directly use them as training data for attribute prediction in the domain of unconstrained images captured, for example, by mobile phones or surveillance cameras. In order to bridge this gap, we propose a novel double-path deep domain adaptation network to model the data from the two domains jointly. Several alignment cost layers placed inbetween the two columns ensure the consistency of the two domain features and the feasibility to predict unseen attribute categories in one of the domains. Finally, to achieve a working system with automatic human body alignment, we trained an enhanced RCNN-based detector to localize human bodies in images. Our extensive experimental evaluation demonstrates the effectiveness of the proposed approach for describing people based on fine-grained clothing attributes.", "title": "" }, { "docid": "605125a6801bd9aa190f177ee4f0cb1f", "text": "One of the challenges in bio-computing is to enable the efficient use and inter-operation of a wide variety of rapidly-evolving computational methods to simulate, analyze, and understand the complex properties and interactions of molecular systems. In our laboratory we investigates several areas, including protein-ligand docking, protein-protein docking, and complex molecular assemblies. Over the years we have developed a number of computational tools such as molecular surfaces, phenomenological potentials, various docking and visualization programs which we use in conjunction with programs developed by others. The number of programs available to compute molecular properties and/or simulate molecular interactions (e.g., molecular dynamics, conformational analysis, quantum mechanics, distance geometry, docking methods, ab-initio methods) is large and growing rapidly. Moreover, these programs come in many flavors and variations, using different force fields, search techniques, algorithmic details (e.g., continuous space vs. discrete, Cartesian vs. torsional). Each variation presents its own characteristic set of advantages and limitations. 
These programs also tend to evolve rapidly and are usually not written as components, making it hard to get them to work together.", "title": "" }, { "docid": "90c6cf2fd66683843a8dd549676727d5", "text": "Despite great progress in neuroscience, there are still fundamental unanswered questions about the brain, including the origin of subjective experience and consciousness. Some answers might rely on new physical mechanisms. Given that biophotons have been discovered in the brain, it is interesting to explore if neurons use photonic communication in addition to the well-studied electro-chemical signals. Such photonic communication in the brain would require waveguides. Here we review recent work (S. Kumar, K. Boone, J. Tuszynski, P. Barclay, and C. Simon, Scientific Reports 6, 36508 (2016)) suggesting that myelinated axons could serve as photonic waveguides. The light transmission in the myelinated axon was modeled, taking into account its realistic imperfections, and experiments were proposed both in vivo and in vitro to test this hypothesis. Potential implications for quantum biology are discussed.", "title": "" }, { "docid": "eddcf41fe566b65540d147171ce50002", "text": "This paper addresses the problem of virtual pedestrian autonomous navigation for crowd simulation. It describes a method for solving interactions between pedestrians and avoiding inter-collisions. Our approach is agent-based and predictive: each agent perceives surrounding agents and extrapolates their trajectory in order to react to potential collisions. We aim at obtaining realistic results, thus the proposed model is calibrated from experimental motion capture data. Our method is shown to be valid and solves major drawbacks compared to previous approaches such as oscillations due to a lack of anticipation. We first describe the mathematical representation used in our model, we then detail its implementation, and finally, its calibration and validation from real data.", "title": "" }, { "docid": "bdcd0cad7a2abcb482b1a0755a2e7af4", "text": "We present a novel attribute learning framework named Hypergraph-based Attribute Predictor (HAP). In HAP, a hypergraph is leveraged to depict the attribute relations in the data. Then the attribute prediction problem is casted as a regularized hypergraph cut problem, in which a collection of attribute projections is jointly learnt from the feature space to a hypergraph embedding space aligned with the attributes. The learned projections directly act as attribute classifiers (linear and kernelized). This formulation leads to a very efficient approach. By considering our model as a multi-graph cut task, our framework can flexibly incorporate other available information, in particular class label. We apply our approach to attribute prediction, Zero-shot and N-shot learning tasks. The results on AWA, USAA and CUB databases demonstrate the value of our methods in comparison with the state-of-the-art approaches.", "title": "" }, { "docid": "89e97c0c62b054664ecd2542329e4540", "text": "ion from the underlying big data technologies is needed to enable ease of use for data scientists, and for business users. Many of the techniques required for real-time, prescriptive analytics, such as predictive modelling, optimization, and simulation, are data and compute intensive. Combined with big data these require distributed storage and parallel, or distributed computing. At the same time many of the machine learning and data mining algorithms are not straightforward to parallelize. 
A recent survey (Paradigm 4 2014) found that “although 49 % of the respondent data scientists could not fit their data into relational databases anymore, only 48 % have used Hadoop or Spark—and of those 76 % said they could not work effectively due to platform issues”. This is an indicator that big data computing is too complex to use without sophisticated computer science know-how. One direction of advancement is for abstractions and high-level procedures to be developed that hide the complexities of distributed computing and machine learning from data scientists. The other direction of course will be more skilled data scientists, who are literate in distributed computing, or distributed computing experts becoming more literate in data science and statistics. Advances are needed for the following technologies: • Abstraction is a common tool in computer science. Each technology at first is cumbersome. Abstraction manages complexity so that the user (e.g., 13 Big Data in the Energy and Transport Sectors 241", "title": "" }, { "docid": "257f00fc5a4b2a0addbd7e9cc2bf6fec", "text": "Security experts have demonstrated numerous risks imposed by Internet of Things (IoT) devices on organizations. Due to the widespread adoption of such devices, their diversity, standardization obstacles, and inherent mobility, organizations require an intelligent mechanism capable of automatically detecting suspicious IoT devices connected to their networks. In particular, devices not included in a white list of trustworthy IoT device types (allowed to be used within the organizational premises) should be detected. In this research, Random Forest, a supervised machine learning algorithm, was applied to features extracted from network traffic data with the aim of accurately identifying IoT device types from the white list. To train and evaluate multi-class classifiers, we collected and manually labeled network traffic data from 17 distinct IoT devices, representing nine types of IoT devices. Based on the classification of 20 consecutive sessions and the use of majority rule, IoT device types that are not on the white list were correctly detected as unknown in 96% of test cases (on average), and white listed device types were correctly classified by their actual types in 99% of cases. Some IoT device types were identified quicker than others (e.g., sockets and thermostats were successfully detected within five TCP sessions of connecting to the network). Perfect detection of unauthorized IoT device types was achieved upon analyzing 110 consecutive sessions; perfect classification of white listed types required 346 consecutive sessions, 110 of which resulted in 99.49% accuracy. Further experiments demonstrated the successful applicability of classifiers trained in one location and tested on another. In addition, a discussion is provided regarding the resilience of our machine learning-based IoT white listing method to adversarial attacks.", "title": "" }, { "docid": "7a9a7b888b9e3c2b82e6c089d05e2803", "text": "Background:\nBullous pemphigoid (BP) is a chronic, autoimmune blistering skin disease that affects patients' daily life and psychosocial well-being.\n\n\nObjective:\nThe aim of the study was to evaluate the quality of life, anxiety, depression and loneliness in BP patients.\n\n\nMethods:\nFifty-seven BP patients and fifty-seven healthy controls were recruited for the study. The quality of life of each patient was assessed using the Dermatology Life Quality Index (DLQI) scale. 
Moreover, they were evaluated for anxiety and depression according to the Hospital Anxiety Depression Scale (HADS-scale), while loneliness was measured through the Loneliness Scale-Version 3 (UCLA) scale.\n\n\nResults:\nThe mean DLQI score was 9.45±3.34. Statistically significant differences on the HADS total scale and in HADS-depression subscale (p=0.015 and p=0.002, respectively) were documented. No statistically significant difference was found between the two groups on the HADS-anxiety subscale. Furthermore, significantly higher scores were recorded on the UCLA Scale compared with healthy volunteers (p=0.003).\n\n\nConclusion:\nBP had a significant impact on quality of life and the psychological status of patients, probably due to the appearance of unattractive lesions on the skin, functional problems and disease chronicity.", "title": "" }, { "docid": "0ec7538bef6a3ad982b8935f6124127d", "text": "New technology has been seen as a way for many businesses in the tourism industry to stay competitive and enhance their marketing campaign in various ways. AR has evolved as the buzzword of modern information technology and is gaining increasing attention in the media as well as through a variety of use cases. This trend is highly fostered across mobile applications as well as the hype of wearable computing triggered by Google’s Glass project to be launched in 2014. However, although research on AR has been conducted in various fields including the Urban Tourism industry, the majority of studies focus on technical aspects of AR, while others are tailored to specific applications. Therefore, this paper aims to examine the current implementation of AR in the Urban Tourism context and identifies areas of research and development that is required to guide the early stages of AR implementation in a purposeful way to enhance the tourist experience. The paper provides an overview of AR and examines the impacts AR has made on the economy. Hence, AR applications in Urban Tourism are identified and benefits of AR are discussed. Please cite this article as: Jung, T. and Han, D. (2014). Augmented Reality (AR) in Urban Heritage Tourism. e-Review of Tourism Research. (ISSN: 1941-5842)", "title": "" } ]
scidocsrr
6f3e9e963475ed7ba90d1ede096a8d17
The Long-Term Benefits of Positive Self-Presentation via Profile Pictures, Number of Friends and the Initiation of Relationships on Facebook for Adolescents’ Self-Esteem and the Initiation of Offline Relationships
[ { "docid": "8f978ac84eea44a593e9f18a4314342c", "text": "There is clear evidence that interpersonal social support impacts stress levels and, in turn, degree of physical illness and psychological well-being. This study examines whether mediated social networks serve the same palliative function. A survey of 401 undergraduate Facebook users revealed that, as predicted, number of Facebook friends associated with stronger perceptions of social support, which in turn associated with reduced stress, and in turn less physical illness and greater well-being. This effect was minimized when interpersonal network size was taken into consideration. However, for those who have experienced many objective life stressors, the number of Facebook friends emerged as the stronger predictor of perceived social support. The \"more-friends-the-better\" heuristic is proposed as the most likely explanation for these findings.", "title": "" }, { "docid": "69982ee7465c4e2ab8a2bfc72a8bbb89", "text": "This study examines if Facebook, one of the most popular social network sites among college students in the U.S., is related to attitudes and behaviors that enhance individuals’ social capital. Using data from a random web survey of college students across Texas (n = 2, 603), we find positive relationships between intensity of Facebook use and students’ life satisfaction, social trust, civic engagement, and political participation. While these findings should ease the concerns of those who fear that Facebook has mostly negative effects on young adults, the positive and significant associations between Facebook variables and social capital were small, suggesting that online social networks are not the most effective solution for youth disengagement from civic duty and democracy.", "title": "" }, { "docid": "a671673f330bd2b1ec14aaca9f75981a", "text": "The aim of this study was to contrast the validity of two opposing explanatory hypotheses about the effect of online communication on adolescents' well-being. The displacement hypothesis predicts that online communication reduces adolescents' well-being because it displaces time spent with existing friends, thereby reducing the quality of these friendships. In contrast, the stimulation hypothesis states that online communication stimulates well-being via its positive effect on time spent with existing friends and the quality of these friendships. We conducted an online survey among 1,210 Dutch teenagers between 10 and 17 years of age. Using mediation analyses, we found support for the stimulation hypothesis but not for the displacement hypothesis. We also found a moderating effect of type of online communication on adolescents' well-being: Instant messaging, which was mostly used to communicate with existing friends, positively predicted well-being via the mediating variables (a) time spent with existing friends and (b) the quality of these friendships. Chat in a public chatroom, which was relatively often used to talk with strangers, had no effect on adolescents' wellbeing via the mediating variables.", "title": "" }, { "docid": "b3a1aba2e9a3cfc8897488bb058f3358", "text": "The social networking site, Facebook, has gained an enormous amount of popularity. In this article, we review the literature on the factors contributing to Facebook use. We propose a model suggesting that Facebook use is motivated by two primary needs: (1) The need to belong and (2) the need for self-presentation. 
Demographic and cultural factors contribute to the need to belong, whereas neuroticism, narcissism, shyness, self-esteem and self-worth contribute to the need for self-presentation. Areas for future research are discussed.", "title": "" } ]
[ { "docid": "6e7d5e2548e12d11afd3389b6d677a0f", "text": "Internet marketing is a field that is continuing to grow, and the online auction concept may be defining a totally new and unique distribution alternative. Very few studies have examined auction sellers and their internet marketing strategies. This research examines the internet auction phenomenon as it relates to the marketing mix of online auction sellers. The data in this study indicate that, whilst there is great diversity among businesses that utilise online auctions, distinct cost leadership and differentiation marketing strategies are both evident. These two approaches are further distinguished in terms of the internet usage strategies employed by each group.", "title": "" }, { "docid": "8a8edb63c041a01cbb887cd526b97eb0", "text": "We propose BrainNetCNN, a convolutional neural network (CNN) framework to predict clinical neurodevelopmental outcomes from brain networks. In contrast to the spatially local convolutions done in traditional image-based CNNs, our BrainNetCNN is composed of novel edge-to-edge, edge-to-node and node-to-graph convolutional filters that leverage the topological locality of structural brain networks. We apply the BrainNetCNN framework to predict cognitive and motor developmental outcome scores from structural brain networks of infants born preterm. Diffusion tensor images (DTI) of preterm infants, acquired between 27 and 46 weeks gestational age, were used to construct a dataset of structural brain connectivity networks. We first demonstrate the predictive capabilities of BrainNetCNN on synthetic phantom networks with simulated injury patterns and added noise. BrainNetCNN outperforms a fully connected neural-network with the same number of model parameters on both phantoms with focal and diffuse injury patterns. We then apply our method to the task of joint prediction of Bayley-III cognitive and motor scores, assessed at 18 months of age, adjusted for prematurity. We show that our BrainNetCNN framework outperforms a variety of other methods on the same data. Furthermore, BrainNetCNN is able to identify an infant's postmenstrual age to within about 2 weeks. Finally, we explore the high-level features learned by BrainNetCNN by visualizing the importance of each connection in the brain with respect to predicting the outcome scores. These findings are then discussed in the context of the anatomy and function of the developing preterm infant brain.", "title": "" }, { "docid": "71333997a4f9f38de0b53697d7b7cff1", "text": "Environmental sustainability of a supply chain depends on the purchasing strategy of the supply chain members. Most of the earlier models have focused on cost, quality, lead time, etc. issues but not given enough importance to carbon emission for supplier evaluation. Recently, there is a growing pressure on supply chain members for reducing the carbon emission of their supply chain. This study presents an integrated approach for selecting the appropriate supplier in the supply chain, addressing the carbon emission issue, using fuzzy-AHP and fuzzy multi-objective linear programming. Fuzzy AHP (FAHP) is applied first for analyzing the weights of the multiple factors. The considered factors are cost, quality rejection percentage, late delivery percentage, green house gas emission and demand. These weights of the multiple factors are used in fuzzy multi-objective linear programming for supplier selection and quota allocation. 
An illustration with a data set from a realistic situation is presented to demonstrate the effectiveness of the proposed model. The proposed approach can handle realistic situation when there is information vagueness related to inputs. 2012 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "045a4622691d1ae85593abccb823b193", "text": "The capability of Corynebacterium glutamicum for glucose-based synthesis of itaconate was explored, which can serve as building block for production of polymers, chemicals, and fuels. C. glutamicum was highly tolerant to itaconate and did not metabolize it. Expression of the Aspergillus terreus CAD1 gene encoding cis-aconitate decarboxylase (CAD) in strain ATCC13032 led to the production of 1.4mM itaconate in the stationary growth phase. Fusion of CAD with the Escherichia coli maltose-binding protein increased its activity and the itaconate titer more than two-fold. Nitrogen-limited growth conditions boosted CAD activity and itaconate titer about 10-fold to values of 1440 mU mg(-1) and 30 mM. Reduction of isocitrate dehydrogenase activity via exchange of the ATG start codon to GTG or TTG resulted in maximal itaconate titers of 60 mM (7.8 g l(-1)), a molar yield of 0.4 mol mol(-1), and a volumetric productivity of 2.1 mmol l(-1) h(-1).", "title": "" }, { "docid": "2efd26fc1e584aa5f70bdf9d24e5c2cd", "text": "Bridging cultures that have often been distant, Julia combines expertise from the diverse fields of computer science and computational science to create a new approach to numerical computing. Julia is designed to be easy and fast and questions notions generally held to be “laws of nature” by practitioners of numerical computing: 1. High-level dynamic programs have to be slow. 2. One must prototype in one language and then rewrite in another language for speed or deployment. 3. There are parts of a system appropriate for the programmer, and other parts that are best left untouched as they have been built by the experts. We introduce the Julia programming language and its design—a dance between specialization and abstraction. Specialization allows for custom treatment. Multiple dispatch, a technique from computer science, picks the right algorithm for the right circumstance. Abstraction, which is what good computation is really about, recognizes what remains the same after differences are stripped away. Abstractions in mathematics are captured as code through another technique from computer science, generic programming. Julia shows that one can achieve machine performance without sacrificing human convenience.", "title": "" }, { "docid": "7bd56fffe892775084dc23d3d9d43484", "text": "Stars form in dense clouds of interstellar gas and dust. The residual dust surrounding a young star scatters and diffuses its light, making the star's \"cocoon\" of dust observable from Earth. The resulting structures, called reflection nebulae, are commonly very colorful in appearance due to wavelength-dependent effects in the scattering and extinction of light. The intricate interplay of scattering and extinction cause the color hues, brightness distributions, and the apparent shapes of such nebulae to vary greatly with viewpoint. We describe an interactive visualization tool for realistically rendering the appearance of arbitrary 3D dust distributions surrounding one or more illuminating stars. Our rendering algorithm is based on the physical models used in astrophysics research. 
The tool can be used to create virtual fly-throughs of reflection nebulae for interactive desktop visualizations, or to produce scientifically accurate animations for educational purposes, e.g., in planetarium shows. The algorithm is also applicable to investigate on-the-fly the visual effects of physical parameter variations, exploiting visualization technology to help gain a deeper and more intuitive understanding of the complex interaction of light and dust in real astrophysical settings.", "title": "" }, { "docid": "9f746a67a960b01c9e33f6cd0fcda450", "text": "Motivated by large-scale multimedia applications we propose to learn mappings from high-dimensional data to binary codes that preserve semantic similarity. Binary codes are well suited to large-scale applications as they are storage efficient and permit exact sub-linear kNN search. The framework is applicable to broad families of mappings, and uses a flexible form of triplet ranking loss. We overcome discontinuous optimization of the discrete mappings by minimizing a piecewise-smooth upper bound on empirical loss, inspired by latent structural SVMs. We develop a new loss-augmented inference algorithm that is quadratic in the code length. We show strong retrieval performance on CIFAR-10 and MNIST, with promising classification results using no more than kNN on the binary codes.", "title": "" }, { "docid": "cec9f586803ffc8dc5868f6950967a1f", "text": "This report aims to summarize the field of technological forecasting (TF), its techniques and applications by considering the following questions: • What are the purposes of TF? • Which techniques are used for TF? • What are the strengths and weaknesses of these techniques / how do we evaluate their quality? • Do we need different TF techniques for different purposes/technologies? We also present a brief analysis of how TF is used in practice. We analyze how corporate decisions, such as investing millions of dollars to a new technology like solar energy, are being made and we explore if funding allocation decisions are based on “objective, repeatable, and quantifiable” decision parameters. Throughout the analysis, we compare the bibliometric and semantic-enabled approach of the MIT/MIST Collaborative research project “Technological Forecasting using Data Mining and Semantics” (TFDMS) with the existing studies / practices of TF and where TFDMS fits in and how it will contribute to the general TF field.", "title": "" }, { "docid": "2a79464b8674b689239f4579043bd525", "text": "In this paper, we extend an attention-based neural machine translation (NMT) model by allowing it to access an entire training set of parallel sentence pairs even after training. The proposed approach consists of two stages. In the first stage– retrieval stage–, an off-the-shelf, black-box search engine is used to retrieve a small subset of sentence pairs from a training set given a source sentence. These pairs are further filtered based on a fuzzy matching score based on edit distance. In the second stage–translation stage–, a novel translation model, called search engine guided NMT (SEG-NMT), seamlessly uses both the source sentence and a set of retrieved sentence pairs to perform the translation. 
Empirical evaluation on three language pairs (En-Fr, En-De, and En-Es) shows that the proposed approach significantly outperforms the baseline approach and the improvement is more significant when more relevant sentence pairs were retrieved.", "title": "" }, { "docid": "c2fc4e65c484486f5612f4006b6df102", "text": "Although flat item category structure where categories are independent in a same level has been well studied to enhance recommendation performance, in many real applications, item category is often organized in hierarchies to reflect the inherent correlations among categories. In this paper, we propose a novel matrix factorization model by exploiting category hierarchy from the perspectives of users and items for effective recommendation. Specifically, a user (an item) can be influenced (characterized) by her preferred categories (the categories it belongs to) in the hierarchy. We incorporate how different categories in the hierarchy co-influence a user and an item. Empirical results show the superiority of our approach against other counterparts.", "title": "" }, { "docid": "103ebae051da74f14561e3fa976273b6", "text": "Data-driven generative modeling has made remarkable progress by leveraging the power of deep neural networks. A reoccurring challenge is how to sample a rich variety of data from the entire target distribution, rather than only from the distribution of the training data. In other words, we would like the generative model to go beyond the observed training samples and learn to also generate “unseen” data. In our work, we present a generative neural network for shapes that is based on a part-based prior, where the key idea is for the network to synthesize shapes by varying both the shape parts and their compositions. Treating a shape not as an unstructured whole, but as a (re-)composable set of deformable parts, adds a combinatorial dimension to the generative process to enrich the diversity of the output, encouraging the generator to venture more into the “unseen”. We show that our part-based model generates richer variety of feasible shapes compared with a baseline generative model. To this end, we introduce two quantitative metrics to evaluate the ingenuity of the generative model and assess how well generated data covers both the training data and unseen data from the same target distribution.", "title": "" }, { "docid": "16338883787b5a1ff4df2bb5e9d4f21a", "text": "The next generations of large-scale data-centers and supercomputers demand optical interconnects to migrate to 400G and beyond. Microring modulators in silicon-photonics VLSI chips are promising devices to meet this demand due to their energy efficiency and compatibility with dense wavelength division multiplexed chip-to-chip optical I/O. Higher order pulse amplitude modulation (PAM) schemes can be exploited to mitigate their fundamental energy–bandwidth tradeoff at the system level for high data rates. In this paper, we propose an optical digital-to-analog converter based on a segmented microring resonator, capable of operating at 20 GS/s with improved linearity over conventional optical multi-level generators that can be used in a variety of applications such as optical arbitrary waveform generators and PAM transmitters. Using this technique, we demonstrate a PAM-4 transmitter that directly converts the digital data into optical levels in a commercially available 45-nm SOI CMOS process. 
We achieved 40-Gb/s PAM-4 transmission at 42-fJ/b modulator and driver energies, and 685-fJ/b total transmitter energy efficiency with an area bandwidth density of 0.67 Tb/s/mm2. The transmitter incorporates a thermal tuning feedback loop to address the thermal and process variations of microrings’ resonance wavelength. This scheme is suitable for system-on-chip applications with a large number of I/O links, such as switches and general-purpose and specialized processors in large-scale computing and storage systems.", "title": "" }, { "docid": "3a3470d13c9c63af1a62ee7bc57a96ef", "text": "Cloud computing is a distributed computing model that still faces problems. New ideas emerge to take advantage of its features and among the research challenges found in the cloud, we can highlight Identity and Access Management. The main problems of the application of access control in the cloud are the necessary flexibility and scalability to support a large number of users and resources in a dynamic and heterogeneous environment, with collaboration and information sharing needs. This paper proposes the use of risk-based dynamic access control for cloud computing. The proposal is presented as an access control model based on an extension of the XACML standard with three new components: the Risk Engine, the Risk Quantification Web Services and the Risk Policies. The risk policies present a method to describe risk metrics and their quantification, using local or remote functions. The risk policies allow users and cloud service providers to define how to handle risk-based access control for their resources, using different quantification and aggregation methods. The model reaches the access decision based on a combination of XACML decisions and risk analysis. A prototype of the model is implemented, showing it has enough expressivity to describe the models of related work. In the experimental results, the prototype takes between 2 and 6 milliseconds to reach access decisions using a risk policy. A discussion on the security aspects of the model is also presented.", "title": "" }, { "docid": "136ed8dc00926ceec6d67b9ab35e8444", "text": "This paper addresses the property requirements of repair materials for high durability performance for concrete structure repair. It is proposed that the high tensile strain capacity of High Performance Fiber Reinforced Cementitious Composites (HPFRCC) makes such materials particularly suitable for repair applications, provided that the fresh properties are also adaptable to those required in placement techniques in typical repair applications. A specific version of HPFRCC, known as Engineered Cementitious Composites (ECC), is described. It is demonstrated that the fresh and hardened properties of ECC meet many of the requirements for durable repair performance. Recent experience in the use of this material in a bridge deck patch repair is highlighted. The origin of this article is a summary of a keynote lecture with the same title given at the Conference on Fiber Composites, High-Performance Concretes and Smart Materials, Chennai, India, Jan., 2004. It is only slightly updated here.", "title": "" }, { "docid": "d2b44c8d6a22eecb3626776a2e5c551c", "text": "Genes and their protein products are essential molecular units of a living organism. The knowledge of their functions is key for the understanding of physiological and pathological biological processes, as well as in the development of new drugs and therapies. 
The association of a gene or protein with its functions, described by controlled terms of biomolecular terminologies or ontologies, is named gene functional annotation. Very many and valuable gene annotations expressed through terminologies and ontologies are available. Nevertheless, they might include some erroneous information, since only a subset of annotations are reviewed by curators. Furthermore, they are incomplete by definition, given the rapidly evolving pace of biomolecular knowledge. In this scenario, computational methods that are able to quicken the annotation curation process and reliably suggest new annotations are very important. Here, we first propose a computational pipeline that uses different semantic and machine learning methods to predict novel ontology-based gene functional annotations; then, we introduce a new semantic prioritization rule to categorize the predicted annotations by their likelihood of being correct. Our tests and validations proved the effectiveness of our pipeline and prioritization of predicted annotations, by selecting as most likely manifold predicted annotations that were later confirmed.", "title": "" }, { "docid": "6519ae37d66b3e5524318adc5070223e", "text": "Powering cellular networks with renewable energy sources via energy harvesting (EH) have recently been proposed as a promising solution for green networking. However, with intermittent and random energy arrivals, it is challenging to provide satisfactory quality of service (QoS) in EH networks. To enjoy the greenness brought by EH while overcoming the instability of the renewable energy sources, hybrid energy supply (HES) networks that are powered by both EH and the electric grid have emerged as a new paradigm for green communications. In this paper, we will propose new design methodologies for HES green cellular networks with the help of Lyapunov optimization techniques. The network service cost, which addresses both the grid energy consumption and achievable QoS, is adopted as the performance metric, and it is optimized via base station assignment and power control (BAPC). Our main contribution is a low-complexity online algorithm to minimize the long-term average network service cost, namely, the Lyapunov optimization-based BAPC (LBAPC) algorithm. One main advantage of this algorithm is that the decisions depend only on the instantaneous side information without requiring distribution information of channels and EH processes. To determine the network operation, we only need to solve a deterministic per-time slot problem, for which an efficient inner-outer optimization algorithm is proposed. Moreover, the proposed algorithm is shown to be asymptotically optimal via rigorous analysis. Finally, sample simulation results are presented to verify the theoretical analysis as well as validate the effectiveness of the proposed algorithm.", "title": "" }, { "docid": "3be195643e5cb658935b20997f7ebdea", "text": "We describe the structure and functionality of the Internet Cache Protocol (ICP) and its implementation in the Squid Web Caching software. ICP is a lightweight message format used for communication among Web caches. Caches exchange ICP queries and replies to gather information to use in selecting the most appropriate location from which to retrieve an object. We present background on the history of ICP, and discuss issues in ICP deployment, efficiency, security, and interaction with other aspects of Web traffic behavior. 
We catalog successes, failures, and lessons learned from using ICP to deploy a global Web cache hierarchy.", "title": "" }, { "docid": "9ad276fd2c5166c12c997fa2b7ec8292", "text": "Recent years have witnessed the rapid proliferation and widespread adoption of a new class of information technologies, commonly known as social media. Researchers often rely on social network analysis (SNA) in attempting to understand these technologies, often without considering how the novel capabilities of social media platforms might affect the underlying theories of SNA, which were developed primarily through studies of offline social networks. This article outlines several key differences between traditional offline social networks and online social media networks by juxtaposing an established typology of social network research with a well-regarded definition of social media platforms that articulates four key features. The results show that at four major points of intersection, social media has considerable theoretical implications for SNA. In exploring these points of intersection, this study outlines a series of theoretically distinctive research questions for SNA in social media contexts. These points of intersection offer considerable opportunities for researchers to investigate the theoretical implications introduced by social media and lay the groundwork for a robust social media agenda potentially spanning multiple disciplines. ***FORTHCOMING AT MIS QUARTERLY, THEORY AND REVIEW***", "title": "" }, { "docid": "e2de8284e14cb3abbd6e3fbcfb5bc091", "text": "In this paper, novel 2 one-dimensional (1D) Haar-like filtering techniques are proposed as a new and low calculation cost feature extraction method suitable for 3D acceleration signals based human activity recognition. Proposed filtering method is a simple difference filter with variable filter parameters. Our method holds a strong adaptability to various classification problems which no previously studied features (mean, standard deviation, etc.) possessed. In our experiment on human activity recognition, the proposed method achieved both the highest recognition accuracy of 93.91% while reducing calculation cost to 21.22% compared to previous method.", "title": "" }, { "docid": "c0a3bb7720bd79d496bcf6281f444411", "text": "Do you dream to create good visualizations for your dataset simply like a Google search? If yes, our visionary systemDeepEye is committed to fulfill this task. Given a dataset and a keyword query, DeepEye understands the query intent, generates and ranks good visualizations. The user can pick the one he likes and do a further faceted search to easily navigate the visualizations. We detail the architecture of DeepEye, key components, as well as research challenges and opportunities.", "title": "" } ]
scidocsrr
a3d4a6a12f2916ce2507956d3101f040
The interactive performance of SLIM: a stateless, thin-client architecture
[ { "docid": "014f1369be6a57fb9f6e2f642b3a4926", "text": "VNC is platform-independent – a VNC viewer on one operating system may connect to a VNC server on the same or any other operating system. There are clients and servers for many GUIbased operating systems and for Java. Multiple clients may connect to a VNC server at the same time. Popular uses for this technology include remote technical support and accessing files on one's work computer from one's home computer, or vice versa.", "title": "" } ]
[ { "docid": "350137bf3c493b23aa6d355df946440f", "text": "Given the increasing popularity of wearable devices, this paper explores the potential to use wearables for steering and driver tracking. Such capability would enable novel classes of mobile safety applications without relying on information or sensors in the vehicle. In particular, we study how wrist-mounted inertial sensors, such as those in smart watches and fitness trackers, can track steering wheel usage and angle. In particular, tracking steering wheel usage and turning angle provide fundamental techniques to improve driving detection, enhance vehicle motion tracking by mobile devices and help identify unsafe driving. The approach relies on motion features that allow distinguishing steering from other confounding hand movements. Once steering wheel usage is detected, it further uses wrist rotation measurements to infer steering wheel turning angles. Our on-road experiments show that the technique is 99% accurate in detecting steering wheel usage and can estimate turning angles with an average error within 3.4 degrees.", "title": "" }, { "docid": "be3d420dee60602b50a5ae5923c86a88", "text": "We introduce the concept of dynamically growing a neural network during training. In particular, an untrainable deep network starts as a trainable shallow network and newly added layers are slowly, organically added during training, thereby increasing the network's depth. This is accomplished by a new layer, which we call DropIn. The DropIn layer starts by passing the output from a previous layer (effectively skipping over the newly added layers), then increasingly including units from the new layers for both feedforward and backpropagation. We show that deep networks, which are untrainable with conventional methods, will converge with DropIn layers interspersed in the architecture. In addition, we demonstrate that DropIn provides regularization during training in an analogous way as dropout. Experiments are described with the MNIST dataset and various expanded LeNet architectures, CIFAR-10 dataset with its architecture expanded from 3 to 11 layers, and on the ImageNet dataset with the AlexNet architecture expanded to 13 layers and the VGG 16-layer architecture.", "title": "" }, { "docid": "0aa0c63a4617bf829753df08c5544791", "text": "The paper discusses the application program interface (API). Most software projects reuse components exposed through APIs. In fact, current-day software development technologies are becoming inseparable from the large APIs they provide. An API is the interface to implemented functionality that developers can access to perform various tasks. APIs support code reuse, provide high-level abstractions that facilitate programming tasks, and help unify the programming experience. A study of obstacles that professional Microsoft developers faced when learning to use APIs uncovered challenges and resulting implications for API users and designers. The article focuses on the obstacles to learning an API. Although learnability is only one dimension of usability, there's a clear relationship between the two, in that difficult-to-use APIs are likely to be difficult to learn as well. Many API usability studies focus on situations where developers are learning to use an API. The author concludes that as APIs keep growing larger, developers will need to learn a proportionally smaller fraction of the whole. 
In such situations, the way to foster more efficient API learning experiences is to include more sophisticated means for developers to identify the information and the resources they need-even for well-designed and documented APIs.", "title": "" }, { "docid": "4fabfd530004921901d09134ebfd0eae", "text": "“Additive Manufacturing Technologies: 3D Printing, Rapid Prototyping, and Direct Digital Manufacturing” is authored by Ian Gibson, David Rosen and Brent Stucker, who collectively possess 60 years’ experience in the field of additive manufacturing (AM). This is the second edition of the book which aims to include current developments and innovations in a rapidly changing field. Its primary aim is to serve as a teaching aid for developing and established curricula, therefore becoming an all-encompassing introductory text for this purpose. It is also noted that researchers may find the text useful as a guide to the ‘state-of-the-art’ and to identify research opportunities. The book is structured to provide justification and information for the use and development of AM by using standardised terminology to conform to standards (American Society for Testing and Materials (ASTM) F42) introduced since the first edition. The basic principles and historical developments for AM are introduced in summary in the first three chapters of the book and this serves as an excellent introduction for the uninitiated. Chapters 4–11 focus on the core technologies of AM individually and, in most cases, in comprehensive detail which gives those interested in the technical application and development of the technologies a solid footing. The remaining chapters provide guidelines and examples for various stages of the process including machine and/or materials selection, design considerations and software limitations, applications and post-processing considerations.", "title": "" }, { "docid": "0bb8e4555509fbd898c01b6fb9ac9279", "text": "The OASIS standard Devices Profile for Web Services (DPWS) enables the use of Web services on smart and resource-constrained devices, which are the cornerstones of the Internet of Things (IoT). DPWS sees a perspective of being able to build service-oriented and event-driven IoT applications on top of these devices with secure Web service capabilities and a seamless integration into existing World Wide Web infrastructure. We introduce DPWSim, a simulation toolkit to support the development of such applications. DPWSim allows developers to prototype, develop, and test IoT applications using the DPWS technology without the presence of physical devices. It also can be used for the collaboration between manufacturers, developers, and designers during the new product development process.", "title": "" }, { "docid": "72f17106ad48b144ccab55b564fece7d", "text": "We present an efficient and robust model matching method which uses a joint shape and texture appearance model to generate a set of region template detectors. The model is fitted to an unseen image in an iterative manner by generating templates using the joint model and the current parameter estimates, correlating the templates with the target image to generate response images and optimising the shape parameters so as to maximise the sum of responses. The appearance model is similar to that used in the AAM [1]. However in our approach the appearance model is used to generate likely feature templates, instead of trying to approximate the image pixels directly. 
We show that when applied to human faces, our Constrained Local Model (CLM) algorithm is more robust and more accurate than the original AAM search method, which relies on the image reconstruction error to update the model parameters. We demonstrate improved localisation accuracy on two publicly available face data sets and improved tracking on a challenging set of in-car face sequences.", "title": "" }, { "docid": "4beb0193ce98da0cfd625da7a033d257", "text": "BACKGROUND\nThere are well-established relations between personality and the heart, as evidenced by associations between negative emotions on the one hand, and coronary heart disease or chronic heart failure on the other. However, there are substantial gaps in our knowledge about relations between the heart and personality in healthy individuals. Here, we investigated whether amplitude patterns of the electrocardiogram (ECG) correlate with neurotisicm, extraversion, agreeableness, warmth, positive emotion, and tender-mindedness as measured with the Neuroticism-Extraversion-Openness (NEO) personality inventory. Specifically, we investigated (a) whether a cardiac amplitude measure that was previously reported to be related to flattened affectivity (referred to as Eκ values) would explain variance of NEO scores, and (b) whether correlations can be found between NEO scores and amplitudes of the ECG.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nNEO scores and rest ECGs were obtained from 425 healthy individuals. Neuroticism and positive emotion significantly differed between individuals with high and low Eκ values. In addition, stepwise cross-validated regressions indicated correlations between ECG amplitudes and (a) agreeableness, as well as (b) positive emotion.\n\n\nCONCLUSIONS/SIGNIFICANCE\nThese results are the first to demonstrate that ECG amplitude patterns provide information about the personality of an individual as measured with NEO personality scales and facets. These findings open new perspectives for a more efficient personality assessment using cardiac measures, as well as for more efficient risk-stratification and pre-clinical diagnosis of individuals at risk for cardiac, affective and psychosomatic disorders.", "title": "" }, { "docid": "6e1c0cd2b1cb993ab9e78f7aac846264", "text": "the content of «technical» realization of three special methods during criminalistic cognition: criminalistic identification, criminalistic diagnostics and criminalistic classification. Criminalistic technics (as a system of knowledge) is a branch of the special part of criminalistic theory describing and explaining regularities of emergence of materially fixed traces during investigation of criminal offences. It’s for finding and examining concrete technical means, knowledge and skills are already worked out and recommended.", "title": "" }, { "docid": "0d6d2413cbaaef5354cf2bcfc06115df", "text": "Bibliometric and “tech mining” studies depend on a crucial foundation—the search strategy used to retrieve relevant research publication records. Database searches for emerging technologies can be problematic in many respects, for example the rapid evolution of terminology, the use of common phraseology, or the extent of “legacy technology” terminology. Searching on such legacy terms may or may not pick up R&D pertaining to the emerging technology of interest. A challenge is to assess the relevance of legacy terminology in building an effective search model. 
Common-usage phraseology additionally confounds certain domains in which broader managerial, public interest, or other considerations are prominent. In contrast, searching for highly technical topics is relatively straightforward. In setting forth to analyze “Big Data,” we confront all three challenges—emerging terminology, common usage phrasing, and intersecting legacy technologies. In response, we have devised a systematic methodology to help identify research relating to Big Data. This methodology uses complementary search approaches, starting with a Boolean search model and subsequently employs contingency term sets to further refine the selection. The four search approaches considered are: (1) core lexical query, (2) expanded lexical query, (3) specialized journal search, and (4) cited reference analysis. Of special note here is the use of a “Hit-Ratio” that helps distinguish Big Data elements from less relevant legacy technology terms. We believe that such a systematic search development positions us to do meaningful analyses of Big Data research patterns, connections, and trajectories. Moreover, we suggest that such a systematic search approach can help formulate more replicable searches with high recall and satisfactory precision for other emerging technology studies.", "title": "" }, { "docid": "b68a728f4e737f293dca0901970b41fe", "text": "With maturity of advanced technologies and urgent requirement for maintaining a healthy environment with reasonable price, China is moving toward a trend of generating electricity from renewable wind resources. How to select a suitable wind farm becomes an important focus for stakeholders. This paper first briefly introduces wind farm and then develops its critical success criteria. A new multi-criteria decision-making (MCDM) model, based on the analytic hierarchy process (AHP) associated with benefits, opportunities, costs and risks (BOCR), is proposed to help select a suitable wind farm project. Multiple factors that affect the success of wind farm operations are analyzed by taking into account experts’ opinions, and a performance ranking of the wind farms is generated. 2008 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "4c3a7002536a825b73607c45a6b36cb4", "text": "In this article we take an empirical cross-country perspective to investigate the robustness and causality of the link between income inequality and crime rates. First, we study the correlation between the Gini index and, respectively, homicide and robbery rates along different dimensions of the data (within and between countries). Second, we examine the inequality-crime link when other potential crime determinants are controlled for. Third, we control for the likely joint endogeneity of income inequality in order to isolate its exogenous impact on homicide and robbery rates. Fourth, we control for the measurement error in crime rates by modelling it as both unobserved country-specific effects and random noise. Lastly, we examine the robustness of the inequality-crime link to alternative measures of inequality. The sample for estimation consists of panels of non-overlapping 5-year averages for 39 countries over 1965-95 in the case of homicides, and 37 countries over 1970-1994 in the case of robberies. We use a variety of statistical techniques, from simple correlations to regression analysis and from static OLS to dynamic GMM estimation. 
We find that crime rates and inequality are positively correlated (within each country and, particularly, between countries), and it appears that this correlation reflects causation from inequality to crime rates, even controlling for other crime determinants. * We are grateful for comments and suggestions from Francois Bourguignon, Dante Contreras, Francisco Ferreira, Edward Glaeser, Sam Peltzman, Debraj Ray, Luis Servén, and an anonymous referee. N. Loayza worked at the research group of the Central Bank of Chile during the preparation of the paper. This study was sponsored by the Latin American Regional Studies Program, The World Bank. The opinions and conclusions expressed here are those of the authors and do not necessarily represent the views of the institutions to which they are affiliated.", "title": "" }, { "docid": "5d76b2578fa2aa05a607ab0a542ab81f", "text": "60 A practical approach to the optimal design of precast, prestressed concrete highway bridge girder systems is presented. The approach aims at standardizing the optimal design of bridge systems, as opposed to standardizing girder sections. Structural system optimization is shown to be more relevant than conventional girder optimization for an arbitrarily chosen structural system. Bridge system optimization is defined as the optimization of both longitudinal and transverse bridge configurations (number of spans, number of girders, girder type, reinforcements and tendon layout). As a result, the preliminary design process is much simplified by using some developed design charts from which selection of the optimum bridge system, number and type of girders, and amounts of prestressed and non-prestressed reinforcements are easily obtained for a given bridge length, width and loading type.", "title": "" }, { "docid": "05eeadabcb4b7599e8bbcee96f0147eb", "text": "Convolutional Neural Network(CNN) becomes one of the most preferred deep learning method because of achieving superior success at solution of important problems of machine learning like pattern recognition, object recognition and classification. With CNN, high performance has been obtained in traffic sign recognition which is important for autonomous vehicles. In this work, two-stage hierarchical CNN structure is proposed. Signs are seperated into 9 main groups at the first stage by using structure similarity index. And then classes of each main group are subclassed with CNNs at the second stage. Performance of the network is measured on 43-classes GTSRB dataset and compared with other methods.", "title": "" }, { "docid": "0df1a15c02c29d9462356641fbe78b43", "text": "Localization is an essential and important research issue in wireless sensor networks (WSNs). Most localization schemes focus on static sensor networks. However, mobile sensors are required in some applications such that the sensed area can be enlarged. As such, a localization scheme designed for mobile sensor networks is necessary. In this paper, we propose a localization scheme to improve the localization accuracy of previous work. In this proposed scheme, the normal nodes without location information can estimate their own locations by gathering the positions of location-aware nodes (anchor nodes) and the one-hop normal nodes whose locations are estimated from the anchor nodes. In addition, we propose a scheme that predicts the moving direction of sensor nodes to increase localization accuracy. 
Simulation results show that the localization error in our proposed scheme is lower than the previous schemes in various mobility models and moving speeds.", "title": "" }, { "docid": "65f2651ec987ece0de560d9ac65e06a8", "text": "This paper describes neural network models that we prepared for the author profiling task of PAN@CLEF 2017. In previous PAN series, statistical models using a machine learning method with a variety of features have shown superior performances in author profiling tasks. We decided to tackle the author profiling task using neural networks. Neural networks have recently shown promising results in NLP tasks. Our models integrate word information and character information with multiple neural network layers. The proposed models have marked joint accuracies of 64–86% in the gender identification and the language variety identification of four languages.", "title": "" }, { "docid": "6c0b700a5c195cdf58175b5253fd2aaa", "text": "In this study, we propose a speaker-dependent WaveNet vocoder, a method of synthesizing speech waveforms with WaveNet, by utilizing acoustic features from existing vocoder as auxiliary features of WaveNet. It is expected that WaveNet can learn a sample-by-sample correspondence between speech waveform and acoustic features. The advantage of the proposed method is that it does not require (1) explicit modeling of excitation signals and (2) various assumptions, which are based on prior knowledge specific to speech. We conducted both subjective and objective evaluation experiments on CMUARCTIC database. From the results of the objective evaluation, it was demonstrated that the proposed method could generate high-quality speech with phase information recovered, which was lost by a mel-cepstrum vocoder. From the results of the subjective evaluation, it was demonstrated that the sound quality of the proposed method was significantly improved from mel-cepstrum vocoder, and the proposed method could capture source excitation information more accurately.", "title": "" }, { "docid": "d7a2708fc70f6480d9026aeefce46610", "text": "In order to study the differential protein expression in complex biological samples, strategies for rapid, highly reproducible and accurate quantification are necessary. Isotope labeling and fluorescent labeling techniques have been widely used in quantitative proteomics research. However, researchers are increasingly turning to label-free shotgun proteomics techniques for faster, cleaner, and simpler results. Mass spectrometry-based label-free quantitative proteomics falls into two general categories. In the first are the measurements of changes in chromatographic ion intensity such as peptide peak areas or peak heights. The second is based on the spectral counting of identified proteins. In this paper, we will discuss the technologies of these label-free quantitative methods, statistics, available computational software, and their applications in complex proteomics studies.", "title": "" }, { "docid": "d3d5f135cc2a09bf0dfc1ef88c6089b5", "text": "In this paper, we present the Expert Hub System, which was designed to help governmental structures find the best experts in different areas of expertise for better reviewing of the incoming grant proposals. 
In order to define the areas of expertise with topic modeling and clustering, and then to relate experts to corresponding areas of expertise and rank them according to their proficiency in certain areas of expertise, the Expert Hub approach uses the data from the Directorate of Science and Technology Programmes. Furthermore, the paper discusses the use of Big Data and Machine Learning in the Russian government.", "title": "" }, { "docid": "c4dfe9eb3aa4d082e96815d8c610968d", "text": "In this paper, we consider the problem of predicting demographics of geographic units given geotagged Tweets that are composed within these units. Traditional survey methods that offer demographics estimates are usually limited in terms of geographic resolution, geographic boundaries, and time intervals. Thus, it would be highly useful to develop computational methods that can complement traditional survey methods by offering demographics estimates at finer geographic resolutions, with flexible geographic boundaries (i.e. not confined to administrative boundaries), and at different time intervals. While prior work has focused on predicting demographics and health statistics at relatively coarse geographic resolutions such as the county-level or state-level, we introduce an approach to predict demographics at finer geographic resolutions such as the blockgroup-level. For the task of predicting gender and race/ethnicity counts at the blockgroup-level, an approach adapted from prior work to our problem achieves an average correlation of 0.389 (gender) and 0.569 (race) on a held-out test dataset. Our approach outperforms this prior approach with an average correlation of 0.671 (gender) and 0.692 (race).", "title": "" } ]
scidocsrr
ff5fb7253b0f45f9669f6f94188bdf32
Adaptive, Model-driven Autoscaling for Cloud Applications
[ { "docid": "40cea15a4fbe7f939a490ea6b6c9a76a", "text": "An application provider leases resources (i.e., virtual machine instances) of variable configurations from a IaaS provider over some lease duration (typically one hour). The application provider (i.e., consumer) would like to minimize their cost while meeting all service level obligations (SLOs). The mechanism of adding and removing resources at runtime is referred to as autoscaling. The process of autoscaling is automated through the use of a management component referred to as an autoscaler. This paper introduces a novel autoscaling approach in which both cloud and application dynamics are modeled in the context of a stochastic, model predictive control problem. The approach exploits trade-off between satisfying performance related objectives for the consumer's application while minimizing their cost. Simulation results are presented demonstrating the efficacy of this new approach.", "title": "" }, { "docid": "38d3dc6b5eb1dbf85b1a371b645a17da", "text": "Energy costs for data centers continue to rise, already exceeding $15 billion yearly. Sadly much of this power is wasted. Servers are only busy 10--30% of the time on average, but they are often left on, while idle, utilizing 60% or more of peak power when in the idle state.\n We introduce a dynamic capacity management policy, AutoScale, that greatly reduces the number of servers needed in data centers driven by unpredictable, time-varying load, while meeting response time SLAs. AutoScale scales the data center capacity, adding or removing servers as needed. AutoScale has two key features: (i) it autonomically maintains just the right amount of spare capacity to handle bursts in the request rate; and (ii) it is robust not just to changes in the request rate of real-world traces, but also request size and server efficiency.\n We evaluate our dynamic capacity management approach via implementation on a 38-server multi-tier data center, serving a web site of the type seen in Facebook or Amazon, with a key-value store workload. We demonstrate that AutoScale vastly improves upon existing dynamic capacity management policies with respect to meeting SLAs and robustness.", "title": "" } ]
[ { "docid": "b2f1fca7a05423c06cea45600582520a", "text": "In Software Abstractions Daniel Jackson introduces an approach tosoftware design that draws on traditional formal methods but exploits automated tools to find flawsas early as possible. This approach--which Jackson calls \"lightweight formal methods\" or\"agile modeling\"--takes from formal specification the idea of a precise and expressivenotation based on a tiny core of simple and robust concepts but replaces conventional analysis basedon theorem proving with a fully automated analysis that gives designers immediate feedback. Jacksonhas developed Alloy, a language that captures the essence of software abstractions simply andsuccinctly, using a minimal toolkit of mathematical notions. This revised edition updates the text,examples, and appendixes to be fully compatible with the latest version of Alloy (Alloy 4).The designer can use automated analysis not only to correct errors but also tomake models that are more precise and elegant. This approach, Jackson says, can rescue designersfrom \"the tarpit of implementation technologies\" and return them to thinking deeply aboutunderlying concepts. Software Abstractions introduces the key elements: a logic,which provides the building blocks of the language; a language, which adds a small amount of syntaxto the logic for structuring descriptions; and an analysis, a form of constraint solving that offersboth simulation (generating sample states and executions) and checking (finding counterexamples toclaimed properties).", "title": "" }, { "docid": "035f780309fc777ece17cbfe4aabc01b", "text": "The phenolic composition and antibacterial and antioxidant activities of the green alga Ulva rigida collected monthly for 12 months were investigated. Significant differences in antibacterial activity were observed during the year with the highest inhibitory effect in samples collected during spring and summer. The highest free radical scavenging activity and phenolic content were detected in U. rigida extracts collected in late winter (February) and early spring (March). The investigation of the biological properties of U. rigida fractions collected in spring (April) revealed strong antimicrobial and antioxidant activities. Ethyl acetate and n-hexane fractions exhibited substantial acetylcholinesterase inhibitory capacity with EC50 of 6.08 and 7.6 μg mL−1, respectively. The total lipid, protein, ash, and individual fatty acid contents of U. rigida were investigated. The four most abundant fatty acids were palmitic, oleic, linolenic, and eicosenoic acids.", "title": "" }, { "docid": "b7c7984f10f5e55de0c497798b1d64ac", "text": "The relationships between personality traits and performance are often assumed to be linear. This assumption has been challenged conceptually and empirically, but results to date have been inconclusive. In the current study, we took a theory-driven approach in systematically addressing this issue. Results based on two different samples generally supported our expectations of the curvilinear relationships between personality traits, including Conscientiousness and Emotional Stability, and job performance dimensions, including task performance, organizational citizenship behavior, and counterproductive work behaviors. We also hypothesized and found that job complexity moderated the curvilinear personality–performance relationships such that the inflection points after which the relationships disappear were lower for low-complexity jobs than they were for high-complexity jobs. 
This finding suggests that high levels of the two personality traits examined are more beneficial for performance in high- than low-complexity jobs. We conclude by discussing the implications of these findings for the use of personality in personnel selection.", "title": "" }, { "docid": "9ac6a33be64cbdd46a4d2a8bd101f9b5", "text": "Cloud computing and Internet of Things (IoT) are computing technologies that provide services to consumers and businesses, allowing organizations to become more agile and flexible. Therefore, ensuring quality of service (QoS) through service-level agreements (SLAs) for such cloud-based services is crucial for both the service providers and service consumers. As SLAs are critical for cloud deployments and wider adoption of cloud services, the management of SLAs in cloud and IoT has thus become an important and essential aspect. This paper investigates the existing research on the management of SLAs in IoT applications that are based on cloud services. For this purpose, a systematic mapping study (a well-defined method) is conducted to identify the published research results that are relevant to SLAs. This paper identifies 328 primary studies and categorizes them into seven main technical classifications: SLA management, SLA definition, SLA modeling, SLA negotiation, SLA monitoring, SLA violation and trustworthiness, and SLA evolution. This paper also summarizes the research types, research contributions, and demographic information in these studies. The evaluation of the results shows that most of the approaches for managing SLAs are applied in academic or controlled experiments with limited industrial settings rather than in real industrial environments. Many studies focus on proposal models and methods to manage SLAs, and there is a lack of focus on the evolution perspective and a lack of adequate tool support to facilitate practitioners in their SLA management activities. Moreover, the scarce number of studies focusing on concrete metrics for qualitative or quantitative assessment of QoS in SLAs urges the need for in-depth research on metrics definition and measurements for SLAs.", "title": "" }, { "docid": "e99c12645fd14528a150f915b3849c2b", "text": "Teaching in the cyberspace classroom requires moving beyond old models of. pedagogy into new practices that are more facilitative. It involves much more than simply taking old models of pedagogy and transferring them to a different medium. Unlike the face-to-face classroom, in online distance education, attention needs to be paid to the development of a sense of community within the group of participants in order for the learning process to be successful. The transition to the cyberspace classroom can be successfully achieved if attention is paid to several key areas. These include: ensuring access to and familiarity with the technology in use; establishing guidelines and procedures which are relatively loose and free-flowing, and generated with significant input from participants; striving to achieve maximum participation and \"buy-in\" from the participants; promoting collaborative learning; and creating a double or triple loop in the learning process to enable participants to reflect on their learning process. All of these practices significantly contribute to the development of an online learning community, a powerful tool for enhancing the learning experience. Each of these is reviewed in detail in the paper. (AEF) Reproductions supplied by EDRS are the best that can be made from the original document. 
Making the Transition: Helping Teachers to Teach Online Rena M. Palloff, Ph.D. Crossroads Consulting Group and The Fielding Institute Alameda, CA", "title": "" }, { "docid": "4627d8e86bec798979962847523cc7e0", "text": "Consuming news over online media has witnessed rapid growth in recent years, especially with the increasing popularity of social media. However, the ease and speed with which users can access and share information online facilitated the dissemination of false or unverified information. One way of assessing the credibility of online news stories is by examining the attached images. These images could be fake, manipulated or not belonging to the context of the accompanying news story. Previous attempts to news verification provided the user with a set of related images for manual inspection. In this work, we present a semi-automatic approach to assist news-consumers in instantaneously assessing the credibility of information in hypertext news articles by means of meta-data and feature analysis of images in the articles. In the first phase, we use a hybrid approach including image and text clustering techniques for checking the authenticity of an image. In the second phase, we use a hierarchical feature analysis technique for checking the alteration in an image, where different sets of features, such as edges and SURF, are used. In contrast to recently reported manual news verification, our presented work shows a quantitative measurement on a custom dataset. Results revealed an accuracy of 72.7% for checking the authenticity of attached images with a dataset of 55 articles. Finding alterations in images resulted in an accuracy of 88% for a dataset of 50 images.", "title": "" }, { "docid": "4dd2fc66b1a2f758192b02971476b4cc", "text": "Although efforts have been directed toward the advancement of women in science, technology, engineering, and mathematics (STEM) positions, little research has directly examined women's perspectives and bottom-up strategies for advancing in male-stereotyped disciplines. The present study utilized Photovoice, a Participatory Action Research method, to identify themes that underlie women's experiences in traditionally male-dominated fields. Photovoice enables participants to convey unique aspects of their experiences via photographs and their in-depth knowledge of a community through personal narrative. Forty-six STEM women graduate students and postdoctoral fellows completed a Photovoice activity in small groups. They presented photographs that described their experiences pursuing leadership positions in STEM fields. Three types of narratives were discovered and classified: career strategies, barriers to achievement, and buffering strategies or methods for managing barriers. Participants described three common types of career strategies and motivational factors, including professional development, collaboration, and social impact. Moreover, the lack of rewards for these workplace activities was seen as limiting professional effectiveness. In terms of barriers to achievement, women indicated they were not recognized as authority figures and often worked to build legitimacy by fostering positive relationships. Women were vigilant to other people's perspectives, which was costly in terms of time and energy. To manage role expectations, including those related to gender, participants engaged in numerous role transitions throughout their day to accommodate workplace demands. 
To buffer barriers to achievement, participants found resiliency in feelings of accomplishment and recognition. Social support, particularly from mentors, helped participants cope with negative experiences and to envision their future within the field. Work-life balance also helped participants find meaning in their work and have a sense of control over their lives. Overall, common workplace challenges included a lack of social capital and limited degrees of freedom. Implications for organizational policy and future research are discussed.", "title": "" }, { "docid": "5525b8ddce9a8a6430da93f48e93dea5", "text": "One major goal of vision is to infer physical models of objects, surfaces, and their layout from sensors. In this paper, we aim to interpret indoor scenes from one RGBD image. Our representation encodes the layout of walls, which must conform to a Manhattan structure but is otherwise flexible, and the layout and extent of objects, modeled with CAD-like 3D shapes. We represent both the visible and occluded portions of the scene, producing a complete 3D parse. Such a scene interpretation is useful for robotics and visual reasoning, but difficult to produce due to the well-known challenge of segmentation, the high degree of occlusion, and the diversity of objects in indoor scenes. We take a data-driven approach, generating sets of potential object regions, matching to regions in training images, and transferring and aligning associated 3D models while encouraging fit to observations and overall consistency. We demonstrate encouraging results on the NYU v2 dataset and highlight a variety of interesting directions for future work.", "title": "" }, { "docid": "bd3b9d9e8a1dc39f384b073765175de6", "text": "We generalize the stochastic block model to the important case in which edges are annotated with weights drawn from an exponential family distribution. This generalization introduces several technical difficulties for model estimation, which we solve using a Bayesian approach. We introduce a variational algorithm that efficiently approximates the model’s posterior distribution for dense graphs. In specific numerical experiments on edge-weighted networks, this weighted stochastic block model outperforms the common approach of first applying a single threshold to all weights and then applying the classic stochastic block model, which can obscure latent block structure in networks. This model will enable the recovery of latent structure in a broader range of network data than was previously possible.", "title": "" }, { "docid": "e3db1429e8821649f35270609459cb0d", "text": "Novelty detection is the task of recognising events that differ from a model of normality. This paper proposes an acoustic novelty detector based on neural networks trained with an adversarial training strategy. The proposed approach is composed of a feature extraction stage that calculates Log-Mel spectral features from the input signal. Then, an autoencoder network, trained on a corpus of “normal” acoustic signals, is employed to detect whether a segment contains an abnormal event or not. A novelty is detected if the Euclidean distance between the input and the output of the autoencoder exceeds a certain threshold.
The innovative contribution of the proposed approach resides in the training procedure of the autoencoder network: instead of using the conventional training procedure that minimises only the Minimum Mean Squared Error loss function, here we adopt an adversarial strategy, where a discriminator network is trained to distinguish between the output of the autoencoder and data sampled from the training corpus. The autoencoder, then, is trained also by using the binary cross-entropy loss calculated at the output of the discriminator network. The performance of the algorithm has been assessed on a corpus derived from the PASCAL CHiME dataset. The results showed that the proposed approach provides a relative performance improvement equal to 0.26% compared to the standard autoencoder. The significance of the improvement has been evaluated with a one-tailed z-test and resulted significant with p < 0.001. The presented approach thus showed promising results on this task and it could be extended as a general training strategy for autoencoders if confirmed by additional experiments.", "title": "" }, { "docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "8ca6e0b5c413cc228af0d64ce8cf9d3b", "text": "On January 8, a Database Column reader asked for our views on new distributed database research efforts, and we'll begin here with our views on MapReduce. This is a good time to discuss it, since the recent trade press has been filled with news of the revolution of so-called \"cloud computing.\" This paradigm entails harnessing large numbers of (low-end) processors working in parallel to solve a computing problem. In effect, this suggests constructing a data center by lining up a large number of \"jelly beans\" rather than utilizing a much smaller number of high-end servers.", "title": "" }, { "docid": "12c947a09e6dbaeca955b18900912b96", "text": "A two-stage car detection method using deformable part models with composite feature sets (DPM/CF) is proposed to recognize cars of various types and from multiple viewing angles. In the first stage, a HOG template is matched to detect the bounding box of the entire car of a certain type and viewed from a certain angle (called a t/a pair), which yields a region of interest (ROI). In the second stage, various part detectors using either HOG or the convolution neural network (CNN) features are applied to the ROI for validation. An optimization procedure based on latent logistic regression is adopted to select the optimal part detector's location, window size, and feature to use. Extensive experimental results indicate the proposed DPM/CF system can strike a balance between detection accuracy and training complexity.", "title": "" }, { "docid": "69acb21a36cd8fc31978058897b35942", "text": "Designing a driving policy for autonomous vehicles is a difficult task. Recent studies suggested an end-to-end (E2E) training of a policy to predict car actuators directly from raw sensory inputs. It is appealing due to the ease of labeled data collection and since handcrafted features are avoided.
Explicit drawbacks such as interpretability, safety enforcement and learning efficiency limit the practical application of the approach. In this paper, we amend the basic E2E architecture to address these shortcomings, while retaining the power of end-to-end learning. A key element in our proposed architecture is formulation of the learning problem as learning of trajectory. We also apply a Gaussian mixture model loss to contend with multi-modal data, and adopt a finance risk measure, conditional value at risk, to emphasize rare events. We analyze the effect of each concept and present driving performance in a highway scenario in the TORCS simulator. Video is available in this link.", "title": "" }, { "docid": "91365154a173be8be29ef14a3a76b08e", "text": "Fraud is a criminal practice for illegitimate gain of wealth or tampering information. Fraudulent activities are of critical concern because of their severe impact on organizations, communities as well as individuals. Over the last few years, various techniques from different areas such as data mining, machine learning, and statistics have been proposed to deal with fraudulent activities. Unfortunately, the conventional approaches display several limitations, which were addressed largely by advanced solutions proposed in the advent of Big Data. In this paper, we present fraud analysis approaches in the context of Big Data. Then, we study the approaches rigorously and identify their limits by exploiting Big Data analytics.", "title": "" }, { "docid": "6e5792c73b34eacc7bef2c8777da5147", "text": "Neural network machine translation systems have recently demonstrated encouraging results. We examine the performance of a recently proposed recurrent neural network model for machine translation on the task of Japanese-to-English translation. We observe that with relatively little training the model performs very well on a small hand-designed parallel corpus, and adapts to grammatical complexity with ease, given a small vocabulary. The success of this model on a small corpus warrants more investigation of its performance on a larger corpus.", "title": "" }, { "docid": "c322b725e87bc9d9aad40e50b3696f0a", "text": "In this paper we give a somewhat personal and perhaps biased overview of the field of Computer Vision. First, we define computer vision and give a very brief history of it. Then, we outline some of the reasons why computer vision is a very difficult research field. Finally, we discuss past, present, and future applications of computer vision. Especially, we give some examples of future applications which we think are very promising. 1 What is Computer Vision? Computer Vision has a dual goal. From the biological science point of view, computer vision aims to come up with computational models of the human visual system. From the engineering point of view, computer vision aims to build autonomous systems which could perform some of the tasks which the human visual system can perform (and even surpass it in many cases). Many vision tasks are related to the extraction of 3D and temporal information from time-varying 2D data such as obtained by one or more television cameras, and more generally the understanding of such dynamic scenes. Of course, the two goals are intimately related. The properties and characteristics of the human visual system often give inspiration to engineers who are designing computer vision systems. Conversely, computer vision algorithms can offer insights into how the human visual system works. 
In this paper we shall adopt the engineering point of view. 2 History of Computer Vision It is commonly accepted that the father of Computer Vision is Larry Roberts, who in his Ph.D. thesis (cir. 1960) at MIT discussed the possibilities of extracting 3D geometrical information from 2D perspective views of blocks (polyhedra) [1]. Many researchers, at MIT and elsewhere, in Artificial Intelligence, followed this work and studied computer vision in the context of the blocks world. Later, researchers realized that it was necessary to tackle images from the real world. Thus, much research was needed in the so called ``low-level” vision tasks such as edge detection and segmentation. A major milestone was the framework proposed by David Marr (cir. 1978) at MIT, who took a bottom-up approach to scene understanding [2]. Low-level image processing algorithms are applied to 2D images to obtain the ``primal sketch” (directed edge segments, etc.), from which a 2.5 D sketch of the scene is obtained using binocular stereo. Finally, high-level (structural analysis, a priori knowledge) techniques are used to get 3D model representations of the objects in the scene. This is probably the single most influential work in computer vision ever. Many researchers cried: ``From the paradigm created for us by Marr, no one can drive us out.” Nonetheless, more recently a number of computer vision researchers realized some of the limitation of Marr’s paradigm, and advocated a more top-down and heterogeneous approach. Basically, the program of Marr is extremely difficult to carry out, but more important, for many if not most computer vision applications, it is not necessary to get complete 3D object models. For example, in autonomous vehicle navigation using computer vision, it may be necessary to find out only whether an object is moving away from or toward your vehicle, but not the exact 3D motion of the object. This new paradigm is sometimes called ``Purposive Vision” implying that the algorithms should be goal driven and in many cases could be qualitative [3]. One of the main advocates of this new paradigm is Yiannis Aloimonos, University of Maryland. Looking over the history of computer vision, it is important to note that because of the broad spectrum of potential applications, the trend has been the merge of computer vision with other closely related fields. These include: Image processing (the raw images have to be processed before further analysis). Photogrammetry (cameras used for imaging have to be calibrated. Determining object poses in 3D is important in both computer vision and photogrammetry). Computer graphics (3D modeling is central to both computer vision and computer graphics. Many exciting applications need both computer vision and computer graphics see Section 4). 3 Why is Computer Vision Difficult? Computer Vision as a field of research is notoriously difficult. Almost no research problem has been satisfactorily solved. One main reason for this difficulty is that the human visual system is simply too good for many tasks (e.g., face recognition), so that computer vision systems suffer by comparison. A human can recognize faces under all kinds of variations in illumination, viewpoint, expression, etc. In most cases we have no difficulty in recognizing a friend in a photograph taken many years ago. Also, there appears to be no limit on how many faces we can store in our brains for future recognition. There appears no hope in building an autonomous system with such stellar performance. 
Two major related difficulties in computer vision can be identified: 1. How do we distill and represent the vast amount of human knowledge in a computer in such a way that retrieval is easy? 2. How do we carry out (in both hardware and software) the vast amount of computation that is often required in such a way that the task (such as face recognition) can be done in real time? 4 Application of Computer Vision: Past, Present, and Future Past and present applications of computer vision include: Autonomous navigation, robotic assembly, and industrial inspections. At best, the results have been mixed. (I am excluding industrial inspection applications which involve only 2D image processing and pattern. recognition.) The main difficulty is that computer vision algorithms are almost all brittle; an algorithm may work in some cases but not in others. My opinion is that in order for a computer vision application to be potentially successful, it has to satisfy two criteria: 1)Possibility of human interaction. 2) Forgiving (i.e., some mistakes are tolerable). It also needs to be emphasized that in many applications vision should be combined with other modalities (such as audio) to achieve the goals. Measured against these two criteria, some of the exciting computer vision applications which can be potentially very successful include: Image/video databases-Image content-based indexing and retrieval. Vision-based human computer interface e.g., using gesture (combined with speech) in interacting with virtual environments. Virtual agent/actor generating scenes of a synthetic person based on parameters extracted from video sequences of a real person. It is heartening to see that a number of researchers in computer vision have already started to delve into these and related applications. 5 Characterizing Human Facial Expressions: Smile To conclude this paper, we would like to give a very brief summary of a research project we are undertaking at our Institute which is relevant to two of the applications mentioned in the last Section, namely, vision-based human computer interface, and virtual agent/actors, as well as many other applications. Details of this project can be found in Ref. 4. Different people usually express their emotional feelings in different ways. An interesting question is number of canonical facial expressions for a given emotion. This would lead to applications in human computer interface, virtual agent/actor, as well as model-based video compression scenarios, such as video-phone. Take smile as an example. Suppose, by facial motion analysis, there are 16 categories found among all smiles posed by different people. Smiles within each category can be approximately represented by a single mile which could be called a canonical smile. The facial movements associated with each canonical smile can be designed in advance. A new smile is recognized and replaced by the canonical smile at the transmitting side, only the index of that canonical smile needs to be transmitted. At the receiving sides, this canonical smile will be reconstructed to express that person’s happiness. We are using an approach to the characterization of facial expressions based on the principal component analysis of the facial motion parameters. Smile is used as an example, however, the methodology can be generalized to other facial expressions. A database consisting of a number of different people’s smiles is first collected. 
Two frames are chosen from each smile sequence, a neutral face image and an image where the smile reaches its apex. The motion vectors of a set of feature points are derived from these two images and a feature space is created. Each smile is designated by a point in this feature space. The principal component analysis technique is used for dimension reduction and some preliminary results of smile characterization are obtained. Some dynamic characteristics of smile are also studied. For smiles, the most significant part on the face is the mouth. Therefore, four points around the mouth are chosen as the feature points for smile characterization: The two corners of the mouth and the mid-points of the upper and lower lip boundaries. About 60 people volunteered to show their smiles. These four points are identified in the two end frames of each smiling sequence, i.e., the neutral face image and the one in which the smile reaches its apex. The two face images are first registered based on some fixed features, e.g., the eye corners and the nostrils. In this way, the global motion of the head can be compensated for since only the local facial motions during smiles are of interest. Thus, every smile is represented by four vectors which point from the feature points on the neutral face image to the corresponding feature points on the smiling face image. These motion vectors are further normalized according to the two mouth corner points. Then, each component of these vectors serves as one dimension of the ``smile feature space.” In our experiments to date, these are 2D vectors. Thus, the dimensionality of the smile feature space is 8. Principal component", "title": "" }, { "docid": "5640d9307fa3d1b611358d3f14d5fb4c", "text": "An N-LDMOS ESD protection device with drain back and PESD optimization design is proposed. With PESD layer enclosing the N+ drain region, a parasitic SCR is created to achieve high ESD level. When PESD is close to gate, the turn-on efficiency can be further improved (Vt1: 11.2 V reduced to 7.2 V) by the punch-through path from N+/PESD to PW. The proposed ESD N-LDMOS can sustain over 8KV HBM with low trigger behavior without extra area cost.", "title": "" }, { "docid": "7fdf51a07383b9004882c058743b5726", "text": "We propose using application specific virtual machines (ASVMs) to reprogram deployed wireless sensor networks. ASVMs provide a way for a user to define an application-specific boundary between virtual code and the VM engine. This allows programs to be very concise (tens to hundreds of bytes), making program installation fast and inexpensive. Additionally, concise programs interpret few instructions, imposing very little interpretation overhead. We evaluate ASVMs against current proposals for network programming runtimes and show that ASVMs are more energy efficient by as much as 20%. We also evaluate ASVMs against hand built TinyOS applications and show that while interpretation imposes a significant execution overhead, the low duty cycles of realistic applications make the actual cost effectively unmeasurable.", "title": "" } ]
scidocsrr
bd445e2d446f3b66df4aa5b7e1244e44
MathDQN: Solving Arithmetic Word Problems via Deep Reinforcement Learning
[ { "docid": "0007c9ab00e628848a08565daaf4063e", "text": "We present an active detection model for localizing objects in scenes. The model is class-specific and allows an agent to focus attention on candidate regions for identifying the correct location of a target object. This agent learns to deform a bounding box using simple transformation actions, with the goal of determining the most specific location of target objects following top-down reasoning. The proposed localization agent is trained using deep reinforcement learning, and evaluated on the Pascal VOC 2007 dataset. We show that agents guided by the proposed model are able to localize a single instance of an object after analyzing only between 11 and 25 regions in an image, and obtain the best detection results among systems that do not use object proposals for object localization.", "title": "" }, { "docid": "8fd830d62cceb6780d0baf7eda399fdf", "text": "Little work from the Natural Language Processing community has targeted the role of quantities in Natural Language Understanding. This paper takes some key steps towards facilitating reasoning about quantities expressed in natural language. We investigate two different tasks of numerical reasoning. First, we consider Quantity Entailment, a new task formulated to understand the role of quantities in general textual inference tasks. Second, we consider the problem of automatically understanding and solving elementary school math word problems. In order to address these quantitative reasoning problems we first develop a computational approach which we show to successfully recognize and normalize textual expressions of quantities. We then use these capabilities to further develop algorithms to assist reasoning in the context of the aforementioned tasks.", "title": "" }, { "docid": "711d8291683bd23e2060b56ce7120f23", "text": "Solving simple arithmetic word problems is one of the challenges in Natural Language Understanding. This paper presents a novel method to learn to use formulas to solve simple arithmetic word problems. Our system, analyzes each of the sentences to identify the variables and their attributes; and automatically maps this information into a higher level representation. It then uses that representation to recognize the presence of a formula along with its associated variables. An equation is then generated from the formal description of the formula. In the training phase, it learns to score the <formula, variables> pair from the systematically generated higher level representation. It is able to solve 86.07% of the problems in a corpus of standard primary school test questions and beats the state-of-the-art by", "title": "" }, { "docid": "6eeeb343309fc24326ed42b62d5524b1", "text": "We introduce a novel machine learning framework based on recursive autoencoders for sentence-level prediction of sentiment label distributions. Our method learns vector space representations for multi-word phrases. In sentiment prediction tasks these representations outperform other state-of-the-art approaches on commonly used datasets, such as movie reviews, without using any pre-defined sentiment lexica or polarity shifting rules. We also evaluate the model’s ability to predict sentiment distributions on a new dataset based on confessions from the experience project. The dataset consists of personal user stories annotated with multiple labels which, when aggregated, form a multinomial distribution that captures emotional reactions. 
Our algorithm can more accurately predict distributions over such labels compared to several competitive baselines.", "title": "" } ]
[ { "docid": "eb4ae32b55af8ed25122640bffafde39", "text": "Unlike chemical synthesis, biological synthesis of nanoparticles is gaining tremendous interest, and plant extracts are preferred over other biological sources due to their ample availability and wide array of reducing metabolites. In this project, we investigated the reducing potential of aqueous extract of Artemisia absinthium L. for synthesizing silver nanoparticles (AgNPs). Optimal synthesis of AgNPs with desirable physical and biological properties was investigated using ultra violet-visible spectroscopy (UV-vis), dynamic light scattering (DLS), transmission electron microscopy (TEM) and energy-dispersive X-ray analysis (EDX). To determine their appropriate concentrations for AgNP synthesis, two-fold dilutions of silver nitrate (20 to 0.62 mM) and aqueous plant extract (100 to 0.79 mg ml(-1)) were reacted. The results showed that silver nitrate (2mM) and plant extract (10 mg ml(-1)) mixed in different ratios significantly affected size, stability and yield of AgNPs. Extract to AgNO3 ratio of 6:4v/v resulted in the highest conversion efficiency of AgNO3 to AgNPs, with the particles in average size range of less than 100 nm. Furthermore, the direct imaging of synthesized AgNPs by TEM revealed polydispersed particles in the size range of 5 to 20 nm. Similarly, nanoparticles with the characteristic peak of silver were observed with EDX. This study presents a comprehensive investigation of the differential behavior of plant extract and AgNO3 to synthesize biologically stable AgNPs.", "title": "" }, { "docid": "7d197033396c7a55593da79a5a70fa96", "text": "1. Introduction Fundamental questions about weighting (Fig 1) seem to be ~ most common during the analysis of survey data and I encounter them almost every week. Yet we \"lack a single, reasonably comprehensive, introductory explanation of the process of weighting\" [Sharot 1986], readily available to and usable by survey practitioners, who are looking for simple guidance, and this paper aims to meet some of that need. Some partial treatments have appeared in the survey literature [e.g., Kish 1965], but the topic seldom appears even in the indexes. However, we can expect growing interest, as witnessed by six publications since 1987 listed in the references.", "title": "" }, { "docid": "8c52c67dde20ce0a50ea22aaa4f917a5", "text": "This paper presents the vision of the Artificial Vision and Intelligent Systems Laboratory (VisLab) on future automated vehicles, ranging from sensor selection up to their extensive testing. VisLab's design choices are explained using the BRAiVE autonomous vehicle prototype as an example. BRAiVE, which is specifically designed to develop, test, and demonstrate advanced safety applications with different automation levels, features a high integration level and a low-cost sensor suite, which are mainly based on vision, as opposed to many other autonomous vehicle implementations based on expensive and invasive sensors. The importance of performing extensive tests to validate the design choices is considered to be a hard requirement, and different tests have been organized, including an intercontinental trip from Italy to China. This paper also presents the test, the main challenges, and the vehicles that have been specifically developed for this test, which was performed by four autonomous vehicles based on BRAiVE's architecture. 
This paper also includes final remarks on VisLab's perspective on future vehicles' sensor suite.", "title": "" }, { "docid": "328ba61afa9b311a33d557999738864d", "text": "In this paper, a multiscale convolutional network (MSCN) and graph-partitioning-based method is proposed for accurate segmentation of cervical cytoplasm and nuclei. Specifically, deep learning via the MSCN is explored to extract scale invariant features, and then, segment regions centered at each pixel. The coarse segmentation is refined by an automated graph partitioning method based on the pretrained feature. The texture, shape, and contextual information of the target objects are learned to localize the appearance of distinctive boundary, which is also explored to generate markers to split the touching nuclei. For further refinement of the segmentation, a coarse-to-fine nucleus segmentation framework is developed. The computational complexity of the segmentation is reduced by using superpixel instead of raw pixels. Extensive experimental results demonstrate that the proposed cervical nucleus cell segmentation delivers promising results and outperforms existing methods.", "title": "" }, { "docid": "3f0b6a3238cf60d7e5d23363b2affe95", "text": "This paper presents a new strategy to control the generated power that comes from the energy sources existing in autonomous and isolated Microgrids. In this particular study, the power system consists of a power electronic converter supplied by a battery bank, which is used to form the AC grid (grid former converter), an energy source based on a wind turbine with its respective power electronic converter (grid supplier converter), and the power consumers (loads). The main objective of this proposed strategy is to control the state of charge of the battery bank limiting the voltage on its terminals by controlling the power generated by the energy sources. This is done without using dump loads or any physical communication among the power electronic converters or the individual energy source controllers. The electrical frequency of the microgrid is used to inform to the power sources and their respective converters the amount of power they need to generate in order to maintain the battery-bank state of charge below or equal its maximum allowable limit. It is proposed a modified droop control to implement this task.", "title": "" }, { "docid": "21ce9ed056f5c54d0626e3a4e8224bcc", "text": "This paper presents an application of evolutionary fuzzy classifier design to a road accident data analysis. A fuzzy classifier evolved by the genetic programming was used to learn the labeling of data in a real world road accident data set. The symbolic classifier was inspected in order to select important features and the relations among them. Selected features provide a feedback for traffic management authorities that can exploit the knowledge to improve road safety and mitigate the severity of traffic accidents.", "title": "" }, { "docid": "2a225a33dc4d8cd08d0ae4a18d8b267c", "text": "Support Vector Machines is a powerful methodology for solving problems in nonlinear classification, function estimation and density estimation which has also led recently to many new developments in kernel based learning in general. In these methods one solves convex optimization problems, typically quadratic programs. We focus on Least Squares Support Vector Machines which are reformulations to standard SVMs that lead to solving linear KKT systems. 
Least squares support vector machines are closely related to regularization networks and Gaussian processes but additionally emphasize and exploit primaldual interpretations from optimization theory. In view of interior point algorithms such LS-SVM KKT systems can be considered as a core problem. Where needed the obtained solutions can be robustified and/or sparsified. As an alternative to a top-down choice of the cost function, methods from robust statistics are employed in a bottom-up fashion for further improving the estimates. We explain the natural links between LS-SVM classifiers and kernel Fisher discriminant analysis. The framework is further extended towards unsupervised learning by considering PCA analysis and its kernel version as a one-class modelling problem. This leads to new primal-dual support vector machine formulations for kernel PCA and kernel canonical correlation analysis. Furthermore, LS-SVM formulations are mentioned towards recurrent networks and control, thereby extending the methods from static to dynamic problems. In general, support vector machines may pose heavy computational challenges for large data sets. For this purpose, we propose a method of Fixed Size LS-SVM where the estimation is done in the primal space in relation to a Nyström sampling with active selection of support vectors and we discuss extensions to committee networks. The methods will be illustrated by several benchmark and real-life applications.", "title": "" }, { "docid": "2da84ca7d7db508a6f9a443f2dbae7c1", "text": "This paper proposes a computationally efficient approach to detecting objects natively in 3D point clouds using convolutional neural networks (CNNs). In particular, this is achieved by leveraging a feature-centric voting scheme to implement novel convolutional layers which explicitly exploit the sparsity encountered in the input. To this end, we examine the trade-off between accuracy and speed for different architectures and additionally propose to use an L1 penalty on the filter activations to further encourage sparsity in the intermediate representations. To the best of our knowledge, this is the first work to propose sparse convolutional layers and L1 regularisation for efficient large-scale processing of 3D data. We demonstrate the efficacy of our approach on the KITTI object detection benchmark and show that VoteSDeep models with as few as three layers outperform the previous state of the art in both laser and laser-vision based approaches by margins of up to 40% while remaining highly competitive in terms of processing time.", "title": "" }, { "docid": "059463f31fcb83c346f96ed8345ff9a6", "text": "Cancer incidence is projected to increase in the future and an effectual preventive strategy is required to face this challenge. Alteration of dietary habits is potentially an effective approach for reducing cancer risk. Assessment of biological effects of a specific food or bioactive component that is linked to cancer and prediction of individual susceptibility as a function of nutrient-nutrient interactions and genetics is an essential element to evaluate the beneficiaries of dietary interventions. In general, the use of biomarkers to evaluate individuals susceptibilities to cancer must be easily accessible and reliable. However, the response of individuals to bioactive food components depends not only on the effective concentration of the bioactive food components, but also on the target tissues. 
This fact makes the response of individuals to food components vary from one individual to another. Nutrigenomics focuses on the understanding of interactions between genes and diet in an individual and how the response to bioactive food components is influenced by an individual's genes. Nutrients have shown to affect gene expression and to induce changes in DNA and protein molecules. Nutrigenomic approaches provide an opportunity to study how gene expression is regulated by nutrients and how nutrition affects gene variations and epigenetic events. Finding the components involved in interactions between genes and diet in an individual can potentially help identify target molecules important in preventing and/or reducing the symptoms of cancer.", "title": "" }, { "docid": "9dceccb7b171927a5cba5a16fd9d76c6", "text": "This paper involved developing two (Type I and Type II) equal-split Wilkinson power dividers (WPDs). The Type I divider can use two short uniform-impedance transmission lines, one resistor, one capacitor, and two quarter-wavelength (λ/4) transformers in its circuit. Compared with the conventional equal-split WPD, the proposed Type I divider can relax the two λ/4 transformers and the output ports layout restrictions of the conventional WPD. To eliminate the number of impedance transformers, the proposed Type II divider requires only one impedance transformer attaining the optimal matching design and a compact size. A compact four-way equal-split WPD based on the proposed Type I and Type II dividers was also developed, facilitating a simple layout, and reducing the circuit size. Regarding the divider, to obtain favorable selectivity and isolation performance levels, two Butterworth filter transformers were integrated in the proposed Type I divider to perform filter response and power split functions. Finally, a single Butterworth filter transformer was integrated in the proposed Type II divider to demonstrate a compact filtering WPD.", "title": "" }, { "docid": "5000e96519cf477e6ab2ea35fd181046", "text": "When computing descriptors of image data, the type of information that can be extracted may be strongly dependent on the scales at which the image operators are applied. This article presents a systematic methodology for addressing this problem. A mechanism is presented for automatic selection of scale levels when detecting one-dimensional image features, such as edges and ridges. A novel concept of a scale-space edge is introduced, defined as a connected set of points in scale-space at which: (i) the gradient magnitude assumes a local maximum in the gradient direction, and (ii) a normalized measure of the strength of the edge response is locally maximal over scales. An important consequence of this definition is that it allows the scale levels to vary along the edge. Two specific measures of edge strength are analyzed in detail, the gradient magnitude and a differential expression derived from the third-order derivative in the gradient direction. For a certain way of normalizing these differential descriptors, by expressing them in terms of so-called γ-normalized derivatives, an immediate consequence of this definition is that the edge detector will adapt its scale levels to the local image structure. 
Specifically, sharp edges will be detected at fine scales so as to reduce the shape distortions due to scale-space smoothing, whereas sufficiently coarse scales will be selected at diffuse edges, such that an edge model is a valid abstraction of the intensity profile across the edge. Since the scale-space edge is defined from the intersection of two zero-crossing surfaces in scale-space, the edges will by definition form closed curves. This simplifies selection of salient edges, and a novel significance measure is proposed, by integrating the edge strength along the edge. Moreover, the scale information associated with each edge provides useful clues to the physical nature of the edge. With just slight modifications, similar ideas can be used for formulating ridge detectors with automatic selection, having the characteristic property that the selected scales on a scale-space ridge instead reflect the width of the ridge. It is shown how the methodology can be implemented in terms of straightforward visual front-end operations, and the validity of the approach is supported by theoretical analysis as well as experiments on real-world and synthetic data.", "title": "" }, { "docid": "7ff084619d05d21975ff41748a260418", "text": "In the development of speech recognition algorithms, it is important to know whether any apparent difference in performance of algorithms is statistically significant, yet this issue is almost always overlooked. We present two simple tests for deciding whether the difference in error-rates between two algorithms tested on the same data set is statistically significant. The first (McNemar’s test) requires the errors made by an algorithm to be independent events and is most appropriate for isolated word algorithms. The second (a matched-pairs test) can be used even when errors are not independent events and is more appropriate for connected speech.", "title": "" }, { "docid": "a0903fc562ccd9dfe708afbef43009cd", "text": "A stacked field-effect transistor (FET) linear cellular antenna switch adopting a transistor layout with odd-symmetrical drain-source metal wiring and an extremely low-power biasing strategy has been implemented in silicon-on-insulator CMOS technology. A multi-fingered switch-FET device with odd-symmetrical drain-source metal wiring is adopted herein to improve the insertion loss (IL) and isolation of the antenna switch by minimizing the product of the on-resistance and off-capacitance. To remove the spurious emission and digital switching noise problems from the antenna switch driver circuits, an extremely low-power biasing scheme driven by only positive bias voltage has been devised. The proposed antenna switch that employs the new biasing scheme shows almost the same power-handling capability and harmonic distortion as a conventional version based on a negative biasing scheme, while greatly reducing long start-up time and wasteful active current consumption in a stand-by mode of the conventional antenna switch driver circuits. The implemented single-pole four-throw antenna switch is perfectly capable of handling a high power signal up to +35 dBm with suitably low IL of less than 1 dB, and shows second- and third-order harmonic distortion of less than -45 dBm when a 1-GHz RF signal with a power of +35 dBm and a 2-GHz RF signal with a power of +33 dBm are applied. 
The proposed antenna switch consumes almost no static power.", "title": "" }, { "docid": "216f23db607aabee32907bda19012b8e", "text": "Stereo matching is one of the key technologies in stereo vision system due to its ultra high data bandwidth requirement, heavy memory accessing and algorithm complexity. To speed up stereo matching, various algorithms are implemented by different software and hardware processing methods. This paper presents a survey of stereo matching software and hardware implementation research status based on local and global algorithm analysis. Based on different processing platforms, including CPU, DSP, GPU, FPGA and ASIC, analysis are made on software or hardware realization performance, which is represented by frame rate, efficiency represented by MDES, and processing quality represented by error rate. Among them, GPU, FPGA and ASIC implementations are suitable for real-time embedded stereo matching applications, because they are low power consumption, low cost, and have high performance. Finally, further stereo matching optimization technologies are pointed out, including both algorithm and parallelism optimization for data bandwidth reduction and memory storage strategy.", "title": "" }, { "docid": "7d42d3d197a4d62e1b4c0f3c08be14a9", "text": "Links between issue reports and their corresponding commits in version control systems are often missing. However, these links are important for measuring the quality of a software system, predicting defects, and many other tasks. Several approaches have been designed to solve this problem by automatically linking bug reports to source code commits via comparison of textual information in commit messages and bug reports. Yet, the effectiveness of these techniques is oftentimes suboptimal when commit messages are empty or contain minimum information; this particular problem makes the process of recovering traceability links between commits and bug reports particularly challenging. In this work, we aim at improving the effectiveness of existing bug linking techniques by utilizing rich contextual information. We rely on a recently proposed approach, namely ChangeScribe, which generates commit messages containing rich contextual information by using code summarization techniques. Our approach then extracts features from these automatically generated commit messages and bug reports, and inputs them into a classification technique that creates a discriminative model used to predict if a link exists between a commit message and a bug report. We compared our approach, coined as RCLinker (Rich Context Linker), to MLink, which is an existing state-of-the-art bug linking approach. Our experiment results on bug reports from six software projects show that RCLinker outperforms MLink in terms of F-measure by 138.66%.", "title": "" }, { "docid": "99d3354d91a330e7b3bd3cc6204251ca", "text": "PHACE syndrome is a neurocutaneous disorder characterized by large cervicofacial infantile hemangiomas and associated anomalies: posterior fossa brain malformation, hemangioma, arterial cerebrovascular anomalies, coarctation of the aorta and cardiac defects, and eye/endocrine abnormalities of the brain. When ventral developmental defects (sternal clefting or supraumbilical raphe) are present the condition is termed PHACE. In this report, we describe three PHACE cases that presented unique features (affecting one of the organ systems described for this syndrome) that have not been described previously. 
In the first case, a definitive PHACE association, the patient presented with an ipsilateral mesenteric lymphatic malformation, at the age of 14 years. In the second case, an anomaly of the posterior segment of the eye, not mentioned before in PHACE literature, a retinoblastoma, has been described. Specific chemotherapy avoided enucleation. And, in the third case, the child presented with an unusual midline frontal bone cleft, corresponding to Tessier 14 cleft. Two patients' hemangiomas responded well to propranolol therapy. The first one was followed and treated in the pre-propranolol era and had a moderate response to corticoids and interferon.", "title": "" }, { "docid": "f779bf251b3d066e594867680e080ef4", "text": "Machine Translation is area of research since six decades. It is gaining popularity since last decade due to better computational facilities available at personal computer systems. This paper presents different Machine Translation system where Sanskrit is involved as source, target or key support language. Researchers employ various techniques like Rule based, Corpus based, Direct for machine translation. The main aim to focus on Sanskrit in Machine Translation in this paper is to uncover the language suitability, its morphology and employ appropriate MT techniques.", "title": "" }, { "docid": "5e681caab6212e3f82d482f2ac332a14", "text": "Task-aware flow schedulers collect task information across the data center to optimize task-level performance. However, the majority of the tasks, which generate short flows and are called tiny tasks, have been largely overlooked by current schedulers. The large number of tiny tasks brings significant overhead to the centralized schedulers, while the existing decentralized schedulers are too complex to fit in commodity switches. In this paper we present OPTAS, a lightweight, commodity-switch-compatible scheduling solution that efficiently monitors and schedules flows for tiny tasks with low overhead. OPTAS monitors system calls and buffer footprints to recognize the tiny tasks, and assigns them with higher priorities than larger ones. The tiny tasks are then transferred in a FIFO manner by adjusting two attributes, namely, the window size and round trip time, of TCP. We have implemented OPTAS as a Linux kernel module, and experiments on our 37-server testbed show that OPTAS is at least 2.2× faster than fair sharing, and 1.2× faster than only assigning tiny tasks with the highest priority.", "title": "" }, { "docid": "08df6cd44a26be6c4cc96082631a0e6e", "text": "In the natural habitat of our ancestors, physical activity was not a preventive intervention but a matter of survival. In this hostile environment with scarce food and ubiquitous dangers, human genes were selected to optimize aerobic metabolic pathways and conserve energy for potential future famines.1 Cardiac and vascular functions were continuously challenged by intermittent bouts of high-intensity physical activity and adapted to meet the metabolic demands of the working skeletal muscle under these conditions. When speaking about molecular cardiovascular effects of exercise, we should keep in mind that most of the changes from baseline are probably a return to normal values. The statistical average of physical activity in Western societies is so much below the levels normal for our genetic background that sedentary lifestyle in combination with excess food intake has surpassed smoking as the No. 
1 preventable cause of death in the United States.2 Physical activity has been shown to have beneficial effects on glucose metabolism, skeletal muscle function, ventilator muscle strength, bone stability, locomotor coordination, psychological well-being, and other organ functions. However, in the context of this review, we will focus entirely on important molecular effects on the cardiovascular system. The aim of this review is to provide a bird’s-eye view on what is known and unknown about the physiological and biochemical mechanisms involved in mediating exercise-induced cardiovascular effects. The resulting map is surprisingly detailed in some areas (ie, endothelial function), whereas other areas, such as direct cardiac training effects in heart failure, are still incompletely understood. For practical purposes, we have decided to use primarily an anatomic approach to present key data on exercise effects on cardiac and vascular function. For the cardiac effects, the left ventricle and the cardiac valves will be described separately; for the vascular effects, we will follow the arterial vascular tree, addressing changes in the aorta, the large conduit arteries, the resistance vessels, and the microcirculation before turning our attention toward the venous and the pulmonary circulation (Figure 1). Cardiac Effects of Exercise Left Ventricular Myocardium and Ventricular Arrhythmias The maintenance of left ventricular (LV) mass and function depends on regular exercise. Prolonged periods of physical inactivity, as studied in bed rest trials, lead to significant reductions in LV mass and impaired cardiac compliance, resulting in reduced upright stroke volume and orthostatic intolerance.3 In contrast, a group of bed rest subjects randomized to regular supine lower-body negative pressure treadmill exercise showed an increase in LV mass and a preserved LV stoke volume.4 In previously sedentary healthy subjects, a 12-week moderate exercise program induced a mild cardiac hypertrophic response as measured by cardiac magnetic resonance imaging.5 These findings highlight the plasticity of LV mass and function in relation to the current level of physical activity.", "title": "" }, { "docid": "ec5ade0dd3aee92102934de27beb6b4f", "text": "This paper covers the whole process of developing an Augmented Reality Stereoscopig Render Engine for the Oculus Rift. To capture the real world in form of a camera stream, two cameras with fish-eye lenses had to be installed on the Oculus Rift DK1 hardware. The idea was inspired by Steptoe [1]. After the introduction, a theoretical part covers all the most neccessary elements to achieve an AR System for the Oculus Rift, following the implementation part where the code from the AR Stereo Engine is explained in more detail. A short conclusion section shows some results, reflects some experiences and in the final chapter some future works will be discussed. The project can be accessed via the git repository https: // github. com/ MaXvanHeLL/ ARift. git .", "title": "" } ]
scidocsrr
3875760461b1998be08f4c6af4a58c1f
Impact of digital control in power electronics
[ { "docid": "1648a759d2487177af4b5d62407fd6cd", "text": "This paper discusses the presence of steady-state limit cycles in digitally controlled pulse-width modulation (PWM) converters, and suggests conditions on the control law and the quantization resolution for their elimination. It then introduces single-phase and multi-phase controlled digital dither as a means of increasing the effective resolution of digital PWM (DPWM) modules, allowing for the use of low resolution DPWM units in high regulation accuracy applications. Bounds on the number of bits of dither that can be used in a particular converter are derived.", "title": "" } ]
[ { "docid": "fc9061348b46fc1bf7039fa5efcbcea1", "text": "We propose that a leadership identity is coconstructed in organizations when individuals claim and grant leader and follower identities in their social interactions. Through this claiming-granting process, individuals internalize an identity as leader or follower, and those identities become relationally recognized through reciprocal role adoption and collectively endorsed within the organizational context. We specify the dynamic nature of this process, antecedents to claiming and granting, and an agenda for research on leadership identity and development.", "title": "" }, { "docid": "e3c77ede3d63708b138b6aa240fea57b", "text": "We numerically investigated 3-dimensional (3D) sub-wavelength structured metallic nanohole films with various thicknesses using wavelength interrogation technique. The reflectivity and full-width at half maximum (FWHM) of the localized surface plasmon resonance (LSPR) spectra was calculated using finite-difference time domain (FDTD) method. Results showed that a 100nm-thick silver nanohole gave higher reflectivity of 92% at the resonance wavelength of 644nm. Silver, copper and aluminum structured thin films showed only a small difference in the reflectivity spectra for various metallic film thicknesses whereas gold thin films showed a reflectivity decrease as the film thickness was increased. However, all four types of metallic nanohole films exhibited increment in FWHM (broader curve) and the resonance wavelength was red-shifted as the film thicknesses were decreased.", "title": "" }, { "docid": "c4282486dad6f0fef06964bd3fa45272", "text": "In recent years, deep neural models have been widely adopted for text matching tasks, such as question answering and information retrieval, showing improved performance as compared with previous methods. In this paper, we introduce the MatchZoo toolkit that aims to facilitate the designing, comparing and sharing of deep text matching models. Speci€cally, the toolkit provides a uni€ed data preparation module for di‚erent text matching problems, a ƒexible layer-based model construction process, and a variety of training objectives and evaluation metrics. In addition, the toolkit has implemented two schools of representative deep text matching models, namely representation-focused models and interactionfocused models. Finally, users can easily modify existing models, create and share their own models for text matching in MatchZoo.", "title": "" }, { "docid": "a1fe9d395292fb3e4283f320022cacc7", "text": "Hepatitis A is a common disease in developing countries and Albania has a high prevalence of this disease associated to young age. In spite of the occurrence of a unique serotype there are different genotypes classified from I to VII. Genotype characterisation of HAV isolates circulating in Albania has been undertaken, as well as the study of the occurrence of antigenic variants in the proteins VP3 and VP1. To evaluate the genetic variability of the Albanian hepatitis A virus (HAV) isolates, samples were collected from 12 different cities, and the VP1/2A junction amplified and sequenced. These sequences were aligned and a phylogenetic analysis performed. Additionally, the amino half sequence of the protein VP3 and the complete sequence of the VP1 was determined. Anti-HAV IgM were present in 66.2% of all the sera. Fifty HAV isolates were amplified and the analysis revealed that all the isolates were sub-genotype IA with only limited mutations. 
When the deduced amino acid sequences were obtained, the alignment showed only two amino acids substitutions at positions 22 and 34 of the 2A protein. A higher genomic stability of the VP1/2A region, in contrast with what occurs in other parts of the world could be observed, indicating high endemicity of HAV in Albania. In addition, two potential antigenic variants were detected. The first at position 46 of VP3 in seven isolates and the second at position 23 of VP1 in six isolates.", "title": "" }, { "docid": "19d6ad18011815602854685211847c52", "text": "This paper presents a method for learning an And-Or model to represent context and occlusion for car detection and viewpoint estimation. The learned And-Or model represents car-to-car context and occlusion configurations at three levels: (i) spatially-aligned cars, (ii) single car under different occlusion configurations, and (iii) a small number of parts. The And-Or model embeds a grammar for representing large structural and appearance variations in a reconfigurable hierarchy. The learning process consists of two stages in a weakly supervised way (i.e., only bounding boxes of single cars are annotated). First, the structure of the And-Or model is learned with three components: (a) mining multi-car contextual patterns based on layouts of annotated single car bounding boxes, (b) mining occlusion configurations between single cars, and (c) learning different combinations of part visibility based on CAD simulations. The And-Or model is organized in a directed and acyclic graph which can be inferred by Dynamic Programming. Second, the model parameters (for appearance, deformation and bias) are jointly trained using Weak-Label Structural SVM. In experiments, we test our model on four car detection datasets-the KITTI dataset [1] , the PASCAL VOC2007 car dataset [2] , and two self-collected car datasets, namely the Street-Parking car dataset and the Parking-Lot car dataset, and three datasets for car viewpoint estimation-the PASCAL VOC2006 car dataset [2] , the 3D car dataset [3] , and the PASCAL3D+ car dataset [4] . Compared with state-of-the-art variants of deformable part-based models and other methods, our model achieves significant improvement consistently on the four detection datasets, and comparable performance on car viewpoint estimation.", "title": "" }, { "docid": "2496fa63868717ce2ed56c1777c4b0ed", "text": "Person re-identification (reID) is an important task that requires to retrieve a person’s images from an image dataset, given one image of the person of interest. For learning robust person features, the pose variation of person images is one of the key challenges. Existing works targeting the problem either perform human alignment, or learn human-region-based representations. Extra pose information and computational cost is generally required for inference. To solve this issue, a Feature Distilling Generative Adversarial Network (FD-GAN) is proposed for learning identity-related and pose-unrelated representations. It is a novel framework based on a Siamese structure with multiple novel discriminators on human poses and identities. In addition to the discriminators, a novel same-pose loss is also integrated, which requires appearance of a same person’s generated images to be similar. After learning pose-unrelated person features with pose guidance, no auxiliary pose information and additional computational cost is required during testing. 
Our proposed FD-GAN achieves state-of-the-art performance on three person reID datasets, which demonstrates that the effectiveness and robust feature distilling capability of the proposed FD-GAN. ‡‡", "title": "" }, { "docid": "338e037f4ec9f6215f48843b9d03f103", "text": "Sparse deep neural networks(DNNs) are efficient in both memory and compute when compared to dense DNNs. But due to irregularity in computation of sparse DNNs, their efficiencies are much lower than that of dense DNNs on general purpose hardwares. This leads to poor/no performance benefits for sparse DNNs. Performance issue for sparse DNNs can be alleviated by bringing structure to the sparsity and leveraging it for improving runtime efficiency. But such structural constraints often lead to sparse models with suboptimal accuracies. In this work, we jointly address both accuracy and performance of sparse DNNs using our proposed class of neural networks called HBsNN (Hierarchical Block sparse Neural Networks).", "title": "" }, { "docid": "fe08f3e1dc4fe2d71059b483c8532e88", "text": "Digital asset management (DAM) has increasing benefits in booming global Internet economy, but it is still a great challenge for providing an effective way to manage, store, ingest, organize and retrieve digital asset. To do it, we present a new digital asset management platform, called DAM-Chain, with Transaction-based Access Control (TBAC) which integrates the distribution ABAC model and the blockchain technology. In this platform, the ABAC provides flexible and diverse authorization mechanisms for digital asset escrowed into blockchain while the blockchain's transactions serve as verifiable and traceable medium of access request procedure. We also present four types of transactions to describe the TBAC access control procedure, and provide the algorithms of these transactions corresponding to subject registration, object escrowing and publication, access request and grant. By maximizing the strengths of both ABAC and blockchain, this platform can support flexible and diverse permission management, as well as verifiable and transparent access authorization process in an open decentralized environment.", "title": "" }, { "docid": "c7de7b159579b5c8668f2a072577322c", "text": "This paper presents a method for effectively using unlabeled sequential data in the learning of hidden Markov models (HMMs). With the conventional approach, class labels for unlabeled data are assigned deterministically by HMMs learned from labeled data. Such labeling often becomes unreliable when the number of labeled data is small. We propose an extended Baum-Welch (EBW) algorithm in which the labeling is undertaken probabilistically and iteratively so that the labeled and unlabeled data likelihoods are improved. Unlike the conventional approach, the EBW algorithm guarantees convergence to a local maximum of the likelihood. Experimental results on gesture data and speech data show that when labeled training data are scarce, by using unlabeled data, the EBW algorithm improves the classification performance of HMMs more robustly than the conventional naive labeling (NL) approach. 
keywords Unlabeled data, sequential data, hidden Markov models, extended Baum-Welch algorithm.", "title": "" }, { "docid": "41d32df9d58f9c38f75010c87c0c3327", "text": "Evidence from many countries in recent years suggests that collateral values and recovery rates on corporate defaults can be volatile and, moreover, that they tend to go down just when the number of defaults goes up in economic downturns. This link between recovery rates and default rates has traditionally been neglected by credit risk models, as most of them focused on default risk and adopted static loss assumptions, treating the recovery rate either as a constant parameter or as a stochastic variable independent from the probability of default. This traditional focus on default analysis has been partly reversed by the recent significant increase in the number of studies dedicated to the subject of recovery rate estimation and the relationship between default and recovery rates. This paper presents a detailed review of the way credit risk models, developed during the last thirty years, treat the recovery rate and, more specifically, its relationship with the probability of default of an obligor. We also review the efforts by rating agencies to formally incorporate recovery ratings into their assessment of corporate loan and bond credit risk and the recent efforts by the Basel Committee on Banking Supervision to consider “downturn LGD” in their suggested requirements under Basel II. Recent empirical evidence concerning these issues and the latest data on high-yield bond and leverage loan defaults is also presented and discussed.", "title": "" }, { "docid": "26439bd538c8f0b5d6fba3140e609aab", "text": "A planar antenna with a broadband feeding structure is presented and analyzed for ultrawideband applications. The proposed antenna consists of a suspended radiator fed by an n-shape microstrip feed. Study shows that this antenna achieves an impedance bandwidth from 3.1-5.1 GHz (48%) for a reflection of coefficient of iotaS11iota < -10 dB, and an average gain of 7.7 dBi. Stable boresight radiation patterns are achieved across the entire operating frequency band, by suppressing the high order mode resonances. This design exhibits good mechanical tolerance and manufacturability.", "title": "" }, { "docid": "6d26012bd529735410477c9f389bbf73", "text": "Most current planners assume complete domain models and focus on generating correct plans. Unfortunately, domain modeling is a laborious and error-prone task, thus real world agents have to plan with incomplete domain models. While domain experts cannot guarantee completeness, often they are able to circumscribe the incompleteness of the model by providing annotations as to which parts of the domain model may be incomplete. In this paper, we study planning problems with incomplete domain models where the annotations specify possible preconditions and effects of actions. We show that the problem of assessing the quality of a plan, or its plan robustness, is #P -complete, establishing its equivalence with the weighted model counting problems. We present two approaches to synthesizing robust plans. While the method based on the compilation to conformant probabilistic planning is much intuitive, its performance appears to be limited to only small problem instances. Our second approach based on stochastic heuristic search works well for much larger problems. It aims to use the robustness measure directly for estimating heuristic distance, which is then used to guide the search. 
Our planning system, PISA, outperforms a state-of-the-art planner handling incomplete domain models in most of the tested domains, both in terms of plan quality and planning time. Finally, we also present an extension of PISA called CPISA that is able to exploit the available of past successful plan traces to both improve the robustness of the synthesized plans and reduce the domain modeling burden.", "title": "" }, { "docid": "20b28dd4a0717add4e032976a7946109", "text": "In planning an s-curve speed profile for a computer numerical control (CNC) machine, centripetal acceleration and its derivative have to be considered. In a CNC machine, these quantities dictate how much voltage and current should be applied to servo motor windings. In this paper, the necessity of considering centripetal jerk in speed profile generation especially in the look-ahead mode is explained. It is demonstrated that the magnitude of centripetal jerk is proportional to the curvature derivative of the path known as \"sharpness\". It is also explained that a proper limited jerk motion is only possible when a G2-continuous machining path is planned. Then using a simplified mathematical representation of clothoids, a novel method for approximating a given path with a sequence of clothoid segments is proposed. Using this method, a semi-parallel G2-continuous path with adjustable deviation from the original shape for a sample machining contour is generated. Maximum permissible feed rate for the generated path is also calculated.", "title": "" }, { "docid": "c58fc1a572d5120e14eb6e501a50b8aa", "text": "475 Abstract— In this paper a dc-dc buck-boost converter is modeled and controlled using sliding mode technique. First the buck-boost converter is modeled and dynamic equations describing the converter are derived and sliding mode controller is designed. The robustness of the converter system is tested against step load changes and input voltage variations. Matlab/Simulink is used for the simulations. The simulation results are presented..", "title": "" }, { "docid": "5f57fdeba1afdfb7dcbd8832f806bc48", "text": "OBJECTIVES\nAdolescents spend increasingly more time on electronic devices, and sleep deficiency rising in adolescents constitutes a major public health concern. The aim of the present study was to investigate daytime screen use and use of electronic devices before bedtime in relation to sleep.\n\n\nDESIGN\nA large cross-sectional population-based survey study from 2012, the youth@hordaland study, in Hordaland County in Norway.\n\n\nSETTING\nCross-sectional general community-based study.\n\n\nPARTICIPANTS\n9846 adolescents from three age cohorts aged 16-19. The main independent variables were type and frequency of electronic devices at bedtime and hours of screen-time during leisure time.\n\n\nOUTCOMES\nSleep variables calculated based on self-report including bedtime, rise time, time in bed, sleep duration, sleep onset latency and wake after sleep onset.\n\n\nRESULTS\nAdolescents spent a large amount of time during the day and at bedtime using electronic devices. Daytime and bedtime use of electronic devices were both related to sleep measures, with an increased risk of short sleep duration, long sleep onset latency and increased sleep deficiency. 
A dose-response relationship emerged between sleep duration and use of electronic devices, exemplified by the association between PC use and risk of less than 5 h of sleep (OR=2.70, 95% CI 2.14 to 3.39), and comparable lower odds for 7-8 h of sleep (OR=1.64, 95% CI 1.38 to 1.96).\n\n\nCONCLUSIONS\nUse of electronic devices is frequent in adolescence, during the day as well as at bedtime. The results demonstrate a negative relation between use of technology and sleep, suggesting that recommendations on healthy media use could include restrictions on electronic devices.", "title": "" }, { "docid": "7299cec968f909f2bfce5182190d9fb2", "text": "Identifying and correcting syntax errors is a challenge all novice programmers confront. As educators, the more we understand about the nature of these errors and how students respond to them, the more effective our teaching can be. It is well known that just a few types of errors are far more frequently encountered by students learning to program than most. In this paper, we examine how long students spend resolving the most common syntax errors, and discover that certain types of errors are not solved any more quickly by the higher ability students. Moreover, we note that these errors consume a large amount of student time, suggesting that targeted teaching interventions may yield a significant payoff in terms of increasing student productivity.", "title": "" }, { "docid": "0508773a4c1a753918f21b8b97848a62", "text": "In this paper, the time dependent dielectric breakdown behavior is investigated for production type crystalline ZrO2-based thin films under dc and ac stress. Constant voltage stress measurements over six decades in time show that the voltage acceleration of time-to-breakdown follows the conventional exponential law. The effects of ac stress on time-to-breakdown are studied in detail by changing the experimental parameters including stress voltage, base voltage, and frequency. In general, ac stressing gives rise to a gain in lifetime, which may result from less overall charge trapping. This trap dynamic was investigated by dielectric absorption measurements. Overall, the typical DRAM refresh of the capacitor leads to the most critical reliability concern.", "title": "" }, { "docid": "0a6a3e82b701bfbdbb73a9e8573fc94a", "text": "Providing effective feedback on resource consumption in the home is a key challenge of environmental conservation efforts. One promising approach for providing feedback about residential energy consumption is the use of ambient and artistic visualizations. Pervasive computing technologies enable the integration of such feedback into the home in the form of distributed point-of-consumption feedback devices to support decision-making in everyday activities. However, introducing these devices into the home requires sensitivity to the domestic context. In this paper we describe three abstract visualizations and suggest four design requirements that this type of device must meet to be effective: pragmatic, aesthetic, ambient, and ecological. We report on the findings from a mixed methods user study that explores the viability of using ambient and artistic feedback in the home based on these requirements. 
Our findings suggest that this approach is a viable way to provide resource use feedback and that both the aesthetics of the representation and the context of use are important elements that must be considered in this design space.", "title": "" }, { "docid": "a4cfe72cae5bdaed110299d652e60a6f", "text": "Hoffa's (infrapatellar) fat pad (HFP) is one of the knee fat pads interposed between the joint capsule and the synovium. Located posterior to patellar tendon and anterior to the capsule, the HFP is richly innervated and, therefore, one of the sources of anterior knee pain. Repetitive local microtraumas, impingement, and surgery causing local bleeding and inflammation are the most frequent causes of HFP pain and can lead to a variety of arthrofibrotic lesions. In addition, the HFP may be secondarily involved to menisci and ligaments disorders, injuries of the patellar tendon and synovial disorders. Patients with oedema or abnormalities of the HFP on magnetic resonance imaging (MRI) are often symptomatic; however, these changes can also be seen in asymptomatic patients. Radiologists should be cautious in emphasising abnormalities of HFP since they do not always cause pain and/or difficulty in walking and, therefore, do not require therapy. Teaching Points • Hoffa's fat pad (HFP) is richly innervated and, therefore, a source of anterior knee pain. • HFP disorders are related to traumas, involvement from adjacent disorders and masses. • Patients with abnormalities of the HFP on MRI are often but not always symptomatic. • Radiologists should be cautious in emphasising abnormalities of HFP.", "title": "" }, { "docid": "ad8b5a47ede41c39a3ac5fa462dc8815", "text": "Because traditional electric power distribution systems have been designed assuming the primary substation is the sole source of power and short-circuit capacity, DR interconnection results in operating situations that do not occur in a conventional system. This paper discusses several system issues which may be encountered as DR penetrates into distribution systems. The voltage issues covered are the DR impact on system voltage, interaction of DR and capacitor operations, and interaction of DR and voltage regulator and LTC operations. Protection issues include fuse coordination, feeding faults after utility protection opens, impact of DR on interrupting rating of devices, faults on adjacent feeders, fault detection, ground source impacts, single phase interruption on three phase line, recloser coordination and conductor burndown. Loss of power grid is also discussed, including vulnerability and overvoltages due to islanding and coordination with reclosing. Also covered separately are system restoration and network issues.", "title": "" } ]
scidocsrr
3413d476d50b59d2eea2a236c19f9c37
User-centric ultra-dense networks for 5G: challenges, methodologies, and directions
[ { "docid": "e066761ecb7d8b7468756fb4be6b8fcb", "text": "The surest way to increase the system capacity of a wireless link is by getting the transmitter and receiver closer to each other, which creates the dual benefits of higher-quality links and more spatial reuse. In a network with nomadic users, this inevitably involves deploying more infrastructure, typically in the form of microcells, hot spots, distributed antennas, or relays. A less expensive alternative is the recent concept of femtocells - also called home base stations - which are data access points installed by home users to get better indoor voice and data coverage. In this article we overview the technical and business arguments for femtocells and describe the state of the art on each front. We also describe the technical challenges facing femtocell networks and give some preliminary ideas for how to overcome them.", "title": "" }, { "docid": "cdef5f6a50c1f427e8f37be3c6ebbccf", "text": "In this article, we summarize the 5G mobile communication requirements and challenges. First, essential requirements for 5G are pointed out, including higher traffic volume, indoor or hotspot traffic, and spectrum, energy, and cost efficiency. Along with these changes of requirements, we present a potential step change for the evolution toward 5G, which shows that macro-local coexisting and coordinating paths will replace one macro-dominated path as in 4G and before. We hereafter discuss emerging technologies for 5G within international mobile telecommunications. Challenges and directions in hardware, including integrated circuits and passive components, are also discussed. Finally, a whole picture for the evolution to 5G is predicted and presented.", "title": "" } ]
[ { "docid": "2c15bef67e6bdbfaf66e1164f8dddf52", "text": "Social behavior is ordinarily treated as being under conscious (if not always thoughtful) control. However, considerable evidence now supports the view that social behavior often operates in an implicit or unconscious fashion. The identifying feature of implicit cognition is that past experience influences judgment in a fashion not introspectively known by the actor. The present conclusion--that attitudes, self-esteem, and stereotypes have important implicit modes of operation--extends both the construct validity and predictive usefulness of these major theoretical constructs of social psychology. Methodologically, this review calls for increased use of indirect measures--which are imperative in studies of implicit cognition. The theorized ordinariness of implicit stereotyping is consistent with recent findings of discrimination by people who explicitly disavow prejudice. The finding that implicit cognitive effects are often reduced by focusing judges' attention on their judgment task provides a basis for evaluating applications (such as affirmative action) aimed at reducing such unintended discrimination.", "title": "" }, { "docid": "eab311504e78caa71bcd56043cfc6570", "text": "In sentence modeling and classification, convolutional neural network approaches have recently achieved state-of-the-art results, but all such efforts process word vectors sequentially and neglect long-distance dependencies. To exploit both deep learning and linguistic structures, we propose a tree-based convolutional neural network model which exploit various long-distance relationships between words. Our model improves the sequential baselines on all three sentiment and question classification tasks, and achieves the highest published accuracy on TREC.", "title": "" }, { "docid": "4e2bfd87acf1287f36694634a6111b3f", "text": "This paper presents a model for managing departure aircraft at the spot or gate on the airport surface. The model is applied over two time frames: long term (one hour in future) for collaborative decision making, and short term (immediate) for decisions regarding the release of aircraft. The purpose of the model is to provide the controller a schedule of spot or gate release times optimized for runway utilization. This model was tested in nominal and heavy surface traffic scenarios in a simulated environment, and results indicate average throughput improvement of 10% in high traffic scenarios even with up to two minutes of uncertainty in spot arrival times.", "title": "" }, { "docid": "29c52509c5235db62e2a586dbaf07ff6", "text": "This paper studies the area of fraud detection in the light of existing intrusion detection research. Fraud detection and intrusion detection have traditionally been two almost completely separate research areas. Fraud detection has long been used by such businesses as telecom companies, banks and insurance companies. Intrusion detection has recently become a popular means to protect computer systems and computer based services. Many of the services offered by businesses using fraud detection are now computer based, thus opening new ways of committing fraud not covered by traditional fraud detection systems. Merging fraud detection with intrusion detection may be a solution for protecting new computer based services. 
An IP based telecom service is used as an example to illustrate these new problems and the use of a suggested fraud model.", "title": "" }, { "docid": "085f6b8b53bd2e7afb5558e5b0b0356a", "text": "Computer graphics applications controlled through natural gestures are gaining increasing popularity these days due to recent developments in low-cost tracking systems and gesture recognition technologies. Although interaction techniques through natural gestures have already demonstrated their benefits in manipulation, navigation and avatar-control tasks, effective selection with pointing gestures remains an open problem. In this paper we survey the state-of-the-art in 3D object selection techniques. We review important findings in human control models, analyze major factors influencing selection performance, and classify existing techniques according to a number of criteria. Unlike other components of the application’s user interface, pointing techniques need a close coupling with the rendering pipeline, introducing new elements to be drawn, and potentially modifying the object layout and the way the scene is rendered. Conversely, selection performance is affected by rendering issues such as visual feedback, depth perception, and occlusion management. We thus review existing literature paying special attention to those aspects in the boundary between computer graphics and human computer interaction.", "title": "" }, { "docid": "1a59bf4467e73a6cae050e5670dbf4fa", "text": "BACKGROUND\nNivolumab combined with ipilimumab resulted in longer progression-free survival and a higher objective response rate than ipilimumab alone in a phase 3 trial involving patients with advanced melanoma. We now report 3-year overall survival outcomes in this trial.\n\n\nMETHODS\nWe randomly assigned, in a 1:1:1 ratio, patients with previously untreated advanced melanoma to receive nivolumab at a dose of 1 mg per kilogram of body weight plus ipilimumab at a dose of 3 mg per kilogram every 3 weeks for four doses, followed by nivolumab at a dose of 3 mg per kilogram every 2 weeks; nivolumab at a dose of 3 mg per kilogram every 2 weeks plus placebo; or ipilimumab at a dose of 3 mg per kilogram every 3 weeks for four doses plus placebo, until progression, the occurrence of unacceptable toxic effects, or withdrawal of consent. Randomization was stratified according to programmed death ligand 1 (PD-L1) status, BRAF mutation status, and metastasis stage. The two primary end points were progression-free survival and overall survival in the nivolumab-plus-ipilimumab group and in the nivolumab group versus the ipilimumab group.\n\n\nRESULTS\nAt a minimum follow-up of 36 months, the median overall survival had not been reached in the nivolumab-plus-ipilimumab group and was 37.6 months in the nivolumab group, as compared with 19.9 months in the ipilimumab group (hazard ratio for death with nivolumab plus ipilimumab vs. ipilimumab, 0.55 [P<0.001]; hazard ratio for death with nivolumab vs. ipilimumab, 0.65 [P<0.001]). The overall survival rate at 3 years was 58% in the nivolumab-plus-ipilimumab group and 52% in the nivolumab group, as compared with 34% in the ipilimumab group. The safety profile was unchanged from the initial report. 
Treatment-related adverse events of grade 3 or 4 occurred in 59% of the patients in the nivolumab-plus-ipilimumab group, in 21% of those in the nivolumab group, and in 28% of those in the ipilimumab group.\n\n\nCONCLUSIONS\nAmong patients with advanced melanoma, significantly longer overall survival occurred with combination therapy with nivolumab plus ipilimumab or with nivolumab alone than with ipilimumab alone. (Funded by Bristol-Myers Squibb and others; CheckMate 067 ClinicalTrials.gov number, NCT01844505 .).", "title": "" }, { "docid": "9735cecc4d8419475c72c4bd52ab556e", "text": "Information diffusion and virus propagation are fundamental processes talking place in networks. While it is often possible to directly observe when nodes become infected, observing individual transmissions (i.e., who infects whom or who influences whom) is typically very difficult. Furthermore, in many applications, the underlying network over which the diffusions and propagations spread is actually unobserved. We tackle these challenges by developing a method for tracing paths of diffusion and influence through networks and inferring the networks over which contagions propagate. Given the times when nodes adopt pieces of information or become infected, we identify the optimal network that best explains the observed infection times. Since the optimization problem is NP-hard to solve exactly, we develop an efficient approximation algorithm that scales to large datasets and in practice gives provably near-optimal performance. We demonstrate the effectiveness of our approach by tracing information cascades in a set of 170 million blogs and news articles over a one year period to infer how information flows through the online media space. We find that the diffusion network of news tends to have a core-periphery structure with a small set of core media sites that diffuse information to the rest of the Web. These sites tend to have stable circles of influence with more general news media sites acting as connectors between them.", "title": "" }, { "docid": "87fefee3cb35d188ad942ee7c8fad95f", "text": "Financial frictions are a central element of most of the models that the literature on emerging markets crises has proposed for explaining the ‘Sudden Stop’ phenomenon. To date, few studies have aimed to examine the quantitative implications of these models and to integrate them with an equilibrium business cycle framework for emerging economies. This paper surveys these studies viewing them as ability-to-pay and willingness-to-pay variations of a framework that adds occasionally binding borrowing constraints to the small open economy real-business-cycle model. A common feature of the different models is that agents factor in the risk of future Sudden Stops in their optimal plans, so that equilibrium allocations and prices are distorted even when credit constraints do not bind. Sudden Stops are a property of the unique, flexible-price competitive equilibrium of these models that occurs in a particular region of the state space in which negative shocks make borrowing constraints bind. The resulting nonlinear effects imply that solving the models requires non-linear numerical methods, which are described in the survey. The results show that the models can yield relatively infrequent Sudden Stops with large current account reversals and deep recessions nested within smoother business cycles. Still, research in this area is at an early stage and this survey aims to stimulate further work. Cristina Arellano Enrique G. 
Mendoza Department of Economics Department of Economics Social Sciences Building University of Maryland Duke University College Park, MD 20742 Durham, NC 27708-0097 and NBER mendozae@econ.duke.edu", "title": "" }, { "docid": "85d1d340f41d2da04d1dea7d70801df1", "text": "In this Part II of this paper we first refine the analysis of error-free vector transformations presented in Part I. Based on that we present an algorithm for calculating the rounded-to-nearest result of s := ∑ pi for a given vector of floatingpoint numbers pi, as well as algorithms for directed rounding. A special algorithm for computing the sign of s is given, also working for huge dimensions. Assume a floating-point working precision with relative rounding error unit eps. We define and investigate a K-fold faithful rounding of a real number r. Basically the result is stored in a vector Resν of K non-overlapping floating-point numbers such that ∑ Resν approximates r with relative accuracy epsK , and replacing ResK by its floating-point neighbors in ∑ Resν forms a lower and upper bound for r. For a given vector of floating-point numbers with exact sum s, we present an algorithm for calculating a K-fold faithful rounding of s using solely the working precision. Furthermore, an algorithm for calculating a faithfully rounded result of the sum of a vector of huge dimension is presented. Our algorithms are fast in terms of measured computing time because they allow good instruction-level parallelism, they neither require special operations such as access to mantissa or exponent, they contain no branch in the inner loop, nor do they require some extra precision: The only operations used are standard floating-point addition, subtraction and multiplication in one working precision, for example double precision. Certain constants used in the algorithms are proved to be optimal.", "title": "" }, { "docid": "8f2c7770fdcd9bfe6a7e9c6e10569fc7", "text": "The purpose of this paper is to explore the importance of Information Technology (IT) Governance models for public organizations and presenting an IT Governance model that can be adopted by both practitioners and researchers. A review of the literature in IT Governance has been initiated to shape the intended theoretical background of this study. The systematic literature review formalizes a richer context for the IT Governance concept. An empirical survey, using a questionnaire based on COBIT 4.1 maturity model used to investigate IT Governance practice in multiple case studies from Kingdom of Bahrain. This method enabled the researcher to gain insights to evaluate IT Governance practices. The results of this research will enable public sector organizations to adopt an IT Governance model in a simple and dynamic manner. The model provides a basic structure of a concept; for instance, this allows organizations to gain a better perspective on IT Governance processes and provides a clear focus for decision-making attention. IT Governance model also forms as a basis for further research in IT Governance adoption models and bridges the gap between conceptual frameworks, real life and functioning governance.", "title": "" }, { "docid": "40f452c48367c51cfe6bd95a6b8f9548", "text": "This paper presents a new single-phase, Hybrid Switched Reluctance (HSR) motor for low-cost, low-power, pump or fan drive systems. Its single-phase configuration allows use of a simple converter to reduce the system cost. 
Cheap ferrite magnets are used and arranged in a special flux concentration manner to effectively increase the torque density and efficiency of this machine. The efficiency of this machine is comparable to the efficiency of a traditional permanent magnet machine in a similar power range. The cogging torque, due to the existence of the permanent magnetic field, is beneficially used to reduce the torque ripple and enable self-starting of the machine. The starting torque of this machine is significantly improved by a slight extension of the stator pole-arc. A prototype machine and a complete drive system have been manufactured and tested. Results are given in this paper.", "title": "" }, { "docid": "5a4959ef609e2ed64018aed292b7f27f", "text": "With thousands of alerts identified by IDSs every day, the process of distinguishing which alerts are important (i.e., true positives) and which are irrelevant (i.e., false positives) is becoming more complicated. The security administrator must analyze each single alert as either a true or a false alert. This paper proposes an alert prioritization model, which is based on risk assessment. The model uses indicators, such as priority, reliability, and asset value, as decision factors to calculate an alert's risk. The objective is to determine the impact of certain alerts generated by IDS on the security status of an information system, and also to improve the detection of intrusions using Snort by classifying the most critical alerts by their levels of risk; thus, only the alerts that present a real threat will be displayed to the security administrator, so we reduce the number of false positives and minimize the analysis time of the alerts. The model was evaluated using the KDD Cup 99 Dataset as the test environment and a pattern matching algorithm.", "title": "" }, { "docid": "e7a6bb8f63e35f3fb0c60bdc26817e03", "text": "A simple mechanism is presented, based on ant-like agents, for routing and load balancing in telecommunications networks, following the initial works of Appleby and Stewart (1994) and Schoonderwoerd et al. (1997). In the present work, agents are very similar to those proposed by Schoonderwoerd et al. (1997), but are supplemented with a simplified dynamic programming capability, initially experimented by Guérin (1997) with more complex agents, which is shown to significantly improve the network's relaxation and its response to perturbations. Topic area: Intelligent agents and network management", "title": "" }, { "docid": "a637d37cb1c4a937b64494903b33193d", "text": "The multienzyme complexes, pyruvate dehydrogenase and alpha-ketoglutarate dehydrogenase, involved in the central metabolism of Escherichia coli consist of multiple copies of three different enzymes, E1, E2 and E3, that cooperate to channel substrate intermediates between their active sites. The E2 components form the core of the complex, while a mixture of E1 and E3 components binds to the core. We present a random steady-state model to describe catalysis by such multienzyme complexes. At a fast time scale, the model describes the enzyme catalytic mechanisms of substrate channeling at a steady state, by polynomially approximating the analytic solution of a biochemical master equation. At a slower time scale, the structural organization of the different enzymes in the complex and their random binding/unbinding to the core is modeled using methods from equilibrium statistical mechanics. 
Biologically, the model describes the optimization of catalytic activity by substrate sharing over the entire enzyme complex. The resulting enzymatic models illustrate the random steady state (RSS) for modeling multienzyme complexes in metabolic pathways.", "title": "" }, { "docid": "2487c225879ab88c0d56ab9c91793346", "text": "The purpose of this article is to propose a Sustainable Balanced Scorecard model for Chilean wineries (SBSC). This system, which is based on the Balanced Scorecard (BSC), one of the most widespread management systems nowadays in the world, Rigby, and Bilodeau (2011), will allow the wine companies to manage the business in two dimensions: sustainability, which will measure how sustainable is the business and the temporal dimension, linking the measurement of strategic performance with the day to day. To achieve the target previously raised, a research on sustainability will be developed, along with strategic performance measurement systems and a diagnosis of the Chilean wine industry, based on in-depth interviews to 42 companies in the central zone of Chile. On the basis of the assessment of the wine industry carried out, it is concluded that the bases for a future design and implementation of the SBSC system are in place since it was found that 83% of the vineyards have a strategic plan formally in place, which corresponds to the input of the proposed system.", "title": "" }, { "docid": "9eb4a4519e9a1e3a7547520a23adcaf2", "text": "We introduce an approach for deep reinforcement learning (RL) that improves upon the efficiency, generalization capacity, and interpretability of conventional approaches through structured perception and relational reasoning. It uses self-attention to iteratively reason about the relations between entities in a scene and to guide a model-free policy. Our results show that in a novel navigation and planning task called Box-World, our agent finds interpretable solutions that improve upon baselines in terms of sample complexity, ability to generalize to more complex scenes than experienced during training, and overall performance. In the StarCraft II Learning Environment, our agent achieves state-of-the-art performance on six mini-games – surpassing human grandmaster performance on four. By considering architectural inductive biases, our work opens new directions for overcoming important, but stubborn, challenges in deep RL.", "title": "" }, { "docid": "fdc580124be4f1398976d4161791bf8a", "text": "Child abuse is a problem that affects over six million children in the United States each year. Child neglect accounts for 78 % of those cases. Despite this, the issue of child neglect is still not well understood, partially because child neglect does not have a consistent, universally accepted definition. Some researchers consider child neglect and child abuse to be one in the same, while other researchers consider them to be conceptually different. Factors that make child neglect difficult to define include: (1) Cultural differences; motives must be taken into account because parents may believe they are acting in the child’s best interests based on cultural beliefs (2) the fact that the effect of child abuse is not always immediately visible; the effects of emotional neglect specifically may not be apparent until later in the child’s development, and (3) the large spectrum of actions that fall under the category of child abuse. Some of the risk factors for increased child neglect and maltreatment have been identified. 
These risk factors include socioeconomic status, education level, family composition, and the presence of dysfunctional family characteristics. Studies have found that children from poorer families and children of less educated parents are more likely to sustain fatal unintentional injuries than children of wealthier, better educated parents. Studies have also found that children living with adults unrelated to them are at increased risk for unintentional injuries and maltreatment. Dysfunctional family characteristics may even be more indicative of child neglect. Parental alcohol or drug abuse, parental personal history of neglect, and parental stress greatly increase the odds of neglect. Parental depression doubles the odds of child neglect. However, more research needs to be done to better understand these risk factors and to identify others. Having a clearer understanding of the risk factors could lead to prevention and treatment, as it would allow for health care personnel to screen for high-risk children and intervene before it is too late. Screening could also be done in the schools and organized after-school activities. Parenting classes have been shown to be an effective intervention strategy by decreasing parental stress and potential for abuse, but there has been limited research done on this approach. Parenting classes can be part of the corrective actions for parents found to be neglectful or abusive, but parenting classes may also be useful as a preventative measure, being taught in schools or readily available in higher-risk communities. More research has to be done to better define child abuse and neglect so that it can be effectively addressed and treated.", "title": "" }, { "docid": "4b432638ecceac3d1948fb2b2e9be49b", "text": "Software process refers to the set of tools, methods, and practices used to produce a software artifact. The objective of a software process management model is to produce software artifacts according to plans while simultaneously improving the organization's capability to produce better artifacts. The SEI's Capability Maturity Model (CMM) is a software process management model; it assists organizations to provide the infrastructure for achieving a disciplined and mature software process. There is a growing concern that the CMM is not applicable to small firms because it requires a huge investment. In fact, detailed studies of the CMM show that its applications may cost well over $100,000. This article attempts to address the above concern by studying the feasibility of a scaled-down version of the CMM for use in small software firms. The logic for a scaled-down CMM is that the same quantitative quality control principles that work for larger projects can be scaled down and adopted for smaller ones. Both the CMM and the Personal Software Process (PSP) are briefly described and are used as a basis.", "title": "" }, { "docid": "23d560ca3bb6f2d7d9b615b5ad3224d2", "text": "The Pebbles project is creating applications to connect multiple Personal Digital Assistants (PDAs) to a main computer such as a PC. We are currently using 3Com PalmPilots because they are popular and widespread. We created the “Remote Commander” application to allow users to take turns sending input from their PalmPilots to the PC as if they were using the PC's mouse and keyboard. “PebblesDraw” is a shared whiteboard application we built that allows all of the users to send input simultaneously while sharing the same PC display. 
We are investigating the use of these applications in various contexts, such as co-located meetings. Keywords: Personal Digital Assistants (PDAs), PalmPilot, Single Display Groupware, Pebbles, Amulet.", "title": "" }, { "docid": "6927647b1e1f6bf9bcf65db50e9f8d6e", "text": "Six of the ten leading causes of death in the United States can be directly linked to diet. Measuring accurate dietary intake, the process of determining what someone eats, is considered to be an open research problem in the nutrition and health fields. We are developing image-based tools in order to automatically obtain accurate estimates of what foods a user consumes. We have developed a novel food record application using the embedded camera in a mobile device. This paper describes the current status of food image analysis and overviews problems that still need to be addressed.", "title": "" } ]
scidocsrr
814c1782754e9015ed744f83d481626f
Japan's 2014 General Election: Political Bots, Right-Wing Internet Activism, and Prime Minister Shinzō Abe's Hidden Nationalist Agenda
[ { "docid": "940df82b743d99cb3f6dff903920482f", "text": "Online publishing, social networks, and web search have dramatically lowered the costs to produce, distribute, and discover news articles. Some scholars argue that such technological changes increase exposure to diverse perspectives, while others worry they increase ideological segregation. We address the issue by examining web browsing histories for 50,000 U.S.-located users who regularly read online news. We find that social networks and search engines increase the mean ideological distance between individuals. However, somewhat counterintuitively, we also find these same channels increase an individual’s exposure to material from his or her less preferred side of the political spectrum. Finally, we show that the vast majority of online news consumption is accounted for by individuals simply visiting the home pages of their favorite, typically mainstream, news outlets, tempering the consequences—both positive and negative—of recent technological changes. We thus uncover evidence for both sides of the debate, while also finding that the magnitude of the e↵ects are relatively modest. WORD COUNT: 5,762 words", "title": "" }, { "docid": "0bc29304bd058053d6d0440f60f884d5", "text": "YouTube is one of the more powerful tools for self-learning and entertaining globally. Uploading and sharing on YouTube have increased recently as these are possible via a simple click. Moreover, some countries, including Saudi Arabia, use this technology more than others. While there are many Saudi channels and videos for all age groups, there are limited channels for people with disabilities such as Deaf and Hard of Hearing people (DHH). The utilization of YouTube among DHH people has not reached its full potential. To investigate this phenomenon, we conducted an empirical research study to uncover factors influencing DHH people’s motivations, perceptions and adoption of YouTube, based on the Technology Acceptance Model (TAM). The results showed that DHH people pinpoint some useful functions in YouTube, such as the captions in English and the translation in Arabic. However, Arab DHH people are not sufficiently motivated to watch YouTube due to the fact that the YouTube time-span is fast and DHH personnel prefer greater time to allow them to read and understand the contents. Hence, DHH people tend to avoid sharing YouTube videos among their contacts.", "title": "" } ]
[ { "docid": "2f0eb4a361ff9f09bda4689a1f106ff2", "text": "The growth of Quranic digital publishing increases the need to develop a better framework to authenticate Quranic quotes with the original source automatically. This paper aims to demonstrate the significance of the quote authentication approach. We propose an approach to verify the e-citation of the Quranic quote as compared with original texts from the Quran. In this paper, we will concentrate mainly on discussing the Algorithm to verify the fundamental text for Quranic quotes.", "title": "" }, { "docid": "eff3b5c790b62021d4615f4a1708d707", "text": "Web services are becoming business-critical components that must provide a non-vulnerable interface to the client applications. However, previous research and practice show that many web services are deployed with critical vulnerabilities. SQL Injection vulnerabilities are particularly relevant, as web services frequently access a relational database using SQL commands. Penetration testing and static code analysis are two well-know techniques often used for the detection of security vulnerabilities. In this work we compare how effective these two techniques are on the detection of SQL Injection vulnerabilities in web services code. To understand the strengths and limitations of these techniques, we used several commercial and open source tools to detect vulnerabilities in a set of vulnerable services. Results suggest that, in general, static code analyzers are able to detect more SQL Injection vulnerabilities than penetration testing tools. Another key observation is that tools implementing the same detection approach frequently detect different vulnerabilities. Finally, many tools provide a low coverage and a high false positives rate, making them a bad option for programmers.", "title": "" }, { "docid": "ddc56e9f2cbe9c086089870ccec7e510", "text": "Serotonin is an ancient monoamine neurotransmitter, biochemically derived from tryptophan. It is most abundant in the gastrointestinal tract, but is also present throughout the rest of the body of animals and can even be found in plants and fungi. Serotonin is especially famous for its contributions to feelings of well-being and happiness. More specifically it is involved in learning and memory processes and is hence crucial for certain behaviors throughout the animal kingdom. This brief review will focus on the metabolism, biological role and mode-of-action of serotonin in insects. First, some general aspects of biosynthesis and break-down of serotonin in insects will be discussed, followed by an overview of the functions of serotonin, serotonin receptors and their pharmacology. Throughout this review comparisons are made with the vertebrate serotonergic system. Last but not least, possible applications of pharmacological adjustments of serotonin signaling in insects are discussed.", "title": "" }, { "docid": "6f9afe3cbf5cc675c6b4e96ee2ccfa76", "text": "As more firms begin to collect (and seek value from) richer customer-level datasets, a focus on the emerging concept of customer-base analysis is becoming increasingly common and critical. Such analyses include forward-looking projections ranging from aggregate-level sales trajectories to individual-level conditional expectations (which, in turn, can be used to derive estimates of customer lifetime value). We provide an overview of a class of parsimonious models (called probability models) that are well-suited to meet these rising challenges. 
We first present a taxonomy that captures some of the key distinctions across different kinds of business settings and customer relationships, and identify some of the unique modeling and measurement issues that arise across them. We then provide deeper coverage of these modeling issues, first for noncontractual settings (i.e., situations in which customer “death” is unobservable), then contractual ones (i.e., situations in which customer “death” can be observed). We review recent literature in these areas, highlighting substantive insights that arise from the research as well as the methods used to capture them. We focus on practical applications that use appropriately chosen data summaries (such as recency and frequency) and rely on commonly available software packages (such as Microsoft Excel). n 2009 Direct Marketing Educational Foundation, Inc. Published by Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "d089515aa3325010616010d9017f158e", "text": "We report a receiver for four-level pulse-amplitude modulated (PAM-4) encoded data signals, which was measured to receive data at 22 Gb/s with a bit error rate (BER) <10/sup -12/ at a maximum frequency deviation of 350 ppm and a 2/sup 7/-1 PRBS pattern. We propose a bit-sliced architecture for the data path, and a novel voltage shifting amplifier to introduce a programmable offset to the differential data signal. We present a novel method to characterize sampling latches and include them in the data path. A current-mode logic (CML) biasing scheme using programmable matched resistors limits the effect of process variations. The receiver also features a programmable signal termination, an analog equalizer and offset compensation for each sampling latch. The measured current consumption is 207 mA from a 1.1-V supply, and the active chip area is 0.12 mm/sup 2/.", "title": "" }, { "docid": "5efe4e98fd21e83033669aaf58857bf6", "text": "Object detection in optical remote sensing images, being a fundamental but challenging problem in the field of aerial and satellite image analysis, plays an important role for a wide range of applications and is receiving significant attention in recent years. While enormous methods exist, a deep review of the literature concerning generic object detection is still lacking. This paper aims to provide a review of the recent progress in this field. Different from several previously published surveys that focus on a specific object class such as building and road, we concentrate on more generic object categories including, but are not limited to, road, building, tree, vehicle, ship, airport, urban-area. Covering about 270 publications we survey 1) template matching-based object detection methods, 2) knowledge-based object detection methods, 3) object-based image analysis (OBIA)-based object detection methods, 4) machine learning-based object detection methods, and 5) five publicly available datasets and three standard evaluation metrics. We also discuss the challenges of current studies and propose two promising research directions, namely deep learning-based feature representation and weakly supervised learning-based geospatial object detection. It is our hope that this survey will be beneficial for the researchers to have better understanding of this research field.", "title": "" }, { "docid": "646a1a07019d0f2965051baebcfe62c5", "text": "We present a computing model based on the DNA strand displacement technique, which performs Bayesian inference. 
The model will take single-stranded DNA as input data, that represents the presence or absence of a specific molecular signal (evidence). The program logic encodes the prior probability of a disease and the conditional probability of a signal given the disease affecting a set of different DNA complexes and their ratios. When the input and program molecules interact, they release a different pair of single-stranded DNA species whose ratio represents the application of Bayes’ law: the conditional probability of the disease given the signal. The models presented in this paper can have the potential to enable the application of probabilistic reasoning in genetic diagnosis in vitro.", "title": "" }, { "docid": "6ccb58b003394200846205914989b88f", "text": "This paper describes a new, large scale discourse-level annotation project – the Penn Discourse TreeBank (PDTB). We present an approach to annotating a level of discourse structure that is based on identifying discourse connectives and their arguments. The PDTB is being built directly on top of the Penn TreeBank and Propbank, thus supporting the extraction of useful syntactic and semantic features and providing a richer substrate for the development and evaluation of practical algorithms. We provide a detailed preliminary analysis of inter-annotator agreement – both the level of agreement and the types of inter-annotator variation.", "title": "" }, { "docid": "11d8d62d92cb5cda76f817530132bd3e", "text": "This paper describes the development of a human airbag system which is designed to reduce the impact force from falls. A micro inertial measurement unit (muIMU), based on MEMS accelerometers and gyro sensors is developed as the motion sensing part of the system. A recognition algorithm is used for real-time fall determination. With the algorithm, a microcontroller integrated with the muIMU can discriminate falling-down motion from normal human motions and trigger an airbag system when a fall occurs. Our airbag system is designed to have fast response with moderate input pressure, i.e., the experimental response time is less than 0.3 second under 0.4 MPa. In addition, we present our progress on using support vector machine (SVM) training together with the muIMU to better distinguish falling and normal motions. Experimental results show that selected eigenvector sets generated from 200 experimental data sets can be accurately separated into falling and other motions", "title": "" }, { "docid": "b4fddc33bdf1afc1bc3e867d8d560bf1", "text": "Answering questions from university admission exams (Gaokao in Chinese) is a challenging AI task since it requires effective representation to capture complicated semantic relations between questions and answers. In this work, we propose a hybrid neural model for deep questionanswering task from history examinations. Our model employs a cooperative gated neural network to retrieve answers with the assistance of extra labels given by a neural turing machine labeler. Empirical study shows that the labeler works well with only a small training dataset and the gated mechanism is good at fetching the semantic representation of lengthy answers. 
Experiments on question answering demonstrate the proposed model obtains substantial performance gains over various neural model baselines in terms of multiple evaluation metrics.", "title": "" }, { "docid": "38f85a10e8f8b815974f5e42386b1fa3", "text": "Because Facebook is available on hundreds of millions of desktop and mobile computing platforms around the world and because it is available on many different kinds of platforms (from desktops and laptops running Windows, Unix, or OS X to hand held devices running iOS, Android, or Windows Phone), it would seem to be the perfect place to conduct steganography. On Facebook, information hidden in image files will be further obscured within the millions of pictures and other images posted and transmitted daily. Facebook is known to alter and compress uploaded images so they use minimum space and bandwidth when displayed on Facebook pages. The compression process generally disrupts attempts to use Facebook for image steganography. This paper explores a method to minimize the disruption so JPEG images can be used as steganography carriers on Facebook.", "title": "" }, { "docid": "65946b75e84eaa86caf909d4c721a190", "text": "The Park Geun-hye Administration of Korea (2013–2017) aims to increase the level of transparency and citizen trust in government through the Government 3.0 initiative. This new initiative for public sector innovation encourages citizen-government collaboration and collective intelligence, thereby improving the quality of policy-making and implementation and solving public problems in a new way. However, the national initiative that identifies collective intelligence and citizen-government collaboration alike fails to understand what the wisdom of crowds genuinely means. Collective intelligence is not a magic bullet to solve public problems, which are called “wicked problems”. Collective deliberation over public issues often brings pain and patience, rather than fun and joy. It is not so easy that the public finds the best solution for soothing public problems through collective deliberation. The Government 3.0 initiative does not pay much attention to difficulties in gathering scattered wisdom, but rather highlights uncertain opportunities created by collective interactions and communications. This study deeply discusses the weaknesses in the logic of, and approach to, collective intelligence underlying the Government 3.0 initiative in Korea and the overall influence of the national initiative on participatory democracy.", "title": "" }, { "docid": "67ca7b4e38b545cd34ef79f305655a45", "text": "Failsafe performance is clarified for electric vehicles (EVs) with the drive structure driven by front and rear wheels independently, i.e., front and rear wheel independent drive type (FRID) EV. A simulator based on the four-wheel vehicle model, which can be applied to various types of drive systems like four in-wheel motor-drive-type EVs, is used for the clarification. Yaw rate and skid angle, which are related to drivability and steerability of vehicles and which further influence the safety of vehicles during runs, are analyzed under the condition that one of the motor drive systems fails while cornering on wet roads. 
In comparison with the four in-wheel motor-drive-type EVs, it is confirmed that the EVs with the structure focused in this paper have little change of the yaw rate and that hardly any dangerous phenomena appear, which would cause an increase in the skid angle of vehicles even if the front or rear wheel drive systems fail when running on wet roads with low friction coefficient. Moreover, the failsafe drive performance of the FRID EVs with the aforementioned structure is verified through experiments using a prototype EV.", "title": "" }, { "docid": "d0a9e27e2a8e4f6c2f40355bdc7a0a97", "text": "The abilities to identify with others and to distinguish between self and other play a pivotal role in intersubjective transactions. Here, we marshall evidence from developmental science, social psychology and neuroscience (including clinical neuropsychology) that support the view of a common representation network (both at the computational and neural levels) between self and other. However, sharedness does not mean identicality, otherwise representations of self and others would completely overlap, and lead to confusion. We argue that self-awareness and agency are integral components for navigating within these shared representations. We suggest that within this shared neural network the inferior parietal cortex and the prefrontal cortex in the right hemisphere play a special role in interpersonal awareness.", "title": "" }, { "docid": "6b19185466fb134b6bfb09b04b9e4b15", "text": "BACKGROUND\nThe increasing concern about the adverse effects of overuse of smartphones during clinical practicum implies the need for policies restricting smartphone use while attending to patients. It is important to educate health personnel about the potential risks that can arise from the associated distraction.\n\n\nOBJECTIVE\nThe aim of this study was to analyze the relationship between the level of nomophobia and the distraction associated with smartphone use among nursing students during their clinical practicum.\n\n\nMETHODS\nA cross-sectional study was carried out on 304 nursing students. The nomophobia questionnaire (NMP-Q) and a questionnaire about smartphone use, the distraction associated with it, and opinions about phone restriction policies in hospitals were used.\n\n\nRESULTS\nA positive correlation between the use of smartphones and the total score of nomophobia was found. In the same way, there was a positive correlation between opinion about smartphone restriction polices with each of the dimensions of nomophobia and the total score of the questionnaire.\n\n\nCONCLUSIONS\nNursing students who show high levels of nomophobia also regularly use their smartphones during their clinical practicum, although they also believe that the implementation of policies restricting smartphone use while working is necessary.", "title": "" }, { "docid": "1abef5c69eab484db382cdc2a2a1a73f", "text": "Conventional methods of 3D object generative modeling learn volumetric predictions using deep networks with 3D convolutional operations, which are direct analogies to classical 2D ones. However, these methods are computationally wasteful in attempt to predict 3D shapes, where information is rich only on the surfaces. In this paper, we propose a novel 3D generative modeling framework to efficiently generate object shapes in the form of dense point clouds. We use 2D convolutional operations to predict the 3D structure from multiple viewpoints and jointly apply geometric reasoning with 2D projection optimization. 
We introduce the pseudo-renderer, a differentiable module to approximate the true rendering operation, to synthesize novel depth maps for optimization. Experimental results for single-image 3D object reconstruction tasks show that we outperform state-of-the-art methods in terms of shape similarity and prediction density.", "title": "" }, { "docid": "2fb484ef6d394e27a3157774048c3917", "text": "As the demand for high-quality service in next-generation wireless communication systems increases, high-performance data transmission requires an increase in spectrum efficiency and an improvement in error performance in wireless communication systems. One of the promising approaches to 4G is adaptive OFDM (AOFDM). In AOFDM, an adaptive transmission scheme is employed according to the channel fading condition with OFDM to improve the performance. An adaptive modulation system is superior to a fixed modulation system since it changes the modulation scheme according to the channel fading condition. The performance of an adaptive modulation system depends on its decision-making logic. Adaptive modulation systems using hardware decision-making circuits are inefficient at deciding or changing the modulation scheme according to given conditions. Using fuzzy logic in the decision-making interface makes the system more efficient. In this paper, we propose an OFDM system with adaptive modulation using a fuzzy logic interface to improve system capacity while maintaining good error performance. The results of computer simulation show the improvement of system capacity in a Rayleigh fading channel.", "title": "" }, { "docid": "35d9cfbb5f0b2623ce83973ae3235c74", "text": "Text entry has been a bottleneck of nontraditional computing devices. One of the promising methods is the virtual keyboard for touch screens. Correcting previous estimates on virtual keyboard efficiency in the literature, we estimated the potential performance of the existing QWERTY, FITALY, and OPTI designs of virtual keyboards to be in the neighborhood of 28, 36, and 38 words per minute (wpm), respectively. This article presents 2 quantitative design techniques to search for virtual keyboard layouts. The first technique simulated the dynamics of a keyboard with digraph springs between keys, which produced a Hooke keyboard with 41.6 wpm movement efficiency. The second technique used a Metropolis random walk algorithm guided by a “Fitts-digraph energy” objective function that quantifies the movement efficiency of a virtual keyboard. This method produced various Metropolis keyboards with different shapes and structures with approximately 42.5 wpm movement efficiency, which was 50% higher than QWERTY and 10% higher than OPTI.
With a small reduction (41.16 wpm) of movement efficiency, we introduced 2 more design objectives that produced the ATOMIK layout. One was alphabetical tuning that placed the keys with a tendency from A to Z so a novice user could more easily locate the keys. The other was word connectivity enhancement so the most frequent words were easier to find, remember, and type.", "title": "" }, { "docid": "39ed08e9a08b7d71a4c177afe8f0056a", "text": "This paper proposes an anticipation model of potential customers’ purchasing behavior. This model is inferred from past purchasing behavior of loyal customers and the web server log files of loyal and potential customers by means of clustering analysis and association rules analysis. Clustering analysis collects key characteristics of loyal customers’ personal information; these are used to locate other potential customers. Association rules analysis extracts knowledge of loyal customers’ purchasing behavior, which is used to detect potential customers’ near-future interest in a star product. Despite using offline analysis to filter out potential customers based on loyal customers’ personal information and generate rules of loyal customers’ click streams based on loyal customers’ web log data, an online analysis which observes potential customers’ web logs and compares it with loyal customers’ click stream rules can more readily target potential customers who may be interested in the star products in the near future. 2006 Elsevier Ltd. All rights reserved.", "title": "" } ]
scidocsrr
20186e8baf3f94aa23c08db0803db717
Snake-Based Segmentation of Teeth from Virtual Dental Casts
[ { "docid": "b82adc75ccdf7bd437f969d226bc29a1", "text": "Snakes, or active contours, are used extensively in computer vision and image processing applications, particularly to locate object boundaries. Problems associated with initialization and poor convergence to concave boundaries, however, have limited their utility. This paper develops a new external force for active contours, largely solving both problems. This external force, which we call gradient vector flow (GVF), is computed as a diffusion of the gradient vectors of a gray-level or binary edge map derived from the image. The resultant field has a large capture range and forces active contours into concave regions. Examples on simulated images and one real image are presented.", "title": "" }, { "docid": "78fc46165449f94e75e70a2654abf518", "text": "This paper presents a non-photorealistic rendering technique that automatically generates a line drawing from a photograph. We aim at extracting a set of coherent, smooth, and stylistic lines that effectively capture and convey important shapes in the image. We first develop a novel method for constructing a smooth direction field that preserves the flow of the salient image features. We then introduce the notion of flow-guided anisotropic filtering for detecting highly coherent lines while suppressing noise. Our method is simple and easy to implement. A variety of experimental results are presented to show the effectiveness of our method in producing self-contained, high-quality line illustrations.", "title": "" } ]
[ { "docid": "ef79fbd26ad0bdc951edcdef8bcffdbf", "text": "Question answering (Q&A) sites, where communities of volunteers answer questions, may provide faster, cheaper, and better services than traditional institutions. However, like other Web 2.0 platforms, user-created content raises concerns about information quality. At the same time, Q&A sites may provide answers of different quality because they have different communities and technological platforms. This paper compares answer quality on four Q&A sites: Askville, WikiAnswers, Wikipedia Reference Desk, and Yahoo! Answers. Findings indicate that: 1) the use of similar collaborative processes on these sites results in a wide range of outcomes. Significant differences in answer accuracy, completeness, and verifiability were found; 2) answer multiplication does not always result in better information. Answer multiplication yields more complete and verifiable answers but does not result in higher accuracy levels; and 3) a Q&A site’s popularity does not correlate with its answer quality, on all three measures.", "title": "" }, { "docid": "6d825778d5d2cb935aab35c60482a267", "text": "As the workforce ages rapidly in industrialized countries, a phenomenon known as the graying of the workforce, new challenges arise for firms as they have to juggle this dramatic demographical change (Trend 1) in conjunction with the proliferation of increasingly modern information and communication technologies (ICTs) (Trend 2). Although these two important workplace trends are pervasive, their interdependencies have remained largely unexplored. While Information Systems (IS) research has established the pertinence of age to IS phenomena from an empirical perspective, it has tended to model the concept merely as a control variable with limited understanding of its conceptual nature. In fact, even the few IS studies that used the concept of age as a substantive variable have mostly relied on stereotypical accounts alone to justify their age-related hypotheses. Further, most of these studies have examined the role of age in the same phenomenon (i.e., initial adoption of ICTs), implying a marked lack of diversity with respect to the phenomena under investigation. Overall, IS research has yielded only limited insight into the role of age in phenomena involving ICTs. In this essay, we argue for the importance of studying agerelated impacts more carefully and across various IS phenomena, and we enable such research by providing a research agenda that IS scholars can use. In doing so, we hope that future research will further both our empirical and conceptual understanding of the managerial challenges arising from the interplay of a graying workforce and rapidly evolving ICTs. 2014 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "69102c54448921bfbc63c007cc927b8d", "text": "Multi-goal reinforcement learning (MGRL) addresses tasks where the desired goal state can change for every trial. State-of-the-art algorithms model these problems such that the reward formulation depends on the goals, to associate them with high reward. This dependence introduces additional goal reward resampling steps in algorithms like Hindsight Experience Replay (HER) that reuse trials in which the agent fails to reach the goal by recomputing rewards as if reached states were psuedo-desired goals. We propose a reformulation of goal-conditioned value functions for MGRL that yields a similar algorithm, while removing the dependence of reward functions on the goal. 
Our formulation thus obviates the requirement of reward-recomputation that is needed by HER and its extensions. We also extend a closely related algorithm, Floyd-Warshall Reinforcement Learning, from tabular domains to deep neural networks for use as a baseline. Our results are competitive with HER while substantially improving sampling efficiency in terms of reward computation.", "title": "" }, { "docid": "b33b10f3b6720b1bec3a030f236ac16c", "text": "In this paper, we present a unified model for the automatic induction of word senses from text, and the subsequent disambiguation of particular word instances using the automatically extracted sense inventory. The induction step and the disambiguation step are based on the same principle: words and contexts are mapped to a limited number of topical dimensions in a latent semantic word space. The intuition is that a particular sense is associated with a particular topic, so that different senses can be discriminated through their association with particular topical dimensions; in a similar vein, a particular instance of a word can be disambiguated by determining its most important topical dimensions. The model is evaluated on the SEMEVAL-2010 word sense induction and disambiguation task, on which it reaches stateof-the-art results.", "title": "" }, { "docid": "875548b7dc303bef8efa8284216e010d", "text": "BACKGROUND\nGigantomastia is a breast disorder marked by exaggerated rapid growth of the breasts, generally bilaterally. Since this disorder is very rare and has been reported only in sparse case reports its etiology has yet to be fully established. Treatment is aimed at improving the clinical and psychological symptoms and reducing the treatment side effects; however, the best therapeutic option varies from case to case.\n\n\nCASE PRESENTATION\nThe present report described a case of gestational gigantomastia in a 30-year-old woman, gravida 2, parity 1, 17 week pregnant admitted to Pars Hospital, Tehran, Iran, on May 2014. The patient was admitted to hospital at week 17 of pregnancy, although her breasts initially had begun to enlarge from the first trimester. The patient developed hypercalcemia in her 32nd week of pregnancy. The present report followed this patient from diagnosis until the completion of treatment.\n\n\nCONCLUSION\nAlthough gestational gigantomastia is a rare condition, its timely prognosis and careful examination of some conditions like hyperprolactinemia and hypercalcemia is essential in successful management of this condition.", "title": "" }, { "docid": "26f76aa41a64622ee8f0eaaed2aac529", "text": "OBJECTIVE\nIn this study, we explored the impact of an occupational therapy wellness program on daily habits and routines through the perspectives of youth and their parents.\n\n\nMETHOD\nData were collected through semistructured interviews with children and their parents, the Pizzi Healthy Weight Management Assessment(©), and program activities.\n\n\nRESULTS\nThree themes emerged from the interviews: Program Impact, Lessons Learned, and Time as a Barrier to Health. The most common areas that both youth and parents wanted to change were time spent watching television and play, fun, and leisure time. 
Analysis of activity pie charts indicated that the youth considerably increased active time in their daily routines from Week 1 to Week 6 of the program.\n\n\nCONCLUSION\nAn occupational therapy program focused on health and wellness may help youth and their parents be more mindful of their daily activities and make health behavior changes.", "title": "" }, { "docid": "274485dd39c0727c99fcc0a07d434b25", "text": "The fetal mortality rate is considered a good measure of the quality of health care in a country or a medical facility. If we look at the current scenario, we find that we have focused more on the child mortality rate than on fetal mortality. The situation is much the same even in developed countries. Our aim is to provide a technological solution to help decrease the fetal mortality rate. Pregnant women also have to come to the hospital 2-3 times a week for their regular checkups, which becomes a problem for working women and for women with diabetes or other diseases. For these reasons it would be very helpful if they could do these checkups by themselves at home. This would reduce the frequency of their visits to the hospital while causing no compromise in the wellbeing of either the mother or the child. The end-to-end system consists of wearable sensors, built into a fabric belt, that collect and send the patient's vital signs via Bluetooth to smart mobile phones for further processing; the data are then made available to the required personnel, allowing efficient monitoring and alerting when attention is required in often challenging and chaotic scenarios.", "title": "" }, { "docid": "d3b6fcc353382c947cfb0b4a73eda0ef", "text": "Robust object tracking is a challenging task in computer vision. To better solve the partial occlusion issue, part-based methods are widely used in visual object trackers. However, due to the complicated online training and updating process, most of these part-based trackers cannot run in real-time. Correlation filters have been used in tracking tasks recently because of their high efficiency. However, the conventional correlation filter based trackers cannot deal with occlusion. Furthermore, most correlation filter based trackers fix the scale and rotation of the target, which makes the trackers unreliable in long-term tracking tasks. In this paper, we propose a novel tracking method which tracks objects based on parts with multiple correlation filters. Our method can run in real-time. Additionally, the Bayesian inference framework and a structural constraint mask are adopted to enable our tracker to be robust to various appearance changes. Extensive experiments have been done to prove the effectiveness of our method.", "title": "" }, { "docid": "d01692a4ee83531badacea6658b74d8f", "text": "Question Answering (QA) research for factoid questions has recently achieved great success. Presently, QA systems developed for European, Middle Eastern and Asian languages are capable of providing answers with reasonable accuracy. However, although Bengali is among the most spoken languages in the world, no factoid question answering system has been available for Bengali to date. This paper describes the first attempt at building a factoid question answering system for the Bengali language. The challenges in developing a question answering system for Bengali have been discussed. Extraction and ranking of relevant sentences have also been proposed. 
Also extraction strategy of the ranked answers from the relevant sentences are suggested for Bengali question answering system.", "title": "" }, { "docid": "32e2c444bfbe7c85ea600c2b91bf2370", "text": "The consumption of caffeine (an adenosine receptor antagonist) correlates inversely with depression and memory deterioration, and adenosine A2A receptor (A2AR) antagonists emerge as candidate therapeutic targets because they control aberrant synaptic plasticity and afford neuroprotection. Therefore we tested the ability of A2AR to control the behavioral, electrophysiological, and neurochemical modifications caused by chronic unpredictable stress (CUS), which alters hippocampal circuits, dampens mood and memory performance, and enhances susceptibility to depression. CUS for 3 wk in adult mice induced anxiogenic and helpless-like behavior and decreased memory performance. These behavioral changes were accompanied by synaptic alterations, typified by a decrease in synaptic plasticity and a reduced density of synaptic proteins (synaptosomal-associated protein 25, syntaxin, and vesicular glutamate transporter type 1), together with an increased density of A2AR in glutamatergic terminals in the hippocampus. Except for anxiety, for which results were mixed, CUS-induced behavioral and synaptic alterations were prevented by (i) caffeine (1 g/L in the drinking water, starting 3 wk before and continued throughout CUS); (ii) the selective A2AR antagonist KW6002 (3 mg/kg, p.o.); (iii) global A2AR deletion; and (iv) selective A2AR deletion in forebrain neurons. Notably, A2AR blockade was not only prophylactic but also therapeutically efficacious, because a 3-wk treatment with the A2AR antagonist SCH58261 (0.1 mg/kg, i.p.) reversed the mood and synaptic dysfunction caused by CUS. These results herald a key role for synaptic A2AR in the control of chronic stress-induced modifications and suggest A2AR as candidate targets to alleviate the consequences of chronic stress on brain function.", "title": "" }, { "docid": "45fb31643f4fd53b08c51818f284f2df", "text": "This paper introduces a new type of fuzzy inference systems, denoted as dynamic evolving neural-fuzzy inference system (DENFIS), for adaptive online and offline learning, and their application for dynamic time series prediction. DENFIS evolve through incremental, hybrid (supervised/unsupervised), learning, and accommodate new input data, including new features, new classes, etc., through local element tuning. New fuzzy rules are created and updated during the operation of the system. At each time moment, the output of DENFIS is calculated through a fuzzy inference system based on -most activated fuzzy rules which are dynamically chosen from a fuzzy rule set. Two approaches are proposed: 1) dynamic creation of a first-order Takagi–Sugeno-type fuzzy rule set for a DENFIS online model; and 2) creation of a first-order Takagi–Sugeno-type fuzzy rule set, or an expanded high-order one, for a DENFIS offline model. A set of fuzzy rules can be inserted into DENFIS before or during its learning process. Fuzzy rules can also be extracted during or after the learning process. An evolving clustering method (ECM), which is employed in both online and offline DENFIS models, is also introduced. 
It is demonstrated that DENFIS can effectively learn complex temporal sequences in an adaptive way and outperform some well-known, existing models.", "title": "" }, { "docid": "f69f8b58e926a8a4573dd650ee29f80b", "text": "Zab is a crash-recovery atomic broadcast algorithm we designed for the ZooKeeper coordination service. ZooKeeper implements a primary-backup scheme in which a primary process executes clients operations and uses Zab to propagate the corresponding incremental state changes to backup processes1. Due the dependence of an incremental state change on the sequence of changes previously generated, Zab must guarantee that if it delivers a given state change, then all other changes it depends upon must be delivered first. Since primaries may crash, Zab must satisfy this requirement despite crashes of primaries.", "title": "" }, { "docid": "d7102755d7934532e1de73815e282f27", "text": "We present an application of Monte Carlo tree search (MCTS) for the game of Ms Pac-Man. Contrary to most applications of MCTS to date, Ms Pac-Man requires almost real-time decision making and does not have a natural end state. We approached the problem by performing Monte Carlo tree searches on a five player maxn tree representation of the game with limited tree search depth. We performed a number of experiments using both the MCTS game agents (for pacman and ghosts) and agents used in previous work (for ghosts). Performance-wise, our approach gets excellent scores, outperforming previous non-MCTS opponent approaches to the game by up to two orders of magnitude.", "title": "" }, { "docid": "8010361144a7bd9fc336aba88f6e8683", "text": "Moving garments and other cloth objects exhibit dynamic, complex wrinkles. Generating such wrinkles in a virtual environment currently requires either a time-consuming manual design process, or a computationally expensive simulation, often combined with accurate parameter-tuning requiring specialized animator skills. Our work presents an alternative approach for wrinkle generation which combines coarse cloth animation with a post-processing step for efficient generation of realistic-looking fine dynamic wrinkles. Our method uses the stretch tensor of the coarse animation output as a guide for wrinkle placement. To ensure temporal coherence, the placement mechanism uses a space-time approach allowing not only for smooth wrinkle appearance and disappearance, but also for wrinkle motion, splitting, and merging over time. Our method generates believable wrinkle geometry using specialized curve-based implicit deformers. The method is fully automatic and has a single user control parameter that enables the user to mimic different fabrics.", "title": "" }, { "docid": "d1c34dda56e06cdae9d23c2e1cec41d2", "text": "The detection of loop closure is of essential importance in visual simultaneous localization and mapping systems. It can reduce the accumulating drift of localization algorithms if the loops are checked correctly. Traditional loop closure detection approaches take advantage of Bag-of-Words model, which clusters the feature descriptors as words and measures the similarity between the observations in the word space. However, the features are usually designed artificially and may not be suitable for data from new-coming sensors. In this paper a novel loop closure detection approach is proposed that learns features from raw data using deep neural networks instead of common visual features. We discuss the details of the method of training neural networks. 
Experiments on an open dataset are also demonstrated to evaluate the performance of the proposed method. It can be seen that the neural network is feasible to solve this problem.", "title": "" }, { "docid": "ee37a743edd1b87d600dcf2d0050ca18", "text": "Recommender systems play a crucial role in mitigating the problem of information overload by suggesting users' personalized items or services. The vast majority of traditional recommender systems consider the recommendation procedure as a static process and make recommendations following a fixed strategy. In this paper, we propose a novel recommender system with the capability of continuously improving its strategies during the interactions with users. We model the sequential interactions between users and a recommender system as a Markov Decision Process (MDP) and leverage Reinforcement Learning (RL) to automatically learn the optimal strategies via recommending trial-and-error items and receiving reinforcements of these items from users' feedback. Users' feedback can be positive and negative and both types of feedback have great potentials to boost recommendations. However, the number of negative feedback is much larger than that of positive one; thus incorporating them simultaneously is challenging since positive feedback could be buried by negative one. In this paper, we develop a novel approach to incorporate them into the proposed deep recommender system (DEERS) framework. The experimental results based on real-world e-commerce data demonstrate the effectiveness of the proposed framework. Further experiments have been conducted to understand the importance of both positive and negative feedback in recommendations.", "title": "" }, { "docid": "8d176debd26505d424dcbf8f5cfdb4d1", "text": "We present a system for training deep neural networks for object detection using synthetic images. To handle the variability in real-world data, the system relies upon the technique of domain randomization, in which the parameters of the simulator-such as lighting, pose, object textures, etc.-are randomized in non-realistic ways to force the neural network to learn the essential features of the object of interest. We explore the importance of these parameters, showing that it is possible to produce a network with compelling performance using only non-artistically-generated synthetic data. With additional fine-tuning on real data, the network yields better performance than using real data alone. This result opens up the possibility of using inexpensive synthetic data for training neural networks while avoiding the need to collect large amounts of hand-annotated real-world data or to generate high-fidelity synthetic worlds-both of which remain bottlenecks for many applications. The approach is evaluated on bounding box detection of cars on the KITTI dataset.", "title": "" }, { "docid": "04b32423acd23c03188ca8bf208a24fd", "text": "We extend the notion of memristive systems to capacitive and inductive elements, namely, capacitors and inductors whose properties depend on the state and history of the system. All these elements typically show pinched hysteretic loops in the two constitutive variables that define them: current-voltage for the memristor, charge-voltage for the memcapacitor, and current-flux for the meminductor. We argue that these devices are common at the nanoscale, where the dynamical properties of electrons and ions are likely to depend on the history of the system, at least within certain time scales. 
These elements and their combination in circuits open up new functionalities in electronics and are likely to find applications in neuromorphic devices to simulate learning, adaptive, and spontaneous behavior.", "title": "" }, { "docid": "4df6678c57115f6179587cff1cc5f228", "text": "Depth maps captured by Kinect-like cameras are lack of depth data in some areas and suffer from heavy noise. These defects have negative impacts on practical applications. In order to enhance the depth maps, this paper proposes a new inpainting algorithm that extends the original fast marching method (FMM) to reconstruct unknown regions. The extended FMM incorporates an aligned color image as the guidance for inpainting. An edge-preserving guided filter is further applied for noise reduction. To validate our algorithm and compare it with other existing methods, we perform experiments on both the Kinect data and the Middlebury dataset which, respectively, provide qualitative and quantitative results. The results show that our method is efficient and superior to others.", "title": "" }, { "docid": "ec3246cab3c6d8720a5fee5351869b79", "text": "We present the first study of Native Language Identification (NLI) applied to text written in languages other than English, using data from six languages. NLI is the task of predicting an author’s first language (L1) using only their writings in a second language (L2), with applications in Second Language Acquisition and forensic linguistics. Most research to date has focused on English but there is a need to apply NLI to other languages, not only to gauge its applicability but also to aid in teaching research for other emerging languages. With this goal, we identify six typologically very different sources of non-English L2 data and conduct six experiments using a set of commonly used features. Our first two experiments evaluate our features and corpora, showing that the features perform well and at similar rates across languages. The third experiment compares non-native and native control data, showing that they can be discerned with 95% accuracy. Our fourth experiment provides a cross-linguistic assessment of how the degree of syntactic data encoded in part-of-speech tags affects their efficiency as classification features, finding that most differences between L1 groups lie in the ordering of the most basic word categories. We also tackle two questions that have not previously been addressed for NLI. Other work in NLI has shown that ensembles of classifiers over feature types work well and in our final exper2 S. Malmasi and M. Dras iment we use such an oracle classifier to derive an upper limit for classification accuracy with our feature set. We also present an analysis examining feature diversity, aiming to estimate the degree of overlap and complementarity between our chosen features employing an association measure for binary data. Finally, we conclude with a general discussion and outline directions for future work.", "title": "" } ]
scidocsrr
54706ff3e9726dc1bfa45766d3892b23
Anatomical evaluation of the modified posterolateral approach for posterolateral tibial plateau fracture
[ { "docid": "ec69b95261fc19183a43c0e102f39016", "text": "The selection of a surgical approach for the treatment of tibia plateau fractures is an important decision. Approximately 7% of all tibia plateau fractures affect the posterolateral corner. Displaced posterolateral tibia plateau fractures require anatomic articular reduction and buttress plate fixation on the posterior aspect. These aims are difficult to reach through a lateral or anterolateral approach. The standard posterolateral approach with fibula osteotomy and release of the posterolateral corner is a traumatic procedure, which includes the risk of fragment denudation. Isolated posterior approaches do not allow sufficient visual control of fracture reduction, especially if the fracture is complex. Therefore, the aim of this work was to present a surgical approach for posterolateral tibial plateau fractures that both protects the soft tissue and allows for good visual control of fracture reduction. The approach involves a lateral arthrotomy for visualizing the joint surface and a posterolateral approach for the fracture reduction and plate fixation, which are both achieved through one posterolateral skin incision. Using this approach, we achieved reduction of the articular surface and stable fixation in six of seven patients at the final follow-up visit. No complications and no loss of reduction were observed. Additionally, the new posterolateral approach permits direct visual exposure and facilitates the application of a buttress plate. Our approach does not require fibular osteotomy, and fragments of the posterolateral corner do not have to be detached from the soft tissue network.", "title": "" }, { "docid": "ad8762ae878b7e731b11ab6d67f9867d", "text": "We describe a posterolateral transfibular neck approach to the proximal tibia. This approach was developed as an alternative to the anterolateral approach to the tibial plateau for the treatment of two fracture subtypes: depressed and split depressed fractures in which the comminution and depression are located in the posterior half of the lateral tibial condyle. These fractures have proved particularly difficult to reduce and adequately internally fix through an anterior or anterolateral approach. The approach described in this article exposes the posterolateral aspect of the tibial plateau between the posterior margin of the iliotibial band and the posterior cruciate ligament. The approach allows lateral buttressing of the lateral tibial plateau and may be combined with a simultaneous posteromedial and/or anteromedial approach to the tibial plateau. Critically, the proximal tibial soft tissue envelope and its blood supply are preserved. To date, we have used this approach either alone or in combination with a posteromedial approach for the successful reduction of tibial plateau fractures in eight patients. No complications related to this approach were documented, including no symptoms related to the common peroneal nerve, and all fractures and fibular neck osteotomies healed uneventfully.", "title": "" } ]
[ { "docid": "8954672b2e2b6351abfde0747fd5d61c", "text": "Sentiment Analysis (SA), an application of Natural Language processing (NLP), has been witnessed a blooming interest over the past decade. It is also known as opinion mining, mood extraction and emotion analysis. The basic in opinion mining is classifying the polarity of text in terms of positive (good), negative (bad) or neutral (surprise). Mood Extraction automates the decision making performed by human. It is the important aspect for capturing public opinion about product preferences, marketing campaigns, political movements, social events and company strategies. In addition to sentiment analysis for English and other European languages, this task is applied on various Indian languages like Bengali, Hindi, Telugu and Malayalam. This paper describes the survey on main approaches for performing sentiment extraction.", "title": "" }, { "docid": "670d4860fc3172b7ffa429268462b64d", "text": "This article describes the benefits and risks of providing RPDs. It emphasises the importance of co-operation between the dental team and patient to ensure that the balance of this 'equation' is in the patient's favour.", "title": "" }, { "docid": "496a7b9155ad336e178a62545b7eb0b7", "text": "A B S T R AC T Existing approaches to organizational discourse, which we label as ‘managerialist’, ‘interpretive’ and ‘critical’, either privilege agency at the expense of structure or the other way around. This tension reflects that between approaches to discourse in the social sciences more generally but is sharper in the organizational context, where discourse is typically temporally and contextually specific and imbued with attributions of instrumental intent. As the basis for a more sophisticated understanding of organizational discourse, we draw on the work of Giddens to develop a structurational conceptualization in which discourse is viewed as a duality of communicative actions and structural properties, recursively linked through the modality of actors’ interpretive schemes. We conclude by exploring some of the theoretical implications of this conceptualization and its consequences for the methodology of organizational discourse analysis.", "title": "" }, { "docid": "af9e3268901a46967da226537eba3cb6", "text": "Magnetic Resonance Imaging (MRI) is a non-invasive diagnostic tool very frequently used for brain 8 imaging. The classification of MRI images of normal and pathological brain conditions pose a challenge from 9 technological and clinical point of view, since MR imaging focuses on soft tissue anatomy and generates a large 10 information set and these can act as a mirror reflecting the conditions of the brain. A new approach by 11 integrating wavelet entropy based spider web plots and probabilistic neural network is proposed for the 12 classification of MRI brain images. The two step method for classification uses (1) wavelet entropy based spider 13 web plots for the feature extraction and (2) probabilistic neural network for the classification. The spider web 14 plot is a geometric construction drawn using the entropy of the wavelet approximation components and the areas 15 calculated are used as feature set for classification. Probabilistic neural network provides a general solution to 16 the pattern classification problems and the classification accuracy is found to be 100%. 
Keywords: Magnetic Resonance Imaging (MRI), Wavelet Transformation, Entropy, Spider Web Plots, Probabilistic Neural Network", "title": "" }, { "docid": "8e3cc3937f91c12bb5d515f781928f8b", "text": "As the size of data sets in the cloud increases rapidly, how to process large amounts of data efficiently has become a critical issue. MapReduce provides a framework for large data processing and is shown to be scalable and fault-tolerant on commodity machines. However, it has a higher learning curve than SQL-like languages and the codes are hard to maintain and reuse. On the other hand, traditional SQL-based data processing is familiar to users but is limited in scalability. In this paper, we propose a hybrid approach to fill the gap between SQL-based and MapReduce data processing. We develop a data management system for the cloud, named SQLMR. SQLMR compiles SQL-like queries to a sequence of MapReduce jobs. Existing SQL-based applications are seamlessly compatible with SQLMR, and users can manage Tera- to PetaByte-scale data with SQL-like queries instead of writing MapReduce code. We also devise a number of optimization techniques to improve the performance of SQLMR. The experiment results demonstrate both the performance and scalability advantages of SQLMR compared to MySQL and two NoSQL data processing systems, Hive and HadoopDB.", "title": "" }, { "docid": "8c174dbb8468b1ce6f4be3676d314719", "text": "An estimated 24 million people worldwide have dementia, the majority of whom are thought to have Alzheimer's disease. Thus, Alzheimer's disease represents a major public health concern and has been identified as a research priority. Although there are licensed treatments that can alleviate symptoms of Alzheimer's disease, there is a pressing need to improve our understanding of pathogenesis to enable development of disease-modifying treatments. Methods for improving diagnosis are also moving forward, but a better consensus is needed for development of a panel of biological and neuroimaging biomarkers that support clinical diagnosis. There is now strong evidence of potential risk and protective factors for Alzheimer's disease, dementia, and cognitive decline, but further work is needed to understand these better and to establish whether interventions can substantially lower these risks. In this Seminar, we provide an overview of recent evidence regarding the epidemiology, pathogenesis, diagnosis, and treatment of Alzheimer's disease, and discuss potential ways to reduce the risk of developing the disease.", "title": "" }, { "docid": "d4fb67823dd774e3efc25de61b8e503c", "text": "Light scattering from hair is normally simulated in computer graphics using Kajiya and Kay’s classic phenomenological model. We have made new measurements that exhibit visually significant effects not predicted by Kajiya and Kay’s model. Our measurements go beyond previous hair measurements by examining out-of-plane scattering, and together with this previous work they show a multiple specular highlight and variation in scattering with rotation about the fiber axis. We explain the sources of these effects using a model of a hair fiber as a transparent elliptical cylinder with an absorbing interior and a surface covered with tilted scales. Based on an analytical scattering function for a circular cylinder, we propose a practical shading model for hair that qualitatively matches the scattering behavior shown in the measurements. 
In a comparison between a photograph and rendered images, we demonstrate the new model’s ability to match the appearance of real hair. CR Categories: I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism—Shading", "title": "" }, { "docid": "0e0f78b8839f4724153b8931342824d2", "text": "The acquisition, processing, and interpretation of thermal images from unmanned aerial vehicles (UAVs) is becoming a useful source of information for agronomic applications because of the higher temporal and spatial resolution of these products compared with those obtained from satellites. However, due to the low load capacity of the UAV they need to mount light, uncooled thermal cameras, where the microbolometer is not stabilized to a constant temperature. This makes the camera precision low for many applications. Additionally, the low contrast of the thermal images makes the photogrammetry process inaccurate, which result in large errors in the generation of orthoimages. In this research, we propose the use of new calibration algorithms, based on neural networks, which consider the sensor temperature and the digital response of the microbolometer as input data. In addition, we evaluate the use of the Wallis filter for improving the quality of the photogrammetry process using structure from motion software. With the proposed calibration algorithm, the measurement accuracy increased from 3.55 °C with the original camera configuration to 1.37 °C. The implementation of the Wallis filter increases the number of tie-point from 58,000 to 110,000 and decreases the total positing error from 7.1 m to 1.3 m.", "title": "" }, { "docid": "4f6f225f978bbf00c20f80538dc12aad", "text": "A smart building is created when it is engineered, delivered and operated smart. The Internet of Things (IoT) is advancing a new breed of smart buildings enables operational systems that deliver more accurate and useful information for improving operations and providing the best experiences for tenants. Big Data Analytics framework analyze building data to uncover new insight capable of driving real value and greater performance. Internet of Things technologies enhance the situational awareness or “smartness” of service providers and consumers alike. There is a need for an integrated IoT Big Data Analytics framework to fill the research gap in the Big Data Analytics domain. This paper also presents a novel approach for mobile phone centric observation applied to indoor localization for smart buildings. The applicability of the framework of this paper is demonstrated with the help of a scenario involving the analysis of real-time smart building data for automatically managing the oxygen level, luminosity and smoke/hazardous gases in different parts of the smart building. Lighting control in smart buildings and homes can be automated by having computer controlled lights and blinds along with illumination sensors that are distributed in the building. This paper gives an overview of an approach that algorithmically sets up the control system that can automate any building without custom programming. The resulting system controls blinds to ensure even lighting and also adds artificial illumination to ensure light coverage remains adequate at all times of the day, adjusting for weather and seasons. 
The key contribution of this paper is the complex integration of Big Data Analytics and IoT for addressing the large volume and velocity challenge of real-time data in the smart building domain.", "title": "" }, { "docid": "97f89b905d51d2965c60bb4bbed08b4c", "text": "This communication deals with simultaneous generation of a contoured and a pencil beam from a single shaped reflector with two feeds. A novel concept of generating a high gain pencil beam from a shaped reflector is presented using focal plane conjugate field matching method. The contoured beam is generated from the shaped reflector by introducing deformations in a parabolic reflector surface. This communication proposes a simple method to counteract the effects of shaping and generate an additional high gain pencil beam from the shaped reflector. This is achieved by using a single feed which is axially and laterally displaced from the focal point. The proposed method is successfully applied to generate an Indian main land coverage contoured beam and a high gain pencil beam over Andaman Islands. The contoured beam with peak gain of 33.05 dBi and the pencil beam with 43.8 dBi peak gain is generated using the single shaped reflector and two feeds. This technique saves mass and volume otherwise would have required for feed cluster to compensate for the surface distortion.", "title": "" }, { "docid": "6177d208d27ecc9dee54b988d1c2bc2d", "text": "Animal learning is driven not only by biological needs but also by intrinsic motivations (IMs) serving the acquisition of knowledge. Computational modeling involving IMs is indicating that learning of motor skills requires that autonomous agents self-generate tasks/goals and use them to acquire skills solving/leading to them. We propose a neural architecture driven by IMs that is able to self-generate goals on the basis of the environmental changes caused by the agent’s actions. The main novelties of the model are that it is focused on the acquisition of attention (looking) skills and that its architecture and functioning are broadly inspired by the functioning of relevant primate brain areas (superior colliculus, basal ganglia, and frontal cortex). These areas, involved in IM-based behavior learning, play important functions for reflexive and voluntary attention. The model is tested within a simple simulated pan-tilt camera robot engaged in learning to switch on different lights by looking at them, and is able to self-generate visual goals and learn attention skills under IM guidance. The model represents a novel hypothesis on how primates and robots might autonomously learn attention skills and has a potential to account for developmental psychology experiments and the underlying brain mechanisms.", "title": "" }, { "docid": "0bd981ea6d38817b560383f48fdfb729", "text": "Lightweight wheelchairs are characterized by their low cost and limited range of adjustment. Our study evaluated three different folding lightweight wheelchair models using the American National Standards Institute/Rehabilitation Engineering Society of North America (ANSI/RESNA) standards to see whether quality had improved since the previous data were reported. On the basis of reports of increasing breakdown rates in the community, we hypothesized that the quality of these wheelchairs had declined. Seven of the nine wheelchairs tested failed to pass the multidrum test durability requirements. 
An average of 194,502 +/- 172,668 equivalent cycles was completed, which is similar to the previous test results and far below the 400,000 minimum required to pass the ANSI/RESNA requirements. This was also significantly worse than the test results for aluminum ultralight folding wheelchairs. Overall, our results uncovered some disturbing issues with these wheelchairs and suggest that manufacturers should put more effort into this category to improve quality. To improve the durability of lightweight wheelchairs, we suggested that stronger regulations be developed that require wheelchairs to be tested by independent and certified test laboratories. We also proposed a wheelchair rating system based on the National Highway Transportation Safety Administration vehicle crash ratings to assist clinicians and end users when comparing the durability of different wheelchairs.", "title": "" }, { "docid": "7cc20934720912ad1c056dc9afd97e18", "text": "Hidden Markov models (HMM’s) have been used prominently and successfully in speech recognition and, more recently, in handwriting recognition. Consequently, they seem ideal for visual recognition of complex, structured hand gestures such as are found in sign language. We describe two experiments that. demonstrate a real-time HMM-based system for recognizing sentence level American Sign Language (ASL) without explicitly modeling the fingers. The first experiment tracks hands wearing colored gloves and attains a word accuracy of 99%. The second experiment tracks hands without gloves and attains a word accuracy of 92%. Both experiments have a 40 word lexicon.", "title": "" }, { "docid": "73723bf217557d8269cb0c23140e2ec9", "text": "The uniform one-dimensional fragment of first-order logic, U1, is a recently introduced formalism that extends two-variable logic in a natural way to contexts with relations of all arities. We survey properties of U1 and investigate its relationship to description logics designed to accommodate higher arity relations, with particular attention given to DLRreg . We also define a description logic version of a variant of U1 and prove a range of new results concerning the expressivity of U1 and related logics.", "title": "" }, { "docid": "1450c2025de3ea31271c9d6c56be016f", "text": "The vast increase in clinical data has the potential to bring about large improvements in clinical quality and other aspects of healthcare delivery. However, such benefits do not come without cost. The analysis of such large datasets, particularly where the data may have to be merged from several sources and may be noisy and incomplete, is a challenging task. Furthermore, the introduction of clinical changes is a cyclical task, meaning that the processes under examination operate in an environment that is not static. We suggest that traditional methods of analysis are unsuitable for the task, and identify complexity theory and machine learning as areas that have the potential to facilitate the examination of clinical quality. By its nature the field of complex adaptive systems deals with environments that change because of the interactions that have occurred in the past. We draw parallels between health informatics and bioinformatics, which has already started to successfully use machine learning methods.", "title": "" }, { "docid": "7687f85746acf4e3cd24d512e5efd31e", "text": "Thyroid eye disease is a multifactorial autoimmune disease with a spectrum of signs and symptoms. 
Oftentimes, the diagnosis of thyroid eye disease is straightforward, based upon history and physical examination. The purpose of this review is to assist the eye-care practitioner in staging the severity of thyroid eye disease (mild, moderate-to-severe and sight-threatening) and correlating available treatment modalities. Eye-care practitioners play an important role in the multidisciplinary team by assessing functional vision while also managing ocular health.", "title": "" }, { "docid": "265421a07efc8ab26a6766f90bf53245", "text": "Recently, there has been much excitement in the research community over using social networks to mitigate multiple identity, or Sybil, attacks. A number of schemes have been proposed, but they differ greatly in the algorithms they use and in the networks upon which they are evaluated. As a result, the research community lacks a clear understanding of how these schemes compare against each other, how well they would work on real-world social networks with different structural properties, or whether there exist other (potentially better) ways of Sybil defense.\n In this paper, we show that, despite their considerable differences, existing Sybil defense schemes work by detecting local communities (i.e., clusters of nodes more tightly knit than the rest of the graph) around a trusted node. Our finding has important implications for both existing and future designs of Sybil defense schemes. First, we show that there is an opportunity to leverage the substantial amount of prior work on general community detection algorithms in order to defend against Sybils. Second, our analysis reveals the fundamental limits of current social network-based Sybil defenses: We demonstrate that networks with well-defined community structure are inherently more vulnerable to Sybil attacks, and that, in such networks, Sybils can carefully target their links in order make their attacks more effective.", "title": "" }, { "docid": "c70466f8b1e70fcdd4b7fe3f2cb772b2", "text": "We present Tor, a circuit-based low-latency anonymous communication service. This second-generation Onion Routing system addresses limitations in the original design. Tor adds perfect forward secrecy, congestion control, directory servers, integrity checking, configurable exit policies, and a practical design for rendezvous points. Tor works on the real-world Internet, requires no special privileges or kernel modifications, requires little synchronization or coordination between nodes, and provides a reasonable tradeoff between anonymity, usability, and efficiency. We briefly describe our experiences with an international network of more than a dozen hosts. We close with a list of open problems in anonymous communication.", "title": "" }, { "docid": "78f272578191996200259e10d209fe19", "text": "The information in government web sites, which are widely adopted in many countries, must be accessible for all people, easy to use, accurate and secure. The main objective of this study is to investigate the usability, accessibility and security aspects of e-government web sites in Kyrgyz Republic. The analysis of web government pages covered 55 sites listed in the State Information Resources of the Kyrgyz Republic and five government web sites which were not included in the list. Analysis was conducted using several automatic evaluation tools. Results suggested that government web sites in Kyrgyz Republic have a usability error rate of 46.3 % and accessibility error rate of 69.38 %. 
The study also revealed security vulnerabilities in these web sites. Although the “Concept of Creation and Development of Information Network of the Kyrgyz Republic” was launched on September 23, 1994, government web sites in the Kyrgyz Republic have not been reviewed and still need great efforts to improve accessibility, usability and security.", "title": "" }, { "docid": "531a7417bd66ff0fdd7fb35c7d6d8559", "text": "In order to design new methodologies for evaluating the user experience of video games, it is imperative to initially understand two core issues. Firstly, how are video games developed at present, including components such as processes, timescales and staff roles, and secondly, how do studios design and evaluate the user experience. This chapter will discuss the video game development process and the practices that studios currently use to achieve the best possible user experience. It will present four case studies from game developers Disney Interactive (Black Rock Studio), Relentless, Zoe Mode, and HandCircus, each detailing their game development process and also how this integrates with the user experience evaluation. The case studies focus on different game genres, platforms, and target user groups, ensuring that this chapter represents a balanced view of current practices in evaluating user experience during the game development process.", "title": "" } ]
scidocsrr
ba888cd26ac294f48876e5cf28116136
Adaptive Grids for Clustering Massive Data Sets
[ { "docid": "b7a4eec912eb32b3b50f1b19822c44a1", "text": "Mining numerical data is a relatively difficult problem in data mining. Clustering is one of the techniques. We consider a database with numerical attributes, in which each transaction is viewed as a multi-dimensional vector. By studying the clusters formed by these vectors, we can discover certain behaviors hidden in the data. Traditional clustering algorithms find clusters in the full space of the data sets. This results in high dimensional clusters, which are poorly comprehensible to human. One important task in this setting is the ability to discover clusters embedded in the subspaces of a high-dimensional data set. This problem is known as subspace clustering. We follow the basic assumptions of previous work CLIQUE. It is found that the number of subspaces with clustering is very large, and a criterion called the coverage is proposed in CLIQUE for the pruning. In addition to coverage, we identify new useful criteria for this problem and propose an entropybased algorithm called ENCLUS to handle the criteria. Our major contributions are: (1) identify new meaningful criteria of high density and correlation of dimensions for goodness of clustering in subspaces, (2) introduce the use of entropy and provide evidence to support its use, (3) make use of two closure properties based on entropy to prune away uninteresting subspaces efficiently, (4) propose a mechanism to mine non-minimally correlated subspaces which are of interest because of strong clustering, (5) experiments are carried out to show the effectiveness of the proposed method.", "title": "" }, { "docid": "1c5f53fe8d663047a3a8240742ba47e4", "text": "Spatial data mining is the discovery of interesting relationships and characteristics that may exist implicitly in spatial databases. In this paper, we explore whether clustering methods have a role to play in spatial data mining. To this end, we develop a new clustering method called CLAHANS which is based on randomized search. We also develop two spatial data mining algorithms that use CLAHANS. Our analysis and experiments show that with the assistance of CLAHANS, these two algorithms are very effective and can lead to discoveries that are difficult to find with current spatial data mining algorithms. Furthermore, experiments conducted to compare the performance of CLAHANS with that of existing clustering methods show that CLAHANS is the most efficient.", "title": "" } ]
[ { "docid": "546f96600d90107ed8262ad04274b012", "text": "Large-scale labeled training datasets have enabled deep neural networks to excel on a wide range of benchmark vision tasks. However, in many applications it is prohibitively expensive or timeconsuming to obtain large quantities of labeled data. To cope with limited labeled training data, many have attempted to directly apply models trained on a large-scale labeled source domain to another sparsely labeled target domain. Unfortunately, direct transfer across domains often performs poorly due to domain shift and dataset bias. Domain adaptation is the machine learning paradigm that aims to learn a model from a source domain that can perform well on a different (but related) target domain. In this paper, we summarize and compare the latest unsupervised domain adaptation methods in computer vision applications. We classify the non-deep approaches into sample re-weighting and intermediate subspace transformation categories, while the deep strategy includes discrepancy-based methods, adversarial generative models, adversarial discriminative models and reconstruction-based methods. We also discuss some potential directions.", "title": "" }, { "docid": "7086861716db2b7d0841ad85199683ce", "text": "AIM\nAlthough children spend most of their time involved in activities related to school, few studies have focused on the association between school social environment and oral health. This cross-sectional study assessed individual and school-related social environment correlates of dental caries in Brazilian schoolchildren aged 8-12 years.\n\n\nMETHODS\nA sample of children from 20 private and public schools (n=1,211) was selected. Socio-economic data were collected from parents, and data regarding children characteristics were collected from children using a questionnaire. Dental examinations were performed to assess the presence of dental plaque: dental caries experience (DMFT≥1) and dental caries severity (mean dmf-t/DMF-T). The social school environment was assessed by a questionnaire administered to school coordinators. Multilevel Poisson regression was used to investigate the association between school social environment and dental caries prevalence and experience.\n\n\nRESULTS\nThe dental caries prevalence was 32.4% (95% confidence interval: 29.7-35.2) and the mean dmf-t/DMF-T was 1.84 (standard deviation: 2.2). Multilevel models showed that the mean dmf-t/DMF-T and DMFT≥1 were associated with lower maternal schooling and higher levels of dental plaque. For contextual variables, schools offering after-hours sports activities were associated with a lower prevalence of dental caries and a lower mean of dmf-t/DMF-T, while the occurrence of violence and theft episodes was positively associated with dental caries.\n\n\nCONCLUSIONS\nThe school social environment has an influence on dental caries in children. The results suggest that strategies focused on the promotion of healthier environments should be stimulated to reduce inequalities in dental caries.", "title": "" }, { "docid": "688bacdee25152e1de6bcc5005b75d9a", "text": "Data Mining provides powerful techniques for various fields including education. The research in the educational field is rapidly increasing due to the massive amount of students’ data which can be used to discover valuable pattern pertaining students’ learning behaviour. This paper proposes a framework for predicting students’ academic performance of first year bachelor students in Computer Science course. 
The data were collected from eight years of intakes, from July 2006/2007 until July 2013/2014, and contain the students’ demographics, previous academic records, and family background information. Decision Tree, Naïve Bayes, and Rule Based classification techniques are applied to the students’ data in order to produce the best students’ academic performance prediction model. The experimental results show that the Rule Based classifier is the best model among the three techniques, achieving the highest accuracy of 71.3%. The knowledge extracted from the prediction model will be used to identify and profile students and to determine their level of success in the first semester.", "title": "" }, { "docid": "e591165d8e141970b8263007b076dee1", "text": "Treating a human mind like a machine is an essential component of dehumanization, whereas attributing a humanlike mind to a machine is an essential component of anthropomorphism. Here we tested how a cue closely connected to a person's actual mental experience-a humanlike voice-affects the likelihood of mistaking a person for a machine, or a machine for a person. We predicted that paralinguistic cues in speech are particularly likely to convey the presence of a humanlike mind, such that removing voice from communication (leaving only text) would increase the likelihood of mistaking the text's creator for a machine. Conversely, adding voice to a computer-generated script (resulting in speech) would increase the likelihood of mistaking the text's creator for a human. Four experiments confirmed these hypotheses, demonstrating that people are more likely to infer a human (vs. computer) creator when they hear a voice expressing thoughts than when they read the same thoughts in text. Adding human visual cues to text (i.e., seeing a person perform a script in a subtitled video clip) did not increase the likelihood of inferring a human creator compared with only reading text, suggesting that defining features of personhood may be conveyed more clearly in speech (Experiments 1 and 2). Removing the naturalistic paralinguistic cues that convey humanlike capacity for thinking and feeling, such as varied pace and intonation, eliminates the humanizing effect of speech (Experiment 4). We discuss implications for dehumanizing others through text-based media, and for anthropomorphizing machines through speech-based media.", "title": "" }, { "docid": "49fddbf79a836e2ae9f297b32fb3681d", "text": "Imitation learning has traditionally been applied to learn a single task from demonstrations thereof. The requirement of structured and isolated demonstrations limits the scalability of imitation learning approaches as they are difficult to apply to real-world scenarios, where robots have to be able to execute a multitude of tasks. In this paper, we propose a multi-modal imitation learning framework that is able to segment and imitate skills from unlabelled and unstructured demonstrations by learning skill segmentation and imitation learning jointly. The extensive simulation results indicate that our method can efficiently separate the demonstrations into individual skills and learn to imitate them using a single multi-modal policy. 
The video of our experiments is available at http://sites.google.com/view/nips17intentiongan.", "title": "" }, { "docid": "87c56a28428132d4023c312ce216fd04", "text": "The era of big data has resulted in the development and applications of technologies and methods aimed at effectively using massive amounts of data to support decision-making and knowledge discovery activities. In this paper, the five Vs of big data, volume, velocity, variety, veracity, and value, are reviewed, as well as new technologies, including NoSQL databases that have emerged to accommodate the needs of big data initiatives. The role of conceptual modeling for big data is then analyzed and suggestions made for effective conceptual modeling efforts with respect to big data. & 2017 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "904d175ba1f94a980ceb88f9941f0a55", "text": "Currently, wind turbines can incur unforeseen damage up to five times a year. Particularly during bad weather, wind turbines located offshore are difficult to access for visual inspection. As a result, long periods of turbine standstill can result in great economic inefficiencies that undermine the long-term viability of the technology. Hence, the load carrying structure should be monitored continuously in order to minimize the overall cost of maintenance and repair. The end result are turbines defined by extend lifetimes and greater economic viability. For that purpose, an automated monitoring system for early damage detection and damage localisation is currently under development for wind turbines. Most of the techniques existing for global damage detection of structures work by using frequency domain methods. Frequency shifts and mode shape changes are usually used for damage detection of large structures (e.g. bridges, large buildings and towers) [1]. Damage can cause a change in the distribution of structural stiffness which has to be detected by measuring dynamic responses using natural excitation. Even though mode shapes are more sensitive to damage compared to frequency shifts, the use of mode shapes requires a lot of sensors installed so as to reliably detect mode shape changes for early damage detection [2]. The design of our developed structural health monitoring (SHM) system is based on three functional modules that track changes in the global dynamic behaviour of both the turbine tower and blade elements. A key feature of the approach is the need for a minimal number of strain gages and accelerometers necessary to record the structure’s condition. Module 1 analyzes the proportionality of maximum stress and maximum velocity; already small changes in component stiffness can be detected. Afterwards, module 3 is activated for localization and quantization of the damage. The approach of module 3 is based on a numerical model which solves a multi-parameter eigenvalue problem. As a prerequisite, highly resolved eigenfrequencies and a parameterization of a validated structural model are required. Both are provided for the undamaged structure by module 2", "title": "" }, { "docid": "c20733b414a1b39122ef54d161885d81", "text": "This paper discusses the role of clusters and focal firms in the economic performance of small firms in Italy. Using the example of the packaging industry of northern Italy, it shows how clusters of small firms have emerged around a few focal or leading companies. 
These companies have helped the clusters grow and diversify through technological and managerial spillover effects, through the provision of purchase orders, and sometimes through financial links. The role of common local training institutes, whose graduates often start up small firms within the local cluster, is also discussed.", "title": "" }, { "docid": "866f1b980b286f6ed3ace9caf0dc415a", "text": "In this letter, we propose a road structure refined convolutional neural network (RSRCNN) approach for road extraction in aerial images. In order to obtain structured output of road extraction, both deconvolutional and fusion layers are designed in the architecture of RSRCNN. For training RSRCNN, a new loss function is proposed to incorporate the geometric information of road structure in cross-entropy loss, thus called road-structure-based loss function. Experimental results demonstrate that the trained RSRCNN model is able to advance the state-of-the-art road extraction for aerial images, in terms of precision, recall, F-score, and accuracy.", "title": "" }, { "docid": "d86aa00419ad3773c1f3f27e076c2ba6", "text": "Image captioning with a natural language has been an emerging trend. However, the social image, associated with a set of user-contributed tags, has been rarely investigated for a similar task. The user-contributed tags, which could reflect the user attention, have been neglected in conventional image captioning. Most existing image captioning models cannot be applied directly to social image captioning. In this work, a dual attention model is proposed for social image captioning by combining the visual attention and user attention simultaneously.Visual attention is used to compress a large mount of salient visual information, while user attention is applied to adjust the description of the social images with user-contributed tags. Experiments conducted on the Microsoft (MS) COCO dataset demonstrate the superiority of the proposed method of dual attention.", "title": "" }, { "docid": "83b5da6ab8ab9a906717fda7aa66dccb", "text": "Image quality assessment (IQA) tries to estimate human perception based image visual quality in an objective manner. Existing approaches target this problem with or without reference images. For no-reference image quality assessment, there is no given reference image or any knowledge of the distortion type of the image. Previous approaches measure the image quality from signal level rather than semantic analysis. They typically depend on various features to represent local characteristic of an image. In this paper we propose a new no-reference (NR) image quality assessment (IQA) framework based on semantic obviousness. We discover that semantic-level factors affect human perception of image quality. With such observation, we explore semantic obviousness as a metric to perceive objects of an image. We propose to extract two types of features, one to measure the semantic obviousness of the image and the other to discover local characteristic. Then the two kinds of features are combined for image quality estimation. The principles proposed in our approach can also be incorporated with many existing IQA algorithms to boost their performance. We evaluate our approach on the LIVE dataset. Our approach is demonstrated to be superior to the existing NR-IQA algorithms and comparable to the state-of-the-art full-reference IQA (FR-IQA) methods. 
Cross-dataset experiments show the generalization ability of our approach.", "title": "" }, { "docid": "c2fd86b36364ac9c40e873176443c4c8", "text": "In a public service announcement on 17 March 2016, the Federal Bureau of Investigation jointly with the U.S. Department of Transportation and the National Highway Traffic Safety Administration (NHTSA) released a warning regarding the increasing vulnerability of motor vehicles to remote exploits [18]. Engine shutdowns, disabled brakes, and locked doors are a few examples of possible vehicle cybersecurity attacks. Modern cars grow into a new target for cyberattacks as they become increasingly connected. While driving on the road, sharks (i.e., hackers) need only to be within communication range of a vehicle to attack it. However, in some cases, they can hack into it while they are miles away. In this article, we aim to illuminate the latest vehicle cybersecurity threats including malware attacks, on-board diagnostic (OBD) vulnerabilities, and automobile apps threats. We illustrate the in-vehicle network architecture and demonstrate the latest defending mechanisms designed to mitigate such threats.", "title": "" }, { "docid": "395d21b52ff74935fffcc1924aec5c0f", "text": "The desire to take medicines is one feature which distinguishes man, the animal, from his fellow creatures (1). Thus did William Osler express skepticism about remedies available in the early 20th century and an avuncular indulgence toward patients who wanted them. His comment expresses the attitude of many physicians today toward consumers of herbal medicines, and indeed may be timeless: Medicinal herbs were found in the personal effects of an ice man, whose body was frozen in the Swiss Alps for more than 5000 years (2). Since these herbs appear to have treated the parasites found in his intestine (2), the desire to take medicines may signify a timeless quest for cures that flowers today in the form of widely acclaimed new drugs. The effectiveness of a modern drug is ultimately judged by the results of clinical trials. Ordinarily, such trials are designed to test the assumption that a drug's pharmacologic activity will favorably affect a disease process, which in turn is viewed in terms of a physiologic model. Clinical trials yield convincing results, however, only if they are conducted in accordance with principles that, for example, ensure elimination of bias and reduce the possibility that results occurred merely by chance. Trials must also use drug preparations with consistent pharmacologic properties. These principles apply to all drugs, whether they originate as traditional remedies or in precepts of molecular biology. Indeed, such principles have successfully guided digitalis from medicinal plant to modern drug; we might ask, therefore, how these principles apply to the evaluation of today's herbal medicines. Digitalis: From Folk Remedy to Modern Drug Withering, who introduced foxglove to the medical profession in 1785 (3), took the first steps in transforming digitalis from a folk remedy to a modern drug when he simplified a family receipt for dropsy that contained more than 20 substances (3) by assuming that foxglove was the active ingredient. Careful clinical observations then enabled him to recognize the plant's slim margin of safety and thus the importance of dose: just enough foxglove to cause diuresis, but not enough to cause vomiting or very slow pulse. 
Bioassays and Chemical Standardization By the early 20th century, it was understood that activities of medicines derived from foxglove were influenced by such factors as the time when the leaves are gathered, and climatic and soil conditions [as well as] the manner in which the drug is prepared for the market (4). Clearly, plants have ingredients with therapeutic activity, but their preparations must be standardized to yield consistent products, which therefore can be given in doses that are maximally safe and effective. In 1906, the pharmacopeia contained a daunting number of digitalis preparationsfor example, Digitin, Extractum Digitalis, and Infusum Digitalis (5)whose potency had never been investigated. When these preparations were investigated by using a new bioassay based on the fact that digitalis causes asystole in the frog, the results were surprising: The potencies of 16 commercial digital preparations varied over a fourfold range (4). Fortunately, the bioassay also provided a way to control this problem, and the frog bioassay was soon officially adopted by the United States Pharmacopeia to standardize digitalis preparations. This bioassay, which indicated the importance of laboratory studies for the emerging science of pharmacology, provided the means to standardize the potency of a chemically complex herbal medicine, even when its active ingredients were uncertain. Soon the quest for even better methods of standardizing digitalis yielded several dozen bioassays in more than six different animal species (6). Thus, the cat heart assay replaced the frog heart assay, which in turn was replaced by the pigeon assay. The ultimate bioassay, however, was done in humans; it was based on the digitalis-induced changes in a patient's electrocardiogram (7). Although digoxin, now the preferred form of digitalis, can be standardized chemically, a bioassay of sorts is still required to establish its bioavailability (8) and, hence, the pharmaceutical standardization needed to carry out the clinical trials that shape our current perspective on the drug (9). Herbal Remedies in the United States Today Challenges in Standardizing Herbal Medicines Unfortunately, standardization methods such as those described for digitalis are not suitable for many herbs. Bioassays must be based on biological models, which are not available for the health claims made for many popular herbs, and chemical analysis has limited value when the ingredients responsible for a plant's activity have not been identified. In addition, if the active ingredient of an herb were known, it would remain unclear whether the crude herb would be preferable to its purified active principle. In the absence of definitive information in this regard, such traditional herbal preparations as digitalis leaf and opium have been replaced by such drugs as digoxin and codeine, respectively. How can an herb be standardized if its active ingredients are not known and there is no suitable bioassay? EGb 761, a patented extract of Ginkgo biloba, is a commendable attempt to solve this problem and to achieve a consistent formulation of ginkgo. Thus, EGb 761 sets feasible standards for how and where ginkgo is grown and harvested, how the leaves are extracted, and the target values for several chemical constituents of the medicinal product (10). 
EGb 761, which aims for chemical consistency and, presumably, therapeutic consistency, was used in three of four studies that, on the basis of a meta-analysis, concluded that ginkgo conferred a small but significant benefit in patients with Alzheimer disease (11). In the absence of evidence to the contrary, those who hope to replicate these trial results would justifiably select this ginkgo product in preference to others with less well-specified standards of botanical and chemical consistency. Recent studies with St. John's wort, however, remind us of the potential pitfalls of standardizing a medicinal herb to constituents that may not be responsible for therapeutic activity. For years, St. John's wort, which meta-analysis finds superior to placebo for treatment of mild to moderate depression (12), has been standardized by its content of hypericin. Hypericin, however, has never been confirmed as the herb's active ingredient and may be no more than a characteristic ingredient of the plant, useful for botanical verification but not necessarily for therapeutic standardization. Another constituent of St. John's wort, hyperforin, now appears to be a more potent antidepressant than hypericin. Thus, the potency of various St. John's wort extracts for inhibiting the neuronal uptake of serotonin, a characteristic of conventional antidepressants such as fluoxetine, increases with increasing hyperforin content. Studies in animal models of depression (13) and patients with mild to moderate depression (14) suggest that antidepressant activity is related to content of hyperforin, not hypericin. For example, a three-arm clinical trial of 147 patients that compared two St. John's wort extracts of equal hypericin content with placebo found antidepressant activity to be higher for the extract that had a 10-fold higher hyperforin content (14). Although this trial was relatively small and therefore of limited statistical significance, its results suggest that antidepressant activity demonstrated in a meta-analysis of past studies (12) may have resulted from the fortuitous inclusion of hyperforin in many of the St. John's wort formulations included. If the active ingredient of St. John's wort products used in these studies was not optimized, the studies as a group would undoubtedly underestimate the potential antidepressant activity of St. John's wort. Additional evidence suggests that the consumer is not receiving the full possible benefit of St. John's wort. On a recent visit to a local food store, I found St. John's wort preparations that were reminiscent of digitalis formulations at the beginning of the 20th century. Some were said to contain 0.3% (300 mg) hypericin, another was a liquid formulation containing 180 mg of hypericins, and a third contained 0.3% (450 mg) hypericin. The highest content of 0.3% hypericin was 530 mg. Yet another product carried the label St. John's wort, but its contents were not quantified. Hyperforin content was listed only for some products, whereas other products indicated that St. John's wort had been combined with such ingredients as kava, Echinacea, licorice root, or coconut. The parts of the plant used in the preparations were described as leaf, flowers and stem, aerial parts, or simply fl ers and leaf. Although labels on some St. John's wort products indicated an awareness of recent studies on hyperforin, other labels confirmed that there is no barrier to selling herbal preparations of doubtful scientific rationale and uncertain potency. 
Clinical Trials of Herbs Randomized clinical trials have become the gold standard for evaluating the efficacy of a drug and have assumed a similar status for evaluating an herbal remedy. Although the methodology of herbal trials is improving, some studies cited in herbal compendia have shortcomings. One problem is that results of herbal trials often do not reach statistical significance because they enroll fewer participants than trials of a conventional drug, and the role of chance may be overlooked in interpreting such trials. For example, the results of clinical studies were recently examined to determine whether parthenolide, a characteristic component of feverfew, was necessary for feverfew's apparent role in prevention of migraine. It was reasoned (15) that parthenolide could not be the sole active ingredient of feverfew because the parthenolide content of the feverfew preparation used in one negative trial (16) ", "title": "" }, { "docid": "3770720cff3a36596df097835f4f10a9", "text": "As mobile computing technologies have been more powerful and inclusive in people’s daily life, the issue of mobile assisted language learning (MALL) has also been widely explored in CALL research. Many researches on MALL consider the emerging mobile technologies have considerable potentials for the effective language learning. This review study focuses on the investigation of newly emerging mobile technologies and their pedagogical applications for language teachers and learners. Recent research or review on mobile assisted language learning tends to focus on more detailed applications of newly emerging mobile technology, rather than has given a broader point focusing on types of mobile device itself. In this paper, I thus reviewed recent research and conference papers for the last decade, which utilized newly emerging and integrated mobile technology. Its pedagogical benefits and challenges are discussed.", "title": "" }, { "docid": "c48d0c94d3e97661cc2c944cc4b61813", "text": "CIPO is the very “tip of the iceberg” of functional gastrointestinal disorders, being a rare and frequently misdiagnosed condition characterized by an overall poor outcome. Diagnosis should be based on clinical features, natural history and radiologic findings. There is no cure for CIPO and management strategies include a wide array of nutritional, pharmacologic, and surgical options which are directed to minimize malnutrition, promote gut motility and reduce complications of stasis (ie, bacterial overgrowth). Pain may become so severe to necessitate major analgesic drugs. Underlying causes of secondary CIPO should be thoroughly investigated and, if detected, treated accordingly. Surgery should be indicated only in a highly selected, well characterized subset of patients, while isolated intestinal or multivisceral transplantation is a rescue therapy only in those patients with intestinal failure unsuitable for or unable to continue with TPN/HPN. Future perspectives in CIPO will be directed toward an accurate genomic/proteomic phenotying of these rare, challenging patients. Unveiling causative mechanisms of neuro-ICC-muscular abnormalities will pave the way for targeted therapeutic options for patients with CIPO.", "title": "" }, { "docid": "01cd8355e0604868659e1a312d385ebe", "text": "In the past years, knowledge graphs have proven to be beneficial for recommender systems, efficiently addressing paramount issues such as new items and data sparsity. 
At the same time, several works have recently tackled the problem of knowledge graph completion through machine learning algorithms able to learn knowledge graph embeddings. In this paper, we show that the item recommendation problem can be seen as a specific case of knowledge graph completion problem, where the “feedback” property, which connects users to items that they like, has to be predicted. We empirically compare a set of state-of-the-art knowledge graph embeddings algorithms on the task of item recommendation on the Movielens 1M dataset. The results show that knowledge graph embeddings models outperform traditional collaborative filtering baselines and that TransH obtains the best performance.", "title": "" }, { "docid": "974d7b697942a8872b01d7b5d2302750", "text": "Purpose – This study provides insights into corporate achievements in supply chain management (SCM) and logistics management and details how they might help disaster agencies. The authors highlight and identify current practices, particularities, and challenges in disaster relief supply chains. Design/methodology/approach – Both SCM and logistics management literature and examples drawn from real-life cases inform the development of the theoretical model. Findings – The theoretical, dual-cycle model that focuses on the key missions of disaster relief agencies: first, prevention and planning and, second, response and recovery. Three major contributions are offered: (1) a concise representation of current practices and particularities of disaster relief supply chains compared with commercial SCM; (2) challenges and barriers to the development of more efficient SCM practices, classified into learning, strategizing, and coordinating and measurement issues; and (3) a simple, functional model for understanding how collaborations between corporations and disaster relief agencies might help relief agencies meet SCM challenges. Research limitations/implications – The study does not address culture clash–related considerations. Rather than representing the entire scope of real-life situations and practices, the analysis relies on key assumptions to help conceptualize collaborative paths.", "title": "" }, { "docid": "e0f7c82754694084c6d05a2d37be3048", "text": "Introducing variability while maintaining coherence is a core task in learning to generate utterances in conversation. Standard neural encoder-decoder models and their extensions using conditional variational autoencoder often result in either trivial or digressive responses. To overcome this, we explore a novel approach that injects variability into neural encoder-decoder via the use of external memory as a mixture model, namely Variational Memory Encoder-Decoder (VMED). By associating each memory read with a mode in the latent mixture distribution at each timestep, our model can capture the variability observed in sequential data such as natural conversations. We empirically compare the proposed model against other recent approaches on various conversational datasets. The results show that VMED consistently achieves significant improvement over others in both metricbased and qualitative evaluations.", "title": "" }, { "docid": "6549a00df9fadd56b611ee9210102fe8", "text": "Ontology editors are software tools that allow the creation and maintenance of ontologies through a graphical user interface. As the Semantic Web effort grows, a larger community of users for this kind of tools is expected. 
New users include people not specifically skilled in the use of ontology formalisms. In consequence, the usability of ontology editors can be viewed as a key adoption precondition for Semantic Web technologies. In this paper, the usability evaluation of several representative ontology editors is described. This evaluation is carried out by combining a heuristic pre-assessment and a subsequent user-testing phase. The target population comprises people with no specific ontology-creation skills that have a general knowledge about domain modelling. The problems found point out that, for this kind of users, current editors are adequate for the creation and maintenance of simple ontologies, but also that there is room for improvement, especially in browsing mechanisms, help systems and visualization metaphors.", "title": "" } ]
scidocsrr
edd789fe06013fdf37a87659ca7d5b82
Context-Based Few-Shot Word Representation Learning
[ { "docid": "49387b129347f7255bf77ad9cc726275", "text": "Words in natural language follow a Zipfian distribution whereby some words are frequent but most are rare. Learning representations for words in the “long tail” of this distribution requires enormous amounts of data. Representations of rare words trained directly on end-tasks are usually poor, requiring us to pre-train embeddings on external data, or treat all rare words as out-of-vocabulary words with a unique representation. We provide a method for predicting embeddings of rare words on the fly from small amounts of auxiliary data with a network trained against the end task. We show that this improves results against baselines where embeddings are trained on the end task in a reading comprehension task, a recognizing textual entailment task, and in language modelling.", "title": "" }, { "docid": "09df260d26638f84ec3bd309786a8080", "text": "If we take an existing supervised NLP system, a simple and general way to improve accuracy is to use unsupervised word representations as extra word features. We evaluate Brown clusters, Collobert and Weston (2008) embeddings, and HLBL (Mnih & Hinton, 2009) embeddings of words on both NER and chunking. We use near state-of-the-art supervised baselines, and find that each of the three word representations improves the accuracy of these baselines. We find further improvements by combining different word representations. You can download our word features, for off-the-shelf use in existing NLP systems, as well as our code, here: http://metaoptimize.com/projects/wordreprs/", "title": "" } ]
[ { "docid": "2fa6f761f22e0484a84f83e5772bef40", "text": "We consider the problem of planning smooth paths for a vehicle in a region bounded by polygonal chains. The paths are represented as B-spline functions. A path is found by solving an optimization problem using a cost function designed to care for both the smoothness of the path and the safety of the vehicle. Smoothness is defined as small magnitude of the derivative of curvature and safety is defined as the degree of centering of the path between the polygonal chains. The polygonal chains are preprocessed in order to remove excess parts and introduce safety margins for the vehicle. The method has been implemented for use with a standard solver and tests have been made on application data provided by the Swedish mining company LKAB.", "title": "" }, { "docid": "8c308305b4a04934126c4746c8333b52", "text": "The authors report on the development of the Cancer Tissue Information Extraction System (caTIES)--an application that supports collaborative tissue banking and text mining by leveraging existing natural language processing methods and algorithms, grid communication and security frameworks, and query visualization methods. The system fills an important need for text-derived clinical data in translational research such as tissue-banking and clinical trials. The design of caTIES addresses three critical issues for informatics support of translational research: (1) federation of research data sources derived from clinical systems; (2) expressive graphical interfaces for concept-based text mining; and (3) regulatory and security model for supporting multi-center collaborative research. Implementation of the system at several Cancer Centers across the country is creating a potential network of caTIES repositories that could provide millions of de-identified clinical reports to users. The system provides an end-to-end application of medical natural language processing to support multi-institutional translational research programs.", "title": "" }, { "docid": "9c30ef5826b413bab262b7a0884eb119", "text": "In this survey paper, we review recent uses of convolution neural networks (CNNs) to solve inverse problems in imaging. It has recently become feasible to train deep CNNs on large databases of images, and they have shown outstanding performance on object classification and segmentation tasks. Motivated by these successes, researchers have begun to apply CNNs to the resolution of inverse problems such as denoising, deconvolution, super-resolution, and medical image reconstruction, and they have started to report improvements over state-of-the-art methods, including sparsity-based techniques such as compressed sensing. Here, we review the recent experimental work in these areas, with a focus on the critical design decisions: Where does the training data come from? What is the architecture of the CNN? and How is the learning problem formulated and solved? We also bring together a few key theoretical papers that offer perspective on why CNNs are appropriate for inverse problems and point to some next steps in the field.", "title": "" }, { "docid": "15e2fc773fb558e55d617f4f9ac22f69", "text": "Recent advances in ASR and spoken language processing have led to improved systems for automated assessment for spoken language. However, it is still challenging for automated scoring systems to achieve high performance in terms of the agreement with human experts when applied to non-native children’s spontaneous speech. 
The subpar performance is mainly caused by the relatively low recognition rate on non-native children’s speech. In this paper, we investigate different neural network architectures for improving non-native children’s speech recognition and the impact of the features extracted from the corresponding ASR output on the automated assessment of speaking proficiency. Experimental results show that bidirectional LSTM-RNN can outperform feed-forward DNN in ASR, with an overall relative WER reduction of 13.4%. The improved speech recognition can then boost the language proficiency assessment performance. Correlations between the rounded automated scores and expert scores range from 0.66 to 0.70 for the three speaking tasks studied, similar to the humanhuman agreement levels for these tasks.", "title": "" }, { "docid": "3ec1da9b86b3338b1ad4890add51a20b", "text": "In this paper, we present the dynamic modeling and controller design of a tendon-driven system that is antagonistically driven by elastic tendons. In the dynamic modeling, the tendons are approximated as linear axial springs, neglecting their masses. An overall equation for motion is established by following the Euler–Lagrange formalism of dynamics, combined with rigid-body rotation and vibration. The controller is designed using the singular perturbation approach, which leads to a composite controller (i.e., consisting of a fast sub-controller and a slow sub-controller). An appropriate internal force is superposed to the control action to ensure the tendons to be in tension for all configurations. Experimental results are provided to demonstrate the validity and effectiveness of the proposed controller for the antagonistic tendon-driven system.", "title": "" }, { "docid": "76f3c76572e46131354707b2da7f55b6", "text": "Purpose – Competitive environment and numerous stakeholders’ pressures are forcing hotels to comply their operations with the principles of sustainable development, especially in the field of environmental responsibility. Therefore, more and more of them incorporate environmental objectives in their business policies and strategies. The fulfilment of the environmental objectives requires the hotel to develop and implement environmentally sustainable business practices, as well as to implement reliable tools to assess environmental impact, of which environmental accounting and reporting are particularly emphasized. The purpose of this paper is to determine the development of hotel environmental accounting practices, based on previous research and literature review. Approach – This paper provides an overview of current research in the field of hotel environmental accounting and reporting, based on established knowledge about hotel environmental responsibility. The research has been done according to the review of articles in academic journals. Conclusions about the requirements for achieving hotel long-term sustainability have been drawn. Findings – Previous studies have shown that environmental accounting and reporting practice in hotel business is weaker when compared to other activities, and that most hotels still insufficiently use the abovementioned instruments of environmental management to reduce their environmental footprint and to improve their relationship with stakeholders. The paper draws conclusions about possible perspectives that environmental accounting has in ensuring hotel sustainability. 
Originality – The study provides insights into the problem of environmental responsibility of hotels, from the standpoint of environmental accounting and reporting, as tools for assessing hotel impact on the environment and for improving its environmentally sustainable business practice. The ideas for improving hotel environmental efficiency are shaped based on previous findings.", "title": "" }, { "docid": "ba203abd0bd55fc9d06fe979a604d741", "text": "Graph Convolutional Networks (GCNs) have become a crucial tool on learning representations of graph vertices. The main challenge of adapting GCNs on largescale graphs is the scalability issue that it incurs heavy cost both in computation and memory due to the uncontrollable neighborhood expansion across layers. In this paper, we accelerate the training of GCNs through developing an adaptive layer-wise sampling method. By constructing the network layer by layer in a top-down passway, we sample the lower layer conditioned on the top one, where the sampled neighborhoods are shared by different parent nodes and the over expansion is avoided owing to the fixed-size sampling. More importantly, the proposed sampler is adaptive and applicable for explicit variance reduction, which in turn enhances the training of our method. Furthermore, we propose a novel and economical approach to promote the message passing over distant nodes by applying skip connections. Intensive experiments on several benchmarks verify the effectiveness of our method regarding the classification accuracy while enjoying faster convergence speed.", "title": "" }, { "docid": "0441fb016923cd0b7676d3219951c230", "text": "Globally modeling and reasoning over relations between regions can be beneficial for many computer vision tasks on both images and videos. Convolutional Neural Networks (CNNs) excel at modeling local relations by convolution operations, but they are typically inefficient at capturing global relations between distant regions and require stacking multiple convolution layers. In this work, we propose a new approach for reasoning globally in which a set of features are globally aggregated over the coordinate space and then projected to an interaction space where relational reasoning can be efficiently computed. After reasoning, relation-aware features are distributed back to the original coordinate space for down-stream tasks. We further present a highly efficient instantiation of the proposed approach and introduce the Global Reasoning unit (GloRe unit) that implements the coordinate-interaction space mapping by weighted global pooling and weighted broadcasting, and the relation reasoning via graph convolution on a small graph in interaction space. The proposed GloRe unit is lightweight, end-to-end trainable and can be easily plugged into existing CNNs for a wide range of tasks. Extensive experiments show our GloRe unit can consistently boost the performance of state-of-the-art backbone architectures, including ResNet [15, 16], ResNeXt [33], SE-Net [18] and DPN [9], for both 2D and 3D CNNs, on image classification, semantic segmentation and video action recognition task.", "title": "" }, { "docid": "3bdd6168db10b8b195ce88ae9c4a75f9", "text": "Nowadays Intrusion Detection System (IDS) which is increasingly a key element of system security is used to identify the malicious activities in a computer system or network. There are different approaches being employed in intrusion detection systems, but unluckily each of the technique so far is not entirely ideal. 
The prediction process may produce false alarms in many anomaly based intrusion detection systems. With the concept of fuzzy logic, the false alarm rate in establishing intrusive activities can be reduced. A set of efficient fuzzy rules can be used to define the normal and abnormal behaviors in a computer network. Therefore some strategy is needed for best promising security to monitor the anomalous behavior in computer network. In this paper I present a few research papers regarding the foundations of intrusion detection systems, the methodologies and good fuzzy classifiers using genetic algorithm which are the focus of current development efforts and the solution of the problem of Intrusion Detection System to offer a realworld view of intrusion detection. Ultimately, a discussion of the upcoming technologies and various methodologies which promise to improve the capability of computer systems to detect intrusions is offered.", "title": "" }, { "docid": "904454a191da497071ee9b835561c6e6", "text": "We introduce a stochastic discrete automaton model to simulate freeway traffic. Monte-Carlo simulations of the model show a transition from laminar traffic flow to start-stopwaves with increasing vehicle density, as is observed in real freeway traffic. For special cases analytical results can be obtained.", "title": "" }, { "docid": "5ed525a96ab5663ca8df698e275620f2", "text": "Most video-based action recognition approaches choose to extract features from the whole video to recognize actions. The cluttered background and non-action motions limit the performances of these methods, since they lack the explicit modeling of human body movements. With recent advances of human pose estimation, this work presents a novel method to recognize human action as the evolution of pose estimation maps. Instead of relying on the inaccurate human poses estimated from videos, we observe that pose estimation maps, the byproduct of pose estimation, preserve richer cues of human body to benefit action recognition. Specifically, the evolution of pose estimation maps can be decomposed as an evolution of heatmaps, e.g., probabilistic maps, and an evolution of estimated 2D human poses, which denote the changes of body shape and body pose, respectively. Considering the sparse property of heatmap, we develop spatial rank pooling to aggregate the evolution of heatmaps as a body shape evolution image. As body shape evolution image does not differentiate body parts, we design body guided sampling to aggregate the evolution of poses as a body pose evolution image. The complementary properties between both types of images are explored by deep convolutional neural networks to predict action label. Experiments on NTU RGB+D, UTD-MHAD and PennAction datasets verify the effectiveness of our method, which outperforms most state-of-the-art methods.", "title": "" }, { "docid": "1d8f7705ba0dd969ed6de9e7e6a9a419", "text": "A Mecanum-wheeled robot benefits from great omni-direction maneuverability. However it suffers from random slippage and high-speed vibration, which creates electric power safety, uncertain position errors and energy waste problems for heavy-duty tasks. A lack of Mecanum research on heavy-duty autonomous navigation demands a robot platform to conduct experiments in the future. This paper introduces AuckBot, a heavy-duty omni-directional Mecanum robot platform developed at the University of Auckland, including its hardware overview, the control system architecture and the simulation design. 
In particular the control system, synergistically combining the Beckhoff system as the Controller-PC to serve low-level motion execution and ROS as the Navigation-PC to accomplish highlevel intelligent navigation tasks, is developed. In addition, a computer virtual simulation based on ISG-virtuos for virtual AuckBot has been validated. The present status and future work of AuckBot are described at the end.", "title": "" }, { "docid": "9c25a2e343e9e259a9881fd13983c150", "text": "Advances in cognitive, affective, and social neuroscience raise a host of new questions concerning the ways in which neuroscience can and should be used. These advances also challenge our intuitions about the nature of humans as moral and spiritual beings. Neuroethics is the new field that grapples with these issues. The present article surveys a number of applications of neuroscience to such diverse arenas as marketing, criminal justice, the military, and worker productivity. The ethical, legal, and societal effects of these applications are discussed. Less practical, but perhaps ultimately more consequential, is the impact of neuroscience on our worldview and our understanding of the human person.", "title": "" }, { "docid": "d4cdea26217e90002a3c4522124872a2", "text": "Recently, several methods for single image super-resolution(SISR) based on deep neural networks have obtained high performance with regard to reconstruction accuracy and computational performance. This paper details the methodology and results of the New Trends in Image Restoration and Enhancement (NTIRE) challenge. The task of this challenge is to restore rich details (high frequencies) in a high resolution image for a single low resolution input image based on a set of prior examples with low and corresponding high resolution images. The challenge has two tracks. We present a super-resolution (SR) method, which uses three losses assigned with different weights to be regarded as optimization target. Meanwhile, the residual blocks are also used for obtaining significant improvement in the evaluation. The final model consists of 9 weight layers with four residual blocks and reconstructs the low resolution image with three color channels simultaneously, which shows better performance on these two tracks and benchmark datasets.", "title": "" }, { "docid": "d64179da43db5f5bd15ff7e31e38d391", "text": "Real-world graph applications are typically domain-specific and model complex business processes in the property graph data model. To implement a domain-specific graph algorithm in the context of such a graph application, simply providing a set of built-in graph algorithms is usually not sufficient nor does it allow algorithm customization to the user's needs. To cope with these issues, graph database vendors provide---in addition to their declarative graph query languages---procedural interfaces to write user-defined graph algorithms.\n In this paper, we introduce GraphScript, a domain-specific graph query language tailored to serve advanced graph analysis tasks and the specification of complex graph algorithms. We describe the major language design of GraphScript, discuss graph-specific optimizations, and describe the integration into an enterprise data platform.", "title": "" }, { "docid": "208f426b5e60fb73b5f49e86f942e98f", "text": "Using the contemporary view of computing exemplified by recent models and results from non-uniform complexity theory, we investigate the computational power of cognitive systems. 
We show that in accordance with the so-called extended Turing machine paradigm such systems can be modelled as non-uniform evolving interactive systems whose computational power surpasses that of the classical Turing machines. Our results show that there is an infinite hierarchy of cognitive systems. Within this hierarchy, there are systems achieving and surpassing the human intelligence level. Any intelligence level surpassing the human intelligence is called the superintelligence level. We will argue that, formally, from a computation viewpoint the human-level intelligence is upper-bounded by the $$\\Upsigma_2$$ class of the Arithmetical Hierarchy. In this class, there are problems whose complexity grows faster than any computable function and, therefore, not even exponential growth of computational power can help in solving such problems, or reach the level of superintelligence.", "title": "" }, { "docid": "87e3727df4e8d7f275695da161b0d924", "text": "Self-determination theory (SDT; Deci & Ryan, 2000) proposes that intrinsic, relative to extrinsic, goal content is a critical predictor of the quality of an individual's behavior and psychological well-being. Through three studies, we developed and psychometrically tested a measure of intrinsic and extrinsic goal content in the exercise context: the Goal Content for Exercise Questionnaire (GCEQ). In adults, exploratory (N = 354; Study 1) and confirmatory factor analyses (N = 312; Study 2) supported a 20-item solution consisting of 5 lower order factors (i.e., social affiliation, health management, skill development, image and social recognition) that could be subsumed within a 2-factor higher order structure (i.e., intrinsic and extrinsic). Evidence for external validity, temporal stability, gender invariance, and internal consistency of the GCEQ was found. An independent sample (N = 475; Study 3) provided further support for the lower order structure of the GCEQ and some support for the higher order structure. The GCEQ was supported as a measure of exercise-based goal content, which may help understand how intrinsic and extrinsic goals can motivate exercise behavior.", "title": "" }, { "docid": "2ecd0bf132b3b77dc1625ef8d09c925b", "text": "This paper presents an efficient algorithm to compute time-to-x (TTX) criticality measures (e.g. time-to-collision, time-to-brake, time-to-steer). Such measures can be used to trigger warnings and emergency maneuvers in driver assistance systems. Our numerical scheme finds a discrete time approximation of TTX values in real time using a modified binary search algorithm. It computes TTX values with high accuracy by incorporating realistic vehicle dynamics and using realistic emergency maneuver models. It is capable of handling complex object behavior models (e.g. motion prediction based on DGPS maps). Unlike most other methods presented in the literature, our approach enables decisions in scenarios with multiple static and dynamic objects in the scene. The flexibility of our method is demonstrated on two exemplary applications: intersection assistance for left-turn-across-path scenarios and pedestrian protection by automatic steering.", "title": "" }, { "docid": "e541ae262655b7f5affefb32ce9267ee", "text": "Internet of Things (IoT) is a revolutionary technology for the modern society. IoT can connect every surrounding objects for various applications like security, medical fields, monitoring and other industrial applications. This paper considers the application of IoT in the field of medicine. 
IoT in E-medicine can take the advantage of emerging technologies to provide immediate treatment to the patient as well as monitors and keeps track of health record for healthy person. IoT then performs complex computations on these collected data and can provide health related advice. Though IoT can provide a cost effective medical services to any people of all age groups, there are several key issues that need to be addressed. System security, IoT interoperability, dynamic storage facility and unified access mechanisms are some of the many fundamental issues associated with IoT. This paper proposes a system level design solution for security and flexibility aspect of IoT. In this paper, the functional components are bound in security function group which ensures the management of privacy and secure operation of the system. The security function group comprises of components which offers secure communication using Ciphertext-Policy Attribute-Based Encryption (CP-ABE). Since CP-ABE are delegated to unconstrained devices with the assumption that these devices are trusted, the producer encrypts data using AES and the ABE scheme is protected through symmetric key solutions.", "title": "" }, { "docid": "cb1645b5b37e99a1dac8c6af1d6b1027", "text": "In recent years, the increasing propagation of hate speech on social media and the urgent need for effective countermeasures have drawn significant investment from governments, companies, and researchers. A large number of methods have been developed for automated hate speech detection online. This aims to classify textual content into non-hate or hate speech, in which case the method may also identify the targeting characteristics (i.e., types of hate, such as race, and religion) in the hate speech. However, we notice significant difference between the performance of the two (i.e., non-hate v.s. hate). In this work, we argue for a focus on the latter problem for practical reasons. We show that it is a much more challenging task, as our analysis of the language in the typical datasets shows that hate speech lacks unique, discriminative features and therefore is found in the ‘long tail’ in a dataset that is difficult to discover. We then propose Deep Neural Network structures serving as feature extractors that are particularly effective for capturing the semantics of hate speech. Our methods are evaluated on the largest collection of hate speech datasets based on Twitter, and are shown to be able to outperform the best performing method by up to 5 percentage points in macro-average F1, or 8 percentage points in the more challenging case of identifying hateful content.", "title": "" } ]
scidocsrr
f3c97b10c3c5cf5a6276ecbfcdac621a
Security analysis and enhancements of 3GPP authentication and key agreement protocol
[ { "docid": "b03d88449eaf4e393dc842340f6951ea", "text": "Use of mobile personal computers in open networked environment is revolutionalising the way we use computers. Mobile networked computing is raising important information security and privacy issues. This paper is concerned with the design of authentication protocols for a mobile computing environment. The paper first analyses the authentication initiator protocols proposed by Beller, Chang and Yacobi (BCY) and the modifications considered by Carlsen and points out some weaknesses. The paper then suggests improvements to these protocols. The paper proposes secure end-to-end protocols between mobile users using both symmetric and public key based systems. These protocols enable mutual authentication and establish a shared secret key between mobile users. Furthermore, these protocols provide a certain degree of anonymity of the communicating users to be achieved vis-à-vis other system users.", "title": "" } ]
[ { "docid": "ab793dc03b8002a638a101abdccd1b38", "text": "This paper describes a technique to obtain a time dilation or contraction of an audio signal. Different Computer Graphics applications can take advantage of this technique. In real-time networked VR applications, such as teleconference or games, audio might be transmited independently from the rest of the data, These different signals arrive asynchronously and need to be somehow resynchronized on the fly. In animation, it can help to automatically fit and merge pre-recorded sound samples to special timed events. It also makes it easier to accomplish special effects like lip-sync for dubbing or changing the voice of an animated character. Our technique tries to eliminate distortions by the replication of the original signal frequencies. Malvar wavelets are used to avoid clicking between segment transitions.", "title": "" }, { "docid": "31461de346fb454f296495287600a74f", "text": "The working hypothesis of the paper is that motor images are endowed with the same properties as those of the (corresponding) motor representations, and therefore have the same functional relationship to the imagined or represented movement and the same causal role in the generation of this movement. The fact that the timing of simulated movements follows the same constraints as that of actually executed movements is consistent with this hypothesis. Accordingly, many neural mechanisms are activated during motor imagery, as revealed by a sharp increase in tendinous reflexes in the limb imagined to move, and by vegetative changes which correlate with the level of mental effort. At the cortical level, a specific pattern of activation, that closely resembles that of action execution, is observed in areas devoted to motor control. This activation might be the substrate for the effects of mental training. A hierarchical model of the organization of action is proposed: this model implies a short-term memory storage of a 'copy' of the various representational steps. These memories are erased when an action corresponding to the represented goal takes place. By contrast, if the action is incompletely or not executed, the whole system remains activated, and the content of the representation is rehearsed. This mechanism would be the substrate for conscious access to this content during motor imagery and mental training.", "title": "" }, { "docid": "c43b77b56a6e2cb16a6b85815449529d", "text": "We propose a new method for clustering multivariate time series. A univariate time series can be represented by a fixed-length vector whose components are statistical features of the time series, capturing the global structure. These descriptive vectors, one for each component of the multivariate time series, are concatenated, before being clustered using a standard fast clustering algorithm such as k-means or hierarchical clustering. Such statistical feature extraction also serves as a dimension-reduction procedure for multivariate time series. We demonstrate the effectiveness and simplicity of our proposed method by clustering human motion sequences: dynamic and high-dimensional multivariate time series. The proposed method based on univariate time series structure and statistical metrics provides a novel, yet simple and flexible way to cluster multivariate time series data efficiently with promising accuracy. 
The success of our method on the case study suggests that clustering may be a valuable addition to the tools available for human motion pattern recognition research.", "title": "" }, { "docid": "2f48b326aaa7b41a7ee347cedce344ed", "text": "In this paper a new kind of quasi-quartic trigonometric polynomial base functions with two shape parameters λ and μ over the space Ω = span {1, sin t, cos t, sin2t, cos2t, sin3t, cos3t} is presented and the corresponding quasi-quartic trigonometric Bézier curves and surfaces are defined by the introduced base functions. Each curve segment is generated by five consecutive control points. The shape of the curve can be adjusted by altering the values of shape parameters while the control polygon is kept unchanged. These curves inherit most properties of the usual quartic Bézier curves in the polynomial space and they can be used as an efficient new model for geometric design in the fields of CAGD.", "title": "" }, { "docid": "1212637c91d8c57299c922b6bde91ce8", "text": "BACKGROUND\nIn the late 1980's, occupational science was introduced as a basic discipline that would provide a foundation for occupational therapy. As occupational science grows and develops, some question its relationship to occupational therapy and criticize the direction and extent of its growth and development.\n\n\nPURPOSE\nThis study was designed to describe and critically analyze the growth and development of occupational science and characterize how this has shaped its current status and relationship to occupational therapy.\n\n\nMETHOD\nUsing a mixed methods design, 54 occupational science documents published in the years 1990 and 2000 were critically analyzed to describe changes in the discipline between two points in time. Data describing a range of variables related to authorship, publication source, stated goals for occupational science and type of research were collected.\n\n\nRESULTS\nDescriptive statistics, themes and future directions are presented and discussed.\n\n\nPRACTICE IMPLICATIONS\nThrough the support of a discipline that is dedicated to the pursuit of a full understanding of occupation, occupational therapy will help to create a new and complex body of knowledge concerning occupation. However, occupational therapy must continue to make decisions about how knowledge produced within occupational science and other disciplines can be best used in practice.", "title": "" }, { "docid": "95fbf262f9e673bd646ad7e02c5cbd53", "text": "Department of Finance Stern School of Business and NBER, New York University, 44 W. 4th Street, New York, NY 10012; mkacperc@stern.nyu.edu; http://www.stern.nyu.edu/∼mkacperc. Department of Finance Stern School of Business, NBER, and CEPR, New York University, 44 W. 4th Street, New York, NY 10012; svnieuwe@stern.nyu.edu; http://www.stern.nyu.edu/∼svnieuwe. Department of Economics Stern School of Business, NBER, and CEPR, New York University, 44 W. 4th Street, New York, NY 10012; lveldkam@stern.nyu.edu; http://www.stern.nyu.edu/∼lveldkam. 
We thank John Campbell, Joseph Chen, Xavier Gabaix, Vincent Glode, Ralph Koijen, Jeremy Stein, Matthijs van Dijk, and seminar participants at NYU Stern (economics and finance), Harvard Business School, Chicago Booth, MIT Sloan, Yale SOM, Stanford University (economics and finance), University of California at Berkeley (economics and finance), UCLA economics, Duke economics, University of Toulouse, University of Vienna, Australian National University, University of Melbourne, University of New South Wales, University of Sydney, University of Technology Sydney, Erasmus University, University of Mannheim, University of Alberta, Concordia, Lugano, the Amsterdam Asset Pricing Retreat, the Society for Economic Dynamics meetings in Istanbul, CEPR Financial Markets conference in Gerzensee, UBC Summer Finance conference, and Econometric Society meetings in Atlanta for useful comments and suggestions. Finally, we thank the Q-group for their generous financial support.", "title": "" }, { "docid": "b631b883e9d8a41f597d9b59d7e451fb", "text": "The availability of highly accurate maps has become crucial due to the increasing importance of location-based mobile applications as well as autonomous vehicles. However, mapping roads is currently an expensive and humanintensive process. High-resolution aerial imagery provides a promising avenue to automatically infer a road network. Prior work uses convolutional neural networks (CNNs) to detect which pixels belong to a road (segmentation), and then uses complex post-processing heuristics to infer graph connectivity [4, 10]. We show that these segmentation methods have high error rates (poor precision) because noisy CNN outputs are difficult to correct. We propose a novel approach, Unthule, to construct highly accurate road maps from aerial images. In contrast to prior work, Unthule uses an incremental search process guided by a CNN-based decision function to derive the road network graph directly from the output of the CNN. We train the CNN to output the direction of roads traversing a supplied point in the aerial imagery, and then use this CNN to incrementally construct the graph. We compare our approach with a segmentation method on fifteen cities, and find that Unthule has a 45% lower error rate in identifying junctions across these cities.", "title": "" }, { "docid": "f0c9db6cab187463162c8bba71ea011a", "text": "Traditional Network-on-Chips (NoCs) employ simple arbitration strategies, such as round-robin or oldest-first, to decide which packets should be prioritized in the network. This is counter-intuitive since different packets can have very different effects on system performance due to, e.g., different level of memory-level parallelism (MLP) of applications. Certain packets may be performance-critical because they cause the processor to stall, whereas others may be delayed for a number of cycles with no effect on application-level performance as their latencies are hidden by other outstanding packets'latencies. In this paper, we define slack as a key measure that characterizes the relative importance of a packet. Specifically, the slack of a packet is the number of cycles the packet can be delayed in the network with no effect on execution time. This paper proposes new router prioritization policies that exploit the available slack of interfering packets in order to accelerate performance-critical packets and thus improve overall system performance. 
When two packets interfere with each other in a router, the packet with the lower slack value is prioritized. We describe mechanisms to estimate slack, prevent starvation, and combine slack-based prioritization with other recently proposed application-aware prioritization mechanisms.\n We evaluate slack-based prioritization policies on a 64-core CMP with an 8x8 mesh NoC using a suite of 35 diverse applications. For a representative set of case studies, our proposed policy increases average system throughput by 21.0% over the commonlyused round-robin policy. Averaged over 56 randomly-generated multiprogrammed workload mixes, the proposed policy improves system throughput by 10.3%, while also reducing application-level unfairness by 30.8%.", "title": "" }, { "docid": "a89235677ad6ac3612983ca2bfeb584b", "text": "INTRODUCTION\nNanomedicine is defined as the area using nanotechnology's concepts for the benefit of human beings, their health and well being. The field of nanotechnology opened new unsuspected fields of research a few years ago.\n\n\nAIM OF THE STUDY\nTo provide an overview of nanotechnology application areas that could affect care for psychiatric illnesses.\n\n\nMETHODS\nWe conducted a systematic review using the PRISMA criteria (preferred reporting items for systematic reviews and meta-analysis). Inclusion criteria were specified in advance: all studies describing the development of nanotechnology in psychiatry. The research paradigm was: \"(nanotechnology OR nanoparticles OR nanomedicine) AND (central nervous system)\" Articles were identified in three research bases, Medline (1966-present), Web of Science (1975-present) and Cochrane (all articles). The last search was carried out on April 2, 2012. Seventy-six items were included in this qualitative review.\n\n\nRESULTS\nThe main applications of nanotechnology in psychiatry are (i) pharmacology. There are two main difficulties in neuropharmacology. Drugs have to pass the blood brain barrier and then to be internalized by targeted cells. Nanoparticles could increase drugs' bioavailability and pharmacokinetics, especially improving safety and efficacy of psychotropic drugs. Liposomes, nanosomes, nanoparticle polymers, nanobubbles are some examples of this targeted drug delivery. Nanotechnologies could also add new pharmacological properties, like nanohells and dendrimers; (ii) living analysis. Nanotechnology provides technical assistance to in vivo imaging or metabolome analysis; (iii) central nervous system modeling. Research teams have modelized inorganic synapses and mimicked synaptic behavior, essential for further creation of artificial neural systems. Some nanoparticle assemblies present the same small world and free-scale network architecture as cortical neural networks. Nanotechnologies and quantum physics could be used to create models of artificial intelligence and mental illnesses.\n\n\nDISCUSSION\nEven if nanotechnologies are promising, their safety is still tricky and this must be kept in mind.\n\n\nCONCLUSION\nWe are not about to see a concrete application of nanomedicine in daily psychiatric practice. However, it seems essential that psychiatrists do not forsake this area of research the perspectives of which could be decisive in the field of mental illness.", "title": "" }, { "docid": "82ef80d6257c5787dcf9201183735497", "text": "Big data is becoming a research focus in intelligent transportation systems (ITS), which can be seen in many projects around the world. 
Intelligent transportation systems will produce a large amount of data. The produced big data will have profound impacts on the design and application of intelligent transportation systems, which makes ITS safer, more efficient, and profitable. Studying big data analytics in ITS is a flourishing field. This paper first reviews the history and characteristics of big data and intelligent transportation systems. The framework of conducting big data analytics in ITS is discussed next, where the data source and collection methods, data analytics methods and platforms, and big data analytics application categories are summarized. Several case studies of big data analytics applications in intelligent transportation systems, including road traffic accidents analysis, road traffic flow prediction, public transportation service plan, personal travel route plan, rail transportation management and control, and assets maintenance are introduced. Finally, this paper discusses some open challenges of using big data analytics in ITS.", "title": "" }, { "docid": "4513872c2240390dca8f4b704e606157", "text": "We apply game theory to a vehicular traffic model to study the effect of driver strategies on traffic flow. The resulting model inherits the realistic dynamics achieved by a two-lane traffic model and aims to incorporate phenomena caused by driver-driver interactions. To achieve this goal, a game-theoretic description of driver interaction was developed. This game-theoretic formalization allows one to model different lane-changing behaviors and to keep track of mobility performance. We simulate the evolution of cooperation, traffic flow, and mobility performance for different modeled behaviors. The analysis of these results indicates a mobility optimization process achieved by drivers' interactions.", "title": "" }, { "docid": "bf7a683ab9dde3e3d2cacf2a99828d4a", "text": "Computing is transitioning from single-user devices to the Internet of Things (IoT), in which multiple users with complex social relationships interact with a single device. Currently deployed techniques fail to provide usable access-control specification or authentication in such settings. In this paper, we begin reenvisioning access control and authentication for the home IoT. We propose that access control focus on IoT capabilities (i. e., certain actions that devices can perform), rather than on a per-device granularity. In a 425-participant online user study, we find stark differences in participants’ desired access-control policies for different capabilities within a single device, as well as based on who is trying to use that capability. From these desired policies, we identify likely candidates for default policies. We also pinpoint necessary primitives for specifying more complex, yet desired, access-control policies. These primitives range from the time of day to the current location of users. Finally, we discuss the degree to which different authentication methods potentially support desired policies.", "title": "" }, { "docid": "8c04758d9f1c44e007abf6d2727d4a4f", "text": "The automatic identification and diagnosis of rice diseases are highly desired in the field of agricultural information. Deep learning is a hot research topic in pattern recognition and machine learning at present, it can effectively solve these problems in vegetable pathology. In this study, we propose a novel rice diseases identification method based on deep convolutional neural networks (CNNs) techniques. 
Using a dataset of 500 natural images of diseased and healthy rice leaves and stems captured from rice experimental field, CNNs are trained to identify 10 common rice diseases. Under the 10-fold cross-validation strategy, the proposed CNNs-based model achieves an accuracy of 95.48%. This accuracy is much higher than conventional machine learning model. The simulation results for the identification of rice diseases show the feasibility and effectiveness of the proposed method. © 2017 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "1b6812231498387f158d24de8669dc27", "text": "The ideas and findings in this report should not be construed as an official DoD position. It is published in the interest of scientific and technical information exchange. Use of any trademarks in this report is not intended in any way to infringe on the rights of the trademark holder. Internal use. Permission to reproduce this document and to prepare derivative works from this document for internal use is granted, provided the copyright and \" No Warranty \" statements are included with all reproductions and derivative works. External use. This document may be reproduced in its entirety, without modification, and freely distributed in written or electronic form without requesting formal permission. Permission is required for any other external and/or commercial use. a federally funded research and development center. The Government of the United States has a royalty-free government-purpose license to use, duplicate, or disclose the work, in whole or in part and in any manner, and to have or permit others to do so, for government purposes pursuant to the copyright license under the clause at 252.227-7013. Abstract xiii 1 Introduction 1 1.1 Purpose and Structure of this Report 1 1.2 Background 1 1.3 The Strategic Planning Landscape 1", "title": "" }, { "docid": "4cda02d9f5b5b16773b8cbffc54e91ca", "text": "We present a novel global stereo model designed for view interpolation. Unlike existing stereo models which only output a disparity map, our model is able to output a 3D triangular mesh, which can be directly used for view interpolation. To this aim, we partition the input stereo images into 2D triangles with shared vertices. Lifting the 2D triangulation to 3D naturally generates a corresponding mesh. A technical difficulty is to properly split vertices to multiple copies when they appear at depth discontinuous boundaries. To deal with this problem, we formulate our objective as a two-layer MRF, with the upper layer modeling the splitting properties of the vertices and the lower layer optimizing a region-based stereo matching. Experiments on the Middlebury and the Herodion datasets demonstrate that our model is able to synthesize visually coherent new view angles with high PSNR, as well as outputting high quality disparity maps which rank at the first place on the new challenging high resolution Middlebury 3.0 benchmark.", "title": "" }, { "docid": "b1958bbb9348a05186da6db649490cdd", "text": "Fourier ptychography (FP) utilizes illumination control and computational post-processing to increase the resolution of bright-field microscopes. In effect, FP extends the fixed numerical aperture (NA) of an objective lens to form a larger synthetic system NA. Here, we build an FP microscope (FPM) using a 40X 0.75NA objective lens to synthesize a system NA of 1.45. This system achieved a two-slit resolution of 335 nm at a wavelength of 632 nm. 
This resolution closely adheres to theoretical prediction and is comparable to the measured resolution (315 nm) associated with a standard, commercially available 1.25 NA oil immersion microscope. Our work indicates that Fourier ptychography is an attractive method to improve the resolution-versus-NA performance, increase the working distance, and enlarge the field-of-view of high-resolution bright-field microscopes by employing lower NA objectives.", "title": "" }, { "docid": "8715a3b9ac7487adbb6d58e8a45ceef6", "text": "Before the computer age, authenticating a user was a relatively simple process. One person could authenticate another by visual recognition, interpersonal communication, or, more formally, mutually agreed upon authentication methods. With the onset of the computer age, authentication has become more complicated. Face-to-face visual authentication has largely dissipated, with computers and networks intervening. Sensitive information is exchanged daily between humans and computers, and from computer to computer. This complexity demands more formal protection methods; in short, authentication processes to manage our routine interactions with such machines and networks. Authentication is the process of positively verifying identity, be it that of a user, device, or entity in a computer system. Often authentication is the prerequisite to accessing system resources. Positive verification is accomplished by means of matching some indicator of identity, such as a shared secret prearranged at the time a person was authorized to use the system. The most familiar user authenticator in use today is the password. The secure sockets layer (SSL) is an example of machine to machine authentication. Human–machine authentication is known as user authentication and it consists of verifying the identity of a user: is this person really who she claims to be? User authentication is much less secure than machine authentication and is known as the Achilles’ heel of secure systems. This paper introduces various human authenticators and compares them based on security, convenience, and cost. The discussion is set in the context of a larger analysis of security issues, namely, measuring a system’s vulnerability to attack. The focus is kept on remote computer authentication. Authenticators can be categorized into three main types: secrets (what you know), tokens (what you have), and IDs (who you are). A password is a secret word, phrase, or personal identification number. Although passwords are ubiquitously used, they pose vulnerabilities, the biggest being that a short mnemonic password can be guessed or searched by an ambitious attacker, while a longer, random password is difficult for a person to remember. A token is a physical device used to aid authentication. Examples include bank cards and smart cards. A token can be an active device that yields one-time passcodes (time-synchronous or", "title": "" }, { "docid": "33285ad9f7bc6e33b48e3f1e27a1ccc9", "text": "Information visualization is a very important tool in BigData analytics. BigData, structured and unstructured data which contains images, videos, texts, audio and other forms of data, collected from multiple datasets, is too big, too complex and moves too fast to analyse using traditional methods. This has given rise to two issues; 1) how to reduce multidimensional data without the loss of any data patterns for multiple datasets, 2) how to visualize BigData patterns for analysis. 
In this paper, we have classified the BigData attributes into `5Ws' data dimensions, and then established a `5Ws' density approach that represents the characteristics of data flow patterns. We use parallel coordinates to display the `5Ws' sending and receiving densities which provide more analytic features for BigData analysis. The experiment shows that this new model with parallel coordinate visualization can be efficiently used for BigData analysis and visualization.", "title": "" }, { "docid": "5bd3cf8712d04b19226e53fca937e5a6", "text": "This paper reviews the published studies on tourism demand modelling and forecasting since 2000. One of the key findings of this review is that the methods used in analysing and forecasting the demand for tourism have been more diverse than those identified by other review articles. In addition to the most popular time series and econometric models, a number of new techniques have emerged in the literature. However, as far as the forecasting accuracy is concerned, the study shows that there is no single model that consistently outperforms other models in all situations. Furthermore, this study identifies some new research directions, which include improving the forecasting accuracy through forecast combination; integrating both qualitative and quantitative forecasting approaches, tourism cycles and seasonality analysis, events’ impact assessment and risk forecasting.", "title": "" }, { "docid": "f427dc8838618d0904cfe27200ac032d", "text": "Sequential pattern mining has been studied extensively in data mining community. Most previous studies require the specification of a minimum support threshold to perform the mining. However, it is difficult for users to provide an appropriate threshold in practice. To overcome this difficulty, we propose an alternative task: mining top-k frequent closed sequential patterns of length no less than min_ℓ, where k is the desired number of closed sequential patterns to be mined, and min_ℓ is the minimum length of each pattern. We mine closed patterns since they are compact representations of frequent patterns. We developed an efficient algorithm, called TSP, which makes use of the length constraint and the properties of top-k closed sequential patterns to perform dynamic support-raising and projected database-pruning. Our extensive performance study shows that TSP outperforms the closed sequential pattern mining algorithm even when the latter is running with the best tuned minimum support threshold.", "title": "" } ]
scidocsrr
91b02ebcd000160014f99bfb8de326dd
Early Fusion of Camera and Lidar for robust road detection based on U-Net FCN
[ { "docid": "36152b59aaaaa7e3a69ac57db17e44b8", "text": "In this paper, a reliable road/obstacle detection with 3D point cloud for intelligent vehicle on a variety of challenging environments (undulated road and/or uphill/ downhill) is handled. For robust detection of road we propose the followings: 1) correction of 3D point cloud distorted by the motion of vehicle (high speed and heading up and down) incorporating vehicle posture information; 2) guideline for the best selection of the proper features such as gradient value, height average of neighboring node; 3) transformation of the road detection problem into a classification problem of different features; and 4) inference algorithm based on MRF with the loopy belief propagation for the area that the LIDAR does not cover. In experiments, we use a publicly available dataset as well as numerous scans acquired by the HDL-64E sensor mounted on experimental vehicle in inner city traffic scenes. The results show that the proposed method is more robust and reliable than the conventional approach based on the height value on the variety of challenging environment. Jaemin Byun Robot and Cognitive System Research Department(RCSRD) in Electronicsand Telecommunications Research Institute (ETRI), daejeon, south korea, e-mail: jaemin.byu@etri.re.kr, Ki-in Na Robot and Cognitive System Research Department(RCSRD) in Electronicsand Telecommunications Research Institute (ETRI), daejeon, south korea, e-mail: kina@etri.re.kr Beom-su Seo Robot and Cognitive System Research Department(RCSRD) in Electronicsand Telecommunications Research Institute (ETRI), daejeon, south korea, e-mail: bsseo@etri.re.kr MyungChan Roh Robot and Cognitive System Research Department(RCSRD) in Electronicsand Telecommunications Research Institute (ETRI), daejeon, south korea, e-mail: mcroh@etri.re.kr", "title": "" }, { "docid": "378dcab60812075f58534d8dca1c5f33", "text": "Autonomous driving is a key factor for future mobility. Properly perceiving the environment of the vehicles is essential for a safe driving, which requires computing accurate geometric and semantic information in real-time. In this paper, we challenge state-of-the-art computer vision algorithms for building a perception system for autonomous driving. An inherent drawback in the computation of visual semantics is the trade-off between accuracy and computational cost. We propose to circumvent this problem by following an offline-online strategy. During the offline stage dense 3D semantic maps are created. In the online stage the current driving area is recognized in the maps via a re-localization process, which allows to retrieve the pre-computed accurate semantics and 3D geometry in real-time. Then, detecting the dynamic obstacles we obtain a rich understanding of the current scene. We evaluate quantitatively our proposal in the KITTI dataset and discuss the related open challenges for the computer vision community.", "title": "" } ]
[ { "docid": "fb05042ac52f448d9c7d3f820df4b790", "text": "Protein gamma-turn prediction is useful in protein function studies and experimental design. Several methods for gamma-turn prediction have been developed, but the results were unsatisfactory with Matthew correlation coefficients (MCC) around 0.2–0.4. Hence, it is worthwhile exploring new methods for the prediction. A cutting-edge deep neural network, named Capsule Network (CapsuleNet), provides a new opportunity for gamma-turn prediction. Even when the number of input samples is relatively small, the capsules from CapsuleNet are effective to extract high-level features for classification tasks. Here, we propose a deep inception capsule network for gamma-turn prediction. Its performance on the gamma-turn benchmark GT320 achieved an MCC of 0.45, which significantly outperformed the previous best method with an MCC of 0.38. This is the first gamma-turn prediction method utilizing deep neural networks. Also, to our knowledge, it is the first published bioinformatics application utilizing capsule network, which will provide a useful example for the community. Executable and source code can be download at http://dslsrv8.cs.missouri.edu/~cf797/MUFoldGammaTurn/download.html.", "title": "" }, { "docid": "ef7069ddd470608196bbeef5e8fda49d", "text": "ETHNOPHARMACOLOGICAL RELEVANCE\nNigella sativa (N. sativa) L. (Ranunculaceae), well known as black cumin, has been used as a herbal medicine that has a rich historical background. It has been traditionally and clinically used in the treatment of several diseases. Many reviews have investigated this valuable plant, but none of them focused on its clinical effects. Therefore, the aim of the present review is to provide a comprehensive report of clinical studies on N. sativa and some of its constituents.\n\n\nMATERIALS AND METHODS\nStudies on the clinical effects of N. sativa and its main constituent, thymoquinone, which were published between 1979 and 2015, were searched using various databases.\n\n\nRESULTS AND DISCUSSION\nDuring the last three decades, several in vivo and in vitro animal studies revealed the pharmacological properties of the plant, including its antioxidant, antibacterial, antiproliferative, proapoptotic, anti-inflammatory, and antiepileptic properties, and its effect on improvement in atherogenesis, endothelial dysfunction, glucose metabolism, lipid profile dysfunction, and prevention of hippocampus pyramidal cell loss. In clinical studies, antimicrobial, antioxidant, anti-inflammatory, antitumor, and antidiabetic properties as well as therapeutic effects on metabolic syndrome, and gastrointestinal, neuronal, cardiovascular, respiratory, urinary, and reproductive disorders were found in N. sativa and its constituents.\n\n\nCONCLUSION\nExtensive basic and clinical studies on N. sativa seed powder, oil, extracts (aqueous, ethanolic, and methanolic), and thymoquinone showed valuable therapeutic effects on different disorders with a wide range of safe doses. However, there were some confounding factors in the reviewed clinical trials, and a few of them presented data about the phytochemical composition of the plant. Therefore, a more standard clinical trial with N. sativa supplementation is needed for the plant to be used as an inexpensive potential biological adjuvant therapy.", "title": "" }, { "docid": "7a1e32dc80550704207c5e0c7e73da26", "text": "Stock markets are affected by many uncertainties and interrelated economic and political factors at both local and global levels. 
The key to successful stock market forecasting is achieving the best results with the minimum required input data. Determining the set of relevant factors for making accurate predictions is a complicated task, so regular stock market analysis is essential. More specifically, the stock market’s movements are analyzed and predicted in order to retrieve knowledge that could guide investors on when to buy and sell. It will also help the investor to make money through his investment in the stock market. This paper surveys a large number of resources from research papers, web-sources, company reports and other available sources.", "title": "" }, { "docid": "17f685f61fba724311a86267cdf33871", "text": "The main advantage of using the Hough Transform to detect ellipses is its robustness against missing data points. However, the storage and computational requirements of the Hough Transform preclude practical applications. Although there are many modifications to the Hough Transform, these modifications still demand significant storage. In this paper, we present a novel ellipse detection algorithm which retains the original advantages of the Hough Transform while minimizing the storage and computation complexity. More specifically, we use an accumulator that is only one-dimensional. As such, our algorithm is more effective in terms of storage requirements. In addition, our algorithm can be easily parallelized to achieve good execution time. Experimental results on both synthetic and real images demonstrate the robustness and effectiveness of our algorithm, in which both complete and incomplete ellipses can be extracted.", "title": "" }, { "docid": "ec89eb1388055a1c81eb26bf2e2d1316", "text": "There is growing interest across a range of disciplines in the relationship between pets and health, with a range of therapeutic, physiological, psychological and psychosocial benefits now documented. While much of the literature has focused on the individual benefits of pet ownership, this study considered the potential health benefits that might accrue to the broader community, as encapsulated in the construct of social capital. In a random survey, 339 adult residents from Perth, Western Australia, were selected from three suburbs and interviewed by telephone. Pet ownership was found to be positively associated with some forms of social contact and interaction, and with perceptions of neighbourhood friendliness. After adjustment for demographic variables, pet owners scored higher on social capital and civic engagement scales. The results suggest that pet ownership provides potential opportunities for interactions between neighbours and that further research in this area is warranted. Social capital is another potential mechanism by which pets exert an influence on human health.", "title": "" }, { "docid": "13d5011f3d6c1997e3c44b3f03cf2017", "text": "Reinforcement learning with an appropriately designed reward signal can be used to solve many sequential learning problems. However, in practice, reinforcement learning algorithms can be broken in unexpected, counterintuitive ways. One of the failure modes is reward hacking, which usually happens when a reward function makes the agent obtain a high return in an unexpected way. This unexpected way may subvert the designer’s intentions and lead to accidents during training. In this paper, a new multi-step state-action value algorithm is proposed to solve the problem of reward hacking.
Unlike traditional algorithms, the proposed method uses a new return function, which alters the discount of future rewards and no longer stresses the immediate reward as the main influence when selecting the current state action. The performance of the proposed method is evaluated on two games, Mappy and Mountain Car. The empirical results demonstrate that the proposed method can alleviate the negative impact of reward hacking and greatly improve the performance of the reinforcement learning algorithm. Moreover, the results illustrate that the proposed method can also be applied successfully to continuous state-space problems.", "title": "" }, { "docid": "989cdc80521e1c8761f733ad3ed49d79", "text": "The wide availability of sensing devices in the medical domain causes the creation of large and very large data sets. Hence, tasks such as classification in such data sets become more and more difficult. Deep Neural Networks (DNNs) are very effective in classification, yet finding the best values for their hyper-parameters is a difficult and time-consuming task. This paper introduces an approach to decrease the execution time needed to automatically find good hyper-parameter values for DNNs through Evolutionary Algorithms when a classification task is faced. This decrease is obtained through the combination of two mechanisms. The former is a distributed version of a Differential Evolution algorithm. The latter is based on a procedure aimed at reducing the size of the training set, relying on a decomposition of the space of the data set attributes into cubes. Experiments are carried out on a medical data set about Obstructive Sleep Apnea. They show that sub-optimal DNN hyper-parameter values are obtained in a much shorter time than when this reduction is not applied, and that this does not come at the expense of classification accuracy on the test set items.", "title": "" }, { "docid": "467637b1f55d4673d0ddd5322a130979", "text": "In this paper, we propose a novel deep convolutional neural network (CNN)-based algorithm for solving ill-posed inverse problems. Regularized iterative algorithms have emerged as the standard approach to ill-posed inverse problems in the past few decades. These methods produce excellent results, but can be challenging to deploy in practice due to factors including the high computational cost of the forward and adjoint operators and the difficulty of hyperparameter selection. The starting point of this paper is the observation that unrolled iterative methods have the form of a CNN (filtering followed by pointwise non-linearity) when the normal operator ($H^{*}H$, where $H^{*}$ is the adjoint of the forward imaging operator, $H$) of the forward model is a convolution. Based on this observation, we propose using direct inversion followed by a CNN to solve normal-convolutional inverse problems. The direct inversion encapsulates the physical model of the system, but leads to artifacts when the problem is ill posed; the CNN combines multiresolution decomposition and residual learning in order to learn to remove these artifacts while preserving image structure. 
We demonstrate the performance of the proposed network in sparse-view reconstruction (down to 50 views) on parallel beam X-ray computed tomography in synthetic phantoms as well as in real experimental sinograms. The proposed network outperforms total variation-regularized iterative reconstruction for the more realistic phantoms and requires less than a second to reconstruct a $512\times 512$ image on the GPU.", "title": "" }, { "docid": "ea5431e8f2f1e197988cf1b52ee685ce", "text": "Prunus mume (mei), which was domesticated in China more than 3,000 years ago as an ornamental plant and fruit, is one of the first genomes among the Prunus subfamilies of Rosaceae to be sequenced. Here, we assemble a 280M genome by combining 101-fold next-generation sequencing and optical mapping data. We further anchor 83.9% of scaffolds to eight chromosomes with a genetic map constructed by restriction-site-associated DNA sequencing. Combining the P. mume genome with available data, we succeed in reconstructing nine ancestral chromosomes of the Rosaceae family, as well as depicting chromosome fusion, fission and duplication history in three major subfamilies. We sequence the transcriptome of various tissues and perform genome-wide analysis to reveal the characteristics of P. mume, including its regulation of early blooming in endodormancy, immune response against bacterial infection and biosynthesis of flower scent. The P. mume genome sequence adds to our understanding of Rosaceae evolution and provides important data for improvement of fruit trees.", "title": "" }, { "docid": "4a761bed54487cb9c34fc0ff27883944", "text": "We show that unsupervised training of latent capsule layers using only the reconstruction loss, without masking to select the correct output class, causes a loss of equivariances and other desirable capsule qualities. This implies that supervised capsules networks can’t be very deep. Unsupervised sparsening of latent capsule layer activity both restores these qualities and appears to generalize better than supervised masking, while potentially enabling deeper capsules networks. We train a sparse, unsupervised capsules network of similar geometry to (Sabour et al., 2017) on MNIST (LeCun et al., 1998) and then test classification accuracy on affNIST using an SVM layer. Accuracy is improved from the benchmark 79% to 90%.", "title": "" }, { "docid": "ece81717e6cdab30cfb60d705bc4fc5e", "text": "It is well established that autism spectrum disorders (ASD) have a strong genetic component; however, for at least 70% of cases, the underlying genetic cause is unknown. Under the hypothesis that de novo mutations underlie a substantial fraction of the risk for developing ASD in families with no previous history of ASD or related phenotypes—so-called sporadic or simplex families—we sequenced all coding regions of the genome (the exome) for parent–child trios exhibiting sporadic ASD, including 189 new trios and 20 that were previously reported. Additionally, we also sequenced the exomes of 50 unaffected siblings corresponding to these new (n = 31) and previously reported trios (n = 19), for a total of 677 individual exomes from 209 families. Here we show that de novo point mutations are overwhelmingly paternal in origin (4:1 bias) and positively correlated with paternal age, consistent with the modest increased risk for children of older fathers to develop ASD. 
Moreover, 39% (49 of 126) of the most severe or disruptive de novo mutations map to a highly interconnected β-catenin/chromatin remodelling protein network ranked significantly for autism candidate genes. In proband exomes, recurrent protein-altering mutations were observed in two genes: CHD8 and NTNG1. Mutation screening of six candidate genes in 1,703 ASD probands identified additional de novo, protein-altering mutations in GRIN2B, LAMC3 and SCN1A. Combined with copy number variant (CNV) data, these results indicate extreme locus heterogeneity but also provide a target for future discovery, diagnostics and therapeutics.", "title": "" }, { "docid": "acf514a4aa34487121cc853e55ceaed4", "text": "Stereotype threat spillover is a situational predicament in which coping with the stress of stereotype confirmation leaves one in a depleted volitional state and thus less likely to engage in effortful self-control in a variety of domains. We examined this phenomenon in 4 studies in which we had participants cope with stereotype and social identity threat and then measured their performance in domains in which stereotypes were not \"in the air.\" In Study 1 we examined whether taking a threatening math test could lead women to respond aggressively. In Study 2 we investigated whether coping with a threatening math test could lead women to indulge themselves with unhealthy food later on and examined the moderation of this effect by personal characteristics that contribute to identity-threat appraisals. In Study 3 we investigated whether vividly remembering an experience of social identity threat results in risky decision making. Finally, in Study 4 we asked whether coping with threat could directly influence attentional control and whether the effect was implemented by inefficient performance monitoring, as assessed by electroencephalography. Our results indicate that stereotype threat can spill over and impact self-control in a diverse array of nonstereotyped domains. These results reveal the potency of stereotype threat and that its negative consequences might extend further than was previously thought.", "title": "" }, { "docid": "1adacc7dc452e27024756c36eecb8cae", "text": "The techniques of using neural networks to learn distributed word representations (i.e., word embeddings) have been used to solve a variety of natural language processing tasks. The recently proposed methods, such as CBOW and Skip-gram, have demonstrated their effectiveness in learning word embeddings based on context information such that the obtained word embeddings can capture both semantic and syntactic relationships between words. However, it is quite challenging to produce high-quality word representations for rare or unknown words due to their insufficient context information. In this paper, we propose to leverage morphological knowledge to address this problem. Particularly, we introduce the morphological knowledge as both additional input representation and auxiliary supervision to the neural network framework. As a result, beyond word representations, the proposed neural network model will produce morpheme representations, which can be further employed to infer the representations of rare or unknown words based on their morphological structure. 
Experiments on an analogical reasoning task and several word similarity tasks have demonstrated the effectiveness of our method in producing high-quality words embeddings compared with the state-of-the-art methods.", "title": "" }, { "docid": "9837e331cf1c2a5bb0cee92e4ae44ca5", "text": "Isocitrate dehydrogenase 2 (IDH2) is located in the mitochondrial matrix. IDH2 acts in the forward Krebs cycle as an NADP(+)-consuming enzyme, providing NADPH for maintenance of the reduced glutathione and peroxiredoxin systems and for self-maintenance by reactivation of cystine-inactivated IDH2 by glutaredoxin 2. In highly respiring cells, the resulting NAD(+) accumulation then induces sirtuin-3-mediated activating IDH2 deacetylation, thus increasing its protective function. Reductive carboxylation of 2-oxoglutarate by IDH2 (in the reverse Krebs cycle direction), which consumes NADPH, may follow glutaminolysis of glutamine to 2-oxoglutarate in cancer cells. When the reverse aconitase reaction and citrate efflux are added, this overall \"anoxic\" glutaminolysis mode may help highly malignant tumors survive aglycemia during hypoxia. Intermittent glycolysis would hypothetically be required to provide ATP. When oxidative phosphorylation is dormant, this mode causes substantial oxidative stress. Arg172 mutants of human IDH2-frequently found with similar mutants of cytosolic IDH1 in grade 2 and 3 gliomas, secondary glioblastomas, and acute myeloid leukemia-catalyze reductive carboxylation of 2-oxoglutarate and reduction to D-2-hydroxyglutarate, which strengthens the neoplastic phenotype by competitive inhibition of histone demethylation and 5-methylcytosine hydroxylation, leading to genome-wide histone and DNA methylation alternations. D-2-hydroxyglutarate also interferes with proline hydroxylation and thus may stabilize hypoxia-induced factor α.", "title": "" }, { "docid": "116294113ff20558d3bcb297950f6d63", "text": "This paper aims to analyze the influence of a Halbach array by using a semi analytical design optimization approach on a novel electrical machine design with slotless air gap winding. The useable magnetic flux density caused by the Halbach array magnetization is studied and compared to conventional radial magnetization systems. First, several discrete magnetic flux densities are analyzed for an infinitesimal wire size in an air gap range from 0.1 mm to 5 mm by the finite element method in Ansys Maxwell. Fourier analysis is used to approximate continuous functions for each magnetic flux density characteristic for each air gap height. Then, using a six-step commutation control, the magnetic flux acting on a certain phase geometry is considered for a parametric machine model. The design optimization approach utilizes the design freedom of the magnetic flux density shape in air gap as well as the heights and depths of all magnetic circuit components, which are stator and rotor cores, permanent magnets, air gap, and air gap winding. Use of a nonlinear optimization formulation, allows for fast and precise analytical calculation of objective function. In this way the influence of both magnetizations on Pareto optimal machine design sets, when mass and efficiency are weighted, are compared. Other design requirements, such as torque, current, air gap and wire height, are considered via constraints on this optimization. Finally, an optimal motor design study for the Halbach array magnetization pattern is compared to the conventional radial magnetization. 
As a reference design, an existing 15-inch rim wheel-hub motor with air gap winding is used.", "title": "" }, { "docid": "eb8f0a30d222b89e5fda3ea1d83ea525", "text": "We present a method which exploits automatically generated scientific discourse annotations to create a content model for the summarisation of scientific articles. Full papers are first automatically annotated using the CoreSC scheme, which captures 11 contentbased concepts such as Hypothesis, Result, Conclusion etc at the sentence level. A content model which follows the sequence of CoreSC categories observed in abstracts is used to provide the skeleton of the summary, making a distinction between dependent and independent categories. Summary creation is also guided by the distribution of CoreSC categories found in the full articles, in order to adequately represent the article content. Finally, we demonstrate the usefulness of the summaries by evaluating them in a complex question answering task. Results are very encouraging as summaries of papers from automatically obtained CoreSCs enable experts to answer 66% of complex content-related questions designed on the basis of paper abstracts. The questions were answered with a precision of 75%, where the upper bound for human summaries (abstracts) was 95%.", "title": "" }, { "docid": "3c29c0a3e8ec6292f05c7907436b5e9a", "text": "Emerging Wi-Fi technologies are expected to cope with large amounts of traffic in dense networks. Consequently, proposals for the future IEEE 802.11ax Wi-Fi amendment include sensing threshold and transmit power adaptation, in order to improve spatial reuse. However, it is not yet understood to which extent such adaptive approaches — and which variant — would achieve a better balance between spatial reuse and the level of interference, in order to improve the network performance. Moreover, it is not clear how legacy Wi-Fi devices would be affected by new-generation Wi-Fi implementing these adaptive design parameters. In this paper we present a thorough comparative study in ns-3 for four major proposed adaptation algorithms and we compare their performance against legacy non-adaptive Wi-Fi. Additionally, we consider mixed populations where both legacy non-adaptive and new-generation adaptive populations coexist. We assume a dense indoor residential deployment and different numbers of available channels in the 5 GHz band, relevant for future IEEE 802.11ax. Our results show that for the dense scenarios considered, the algorithms do not significantly improve the overall network performance compared to the legacy baseline, as they increase the throughput of some nodes, while decreasing the throughput of others. For mixed populations in dense deployments, adaptation algorithms that improve the performance of new-generation nodes degrade the performance of legacy nodes and vice versa. This suggests that to support Wi-Fi evolution for dense deployments and consistently increase the throughput throughout the network, more sophisticated algorithms are needed, e.g. considering combinations of input parameters in current variants.", "title": "" }, { "docid": "5445892bdf8478cfacac9d599dead1f9", "text": "The problem of determining feature correspondences across multiple views is considered. The term \"true multi-image\" matching is introduced to describe techniques that make full and efficient use of the geometric relationships between multiple images and the scene. 
A true multi-image technique must generalize to any number of images, be of linear algorithmic complexity in the number of images, and use all the images in an equal manner. A new space-sweep approach to true multi-image matching is presented that simultaneously determines 2D feature correspondences and the 3D positions of feature points in the scene. The method is illustrated on a seven-image matching example from the aerial im-", "title": "" }, { "docid": "a39fb4e8c15878ba4fdac54f02451789", "text": "Cloud computing systems can easily be threatened by various attacks, because most cloud computing systems provide services to many people who are not proven to be trustworthy. Due to their distributed nature, cloud computing environments are easy targets for intruders [1]. There are various Intrusion Detection Systems, each with its own specifications. Cloud computing has two approaches, i.e., Knowledge-based IDS and Behavior-Based IDS, to detect intrusions in cloud computing. Behavior-Based IDS assumes that an intrusion can be detected by observing a deviation from the normal or expected behavior of the system or users [2]. Knowledge-based IDS techniques apply knowledge", "title": "" }, { "docid": "74fd21dccc9e883349979c8292c5f450", "text": "Stack Overflow (SO) has been a great source of natural language questions and their code solutions (i.e., question-code pairs), which are critical for many tasks including code retrieval and annotation. In most existing research, question-code pairs were collected heuristically and tend to have low quality. In this paper, we investigate a new problem of systematically mining question-code pairs from Stack Overflow (in contrast to heuristically collecting them). It is formulated as predicting whether or not a code snippet is a standalone solution to a question. We propose a novel Bi-View Hierarchical Neural Network which can capture both the programming content and the textual context of a code snippet (i.e., two views) to make a prediction. On two manually annotated datasets in the Python and SQL domains, our framework substantially outperforms heuristic methods with at least 15% higher F1 and accuracy. Furthermore, we present StaQC (Stack Overflow Question-Code pairs), the largest dataset to date of ∼148K Python and ∼120K SQL question-code pairs, automatically mined from SO using our framework. Under various case studies, we demonstrate that StaQC can greatly help develop data-hungry models for associating natural language with programming language.", "title": "" } ]
scidocsrr
5b67442ae83eb4edbcdfb6851947e8e9
Evaluating the Potential of Texture and Color Descriptors for Remote Sensing Image Retrieval and Classification
[ { "docid": "84ca7dc9cac79fe14ea2061919c44a05", "text": "We describe two new color indexing techniques. The rst one is a more robust version of the commonly used color histogram indexing. In the index we store the cumulative color histograms. The L 1-, L 2-, or L 1-distance between two cumulative color histograms can be used to deene a similarity measure of these two color distributions. We show that while this method produces only slightly better results than color histogram methods, it is more robust with respect to the quantization parameter of the histograms. The second technique is an example of a new approach to color indexing. Instead of storing the complete color distributions, the index contains only their dominant features. We implement this approach by storing the rst three moments of each color channel of an image in the index, i.e., for a HSV image we store only 9 oating point numbers per image. The similarity function which is used for the retrieval is a weighted sum of the absolute diierences between corresponding moments. Our tests clearly demonstrate that a retrieval based on this technique produces better results and runs faster than the histogram-based methods.", "title": "" } ]
[ { "docid": "5ce6bac4ec1f916c1ebab9da09816c0e", "text": "High-performance parallel computing architectures are increasingly based on multi-core processors. While current commercially available processors are at 8 and 16 cores, technological and power constraints are limiting the performance growth of the cores and are resulting in architectures with much higher core counts, such as the experimental many-core Intel Single-chip Cloud Computer (SCC) platform. These trends are presenting new sets of challenges to HPC applications including programming complexity and the need for extreme energy efficiency.\n In this paper, we first investigate the power behavior of scientific Partitioned Global Address Space (PGAS) application kernels on the SCC platform, and explore opportunities and challenges for power management within the PGAS framework. Results obtained via empirical evaluation of Unified Parallel C (UPC) applications on the SCC platform under different constraints, show that, for specific operations, the potential for energy savings in PGAS is large; and power/performance trade-offs can be effectively managed using a cross-layer approach. We investigate cross-layer power management using PGAS language extensions and runtime mechanisms that manipulate power/performance tradeoffs. Specifically, we present the design, implementation and evaluation of such a middleware for application-aware cross-layer power management of UPC applications on the SCC platform. Finally, based on our observations, we provide a set of insights that can be used to support similar power management for PGAS applications on other many-core platforms.", "title": "" }, { "docid": "799c839fad857c1ba90a9905f1b1d544", "text": "Much of the research published in the property discipline consists of work utilising quantitative methods. While research gained using quantitative methods, if appropriately designed and rigorous, leads to results which are typically generalisable and quantifiable, it does not allow for a rich and in-depth understanding of a phenomenon. This is especially so if a researcher’s aim is to uncover the issues or factors underlying that phenomenon. Such an aim would require using a qualitative research methodology, and possibly an interpretive as opposed to a positivist theoretical perspective. The purpose of this paper is to provide a general overview of qualitative methodologies with the aim of encouraging a broadening of methodological approaches to overcome the positivist methodological bias which has the potential of inhibiting property behavioural research.", "title": "" }, { "docid": "8503b51197d8242c4ec242f7190c2405", "text": "We provide a state-of-the-art explication of application security and software protection. The relationship between application security and data security, network security, and software security is discussed. Three simplified threat models for software are sketched. To better understand what attacks must be defended against in order to improve software security, we survey software attack approaches and attack tools. A simplified software security view of a software application is given, and along with illustrative examples, used to motivate a partial list of software security requirements for applications.", "title": "" }, { "docid": "a679d37b88485cf71569f9aeefefbac5", "text": "Incrementality is ubiquitous in human-human interaction and beneficial for human-computer interaction. 
It has been a topic of research in different parts of the NLP community, mostly with focus on the specific topic at hand even though incremental systems have to deal with similar challenges regardless of domain. In this survey, I consolidate and categorize the approaches, identifying similarities and differences in the computation and data, and show trade-offs that have to be considered. A focus lies on evaluating incremental systems because the standard metrics often fail to capture the incremental properties of a system and coming up with a suitable evaluation scheme is non-trivial. Title and Abstract in German Inkrementelle Sprachverarbeitung: Herausforderungen, Strategien und Evaluation Inkrementalität ist allgegenwärtig in Mensch-Mensch-Interaktiton und hilfreich für MenschComputer-Interaktion. In verschiedenen Teilen der NLP-Community wird an Inkrementalität geforscht, zumeist fokussiert auf eine konkrete Aufgabe, obwohl sich inkrementellen Systemen domänenübergreifend ähnliche Herausforderungen stellen. In diesem Überblick trage ich Ansätze zusammen, kategorisiere sie und stelle Ähnlichkeiten und Unterschiede in Berechnung und Daten sowie nötige Abwägungen vor. Ein Fokus liegt auf der Evaluierung inkrementeller Systeme, da Standardmetriken of nicht in der Lage sind, die inkrementellen Eigenschaften eines Systems einzufangen und passende Evaluationsschemata zu entwickeln nicht einfach ist.", "title": "" }, { "docid": "9ca71bbeb4643a6a347050002f1317f5", "text": "In modern society, we are increasingly disconnected from natural light/dark cycles and beset by round-the-clock exposure to artificial light. Light has powerful effects on physical and mental health, in part via the circadian system, and thus the timing of light exposure dictates whether it is helpful or harmful. In their compelling paper, Obayashi et al. (Am J Epidemiol. 2018;187(3):427-434.) offer evidence that light at night can prospectively predict an elevated incidence of depressive symptoms in older adults. Strengths of the study include the longitudinal design and direct, objective assessment of light levels, as well as accounting for multiple plausible confounders during analyses. Follow-up studies should address the study's limitations, including reliance on a global self-report of sleep quality and a 2-night assessment of light exposure that may not reliably represent typical light exposure. In addition, experimental studies including physiological circadian measures will be necessary to determine whether the light effects on depression are mediated through the circadian system or are so-called \"direct\" effects of light. In any case, these exciting findings could inform novel approaches to preventing depressive disorders in older adults.", "title": "" }, { "docid": "b9fb60fadf13304b46f87fda305f118e", "text": "Coordinated cyberattacks of power meter readings can be arranged to be undetectable by any bad data detection algorithm in the power system state estimation process. These unobservable attacks present a potentially serious threat to grid operations. Of particular interest are sparse attacks that involve the compromise of a modest number of meter readings. An efficient algorithm to find all unobservable attacks [under standard DC load flow approximations] involving the compromise of exactly two power injection meters and an arbitrary number of line power meters is presented. This requires O(n2m) flops for a power system with n buses and m line meters. 
If all lines are metered, there exist canonical forms that characterize all 3, 4, and 5-sparse unobservable attacks. These can be quickly detected in power systems using standard graph algorithms. Known-secure phasor measurement units [PMUs] can be used as countermeasures against an arbitrary collection of cyberattacks. Finding the minimum number of necessary PMUs is NP-hard. It is shown that p + 1 PMUs at carefully chosen buses are sufficient to neutralize a collection of p cyberattacks.", "title": "" }, { "docid": "69eceabd9967260cbdec56d02bcafd83", "text": "A modified Vivaldi antenna is proposed in this paper especially for the millimeter-wave application. The metal support frame is used to fix the structured substrate and increased the front-to-back ratio as well as the radiation gain. Detailed design process are presented, following which one sample is designed with its working frequency band from 75GHz to 150 GHz. The sample is also fabricated and measured. Good agreements between simulated results and measured results are obtained.", "title": "" }, { "docid": "d0f71092df2eab53e7f32eff1cb7af2e", "text": "Topic modeling of textual corpora is an important and challenging problem. In most previous work, the “bag-of-words” assumption is usually made which ignores the ordering of words. This assumption simplifies the computation, but it unrealistically loses the ordering information and the semantic of words in the context. In this paper, we present a Gaussian Mixture Neural Topic Model (GMNTM) which incorporates both the ordering of words and the semantic meaning of sentences into topic modeling. Specifically, we represent each topic as a cluster of multi-dimensional vectors and embed the corpus into a collection of vectors generated by the Gaussian mixture model. Each word is affected not only by its topic, but also by the embedding vector of its surrounding words and the context. The Gaussian mixture components and the topic of documents, sentences and words can be learnt jointly. Extensive experiments show that our model can learn better topics and more accurate word distributions for each topic. Quantitatively, comparing to state-of-the-art topic modeling approaches, GMNTM obtains significantly better performance in terms of perplexity, retrieval accuracy and classification accuracy.", "title": "" }, { "docid": "1d632b05f8b3ff5300a2a3ece8d05376", "text": "This study focuses on feature selection in paralinguistic analysis and presents recently developed supervised and unsupervised methods for feature subset selection and feature ranking. Using the standard k-nearest-neighbors (kNN) rule as the classification algorithm, the feature selection methods are evaluated individually and in different combinations in seven paralinguistic speaker trait classification tasks. In each analyzed data set, the overall number of features highly exceeds the number of data points available for training and evaluation, making a well-generalizing feature selection process extremely difficult. The performance of feature sets on the feature selection data is observed to be a poor indicator of their performance on unseen data. The studied feature selection methods clearly outperform a standard greedy hill-climbing selection algorithm by being more robust against overfitting. When the selection methods are suitably combined with each other, the performance in the classification task can be further improved. 
In general, it is shown that the use of automatic feature selection in paralinguistic analysis can be used to reduce the overall number of features to a fraction of the original feature set size while still achieving a comparable or even better performance than baseline support vector machine or random forest classifiers using the full feature set. The most typically selected features for recognition of speaker likability, intelligibility and five personality traits are also reported.", "title": "" }, { "docid": "390505bd6f04e899a15c64c26beac606", "text": "Existing automated essay scoring (AES) models rely on rated essays for the target prompt as training data. Despite their successes in prompt-dependent AES, how to effectively predict essay ratings under a prompt-independent setting remains a challenge, where the rated essays for the target prompt are not available. To close this gap, a two-stage deep neural network (TDNN) is proposed. In particular, in the first stage, using the rated essays for nontarget prompts as the training data, a shallow model is learned to select essays with an extreme quality for the target prompt, serving as pseudo training data; in the second stage, an end-to-end hybrid deep model is proposed to learn a prompt-dependent rating model consuming the pseudo training data from the first step. Evaluation of the proposed TDNN on the standard ASAP dataset demonstrates a promising improvement for the prompt-independent AES task.", "title": "" }, { "docid": "799bc245ecfabf59416432ab62fe9320", "text": "This study examines resolution skills in phishing email detection, defined as the abilities of individuals to discern correct judgments from incorrect judgments in probabilistic decisionmaking. An illustration of the resolution skills is provided. A number of antecedents to resolution skills in phishing email detection, including familiarity with the sender, familiarity with the email, online transaction experience, prior victimization of phishing attack, perceived selfefficacy, time to judgment, and variability of time in judgments, are examined. Implications of the study are further discussed.", "title": "" }, { "docid": "01835769f2dc9391051869374e200a6a", "text": "Finding sparse approximate solutions to large underdetermined linear systems of equations is a common problem in signal/image processing and statistics. Basis pursuit, the least absolute shrinkage and selection operator (LASSO), wavelet-based deconvolution and reconstruction, and compressed sensing (CS) are a few well-known areas in which problems of this type appear. One standard approach is to minimize an objective function that includes a quadratic (lscr 2) error term added to a sparsity-inducing (usually lscr1) regularizater. We present an algorithmic framework for the more general problem of minimizing the sum of a smooth convex function and a nonsmooth, possibly nonconvex regularizer. We propose iterative methods in which each step is obtained by solving an optimization subproblem involving a quadratic term with diagonal Hessian (i.e., separable in the unknowns) plus the original sparsity-inducing regularizer; our approach is suitable for cases in which this subproblem can be solved much more rapidly than the original problem. Under mild conditions (namely convexity of the regularizer), we prove convergence of the proposed iterative algorithm to a minimum of the objective function. 
In addition to solving the standard lscr2-lscr1 case, our framework yields efficient solution techniques for other regularizers, such as an lscrinfin norm and group-separable regularizers. It also generalizes immediately to the case in which the data is complex rather than real. Experiments with CS problems show that our approach is competitive with the fastest known methods for the standard lscr2-lscr1 problem, as well as being efficient on problems with other separable regularization terms.", "title": "" }, { "docid": "503c9c4d0d8f94d3e7a9ea8ee496e08b", "text": "Memories for context become less specific with time resulting in animals generalizing fear from training contexts to novel contexts. Though much attention has been given to the neural structures that underlie the long-term consolidation of a context fear memory, very little is known about the mechanisms responsible for the increase in fear generalization that occurs as the memory ages. Here, we examine the neural pattern of activation underlying the expression of a generalized context fear memory in male C57BL/6J mice. Animals were context fear conditioned and tested for fear in either the training context or a novel context at recent and remote time points. Animals were sacrificed and fluorescent in situ hybridization was performed to assay neural activation. Our results demonstrate activity of the prelimbic, infralimbic, and anterior cingulate (ACC) cortices as well as the ventral hippocampus (vHPC) underlie expression of a generalized fear memory. To verify the involvement of the ACC and vHPC in the expression of a generalized fear memory, animals were context fear conditioned and infused with 4% lidocaine into the ACC, dHPC, or vHPC prior to retrieval to temporarily inactivate these structures. The results demonstrate that activity of the ACC and vHPC is required for the expression of a generalized fear memory, as inactivation of these regions returned the memory to a contextually precise form. Current theories of time-dependent generalization of contextual memories do not predict involvement of the vHPC. Our data suggest a novel role of this region in generalized memory, which should be incorporated into current theories of time-dependent memory generalization. We also show that the dorsal hippocampus plays a prolonged role in contextually precise memories. Our findings suggest a possible interaction between the ACC and vHPC controls the expression of fear generalization.", "title": "" }, { "docid": "6716302b3168098a52f56b6aa7b82e94", "text": "Consumers are increasingly relying on web-based social content, such as product reviews, prior to making to a purchase. Recent surveys in the Retail Industry confirm that social content is indeed the #1 aid in a buying decision. Currently, accessing or adding to this valuable web-based social content repository is mostly limited to computers far removed from the site of the shopping experience itself. We present a mobile Augmented Reality application, which extends such social content from the computer monitor into the physical world through mobile phones, providing consumers with in situ information on products right when and where they need to make buying decisions.", "title": "" }, { "docid": "e8b0536f5d749b5f6f5651fe69debbe1", "text": "Current centralized cloud datacenters provide scalable computation- and storage resources in a virtualized infrastructure and employ a use-based \"pay-as-you-go\" model. 
But current mobile devices and their resource-hungry applications (e.g., Speech-or face recognition) demand for these resources on the spot, though a mobile device's intrinsic characteristic is its limited availability of resources (e.g., CPU, storage, bandwidth, energy). Thus, mobile cloud computing (MCC) was introduced to overcome these limitations by transparently making accessible the apparently infinite cloud resources to the mobile devices and by allowing mobile applications to (elastically) expand into the cloud. However, MCC often relies on a stable and fast connection to the mobile devices' surrogate in the cloud, which is a rare case in mobile scenarios. Moreover, the increased latency and the limited bandwidth prevent the use of real-time applications like, e.g. Cloud gaming. Instead, mobile edge computing (MEC) or fog computing tries to provide the necessary resources at the logical edge of the network by including infrastructure components to create ad-hoc mobile clouds. However, this approach requires the replication and management of the applications' business logic in an untrusted, unreliable and constantly changing environment. Consequently, this paper presents a novel approach to allow mobile app developers to easily benefit from the features of MEC. In particular, we present a programming model and framework that directly fit the common app developers' mindset to design elastic and scalable edge-based mobile applications.", "title": "" }, { "docid": "db42b2c5b9894943c3ba05fad07ee2f9", "text": "This paper deals principally with the grid connection problem of a kite-based system, named the “Kite Generator System (KGS).” It presents a control scheme of a closed-orbit KGS, which is a wind power system with a relaxation cycle. Such a system consists of a kite with its orientation mechanism and a power transformation system that connects the previous part to the electric grid. Starting from a given closed orbit, the optimal tether's length rate variation (the kite's tether radial velocity) and the optimal orbit's period are found. The trajectory-tracking problem is not considered in this paper; only the kite's tether radial velocity is controlled via the electric machine rotation velocity. The power transformation system transforms the mechanical energy generated by the kite into electrical energy that can be transferred to the grid. A Matlab/simulink model of the KGS is employed to observe its behavior, and to insure the control of its mechanical and electrical variables. In order to improve the KGS's efficiency in case of slow changes of wind speed, a maximum power point tracking (MPPT) algorithm is proposed.", "title": "" }, { "docid": "cdc1e3b629659bf342def1f262d7aa0b", "text": "In educational contexts, understanding the student’s learning must take account of the student’s construction of reality. Reality as experienced by the student has an important additional value. This assumption also applies to a student’s perception of evaluation and assessment. Students’ study behaviour is not only determined by the examination or assessment modes that are used. Students’ perceptions about evaluation methods also play a significant role. This review aims to examine evaluation and assessment from the student’s point of view. Research findings reveal that students’ perceptions about assessment significantly influence their approaches to learning and studying. Conversely, students’ approaches to study influence the ways in which they perceive evaluation and assessment. 
Findings suggest that students hold strong views about different assessment and evaluation formats. In this respect students favour multiple-choice format exams to essay type questions. However, when compared with more innovative assessment methods, students call the ‘fairness’ of these well-known evaluation modes into question.", "title": "" }, { "docid": "074567500751d814eef4ba979dc3cc8d", "text": "Reinforcement learning is a learning paradigm concerned with learning to control a system so as to maximize a numerical performance measure that expresses a long-term objective. What distinguishes reinforcement learning from supervised learning is that only partial feedback is given to the learner about the learner’s predictions. Further, the predictions may have long term effects through influencing the future state of the controlled system. Thus, time plays a special role. The goal in reinforcement learning is to develop efficient learning algorithms, as well as to understand the algorithms’ merits and limitations. Reinforcement learning is of great interest because of the large number of practical applications that it can be used to address, ranging from problems in artificial intelligence to operations research or control engineering. In this book, we focus on those algorithms of reinforcement learning that build on the powerful theory of dynamic programming. We give a fairly comprehensive catalog of learning problems,", "title": "" }, { "docid": "7d7d8d521cc098a7672cbe2e387dde58", "text": "AIM\nThe purpose of this review is to represent acids that can be used as surface etchant before adhesive luting of ceramic restorations, placement of orthodontic brackets or repair of chipped porcelain restorations. Chemical reactions, application protocol, and etching effect are presented as well.\n\n\nSTUDY SELECTION\nAvailable scientific articles published in PubMed and Scopus literature databases, scientific reports and manufacturers' instructions and product information from internet websites, written in English, using following search terms: \"acid etching, ceramic surface treatment, hydrofluoric acid, acidulated phosphate fluoride, ammonium hydrogen bifluoride\", have been reviewed.\n\n\nRESULTS\nThere are several acids with fluoride ion in their composition that can be used as ceramic surface etchants. The etching effect depends on the acid type and its concentration, etching time, as well as ceramic type. The most effective etching pattern is achieved when using hydrofluoric acid; the numerous micropores and channels of different sizes, honeycomb-like appearance, extruded crystals or scattered irregular ceramic particles, depending on the ceramic type, have been detected on the etched surfaces.\n\n\nCONCLUSION\nAcid etching of the bonding surface of glass - ceramic restorations is considered as the most effective treatment method that provides a reliable bond with composite cement. Selective removing of the glassy matrix of silicate ceramics results in a micromorphological three-dimensional porous surface that allows micromechanical interlocking of the luting composite.", "title": "" }, { "docid": "c36dac0c410570e84bf8634b32a0cac3", "text": "The design of strategies for branching in Mixed Integer Programming (MIP) is guided by cycles of parameter tuning and offline experimentation on an extremely heterogeneous testbed, using the average performance. Once devised, these strategies (and their parameter settings) are essentially input-agnostic. 
To address these issues, we propose a machine learning (ML) framework for variable branching in MIP. Our method observes the decisions made by Strong Branching (SB), a time-consuming strategy that produces small search trees, collecting features that characterize the candidate branching variables at each node of the tree. Based on the collected data, we learn an easy-to-evaluate surrogate function that mimics the SB strategy, by means of solving a learning-to-rank problem, common in ML. The learned ranking function is then used for branching. The learning is instance-specific, and is performed on-the-fly while executing a branch-and-bound search to solve the instance. Experiments on benchmark instances indicate that our method produces significantly smaller search trees than existing heuristics, and is competitive with a state-of-the-art commercial solver.", "title": "" } ]
scidocsrr
2a927ff647178e776b0914fd7738d341
Collaborative Departure Queue Management An Example of Airport Collaborative Decision Making in the United States
[ { "docid": "4e2bfd87acf1287f36694634a6111b3f", "text": "This paper presents a model for managing departure aircraft at the spot or gate on the airport surface. The model is applied over two time frames: long term (one hour in future) for collaborative decision making, and short term (immediate) for decisions regarding the release of aircraft. The purpose of the model is to provide the controller a schedule of spot or gate release times optimized for runway utilization. This model was tested in nominal and heavy surface traffic scenarios in a simulated environment, and results indicate average throughput improvement of 10% in high traffic scenarios even with up to two minutes of uncertainty in spot arrival times.", "title": "" } ]
[ { "docid": "614174e5e1dffe9824d7ef8fae6fb499", "text": "This paper starts with presenting a fundamental principle based on which the celebrated orthogonal frequency division multiplexing (OFDM) waveform is constructed. It then extends the same principle to construct the newly introduced generalized frequency division multiplexing (GFDM) signals. This novel derivation sheds light on some interesting properties of GFDM. In particular, our derivation seamlessly leads to an implementation of GFDM transmitter which has significantly lower complexity than what has been reported so far. Our derivation also facilitates a trivial understanding of how GFDM (similar to OFDM) can be applied in MIMO channels.", "title": "" }, { "docid": "31e3fddcaeb7e4984ba140cb30ff49bf", "text": "We show that a maximum-weight triangle in an undirected graph with n vertices and real weights assigned to vertices can be found in time O(n^ω + n^(2+o(1))), where ω is the exponent of the fastest matrix multiplication algorithm. By the currently best bound on ω, the running time of our algorithm is O(n^2.376). Our algorithm substantially improves the previous time-bounds for this problem, and its asymptotic time complexity matches that of the fastest known algorithm for finding any triangle (not necessarily a maximum-weight one) in a graph. We can extend our algorithm to improve the upper bounds on finding a maximum-weight triangle in a sparse graph and on finding a maximum-weight subgraph isomorphic to a fixed graph. We can find a maximum-weight triangle in a vertex-weighted graph with m edges in asymptotic time required by the fastest algorithm for finding any triangle in a graph with m edges, i.e., in time O(m^1.41). Our algorithms for a maximum-weight fixed subgraph (in particular any clique of constant size) are asymptotically as fast as the fastest known algorithms for a fixed subgraph.", "title": "" }, { "docid": "e9e7cb42ed686ace9e9785fafd3c72f8", "text": "We present a fully automated multimodal medical image matching technique. Our method extends the concepts used in the computer vision SIFT technique for extracting and matching distinctive scale invariant features in 2D scalar images to scalar images of arbitrary dimensionality. This extension involves using hyperspherical coordinates for gradients and multidimensional histograms to create the feature vectors. These features were successfully applied to determine accurate feature point correspondence between pairs of medical images (3D) and dynamic volumetric data (3D+time).", "title": "" }, { "docid": "4a164ec21fb69e7db5c90467c6f6af17", "text": "Recent technologies have made it cost-effective to collect diverse types of genome-wide data. Computational methods are needed to combine these data to create a comprehensive view of a given disease or a biological process. Similarity network fusion (SNF) solves this problem by constructing networks of samples (e.g., patients) for each available data type and then efficiently fusing these into one network that represents the full spectrum of underlying data. For example, to create a comprehensive view of a disease given a cohort of patients, SNF computes and fuses patient similarity networks obtained from each of their data types separately, taking advantage of the complementarity in the data. We used SNF to combine mRNA expression, DNA methylation and microRNA (miRNA) expression data for five cancer data sets. 
SNF substantially outperforms single data type analysis and established integrative approaches when identifying cancer subtypes and is effective for predicting survival.", "title": "" }, { "docid": "ce9b3c56208fbfb555be55acbf9f142e", "text": "Opinion mining and sentiment analysis is rapidly growing area. There are numerous e-commerce sites available on internet which provides options to users to give feedback about specific product. These feedbacks are very much helpful to both the individuals, who are willing to buy that product and the organizations. An accurate method for predicting sentiments could enable us, to extract opinions from the internet and predict customer’s preferences. There are various algorithms available for opinion mining. Before applying any algorithm for polarity detection, pre-processing on feedback is carried out. From these pre-processed reviews opinion words and object on which opinion is generated are extracted and any opinion mining technique is applied to find the polarity of the review. Opinion mining has three levels of granularities: Document level, Sentence level and Aspect level. In this paper various algorithms for sentiment analysis are studied and challenges and applications appear in this field are discussed.", "title": "" }, { "docid": "dbb7520f2f88005b70e0793c74b7b296", "text": "Spoken language understanding and dialog management have emerged as key technologies in interacting with personal digital assistants (PDAs). The coverage, complexity, and the scale of PDAs are much larger than previous conversational understanding systems. As such, new problems arise. In this paper, we provide an overview of the language understanding and dialog management capabilities of PDAs, focusing particularly on Cortana, Microsoft's PDA. We explain the system architecture for language understanding and dialog management for our PDA, indicate how it differs with prior state-of-the-art systems, and describe key components. We also report a set of experiments detailing system performance on a variety of scenarios and tasks. We describe how the quality of user experiences are measured end-to-end and also discuss open issues.", "title": "" }, { "docid": "72cc9333577fb255c97f137c5d19fd54", "text": "The purpose of this study was to provide insight on attitudes towards Facebook advertising. In order to figure out the attitudes towards Facebook advertising, a snowball survey was executed among Facebook users by spreading a link to the survey. This study was quantitative study but the results of the study were interpreted in qualitative way. This research was executed with the help of factor analysis and cluster analysis, after which Chisquare test was used. This research expected that the result of the survey would lead in to two different groups with negative and positive attitudes. Factor analysis was used to find relations between variables that the survey data generated. The factor analysis resulted in 12 factors that were put in a cluster analysis to find different kinds of groups. Surprisingly the cluster analysis enabled the finding of three groups with different interests and different attitudes towards Facebook advertising. These clusters were analyzed and compared. One group was clearly negative, tending to block and avoid advertisements. Second group was with more neutral attitude towards advertising, and more carefree internet using. They did not have blocking software in use and they like to participate in activities more often. 
The third group had positive attitude towards advertising. The result of this study can be used to help companies better plan their Facebook advertising according to groups. It also reminds about the complexity of people and their attitudes; not everything suits everybody.", "title": "" }, { "docid": "bf5f1cdcc71a76f33ad516aa165ffc41", "text": "The content protection of digital medical images is getting more importance, especially with the advance of computerized systems and communication networks which allows providing high quality images, sending and receiving such data in a realtime manner. Medical concernslead healthcare organizations to encrypt every patient data, such as images before transferring the data over computer networks. Therefore, designing and developing qualified encryption algorithms is quite important for contemporary medicine.Medical image encryption algorithmstry to convert a digital image to another image data format which would bedefinitely hard to recognize. In this paper, we technically review image encryption methods for medical images. We do hope the present work can highlight the most recent contributions in the research area, and provide wellorganized information to the medical image community.", "title": "" }, { "docid": "45be2fbf427a3ea954a61cfd5150db90", "text": "Linguistic style conveys the social context in which communication occurs and defines particular ways of using language to engage with the audiences to which the text is accessible. In this work, we are interested in the task of stylistic transfer in natural language generation (NLG) systems, which could have applications in the dissemination of knowledge across styles, automatic summarization and author obfuscation. The main challenges in this task involve the lack of parallel training data and the difficulty in using stylistic features to control generation. To address these challenges, we plan to investigate neural network approaches to NLG to automatically learn and incorporate stylistic features in the process of language generation. We identify several evaluation criteria, and propose manual and automatic evaluation approaches.", "title": "" }, { "docid": "6c8a864355c06fa42bad9f81100f627b", "text": "There is rich knowledge encoded in online web data. For example, punctuation and entity tags in Wikipedia data define some word boundaries in a sentence. In this paper we adopt partial-label learning with conditional random fields to make use of this valuable knowledge for semi-supervised Chinese word segmentation. The basic idea of partial-label learning is to optimize a cost function that marginalizes the probability mass in the constrained space that encodes this knowledge. By integrating some domain adaptation techniques, such as EasyAdapt, our result reaches an F-measure of 95.98% on the CTB-6 corpus, a significant improvement from both the supervised baseline and a previous proposed approach, namely constrained decode.", "title": "" }, { "docid": "24c62c2660ece8c0c724f745cb050964", "text": "Face detection is a classical problem in computer vision. It is still a difficult task due to many nuisances that naturally occur in the wild. In this paper, we propose a multi-scale fully convolutional network for face detection. To reduce computation, the intermediate convolutional feature maps (conv) are shared by every scale model. We up-sample and down-sample the final conv map to approximate K levels of a feature pyramid, leading to a wide range of face scales that can be detected. 
At each feature pyramid level, a FCN is trained end-to-end to deal with faces in a small range of scale change. Because of the up-sampling, our method can detect very small faces (10×10 pixels). We test our MS-FCN detector on four public face detection datasets, including FDDB, WIDER FACE, AFW and PASCAL FACE. Extensive experiments show that it outperforms state-of-the-art methods. Also, MS-FCN runs at 23 FPS on a GPU for images of size 640×480 with no assumption on the minimum detectable face size.", "title": "" }, { "docid": "d56ff4b194c123b19a335e00b38ea761", "text": "As the automobile industry evolves, a number of in-vehicle communication protocols are developed for different in-vehicle applications. With the emerging new applications towards Internet of Things (IoT), a more integral solution is needed to enable the pervasiveness of intra- and inter-vehicle communications. In this survey, we first introduce different classifications of automobile applications with focus on their bandwidth and latency. Then we survey different in-vehicle communication bus protocols including both legacy protocols and emerging Ethernet. In addition, we highlight our contribution in the field to employ power line as the in-vehicle communication medium. We believe power line communication will play an important part in future automobile which can potentially reduce the amount of wiring, simplify design and reduce cost. Based on these technologies, we also introduce some promising applications in future automobile enabled by the development of in-vehicle network. Finally, We will share our view on how the in-vehicle network can be merged into the future IoT.", "title": "" }, { "docid": "ab0b8cea87678dd7b5ea5057fbdb0ac1", "text": "Data collection is a crucial operation in wireless sensor networks. The design of data collection schemes is challenging due to the limited energy supply and the hot spot problem. Leveraging empirical observations that sensory data possess strong spatiotemporal compressibility, this paper proposes a novel compressive data collection scheme for wireless sensor networks. We adopt a power-law decaying data model verified by real data sets and then propose a random projection-based estimation algorithm for this data model. Our scheme requires fewer compressed measurements, thus greatly reduces the energy consumption. It allows simple routing strategy without much computation and control overheads, which leads to strong robustness in practical applications. Analytically, we prove that it achieves the optimal estimation error bound. Evaluations on real data sets (from the GreenOrbs, IntelLab and NBDC-CTD projects) show that compared with existing approaches, this new scheme prolongs the network lifetime by 1.5X to 2X for estimation error 5-20 percent.", "title": "" }, { "docid": "80ce6c8c9fc4bf0382c5f01d1dace337", "text": "Customer loyalty is viewed as the strength of the relationship between an individual's relative attitude and repeat patronage. The relationship is seen as mediated by social norms and situational factors. Cognitive, affective, and conative antecedents of relative attitude are identified as contributing to loyalty, along with motivational, perceptual, and behavioral consequences. Implications for research and for the management of loyalty are derived.", "title": "" }, { "docid": "b475ddb8c3ff32dfea5f51d054680bc3", "text": "An increasing price and demand for natural gas has made it possible to explore remote gas fields. 
Traditional offshore production platforms for natural gas have been exporting the partially processed natural gas to shore, where it is further processed to permit consumption by end-users. Such an approach is possible where the gas field is located within a reasonable distance from shore or from an existing gas pipeline network. However, much of the world’s gas reserves are found in remote offshore fields where transport via a pipeline is not feasible or is uneconomic to install and therefore, to date, has not been possible to explore. The development of floating production platforms and, on the receiving end, regasification platforms, have increased the possibilities to explore these fields and transport the liquefied gas in a more efficient form, i.e. liquefied natural gas (LNG), to the end user who in turn can readily import the gas. Floating production platforms and regasification platforms, collectively referred to as FLNG, imply a blend of technology from land-based LNG industry, offshore oil and gas industry and marine transport technology. Regulations and rules based on experience from these applications could become too conservative or not conservative enough when applied to a FLNG unit. Alignment with rules for conventional LNG carriers would be an advantage since this would increase the transparency and possibility for standardization in the building of floating LNG production vessels. The objective of this study is to identify the risks relevant to FLNG. The risks are compared to conventional LNG carriers and whether or not regulatory alignment possibilities exist. To identify the risks, a risk analysis was performed based on the principles of formal safety assessment methodology. To propose regulatory alignment possibilities, the risks found were also evaluated against the existing rules and regulations of Det Norske Veritas. The conclusion of the study is that the largest risk-contributing factor on an FLNG is the presence of processing, liquefaction or regasification equipment and for an LNG carrier it is collision, grounding and contact accidents. Experience from oil FPSOs could be used in the design of LNG FPSOs, and attention needs to be drawn to the additional requirements due to processing and storage of cryogenic liquid on board. FSRUs may follow either an approach for offshore rules or, if intended to follow a regular docking scheme, follow an approach for ship rules with additional issues addressed in classification notes.", "title": "" }, { "docid": "be3bf1e95312cc0ce115e3aaac2ecc96", "text": "This paper contributes a first study into how different human users deliver simultaneous control and feedback signals during human-robot interaction. As part of this work, we formalize and present a general interactive learning framework for online cooperation between humans and reinforcement learning agents. In many humanmachine interaction settings, there is a growing gap between the degrees-of-freedom of complex semi-autonomous systems and the number of human control channels. Simple human control and feedback mechanisms are required to close this gap and allow for better collaboration between humans and machines on complex tasks. To better inform the design of concurrent control and feedback interfaces, we present experimental results from a human-robot collaborative domain wherein the human must simultaneously deliver both control and feedback signals to interactively train an actor-critic reinforcement learning robot. 
We compare three experimental conditions: 1) human delivered control signals, 2) reward-shaping feedback signals, and 3) simultaneous control and feedback. Our results suggest that subjects provide less feedback when simultaneously delivering feedback and control signals and that control signal quality is not significantly diminished. Our data suggest that subjects may also modify when and how they provide feedback. Through algorithmic development and tuning informed by this study, we expect semi-autonomous actions of robotic agents can be better shaped by human feedback, allowing for seamless collaboration and improved performance in difficult interactive domains. Figure 1. Experimental configuration. One of the study participants with the Myo band on their right arm providing a control signal, while simultaneously providing feedback signals with their left hand. The Aldebaran Nao robot simulation is visible on the screen alongside experimental logging.", "title": "" }, { "docid": "79b8588f7c9b6dc87d90ddbd2e75a7d5", "text": "BACKGROUND\nDespite the progress in reducing malaria infections and related deaths, the disease remains a major global public health problem. The problem is among the top five leading causes of outpatient visits in Dembia district of the northwest Ethiopia. Therefore, this study aimed to assess the determinants of malaria infections in the district.\n\n\nMETHODS\nAn institution-based case-control study was conducted in Dembia district from October to November 2016. Out of the ten health centers in the district, four were randomly selected for the study in which 370 participants (185 cases and 185 controls) were enrolled. Data were collected using a pretested structured questionnaire. Factors associated with malaria infections were determined using logistic regression analysis. Odds ratio with 95% CI was used as a measure of association, and variables with a p-value of ≤0.05 were considered as statistically significant.\n\n\nRESULTS\nThe median age of all participants was 26 years, while that of cases and controls was 22 and 30 with a range of 1 to 80 and 2 to 71, respectively. In the multivariable logistic regression, over 15 years of age adjusted odds ratio(AOR) and confidence interval (CI) of (AOR = 18; 95% CI: 2.1, 161.5), being male (AOR = 2.2; 95% CI: 1.2, 3.9), outdoor activities at night (AOR = 5.7; 95% CI: 2.5, 12.7), bed net sharing (AOR = 3.9; 95% CI: 2.0, 7.7), and proximity to stagnant water sources (AOR = 2.7; 95% CI: 1.3, 5.4) were independent predictors.\n\n\nCONCLUSION\nBeing in over 15 years of age group, male gender, night time activity, bed net sharing and proximity to stagnant water sources were determinant factors of malaria infection in Dembia district. 
Additional interventions and strategies which focus on men, outdoor work at night, household net utilization, and nearby stagnant water sources are essential to reduce malaria infections in the area.", "title": "" }, { "docid": "ea308cdcedd9261fb9871cf84899b63f", "text": "Purpose To identify and discuss the issues and success factors surrounding biometrics, especially in the context of user authentication and controls in the banking sector, using a case study. Design/methodology/approach The literature survey and analysis of the security models of the present information systems and biometric technologies in the banking sector provide the theoretical and practical background for this work. The impact of adopting biometric solutions in banks was analysed by considering the various issues and challenges from technological, managerial, social and ethical angles. These explorations led to identifying the success factors that serve as possible guidelines for a viable implementation of a biometric enabled authentication system in banking organisations, in particular for a major bank in New Zealand. Findings As the level of security breaches and transaction frauds increase day by day, the need for highly secure identification and personal verification information systems is becoming extremely important especially in the banking and finance sector. Biometric technology appeals to many banking organisations as a near perfect solution to such security threats. Though biometric technology has gained traction in areas like healthcare and criminology, its application in banking security is still in its infancy. Due to the close association of biometrics to human, physical and behavioural aspects, such technologies pose a multitude of social, ethical and managerial challenges. The key success factors proposed through the case study served as a guideline for a biometric enabled security project called Bio Sec, which is envisaged in a large banking organisation in New Zealand. This pilot study reveals that more than coping with the technology issues of gelling biometrics into the existing information systems, formulating a viable security plan that addresses user privacy fears, human tolerance levels, organisational change and legal issues is of prime importance. Originality/value Though biometric systems are successfully adopted in areas such as immigration control and criminology, there is a paucity of their implementation and research pertaining to banking environments. Not all banks venture into biometric solutions to enhance their security systems due to their socio technological issues. This paper fulfils the need for a guideline to identify the various issues and success factors for a viable biometric implementation in a bank’s access control system. This work is only a starting point for academics to conduct more research in the application of biometrics in the various facets of banking businesses.", "title": "" }, { "docid": "3cd9aeb83ba379763c42f0c20a53851c", "text": "One of the main problems in many big and crowded cities is finding parking spaces for vehicles. With IoT technology and mobile applications, in this paper, we propose a design and development of a real smart parking system that can provide more than just information about vacant spaces but also help user to locate the space where the vehicle can be parked in order to reduce traffics in the parking area. 
Moreover, we use computer vision to detect vehicle plate number in order to monitor the vehicles in the parking area for enhancing security and also to help user find his/her car when he/she forgets where the car is parked. In our system, we also design the payment process using mobile payment in order to reduce time and remove bottleneck of the payment process at the entry/exit gate of the parking area.", "title": "" }, { "docid": "b02ebfa85f0948295b401152c0190d74", "text": "SAGE has had a remarkable impact at Microsoft.", "title": "" } ]
scidocsrr
adc94cd673f25c2caf8376617399ffe4
HyperQA: Hyperbolic Embeddings for Fast and Efficient Ranking of Question Answer Pairs
[ { "docid": "a52d0679863b148b4fd6e112cd8b5596", "text": "Representation learning has become an invaluable approach for learning from symbolic data such as text and graphs. However, while complex symbolic datasets often exhibit a latent hierarchical structure, state-of-the-art methods typically learn embeddings in Euclidean vector spaces, which do not account for this property. For this purpose, we introduce a new approach for learning hierarchical representations of symbolic data by embedding them into hyperbolic space – or more precisely into an n-dimensional Poincaré ball. Due to the underlying hyperbolic geometry, this allows us to learn parsimonious representations of symbolic data by simultaneously capturing hierarchy and similarity. We introduce an efficient algorithm to learn the embeddings based on Riemannian optimization and show experimentally that Poincaré embeddings outperform Euclidean embeddings significantly on data with latent hierarchies, both in terms of representation capacity and in terms of generalization ability.", "title": "" }, { "docid": "1a6ece40fa87e787f218902eba9b89f7", "text": "Learning a similarity function between pairs of objects is at the core of learning to rank approaches. In information retrieval tasks we typically deal with query-document pairs, in question answering -- question-answer pairs. However, before learning can take place, such pairs needs to be mapped from the original space of symbolic words into some feature space encoding various aspects of their relatedness, e.g. lexical, syntactic and semantic. Feature engineering is often a laborious task and may require external knowledge sources that are not always available or difficult to obtain. Recently, deep learning approaches have gained a lot of attention from the research community and industry for their ability to automatically learn optimal feature representation for a given task, while claiming state-of-the-art performance in many tasks in computer vision, speech recognition and natural language processing. In this paper, we present a convolutional neural network architecture for reranking pairs of short texts, where we learn the optimal representation of text pairs and a similarity function to relate them in a supervised way from the available training data. Our network takes only words in the input, thus requiring minimal preprocessing. In particular, we consider the task of reranking short text pairs where elements of the pair are sentences. We test our deep learning system on two popular retrieval tasks from TREC: Question Answering and Microblog Retrieval. Our model demonstrates strong performance on the first task beating previous state-of-the-art systems by about 3\\% absolute points in both MAP and MRR and shows comparable results on tweet reranking, while enjoying the benefits of no manual feature engineering and no additional syntactic parsers.", "title": "" }, { "docid": "b4ab51818d868b2f9796540c71a7bd17", "text": "We propose a simple neural architecture for natural language inference. Our approach uses attention to decompose the problem into subproblems that can be solved separately, thus making it trivially parallelizable. On the Stanford Natural Language Inference (SNLI) dataset, we obtain state-of-the-art results with almost an order of magnitude fewer parameters than previous work and without relying on any word-order information. 
Adding intra-sentence attention that takes a minimum amount of order into account yields further improvements.", "title": "" }, { "docid": "87284302ea96b36c769a4d2a05295a32", "text": "Retrieving similar questions is very important in community-based question answering. A major challenge is the lexical gap in sentence matching. In this paper, we propose a convolutional neural tensor network architecture to encode the sentences in semantic space and model their interactions with a tensor layer. Our model integrates sentence modeling and semantic matching into a single model, which can not only capture the useful information with convolutional and pooling layers, but also learn the matching metrics between the question and its answer. Besides, our model is a general architecture, with no need for the other knowledge such as lexical or syntactic analysis. The experimental results shows that our method outperforms the other methods on two matching tasks.", "title": "" }, { "docid": "340aa5616ef01e8d8a965f2efb510fe9", "text": "The paper presents a new unsupervised dimensionality reduction technique, called parametric t-SNE, that learns a parametric mapping between the high-dimensional data space and the low-dimensional latent space. Parametric t-SNE learns the parametric mapping in such a way that the local structure of the data is preserved as well as possible in the latent space. We evaluate the performance of parametric t-SNE in experiments on three datasets, in which we compare it to the performance of two other unsupervised parametric dimensionality reduction techniques. The results of experiments illustrate the strong performance of parametric t-SNE, in particular, in learning settings in which the dimensionality of the latent space is relatively low.", "title": "" } ]
[ { "docid": "00108ade18d287efa5a06ffe8a3fda59", "text": "Multi- and many-core processors are becoming increasingly popular in embedded systems. Many of these processors now feature hardware virtualization capabilities, as found on the ARM Cortex A15 and x86 architectures with Intel VT-x or AMD-V support. Hardware virtualization provides a way to partition physical resources, including processor cores, memory, and I/O devices, among guest virtual machines (VMs). Each VM is then able to host tasks of a specific criticality level, as part of a mixed-criticality system with different timing and safety requirements. However, traditional virtual machine systems are inappropriate for mixed-criticality computing. They use hypervisors to schedule separate VMs on physical processor cores. The costs of trapping into hypervisors to multiplex and manage machine physical resources on behalf of separate guests are too expensive for many time-critical tasks. Additionally, traditional hypervisors have memory footprints that are often too large for many embedded computing systems. In this article, we discuss the design of the Quest-V separation kernel, which partitions services of different criticality levels across separate VMs, or sandboxes. Each sandbox encapsulates a subset of machine physical resources that it manages without requiring intervention from a hypervisor. In Quest-V, a hypervisor is only needed to bootstrap the system, recover from certain faults, and establish communication channels between sandboxes. This not only reduces the memory footprint of the most privileged protection domain but also removes it from the control path during normal system operation, thereby heightening security.", "title": "" }, { "docid": "b062222917050f13c3a17e8de53a6abe", "text": "Exposed to traditional language learning strategies, students will gradually lose interest in and motivation to not only learn English, but also any language or culture. Hence, researchers are seeking technology-based learning strategies, such as digital game-mediated language learning, to motivate students and improve learning performance. This paper synthesizes the findings of empirical studies focused on the effectiveness of digital games in language education published within the last five years. Nine qualitative, quantitative, and mixed-method studies are collected and analyzed in this paper. The review found that recent empirical research was conducted primarily to examine the effectiveness by measuring language learning outcomes, motivation, and interactions. Weak proficiency was found in vocabulary retention, but strong proficiency was present in communicative skills such as speaking. Furthermore, in general, students reported that they are motivated to engage in language learning when digital games are involved; however, the motivation is also observed to be weak due to the design of the game and/or individual differences. The most effective method used to stimulate interaction language learning process seems to be digital games, as empirical studies demonstrate that it effectively promotes language education. However, significant work is still required to provide clear answers with respect to innovative and effective learning practice.", "title": "" }, { "docid": "dd911eff60469b32330c5627c288f19f", "text": "Routing Algorithms are driving the growth of the data transmission in wireless sensor networks. Contextually, many algorithms considered the data gathering and data aggregation. 
This paper uses the scenario of clustering and its impact over the SPIN protocol and also finds out the effect over the energy consumption in SPIN after uses of clustering. The proposed scheme is implemented using TCL/C++ programming language and evaluated using Ns2.34 simulator and compare with LEACH. Simulation shows proposed protocol exhibits significant performance gains over the LEACH for lifetime of network and guaranteed data transmission.", "title": "" }, { "docid": "1a5b63ae29de488a64518abcde04fb2f", "text": "A thorough review of available literature was conducted to inform of advancements in mobile LIDAR technology, techniques, and current and emerging applications in transportation. The literature review touches briefly on the basics of LIDAR technology followed by a more in depth description of current mobile LIDAR trends, including system components and software. An overview of existing quality control procedures used to verify the accuracy of the collected data is presented. A collection of case studies provides a clear description of the advantages of mobile LIDAR, including an increase in safety and efficiency. The final sections of the review identify current challenges the industry is facing, the guidelines that currently exist, and what else is needed to streamline the adoption of mobile LIDAR by transportation agencies. Unfortunately, many of these guidelines do not cover the specific challenges and concerns of mobile LIDAR use as many have been developed for airborne LIDAR acquisition and processing. From this review, there is a lot of discussion on “what” is being done in practice, but not a lot on “how” and “how well” it is being done. A willingness to share information going forward will be important for the successful use of mobile LIDAR.", "title": "" }, { "docid": "574c07709b65749bc49dd35d1393be80", "text": "Optical coherence tomography (OCT) is used for non-invasive diagnosis of diabetic macular edema assessing the retinal layers. In this paper, we propose a new fully convolutional deep architecture, termed ReLayNet, for end-to-end segmentation of retinal layers and fluid masses in eye OCT scans. ReLayNet uses a contracting path of convolutional blocks (encoders) to learn a hierarchy of contextual features, followed by an expansive path of convolutional blocks (decoders) for semantic segmentation. ReLayNet is trained to optimize a joint loss function comprising of weighted logistic regression and Dice overlap loss. The framework is validated on a publicly available benchmark dataset with comparisons against five state-of-the-art segmentation methods including two deep learning based approaches to substantiate its effectiveness.", "title": "" }, { "docid": "eeafcab155da5229bf26ddc350e37951", "text": "Interferons (IFNs) are the hallmark of the vertebrate antiviral system. Two of the three IFN families identified in higher vertebrates are now known to be important for antiviral defence in teleost fish. Based on the cysteine patterns, the fish type I IFN family can be divided into two subfamilies, which possibly interact with distinct receptors for signalling. The fish type II IFN family consists of two members, IFN-γ with similar functions to mammalian IFN-γ and a teleost specific IFN-γ related (IFN-γrel) molecule whose functions are not fully elucidated. These two type II IFNs also appear to bind to distinct receptors to exert their functions. 
It has become clear that fish IFN responses are mediated by the host pattern recognition receptors and an array of transcription factors including the IFN regulatory factors, the Jak/Stat proteins and the suppressor of cytokine signalling (SOCS) molecules.", "title": "" }, { "docid": "48623054af5217d48b05aed57a67ae66", "text": "This paper proposes an ontology-based approach to analyzing and assessing the security posture for software products. It provides measurements of trust for a software product based on its security requirements and evidence of assurance, which are retrieved from an ontology built for vulnerability management. Our approach differentiates with the previous work in the following aspects: (1) It is a holistic approach emphasizing that the system assurance cannot be determined or explained by its component assurance alone. Instead, the software system as a whole determines its assurance level. (2) Our approach is based on widely accepted standards such as CVSS, CVE, CWE, CPE, and CAPEC. Our ontology integrated these standards seamlessly thus provides a solid foundation for security assessment. (3) Automated tools have been built to support our approach, delivering the environmental scores for software products.", "title": "" }, { "docid": "3bbbdf4d6572e548106fc1d24b50cbc6", "text": "Predicting the affective valence of unknown multiword expressions is key for concept-level sentiment analysis. AffectiveSpace 2 is a vector space model, built by means of random projection, that allows for reasoning by analogy on natural language concepts. By reducing the dimensionality of affective common-sense knowledge, the model allows semantic features associated with concepts to be generalized and, hence, allows concepts to be intuitively clustered according to their semantic and affective relatedness. Such an affective intuition (so called because it does not rely on explicit features, but rather on implicit analogies) enables the inference of emotions and polarity conveyed by multi-word expressions, thus achieving efficient concept-level sentiment analysis.", "title": "" }, { "docid": "e96fddd8058e3dc98eb9f73aa387c9f9", "text": "There is often the need to perform sentiment classification in a particular domain where no labeled document is available. Although we could make use of a general-purpose off-the-shelf sentiment classifier or a pre-built one for a different domain, the effectiveness would be inferior. In this paper, we explore the possibility of building domain-specific sentiment classifiers with unlabeled documents only. Our investigation indicates that in the word embeddings learned from the unlabeled corpus of a given domain, the distributed word representations (vectors) for opposite sentiments form distinct clusters, though those clusters are not transferable across domains. Exploiting such a clustering structure, we are able to utilize machine learning algorithms to induce a quality domain-specific sentiment lexicon from just a few typical sentiment words (“seeds”). An important finding is that simple linear model based supervised learning algorithms (such as linear SVM) can actually work better than more sophisticated semi-supervised/transductive learning algorithms which represent the state-of-the-art technique for sentiment lexicon induction. 
The induced lexicon could be applied directly in a lexicon-based method for sentiment classification, but a higher performance could be achieved through a two-phase bootstrapping method which uses the induced lexicon to assign positive/negative sentiment scores to unlabeled documents first, and then uses those documents found to have clear sentiment signals as pseudo-labeled examples to train a document sentiment classifier via supervised learning algorithms (such as LSTM). On several benchmark datasets for document sentiment classification, our end-to-end pipelined approach which is overall unsupervised (except for a tiny set of seed words) outperforms existing unsupervised approaches and achieves an accuracy comparable to that of fully supervised approaches.", "title": "" }, { "docid": "ad7b715f434f3a500be8d52a047b9be1", "text": "This paper presents a quantitative analysis of data collected by an online testing system for SQL \"select\" queries. The data was collected from almost one thousand students, over eight years. We examine which types of queries our students found harder to write. The seven types of SQL queries studied are: simple queries on one table; grouping, both with and without \"having\"; natural joins; simple and correlated sub-queries; and self-joins. The order of queries in the preceding sentence reflects the order of student difficulty we see in our data.", "title": "" }, { "docid": "ab3e279524995fbd2d362fa726c69065", "text": "In this work, we present an application of domain randomization and generative adversarial networks (GAN) to train a near real-time object detector for industrial electric parts, entirely in a simulated environment. Large scale availability of labelled real world data is typically rare and difficult to obtain in many industrial settings. As such here, only a few hundred of unlabelled real images are used to train a Cyclic-GAN network, in combination with various degree of domain randomization procedures. We demonstrate that this enables robust translation of synthetic images to the real world domain. We show that a combination of the original synthetic (simulation) and GAN translated images, when used for training a Mask-RCNN object detection network achieves greater than 0.95 mean average precision in detecting and classifying a collection of industrial electric parts. We evaluate the performance across different combinations of training data.", "title": "" }, { "docid": "8bf63451cf6b83f3da4d4378de7bfd7f", "text": "This paper presents a high-efficiency and smooth-transition buck-boost (BB) converter to extend the battery life of portable devices. Owing to the usage of four switches, the BB control topology needs to minimize the switching and conduction losses at the same time. Therefore, over a wide input voltage range, the proposed BB converter consumes minimum switching loss like the basic operation of buck or boost converter. Besides, the conduction loss is reduced by means of the reduction of the inductor current level. Especially, the proposed BB converter offers good line/load regulation and thus provides a smooth and stable output voltage when the battery voltage decreases. 
Simulation results show that the output voltage drops is very small during the whole battery life time and the output transition is very smooth during the mode transition by the proposed BB control scheme.", "title": "" }, { "docid": "ea200dc100d77d8c156743bede4a965b", "text": "We present a contextual spoken language understanding (contextual SLU) method using Recurrent Neural Networks (RNNs). Previous work has shown that context information, specifically the previously estimated domain assignment, is helpful for domain identification. We further show that other context information such as the previously estimated intent and slot labels are useful for both intent classification and slot filling tasks in SLU. We propose a step-n-gram model to extract sentence-level features from RNNs, which extract sequential features. The step-n-gram model is used together with a stack of Convolution Networks for training domain/intent classification. Our method therefore exploits possible correlations among domain/intent classification and slot filling and incorporates context information from the past predictions of domain/intent and slots. The proposed method obtains new state-of-the-art results on ATIS and improved performances over baseline techniques such as conditional random fields (CRFs) on a large context-sensitive SLU dataset.", "title": "" }, { "docid": "6059b4bbf5d269d0a5f1f596b48c1acb", "text": "The mathematical concept of document resemblance captures well the informal notion of syntactic similarity. The resemblance can be estimated using a fixed size “sketch” for each document. For a large collection of documents (say hundreds of millions) the size of this sketch is of the order of a few hundred bytes per document. However, for efficient large scale web indexing it is not necessary to determine the actual resemblance value: it suffices to determine whether newly encountered documents are duplicates or near-duplicates of documents already indexed. In other words, it suffices to determine whether the resemblance is above a certain threshold. In this talk we show how this determination can be made using a ”sample” of less than 50 bytes per document. The basic approach for computing resemblance has two aspects: first, resemblance is expressed as a set (of strings) intersection problem, and second, the relative size of intersections is evaluated by a process of random sampling that can be done independently for each document. The process of estimating the relative size of intersection of sets and the threshold test discussed above can be applied to arbitrary sets, and thus might be of independent interest. The algorithm for filtering near-duplicate documents discussed here has been successfully implemented and has been used for the last three years in the context of the AltaVista search engine.", "title": "" }, { "docid": "abdf1edfb2b93b3991d04d5f6d3d63d3", "text": "With the rapid growing of internet and networks applications, data security becomes more important than ever before. Encryption algorithms play a crucial role in information security systems. In this paper, we have a study of the two popular encryption algorithms: DES and Blowfish. We overviewed the base functions and analyzed the security for both algorithms. We also evaluated performance in execution speed based on different memory sizes and compared them. 
The experimental results show the relationship between function run speed and memory size.", "title": "" }, { "docid": "3bda091d69af44f28cb3bd5893a5b8ef", "text": "The method described assumes that a word which cannot be found in a dictionary has at most one error, which might be a wrong, missing or extra letter or a single transposition. The unidentified input word is compared to the dictionary again, testing each time to see if the words match—assuming one of these errors occurred. During a test run on garbled text, correct identifications were made for over 95 percent of these error types.", "title": "" }, { "docid": "c240da3cde126606771de3e6b3432962", "text": "Oscillations in the alpha and beta bands can display either an event-related blocking response or an event-related amplitude enhancement. The former is named event-related desynchronization (ERD) and the latter event-related synchronization (ERS). Examples of ERS are localized alpha enhancements in the awake state as well as sigma spindles in sleep and alpha or beta bursts in the comatose state. It was found that alpha band activity can be enhanced over the visual region during a motor task, or during a visual task over the sensorimotor region. This means ERD and ERS can be observed at nearly the same time; both form a spatiotemporal pattern, in which the localization of ERD characterizes cortical areas involved in task-relevant processing, and ERS marks cortical areas at rest or in an idling state.", "title": "" }, { "docid": "34c41c33ce2cd7642cf29d8bfcab8a3f", "text": "I2Head database has been created with the aim to become an optimal reference for low cost gaze estimation. It exhibits the following outstanding characteristics: it takes into account key aspects of low resolution eye tracking technology; it combines images of users gazing at different grids of points from alternative positions with registers of user’s head position and it provides calibration information of the camera and a simple 3D head model for each user. Hardware used to build the database includes a 6D magnetic sensor and a webcam. A careful calibration method between the sensor and the camera has been developed to guarantee the accuracy of the data. Different sessions have been recorded for each user including not only static head scenarios but also controlled displacements and even free head movements. The database is an outstanding framework to test both gaze estimation algorithms and head pose estimation methods.", "title": "" }, { "docid": "6cf4315ecce8a06d9354ca2f2684113c", "text": "BACKGROUND\nNutritional supplementation may be used to treat muscle loss with aging (sarcopenia). However, if physical activity does not increase, the elderly tend to compensate for the increased energy delivered by the supplements with reduced food intake, which results in a calorie substitution rather than supplementation. Thus, an effective supplement should stimulate muscle anabolism more efficiently than food or common protein supplements. 
We have shown that balanced amino acids stimulate muscle protein anabolism in the elderly, but it is unknown whether all amino acids are necessary to achieve this effect.\n\n\nOBJECTIVE\nWe assessed whether nonessential amino acids are required in a nutritional supplement to stimulate muscle protein anabolism in the elderly.\n\n\nDESIGN\nWe compared the response of muscle protein metabolism to either 18 g essential amino acids (EAA group: n = 6, age 69 +/- 2 y; +/- SD) or 40 g balanced amino acids (18 g essential amino acids + 22 g nonessential amino acids, BAA group; n = 8, age 71 +/- 2 y) given orally in small boluses every 10 min for 3 h to healthy elderly volunteers. Muscle protein metabolism was measured in the basal state and during amino acid administration via L-[ring-(2)H(5)]phenylalanine infusion, femoral arterial and venous catheterization, and muscle biopsies.\n\n\nRESULTS\nPhenylalanine net balance (in nmol x min(-1). 100 mL leg volume(-1)) increased from the basal state (P < 0.01), with no differences between groups (BAA: from -16 +/- 5 to 16 +/- 4; EAA: from -18 +/- 5 to 14 +/- 13) because of an increase (P < 0.01) in muscle protein synthesis and no change in breakdown.\n\n\nCONCLUSION\nEssential amino acids are primarily responsible for the amino acid-induced stimulation of muscle protein anabolism in the elderly.", "title": "" }, { "docid": "d51ddec1ea405d9bde56f3b3b6baefc7", "text": "Background. Inconsistent data exist about the role of probiotics in the treatment of constipated children. The aim of this study was to investigate the effectiveness of probiotics in childhood constipation. Materials and Methods. In this placebo controlled trial, fifty-six children aged 4-12 years with constipation received randomly lactulose plus Protexin or lactulose plus placebo daily for four weeks. Stool frequency and consistency, abdominal pain, fecal incontinence, and weight gain were studied at the beginning, after the first week, and at the end of the 4th week in both groups. Results. Forty-eight patients completed the study. At the end of the fourth week, the frequency and consistency of defecation improved significantly (P = 0.042 and P = 0.049, resp.). At the end of the first week, fecal incontinence and abdominal pain improved significantly in intervention group (P = 0.030 and P = 0.017, resp.) but, at the end of the fourth week, this difference was not significant (P = 0.125 and P = 0.161, resp.). A significant weight gain was observed at the end of the 1st week in the treatment group. Conclusion. This study showed that probiotics had a positive role in increasing the frequency and improving the consistency at the end of 4th week.", "title": "" } ]
scidocsrr
5c6e50513d395d2ed39b345149d45fbf
Annotating Characters in Literary Corpora: A Scheme, the CHARLES Tool, and an Annotated Novel
[ { "docid": "67992d0c0b5f32726127855870988b01", "text": "We present a method for extracting social networks from literature, namely, nineteenth-century British novels and serials. We derive the networks from dialogue interactions, and thus our method depends on the ability to determine when two characters are in conversation. Our approach involves character name chunking, quoted speech attribution and conversation detection given the set of quotes. We extract features from the social networks and examine their correlation with one another, as well as with metadata such as the novel’s setting. Our results provide evidence that the majority of novels in this time period do not fit two characterizations provided by literacy scholars. Instead, our results suggest an alternative explanation for differences in social networks.", "title": "" }, { "docid": "75f895ff76e7a55d589ff30637524756", "text": "This paper details the coreference resolution system submitted by Stanford at the CoNLL2011 shared task. Our system is a collection of deterministic coreference resolution models that incorporate lexical, syntactic, semantic, and discourse information. All these models use global document-level information by sharing mention attributes, such as gender and number, across mentions in the same cluster. We participated in both the open and closed tracks and submitted results using both predicted and gold mentions. Our system was ranked first in both tracks, with a score of 57.8 in the closed track and 58.3 in the open track.", "title": "" } ]
[ { "docid": "30b1b4df0901ab61ab7e4cfb094589d1", "text": "Direct modulation at 56 and 50 Gb/s of 1.3-μm InGaAlAs ridge-shaped-buried heterostructure (RS-BH) asymmetric corrugation-pitch-modulation (ACPM) distributed feedback lasers is experimentally demonstrated. The fabricated lasers have a low threshold current (5.6 mA at 85°C), high temperature characteristics (71 K), high slope relaxation frequency (3.2 GHz/mA^(1/2) at 85°C), and wide bandwidth (22.1 GHz at 85°C). These superior properties enable the lasers to run at 56 Gb/s and 55°C and 50 Gb/s at up to 80°C for back-to-back operation with clear eye openings. This is achieved by the combination of a low-leakage RS-BH and an ACPM grating. Moreover, successful transmission of 56- and 50-Gb/s modulated signals over a 10-km standard single-mode fiber is achieved. These results confirm the suitability of this type of laser for use as a cost-effective light source in 400 GbE and OTU5 applications.", "title": "" }, { "docid": "8e654ace264f8062caee76b0a306738c", "text": "We present a fully fledged practical working application for a rule-based NLG system that is able to create non-trivial, human sounding narrative from structured data, in any language (e.g., English, German, Arabic and Finnish) and for any topic.", "title": "" }, { "docid": "46950519803aba56a0cce475964b99d7", "text": "The coverage problem in the field of robotics is the problem of moving a sensor or actuator over all points in a given region. Example applications of this problem are lawn mowing, spray painting, and aerial or underwater mapping. In this paper, I consider the single-robot offline version of this problem, i.e. given a map of the region to be covered, plan an efficient path for a single robot that sweeps the sensor or actuator over all points. One basic approach to this problem is to decompose the region into subregions, select a sequence of those subregions, and then generate a path that covers each subregion in turn. This paper addresses the problem of creating a good decomposition. Under certain assumptions, the cost to cover a polygonal subregion is proportional to its minimum altitude. An optimal decomposition then minimizes the sum of subregion altitudes. This paper describes an algorithm to find the minimal sum of altitudes (MSA) decomposition of a region with a polygonal boundary and polygonal holes. This algorithm creates an initial decomposition based upon multiple line sweeps and then applies dynamic programming to find the optimal decomposition. This paper describes the algorithm and reports results from an implementation. Several appendices give details and proofs regarding line sweep algorithms.", "title": "" }, { "docid": "7ca66f5741b5ebe9a9f2cd15547f58dc", "text": "A vehicle management system based on UHF band RFID technology is proposed. This system is applied for vehicle entering/leaving at road gates. The system consists of tag-on-car, reader antenna, reader controller, and the monitoring and commanding software. It could effective control the vehicles passing through road gate and record the vehicles' data. The entering time, leaving time, and tag number of each vehicle are all recorded and saved for further processing. By the benefit of UHF band long distance sensing ability, within nine meter the distance between vehicle and reader antenna, the signal can be accurately detected even the vehicle's speed at nearly 30 km/hr. The monitoring and commanding software can not only identify car owners' identities but also determine the gate to open or not. 
Accessories such as video recording and pressure sensing components can be flexibly added to enhance the system's performance. This system has been tested in many field tests and the results show that it is suitable for vehicle management and related applications.", "title": "" }, { "docid": "6e82e635682cf87a84463f01c01a1d33", "text": "Finger veins have been proved to be an effective biometric for personal identification in recent years. However, finger vein images are easily affected by influences such as image translation, orientation, scale, scattering, finger structure, complicated background, uneven illumination, and collection posture. All these factors may contribute to inaccurate region of interest (ROI) definition, and so degrade the performance of a finger vein identification system. To address this problem, in this paper, we propose a finger vein ROI localization method that has high effectiveness and robustness against the above factors. The proposed method consists of a set of steps to localize ROIs accurately, namely segmentation, orientation correction, and ROI detection. Accurate finger region segmentation and correctly calculated orientation can support each other to produce higher accuracy in localizing ROIs. Extensive experiments have been performed on the finger vein image database, MMCBNU_6000, to verify the robustness of the proposed method. The proposed method achieves a segmentation accuracy of 100%. Furthermore, the average processing time of the proposed method is 22 ms for an acquired image, which satisfies the criterion of a real-time finger vein identification system.", "title": "" }, { "docid": "756b25456494b3ece9b240ba3957f91c", "text": "In this paper we introduce the task of fact checking, i.e. the assessment of the truthfulness of a claim. The task is commonly performed manually by journalists verifying the claims made by public figures. Furthermore, ordinary citizens need to assess the truthfulness of the increasing volume of statements they consume. Thus, developing fact checking systems is likely to be of use to various members of society. We first define the task and detail the construction of a publicly available dataset using statements fact-checked by journalists available online. Then, we discuss baseline approaches for the task and the challenges that need to be addressed. Finally, we discuss how fact checking relates to mainstream natural language processing tasks and can stimulate further research.", "title": "" }, { "docid": "6650966d57965a626fd6f50afe6cd7a4", "text": "This paper presents a generalized version of the linear threshold model for simulating multiple cascades on a network while allowing nodes to switch between them. The proposed model is shown to be a rapidly mixing Markov chain and the corresponding steady state distribution is used to estimate highly likely states of the cascades' spread in the network. Results on a variety of real world networks demonstrate the high quality of the estimated solution.", "title": "" }, { "docid": "fbb71a8a7630350a7f33f8fb90b57965", "text": "As the Web of Things (WoT) broadens real world interaction via the internet, there is an increasing need for a user centric model for managing and interacting with real world objects. We believe that online social networks can provide that capability and can enhance existing and future WoT platforms leading to a Social WoT. 
As both social overlays and user interface containers, online social networks (OSNs) will play a significant role in the evolution of the web of things. In these roles, they can be used by end users and applications as an on-line entry point for interacting with things, both receiving updates from sensors and controlling things. Conversely, access to user identity and profile information, content and social graphs can be useful in physical social settings like cafés. In this paper we describe some of the key features of social networks used by existing social WoT systems. We follow this with a discussion of open research questions related to integration of OSNs and how OSNs may evolve to be more suitable for integration with places and things. Several ongoing projects in our lab leverage OSNs to connect places and things to online communities.", "title": "" }, { "docid": "a2ab7befeec6dbe3d8334ccf7f39fe1d", "text": "We present a method for finding the boundaries between adjacent regions in an image, where "seed" areas have already been identified in the individual regions to be segmented. This method was motivated by the problem of finding the borders of cells in microscopy images, given a labelling of the nuclei in the images. The method finds the Voronoi region of each seed on a manifold with a metric controlled by local image properties. We discuss similarities to other methods based on image-controlled metrics, such as Geodesic Active Contours, and give a fast algorithm for computing the Voronoi regions. We validate our method against hand-traced boundaries for cell images.", "title": "" }, { "docid": "3ca2933b896b6ab80ba91e00869b4f50", "text": "In recent years, the spectacular development of web technologies has led to an enormous quantity of user-generated information in online systems. This large amount of information makes web platforms viable for use as data sources in applications based on opinion mining and sentiment analysis. The paper proposes an algorithm for detecting sentiments in movie user reviews, based on a naive Bayes classifier. We analyze the opinion mining domain, the techniques used in sentiment analysis, and their applicability. We implemented the proposed algorithm, tested its performance, and suggested directions for development.", "title": "" }, { "docid": "80b5030cbb923f32dc791409eb184a80", "text": "Bayesian Optimisation (BO) refers to a class of methods for global optimisation of a function f which is only accessible via point evaluations. It is typically used in settings where f is expensive to evaluate. A common use case for BO in machine learning is model selection, where it is not possible to analytically model the generalisation performance of a statistical model, and we resort to noisy and expensive training and validation procedures to choose the best model. Conventional BO methods have focused on Euclidean and categorical domains, which, in the context of model selection, only permit tuning scalar hyper-parameters of machine learning algorithms. However, with the surge of interest in deep learning, there is an increasing demand to tune neural network architectures. In this work, we develop NASBOT, a Gaussian process based BO framework for neural architecture search. To accomplish this, we develop a distance metric in the space of neural network architectures which can be computed efficiently via an optimal transport program. 
This distance might be of independent interest to the deep learning community as it may find applications outside of BO. We demonstrate that NASBOT outperforms other alternatives for architecture search in several cross validation based model selection tasks on multi-layer perceptrons and convolutional neural networks.", "title": "" }, { "docid": "fe03dc323c15d5ac390e67f9aa0415b8", "text": "Objects make distinctive sounds when they are hit or scratched. These sounds reveal aspects of an object's material properties, as well as the actions that produced them. In this paper, we propose the task of predicting what sound an object makes when struck as a way of studying physical interactions within a visual scene. We present an algorithm that synthesizes sound from silent videos of people hitting and scratching objects with a drumstick. This algorithm uses a recurrent neural network to predict sound features from videos and then produces a waveform from these features with an example-based synthesis procedure. We show that the sounds predicted by our model are realistic enough to fool participants in a \"real or fake\" psychophysical experiment, and that they convey significant information about material properties and physical interactions.", "title": "" }, { "docid": "da48aae7960f0871c91d4c6c9f5f44bf", "text": "It is often difficult to ground text to precise time intervals due to the inherent uncertainty arising from either missing or multiple expressions at year, month, and day time granularities. We address the problem of estimating an excerpt-time model capturing the temporal scope of a given news article excerpt as a probability distribution over chronons. For this, we propose a semi-supervised distribution propagation framework that leverages redundancy in the data to improve the quality of estimated time models. Our method generates an event graph with excerpts as nodes and models various inter-excerpt relations as edges. It then propagates empirical excerpt-time models estimated for temporally annotated excerpts, to those that are strongly related but miss annotations. In our experiments, we first generate a test query set by randomly sampling 100 Wikipedia events as queries. For each query, making use of a standard text retrieval model, we then obtain top-10 documents with an average of 150 excerpts. From these, each temporally annotated excerpt is considered as gold standard. The evaluation measures are first computed for each gold standard excerpt for a single query, by comparing the estimated model with our method to the empirical model from the original expressions. Final scores are reported by averaging over all the test queries. Experiments on the English Gigaword corpus show that our method estimates significantly better time models than several baselines taken from the literature.", "title": "" }, { "docid": "fc9a1db9842daa789b10aaff8fdbc996", "text": "Time series clustering has become an important topic, particularly for similarity search amongst long time series such as those arising in bioinformatics. Unfortunately, existing methods for time series clustering that rely on the actual time series point values can become impractical since the methods do not scale well for longer time series, and many clustering algorithms do not easily handle high dimensional data. In this paper we propose a scalable method for time series clustering that replaces the time series point values with some global measures of the characteristics of the time series. 
These global measures are then clustered using a selforganising map, which performs additional dimension reduction. The proposed approach has been tested using some benchmark time series previously reported for time series clustering, and is shown to yield useful and robust clustering. The resulting clusters are similar to those produced by other methods, with some interesting variations that can be intuitively explained with knowledge of the global characteristics of the time series.", "title": "" }, { "docid": "2d7ff73a3fb435bd11633f650b23172e", "text": "This study determined the effect of Tetracarpidium conophorum (black walnut) leaf extract on the male reproductive organs of albino rats. The effects of the leaf extracts were determined on the Epididymal sperm concentration, Testicular histology, and on testosterone concentration in the rat serum by a micro plate enzyme immunoassay (Testosterone assay). A total of sixteen (16) male albino wistar rats were divided into four (1, 2, 3 and 4) groups of four rats each. Group 1 served as the control and was fed with normal diet only, while groups 2, 3 and 4 were fed with 200, 400 and 600 mg/kg body weight (BW) of the extract for a period of two weeks. The Epididymal sperm concentration were not significantly affected (p>0.05) across the groups. The level of testosterone for the treatment groups 2 and 4 showed no significant difference (p>0.05) compared to the control while group 4 showed significant increase compared to that of the control (p<0.05). Pathologic changes were observed in testicular histology across the treatment groups. Robust seminiferous tubular lumen containing sperm cells and increased production of Leydig cells and Sertoli cells were observed across different treatment groups compared to that of the control.", "title": "" }, { "docid": "d29485bc844995b639bb497fb05fcb6a", "text": "Vol. LII (June 2015), 375–393 375 © 2015, American Marketing Association ISSN: 0022-2437 (print), 1547-7193 (electronic) *Paul R. Hoban is Assistant Professor of Marketing, Wisconsin School of Business, University of Wisconsin–Madison (e-mail: phoban@ bus. wisc. edu). Randolph E. Bucklin is Professor of Marketing, Peter W. Mullin Chair in Management, UCLA Anderson School of Management, University of California, Los Angeles (e-mail: randy.bucklin@anderson. ucla. edu). Avi Goldfarb served as associate editor for this article. PAUL R. HOBAN and RANDOLPH E. BUCKLIN*", "title": "" }, { "docid": "b33b2abdc858b25d3aae1e789bca535c", "text": "Rapid urbanization creates new challenges and issues, and the smart city concept offers opportunities to rise to these challenges, solve urban problems and provide citizens with a better living environment. This paper presents an exhaustive literature survey of smart cities. First, it introduces the origin and main issues facing the smart city concept, and then presents the fundamentals of a smart city by analyzing its definition and application domains. Second, a data-centric view of smart city architectures and key enabling technologies is provided. Finally, a survey of recent smart city research is presented. This paper provides a reference to researchers who intend to contribute to smart city research and implementation. 
Rapid urbanization around the world has brought many new problems and challenges to urban development, and the emergence of the smart city concept offers an effective way to solve current urban problems and provide a better urban environment. This paper introduces the origin of the smart city and summarizes three major issues in the smart city field, which are explored through a detailed survey of the literature. It first summarizes and analyzes the definition and application domains of the smart city, then studies the smart city architecture, identifies the data-centric, multi-domain-fusion characteristics of smart cities, defines a layered architecture centered on data vitalization technology, and introduces its key technologies. Finally, it selects three representative application domains (urban transportation, urban crowd behavior, and urban planning) to present the latest research progress and open problems in urban data analysis and processing.", "title": "" }, { "docid": "91ef2853e45d9b82f92689e0b01e6d63", "text": "BACKGROUND\nThis study sought to evaluate the efficacy of nonoperative compression in correcting pectus carinatum in children.\n\n\nMATERIALS AND METHODS\nChildren presenting with pectus carinatum between August 1999 and January 2004 were prospectively enrolled in this study. The management protocol included custom compressive bracing, strengthening exercises, and frequent clinical follow-up.\n\n\nRESULTS\nThere were 30 children seen for evaluation. Their mean age was 13 years (range, 3-16 years) and there were 26 boys and 4 girls. Of the 30 original patients, 6 never returned to obtain the brace, leaving 24 patients in the study. Another 4 subjects were lost to follow-up. For the remaining 20 patients who have either completed treatment or continue in the study, the mean duration of bracing was 16 months, involving an average of 3 follow-up visits and 2 brace adjustments. Five of these patients had little or no improvement due to either too short a follow-up or noncompliance with the bracing. The other 15 patients (75%) had a significant to complete correction. There were no complications encountered during the study period.\n\n\nCONCLUSION\nCompressive orthotic bracing is a safe and effective alternative to both invasive surgical correction and no treatment for pectus carinatum in children. Compliance is critical to the success of this management strategy.", "title": "" }, { "docid": "fd18b3d4799d23735c48bff3da8fd5ff", "text": "There is a need for an Integrated Event Focused Crawling system to collect Web data about key events. When a disaster or other significant event occurs, many users try to locate the most up-to-date information about that event. Yet, there is little systematic collecting and archiving anywhere of event information. We propose intelligent event focused crawling for automatic event tracking and archiving, ultimately leading to effective access. We developed an event model that can capture key event information, and incorporated that model into a focused crawling algorithm. For the focused crawler to leverage the event model in predicting webpage relevance, we developed a function that measures the similarity between two event representations. We then conducted two series of experiments to evaluate our system about two recent events: California shooting and Brussels attack. The first experiment series evaluated the effectiveness of our proposed event model representation when assessing the relevance of webpages. Our event model-based representation outperformed the baseline method (topic-only); it showed better results in precision, recall, and F1-score with an improvement of 20% in F1-score. The second experiment series evaluated the effectiveness of the event model-based focused crawler for collecting relevant webpages from the WWW. 
Our event model-based focused crawler outperformed the state-of-the-art baseline focused crawler (best-first); it showed better results in the harvest ratio with an average improvement of 40%.", "title": "" }, { "docid": "417fe20322c4458c58553c6d0984cabe", "text": "Neural Turing Machines (NTMs) are an instance of Memory Augmented Neural Networks, a new class of recurrent neural networks which decouple computation from memory by introducing an external memory unit. NTMs have demonstrated superior performance over Long Short-Term Memory Cells in several sequence learning tasks. A number of open source implementations of NTMs exist but are unstable during training and/or fail to replicate the reported performance of NTMs. This paper presents the details of our successful implementation of an NTM. Our implementation learns to solve three sequential learning tasks from the original NTM paper. We find that the choice of memory contents initialization scheme is crucial in successfully implementing an NTM. Networks with memory contents initialized to small constant values converge on average 2 times faster than the next best memory contents initialization scheme.", "title": "" } ]
scidocsrr
beca077eb153f4fef0e3419a7517832a
Spatiotemporal Multi-Task Network for Human Activity Understanding
[ { "docid": "43e3d3639d30d9e75da7e3c5a82db60a", "text": "This paper studies deep network architectures to address the problem of video classification. A multi-stream framework is proposed to fully utilize the rich multimodal information in videos. Specifically, we first train three Convolutional Neural Networks to model spatial, short-term motion and audio clues respectively. Long Short Term Memory networks are then adopted to explore long-term temporal dynamics. With the outputs of the individual streams, we propose a simple and effective fusion method to generate the final predictions, where the optimal fusion weights are learned adaptively for each class, and the learning process is regularized by automatically estimated class relationships. Our contributions are two-fold. First, the proposed multi-stream framework is able to exploit multimodal features that are more comprehensive than those previously attempted. Second, we demonstrate that the adaptive fusion method using the class relationship as a regularizer outperforms traditional alternatives that estimate the weights in a “free” fashion. Our framework produces significantly better results than the state of the arts on two popular benchmarks, 92.2% on UCF-101 (without using audio) and 84.9% on Columbia Consumer Videos.", "title": "" }, { "docid": "47b4b22cee9d5693c16be296afe61982", "text": "In this work we introduce a fully end-to-end approach for action detection in videos that learns to directly predict the temporal bounds of actions. Our intuition is that the process of detecting actions is naturally one of observation and refinement: observing moments in video, and refining hypotheses about when an action is occurring. Based on this insight, we formulate our model as a recurrent neural network-based agent that interacts with a video over time. The agent observes video frames and decides both where to look next and when to emit a prediction. Since backpropagation is not adequate in this non-differentiable setting, we use REINFORCE to learn the agent's decision policy. Our model achieves state-of-the-art results on the THUMOS'14 and ActivityNet datasets while observing only a fraction (2% or less) of the video frames.", "title": "" } ]
[ { "docid": "19bb054fb4c6398df99a84a382354d59", "text": "Neural networks are vulnerable to adversarial examples and researchers have proposed many heuristic attack and defense mechanisms. We take the principled view of distributionally robust optimization, which guarantees performance under adversarial input perturbations. By considering a Lagrangian penalty formulation of perturbation of the underlying data distribution in a Wasserstein ball, we provide a training procedure that augments model parameter updates with worst-case perturbations of training data. For smooth losses, our procedure provably achieves moderate levels of robustness with little computational or statistical cost relative to empirical risk minimization. Furthermore, our statistical guarantees allow us to efficiently certify robustness for the population loss. We match or outperform heuristic approaches on supervised and reinforcement learning tasks.", "title": "" }, { "docid": "67e599e65a963f54356b78ce436096c2", "text": "This paper establishes the existence of observable footprints that reveal the causal dispositions of the object categories appearing in collections of images. We achieve this goal in two steps. First, we take a learning approach to observational causal discovery, and build a classifier that achieves state-of-the-art performance on finding the causal direction between pairs of random variables, given samples from their joint distribution. Second, we use our causal direction classifier to effectively distinguish between features of objects and features of their contexts in collections of static images. Our experiments demonstrate the existence of a relation between the direction of causality and the difference between objects and their contexts, and by the same token, the existence of observable signals that reveal the causal dispositions of objects.", "title": "" }, { "docid": "a0f11651bb4674fd3b425a65fcbe1d58", "text": "Two studies examined whether forgiveness in married couples is associated with better conflict resolution. Study 1 examined couples in their 3rd year of marriage and identified 2 forgiveness dimensions (retaliation and benevolence). Husbands' retaliatory motivation was a significant predictor of poorer wife-reported conflict resolution, whereas wives' benevolence motivation predicted husbands' reports of better conflict resolution. Examining longer term marriages, Study 2 identified three forgiveness dimensions (retaliation, avoidance and benevolence). Whereas wives' benevolence again predicted better conflict resolution, husbands' avoidance predicted wives' reports of poorer conflict resolution. All findings were independent of both spouses' marital satisfaction. The findings are discussed in terms of the importance of forgiveness for marital conflict and its implications for spouse goals. Future research directions on forgiveness are outlined.", "title": "" }, { "docid": "f5ce55253aa69ca09fde79d6fd1c830d", "text": "We present an approach for high-resolution video frame prediction by conditioning on both past frames and past optical flows. Previous approaches rely on resampling past frames, guided by a learned future optical flow, or on direct generation of pixels. Resampling based on flow is insufficient because it cannot deal with disocclusions. Generative models currently lead to blurry results. Recent approaches synthesis a pixel by convolving input patches with a predicted kernel. However, their memory requirement increases with kernel size. 
Here, we present spatially-displaced convolution (SDC) module for video frame prediction. We learn a motion vector and a kernel for each pixel and synthesize a pixel by applying the kernel at a displaced location in the source image, defined by the predicted motion vector. Our approach inherits the merits of both vector-based and kernel-based approaches, while ameliorating their respective disadvantages. We train our model on 428K unlabelled 1080p video game frames. Our approach produces state-of-the-art results, achieving an SSIM score of 0.904 on high-definition YouTube-8M videos, 0.918 on Caltech Pedestrian videos. Our model handles large motion effectively and synthesizes crisp frames with consistent motion.", "title": "" }, { "docid": "0eea594d14beea7be624d9cffc543f12", "text": "BACKGROUND\nLoss of the interproximal dental papilla may cause functional and, especially in the maxillary anterior region, phonetic and severe esthetic problems. The purpose of this study was to investigate whether the distance from the contact point to the bone crest on standardized periapical radiographs of the maxillary anterior teeth could be correlated with the presence of the interproximal papilla in Taiwanese patients.\n\n\nMETHODS\nIn total, 200 interproximal sites of maxillary anterior teeth in 45 randomly selected patients were examined. Selected subjects were adult Taiwanese with fully erupted permanent dentition. The presence of the interproximal papilla was determined visually. If there was no visible space apical to the contact area, the papilla was recorded as being present. The distance from the contact point to the crest of bone was measured on standardized periapical radiographs using a paralleling technique with a RinnXCP holder.\n\n\nRESULTS\nData revealed that when the distance from the contact point to the bone crest on standardized periapical radiographs was 5 mm or less, the papillae were almost 100% present. When the distance was 6 mm, 51% of the papillae were present, and when the distance was 7 mm or greater, only 23% of the papillae were present.\n\n\nCONCLUSION\nThe distance from the contact point to the bone crest on standardized periapical radiographs of the maxillary anterior teeth is highly associated with the presence or absence of the interproximal papilla in Taiwanese patients, and is a useful guide for clinical evaluation.", "title": "" }, { "docid": "b2cc05224008233e6a9b807b76a1fbf5", "text": "This paper presents a non-isolated, high boost ratio hybrid transformer dc-dc converter with applications for low voltage renewable energy sources. The proposed converter utilizes a hybrid transformer to transfer the inductive and capacitive energy simultaneously, achieving a high boost ratio with a smaller size magnetic component. As a result of incorporating the resonant operation mode into the traditional high boost ratio PWM converter, the turn off loss of the switch is reduced, increasing the efficiency of the converter under all load conditions. The input current ripple is also reduced because of the linear-sinusoidal hybrid waveforms. The voltage stresses on the active switch and diodes are maintained at a low level and are independent of the changing input voltage over a wide range as a result of the resonant capacitor transferring energy to the output. The effectiveness of the proposed converter was experimentally verified using a 220 W prototype circuit. 
Utilizing an input voltage ranging from 20V to 45V and a load range of 30W to 220W, the experimental results show system efficiencies greater than 96% with a peak efficiency of 97.4% at 35V input, 160W output. Because of its high efficiency over a wide output power range and its ability to operate with a widely variable input voltage, the proposed converter is an attractive design for alternative low dc voltage energy sources, such as solar photovoltaic (PV) modules.", "title": "" }, { "docid": "423228556cb473e0fab48a2dc57cbf6f", "text": "This paper focuses on the dynamic modeling and the LQR and PID controllers for a self-balancing unicycle robot. The mechanism of the unicycle robot is designed. The pitching and rolling balance can be achieved by driving the motor on the wheel and the balance weight on the body of the robot. The dynamic equations of the robot are presented based on the Routh equation. On this basis, the LQR and PID controllers of the unicycle robot are proposed. The balance control experiments are demonstrated through the Simulink toolbox of Matlab. The simulation results show that the robot can achieve self-balancing after a short period of time with the designed controllers. Comparing the results, the errors of the PID controller are relatively smaller than those of the LQR controller, while the response speed of the LQR controller is faster than that of the PID controller. Finally, a combined LQR & PID controller is proposed. This controller has the advantages of both the LQR and PID controllers.", "title": "" }, { "docid": "9901be4dddeb825f6443d75a6566f2d0", "text": "In this paper a new approach to gas leakage detection in high pressure natural gas transportation networks is proposed. The pipeline is modelled as a Linear Parameter Varying (LPV) System driven by the source node massflow with the gas inventory variation in the pipe (linepack variation, proportional to the pressure variation) as the scheduling parameter. The massflow at the offtake node is taken as the system output. The system is identified by the Successive Approximations LPV System Subspace Identification Algorithm which is also described in this paper. The leakage is detected using a Kalman filter where the fault is treated as an augmented state. Given that the gas linepack can be estimated from the massflow balance equation, a differential method is proposed to improve the leakage detector effectiveness. A small section of a gas pipeline crossing Portugal in the direction South to North is used as a case study. LPV models are identified from normal operational data and their accuracy is analyzed. The proposed LPV Kalman filter based methods are compared with a standard mass balance method in a simulated 10% leakage detection scenario. The Differential Kalman Filter method proved to be highly efficient.", "title": "" }, { "docid": "39673b789ee8d8c898c93b7627b31f0a", "text": "In this position paper, we initiate a systematic treatment of reaching consensus in a permissionless network. We prove several simple but hopefully insightful lower bounds that demonstrate exactly why reaching consensus in a permissionless setting is fundamentally more difficult than the classical, permissioned setting. We then present a simplified proof of Nakamoto's blockchain which we recommend for pedagogical purposes. 
Finally, we survey recent results including how to avoid well-known painpoints in permissionless consensus, and how to apply core ideas behind blockchains to solve consensus in the classical, permissioned setting and meanwhile achieve new properties that are not attained by classical approaches.", "title": "" }, { "docid": "5ce4f8227c5eebfb8b7b1dffc5557712", "text": "In this paper, we propose a novel approach for face spoofing detection using the high-order Local Derivative Pattern from Three Orthogonal Planes (LDP-TOP). The proposed method is not only simple to derive and implement, but also highly efficient, since it takes into account both spatial and temporal information in different directions of subtle face movements. According to experimental results, the proposed approach outperforms state-of-the-art methods on three reference datasets, namely Idiap REPLAY-ATTACK, CASIA-FASD, and MSU MFSD. Moreover, it requires only 25 video frames from each video, i.e., only one second, and thus potentially can be performed in real time even on low-cost devices.", "title": "" }, { "docid": "b02d9621ee919bccde66418e0681d1e6", "text": "A great deal of work has been done on the evaluation of information retrieval systems for alphanumeric data. The same thing can not be said about the newly emerging multimedia and image database systems. One of the central concerns in these systems is the automatic characterization of image content and retrieval of images based on similarity of image content. In this paper, we discuss effectiveness of several shape measures for content based similarity retrieval of images. The different shape measures we have implemented include outline based features (chain code based string features, Fourier descriptors, UNL Fourier features), region based features (invariant moments, Zemike moments, pseudoZemike moments), and combined features (invariant moments & Fourier descriptors, invariant moments & UNL Fourier features). Given an image, all these shape feature measures (vectors) are computed automatically, and the feature vector can either be used for the retrieval purpose or can be stored in the database for future queries. We have tested all of the above shape features for image retrieval on a database of 500 trademark images. The average retrieval efficiency values computed over a set of fifteen representative queries for all the methods is presented. The output of a sample shape similarity query using all the features is also shown.", "title": "" }, { "docid": "ccac025250d397a5bcc6a5f847d2cc81", "text": "With the widespread clinical use of comparative genomic hybridization chromosomal microarray technology, several previously unidentified clinically significant submicroscopic chromosome abnormalities have been discovered. Specifically, there have been reports of clinically significant microduplications found in regions of known microdeletion syndromes. In general, these microduplications have distinct features from those described in the corresponding microdeletion syndromes. We present a 5½-year-old patient with normal growth, borderline normal IQ, borderline hypertelorism, and speech and language delay who was found to have a submicroscopic 2.3 Mb terminal duplication involving the two proposed Wolf-Hirschhorn syndrome (WHS) critical regions at chromosome 4p16.3. This duplication was the result of a maternally inherited reciprocal translocation involving the breakpoints 4p16.3 and 17q25.3. 
Our patient's features are distinct from those described in WHS and are not as severe as those described in partial trisomy 4p. There are two other patients in the medical literature with 4p16.3 microduplications of similar size also involving the WHS critical regions. Our patient shows clinical overlap with these two patients, although overall her features are milder than what has been previously described. Our patient's features expand the knowledge of the clinical phenotype of a 4p16.3 microduplication and highlight the need for further information about it.", "title": "" }, { "docid": "c9e3521029a45be5e32d79700a096083", "text": "In this paper, we propose Dynamics Transfer GAN; a new method for generating video sequences based on generative adversarial learning. The spatial constructs of a generated video sequence are acquired from the target image. The dynamics of the generated video sequence are imported from a source video sequence, with arbitrary motion, and imposed onto the target image. To preserve the spatial construct of the target image, the appearance of the source video sequence is suppressed and only the dynamics are obtained before being imposed onto the target image. That is achieved using the proposed appearance suppressed dynamics feature. Moreover, the spatial and temporal consistencies of the generated video sequence are verified via two discriminator networks. One discriminator validates the fidelity of the generated frames appearance, while the other validates the dynamic consistency of the generated video sequence. Experiments have been conducted to verify the quality of the video sequences generated by the proposed method. The results verified that Dynamics Transfer GAN successfully transferred arbitrary dynamics of the source video sequence onto a target image when generating the output video sequence. The experimental results also showed that Dynamics Transfer GAN maintained the spatial constructs (appearance) of the target image while generating spatially and temporally consistent video sequences.", "title": "" }, { "docid": "e00295dc86476d1d350d11068439fe87", "text": "A 10-bit LCD column driver, consisting of piecewise linear digital to analog converters (DACs), is proposed. Piecewise linear compensation is utilized to reduce the die area and to increase the effective color depth. The data conversion is carried out by a resistor string type DAC (R-DAC) and a charge sharing DAC, which are used for the most significant bit and least significant bit data conversions, respectively. Gamma correction voltages are applied to the R-DAC to lit the inverse of the liquid crystal trans-mittance-voltage characteristic. The gamma correction can also be digitally fine-tuned in the timing controller or column drivers. A prototype 10-bit LCD column driver implemented in a 0.35-mum CMOS technology demonstrates that the settling time is within 3 mus and the average die size per channel is 0.063 mm2, smaller than those of column drivers based exclusively on R-DACs.", "title": "" }, { "docid": "12266d895ea552965d9bc06b676b2cab", "text": "A new concept development and practical implementation of an OFDM based secondary cognitive link are presented in this paper. Coexistence of a secondary user employing Orthogonal Frequency Division Multiplexing (OFDM) and a primary user employing Frequency Hopping (FH) is achieved. Secondary and primary links are realized using Universal Software Radio Peripheral (USRP) N210 platforms. 
Cognitive features of spectrum sensing and changing transmission parameters are implemented. Some experimental results are presented.", "title": "" }, { "docid": "1a620e17048fa25cfc54f5c9fb821f39", "text": "The performance of a detector depends much on its training dataset and drops significantly when the detector is applied to a new scene due to the large variations between the source training dataset and the target scene. In order to bridge this appearance gap, we propose a deep model to automatically learn scene-specific features and visual patterns in static video surveillance without any manual labels from the target scene. It jointly learns a scene-specific classifier and the distribution of the target samples. Both tasks share multi-scale feature representations with both discriminative and representative power. We also propose a cluster layer in the deep model that utilizes the scenespecific visual patterns for pedestrian detection. Our specifically designed objective function not only incorporates the confidence scores of target training samples but also automatically weights the importance of source training samples by fitting the marginal distributions of target samples. It significantly improves the detection rates at 1 FPPI by 10% compared with the state-of-the-art domain adaptation methods on MIT Traffic Dataset and CUHK Square Dataset.", "title": "" }, { "docid": "f518ee9b64721866d69f8d1982200c72", "text": "Bradyrhizobium japonicum is one of the soil bacteria that form nodules on soybean roots. The cell has two sets of flagellar systems, one thick flagellum and a few thin flagella, uniquely growing at subpolar positions. The thick flagellum appears to be semicoiled in morphology, and the thin flagella were in a tight-curly form as observed by dark-field microscopy. Flagellin genes were identified from the amino acid sequence of each flagellin. Flagellar genes for the thick flagellum are scattered into several clusters on the genome, while those genes for the thin flagellum are compactly organized in one cluster. Both types of flagella are powered by proton-driven motors. The swimming propulsion is supplied mainly by the thick flagellum. B. japonicum flagellar systems resemble the polar-lateral flagellar systems of Vibrio species but differ in several aspects.", "title": "" }, { "docid": "d51f0b51f03e310dd183e3a7cb199288", "text": "Traditional vision-based localization methods such as visual SLAM suffer from practical problems in outdoor environments such as unstable feature detection and inability to perform location recognition under lighting, perspective, weather and appearance change. Additionally map construction on a large scale in these systems presents its own challenges. In this work, we present a novel method for precisely localizing vehicles on the road using signs marked on the road (road markings), which have the advantage of being distinct and easy to detect, their detection being robust under changes in lighting and weather. Our method uses corners detected on road markings to perform localization in global coordinates. The method consists of two phases - a mapping phase when a high-quality GPS device is used to automatically survey road marks and add them to a light-weight “map” or database, and a localization phase where road mark detection and look-up in the map, combined with visual odometry, produces precise localization. 
We present experiments using a real-time implementation operating in a car that demonstrates the improved localization robustness and accuracy of our system even when using road marks alone. However, in this case the trajectory between road marks has to be filled-in by visual odometry, which contributes drift. Hence, we also present a mechanism for combining road-mark-based maps with sparse feature-based maps that results in greater accuracy still. We see our use of road marks as a significant step in the general trend of using higher-level features for improved localization performance irrespective of environment conditions.", "title": "" }, { "docid": "2a9c8e0b6c08905fc04415d36432afe0", "text": "Technological advancements have led to the development of numerous wearable robotic devices for the physical assistance and restoration of human locomotion. While many challenges remain with respect to the mechanical design of such devices, it is at least equally challenging and important to develop strategies to control them in concert with the intentions of the user. This work reviews the state-of-the-art techniques for controlling portable active lower limb prosthetic and orthotic (P/O) devices in the context of locomotive activities of daily living (ADL), and considers how these can be interfaced with the user’s sensory-motor control system. This review underscores the practical challenges and opportunities associated with P/O control, which can be used to accelerate future developments in this field. Furthermore, this work provides a classification scheme for the comparison of the various control strategies. As a novel contribution, a general framework for the control of portable gait-assistance devices is proposed. This framework accounts for the physical and informatic interactions between the controller, the user, the environment, and the mechanical device itself. Such a treatment of P/Os – not as independent devices, but as actors within an ecosystem – is suggested to be necessary to structure the next generation of intelligent and multifunctional controllers. Each element of the proposed framework is discussed with respect to the role that it plays in the assistance of locomotion, along with how its states can be sensed as inputs to the controller. The reviewed controllers are shown to fit within different levels of a hierarchical scheme, which loosely resembles the structure and functionality of the nominal human central nervous system (CNS). Active and passive safety mechanisms are considered to be central aspects underlying all of P/O design and control, and are shown to be critical for regulatory approval of such devices for real-world use. The works discussed herein provide evidence that, while we are getting ever closer, significant challenges still exist for the development of controllers for portable powered P/O devices that can seamlessly integrate with the user’s neuromusculoskeletal system and are practical for use in locomotive ADL.", "title": "" }, { "docid": "8c2d6aac36ea2c10463ad05fc5f9b854", "text": "Motion planning plays a key role in autonomous driving. In this work, we introduce the combinatorial aspect of motion planning which tackles the fact that there are usually many possible and locally optimal solutions to accomplish a given task. Those options we call maneuver variants. 
We argue that by partitioning the trajectory space into discrete solution classes, such that local optimization methods yield an optimum within each discrete class, we can improve the chance of finding the global optimum as the optimum trajectory among the manuever variants. This work provides methods to enumerate the maneuver variants as well as constraints to enforce them. The return of the effort put into the problem modification as suggested is gaining assuredness in the convergency behaviour of the optimization algorithm. We show an experiment where we identify three local optima that would not have been found with local optimization methods.", "title": "" } ]
scidocsrr
02945455bace14295528dd3daf6f847d
Magnetic induction for MWD telemetry system
[ { "docid": "dba3434c600ed7ddbb944f0a3adb1ba0", "text": "Although acoustic waves are the most versatile and widely used physical layer technology for underwater wireless communication networks (UWCNs), they are adversely affected by ambient noise, multipath propagation, and fading. The large propagation delays, low bandwidth, and high bit error rates of the underwater acoustic channel hinder communication as well. These operational limits call for complementary technologies or communication alternatives when the acoustic channel is severely degraded. Magnetic induction (MI) is a promising technique for UWCNs that is not affected by large propagation delays, multipath propagation, and fading. In this paper, the MI communication channel has been modeled. Its propagation characteristics have been compared to the electromagnetic and acoustic communication systems through theoretical analysis and numerical evaluations. The results prove the feasibility of MI communication in underwater environments. The MI waveguide technique is developed to reduce path loss. The communication range between source and destination is considerably extended to hundreds of meters in fresh water due to its superior bit error rate performance.", "title": "" } ]
[ { "docid": "bb570de5244b6d4bd066244722060830", "text": "Impact happens when two or more bodies collide, generating very large impulsive forces in a very short period of time during which kinetic energy is first absorbed and then released after some loss. This paper introduces a state transition diagram to model a frictionless multibody collision. Each state describes a different topology of the collision characterized by the set of instantaneously active contacts. A change of state happens when a contact disappears at the end of restitution, or when a disappeared contact reappears as the relative motion of two bodies goes from separation into penetration. Within a state, (normal) impulses are coupled differentially subject to relative stiffnesses at the active contact points and the strain energies stored there. Such coupling may cause restart of compression from restitution during a single impact. Impulses grow along a bounded curve with first-order continuity, and converge during the state transitions. To solve a multibody collision problem with friction and tangential compliance, the above impact model is integrated with a compliant impact model. The paper compares model predictions to a physical experiment for the massé shot, which is a difficult trick in billiards, with a good result.", "title": "" }, { "docid": "54d293423026d84bce69e8e073ebd6ac", "text": "AIMS\nPredictors of Response to Cardiac Resynchronization Therapy (CRT) (PROSPECT) was the first large-scale, multicentre clinical trial that evaluated the ability of several echocardiographic measures of mechanical dyssynchrony to predict response to CRT. Since response to CRT may be defined as a spectrum and likely influenced by many factors, this sub-analysis aimed to investigate the relationship between baseline characteristics and measures of response to CRT.\n\n\nMETHODS AND RESULTS\nA total of 286 patients were grouped according to relative reduction in left ventricular end-systolic volume (LVESV) after 6 months of CRT: super-responders (reduction in LVESV > or =30%), responders (reduction in LVESV 15-29%), non-responders (reduction in LVESV 0-14%), and negative responders (increase in LVESV). In addition, three subgroups were formed according to clinical and/or echocardiographic response: +/+ responders (clinical improvement and a reduction in LVESV > or =15%), +/- responders (clinical improvement or a reduction in LVESV > or =15%), and -/- responders (no clinical improvement and no reduction in LVESV > or =15%). Differences in clinical and echocardiographic baseline characteristics between these subgroups were analysed. Super-responders were more frequently females, had non-ischaemic heart failure (HF), and had a wider QRS complex and more extensive mechanical dyssynchrony at baseline. Conversely, negative responders were more frequently in New York Heart Association class IV and had a history of ventricular tachycardia (VT). Combined positive responders after CRT (+/+ responders) had more non-ischaemic aetiology, more extensive mechanical dyssynchrony at baseline, and no history of VT.\n\n\nCONCLUSION\nSub-analysis of data from PROSPECT showed that gender, aetiology of HF, QRS duration, severity of HF, a history of VT, and the presence of baseline mechanical dyssynchrony influence clinical and/or LV reverse remodelling after CRT. 
Although integration of information about these characteristics would improve patient selection and counselling for CRT, further randomized controlled trials are necessary prior to changing the current guidelines regarding patient selection for CRT.", "title": "" }, { "docid": "2df316f30952ffdb4da1e9797b9658bb", "text": "Breast cancer is a leading disease worldwide, and the success of medical therapies is heavily related to the availability of breast cancer imaging techniques. While current methods, mainly ultrasound, x-ray mammography, and magnetic resonance imaging, all exhibit some disadvantages, a possible alternative investigated in recent years is based on microwave and mm-wave imaging system. A key point for these systems is their reliability in terms of safety, in particular exposure limits. This paper presents a feasibility study for a mm-wave breast cancer imaging system, with the aim of ensuring safety and compliance with the widely adopted European ICNIRP recommendations. The study is based on finite element method models of human tissues, experimentally characterized by measures obtained at one of the most important European clinical center for cancer treatments. Results prove the feasibility of the system, which can meet the exposure limits while providing the required dynamic range to let the receiver detect the cancer anomaly. In addition, the dosimetric quantities used at the present and their maximum limits at mm-waves are taking into discussion and the possibility of needing moderns quantities and limitations is discussed.", "title": "" }, { "docid": "50e9cf4ff8265ce1567a9cc82d1dc937", "text": "Thu, 06 Dec 2018 02:11:00 GMT bayesian reasoning and machine learning pdf Bayesian Reasoning and Machine Learning [David Barber] on Amazon.com. *FREE* shipping on qualifying offers. Machine learning methods extract value from vast data sets ... Thu, 06 Dec 2018 14:35:00 GMT Bayesian Reasoning and Machine Learning: David Barber ... A Bayesian network, Bayes network, belief network, Bayes(ian) model or probabilistic directed acyclic graphical model is a probabilistic graphical model (a type of ... Sat, 08 Dec 2018 04:53:00 GMT Bayesian network Wikipedia Bayesian Reasoning and Machine Learning. The book is available in hardcopy from Cambridge University Press. The publishers have kindly agreed to allow the online ... Sun, 09 Dec 2018 20:51:00 GMT Bayesian Reasoning and Machine Learning, David Barber Machine learning (ML) is the study of algorithms and mathematical models that computer systems use to progressively improve their performance on a specific task. Mon, 10 Dec 2018 14:02:00 GMT Machine learning Wikipedia Your friends and colleagues are talking about something called \"Bayes' Theorem\" or \"Bayes' Rule\", or something called Bayesian reasoning. They sound really ... Mon, 10 Dec 2018 14:24:00 GMT Yudkowsky Bayes' Theorem NIPS 2016 Tutorial on ML Methods for Personalization with Application to Medicine. More here. UAI 2017 Tutorial on Machine Learning and Counterfactual Reasoning for ... Thu, 06 Dec 2018 15:33:00 GMT Suchi Saria – Machine Learning, Computational Health ... Gaussian Processes and Kernel Methods Gaussian processes are non-parametric distributions useful for doing Bayesian inference and learning on unknown functions. Mon, 10 Dec 2018 05:12:00 GMT Machine Learning Group Publications University of This practical introduction is geared towards scientists who wish to employ Bayesian networks for applied research using the BayesiaLab software platform. 
Sun, 09 Dec 2018 17:17:00 GMT Bayesian Networks & BayesiaLab: A Practical Introduction ... Automated Bitcoin Trading via Machine Learning Algorithms Isaac Madan Department of Computer Science Stanford University Stanford, CA 94305 imadan@stanford.edu Tue, 27 Nov 2018 20:01:00 GMT Automated Bitcoin Trading via Machine Learning Algorithms 2.3. Naà ̄ve Bayesian classifier. A Naà ̄ve Bayesian classifier generally seems very simple; however, it is a pioneer in most information and computational applications ... Sun, 09 Dec 2018 03:48:00 GMT Proposed efficient algorithm to filter spam using machine ... Machine Learning: A Probabilistic Perspective (Adaptive Computation and Machine Learning) [Kevin P. Murphy, Francis Bach] on Amazon.com. *FREE* shipping on qualifying ... Sun, 01 Jul 2018 19:30:00 GMT Machine Learning: A Probabilistic Perspective (Adaptive ... So itâ€TMs pretty clear by now that statistics and machine learning arenâ€TMt very different fields. I was recently pointed to a very amusing comparison by the ... Fri, 07 Dec 2018 19:56:00 GMT Statistics vs. Machine Learning, fight! | AI and Social ... Need help with Statistics for Machine Learning? Take my free 7-day email crash course now (with sample code). Click to sign-up and also get a free PDF Ebook version ... Thu, 06 Dec 2018 23:39:00 GMT Statistics for Evaluating Machine Learning Models", "title": "" }, { "docid": "deb2e0c23d3d9ad4d37a8f23bb2280f5", "text": "The purpose of this study was to test if replacement of trans fatty acids by palmitic acid in an experimental margarine results in unfavourable effects on serum lipids and haemostatic factors. We have compared the effects of three different margarines, one based on palm oil (PALM-margarine), one based on partially hydrogenated soybean oil (TRANS- margarine) and one with a high content of polyunsaturated fatty acids (PUFA-margarine), on serum lipids in 27 young women. In nine of the participants fasting levels and diurnal postprandial levels of haemostatic variables on the 3 diets were compared. The sum of 12:0, 14:0, 16:0 provided 11% of energy (E%) in the PALM diet, the same as the sum of 12:0, 14:0, 16:0 and trans fatty acids in the TRANS-diet. Oleic acid provided 10-11E% in all three diets, while PUFA provided 5.7, 5.5 and 10.2 E%, respectively. Total fat provided 30-31% and the test margarines 26% of total energy in all three diets. Each of the diets was consumed for 17 days in a crossover design. There were no significant differences in total cholesterol, LDL-cholesterol and apoB between the TRANS- and the PALM-diet. HDL-cholesterol and apoA-I were significantly higher on the PALM-diet compared to the TRANS-diet while the ratio of LDL- to HDL-cholesterol was lower, although not significantly (P = 0.077) on the PALM-diet. Total cholesterol, LDL-cholesterol and apoB were significantly lower on the PUFA-diet compared to the two other diets. HDL-cholesterol was not different on the PALM- and the PUFA-diet while it was significantly lower on the TRANS-diet compared to the PUFA-diet. Triglycerides and Lp(a) were not different among the three diets. The diurnal postprandial state level of tissue plasminogen activator (t-PA) activity was significantly decreased on the TRANS-diet compared to the PALM-diet. t-PA activity was also decreased on the PUFA-diet compared to PALM-diet although not significantly (P=0.07). 
There were no significant differences in neither fasting levels or in circadian variation of t-PA antigen, PAI-1 activity, PAI-1 antigen, factor VII coagulant activity or fibrinogen between the three diets. Our results suggest that dietary palm oil may have a more favourable effect on the fibrinolytic system compared to partially hydrogenated soybean oil. We conclude that from a nutritional point of view, palmitic acid from palm oil may be a reasonable alternative to trans fatty acids from partially hydrogenated soybean oil in margarine if the aim is to avoid trans fatty acids. A palm oil based margarine is, however, less favourable than one based on a more polyunsaturated vegetable oil.", "title": "" }, { "docid": "b8f50ba62325ffddcefda7030515fd22", "text": "The following statement is intended to provide an understanding of the governance and legal structure of the University of Sheffield. The University is an independent corporation whose legal status derives from a Royal Charter granted in 1905. It is an educational charity, with exempt status, regulated by the Office for Students in its capacity as Principal Regulator. The University has charitable purposes and applies them for the public benefit. It must comply with the general law of charity. The University’s objectives, powers and governance framework are set out in its Charter and supporting Statutes and Regulations.", "title": "" }, { "docid": "fb8e6eac761229fc8c12339fb68002ed", "text": "Cerebrovascular disease results from any pathological process of the blood vessels supplying the brain. Stroke, characterised by its abrupt onset, is the third leading cause of death in humans. This rare condition in dogs is increasingly being recognised with the advent of advanced diagnostic imaging. Magnetic resonance imaging (MRI) is the first choice diagnostic tool for stroke, particularly using diffusion-weighted images and magnetic resonance angiography for ischaemic stroke and gradient echo sequences for haemorrhagic stroke. An underlying cause is not always identified in either humans or dogs. Underlying conditions that may be associated with canine stroke include hypothyroidism, neoplasia, sepsis, hypertension, parasites, vascular malformation and coagulopathy. Treatment is mainly supportive and recovery often occurs within a few weeks. The prognosis is usually good if no underlying disease is found.", "title": "" }, { "docid": "cf88fe250c9dd50caf4f462acdd71238", "text": "We present Code Phage (CP), a system for automatically transferring correct code from donor applications into recipient applications that process the same inputs to successfully eliminate errors in the recipient. Experimental results using seven donor applications to eliminate ten errors in seven recipient applications highlight the ability of CP to transfer code across applications to eliminate out of bounds access, integer overflow, and divide by zero errors. Because CP works with binary donors with no need for source code or symbolic information, it supports a wide range of use cases. To the best of our knowledge, CP is the first system to automatically transfer code across multiple applications.", "title": "" }, { "docid": "4c175d69ae46f58dc217984192b1a0f0", "text": "Haptic interaction is an increasingly common form of interaction in virtual environment (VE) simulations. This medium introduces some new challenges. 
In this paper we study the problem arising from the difference between the sampling rate requirements of haptic interfaces and the significantly lower update rates of the physical models being manipulated. We propose a multirate simulation approach which uses a local linear approximation. The treatment includes a detailed analysis and experimental verification of the approach. The proposed method is also shown to improve the stability of the haptic interaction.", "title": "" }, { "docid": "8d3c1e649e40bf72f847a9f8ac6edf38", "text": "Many organizations are forming “virtual teams” of geographically distributed knowledge workers to collaborate on a variety of workplace tasks. But how effective are these virtual teams compared to traditional face-to-face groups? Do they create similar teamwork and is information exchanged as effectively? An exploratory study of a World Wide Web-based asynchronous computer conference system known as MeetingWebTM is presented and discussed. It was found that teams using this computer-mediated communication system (CMCS) could not outperform traditional (face-to-face) teams under otherwise comparable circumstances. Further, relational links among team members were found to be a significant contributor to the effectiveness of information exchange. Though virtual and face-to-face teams exhibit similar levels of communication effectiveness, face-to-face team members report higher levels of satisfaction. Therefore, the paper presents steps that can be taken to improve the interaction experience of virtual teams. Finally, guidelines for creating and managing virtual teams are suggested, based on the findings of this research and other authoritative sources. Subject Areas: Collaboration, Computer Conference, Computer-mediated Communication Systems (CMCS), Internet, Virtual Teams, and World Wide Web. *The authors wish to thank the Special Focus Editor and the reviewers for their thoughtful critique of the earlier versions of this paper. We also wish to acknowledge the contributions of the Northeastern University College of Business Administration and its staff, which provided the web server and the MeetingWebTM software used in these experiments.", "title": "" }, { "docid": "1042329bbc635f1b39a5d15d795be8a3", "text": "In this work we present a method to estimate a 3D face shape from a single image. Our method is based on a cascade regression framework that directly estimates face landmarks locations in 3D. We include the knowledge that a face is a 3D object into the learning pipeline and show how this information decreases localization errors while keeping the computational time low. We predict the actual positions of the landmarks even if they are occluded due to face rotation. To support the ability of our method to reliably reconstruct 3D shapes, we introduce a simple method for head pose estimation using a single image that reaches higher accuracy than the state of the art. Comparison of 3D face landmarks localization with the available state of the art further supports the feasibility of a single-step face shape estimation. The code, trained models and our 3D annotations will be made available to the research community.", "title": "" }, { "docid": "aabef3695f38fdf565700e5e374098fd", "text": "T are two broad categories of risk affecting supply chain design and management: (1) risks arising from the problems of coordinating supply and demand, and (2) risks arising from disruptions to normal activities. 
This paper is concerned with the second category of risks, which may arise from natural disasters, from strikes and economic disruptions, and from acts of purposeful agents, including terrorists. The paper provides a conceptual framework that reflects the joint activities of risk assessment and risk mitigation that are fundamental to disruption risk management in supply chains. We then consider empirical results from a rich data set covering the period 1995–2000 on accidents in the U.S. Chemical Industry. Based on these results and other literature, we discuss the implications for the design of management systems intended to cope with supply chain disruption risks.", "title": "" }, { "docid": "ee4288bcddc046ae5e9bcc330264dc4f", "text": "Emerging recognition of two fundamental errors underpinning past polices for natural resource issues heralds awareness of the need for a worldwide fundamental change in thinking and in practice of environmental management. The first error has been an implicit assumption that ecosystem responses to human use are linear, predictable and controllable. The second has been an assumption that human and natural systems can be treated independently. However, evidence that has been accumulating in diverse regions all over the world suggests that natural and social systems behave in nonlinear ways, exhibit marked thresholds in their dynamics, and that social-ecological systems act as strongly coupled, complex and evolving integrated systems. This article is a summary of a report prepared on behalf of the Environmental Advisory Council to the Swedish Government, as input to the process of the World Summit on Sustainable Development (WSSD) in Johannesburg, South Africa in 26 August 4 September 2002. We use the concept of resilience--the capacity to buffer change, learn and develop--as a framework for understanding how to sustain and enhance adaptive capacity in a complex world of rapid transformations. Two useful tools for resilience-building in social-ecological systems are structured scenarios and active adaptive management. These tools require and facilitate a social context with flexible and open institutions and multi-level governance systems that allow for learning and increase adaptive capacity without foreclosing future development options.", "title": "" }, { "docid": "6ffcdaafcda083517bbfe4fa06f5df87", "text": "This paper reports a qualitative study designed to investigate the issues of cybersafety and cyberbullying and report how students are coping with them. Through discussion with 74 students, aged from 10 to 17, in focus groups divided into three age levels, data were gathered in three schools in Victoria, Australia, where few such studies had been set. Social networking sites and synchronous chat sites were found to be the places where cyberbullying most commonly occurred, with email and texting on mobile phones also used for bullying. Grades 8 and 9 most often reported cyberbullying and also reported behaviours and internet contacts that were cybersafety risks. Most groups preferred to handle these issues themselves or with their friends rather then alert parents and teachers who may limit their technology access. 
They supported education about these issues for both adults and school students and favoured a structured mediation group of their peers to counsel and advise victims.", "title": "" }, { "docid": "9a00d5d6585cb766be0459bbdb76a612", "text": "Nature within cities will have a central role in helping address key global public health challenges associated with urbanization. However, there is almost no guidance on how much or how frequently people need to engage with nature, and what types or characteristics of nature need to be incorporated in cities for the best health outcomes. Here we use a nature dose framework to examine the associations between the duration, frequency and intensity of exposure to nature and health in an urban population. We show that people who made long visits to green spaces had lower rates of depression and high blood pressure, and those who visited more frequently had greater social cohesion. Higher levels of physical activity were linked to both duration and frequency of green space visits. A dose-response analysis for depression and high blood pressure suggest that visits to outdoor green spaces of 30 minutes or more during the course of a week could reduce the population prevalence of these illnesses by up to 7% and 9% respectively. Given that the societal costs of depression alone in Australia are estimated at AUD$12.6 billion per annum, savings to public health budgets across all health outcomes could be immense.", "title": "" }, { "docid": "0bfdad99e0762951f5cc57026cd364c9", "text": "Causal effects are defined as comparisons of potential outcomes under different treatments on a common set of units. Observed values of the potential outcomes are revealed by the assignment mechanism—a probabilistic model for the treatment each unit receives as a function of covariates and potential outcomes. Fisher made tremendous contributions to causal inference through his work on the design of randomized experiments, but the potential outcomes perspective applies to other complex experiments and nonrandomized studies as well. As noted by Kempthorne in his 1976 discussion of Savage’s Fisher lecture, Fisher never bridged his work on experimental design and his work on parametric modeling, a bridge that appears nearly automatic with an appropriate view of the potential outcomes framework, where the potential outcomes and covariates are given a Bayesian distribution to complete the model specification. Also, this framework crisply separates scientific inference for causal effects and decisions based on such inference, a distinction evident in Fisher’s discussion of tests of significance versus tests in an accept/reject framework. But Fisher never used the potential outcomes framework, originally proposed by Neyman in the context of randomized experiments, and as a result he provided generally flawed advice concerning the use of the analysis of covariance to adjust for posttreatment concomitants in randomized trials.", "title": "" }, { "docid": "6d23bd2813ea3785b8b20d24e31279d8", "text": "General-purpose GPUs have been widely utilized to accelerate parallel applications. Given a relatively complex programming model and fast architecture evolution, producing efficient GPU code is nontrivial. A variety of simulation and profiling tools have been developed to aid GPU application optimization and architecture design. However, existing tools are either limited by insufficient insights or lacking in support across different GPU architectures, runtime and driver versions. 
This paper presents CUDAAdvisor, a profiling framework to guide code optimization in modern NVIDIA GPUs. CUDAAdvisor performs various fine-grained analyses based on the profiling results from GPU kernels, such as memory-level analysis (e.g., reuse distance and memory divergence), control flow analysis (e.g., branch divergence) and code-/data-centric debugging. Unlike prior tools, CUDAAdvisor supports GPU profiling across different CUDA versions and architectures, including CUDA 8.0 and Pascal architecture. We demonstrate several case studies that derive significant insights to guide GPU code optimization for performance improvement.", "title": "" }, { "docid": "2504c87326f94f26a1209e197d351ecb", "text": "This paper summarizes recent contributions in the broad area of energy harvesting wireless communications. In particular, we provide the current state of the art for wireless networks composed of energy harvesting nodes, starting from the information-theoretic performance limits to transmission scheduling policies and resource allocation, medium access, and networking issues. The emerging related area of energy transfer for self-sustaining energy harvesting wireless networks is considered in detail covering both energy cooperation aspects and simultaneous energy and information transfer. Various potential models with energy harvesting nodes at different network scales are reviewed, as well as models for energy consumption at the nodes.", "title": "" }, { "docid": "09bb06388c9018c205c09406b360692b", "text": "Detecting anomalies in large-scale, streaming datasets has wide applicability in a myriad of domains like network intrusion detection for cyber-security, fraud detection for credit cards, system health monitoring, and fault detection in safety critical systems. Due to its wide applicability, the problem of anomaly detection has been well-studied by industry and academia alike, and many algorithms have been proposed for detecting anomalies in different problem settings. But until recently, there was no openly available, systematic dataset and/or framework using which the proposed anomaly detection algorithms could be compared and evaluated on a common ground. Numenta Anomaly Benchmark (NAB), made available by Numenta1 in 2015, addressed this gap by providing a set of openly-available, labeled data files and a common scoring system, using which different anomaly detection algorithms could be fairly evaluated and compared. In this paper, we provide an in-depth analysis of the key aspects of the NAB framework, and highlight inherent challenges therein, with the objective to provide insights about the gaps in the current framework that must be addressed so as to make it more robust and easy-to-use. Furthermore, we also provide additional evaluation of five state-of-the-art anomaly detection algorithms (including the ones proposed by Numenta) using the NAB datasets, and based on the evaluation results, we argue that the performance of these algorithms is not sufficient for practical, industry-scale applications, and must be improved upon so as to make them suitable for large-scale anomaly detection problems.", "title": "" }, { "docid": "b216a38960c537d52d94adc8d50a43df", "text": "BACKGROUND\nAutologous platelet-rich plasma has attracted attention in various medical fields recently, including orthopedic, plastic, and dental surgeries and dermatology for its wound healing ability. 
Further, it has been used clinically in mesotherapy for skin rejuvenation.\n\n\nOBJECTIVE\nIn this study, the effects of activated platelet-rich plasma (aPRP) and activated platelet-poor plasma (aPPP) have been investigated on the remodelling of the extracellular matrix, a process that requires activation of dermal fibroblasts, which is essential for rejuvenation of aged skin.\n\n\nMETHODS\nPlatelet-rich plasma (PRP) and platelet-poor plasma (PPP) were prepared using a double-spin method and then activated with thrombin and calcium chloride. The proliferative effects of aPRP and aPPP were measured by [(3)H]thymidine incorporation assay, and their effects on matrix protein synthesis were assessed by quantifying levels of procollagen type I carboxy-terminal peptide (PIP) by enzyme-linked immunosorbent assay (ELISA). The production of collagen and matrix metalloproteinases (MMP) was studied by Western blotting and reverse transcriptase-polymerase chain reaction.\n\n\nRESULTS\nPlatelet numbers in PRP increased to 9.4-fold over baseline values. aPRP and aPPP both stimulated cell proliferation, with peak proliferation occurring in cells grown in 5% aPRP. Levels of PIP were highest in cells grown in the presence of 5% aPRP. Additionally, aPRP and aPPP increased the expression of type I collagen, MMP-1 protein, and mRNA in human dermal fibroblasts.\n\n\nCONCLUSION\naPRP and aPPP promote tissue remodelling in aged skin and may be used as adjuvant treatment to lasers for skin rejuvenation in cosmetic dermatology.", "title": "" } ]
scidocsrr
15b6fd9c2de98c7ccab4ec576e555f04
Rules and Ontology Based Data Access
[ { "docid": "7ef20dc3eb5ec7aee75f41174c9fae12", "text": "As the data and ontology layers of the Semantic Web stack have achieved a certain level of maturity in standard recommendations such as RDF and OWL, the current focus lies on two related aspects. On the one hand, the definition of a suitable query language for RDF, SPARQL, is close to recommendation status within the W3C. The establishment of the rules layer on top of the existing stack on the other hand marks the next step to be taken, where languages with their roots in Logic Programming and Deductive Databases are receiving considerable attention. The purpose of this paper is threefold. First, we discuss the formal semantics of SPARQLextending recent results in several ways. Second, weprovide translations from SPARQL to Datalog with negation as failure. Third, we propose some useful and easy to implement extensions of SPARQL, based on this translation. As it turns out, the combination serves for direct implementations of SPARQL on top of existing rules engines as well as a basis for more general rules and query languages on top of RDF.", "title": "" }, { "docid": "dd7d17c7f36f74ea79832f9426dc936d", "text": "In the context of the emerging Semantic Web and the quest for a common logical framework underpinning its architecture, the relation of rule-based languages such as Answer Set Programming (ASP) and ontology languages such as OWL has attracted a lot of attention in the literature over the past years. With its roots in Deductive Databases and Datalog though, ASP shares much more commonality with another Semantic Web standard, namely the query language SPARQL. In this paper, we take the recent approval of the SPARQL1.1 standard by the World Wide Web consortium (W3C) as an opportunity to introduce this standard to the Logic Programming community by providing a translation of SPARQL1.1 into ASP. In this translation, we explain and highlight peculiarities of the new W3C standard. Along the way, we survey existing literature on foundations of SPARQL and SPARQL1.1, and also combinations of SPARQL with ontology and rules languages. Thereby, apart from providing means to implement and support SPARQL natively within Logic Programming engines and particularly ASP engines, we hope to pave the way for further research on a common logical framework for Semantic Web languages, including query languages, from an ASP point of view. 1Vienna University of Economics and Business (WU Wien), Welthandelsplatz 1, 1020 Vienna, Austria E-mail: axel.polleres@wu.ac.at 2Institute for Information Systems 184/2, Technische Universität Wien, Favoritenstrasse 9-11, 1040 Vienna, Austria. E-mail: wallner@dbai.tuwien.ac.at A journal version of this article has been published in JANCL. Please cite as: A. Polleres and J.P. Wallner. On the relation between SPARQL1.1 and Answer Set Programming. Journal of Applied Non-Classical Logics (JANCL), 23(1-2):159-212, 2013. Special issue on Equilibrium Logic and Answer Set Programming. Copyright c © 2014 by the authors TECHNICAL REPORT DBAI-TR-2013-84 2", "title": "" } ]
[ { "docid": "ad7852de8e1f80c68417c459d8a12e15", "text": "Quantum machine learning is expected to be one of the first potential general-purpose applications of near-term quantum devices. A major recent breakthrough in classical machine learning is the notion of generative adversarial training, where the gradients of a discriminator model are used to train a separate generative model. In this work and a companion paper, we extend adversarial training to the quantum domain and show how to construct generative adversarial networks using quantum circuits. Furthermore, we also show how to compute gradients – a key element in generative adversarial network training – using another quantum circuit. We give an example of a simple practical circuit ansatz to parametrize quantum machine learning models and perform a simple numerical experiment to demonstrate that quantum generative adversarial networks can be trained successfully.", "title": "" }, { "docid": "6c713b3d6c68830915f15bc5b327b301", "text": "Journal of Cutaneous and Aesthetic Surgery ¦ Volume 10 ¦ Issue 2 ¦ April‐June 2017 118 2003;9:CS1‐4. 4. Abrahamson TG, Davis DA. Angiolymphoid hyperplasia with eosinophilia responsive to pulsed dye laser. J Am Acad Dermatol 2003;49:S195‐6. 5. Kaur T, Sandhu K, Gupta S, Kanwar AJ, Kumar B. Treatment of angiolymphoid hyperplasia with eosinophilia with the carbon dioxide laser. J Dermatolog Treat 2004;15:328‐30. 6. Akdeniz N, Kösem M, Calka O, Bilgili SG, Metin A, Gelincik I. Intralesional bleomycin for angiolymphoid hyperplasia. Arch Dermatol 2007;143:841‐4.", "title": "" }, { "docid": "b18ecc94c1f42567b181c49090b03d8a", "text": "We propose a novel approach for inferring the individualized causal effects of a treatment (intervention) from observational data. Our approach conceptualizes causal inference as a multitask learning problem; we model a subject’s potential outcomes using a deep multitask network with a set of shared layers among the factual and counterfactual outcomes, and a set of outcome-specific layers. The impact of selection bias in the observational data is alleviated via a propensity-dropout regularization scheme, in which the network is thinned for every training example via a dropout probability that depends on the associated propensity score. The network is trained in alternating phases, where in each phase we use the training examples of one of the two potential outcomes (treated and control populations) to update the weights of the shared layers and the respective outcome-specific layers. Experiments conducted on data based on a real-world observational study show that our algorithm outperforms the state-of-the-art.", "title": "" }, { "docid": "06abf2a7c6d0c25cfe54422268300e58", "text": "The purpose of the present study is to provide useful data that could be applied to various types of periodontal plastic surgery by detailing the topography of the greater palatine artery (GPA), looking in particular at its depth from the palatal masticatory mucosa (PMM) and conducting a morphometric analysis of the palatal vault. Forty-three hemisectioned hard palates from embalmed Korean adult cadavers were used in this study. The morphometry of the palatal vault was analyzed, and then the specimens were decalcified and sectioned. Six parameters were measured using an image-analysis system after performing a standard calibration. 
In one specimen, the PMM was separated from the hard palate and subjected to a partial Sihler's staining technique, allowing the branching pattern of the GPA to be observed in a new method. The distances between the GPA and the gingival margin, and between the GPA and the cementoenamel junction were greatest at the maxillary second premolar. The shortest vertical distance between the GPA and the PMM decreased gradually as it proceeded anteriorly. The GPA was located deeper in the high-vault group than in the low-vault group. The premolar region should be recommended as the optimal donor site for tissue grafting, and in particular the second premolar region. The maximum size and thickness of tissue that can be harvested from the region were 9.3 mm and 4.0 mm, respectively.", "title": "" }, { "docid": "b3bcf4d5962cd2995d21cfbbe9767b9d", "text": "In computer, Cloud of Things (CoT) it is a Technique came by integrated two concepts Internet of Things(IoT) and Cloud Computing. Therefore, Cloud of Things is a currently a wide area of research and development. This paper discussed the concept of Cloud of Things (CoT) in detail and explores the challenges, open research issues, and various tools that can be used with Cloud of Things (CoT). As a result, this paper gives a knowledge and platform to explore Cloud of Things (CoT), and it gives new ideas for researchers to find the open research issues and solution to challenges.", "title": "" }, { "docid": "ae3770d75796453f83329b676fa884ba", "text": "This paper presents a real-time face detector, named Single Shot Scale-invariant Face Detector (S3FD), which performs superiorly on various scales of faces with a single deep neural network, especially for small faces. Specifically, we try to solve the common problem that anchorbased detectors deteriorate dramatically as the objects become smaller. We make contributions in the following three aspects: 1) proposing a scale-equitable face detection framework to handle different scales of faces well. We tile anchors on a wide range of layers to ensure that all scales of faces have enough features for detection. Besides, we design anchor scales based on the effective receptive field and a proposed equal proportion interval principle; 2) improving the recall rate of small faces by a scale compensation anchor matching strategy; 3) reducing the false positive rate of small faces via a max-out background label. As a consequence, our method achieves state-of-theart detection performance on all the common face detection benchmarks, including the AFW, PASCAL face, FDDB and WIDER FACE datasets, and can run at 36 FPS on a Nvidia Titan X (Pascal) for VGA-resolution images.", "title": "" }, { "docid": "a212c06f01d746779da52c6ead7e185c", "text": "Existing visual tracking methods usually localize the object with a bounding box, in which the foreground object trackers/detectors are often disturbed by the introduced background information. To handle this problem, we aim to learn a more robust object representation for visual tracking. In particular, the tracked object is represented with a graph structure (i.e., a set of non-overlapping image patches), in which the weight of each node (patch) indicates how likely it belongs to the foreground and edges are also weighed for indicating the appearance compatibility of two neighboring nodes. This graph is dynamically learnt (i.e., the nodes and edges received weights) and applied in object tracking and model updating. 
We constrain the graph learning from two aspects: i) the global low-rank structure over all nodes and ii) the local sparseness of node neighbors. During the tracking process, our method performs the following steps at each frame. First, the graph is initialized by assigning either 1 or 0 to the weights of some image patches according to the predicted bounding box. Second, the graph is optimized through designing a new ALM (Augmented Lagrange Multiplier) based algorithm. Third, the object feature representation is updated by imposing the weights of patches on the extracted image features. The object location is finally predicted by adopting the Struck tracker (Hare, Saffari, and Torr 2011). Extensive experiments show that our approach outperforms the state-of-the-art tracking methods on two standard benchmarks, i.e., OTB100 and NUS-PRO.", "title": "" }, { "docid": "1611448ce90278a329b1afe8fe598ba9", "text": "This paper is devoted to some mathematical considerations on the geometrical ideas contained in PNK, CN and, successively, in PR. Mainly, we will emphasize that these ideas give very promising suggestions for a modern point-free foundation of geometry. 1. Introduction Recently the researches in point-free geometry received an increasing interest in different areas. As an example, we can quote computability theory, lattice theory, computer science. Now, the basic ideas of point-free geometry were firstly formulated by A. N. Whitehead in PNK and CN where the extension relation between events is proposed as a primitive. The points, the lines and all the \" abstract \" geometrical entities are defined by suitable abstraction processes. As a matter of fact, as observed in Casati and Varzi 1997, the approach proposed in these books is a basis for a \"mereology\" (i.e. an investigation about the part-whole relation) rather than for a point-free geometry. Indeed , the inclusion relation is set-theoretical and not topological in nature and this generates several difficulties. As an example, the definition of point is unsatisfactory (see Section 6). So, it is not surprising that some years later the publication of PNK and CN, Whitehead in PR proposed a different approach in which the primitive notion is the one of connection relation. This idea was suggested in de Laguna 1922. The aim of this paper is not to give a precise account of geometrical ideas contained in these books but only to emphasize their mathematical potentialities. So, we translate the analysis of Whitehead into suitable first order theories and we examine these theories from a logical point of view. Also, we argue that multi-valued logic is a promising tool to reformulate the approach in PNK and CN.", "title": "" }, { "docid": "fdc01b87195272f8dec8ed32dfe8e664", "text": "Future search engines are expected to deliver pro and con arguments in response to queries on controversial topics. While argument mining is now in the focus of research, the question of how to retrieve the relevant arguments remains open. This paper proposes a radical model to assess relevance objectively at web scale: the relevance of an argument’s conclusion is decided by what other arguments reuse it as a premise. We build an argument graph for this model that we analyze with a recursive weighting scheme, adapting key ideas of PageRank. In experiments on a large ground-truth argument graph, the resulting relevance scores correlate with human average judgments. 
We outline what natural language challenges must be faced at web scale in order to stepwise bring argument relevance to web search engines.", "title": "" }, { "docid": "53a67740e444b5951bc6ab257236996e", "text": "Although human perception appears to be automatic and unconscious, complex sensory mechanisms exist that form the preattentive component of understanding and lead to awareness. Considerable research has been carried out into these preattentive mechanisms and computational models have been developed for similar problems in the fields of computer vision and speech analysis. The focus here is to explore aural and visual information in video streams for modeling attention and detecting salient events. The separate aural and visual modules may convey explicit, complementary or mutually exclusive information around the detected audiovisual events. Based on recent studies on perceptual and computational attention modeling, we formulate measures of attention using features of saliency for the audiovisual stream. Audio saliency is captured by signal modulations and related multifrequency band features, extracted through nonlinear operators and energy tracking. Visual saliency is measured by means of a spatiotemporal attention model driven by various feature cues (intensity, color, motion). Features from both modules mapped to one-dimensional, time-varying saliency curves, from which statistics of salient segments can be extracted and important audio or visual events can be detected through adaptive, threshold-based mechanisms. Audio and video curves are integrated in a single attention curve, where events may be enhanced, suppressed or vanished. Salient events from the audiovisual curve are detected through geometrical features such as local extrema, sharp transitions and level sets. The potential of inter-module fusion and audiovisual event detection is demonstrated in applications such as video key-frame selection, video skimming and video annotation.", "title": "" }, { "docid": "f0813fe6b6324e1056dc19a5259d9538", "text": "Plant disease detection is emerging field in India as agriculture is important sector in Economy and Social life. Earlier unscientific methods were in existence. Gradually with technical and scientific advancement, more reliable methods through lowest turnaround time are developed and proposed for early detection of plant disease. Such techniques are widely used and proved beneficial to farmers as detection of plant disease is possible with minimal time span and corrective actions are carried out at appropriate time. In this paper, we studied and evaluated existing techniques for detection of plant diseases to get clear outlook about the techniques and methodologies followed. The detection of plant disease is significantly based on type of family plants and same is carried out in two phases as segmentation and classification. Here, we have discussed existing segmentation method along with classifiers for detection of diseases in Monocot and Dicot family plant.", "title": "" }, { "docid": "4c607b142149504c2edad475d5613b86", "text": "This study uses a metatriangulation approach to explore the relationships between power and information technology impacts, development or deployment, and management or use in a sample Jasperson et al./Power & IT Research 398 MIS Quarterly Vol. 26 No. 4/December 2002 of 82 articles from 12 management and MIS journals published between 1980 and 1999. 
We explore the multiple paradigms underlying this research by applying two sets of lenses to examine the major findings from our sample. The technological imperative, organizational imperative , and emergent perspectives (Markus and Robey 1988) are used as one set of lenses to better understand researchers' views regarding the causal structure between IT and organizational power. A second set of lenses, which includes the rational, pluralist, interpretive, and radical perspectives (Bradshaw-Camball and Murray 1991), is used to focus on researchers' views of the role of power and different IT outcomes. We apply each lens separately to describe patterns emerging from the previous power and IT studies. In addition, we discuss the similarities and differences that occur when the two sets of lenses are simultaneously applied. We draw from this discussion to develop metaconjectures, (i.e., propositions that can be interpreted from multiple perspectives), and to suggest guidelines for studying power in future research.", "title": "" }, { "docid": "497678769826087f81d2a7a00b0bbb79", "text": "tRNAScan-SE is a tRNA detection program that is widely used for tRNA annotation; however, the false positive rate of tRNAScan-SE is unacceptable for large sequences. Here, we used a machine learning method to try to improve the tRNAScan-SE results. A new predictor, tRNA-Predict, was designed. We obtained real and pseudo-tRNA sequences as training data sets using tRNAScan-SE and constructed three different tRNA feature sets. We then set up an ensemble classifier, LibMutil, to predict tRNAs from the training data. The positive data set of 623 tRNA sequences was obtained from tRNAdb 2009 and the negative data set was the false positive tRNAs predicted by tRNAscan-SE. Our in silico experiments revealed a prediction accuracy rate of 95.1 % for tRNA-Predict using 10-fold cross-validation. tRNA-Predict was developed to distinguish functional tRNAs from pseudo-tRNAs rather than to predict tRNAs from a genome-wide scan. However, tRNA-Predict can work with the output of tRNAscan-SE, which is a genome-wide scanning method, to improve the tRNAscan-SE annotation results. The tRNA-Predict web server is accessible at http://datamining.xmu.edu.cn/∼gjs/tRNA-Predict.", "title": "" }, { "docid": "cdf78bab8d93eda7ccbb41674d24b1a2", "text": "OBJECTIVE\nThe U.S. Food and Drug Administration and Institute of Medicine are currently investigating front-of-package (FOP) food labelling systems to provide science-based guidance to the food industry. The present paper reviews the literature on FOP labelling and supermarket shelf-labelling systems published or under review by February 2011 to inform current investigations and identify areas of future research.\n\n\nDESIGN\nA structured search was undertaken of research studies on consumer use, understanding of, preference for, perception of and behaviours relating to FOP/shelf labelling published between January 2004 and February 2011.\n\n\nRESULTS\nTwenty-eight studies from a structured search met inclusion criteria. 
Reviewed studies examined consumer preferences, understanding and use of different labelling systems as well as label impact on purchasing patterns and industry product reformulation.\n\n\nCONCLUSIONS\nThe findings indicate that the Multiple Traffic Light system has most consistently helped consumers identify healthier products; however, additional research on different labelling systems' abilities to influence consumer behaviour is needed.", "title": "" }, { "docid": "81c2fca06af30c27e74267dbccd84080", "text": "Instability and variability of Deep Reinforcement Learning (DRL) algorithms tend to adversely affect their performance. Averaged-DQN is a simple extension to the DQN algorithm, based on averaging previously learned Q-values estimates, which leads to a more stable training procedure and improved performance by reducing approximation error variance in the target values. To understand the effect of the algorithm, we examine the source of value function estimation errors and provide an analytical comparison within a simplified model. We further present experiments on the Arcade Learning Environment benchmark that demonstrate significantly improved stability and performance due to the proposed extension.", "title": "" }, { "docid": "800befb527094bc6169809c6765d5d15", "text": "The problem of scheduling a weighted directed acyclic graph (DAG) to a set of homogeneous processors to minimize the completion time has been extensively studied. The NPcompleteness of the problem has instigated researchers to propose a myriad of heuristic algorithms. While these algorithms are individually reported to be efficient, it is not clear how effective they are and how well they compare against each other. A comprehensive performance evaluation and comparison of these algorithms entails addressing a number of difficult issues. One of the issues is that a large number of scheduling algorithms are based upon radically different assumptions, making their comparison on a unified basis a rather intricate task. Another issue is that there is no standard set of benchmarks that can be used to evaluate and compare these algorithms. Furthermore, most algorithms are evaluated using small problem sizes, and it is not clear how their performance scales with the problem size. In this paper, we first provide a taxonomy for classifying various algorithms into different categories according to their assumptions and functionalities. We then propose a set of benchmarks which are of diverse structures without being biased towards a particular scheduling technique and still allow variations in important parameters. We have evaluated 15 scheduling algorithms, and compared them using the proposed benchmarks. Based upon the design philosophies and principles behind these algorithms, we interpret the results and discuss why some algorithms perform better than the others.", "title": "" }, { "docid": "354500ae7e1ad1c6fd09438b26e70cb0", "text": "Dietary exposures can have consequences for health years or decades later and this raises questions about the mechanisms through which such exposures are 'remembered' and how they result in altered disease risk. There is growing evidence that epigenetic mechanisms may mediate the effects of nutrition and may be causal for the development of common complex (or chronic) diseases. 
Epigenetics encompasses changes to marks on the genome (and associated cellular machinery) that are copied from one cell generation to the next, which may alter gene expression, but which do not involve changes in the primary DNA sequence. These include three distinct, but closely inter-acting, mechanisms including DNA methylation, histone modifications and non-coding microRNAs (miRNA) which, together, are responsible for regulating gene expression not only during cellular differentiation in embryonic and foetal development but also throughout the life-course. This review summarizes the growing evidence that numerous dietary factors, including micronutrients and non-nutrient dietary components such as genistein and polyphenols, can modify epigenetic marks. In some cases, for example, effects of altered dietary supply of methyl donors on DNA methylation, there are plausible explanations for the observed epigenetic changes, but to a large extent, the mechanisms responsible for diet-epigenome-health relationships remain to be discovered. In addition, relatively little is known about which epigenomic marks are most labile in response to dietary exposures. Given the plasticity of epigenetic marks and their responsiveness to dietary factors, there is potential for the development of epigenetic marks as biomarkers of health for use in intervention studies.", "title": "" }, { "docid": "f59a7b518f5941cd42086dc2fe58fcea", "text": "This paper contributes a novel algorithm for effective and computationally efficient multilabel classification in domains with large label sets L. The HOMER algorithm constructs a Hierarchy Of Multilabel classifiERs, each one dealing with a much smaller set of labels compared to L and a more balanced example distribution. This leads to improved predictive performance along with linear training and logarithmic testing complexities with respect to |L|. Label distribution from parent to children nodes is achieved via a new balanced clustering algorithm, called balanced k means.", "title": "" }, { "docid": "2894570c1e8770874361943e17b13def", "text": "OBJECTIVES:Previous studies have suggested an association between cytomegalovirus (CMV) infection and steroid-refractory inflammatory bowel disease. In this study, the use of CMV DNA load during acute flare-ups of ulcerative colitis (UC) to predict resistance to immunosuppressive therapy was evaluated in intestinal tissue.METHODS:Forty-two consecutive patients (sex ratio M/F: 0.9, mean age: 43.6 years) hospitalized for moderate to severe UC and treated with IV steroids were included prospectively. A colonoscopy was performed for each patient at inclusion; colonic biopsy samples of the pathological tissue, and if possible, of the healthy mucosa, were tested for histological analysis and determination of CMV DNA load by real-time polymerase chain reaction assay. Patients were treated as recommended by the current guidelines.RESULTS:Sixteen patients were found positive for CMV DNA in inflamed intestinal tissue but negative in endoscopically healthy tissue; all of these patients were positive for anti-CMV IgG, three exhibited CMV DNA in blood, and none was positive for intestinal CMV antigen by immunohistochemistry detection. In the 26 remaining patients, no stigmata of recent CMV infection were recorded by any technique. By multivariate analysis, the only factor associated with CMV DNA in inflammatory tissue was the resistance to steroids or to three lines of treatment (risk ratio: 4.7; 95% confidence interval: 1.2–22.5). 
A CMV DNA load above 250 copies/mg in tissue was predictive of resistance to three successive regimens (likelihood ratio+=4.33; area under the receiver-operating characteristic curve=0.85). Eight UC patients with CMV DNA in inflamed tissue and therapeutic failure received ganciclovir; a clinical remission was observed in seven cases, with a sustained response in five of them.CONCLUSIONS:The CMV DNA load determined in inflamed intestinal tissue predicts resistance to steroid treatment and to three drug regimens in UC. Initiation of an early antiviral treatment in these patients might delay the occurrence of resistance to current treatments.", "title": "" }, { "docid": "60d21d395c472eb36bdfd014c53d918a", "text": "We introduce a fully differentiable approximation to higher-order inference for coreference resolution. Our approach uses the antecedent distribution from a span-ranking architecture as an attention mechanism to iteratively refine span representations. This enables the model to softly consider multiple hops in the predicted clusters. To alleviate the computational cost of this iterative process, we introduce a coarse-to-fine approach that incorporates a less accurate but more efficient bilinear factor, enabling more aggressive pruning without hurting accuracy. Compared to the existing state-of-the-art span-ranking approach, our model significantly improves accuracy on the English OntoNotes benchmark, while being far more computationally efficient.", "title": "" } ]
scidocsrr
ded77790e98f59b9b4512517625e0edf
Evolving Intelligent Mario Controller by Reinforcement Learning
[ { "docid": "47c004e7bc150685dafefcbb79f25657", "text": "REALM is a rule-based evolutionary computation agent for playing a modified version of Super Mario Bros. according to the rules stipulated in the Mario AI Competition held in the 2010 IEEE Symposium on Computational Intelligence and Games. Two alternate representations for the REALM rule sets are reported here, in both hand-coded and learned versions. Results indicate that the second version, with an abstracted action set, tends to perform better overall, but the first version shows a steeper learning curve. In both cases, learning quickly surpasses the hand-coded rule sets.", "title": "" } ]
[ { "docid": "81f504c4e378d0952231565d3ba4c555", "text": "The alignment problem—establishing links between corresponding phrases in two related sentences—is as important in natural language inference (NLI) as it is in machine translation (MT). But the tools and techniques of MT alignment do not readily transfer to NLI, where one cannot assume semantic equivalence, and for which large volumes of bitext are lacking. We present a new NLI aligner, the MANLI system, designed to address these challenges. It uses a phrase-based alignment representation, exploits external lexical resources, and capitalizes on a new set of supervised training data. We compare the performance of MANLI to existing NLI and MT aligners on an NLI alignment task over the well-known Recognizing Textual Entailment data. We show that MANLI significantly outperforms existing aligners, achieving gains of 6.2% in F1 over a representative NLI aligner and 10.5% over GIZA++.", "title": "" }, { "docid": "f720554ba9cff8bec781f4ad2ec538aa", "text": "English. Hate speech is prevalent in social media platforms. Systems that can automatically detect offensive content are of great value to assist human curators with removal of hateful language. In this paper, we present machine learning models developed at UW Tacoma for detection of misogyny, i.e. hate speech against women, in English tweets, and the results obtained with these models in the shared task for Automatic Misogyny Identification (AMI) at EVALITA2018. Italiano. Commenti offensivi nei confronti di persone con diversa orientazione sessuale o provenienza sociale sono oggigiorno prevalenti nelle piattaforme di social media. A tale fine, sistemi automatici in grado di rilevare contenuti offensivi nei confronti di alcuni gruppi sociali sono importanti per facilitare il lavoro dei moderatori di queste piattaforme a rimuovere ogni commento offensivo usato nei social media. In questo articolo, vi presentiamo sia dei modelli di apprendimento automatico sviluppati all’Università di Washington in Tacoma per il rilevamento della misoginia, ovvero discorsi offensivi usati nei tweet in lingua inglese contro le donne, sia i risultati ottenuti con questi modelli nel processo per l’identificazione automatica della misoginia in EVALITA2018.", "title": "" }, { "docid": "70becc434885af8f59ad39a3cedc8b6d", "text": "The trajectory of the heel and toe during the swing phase of human gait were analyzed on young adults. The magnitude and variability of minimum toe clearance and heel-contact velocity were documented on 10 repeat walking trials on 11 subjects. The energetics that controlled step length resulted from a separate study of 55 walking trials conducted on subjects walking at slow, natural, and fast cadences. A sensitivity analysis of the toe clearance and heel-contact velocity measures revealed the individual changes at each joint in the link-segment chain that could be responsible for changes in those measures. Toe clearance was very small (1.29 cm) and had low variability (about 4 mm). Heel-contact velocity was negligible vertically and small (0.87 m/s) horizontally. Six joints in the link-segment chain could, with very small changes (+/- 0.86 degrees - +/- 3.3 degrees), independently account for toe clearance variability. Only one muscle group in the chain (swing-phase hamstring muscles) could be responsible for altering the heel-contact velocity prior to heel contact. 
Four mechanical power phases in gait (ankle push-off, hip pull-off, knee extensor eccentric power at push-off, and knee flexor eccentric power prior to heel contact) could alter step length and cadence. These analyses demonstrate that the safe trajectory of the foot during swing is a precise endpoint control task that is under the multisegment motor control of both the stance and swing limbs.", "title": "" }, { "docid": "2915218bc86d049d6b8e3a844a9768fd", "text": "Power and energy systems are on the verge of a profound change where Smart Grid solutions will enhance their efficiency and flexibility. Advanced ICT and control systems are key elements of the Smart Grid to enable efficient integration of a high amount of renewable energy resources which in turn are seen as key elements of the future energy system. The corresponding distribution grids have to become more flexible and adaptable as the current ones in order to cope with the upcoming high share of energy from distributed renewable sources. The complexity of Smart Grids requires to consider and imply many components when a new application is designed. However, a holistic ICT-based approach for modelling, designing and validating Smart Grid developments is missing today. The goal of this paper therefore is to discuss an advanced design approach and the corresponding information model, covering system, application, control and communication aspects of Smart Grids.", "title": "" }, { "docid": "717ea3390ffe3f3132d4e2230e645ee5", "text": "Much of what is known about physiological systems has been learned using linear system theory. However, many biomedical signals are apparently random or aperiodic in time. Traditionally, the randomness in biological signals has been ascribed to noise or interactions between very large numbers of constituent components. One of the most important mathematical discoveries of the past few decades is that random behavior can arise in deterministic nonlinear systems with just a few degrees of freedom. This discovery gives new hope to providing simple mathematical models for analyzing, and ultimately controlling, physiological systems. The purpose of this chapter is to provide a brief pedagogic survey of the main techniques used in nonlinear time series analysis and to provide a MATLAB tool box for their implementation. Mathematical reviews of techniques in nonlinear modeling and forecasting can be found in Refs. 1-5. Biomedical signals that have been analyzed using these techniques include heart rate [6-8], nerve activity [9], renal flow [10], arterial pressure [11], electroencephalogram [12], and respiratory waveforms [13]. Section 2 provides a brief overview of dynamical systems theory including phase space portraits, Poincare surfaces of section, attractors, chaos, Lyapunov exponents, and fractal dimensions. The forced Duffing-Van der Pol oscillator (a ubiquitous model in engineering problems) is investigated as an illustrative example. Section 3 outlines the theoretical tools for time series analysis using dynamical systems theory. Reliability checks based on forecasting and surrogate data are also described. The time series methods are illustrated using data from the time evolution of one of the dynamical variables of the forced Duffing-Van der Pol oscillator. 
Section 4 concludes with a discussion of possible future directions for applications of nonlinear time series analysis in biomedical processes.", "title": "" }, { "docid": "bff6e87727db20562091a6c8c08f3667", "text": "Many trust-aware recommender systems have explored the value of explicit trust, which is specified by users with binary values and simply treated as a concept with a single aspect. However, in social science, trust is known as a complex term with multiple facets, which have not been well exploited in prior recommender systems. In this paper, we attempt to address this issue by proposing a (dis)trust framework with considerations of both interpersonal and impersonal aspects of trust and distrust. Specifically, four interpersonal aspects (benevolence, competence, integrity and predictability) are computationally modelled based on users’ historic ratings, while impersonal aspects are formulated from the perspective of user connections in trust networks. Two logistic regression models are developed and trained by accommodating these factors, and then applied to predict continuous values of users’ trust and distrust, respectively. Trust information is further refined by corresponding predicted distrust information. The experimental results on real-world data sets demonstrate the effectiveness of our proposed model in further improving the performance of existing state-of-the-art trust-aware recommendation approaches.", "title": "" }, { "docid": "acecf40720fd293972555918878b805e", "text": "This article outlines a number of important research issues in human-computer interaction in the e-commerce environment. It highlights some of the challenges faced by users in browsing Web sites and conducting searches for information, and suggests several areas of research for promoting ease of navigation and search. Also, it discusses the importance of trust in the online environment, describing some of the antecedents and consequences of trust, and provides guidelines for integrating trust into Web site design. The issues discussed in this article are presented under three broad categories of human-computer interaction – Web usability, interface design, and trust – and are intended to highlight what we believe are worthwhile areas for future research in e-commerce.", "title": "" }, { "docid": "54c6038cf2cfe9856c15fd6514e6ad9d", "text": "In this paper we examine an alternative interface for phonetic search, namely query-by-example, that avoids OOV issues associated with both standard word-based and phonetic search methods. We develop three methods that compare query lattices derived from example audio against a standard ngrambased phonetic index and we analyze factors affecting the performance of these systems. We show that the best systems under this paradigm are able to achieve 77% precision when retrieving utterances from conversational telephone speech and returning 10 results from a single query (performance that is better than a similar dictionary-based approach) suggesting significant utility for applications requiring high precision. We also show that these systems can be further improved using relevance feedback: By incorporating four additional queries the precision of the best system can be improved by 13.7% relative. 
Our systems perform well despite high phone recognition error rates (> 40%) and make use of no pronunciation or letter-to-sound resources.", "title": "" }, { "docid": "6570f9b4f8db85f40a99fb1911aa4967", "text": "Honey bees have played a major role in the history and development of humankind, in particular for nutrition and agriculture. The most important role of the western honey bee (Apis mellifera) is that of pollination. A large amount of crops consumed throughout the world today are pollinated by the activity of the honey bee. It is estimated that the total value of these crops stands at 155 billion euro annually. The goal of the work outlined in this paper was to use wireless sensor network technology to monitor a colony within the beehive with the aim of collecting image and audio data. These data allows the beekeeper to obtain a much more comprehensive view of the in-hive conditions, an indication of flight direction, as well as monitoring the hive outside of the traditional beekeeping times, i.e. during the night, poor weather, and winter months. This paper outlines the design of a fully autonomous beehive monitoring system which provided image and sound monitoring of the internal chambers of the hive, as well as a warning system for emergency events such as possible piping, dramatically increased hive activity, or physical damage to the hive. The final design included three wireless nodes: a digital infrared camera with processing capabilities for collecting imagery of the hive interior; an external thermal imaging camera node for monitoring the colony status and activity, and an accelerometer and a microphone connected to an off the shelf microcontroller node for processing. The system allows complex analysis and sensor fusion. Some scenarios based on sound processing, image collection, and accelerometers are presented. Power management was implemented which allowed the system to achieve energy neutrality in an outdoor deployment with a 525 × 345 mm solar panel.", "title": "" }, { "docid": "2ff15076533d1065209e0e62776eaa69", "text": "In less than a decade, Cubesats have evolved from purely educational tools to a standard platform for technology demonstration and scientific instrumentation. The use of COTS (Commercial-Off-The-Shelf) components and the ongoing miniaturization of several technologies have already led to scattered instances of missions with promising scientific value. Furthermore, advantages in terms of development cost and development time with respect to larger satellites, as well as the possibility of launching several dozens of Cubesats with a single rocket launch, have brought forth the potential for radically new mission architectures consisting of very large constellations or clusters of Cubesats. These architectures promise to combine the temporal resolution of GEO missions with the spatial resolution of LEO missions, thus breaking a traditional tradeoff in Earth observation mission design. This paper assesses the current capabilities of Cubesats with respect to potential employment in Earth observation missions. A thorough review of Cubesat bus technology capabilities is performed, identifying potential limitations and their implications on 17 different Earth observation payload technologies. These results are matched to an exhaustive review of scientific requirements in the field of Earth observation, assessing the possibilities of Cubesats to cope with the requirements set for each one of 21 measurement categories. 
Based on this review, several Earth observation measurements are identified that can potentially be compatible with the current state-of-the-art of Cubesat technology although some of them have actually never been addressed by any Cubesat mission. Simultaneously, other measurements are identified which are unlikely to be performed by Cubesats in the next few years due to insuperable constraints. Ultimately, this paper is intended to supply a box of ideas for universities to design future Cubesat missions with high", "title": "" }, { "docid": "1fa056e87c10811b38277d161c81c2ac", "text": "In this study, six kinds of the drivetrain systems of electric motor drives for EVs are discussed. Furthermore, the requirements of EVs on electric motor drives are presented. The comparative investigation on the efficiency, weight, cost, cooling, maximum speed, and fault-tolerance, safety, and reliability is carried out for switched reluctance motor, induction motor, permanent magnet blushless DC motor, and brushed DC motor drives, in order to find most appropriate electric motor drives for electric vehicle applications. The study shows that switched reluctance motor drives are the prior choice for electric vehicles.", "title": "" }, { "docid": "b24e5a512306f24568f3e21af08a1faf", "text": "We propose an object detection method that improves the accuracy of the conventional SSD (Single Shot Multibox Detector), which is one of the top object detection algorithms in both aspects of accuracy and speed. The performance of a deep network is known to be improved as the number of feature maps increases. However, it is difficult to improve the performance by simply raising the number of feature maps. In this paper, we propose and analyze how to use feature maps effectively to improve the performance of the conventional SSD. The enhanced performance was obtained by changing the structure close to the classifier network, rather than growing layers close to the input data, e.g., by replacing VGGNet with ResNet. The proposed network is suitable for sharing the weights in the classifier networks, by which property, the training can be faster with better generalization power. For the Pascal VOC 2007 test set trained with VOC 2007 and VOC 2012 training sets, the proposed network with the input size of 300×300 achieved 78.5% mAP (mean average precision) at the speed of 35.0 FPS (frame per second), while the network with a 512×512 sized input achieved 80.8% mAP at 16.6 FPS using Nvidia Titan X GPU. The proposed network shows state-of-the-art mAP, which is better than those of the conventional SSD, YOLO, Faster-RCNN and RFCN. Also, it is faster than Faster-RCNN and RFCN.", "title": "" }, { "docid": "f80dedfb0d0f7e5ba068e582517ac6f8", "text": "We present a physically-based approach to grasping and manipulation of virtual objects that produces visually realistic results, addresses the problem of visual interpenetration of hand and object models, and performs force rendering for force-feedback gloves in a single framework. Our approach couples tracked hand configuration to a simulation-controlled articulated hand model using a system of linear and torsional spring-dampers. We discuss an implementation of our approach that uses a widely-available simulation tool for collision detection and response. We illustrate the resulting behavior of the virtual hand model and of grasped objects, and we show that the simulation rate is sufficient for control of current force-feedback glove designs. 
We also present a prototype of a system we are developing to support natural whole-hand interactions in a desktop-sized workspace.", "title": "" }, { "docid": "b55d5967005d3b59063ffc4fd7eeb59a", "text": "In this work we establish the first linear convergence result for the stochastic heavy ball method. The method performs SGD steps with a fixed stepsize, amended by a heavy ball momentum term. In the analysis, we focus on minimizing the expected loss and not on finite-sum minimization, which is typically a much harder problem. While in the analysis we constrain ourselves to quadratic loss, the overall objective is not necessarily strongly convex.", "title": "" }, { "docid": "805ff3489d9bc145a0a8b91ce58ce3f9", "text": "The present experiment was designed to test the theory that psychological procedures achieve changes in behavior by altering the level and strength of self-efficacy. In this formulation, perceived self-efficacy. In this formulation, perceived self-efficacy influences level of performance by enhancing intensity and persistence of effort. Adult phobics were administered treatments based upon either performance mastery experiences, vicarious experiences., or they received no treatment. Their efficacy expectations and approach behavior toward threats differing on a similarity dimension were measured before and after treatment. In accord with our prediction, the mastery-based treatment produced higher, stronger, and more generalized expectations of personal efficacy than did the treatment relying solely upon vicarious experiences. Results of a microanalysis further confirm the hypothesized relationship between self-efficacy and behavioral change. Self-efficacy was a uniformly accurate predictor of performance on tasks of varying difficulty with different threats regardless of whether the changes in self-efficacy were produced through enactive mastery or by vicarious experience alone.", "title": "" }, { "docid": "fd5a586adf75dfc33171e077ecd039bb", "text": "An overview is presented of the medical image processing literature on mutual-information-based registration. The aim of the survey is threefold: an introduction for those new to the field, an overview for those working in the field, and a reference for those searching for literature on a specific application. Methods are classified according to the different aspects of mutual-information-based registration. The main division is in aspects of the methodology and of the application. The part on methodology describes choices made on facets such as preprocessing of images, gray value interpolation, optimization, adaptations to the mutual information measure, and different types of geometrical transformations. The part on applications is a reference of the literature available on different modalities, on interpatient registration and on different anatomical objects. Comparison studies including mutual information are also considered. The paper starts with a description of entropy and mutual information and it closes with a discussion on past achievements and some future challenges.", "title": "" }, { "docid": "474c4531ff58348d001320b824d626d6", "text": "As it becomes ever more pervasively engaged in data driven commerce, a modern enterprise becomes increasingly dependent upon reliable and high speed transaction services. At the same time it aspires to capitalize upon large inflows of information to draw timely business insights and improve business results. 
These two imperatives are frequently in conflict because of the widely divergent strategies that must be pursued: the need to bolster on-line transactional processing generally drives a business towards a small cluster of high-end servers running a mature, ACID compliant, SQL relational database, while high throughput analytics on massive and growing volumes of data favor the selection of very large clusters running non-traditional (NoSQL/NewSQL) databases that employ softer consistency protocols for performance and availability. This paper describes an approach in which the two imperatives are addressed by blending the two types (scale-up and scale-out) of data processing. It breaks down data growth that enterprises experience into three classes-Chronological, Horizontal, and Vertical, and picks out different approaches for blending SQL and NewSQL platforms for each class. To simplify application logic that must comprehend both types of data platforms, the paper describes two new capabilities: (a) a data integrator to quickly sift out updates that happen in an RDBMS and funnel them into a NewSQL database, and (b) extensions to the Hibernate-OGM framework that reduce the programming sophistication required for integrating HBase and Hive back ends with application logic designed for relational front ends. Finally the paper details several instances in which these approaches have been applied in real-world, at a number of software vendors with whom the authors have collaborated on design, implementation and deployment of blended solutions.", "title": "" }, { "docid": "eb2663865d0d7312641e0748978b238c", "text": "Recurrent Neural Networks (RNNs) are powerful models for sequential data that have the potential to learn long-term dependencies. However, they are computationally expensive to train and difficult to parallelize. Recent work has shown that normalizing intermediate representations of neural networks can significantly improve convergence rates in feed-forward neural networks [1]. In particular, batch normalization, which uses mini-batch statistics to standardize features, was shown to significantly reduce training time. In this paper, we investigate how batch normalization can be applied to RNNs. We show for both a speech recognition task and language modeling that the way we apply batch normalization leads to a faster convergence of the training criterion but doesn't seem to improve the generalization performance.", "title": "" }, { "docid": "3171893b6863e777141160c65f1b9616", "text": "This paper presents a method for face recognition across variations in pose, ranging from frontal to profile views, and across a wide range of illuminations, including cast shadows and specular reflections. To account for these variations, the algorithm simulates the process of image formation in 3D space, using computer graphics, and it estimates 3D shape and texture of faces from single images. The estimate is achieved by fitting a statistical, morphable model of 3D faces to images. The model is learned from a set of textured 3D scans of heads. We describe the construction of the morphable model, an algorithm to fit the model to images, and a framework for face identification. In this framework, faces are represented by model parameters for 3D shape and texture. 
We present results obtained with 4,488 images from the publicly available CMU-PIE database and 1,940 images from the FERET database.", "title": "" }, { "docid": "a8c7f588e4eb45e4a9be13c09abbf3eb", "text": "In this paper, a novel planar bandpass filter is proposed, designed, and implemented with a hybrid structure of substrate integrated waveguide (SIW) and coplanar waveguide (CPW), which has the advantages of good passband and stopband performance inherited from SIW and miniaturized size accompanying with the CPW. Additional design flexibility is introduced by the hybrid structure for efficiently controlling the mixed electric and magnetic coupling, and then planar bandpass filters with controllable transmission zeros and quasi-elliptic response can be achieved. Several prototypes with single and dual SIW cavities are fabricated. The measured results verified the performance of the proposed planar bandpass filters, such as low passband insertion loss, sharp roll-off characteristics at transition band, etc.", "title": "" } ]
scidocsrr
a160aeded508c7c8df01bc8aa16d837d
Security analysis of the Internet of Things: A systematic literature review
[ { "docid": "c381fdacde35fce7c8b869d512364a4f", "text": "IoT (Internet of Things) diversifies the future Internet, and has drawn much attention. As more and more gadgets (i.e. Things) connected to the Internet, the huge amount of data exchanged has reached an unprecedented level. As sensitive and private information exchanged between things, privacy becomes a major concern. Among many important issues, scalability, transparency, and reliability are considered as new challenges that differentiate IoT from the conventional Internet. In this paper, we enumerate the IoT communication scenarios and investigate the threats to the large-scale, unreliable, pervasive computing environment. To cope with these new challenges, the conventional security architecture will be revisited. In particular, various authentication schemes will be evaluated to ensure the confidentiality and integrity of the exchanged data.", "title": "" }, { "docid": "0d81a7af3c94e054841e12d4364b448c", "text": "Internet of Things (IoT) is characterized by heterogeneous technologies, which concur to the provisioning of innovative services in various application domains. In this scenario, the satisfaction of security and privacy requirements plays a fundamental role. Such requirements include data confidentiality and authentication, access control within the IoT network, privacy and trust among users and things, and the enforcement of security and privacy policies. Traditional security countermeasures cannot be directly applied to IoT technologies due to the different standards and communication stacks involved. Moreover, the high number of interconnected devices arises scalability issues; therefore a flexible infrastructure is needed able to deal with security threats in such a dynamic environment. In this survey we present the main research challenges and the existing solutions in the field of IoT security, identifying open issues, and suggesting some hints for future research. During the last decade, Internet of Things (IoT) approached our lives silently and gradually, thanks to the availability of wireless communication systems (e.g., RFID, WiFi, 4G, IEEE 802.15.x), which have been increasingly employed as technology driver for crucial smart monitoring and control applications [1–3]. Nowadays, the concept of IoT is many-folded, it embraces many different technologies, services, and standards and it is widely perceived as the angular stone of the ICT market in the next ten years, at least [4–6]. From a logical viewpoint, an IoT system can be depicted as a collection of smart devices that interact on a collabo-rative basis to fulfill a common goal. At the technological floor, IoT deployments may adopt different processing and communication architectures, technologies, and design methodologies, based on their target. For instance, the same IoT system could leverage the capabilities of a wireless sensor network (WSN) that collects the environmental information in a given area and a set of smartphones on top of which monitoring applications run. In the middle, a standardized or proprietary middle-ware could be employed to ease the access to virtualized resources and services. The middleware, in turn, might be implemented using cloud technologies, centralized overlays , or peer to peer systems [7]. 
Of course, this high level of heterogeneity, coupled with the wide scale of IoT systems, is expected to magnify the security threats of the current Internet, which is increasingly used to let humans, machines, and robots interact in any combination. In more detail, traditional security countermeasures and privacy enforcement cannot be directly applied to IoT technologies due to …", "title": "" }, { "docid": "62218093e4d3bf81b23512043fc7a013", "text": "The Internet of Things (IoT) refers to every object that is connected over a network with the ability to transfer data. Users perceive this interaction and connection as useful in their daily life. However, any improperly designed and configured technology will be exposed to security threats. Therefore, an IoT ecosystem should be designed with security embedded in each of its layers. This paper discusses the security threats to IoT and then proposes an IoT Security Framework to mitigate them. The IoT Security Framework is then used to develop a Secure IoT Sensor to Cloud Ecosystem.", "title": "" } ]
[ { "docid": "5475df204bca627e73b077594af29d47", "text": "Multilayered artificial neural networks are becoming a pervasive tool in a host of application fields. At the heart of this deep learning revolution are familiar concepts from applied and computational mathematics; notably, in calculus, approximation theory, optimization and linear algebra. This article provides a very brief introduction to the basic ideas that underlie deep learning from an applied mathematics perspective. Our target audience includes postgraduate and final year undergraduate students in mathematics who are keen to learn about the area. The article may also be useful for instructors in mathematics who wish to enliven their classes with references to the application of deep learning techniques. We focus on three fundamental questions: what is a deep neural network? how is a network trained? what is the stochastic gradient method? We illustrate the ideas with a short MATLAB code that sets up and trains a network. We also show the use of state-of-the art software on a large scale image classification problem. We finish with references to the current literature.", "title": "" }, { "docid": "96e56dcf3d38c8282b5fc5c8ae747a66", "text": "The solid-state transformer (SST) was conceived as a replacement for the conventional power transformer, with both lower volume and weight. The smart transformer (ST) is an SST that provides ancillary services to the distribution and transmission grids to optimize their performance. Hence, the focus shifts from hardware advantages to functionalities. One of the most desired functionalities is the dc connectivity to enable a hybrid distribution system. For this reason, the ST architecture shall be composed of at least two power stages. The standard design procedure for this kind of system is to design each power stage for the maximum load. However, this design approach might limit additional services, like the reactive power compensation on the medium voltage (MV) side, and it does not consider the load regulation capability of the ST on the low voltage (LV) side. If the SST is tailored to the services that it shall provide, different stages will have different designs, so that the ST is no longer a mere application of the SST but an entirely new subject.", "title": "" }, { "docid": "f90efcef80233888fb8c218d1e5365a6", "text": "BACKGROUND\nMany low- and middle-income countries are undergoing a nutrition transition associated with rapid social and economic transitions. We explore the coexistence of over and under- nutrition at the neighborhood and household level, in an urban poor setting in Nairobi, Kenya.\n\n\nMETHODS\nData were collected in 2010 on a cohort of children aged under five years born between 2006 and 2010. Anthropometric measurements of the children and their mothers were taken. Additionally, dietary intake, physical activity, and anthropometric measurements were collected from a stratified random sample of adults aged 18 years and older through a separate cross-sectional study conducted between 2008 and 2009 in the same setting. Proportions of stunting, underweight, wasting and overweight/obesity were dettermined in children, while proportions of underweight and overweight/obesity were determined in adults.\n\n\nRESULTS\nOf the 3335 children included in the analyses with a total of 6750 visits, 46% (51% boys, 40% girls) were stunted, 11% (13% boys, 9% girls) were underweight, 2.5% (3% boys, 2% girls) were wasted, while 9% of boys and girls were overweight/obese respectively. 
Among their mothers, 7.5% were underweight while 32% were overweight/obese. A large proportion (43% and 37%%) of overweight and obese mothers respectively had stunted children. Among the 5190 adults included in the analyses, 9% (6% female, 11% male) were underweight, and 22% (35% female, 13% male) were overweight/obese.\n\n\nCONCLUSION\nThe findings confirm an existing double burden of malnutrition in this setting, characterized by a high prevalence of undernutrition particularly stunting early in life, with high levels of overweight/obesity in adulthood, particularly among women. In the context of a rapid increase in urban population, particularly in urban poor settings, this calls for urgent action. Multisectoral action may work best given the complex nature of prevailing circumstances in urban poor settings. Further research is needed to understand the pathways to this coexistence, and to test feasibility and effectiveness of context-specific interventions to curb associated health risks.", "title": "" }, { "docid": "121d3572c5a60a66da6bb42d0f7bf1af", "text": "The present study examined the relationships among grit, academic performance, perceived academic failure, and stress levels of Hong Kong associate degree students using path analysis. Three hundred and forty-five students from a community college in Hong Kong voluntarily participated in the study. They completed a questionnaire that measured their grit (operationalized as interest and perseverance) and stress levels. The students also provided their actual academic performance and evaluated their perception of their academic performance as a success or a failure. The results of the path analysis showed that interest and perseverance were negatively associated with stress, and only perceived academic failure was positively associated with stress. These findings suggest that psychological appraisal and resources are more important antecedents of stress than objective negative events. Therefore, fostering students' psychological resilience may alleviate the stress experienced by associate degree students or college students in general.", "title": "" }, { "docid": "d9888d448df6329e9a9b4fb5c1385ee3", "text": "Designing and developing a comfortable and convenient EEG system for daily usage that can provide reliable and robust EEG signal, encompasses a number of challenges. Among them, the most ambitious is the reduction of artifacts due to body movements. This paper studies the effect of head movement artifacts on the EEG signal and on the dry electrode-tissue impedance (ETI), monitored continuously using the imec's wireless EEG headset. We have shown that motion artifacts have huge impact on the EEG spectral content in the frequency range lower than 20Hz. Coherence and spectral analysis revealed that ETI is not capable of describing disturbances at very low frequencies (below 2Hz). Therefore, we devised a motion artifact reduction (MAR) method that uses a combination of a band-pass filtering and multi-channel adaptive filtering (AF), suitable for real-time MAR. This method was capable of substantially reducing artifacts produced by head movements.", "title": "" }, { "docid": "f717225fa7518383e0db362e673b9af4", "text": "The web has become the world's largest repository of knowledge. Web usage mining is the process of discovering knowledge from the interactions generated by the user in the form of access logs, cookies, and user sessions data. 
Web Mining consists of three different categories, namely Web Content Mining, Web Structure Mining, and Web Usage Mining (is the process of discovering knowledge from the interaction generated by the users in the form of access logs, browser logs, proxy-server logs, user session data, cookies). Accurate web log mining results and efficient online navigational pattern prediction are undeniably crucial for tuning up websites and consequently helping in visitors’ retention. Like any other data mining task, web log mining starts with data cleaning and preparation and it ends up discovering some hidden knowledge which cannot be extracted using conventional methods. After applying web mining on web sessions we will get navigation patterns which are important for web users such that appropriate actions can be adopted. Due to huge data in web, discovery of patterns and there analysis for further improvement in website becomes a real time necessity. The main focus of this paper is using of hybrid prediction engine to classify users on the basis of discovered patterns from web logs. Our proposed framework is to overcome the problem arise due to using of any single algorithm, we will give results based on comparison of two different algorithms like Longest Common Sequence (LCS) algorithm and Frequent Pattern (Growth) algorithm. Keywords— Web Usage Mining, Navigation Pattern, Frequent Pattern (Growth) Algorithm. ________________________________________________________________________________________________________", "title": "" }, { "docid": "c15618df21bce45cbad6766326de3dbd", "text": "The birth of intersexed infants, babies born with genitals that are neither clearly male nor clearly female, has been documented throughout recorded time.' In the late twentieth century, medical technology has advanced to allow scientists to determine chromosomal and hormonal gender, which is typically taken to be the real, natural, biological gender, usually referred to as \"sex.\"2 Nevertheless, physicians who handle the cases of intersexed infants consider several factors beside biological ones in determining, assigning, and announcing the gender of a particular infant. Indeed, biological factors are often preempted in their deliberations by such cultural factors as the \"correct\" length of the penis and capacity of the vagina.", "title": "" }, { "docid": "0e883a8ff7ccf82f1849d801754a5363", "text": "The purpose of this study was to investigate the structural relationships among students' expectation, perceived enjoyment, perceived usefulness, satisfaction, and continuance intention to use digital textbooks in middle school, based on Bhattacherjee's (2001) expectation-confirmation model. The subjects of this study were Korean middle school students taking an English class taught by a digital textbook in E middle school, Seoul. Data were collected via a paper-and-pencil-based questionnaire with 17 items; 137 responses were analyzed. The study found that (a) the more expectations of digital textbooks are satisfied, the more likely students are to perceive enjoyment and usefulness of digital textbooks, (b) satisfaction plays a mediating role in linking expectation, perceived enjoyment and usefulness, and continuance intention to use digital textbooks, (c) perceived usefulness and satisfaction have a direct and positive influence on continuance intention to use digital textbooks, and (d) perceived enjoyment has a non-significant influence on continuance intention to use digital textbooks with middle school students. 
Based on these findings, the implications and recommendations for future research are presented. © 2016 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "7f51bdc05c4a1bf610f77b629d8602f7", "text": "Special Issue Anthony Vance Brigham Young University anthony@vance.name Bonnie Brinton Anderson Brigham Young University bonnie_anderson@byu.edu C. Brock Kirwan Brigham Young University kirwan@byu.edu Users’ perceptions of risks have important implications for information security because individual users’ actions can compromise entire systems. Therefore, there is a critical need to understand how users perceive and respond to information security risks. Previous research on perceptions of information security risk has chiefly relied on self-reported measures. Although these studies are valuable, risk perceptions are often associated with feelings—such as fear or doubt—that are difficult to measure accurately using survey instruments. Additionally, it is unclear how these self-reported measures map to actual security behavior. This paper contributes to this topic by demonstrating that risk-taking behavior is effectively predicted using electroencephalography (EEG) via event-related potentials (ERPs). Using the Iowa Gambling Task, a widely used technique shown to be correlated with real-world risky behaviors, we show that the differences in neural responses to positive and negative feedback strongly predict users’ information security behavior in a separate laboratory-based computing task. In addition, we compare the predictive validity of EEG measures to that of self-reported measures of information security risk perceptions. Our experiments show that self-reported measures are ineffective in predicting security behaviors under a condition in which information security is not salient. However, we show that, when security concerns become salient, self-reported measures do predict security behavior. Interestingly, EEG measures significantly predict behavior in both salient and non-salient conditions, which indicates that EEG measures are a robust predictor of security behavior.", "title": "" }, { "docid": "5d377a17d3444d6137be582cbbc6c1db", "text": "Next generation malware will by be characterized by the intense use of polymorphic and metamorphic techniques aimed at circumventing the current malware detectors, based on pattern matching. In order to deal with this new kind of threat novel techniques have to be devised for the realization of malware detectors. Recent papers started to address such issue and this paper represents a further contribution in such a field. More precisely in this paper we propose a strategy for the detection of malicious codes that adopt the most evolved self-mutation techniques; we also provide experimental data supporting the validity of", "title": "" }, { "docid": "31f5c712760d1733acb0d7ffd3cec6ad", "text": "Singular Spectrum Transform (SST) is a fundamental subspace analysis technique which has been widely adopted for solving change-point detection (CPD) problems in information security applications. However, the performance of a SST based CPD algorithm is limited to the lack of robustness to corrupted observations with large noises in practice. Based on the observation that large noises in practical time series are generally sparse, in this paper, we study a combination of Robust Principal Component Analysis (RPCA) and SST to obtain a robust CPD algorithm dealing with sparse large noises. 
The sparse large noises are to be eliminated from observation trajectory matrices by performing a low-rank matrix recovery procedure of RPCA. The noise-eliminated matrices are then used to extract SST subspaces for CPD. The effectiveness of the proposed method is demonstrated through experiments based on both synthetic and real-world datasets. Experimental results show that the proposed method outperforms the competing state-of-the-arts in terms of detection accuracy for time series with sparse large noises.", "title": "" }, { "docid": "8dd2eaece835686b73683f263428ecfa", "text": "Automating repetitive surgical subtasks such as suturing, cutting and debridement can reduce surgeon fatigue and procedure times and facilitate supervised tele-surgery. Programming is difficult because human tissue is deformable and highly specular. Using the da Vinci Research Kit (DVRK) robotic surgical assistant, we explore a “Learning By Observation” (LBO) approach where we identify, segment, and parameterize motion sequences and sensor conditions to build a finite state machine (FSM) for each subtask. The robot then executes the FSM repeatedly to tune parameters and if necessary update the FSM structure. We evaluate the approach on two surgical subtasks: debridement of 3D Viscoelastic Tissue Phantoms (3d-DVTP), in which small target fragments are removed from a 3D viscoelastic tissue phantom; and Pattern Cutting of 2D Orthotropic Tissue Phantoms (2d-PCOTP), a step in the standard Fundamentals of Laparoscopic Surgery training suite, in which a specified circular area must be cut from a sheet of orthotropic tissue phantom. We describe the approach and physical experiments with repeatability of 96% for 50 trials of the 3d-DVTP subtask and 70% for 20 trials of the 2d-PCOTP subtask. A video is available at: http://j.mp/Robot-Surgery-Video-Oct-2014.", "title": "" }, { "docid": "fa9abc74d3126e0822e7e815e135e845", "text": "Semantic interaction offers an intuitive communication mechanism between human users and complex statistical models. By shielding the users from manipulating model parameters, they focus instead on directly manipulating the spatialization, thus remaining in their cognitive zone. However, this technique is not inherently scalable past hundreds of text documents. To remedy this, we present the concept of multi-model semantic interaction, where semantic interactions can be used to steer multiple models at multiple levels of data scale, enabling users to tackle larger data problems. We also present an updated visualization pipeline model for generalized multi-model semantic interaction. To demonstrate multi-model semantic interaction, we introduce StarSPIRE, a visual text analytics prototype that transforms user interactions on documents into both small-scale display layout updates as well as large-scale relevancy-based document selection.", "title": "" }, { "docid": "3d28f86795ddcd249657703cbedf87b1", "text": "A 2.5V high precision BiCMOS bandgap reference with supply voltage range of 6V to 18V was proposed and realized. It could be applied to lots of Power Management ICs (Intergrated Circuits) due the high voltage. By introducing a preregulated current source, the PSRR (Power Supply Rejection Ratio) of 103dB at low frequency and the line regulation of 26.7μV/V was achieved under 15V supply voltage at ambient temperature of 27oC. Moreover, if the proper resistance trimming is implemented, the temperature coefficient could be reduced to less than 16.4ppm/oC. 
The start up time of the reference voltage could also be decreased with an additional bipolar and capacitor.", "title": "" }, { "docid": "3ec63f1c1f74c5d11eaa9d360ceaac55", "text": "High-level shape understanding and technique evaluation on large repositories of 3D shapes often benefit from additional information known about the shapes. One example of such information is the semantic segmentation of a shape into functional or meaningful parts. Generating accurate segmentations with meaningful segment boundaries is, however, a costly process, typically requiring large amounts of user time to achieve high quality results. In this paper we present an active learning framework for large dataset segmentation, which iteratively provides the user with new predictions by training new models based on already segmented shapes. Our proposed pipeline consists of three novel components. First, we a propose a fast and relatively accurate feature-based deep learning model to provide datasetwide segmentation predictions. Second, we propose an information theory measure to estimate the prediction quality and for ordering subsequent fast and meaningful shape selection. Our experiments show that such suggestive ordering helps reduce users time and effort, produce high quality predictions, and construct a model that generalizes well. Finally, we provide effective segmentation refinement features to help the user quickly correct any incorrect predictions. We show that our framework is more accurate and in general more efficient than state-of-the-art, for massive dataset segmentation with while also providing consistent segment boundaries.", "title": "" }, { "docid": "bb240f2e536e5e5cd80fcca8c9d98171", "text": "We propose a novel metaphor interpretation method, Meta4meaning. It provides interpretations for nominal metaphors by generating a list of properties that the metaphor expresses. Meta4meaning uses word associations extracted from a corpus to retrieve an approximation to properties of concepts. Interpretations are then obtained as an aggregation or difference of the saliences of the properties to the tenor and the vehicle. We evaluate Meta4meaning using a set of humanannotated interpretations of 84 metaphors and compare with two existing methods for metaphor interpretation. Meta4meaning significantly outperforms the previous methods on this task.", "title": "" }, { "docid": "4cda02d9f5b5b16773b8cbffc54e91ca", "text": "We present a novel global stereo model designed for view interpolation. Unlike existing stereo models which only output a disparity map, our model is able to output a 3D triangular mesh, which can be directly used for view interpolation. To this aim, we partition the input stereo images into 2D triangles with shared vertices. Lifting the 2D triangulation to 3D naturally generates a corresponding mesh. A technical difficulty is to properly split vertices to multiple copies when they appear at depth discontinuous boundaries. To deal with this problem, we formulate our objective as a two-layer MRF, with the upper layer modeling the splitting properties of the vertices and the lower layer optimizing a region-based stereo matching. 
Experiments on the Middlebury and the Herodion datasets demonstrate that our model is able to synthesize visually coherent new view angles with high PSNR, as well as outputting high quality disparity maps which rank at the first place on the new challenging high resolution Middlebury 3.0 benchmark.", "title": "" }, { "docid": "7bef0f8e1df99d525f3d2356bd129e45", "text": "The term 'participation' is traditionally used in HCI to describe the involvement of users and stakeholders in design processes, with a pretext of distributing control to participants to shape their technological future. In this paper we ask whether these values can hold up in practice, particularly as participation takes on new meanings and incorporates new perspectives. We argue that much HCI research leans towards configuring participation. In exploring this claim we explore three questions that we consider important for understanding how HCI configures participation; Who initiates, directs and benefits from user participation in design? In what forms does user participation occur? How is control shared with users in design? In answering these questions we consider the conceptual, ethical and pragmatic problems this raises for current participatory HCI research. Finally, we offer directions for future work explicitly dealing with the configuration of participation.", "title": "" }, { "docid": "60e16b0c5bff9f7153c64a38193b8759", "text": "The “Flash Crash” of May 6th, 2010 comprised an unprecedented 1,000 point, five-minute decline in the Dow Jones Industrial Average that was followed by a rapid, disorderly recovery of prices. We illuminate the causes of this singular event with the first analysis that tracks the full order book activity at millisecond granularity. We document previously overlooked market data anomalies and establish that these anomalies Granger-caused liquidity withdrawal. We offer a simulation model that formalizes the process by which large sell orders, combined with widespread liquidity withdrawal, can generate Flash Crash-like events in the absence of fundamental information arrival. ∗This work was supported by the Hellman Fellows Fund and the Rock Center for Corporate Governance at Stanford University. †Email: ealdrich@ucsc.edu. ‡Email: grundfest@stanford.edu §Email: gregory.laughlin@yale.edu", "title": "" }, { "docid": "aab6a2166b9d39a67ec9ebb127f0956a", "text": "A heuristic approximation algorithm that can optimise the order of firewall rules to minimise packet matching is presented. It has been noted that firewall operators tend to make use of the fact that some firewall rules match most of the traffic, and conversely that others match little of the traffic. Consequently, ordering the rules such that the highest matched rules are as high in the table as possible reduces the processing load in the firewall. Due to dependencies between rules in the rule set this problem, optimising the cost of the packet matching process, has been shown to be NP-hard. This paper proposes an algorithm that is designed to give good performance in terms of minimising the packet matching cost of the firewall. The performance of the algorithm is related to complexity of the firewall rule set and is compared to an alternative algorithm demonstrating that the algorithm here has improved the packet matching cost in all cases.", "title": "" } ]
scidocsrr
d1b33ce49666fa755a6cd629a1faaf25
Simplified modeling and identification approach for model-based control of parallel mechanism robot leg
[ { "docid": "69e381983f7af393ee4bbb62bb587a4e", "text": "This paper presents the design principles for highly efficient legged robots, the implementation of the principles in the design of the MIT Cheetah, and the analysis of the high-speed trotting experimental results. The design principles were derived by analyzing three major energy-loss mechanisms in locomotion: heat losses from the actuators, friction losses in transmission, and the interaction losses caused by the interface between the system and the environment. Four design principles that minimize these losses are discussed: employment of high torque-density motors, energy regenerative electronic system, low loss transmission, and a low leg inertia. These principles were implemented in the design of the MIT Cheetah; the major design features are large gap diameter motors, regenerative electric motor drivers, single-stage low gear transmission, dual coaxial motors with composite legs, and the differential actuated spine. The experimental results of fast trotting are presented; the 33-kg robot runs at 22 km/h (6 m/s). The total power consumption from the battery pack was 973 W and resulted in a total cost of transport of 0.5, which rivals running animals' at the same scale. 76% of the total energy consumption is attributed to heat loss from the motor, and the remaining 24% is used in mechanical work, which is dissipated as interaction loss as well as friction losses at the joint and transmission.", "title": "" } ]
[ { "docid": "dd06c1c39e9b4a1ae9ee75c3251f27dc", "text": "Magnetoencephalographic measurements (MEG) were used to examine the effect on the human auditory cortex of removing specific frequencies from the acoustic environment. Subjects listened for 3 h on three consecutive days to music \"notched\" by removal of a narrow frequency band centered on 1 kHz. Immediately after listening to the notched music, the neural representation for a 1-kHz test stimulus centered on the notch was found to be significantly diminished compared to the neural representation for a 0.5-kHz control stimulus centered one octave below the region of notching. The diminished neural representation for 1 kHz reversed to baseline between the successive listening sessions. These results suggest that rapid changes can occur in the tuning of neurons in the adult human auditory cortex following manipulation of the acoustic environment. A dynamic form of neural plasticity may underlie the phenomenon observed here.", "title": "" }, { "docid": "c4256017c214eabda8e5b47c604e0e49", "text": "In this paper, a multi-band antenna for 4G wireless systems is proposed. The proposed antenna consists of a modified planar inverted-F antenna with additional branch line for wide bandwidth and a folded monopole antenna. The antenna provides wide bandwidth for covering the hepta-band LTE/GSM/UMTS operation. The measured 6-dB return loss bandwidth was 169 MHz (793 MHz-962 MHz) at the low frequency band and 1030 MHz (1700 MHz-2730 MHz) at the high frequency band. The overall dimension of the proposed antenna is 55 mm × 110 mm × 5 mm.", "title": "" }, { "docid": "386af0520255ebd048cff30961973624", "text": "We present a linear optical receiver realized on 130 nm SiGe BiCMOS. Error-free operation assuming FEC is shown at bitrates up to 64 Gb/s (32 Gbaud) with 165mW power consumption, corresponding to 2.578 pJ/bit.", "title": "" }, { "docid": "d52bfde050e6535645c324e7006a50e7", "text": "Modern machine learning algorithms are increasingly computationally demanding, requiring specialized hardware and distributed computation to achieve high performance in a reasonable time frame. Many hyperparameter search algorithms have been proposed for improving the efficiency of model selection, however their adaptation to the distributed compute environment is often ad-hoc. We propose Tune, a unified framework for model selection and training that provides a narrow-waist interface between training scripts and search algorithms. We show that this interface meets the requirements for a broad range of hyperparameter search algorithms, allows straightforward scaling of search to large clusters, and simplifies algorithm implementation. We demonstrate the implementation of several state-of-the-art hyperparameter search algorithms in Tune. Tune is available at http://ray.readthedocs.io/en/latest/tune.html.", "title": "" }, { "docid": "ba87ca7a07065e25593e6ae5c173669d", "text": "The intelligence community (IC) is asked to predict outcomes that may often be inherently unpredictable-and is blamed for the inevitable forecasting failures, be they false positives or false negatives. To move beyond blame games of accountability ping-pong that incentivize bureaucratic symbolism over substantive reform, it is necessary to reach bipartisan agreements on performance indicators that are transparent enough to reassure clashing elites (to whom the IC must answer) that estimates have not been politicized. 
Establishing such transideological credibility requires (a) developing accuracy metrics for decoupling probability and value judgments; (b) using the resulting metrics as criterion variables in validity tests of the IC's selection, training, and incentive systems; and (c) institutionalizing adversarial collaborations that conduct level-playing-field tests of clashing perspectives.", "title": "" }, { "docid": "51fec678a2e901fdf109d4836ef1bf34", "text": "BACKGROUND\nFoot-and-mouth disease (FMD) is an acute, highly contagious disease that infects cloven-hoofed animals. Vaccination is an effective means of preventing and controlling FMD. Compared to conventional inactivated FMDV vaccines, the format of FMDV virus-like particles (VLPs) as a non-replicating particulate vaccine candidate is a promising alternative.\n\n\nRESULTS\nIn this study, we have developed a co-expression system in E. coli, which drove the expression of FMDV capsid proteins (VP0, VP1, and VP3) in tandem by a single plasmid. The co-expressed FMDV capsid proteins (VP0, VP1, and VP3) were produced in large scale by fermentation at 10 L scale and the chromatographic purified capsid proteins were auto-assembled as VLPs in vitro. Cattle vaccinated with a single dose of the subunit vaccine, comprising in vitro assembled FMDV VLP and adjuvant, developed FMDV-specific antibody response (ELISA antibodies and neutralizing antibodies) with the persistent period of 6 months. Moreover, cattle vaccinated with the subunit vaccine showed the high protection potency with the 50 % bovine protective dose (PD50) reaching 11.75 PD50 per dose.\n\n\nCONCLUSIONS\nOur data strongly suggest that in vitro assembled recombinant FMDV VLPs produced from E. coli could function as a potent FMDV vaccine candidate against FMDV Asia1 infection. Furthermore, the robust protein expression and purification approaches described here could lead to the development of industrial level large-scale production of E. coli-based VLPs against FMDV infections with different serotypes.", "title": "" }, { "docid": "a774567d957ed0ea209b470b8eced563", "text": "The vulnerability of the nervous system to advancing age is all too often manifest in neurodegenerative disorders such as Alzheimer's and Parkinson's diseases. In this review article we describe evidence suggesting that two dietary interventions, caloric restriction (CR) and intermittent fasting (IF), can prolong the health-span of the nervous system by impinging upon fundamental metabolic and cellular signaling pathways that regulate life-span. CR and IF affect energy and oxygen radical metabolism, and cellular stress response systems, in ways that protect neurons against genetic and environmental factors to which they would otherwise succumb during aging. There are multiple interactive pathways and molecular mechanisms by which CR and IF benefit neurons including those involving insulin-like signaling, FoxO transcription factors, sirtuins and peroxisome proliferator-activated receptors. These pathways stimulate the production of protein chaperones, neurotrophic factors and antioxidant enzymes, all of which help cells cope with stress and resist disease. A better understanding of the impact of CR and IF on the aging nervous system will likely lead to novel approaches for preventing and treating neurodegenerative disorders.", "title": "" }, { "docid": "b5dc56272d4dea04b756a8614d6762c9", "text": "Platforms have been considered as a paradigm for managing new product development and innovation. 
Since their introduction, studies on platforms have introduced multiple conceptualizations, leading to a fragmentation of research and different perspectives. By systematically reviewing the platform literature and combining bibliometric and content analyses, this paper examines the platform concept and its evolution, proposes a thematic classification, and highlights emerging trends in the literature. Based on this hybrid methodological approach (bibliometric and content analyses), the results show that platform research has primarily focused on issues that are mainly related to firms' internal aspects, such as innovation, modularity, commonality, and mass customization. Moreover, scholars have recently started to focus on new research themes, including managerial questions related to capability building, strategy, and ecosystem building based on platforms. As its main contributions, this paper improves the understanding of and clarifies the evolutionary trajectory of the platform concept, and identifies trends and emerging themes to be addressed in future studies.", "title": "" }, { "docid": "9500dfc92149c5a808cec89b140fc0c3", "text": "We present a new approach to the geometric alignment of a point cloud to a surface and to related registration problems. The standard algorithm is the familiar ICP algorithm. Here we provide an alternative concept which relies on instantaneous kinematics and on the geometry of the squared distance function of a surface. The proposed algorithm exhibits faster convergence than ICP; this is supported both by results of a local convergence analysis and by experiments.", "title": "" }, { "docid": "a1bf728c54cec3f621a54ed23a623300", "text": "Machine learning algorithms are now common in the state-ofthe-art spoken language understanding models. But to reach good performance they must be trained on a potentially large amount of data which are not available for a variety of tasks and languages of interest. In this work, we present a novel zero-shot learning method, based on word embeddings, allowing to derive a full semantic parser for spoken language understanding. No annotated in-context data are needed, the ontological description of the target domain and generic word embedding features (learned from freely available general domain data) suffice to derive the model. Two versions are studied with respect to how the model parameters and decoding step are handled, including an extension of the proposed approach in the context of conditional random fields. We show that this model, with very little supervision, can reach instantly performance comparable to those obtained by either state-of-the-art carefully handcrafted rule-based or trained statistical models for extraction of dialog acts on the Dialog State Tracking test datasets (DSTC2 and 3).", "title": "" }, { "docid": "9941cd183e2c7b79d685e0e9cef3c43e", "text": "We present a novel recursive Bayesian method in the DFT-domain to address the multichannel acoustic echo cancellation problem. We model the echo paths between the loudspeakers and the near-end microphone as a multichannel random variable with a first-order Markov property. The incorporation of the near-end observation noise, in conjunction with the multichannel Markov model, leads to a multichannel state-space model. We derive a recursive Bayesian solution to the multichannel state-space model, which turns out to be well suited for input signals that are not only auto-correlated but also cross-correlated. 
We show that the resulting multichannel state-space frequency-domain adaptive filter (MCSSFDAF) can be efficiently implemented due to the submatrix-diagonality of the state-error covariance. The filter offers optimal tracking and robust adaptation in the presence of near-end noise and echo path variability.", "title": "" }, { "docid": "433e7a8c4d4a16f562f9ae112102526e", "text": "Although both extrinsic and intrinsic factors have been identified that orchestrate the differentiation and maturation of oligodendrocytes, less is known about the intracellular signaling pathways that control the overall commitment to differentiate. Here, we provide evidence that activation of the mammalian target of rapamycin (mTOR) is essential for oligodendrocyte differentiation. Specifically, mTOR regulates oligodendrocyte differentiation at the late progenitor to immature oligodendrocyte transition as assessed by the expression of stage specific antigens and myelin proteins including MBP and PLP. Furthermore, phosphorylation of mTOR on Ser 2448 correlates with myelination in the subcortical white matter of the developing brain. We demonstrate that mTOR exerts its effects on oligodendrocyte differentiation through two distinct signaling complexes, mTORC1 and mTORC2, defined by the presence of the adaptor proteins raptor and rictor, respectively. Disrupting mTOR complex formation via siRNA mediated knockdown of raptor or rictor significantly reduced myelin protein expression in vitro. However, mTORC2 alone controlled myelin gene expression at the mRNA level, whereas mTORC1 influenced MBP expression via an alternative mechanism. In addition, investigation of mTORC1 and mTORC2 targets revealed differential phosphorylation during oligodendrocyte differentiation. In OPC-DRG cocultures, inhibiting mTOR potently abrogated oligodendrocyte differentiation and reduced numbers of myelin segments. These data support the hypothesis that mTOR regulates commitment to oligodendrocyte differentiation before myelination.", "title": "" }, { "docid": "7c13132ef5b2d67c4a7e3039db252302", "text": "Accurate estimation of the click-through rate (CTR) in sponsored ads significantly impacts the user search experience and businesses’ revenue, even 0.1% of accuracy improvement would yield greater earnings in the hundreds of millions of dollars. CTR prediction is generally formulated as a supervised classification problem. In this paper, we share our experience and learning on model ensemble design and our innovation. Specifically, we present 8 ensemble methods and evaluate them on our production data. Boosting neural networks with gradient boosting decision trees turns out to be the best. With larger training data, there is a nearly 0.9% AUC improvement in offline testing and significant click yield gains in online traffic. In addition, we share our experience and learning on improving the quality of training.", "title": "" }, { "docid": "1d3007738c259cdf08f515849c7939b8", "text": "Background: With an increase in the number of disciplines contributing to health literacy scholarship, we sought to explore the nature of interdisciplinary research in the field. Objective: This study sought to describe disciplines that contribute to health literacy research and to quantify how disciplines draw from and contribute to an interdisciplinary evidence base, as measured by citation networks. 
Methods: We conducted a literature search for health literacy articles published between 1991 and 2015 in four bibliographic databases, producing 6,229 unique bibliographic records. We employed a scientometric tool (CiteSpace [Version 4.4.R1]) to quantify patterns in published health literacy research, including a visual path from cited discipline domains to citing discipline domains. Key Results: The number of health literacy publications increased each year between 1991 and 2015. Two spikes, in 2008 and 2013, correspond to the introduction of additional subject categories, including information science and communication. Two journals have been cited more than 2,000 times—the Journal of General Internal Medicine (n = 2,432) and Patient Education and Counseling (n = 2,252). The most recently cited journal added to the top 10 list of cited journals is the Journal of Health Communication (n = 989). Three main citation paths exist in the health literacy data set. Articles from the domain “medicine, medical, clinical” heavily cite from one domain (health, nursing, medicine), whereas articles from the domain “psychology, education, health” cite from two separate domains (health, nursing, medicine and psychology, education, social). Conclusions: Recent spikes in the number of published health literacy articles have been spurred by a greater diversity of disciplines contributing to the evidence base. However, despite the diversity of disciplines, citation paths indicate the presence of a few, self-contained disciplines contributing to most of the literature, suggesting a lack of interdisciplinary research. To address complex and evolving challenges in the health literacy field, interdisciplinary team science, that is, integrating science from across multiple disciplines, should continue to grow. [Health Literacy Research and Practice. 2017;1(4):e182-e191.] Plain Language Summary: The addition of diverse disciplines conducting health literacy scholarship has spurred recent spikes in the number of publications. However, citation paths suggest that interdisciplinary research can be strengthened. Findings directly align with the increasing emphasis on team science, and support opportunities and resources that incentivize interdisciplinary health literacy research. The study of health literacy has significantly expanded over the past decade. It represents a dynamic area of inquiry that extends to multiple disciplines. Health literacy emerged as a derivative of literacy and early definitions focused on the ability to read and understand medical instructions and health care information (Parker, Baker, Williams, & Nurss, 1995; Williams et al., 1995). This early work led to a body of research demonstrating that people with low health literacy generally had poorer health outcomes, including lower levels of screening and medication adherence rates (Baker,", "title": "" }, { "docid": "cdc276a3c4305d6c7ba763332ae933cc", "text": "Synthetic aperture radar (SAR) image classification is a fundamental process for SAR image understanding and interpretation. With the advancement of imaging techniques, it permits to produce higher resolution SAR data and extend data amount. Therefore, intelligent algorithms for high-resolution SAR image classification are demanded. 
Inspired by deep learning technology, an end-to-end classification model from the original SAR image to final classification map is developed to automatically extract features and conduct classification, which is named deep recurrent encoding neural networks (DRENNs). In our proposed framework, a spatial feature learning network based on long–short-term memory (LSTM) is developed to extract contextual dependencies of SAR images, where 2-D image patches are transformed into 1-D sequences and imported into LSTM to learn the latent spatial correlations. After LSTM, nonnegative and Fisher constrained autoencoders (NFCAEs) are proposed to improve the discrimination of features and conduct final classification, where nonnegative constraint and Fisher constraint are developed in each autoencoder to restrict the training of the network. The whole DRENN not only combines the spatial feature learning power of LSTM but also utilizes the discriminative representation ability of our NFCAE to improve the classification performance. The experimental results tested on three SAR images demonstrate that the proposed DRENN is able to learn effective feature representations from SAR images and produce competitive classification accuracies to other related approaches.", "title": "" }, { "docid": "b52cadf9e20eebfd388c09c51cff2d74", "text": "Despite much effort, deep neural networks remain highly susceptible to tiny input perturbations and even for MNIST, one of the most common toy datasets in computer vision, no neural network model exists for which adversarial perturbations are large and make semantic sense to humans. We show that even the widely recognized and by far most successful defense by Madry et al. (1) overfits on the L∞ metric (it’s highly susceptible to L2 and L0 perturbations), (2) classifies unrecognizable images with high certainty, (3) performs not much better than simple input binarization and (4) features adversarial perturbations that make little sense to humans. These results suggest that MNIST is far from being solved in terms of adversarial robustness. We present a novel robust classification model that performs analysis by synthesis using learned class-conditional data distributions. We derive bounds on the robustness and go to great length to empirically evaluate our model using maximally effective adversarial attacks by (a) applying decisionbased, score-based, gradient-based and transfer-based attacks for several different Lp norms, (b) by designing a new attack that exploits the structure of our defended model and (c) by devising a novel decision-based attack that seeks to minimize the number of perturbed pixels (L0). The results suggest that our approach yields state-of-the-art robustness on MNIST against L0, L2 and L∞ perturbations and we demonstrate that most adversarial examples are strongly perturbed towards the perceptual boundary between the original and the adversarial class.", "title": "" }, { "docid": "6e0877f16e624bef547f76b80278f760", "text": "The importance of storytelling as the foundation of human experiences cannot be overestimated. The oral traditions focus upon educating and transmitting knowledge and skills and also evolved into one of the earliest methods of communicating scientific discoveries and developments. A wide ranging search of the storytelling, education and health-related literature encompassing the years 1975-2007 was performed. Evidence from disparate elements of education and healthcare were used to inform an exploration of storytelling. 
This conceptual paper explores the principles of storytelling, evaluates the use of storytelling techniques in education in general, acknowledges the role of storytelling in healthcare delivery, identifies some of the skills learned and benefits derived from storytelling, and speculates upon the use of storytelling strategies in nurse education. Such stories have, until recently been harvested from the experiences of students and of educators, however, there is a growing realization that patients and service users are a rich source of healthcare-related stories that can affect, change and benefit clinical practice. The use of technology such as the Internet discussion boards or digitally-facilitated storytelling has an evolving role in ensuring that patient-generated and experiential stories have a future within nurse education.", "title": "" }, { "docid": "64770c350dc1d260e24a43760d4e641b", "text": "A first step in the task of automatically generating questions for testing reading comprehension is to identify questionworthy sentences, i.e. sentences in a text passage that humans find it worthwhile to ask questions about. We propose a hierarchical neural sentence-level sequence tagging model for this task, which existing approaches to question generation have ignored. The approach is fully data-driven — with no sophisticated NLP pipelines or any hand-crafted rules/features — and compares favorably to a number of baselines when evaluated on the SQuAD data set. When incorporated into an existing neural question generation system, the resulting end-to-end system achieves stateof-the-art performance for paragraph-level question generation for reading comprehension.", "title": "" }, { "docid": "76eef8117ac0bc5dbb0529477d10108d", "text": "Most existing switched-capacitor (SC) DC-DC converters only offer a few voltage conversion ratios (VCRs), leading to significant efficiency fluctuations under wide input/output dynamics (e.g. up to 30% in [1]). Consequently, systematic SC DC-DC converters with fine-grained VCRs (FVCRs) become attractive to achieve high efficiency over a wide operating range. Both the Recursive SC (RSC) [2,3] and Negator-based SC (NSC) [4] topologies offer systematic FVCR generations with high conductance, but their binary-switching nature fundamentally results in considerable parasitic loss. In bulk CMOS, the restriction of using low-parasitic MIM capacitors for high efficiency ultimately limits their achievable power density to <1mW/mm2. This work reports a fully integrated fine-grained buck-boost SC DC-DC converter with 24 VCRs. It features an algorithmic voltage-feed-in (AVFI) topology to systematically generate any arbitrary buck-boost rational ratio with optimal conduction loss while achieving the lowest parasitic loss compared with [2,4]. With 10 main SC cells (MCs) and 10 auxiliary SC cells (ACs) controlled by the proposed reference-selective bootstrapping driver (RSBD) for wide-range efficient buck-boost operations, the AVFI converter in 65nm bulk CMOS achieves a peak efficiency of 84.1% at a power density of 13.2mW/mm2 over a wide range of input (0.22 to 2.4V) and output (0.85 to 1.2V).", "title": "" }, { "docid": "32b96d4d23a03b1828f71496e017193e", "text": "Camera-based lane detection algorithms are one of the key enablers for many semi-autonomous and fullyautonomous systems, ranging from lane keep assist to level-5 automated vehicles. Positioning a vehicle between lane boundaries is the core navigational aspect of a self-driving car. 
Even though this should be trivial, given the clarity of lane markings on most standard roadway systems, the process is typically mired in tedious pre-processing and computational effort. We present an approach to estimate lane positions directly using a deep neural network that operates on images from laterally-mounted down-facing cameras. To create a diverse training set, we present a method to generate semi-artificial images. Besides the ability to distinguish whether there is a lane-marker present or not, the network is able to estimate the position of a lane marker with sub-centimeter accuracy at an average of 100 frames/s on an embedded automotive platform, requiring no pre- or post-processing. This system can be used not only to estimate lane position for navigation, but also to provide an efficient way to validate the robustness of driver-assist features which depend on lane information.", "title": "" } ]
scidocsrr
bf9b0467a5e1296d564a445a814627a9
Software-defined wireless network architectures for the Internet-of-Things
[ { "docid": "5a83cb0ef928b6cae6ce1e0b21d47f60", "text": "Software defined networking, characterized by a clear separation of the control and data planes, is being adopted as a novel paradigm for wired networking. With SDN, network operators can run their infrastructure more efficiently, supporting faster deployment of new services while enabling key features such as virtualization. In this article, we adopt an SDN-like approach applied to wireless mobile networks that will not only benefit from the same features as in the wired case, but will also leverage on the distinct features of mobile deployments to push improvements even further. We illustrate with a number of representative use cases the benefits of the adoption of the proposed architecture, which is detailed in terms of modules, interfaces, and high-level signaling. We also review the ongoing standardization efforts, and discuss the potential advantages and weaknesses, and the need for a coordinated approach.", "title": "" } ]
[ { "docid": "cf0b2ec813ac12c7cd3f3cbf7c133650", "text": "Cloud computing offers utility-oriented IT services to users worldwide. Based on a pay-as-you-go model, it enables hosting of pervasive applications from consumer, scientific, and business domains. However, data centers hosting Cloud applications consume huge amounts of electrical energy, contributing to high operational costs and carbon footprints to the environment. Therefore, we need Green Cloud computing solutions that can not only save energy for the environment but also reduce operational costs. In this paper, we define an architectural framework and principles for energy-efficient Cloud computing. Based on this architecture, we present a vision, challenges, and resource provisioning and allocation algorithms for energy-efficient management of Cloud computing environments. The proposed energy-aware allocation heuristics provision data center resources to client applications in a way that improves energy efficiency of the data center, while delivering the negotiated Quality of Service (QoS). In particular, in this paper we propose: (a) architectural principles for energy-efficient management of Clouds; (b) energy-efficient resource allocation policies and scheduling algorithms considering QoS expectations, and devices power usage characteristics; and (c) a number of open research challenges, addressing which can bring substantial benefits to both resource providers and consumers. We have validated our approach by conducting a rigorous performance evaluation study using the CloudSim toolkit. The results demonstrate that Cloud computing model has immense potential as it offers significant cost savings and demonstrates high potential for the improvement of energy efficiency under dynamic workload scenarios.", "title": "" }, { "docid": "e465b9a38e7649f541ab9e419103b362", "text": "Spoken language based intelligent assistants (IAs) have been developed for a number of domains but their functionality has mostly been confined to the scope of a given app. One reason is that it’s is difficult for IAs to infer a user’s intent without access to relevant context and unless explicitly implemented, context is not available across app boundaries. We describe context-aware multi-app dialog systems that can learn to 1) identify meaningful user intents; 2) produce natural language representation for the semantics of such intents; and 3) predict user intent as they engage in multi-app tasks. As part of our work we collected data from the smartphones of 14 users engaged in real-life multi-app tasks. We found that it is reasonable to group tasks into high-level intentions. Based on the dialog content, IA can generate useful phrases to describe the intention. We also found that, with readily available contexts, IAs can effectively predict user’s intents during conversation, with accuracy at 58.9%.", "title": "" }, { "docid": "f63da8e7659e711bcb7a148ea12a11f2", "text": "We have presented two CCA-based approaches for data fusion and group analysis of biomedical imaging data and demonstrated their utility on fMRI, sMRI, and EEG data. The results show that CCA and M-CCA are powerful tools that naturally allow the analysis of multiple data sets. The data fusion and group analysis methods presented are completely data driven, and use simple linear mixing models to decompose the data into their latent components. 
Since CCA and M-CCA are based on second-order statistics, they provide a relatively less constrained solution as compared to methods based on higher-order statistics such as ICA. While this can be advantageous, the flexibility also tends to lead to solutions that are less sparse than those obtained using assumptions of non-Gaussianity, in particular super-Gaussianity, at times making the results more difficult to interpret. Thus, it is important to note that both approaches provide complementary perspectives, and hence it is beneficial to study the data using different analysis techniques.", "title": "" }, { "docid": "51be236c79d1af7a2aff62a8049fba34", "text": "BACKGROUND\nAs the number of children diagnosed with autism continues to rise, resources must be available to support parents of children with autism and their families. Parents need help as they assess their unique situations, reach out for help in their communities, and work to decrease their stress levels by using appropriate coping strategies that will benefit their entire family.\n\n\nMETHODS\nA descriptive, correlational, cross-sectional study was conducted with 75 parents/primary caregivers of children with autism. Using the McCubbin and Patterson model of family behavior, adaptive behaviors of children with autism, family support networks, parenting stress, and parent coping were measured.\n\n\nFINDINGS AND CONCLUSIONS\nAn association between low adaptive functioning in children with autism and increased parenting stress creates a need for additional family support as parents search for different coping strategies to assist the family with ongoing and new challenges. Professionals should have up-to-date knowledge of the supports available to families and refer families to appropriate resources to avoid overwhelming them with unnecessary and inappropriate referrals.", "title": "" }, { "docid": "13a64221ff915439d846481050e52108", "text": "This paper proposes a new maximum power point tracking (MPPT) method for photovoltaic (PV) systems by using a Kalman filter. A Perturbation & Observation (P&O) method is widely used presently due to its easy implementation and simplicity. The P&O usually requires a dithering scheme to reduce noise effects, but it slows the tracking response. Tracking speed is the most important factor in improving efficiency under frequent environmental changes. The proposed method is based on the Kalman filter. It shows fast tracking performance in noisy conditions, enabling it to generate more power in rapid weather changes than the P&O. Simulation results provide a comparison between the proposed method and P&O in terms of time responses for conditions of sudden system restart and sudden irradiance change.", "title": "" }, { "docid": "013ec46500a6419c371924b98dac7730", "text": "A four-quadrant CMOS analog multiplier is presented. The device is nominally biased with +5-V supplies, has identical full-scale single-ended x and y inputs of +4 V, and exhibits less than 0.5 percent.", "title": "" }, { "docid": "5c898e311680199f1f369d3c264b2b14", "text": "Behaviour Driven Development (BDD) has gained increasing attention as an agile development approach in recent years. However, characteristics that constitute the BDD approach are not clearly defined. 
In this paper, we present a set of main BDD characteristics identified through an analysis of relevant literature and current BDD toolkits. Our study can provide a basis for understanding BDD, as well as for extending the existing BDD toolkits or developing new ones.", "title": "" }, { "docid": "b9d78a4f1fc6587557057125343675ab", "text": "We propose a new computational approach for tracking and detecting statistically significant linguistic shifts in the meaning and usage of words. Such linguistic shifts are especially prevalent on the Internet, where the rapid exchange of ideas can quickly change a word's meaning. Our meta-analysis approach constructs property time series of word usage, and then uses statistically sound change point detection algorithms to identify significant linguistic shifts. We consider and analyze three approaches of increasing complexity to generate such linguistic property time series, the culmination of which uses distributional characteristics inferred from word co-occurrences. Using recently proposed deep neural language models, we first train vector representations of words for each time period. Second, we warp the vector spaces into one unified coordinate system. Finally, we construct a distance-based distributional time series for each word to track its linguistic displacement over time.\n We demonstrate that our approach is scalable by tracking linguistic change across years of micro-blogging using Twitter, a decade of product reviews using a corpus of movie reviews from Amazon, and a century of written books using the Google Book Ngrams. Our analysis reveals interesting patterns of language usage change commensurate with each medium.", "title": "" }, { "docid": "8f53acbe65e2b98efe5b3018c27d28a7", "text": "Oracle Materialized Views (MVs) are designed for data warehousing and replication. For data warehousing, MVs based on inner/outer equijoins with optional aggregation can be refreshed on transaction boundaries, on demand, or periodically. Refreshes are optimized for bulk loads and can use a multi-MV scheduler. MVs based on subqueries on remote tables support bidirectional replication. Optimization with MVs includes transparent query rewrite based on a cost-based selection method. The ability to rewrite a large class of queries based on a small set of MVs is supported by using Dimensions (new Oracle object), losslessness of joins, functional dependency, column equivalence, join derivability, joinback and aggregate rollup.", "title": "" }, { "docid": "867a6923a650bdb1d1ec4f04cda37713", "text": "We examine Gärdenfors’ theory of conceptual spaces, a geometrical form of knowledge representation (Conceptual spaces: The geometry of thought, MIT Press, Cambridge, 2000), in the context of the general Creative Systems Framework introduced by Wiggins (J Knowl Based Syst 19(7):449–458, 2006a; New Generation Comput 24(3):209–222, 2006b). Gärdenfors’ theory offers a way of bridging the traditional divide between symbolic and sub-symbolic representations, as well as the gap between representational formalism and meaning as perceived by human minds. We discuss how both these qualities may be advantageous from the point of view of artificial creative systems. 
We take music as our example domain, and discuss how a range of musical qualities may be instantiated as conceptual spaces, and present a detailed conceptual space formalisation of musical metre.", "title": "" }, { "docid": "ba57149e82718bad622df36852906531", "text": "The classical psychedelic drugs, including psilocybin, lysergic acid diethylamide and mescaline, were used extensively in psychiatry before they were placed in Schedule I of the UN Convention on Drugs in 1967. Experimentation and clinical trials undertaken prior to legal sanction suggest that they are not helpful for those with established psychotic disorders and should be avoided in those liable to develop them. However, those with so-called 'psychoneurotic' disorders sometimes benefited considerably from their tendency to 'loosen' otherwise fixed, maladaptive patterns of cognition and behaviour, particularly when given in a supportive, therapeutic setting. Pre-prohibition studies in this area were sub-optimal, although a recent systematic review in unipolar mood disorder and a meta-analysis in alcoholism have both suggested efficacy. The incidence of serious adverse events appears to be low. Since 2006, there have been several pilot trials and randomised controlled trials using psychedelics (mostly psilocybin) in various non-psychotic psychiatric disorders. These have provided encouraging results that provide initial evidence of safety and efficacy, however the regulatory and legal hurdles to licensing psychedelics as medicines are formidable. This paper summarises clinical trials using psychedelics pre and post prohibition, discusses the methodological challenges of performing good quality trials in this area and considers a strategic approach to the legal and regulatory barriers to licensing psychedelics as a treatment in mainstream psychiatry. This article is part of the Special Issue entitled 'Psychedelics: New Doors, Altered Perceptions'.", "title": "" }, { "docid": "6f77e74cd8667b270fae0ccc673b49a5", "text": "GeneMANIA (http://www.genemania.org) is a flexible, user-friendly web interface for generating hypotheses about gene function, analyzing gene lists and prioritizing genes for functional assays. Given a query list, GeneMANIA extends the list with functionally similar genes that it identifies using available genomics and proteomics data. GeneMANIA also reports weights that indicate the predictive value of each selected data set for the query. Six organisms are currently supported (Arabidopsis thaliana, Caenorhabditis elegans, Drosophila melanogaster, Mus musculus, Homo sapiens and Saccharomyces cerevisiae) and hundreds of data sets have been collected from GEO, BioGRID, Pathway Commons and I2D, as well as organism-specific functional genomics data sets. Users can select arbitrary subsets of the data sets associated with an organism to perform their analyses and can upload their own data sets to analyze. The GeneMANIA algorithm performs as well or better than other gene function prediction methods on yeast and mouse benchmarks. The high accuracy of the GeneMANIA prediction algorithm, an intuitive user interface and large database make GeneMANIA a useful tool for any biologist.", "title": "" }, { "docid": "c62fc94fc0fe403f3a416d897b6b9336", "text": "Nutrigenomics is the application of high-throughput genomics tools in nutrition research. 
Applied wisely, it will promote an increased understanding of how nutrition influences metabolic pathways and homeostatic control, how this regulation is disturbed in the early phase of a diet-related disease and to what extent individual sensitizing genotypes contribute to such diseases. Ultimately, nutrigenomics will allow effective dietary-intervention strategies to recover normal homeostasis and to prevent diet-related diseases.", "title": "" }, { "docid": "b6ceacf3ad3773acddc3452933b57a0f", "text": "The growing interest in robots that interact safely with humans and surroundings have prompted the need for soft structural embodiments including soft actuators. This paper explores a class of soft actuators inspired in design and construction by Pneumatic Artificial Muscles (PAMs) or McKibben Actuators. These bio-inspired actuators consist of fluid-filled elastomeric enclosures that are reinforced with fibers along a specified orientation and are in general referred to as Fiber-Reinforced Elastomeric Enclosures (FREEs). Several recent efforts have mapped the fiber configurations to instantaneous deformation, forces, and moments generated by these actuators upon pressurization with fluid. However most of the actuators, when deployed undergo large deformations and large overall motions thus necessitating the study of their large-deformation kinematics. This paper analyzes the large deformation kinematics of FREEs. A concept called configuration memory effect is proposed to explain the smart nature of these actuators. This behavior is tested with experiments and finite element modeling for a small sample of actuators. The paper also describes different possibilities and design implications of the large deformation behavior of FREEs in successful creation of soft robots.", "title": "" }, { "docid": "cb8ffb03187583308eb8409d75a54172", "text": "Active Traffic Management (ATM) systems have been introduced by transportation agencies to manage recurrent and non-recurrent congestion. ATM systems rely on the interconnectivity of components made possible by wired and/or wireless networks. Unfortunately, this connectivity that supports ATM systems also provides potential system access points that results in vulnerability to cyberattacks. This is becoming more pronounced as ATM systems begin to integrate internet of things (IoT) devices. Hence, there is a need to rigorously evaluate ATM systems for cyberattack vulnerabilities, and explore design concepts that provide stability and graceful degradation in the face of cyberattacks. In this research, a prototype ATM system along with a real-time cyberattack monitoring system were developed for a 1.5-mile section of I-66 in Northern Virginia. The monitoring system detects deviation from expected operation of an ATM system by comparing lane control states generated by the ATM system with lane control states deemed most likely by the monitoring system. This comparison provides the functionality to continuously monitor the system for abnormalities that would result from a cyberattack. In case of any deviation between two sets of states, the monitoring system displays the lane control states generated by the back-up data source. In a simulation experiment, the prototype ATM system and cyberattack monitoring system were subject to emulated cyberattacks. 
The evaluation results showed that the ATM system, when operating properly in the absence of attacks, improved average vehicle speed in the system to 60 mph (a 13% increase compared to the baseline case without ATM). However, when subject to cyberattack, the mean speed reduced by 15% compared to the case with the ATM system and was similar to the baseline case. This illustrates that the effectiveness of the ATM system was negated by cyberattacks. The monitoring system, however, allowed the ATM system to revert to an expected state with a mean speed of 59 mph and reduced the negative impact of cyberattacks. These results illustrate the need to revisit ATM system design concepts as a means to protect against cyberattacks in addition to traditional system intrusion prevention approaches.", "title": "" }, { "docid": "4737fe7f718f79c74595de40f8778da2", "text": "In this paper we describe a method of procedurally generating maps using Markov chains. This method learns statistical patterns from human-authored maps, which are assumed to be of high quality. Our method then uses those learned patterns to generate new maps. We present a collection of strategies both for training the Markov chains, and for generating maps from such Markov chains. We then validate our approach using the game Super Mario Bros., by evaluating the quality of the produced maps based on different configurations for training and generation.", "title": "" }, { "docid": "eb8e210fe9704a23157baffd36f1bdbb", "text": "This paper describes recent work on the DynDial project towards incremental semantic interpretation in dialogue. We outline our domain-general grammar-based approach, using a variant of Dynamic Syntax integrated with Type Theory with Records and Davidsonian event-based semantics. We describe a Java-based implementation of the parser, used within the Jindigo framework to produce an incremental dialogue system capable of handling inherently incremental phenomena such as split utterances, adjuncts, and mid-sentence clarification requests or backchannels.", "title": "" }, { "docid": "5b984d57ad0940838b703eadd7c733b3", "text": "Neural sequence generation is commonly approached by using maximum-likelihood (ML) estimation or reinforcement learning (RL). However, it is known that they have their own shortcomings; ML presents training/testing discrepancy, whereas RL suffers from sample inefficiency. We point out that it is difficult to resolve all of the shortcomings simultaneously because of a tradeoff between ML and RL. In order to counteract these problems, we propose an objective function for sequence generation using α-divergence, which leads to an ML-RL integrated method that exploits better parts of ML and RL. We demonstrate that the proposed objective function generalizes ML and RL objective functions because it includes both as its special cases (ML corresponds to α → 0 and RL to α → 1). We provide a proposition stating that the difference between the RL objective function and the proposed one monotonically decreases with increasing α. Experimental results on machine translation tasks show that minimizing the proposed objective function achieves better sequence generation performance than ML-based methods.", "title": "" }, { "docid": "b8087b15edb4be5771aef83b1b18f723", "text": "The success of visual telecommunication systems depends on their ability to transmit and display users' natural nonverbal behavior. 
While video-mediated communication (VMC) is the most widely used form of interpersonal remote interaction, avatar-mediated communication (AMC) in shared virtual environments is increasingly common. This paper presents two experiments investigating eye tracking in AMC. The first experiment compares the degree of social presence experienced in AMC and VMC during truthful and deceptive discourse. Eye tracking data (gaze, blinking, and pupil size) demonstrates that oculesic behavior is similar in both mediation types, and uncovers systematic differences between truth telling and lying. Subjective measures show users' psychological arousal to be greater in VMC than AMC. The second experiment demonstrates that observers of AMC can more accurately detect truth and deception when viewing avatars with added oculesic behavior driven by eye tracking. We discuss implications for the design of future visual telecommunication media interfaces.", "title": "" }, { "docid": "d4896aa12be18aea9a6639422ee12d92", "text": "Recently, tag recommendation (TR) has become a very hot research topic in data mining and related areas. However, neither co-occurrence based methods which only use the item-tag matrix nor content based methods which only use the item content information can achieve satisfactory performance in real TR applications. Hence, how to effectively combine the item-tag matrix, item content information, and other auxiliary information into the same recommendation framework is the key challenge for TR. In this paper, we first adapt the collaborative topic regression (CTR) model, which has been successfully applied for article recommendation, to combine both item-tag matrix and item content information for TR. Furthermore, by extending CTR we propose a novel hierarchical Bayesian model, called CTR with social regularization (CTR-SR), to seamlessly integrate the item-tag matrix, item content information, and social networks between items into the same principled model. Experiments on real data demonstrate the effectiveness of our proposed models.", "title": "" } ]
scidocsrr
a739180950471aa7d7261ab1a8b9800f
Example-dependent cost-sensitive decision trees
[ { "docid": "b4bc5ccbe0929261856d18272c47a3de", "text": "ROC analysis is increasingly being recognised as an important tool for evaluation and comparison of classifiers when the operating characteristics (i.e. class distribution and cost parameters) are not known at training time. Usually, each classifier is characterised by its estimated true and false positive rates and is represented by a single point in the ROC diagram. In this paper, we show how a single decision tree can represent a set of classifiers by choosing different labellings of its leaves, or equivalently, an ordering on the leaves. In this setting, rather than estimating the accuracy of a single tree, it makes more sense to use the area under the ROC curve (AUC) as a quality metric. We also propose a novel splitting criterion which chooses the split with the highest local AUC. To the best of our knowledge, this is the first probabilistic splitting criterion that is not based on weighted average impurity. We present experiments suggesting that the AUC splitting criterion leads to trees with equal or better AUC value, without sacrificing accuracy if a single labelling is chosen.", "title": "" }, { "docid": "dbf5d0f6ce7161f55cf346e46150e8d7", "text": "Loan fraud is a critical factor in the insolvency of financial institutions, so companies make an effort to reduce the loss from fraud by building a model for proactive fraud prediction. However, there are still two critical problems to be resolved for the fraud detection: (1) the lack of cost sensitivity between type I error and type II error in most prediction models, and (2) highly skewed distribution of class in the dataset used for fraud detection because of sparse fraud-related data. The objective of this paper is to examine whether classification cost is affected both by the cost-sensitive approach and by skewed distribution of class. To that end, we compare the classification cost incurred by a traditional cost-insensitive classification approach and two cost-sensitive classification approaches, Cost-Sensitive Classifier (CSC) and MetaCost. Experiments were conducted with a credit loan dataset from a major financial institution in Korea, while varying the distribution of class in the dataset and the number of input variables. The experiments showed that the lowest classification cost was incurred when the MetaCost approach was used and when non-fraud data and fraud data were balanced. In addition, the dataset that includes all delinquency variables was shown to be most effective on reducing the classification cost. 2011 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "e9698e55abb8cee0f3a5663517bd0037", "text": "0377-2217/$ see front matter 2008 Elsevier B.V. A doi:10.1016/j.ejor.2008.06.027 * Corresponding author. Tel.: +32 16326817. E-mail address: Nicolas.Glady@econ.kuleuven.ac.b The definition and modeling of customer loyalty have been central issues in customer relationship management since many years. Recent papers propose solutions to detect customers that are becoming less loyal, also called churners. The churner status is then defined as a function of the volume of commercial transactions. In the context of a Belgian retail financial service company, our first contribution is to redefine the notion of customer loyalty by considering it from a customer-centric viewpoint instead of a product-centric one. We hereby use the customer lifetime value (CLV) defined as the discounted value of future marginal earnings, based on the customer’s activity. 
Hence, a churner is defined as someone whose CLV, thus the related marginal profit, is decreasing. As a second contribution, the loss incurred by the CLV decrease is used to appraise the cost to misclassify a customer by introducing a new loss function. In the empirical study, we compare the accuracy of various classification techniques commonly used in the domain of churn prediction, including two cost-sensitive classifiers. Our final conclusion is that since profit is what really matters in a commercial environment, standard statistical accuracy measures for prediction need to be revised and a more profit oriented focus may be desirable. 2008 Elsevier B.V. All rights reserved.", "title": "" } ]
[ { "docid": "41d9e95f3a761064a57da051e809dc44", "text": "The behaviour of a driven double well Duffing-van der Pol (DVP) oscillator for a specific parametric choice (| α |= β) is studied. The existence of different attractors in the system parameters (f − ω) domain is examined and a detailed account of various steady states for fixed damping is presented. Transition from quasiperiodic to periodic motion through chaotic oscillations is reported. The intervening chaotic regime is further shown to possess islands of phase-locked states and periodic windows (including period doubling regions), boundary crisis, all the three classes of intermittencies, and transient chaos. We also observe the existence of local-global bifurcation of intermittent catastrophe type and global bifurcation of blue-sky catastrophe type during transition from quasiperiodic to periodic solutions. Using a perturbative periodic solution, an investigation of the various forms of instablities allows one to predict Neimark instablity in the (f − ω) plane and eventually results in the approximate predictive criteria for the chaotic region.", "title": "" }, { "docid": "8eb2a660107b304caf574bdf7fad3f23", "text": "To enhance torque density by harmonic current injection, optimal slot/pole combinations for five-phase permanent magnet synchronous motors (PMSM) with fractional-slot concentrated windings (FSCW) are chosen. The synchronous and the third harmonic winding factors are calculated for a series of slot/pole combinations. Two five-phase PMSM, with general FSCW (GFSCW) and modular stator and FSCW (MFSCW), are analyzed and compared in detail, including the stator structures, star of slots diagrams, and MMF harmonic analysis based on the winding function theory. The analytical results are verified by finite element method, the torque characteristics and phase back-EMF are also taken into considerations. Results show that the MFSCW PMSM can produce higher average torque, while characterized by more MMF harmonic contents and larger ripple torque.", "title": "" }, { "docid": "bf71f7f57def7633a5390b572e983bc9", "text": "With the development of the Internet, cyber-attacks are changing rapidly and the cyber security situation is not optimistic. This survey report describes key literature surveys on machine learning (ML) and deep learning (DL) methods for network analysis of intrusion detection and provides a brief tutorial description of each ML/DL method. Papers representing each method were indexed, read, and summarized based on their temporal or thermal correlations. Because data are so important in ML/DL methods, we describe some of the commonly used network datasets used in ML/DL, discuss the challenges of using ML/DL for cybersecurity and provide suggestions for research directions.", "title": "" }, { "docid": "65de85b6befbcb8cb66ceb4e4346d3a9", "text": "BACKGROUND\nClinical observations have suggested that hippotherapy may be an effective strategy for habilitating balance deficits in children with movement disorders. 
However, there is limited research to support this notion.\n\n\nOBJECTIVE\nThe purposes of this study were to assess the effectiveness of hippotherapy for the management of postural instability in children with mild to moderate balance problems and to determine whether there is a correlation between balance and function.\n\n\nDESIGN\nA repeated-measures design for a cohort of children with documented balance deficits was used.\n\n\nMETHODS\nSixteen children (9 boys and 7 girls) who were 5 to 16 years of age and had documented balance problems participated in this study. Intervention consisted of 45-minute hippotherapy sessions twice per week for 6 weeks. Two baseline assessments and 1 postintervention assessment of balance, as measured with the Pediatric Balance Scale (PBS), and of function, as measured with the Activities Scale for Kids-Performance (ASKp), were performed.\n\n\nRESULTS\nWith the Friedman analysis of variance, the PBS and the ASKp were found to be statistically significant across all measurements (P<.0001 for both measures). Post hoc analysis revealed a statistical difference between baseline and postintervention measures (P≤.017). This degree of difference resulted in large effect sizes for PBS (d=1.59) and ASKp (d=1.51) scores after hippotherapy. A Spearman rho correlation of .700 indicated a statistical association between PBS and ASKp postintervention scores (P=.003). There was no correlation between the change in PBS scores and the change in ASKp scores (r(s)=.13, P>.05).\n\n\nLIMITATIONS\nLack of a control group and the short duration between baseline assessments are study limitations.\n\n\nCONCLUSIONS\nThe findings suggest that hippotherapy may be a viable strategy for reducing balance deficits and improving the performance of daily life skills in children with mild to moderate balance problems.", "title": "" }, { "docid": "ecce348941aeda57bd66dbd7836923e6", "text": "Moana (2016) continues a tradition of Disney princess movies that perpetuate gender stereotypes. The movie contains the usual Electral undercurrent, with Moana seeking to prove her independence to her overprotective father. Moana’s partner in her adventures, Maui, is overtly hypermasculine, a trait epitomized by a phallic fishhook that is critical to his identity. Maui’s struggles with shapeshifting also reflect male anxieties about performing masculinity. Maui violates the Mother Island, first by entering her cave and then by using his fishhook to rob her of her fertility. The repercussions of this act are the basis of the plot: the Mother Island abandons her form as a nurturing, youthful female (Te Fiti) focused on creation to become a vengeful lava monster (Te Kā). At the end, Moana successfully urges Te Kā to get in touch with her true self, a brave but simple act that is sufficient to bring back Te Fiti, a passive, smiling green goddess. The association of youthful, fertile females with good and witch-like infertile females with evil implies that women’s worth and well-being are dependent upon their procreative function. Stereotypical gender tropes that also include female abuse of power and a narrow conception of masculinity merit analysis in order to further progress in recognizing and addressing patterns of gender hegemony in popular Disney films.", "title": "" }, { "docid": "1f3d84321cc2843349c5b6ef43fc8b9a", "text": "It has long been posited that among emotional stimuli, only negative threatening information modulates early shifts of attention. 
However, in the last few decades there has been an increase in research showing that attention is also involuntarily oriented toward positive rewarding stimuli such as babies, food, and erotic information. Because reproduction-related stimuli have some of the largest effects among positive stimuli on emotional attention, the present work reviews recent literature and proposes that the cognitive and cerebral mechanisms underlying the involuntarily attentional orientation toward threat-related information are also sensitive to erotic information. More specifically, the recent research suggests that both types of information involuntarily orient attention due to their concern relevance and that the amygdala plays an important role in detecting concern-relevant stimuli, thereby enhancing perceptual processing and influencing emotional attentional processes.", "title": "" }, { "docid": "58f6247a0958bf0087620921c99103b1", "text": "This paper addresses an information-theoretic aspect of k-means and spectral clustering. First, we revisit the k-means clustering and show that its objective function is approximately derived from the minimum entropy principle when the Renyi's quadratic entropy is used. Then we present a maximum within-clustering association that is derived using a quadratic distance measure in the framework of minimum entropy principle, which is very similar to a class of spectral clustering algorithms that is based on the eigen-decomposition method.", "title": "" }, { "docid": "6377b90960aaaf2e815339a3315d72cd", "text": "Coronary artery disease (CAD) is one of the most common causes of death worldwide. In the last decade, significant advancements in CAD treatment have been made. The existing treatment is medical, surgical or a combination of both depending on the extent, severity and clinical presentation of CAD. The collaboration between different science disciplines such as biotechnology and tissue engineering has led to the development of novel therapeutic strategies such as stem cells, nanotechnology, robotic surgery and other advancements (3-D printing and drugs). These treatment modalities show promising effects in managing CAD and associated conditions. Research on stem cells focuses on studying the potential for cardiac regeneration, while nanotechnology research investigates nano-drug delivery and percutaneous coronary interventions including stent modifications and coatings. This article aims to provide an update on the literature (in vitro, translational, animal and clinical) related to these novel strategies and to elucidate the rationale behind their potential treatment of CAD. Through the extensive and continued efforts of researchers and clinicians worldwide, these novel strategies hold the promise to be effective alternatives to existing treatment modalities.", "title": "" }, { "docid": "ddffafc22209fc71c6c572dea0ddfca4", "text": "In the context of an ongoing digital transformation, companies across all industries are confronted with the challenge to exploit IT-induced business opportunities and to simultaneously avert IT-induced business risks. Due to this development, questions about a company’s overall status with regard to its digital transformation become more and more relevant. In recent years, an unclear number of maturity models was established in order to address these kind of questions by assessing a company’s digital maturity. 
Purpose of this Report is to show the large range of digital maturity models and to evaluate overall potential for approximating a company’s digital transformation status.", "title": "" }, { "docid": "a27660db1d7d2a6724ce5fd8991479f7", "text": "An electromyographic (EMG) activity pattern for individual muscles in the gait cycle exhibits a great deal of intersubject, intermuscle and context-dependent variability. Here we examined the issue of common underlying patterns by applying factor analysis to the set of EMG records obtained at different walking speeds and gravitational loads. To this end healthy subjects were asked to walk on a treadmill at speeds of 1, 2, 3 and 5 kmh(-1) as well as when 35-95% of the body weight was supported using a harness. We recorded from 12-16 ipsilateral leg and trunk muscles using both surface and intramuscular recording and determined the average, normalized EMG of each record for 10-15 consecutive step cycles. We identified five basic underlying factors or component waveforms that can account for about 90% of the total waveform variance across different muscles during normal gait. Furthermore, while activation patterns of individual muscles could vary dramatically with speed and gravitational load, both the limb kinematics and the basic EMG components displayed only limited changes. Thus, we found a systematic phase shift of all five factors with speed in the same direction as the shift in the onset of the swing phase. This tendency for the factors to be timed according to the lift-off event supports the idea that the origin of the gait cycle generation is the propulsion rather than heel strike event. The basic invariance of the factors with walking speed and with body weight unloading implies that a few oscillating circuits drive the active muscles to produce the locomotion kinematics. A flexible and dynamic distribution of these basic components to the muscles may result from various descending and proprioceptive signals that depend on the kinematic and kinetic demands of the movements.", "title": "" }, { "docid": "002572cf1381257e47f74fc2de9bdc83", "text": "As information technology becomes integral to the products and services in a growing range of industries, there has been a corresponding surge of interest in understanding how firms can effectively formulate and execute digital business strategies. This fusion of IT within the business environment gives rise to a strategic tension between investing in digital artifacts for long-term value creation and exploiting them for short-term value appropriation. Further, relentless innovation and competitive pressures dictate that firms continually adapt these artifacts to changing market and technological conditions, but sustained profitability requires scalable architectures that can serve a large customer base and stable interfaces that support integration across a diverse ecosystem of complementary offerings. The study of digital business strategy needs new concepts and methods to examine how these forces are managed in pursuit of competitive advantage. We conceptualize the logic of digital business strategy in terms of two constructs: design capital (i.e., the cumulative stock of designs owned or controlled by a firm), and design moves (i.e., the discrete strategic actions that enlarge, reduce, or modify a firm’s stock of designs). We also identify two salient dimensions of design capital, namely option value and technical debt. 
Using embedded case studies of four firms, we develop a rich conceptual model and testable propositions to lay out a design-based logic of digital business strategy. This logic highlights the interplay between design moves and design capital in the context of digital business strategy and contributes to a growing body of insights that link the design of digital artifacts to competitive strategy and firm-level performance.", "title": "" }, { "docid": "32172b93cb6050c4a93b8323a56ad6b4", "text": "This work presents a novel method for automatic detection and identification of heart sounds. Homomorphic filtering is used to obtain a smooth envelogram of the phonocardiogram, which enables a robust detection of events of interest in the heart sound signal. Sequences of features extracted from the detected events are used as observations of a hidden Markov model. It is demonstrated that the task of detection and identification of the major heart sounds can be learned from unlabelled phonocardiograms by an unsupervised training process and without the assistance of any additional synchronizing channels.", "title": "" }, { "docid": "68e4c1122a2339a89cb3873e1013a26e", "text": "Although there is a voluminous literature on mass media effects on body image concerns of young adult women in the U.S., there has been relatively little theoretically-driven research on processes and effects of social media on young women’s body image and self-perceptions. Yet given the heavy online presence of young adults, particularly women, and their reliance on social media, it is important to appreciate ways that social media can influence perceptions of body image and body image disturbance. Drawing on communication and social psychological theories, the present article articulates a series of ideas and a framework to guide research on social media effects on body image concerns of young adult women. The interactive format and content features of social media, such as the strong peer presence and exchange of a multitude of visual images, suggest that social media, working via negative social comparisons, transportation, and peer normative processes, can significantly influence body image concerns. A model is proposed that emphasizes the impact of predisposing individual vulnerability characteristics, social media uses, and mediating psychological processes on body dissatisfaction and eating disorders. Research-based ideas about social media effects on male body image, intersections with ethnicity, and ameliorative strategies are also discussed.", "title": "" }, { "docid": "1f6f4025fa450b845cefe5da2b842031", "text": "The Carnegie Mellon In Silico Vox project seeks to move best-quality speech recognition technology from its current software-only form into a range of efficient all-hardware implementations. The central thesis is that, like graphics chips, the application is simply too performance hungry, and too power sensitive, to stay as a large software application. As a first step in this direction, we describe the design and implementation of a fully functional speech-to-text recognizer on a single Xilinx XUP platform. The design recognizes a 1000 word vocabulary, is speaker-independent, recognizes continuous (connected) speech, and is a \"live mode\" engine, wherein recognition can start as soon as speech input appears. To the best of our knowledge, this is the most complex recognizer architecture ever fully committed to a hardware-only form. 
The implementation is extraordinarily small, and achieves the same accuracy as state-of-the-art software recognizers, while running at a fraction of the clock speed.", "title": "" }, { "docid": "5d9106a06f606cefb3b24fb14c72d41a", "text": "Most existing relation extraction models make predictions for each entity pair locally and individually, while ignoring implicit global clues available in the knowledge base, sometimes leading to conflicts among local predictions from different entity pairs. In this paper, we propose a joint inference framework that utilizes these global clues to resolve disagreements among local predictions. We exploit two kinds of clues to generate constraints which can capture the implicit type and cardinality requirements of a relation. Experimental results on three datasets, in both English and Chinese, show that our framework outperforms the state-of-theart relation extraction models when such clues are applicable to the datasets. And, we find that the clues learnt automatically from existing knowledge bases perform comparably to those refined by human.", "title": "" }, { "docid": "eec33c75a0ec9b055a857054d05bcf54", "text": "We introduce a logical process of three distinct phases to begin the evaluation of a new 3D dosimetry array. The array under investigation is a hollow cylinder phantom with diode detectors fixed in a helical shell forming an \"O\" axial detector cross section (ArcCHECK), with comparisons drawn to a previously studied 3D array with diodes fixed in two crossing planes forming an \"X\" axial cross section (Delta⁴). Phase I testing of the ArcCHECK establishes: robust relative calibration (response equalization) of the individual detectors, minor field size dependency of response not present in a 2D predecessor, and uncorrected angular response dependence in the axial plane. Phase II testing reveals vast differences between the two devices when studying fixed-width full circle arcs. These differences are primarily due to arc discretization by the TPS that produces low passing rates for the peripheral detectors of the ArcCHECK, but high passing rates for the Delta⁴. Similar, although less pronounced, effects are seen for the test VMAT plans modeled after the AAPM TG119 report. The very different 3D detector locations of the two devices, along with the knock-on effect of different percent normalization strategies, prove that the analysis results from the devices are distinct and noninterchangeable; they are truly measuring different things. The value of what each device measures, namely their correlation with--or ability to predict--clinically relevant errors in calculation and/or delivery of dose is the subject of future Phase III work.", "title": "" }, { "docid": "870ac1e223cc937e5f4416c9b2ee4a89", "text": "Effective weed control, using either mechanical or chemical means, relies on knowledge of the crop and weed plant occurrences in the field. This knowledge can be obtained automatically by analyzing images collected in the field. Many existing methods for plant detection in images make the assumption that plant foliage does not overlap. This assumption is often violated, reducing the performance of existing methods. This study overcomes this issue by training a convolutional neural network to create a pixel-wise classification of crops, weeds and soil in RGB images from fields, in order to know the exact position of the plants. This training is based on simulated top-down images of weeds and maize in fields. 
The results show a pixel accuracy over 94% and a 100% detection rate of both maize and weeds, when tested on real images, while maintaining a high intersection over union. The system can handle 2.4 images per second for images with a resolution of 1MPix, when using an Nvidia Titan X GPU.", "title": "" }, { "docid": "c04065ff9cbeba50c0d70e30ab2e8b53", "text": "A linear model is suggested for the influence of covariates on the intensity function. This approach is less vulnerable than the Cox model to problems of inconsistency when covariates are deleted or the precision of covariate measurements is changed. A method of non-parametric estimation of regression functions is presented. This results in plots that may give information on the change over time in the influence of covariates. A test method and two goodness of fit plots are also given. The approach is illustrated by simulation as well as by data from a clinical trial of treatment of carcinoma of the oropharynx.", "title": "" }, { "docid": "e29d3ab3d3b9bd6cbff1c2a79a6c3070", "text": "This paper presents a study of passive Dickson based envelope detectors operating in the quadratic small signal regime, specifically intended to be used in the RF front end of sensing units of IoE sensor nodes. Critical parameters such as open-circuit voltage sensitivity (OCVS), charge time, input impedance, and output noise are studied and simplified circuit models are proposed to predict the behavior of the detector, resulting in practical design intuitions. There is strong agreement between model predictions, simulation results and measurements of 15 representative test structures that were fabricated in a 130 nm RF CMOS process.", "title": "" }, { "docid": "eae5713c086986c4ef346d85ce06bf3d", "text": "We describe a study designed to assess properties of a P300 brain-computer interface (BCI). The BCI presents the user with a matrix containing letters and numbers. The user attends to a character to be communicated and the rows and columns of the matrix briefly intensify. Each time the attended character is intensified it serves as a rare event in an oddball sequence and it elicits a P300 response. The BCI works by detecting which character elicited a P300 response. We manipulated the size of the character matrix (either 3 x 3 or 6 x 6) and the duration of the interstimulus interval (ISI) between intensifications (either 175 or 350 ms). Online accuracy was highest for the 3 x 3 matrix 175-ms ISI condition, while bit rate was highest for the 6 x 6 matrix 175-ms ISI condition. Average accuracy in the best condition for each subject was 88%. P300 amplitude was significantly greater for the attended stimulus and for the 6 x 6 matrix. This work demonstrates that matrix size and ISI are important variables to consider when optimizing a BCI system for individual users and that a P300-BCI can be used for effective communication.", "title": "" } ]
scidocsrr
d5101672c57631d725493d0793715aee
Universal Tuning System for Series-Resonant Induction Heating Applications
[ { "docid": "5cd9031a58457c0cb5fb2d49f1da40f6", "text": "Induction heating (IH) technology is nowadays the heating technology of choice in many industrial, domestic, and medical applications due to its advantages regarding efficiency, fast heating, safety, cleanness, and accurate control. Advances in key technologies, i.e., power electronics, control techniques, and magnetic component design, have allowed the development of highly reliable and cost-effective systems, making this technology readily available and ubiquitous. This paper reviews IH technology summarizing the main milestones in its development and analyzing the current state of art of IH systems in industrial, domestic, and medical applications, paying special attention to the key enabling technologies involved. Finally, an overview of future research trends and challenges is given, highlighting the promising future of IH technology.", "title": "" }, { "docid": "9cd18dd8709ae798c787ec44128bf8cd", "text": "This paper presents a cascaded coil flux control based on a Current Source Parallel Resonant Push-Pull Inverter (CSPRPI) for Induction Heating (IH) applications. The most important problems associated with current source parallel resonant inverters are start-up problems and the variable response of IH systems under load variations. This paper proposes a simple cascaded control method to increase an IH system’s robustness to load variations. The proposed IH has been analyzed in both the steady state and the transient state. Based on this method, the resonant frequency is tracked using Phase Locked Loop (PLL) circuits using a Multiplier Phase Detector (MPD) to achieve ZVS under the transient condition. A laboratory prototype was built with an operating frequency of 57-59 kHz and a rated power of 300 W. Simulation and experimental results verify the validity of the proposed power control method and the PLL dynamics.", "title": "" } ]
[ { "docid": "83d42bb6ce4d4bf73f5ab551d0b78000", "text": "An integrated 19-GHz Colpitts oscillator for a 77-GHz FMCW automotive radar frontend application is presented. The Colpitts oscillator has been realized in a fully differential circuit architecture. The VCO's 19 GHz output signal is buffered with an emitter follower stage and used as a LO signal source for a 77-GHz radar transceiver architecture. The LO frequency is quadrupled and amplified to drive the switching quad of a Gilbert-type mixer. As the quadrupler-mixer chip is required to describe the radar-sensor it is introduced, but the main focus of this paper aims the design of the sensor's LO source. In addition, the VCO-chip provides a divide-by-8 stage. The divider is either used for on-wafer measurements or later on in a PLL application.", "title": "" }, { "docid": "c08e9731b9a1135b7fb52548c5c6f77e", "text": "Many geometry processing applications, such as morphing, shape blending, transfer of texture or material properties, and fitting template meshes to scan data, require a bijective mapping between two or more models. This mapping, or cross-parameterization, typically needs to preserve the shape and features of the parameterized models, mapping legs to legs, ears to ears, and so on. Most of the applications also require the models to be represented by compatible meshes, i.e. meshes with identical connectivity, based on the cross-parameterization. In this paper we introduce novel methods for shape preserving cross-parameterization and compatible remeshing. Our cross-parameterization method computes a low-distortion bijective mapping between models that satisfies user prescribed constraints. Using this mapping, the remeshing algorithm preserves the user-defined feature vertex correspondence and the shape correlation between the models. The remeshing algorithm generates output meshes with significantly fewer elements compared to previous techniques, while accurately approximating the input geometry. As demonstrated by the examples, the compatible meshes we construct are ideally suitable for morphing and other geometry processing applications.", "title": "" }, { "docid": "d6f52736d78a5b860bdb364f64e4523c", "text": "Deep convolutional neural networks (CNN) have recently been shown to generate promising results for aesthetics assessment. However, the performance of these deep CNN methods is often compromised by the constraint that the neural network only takes the fixed-size input. To accommodate this requirement, input images need to be transformed via cropping, warping, or padding, which often alter image composition, reduce image resolution, or cause image distortion. Thus the aesthetics of the original images is impaired because of potential loss of fine grained details and holistic image layout. However, such fine grained details and holistic image layout is critical for evaluating an images aesthetics. In this paper, we present an Adaptive Layout-Aware Multi-Patch Convolutional Neural Network (A-Lamp CNN) architecture for photo aesthetic assessment. This novel scheme is able to accept arbitrary sized images, and learn from both fined grained details and holistic image layout simultaneously. To enable training on these hybrid inputs, we extend the method by developing a dedicated double-subnet neural network structure, i.e. a Multi-Patch subnet and a Layout-Aware subnet. We further construct an aggregation layer to effectively combine the hybrid features from these two subnets. 
Extensive experiments on the large-scale aesthetics assessment benchmark (AVA) demonstrate significant performance improvement over the state-of-the-art in photo aesthetic assessment.", "title": "" }, { "docid": "73e2738994b78d54d8fbad5df4622451", "text": "Although online consumer reviews (OCR) have helped consumers to know about the strengths and weaknesses of different products and find the ones that best suit their needs, they introduce a challenge for businesses to analyze them because of their volume, variety, velocity and veracity. This research investigates the predictors of readership and helpfulness of OCR using a sentiment mining approach for big data analytics. Our findings show that reviews with higher levels of positive sentiment in the title receive more readerships. Sentimental reviews with neutral polarity in the text are also perceived to be more helpful. The length and longevity of a review positively influence both its readership and helpfulness. Because the current methods used for sorting OCR may bias both their readership and helpfulness, the approach used in this study can be adopted by online vendors to develop scalable automated systems for sorting and classification of big OCR data which will benefit both vendors and consumers.", "title": "" }, { "docid": "4abae313432bbc338b096275bf3d7816", "text": "Phase change materials (PCM) take advantage of latent heat that can be stored or released from a material over a narrow temperature range. PCM possesses the ability to change their state with a certain temperature range. These materials absorb energy during the heating process as phase change takes place and release energy to the environment in the phase change range during a reverse cooling process. Insulation effect reached by the PCM depends on temperature and time. Recently, the incorporation of PCM in textiles by coating or encapsulation to make thermo-regulated smart textiles has grown interest to the researcher. Therefore, an attempt has been taken to review the working principle of PCM and their applications for smart temperature regulated textiles. Different types of phase change materials are introduced. This is followed by an account of incorporation of PCM in the textile structure are summarized. Concept of thermal comfort, clothing for cold environment, phase change materials and clothing comfort are discussed in this review paper. Some recent applications of PCM incorporated textiles are stated. Finally, the market of PCM in textiles field and some challenges are mentioned in this review paper. 2007 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "290ded425fe91bb0898a0e2fd815d575", "text": "We introduce the concept of the point cloud database, a new kind of database system aimed primarily towards scientific applications. Many scientific observations, experiments, feature extraction algorithms and large-scale simulations produce enormous amounts of data that are better represented as sparse (but often highly-clustered) points in a k-dimensional (k ≲ 10) metric space than on a multi-dimensional grid. Dimensionality reduction techniques, such as principal components, are also widely-used to project high dimensional data into similarly low dimensional spaces. Analysis techniques developed to work on multi-dimensional data points are usually implemented as in-memory algorithms and need to be modified to work in distributed cluster environments and on large amounts of disk-resident data. 
We conclude that the relational model, with certain additions, is appropriate for point clouds, but point cloud databases must also provide unique set of spatial search and proximity join operators, indexing schemes, and query language constructs that make them a distinct class of database systems.", "title": "" }, { "docid": "7d00770a64f25b728f149939fd2c1e7c", "text": "Replicated databases that use quorum-consensus algorithms to perform majority voting are prone to deadlocks. Due to the P-out-of-Q nature of quorum requests, deadlocks that arise are generalized deadlocks and are hard to detect. We present an efficient distributed algorithm to detect generalized deadlocks in replicated databases. The algorithm performs reduction of a distributed waitfor-graph (WFG) to determine the existence of a deadlock. if sufficient information to decide the reducibility of a node is not available at that node, the algorithm attempts reduction later in a lazy manner. We prove the correctness of the algorithm. The algorithm has a message complexity of 2n messages and a worst-case time complexity of 2d + 2 hops, where c is the number of edges and d is the diameter of the WFG. The algorithm is shown to perform significantly better in both time and message complexity than the best known existing algorithms. We conjecture that this is an optimal algorithm, in time and message complexity, to detect generalized deadlocks if no transaction has complete knowledge of the topology of the WFG or the system and the deadlock detection is to be carried out in a distributed manner.", "title": "" }, { "docid": "0e6d764934629e4ecfb85b5d49696b79", "text": "Traffic in large cities is one of the biggest problems that can lead to excess utilization of fuel by motor vehicles, accidents, and the waste of time of citizens. To have an effective and efficient city management system, it is necessary to intelligently control all the traffic light signals. For this reason, many researchers have tried to present optimal algorithms for traffic signal control. Some common methods exist for the control of traffic light signal, including the preset cycle time controller and vehicle-actuated controller. Results obtained from previous works indicate that these traffic light signal controllers do not exhibit an effective performance at moments of traffic peak. So to resolve this dilemma at such moments, traffic cops are employed. The application of fuzzy logic in traffic signal controllers has been seriously considered for several decades and many research works have been carried out in this regard. The fuzzy signal controllers perform the optimization task by minimizing the waiting time of the vehicles and maximizing the traffic capacity. A new fuzzy logic based algorithm is proposed in this article, which not only can reduce the waiting time and the number of vehicles behind a traffic light and at an intersection, but can consider the traffic situations at adjacent intersections as well. Finally, a comparison is made between the designed fuzzy controller and the preset cycle time controller.", "title": "" }, { "docid": "544c1608c03535121b8274ff51343e38", "text": "As multilevel models (MLMs) are useful in understanding relationships existent in hierarchical data structures, these models have started to be used more frequently in research developed in social and health sciences. In order to draw meaningful conclusions from MLMs, researchers need to make sure that the model fits the data. 
Model fit, and thus, ultimately model selection can be assessed by examining changes in several fit indices across nested and/or nonnested models [e.g., -2 log likelihood (-2LL), Akaike Information Criterion (AIC), and Schwarz’s Bayesian Information Criterion (BIC)]. In addition, the difference in pseudo-R 2 is often used to examine the practical significance between two nested models. Considering the importance of using all of these measures when determining model selection, researchers who use analyze multilevel models would benefit from being able to easily assess model fit across estimated models. Whereas SAS PROC MIXED produces the -2LL, AIC, and BIC, it does not provide the actual change in these fit indices or the change in pseudo-R 2 between different nested and non-nested models. In order to make this information more attainable, Bardenheier (2009) developed a macro that allowed researchers using PROC MIXED to obtain the test statistic for the difference in -2LL along with the p-value of the Likelihood Ratio Test (LRT). As an extension of Bardenheier’s work, this paper provides a comprehensive SAS macro that incorporates changes in model fit statistics (-2LL, AIC and BIC) as well as change in pseudo-R 2 . By utilizing data from PROC MIXED ODS tables, the macro produces a comprehensive table of changes in model fit measures. Thus, this expanded macro allows SAS users to examine model fit in both nested and non-nested models and both in terms of statistical and practical significance. This paper provides a review of the different methods used to assess model fit in multilevel analysis, the macro programming language, an executed example of the macro, and a copy of the complete macro.", "title": "" }, { "docid": "7f4701d8c9f651c3a551a91d19fd28d9", "text": "Road extraction from aerial images has been a hot research topic in the field of remote sensing image analysis. In this letter, a semantic segmentation neural network, which combines the strengths of residual learning and U-Net, is proposed for road area extraction. The network is built with residual units and has similar architecture to that of U-Net. The benefits of this model are twofold: first, residual units ease training of deep networks. Second, the rich skip connections within the network could facilitate information propagation, allowing us to design networks with fewer parameters, however, better performance. We test our network on a public road data set and compare it with U-Net and other two state-of-the-art deep-learning-based road extraction methods. The proposed approach outperforms all the comparing methods, which demonstrates its superiority over recently developed state of the arts.", "title": "" }, { "docid": "d5284538412222101f084fee2dc1acc4", "text": "The hand is an integral component of the human body, with an incredible spectrum of functionality. In addition to possessing gross and fine motor capabilities essential for physical survival, the hand is fundamental to social conventions, enabling greeting, grooming, artistic expression and syntactical communication. The loss of one or both hands is, thus, a devastating experience, requiring significant psychological support and physical rehabilitation. The majority of hand amputations occur in working-age males, most commonly as a result of work-related trauma or as casualties sustained during combat. For millennia, humans have used state-of-the-art technology to design clever devices to facilitate the reintegration of hand amputees into society. 
The present article provides a historical overview of the progress in replacing a missing hand, from early iron hands intended primarily for use in battle, to today's standard body-powered and myoelectric prostheses, to revolutionary advancements in the restoration of sensorimotor control with targeted reinnervation and hand transplantation.", "title": "" }, { "docid": "fb2b4ebce6a31accb3b5407f24ad64ba", "text": "The number of multi-robot systems deployed in field applications has risen dramatically over the years. Nevertheless, supervising and operating multiple robots at once is a difficult task for a single operator to execute. In this paper we propose a novel approach for utilizing advising automated agents when assisting an operator to better manage a team of multiple robots in complex environments. We introduce the Myopic Advice Optimization (MYAO) Problem and exemplify its implementation using an agent for the Search And Rescue (SAR) task. Our intelligent advising agent was evaluated through extensive field trials, with 44 non-expert human operators and 10 low-cost mobile robots, in simulation and physical deployment, and showed a significant improvement in both team performance and the operator’s satisfaction.", "title": "" }, { "docid": "89a1e532c8efe66a65a60a8635e37593", "text": "This paper presents an optimization based approach for cooperative multiple UAV attack missions. The objective is to determine the minimum resources required to coordinately attack a target at a given set of directions. We restrict the paths of the munitions to direct Dubins paths to satisfy field of view constraints and to avoid certain undesirable paths. The proposed algorithm derives the feasible positions and headings for each attack angle, and determines intersection regions corresponding to any two attack angles. We pose a set cover problem, the solution of which gives the minimum number of UAVs required to accomplish the mission.", "title": "" }, { "docid": "0879399fcb38c103a0e574d6d9010215", "text": "We present a content-based method for recommending citations in an academic paper draft. We embed a given query document into a vector space, then use its nearest neighbors as candidates, and rerank the candidates using a discriminative model trained to distinguish between observed and unobserved citations. Unlike previous work, our method does not require metadata such as author names which can be missing, e.g., during the peer review process. Without using metadata, our method outperforms the best reported results on PubMed and DBLP datasets with relative improvements of over 18% in F1@20 and over 22% in MRR. We show empirically that, although adding metadata improves the performance on standard metrics, it favors selfcitations which are less useful in a citation recommendation setup. We release an online portal for citation recommendation based on our method,1 and a new dataset OpenCorpus of 7 million research articles to facilitate future research on this task.", "title": "" }, { "docid": "2a31c9025e78b5a895d6bb64a6df3578", "text": "Galhardo L, Oliveira RF. Psychological Stress and Welfare in Fish. Annu Rev Biomed Sci 2009;11:1-20. The ability to respond to stress is vital to the survival of any living organism, though sustained reactions can become detrimental to the health and welfare of animals. Stress responses of vertebrates are known through several studies in their physiological, behavioural and psychological components, under acute and chronic contexts. 
In fish, the physiological and behavioural aspects of stress are considerably well known phenomena and show striking similarities to those of other vertebrates. However, the psychological component is not well known. Some authors deny mental experiences to fish on the basis of their lack of neocortex. Nevertheless, recent studies have shown neuroendocrine, cognitive and emotional processes in fish that are not only equivalent to other vertebrates, but also allow inferring some forms of mental representation. The integration of psychological elements in fish stress physiology is insufficiently studied, but, as discussed in this article, there is already indirect evidence to admit that some form of stimuli appraisal can take place in fish. This fact has profound implications on the regulation of the stress response, as well as on fish welfare and its management. ©by São Paulo State University ISSN 1806-8774", "title": "" }, { "docid": "68e137f9c722f833a7fdbc8032fc58be", "text": "BACKGROUND\nChronic Obstructive Pulmonary Disease (COPD) has been a leading cause of morbidity and mortality worldwide, over the years. In 1995, the implementation of a respiratory function survey seemed to be an adequate way to draw attention to neglected respiratory symptoms and increase the awareness of spirometry surveys. By 2002 there were new consensual guidelines in place and the awareness that prevalence of COPD depended on the criteria used for airway obstruction definition. The purpose of this study is to revisit the two studies and to turn public some of the data and respective methodologies.\n\n\nMETHODS\nFrom Pneumobil study database of 12,684 subjects, only the individuals with 40+ years old (n = 9.061) were selected. The 2002 study included a randomized representative sample of 1,384 individuals with 35-69 years old.\n\n\nRESULTS\nThe prevalence of COPD was 8.96% in Pneumobil and 5.34% in the 2002 study. In both studies, presence of COPD was greater in males and there was a positive association between presence of COPD and older age groups. Smokers and ex-smokers showed a higher proportion of cases of COPD.\n\n\nCONCLUSIONS\nPrevalence in Portugal is lower than in other European countries. This may be related to lower smokers' prevalence. Globally, the most important risk factors associated with COPD were age over 60 years, male gender and smoking exposure. All aspects and limitations regarding different recruitment methodologies and different criteria for defining COPD cases highlight the need of a standardized method to evaluate COPD prevalence and associated risks factors, whose results can be compared across countries, as it is the case of BOLD project.", "title": "" }, { "docid": "208c855d4ff1f756147d1a019dec99e0", "text": "When analyzing data, outlying observations cause problems because they may strongly influence the result. Robust statistics aims at detecting the outliers by searching for the model fitted by the majority of the data. We present an overview of several robust methods and outlier detection tools. We discuss robust procedures for univariate, low-dimensional, and high-dimensional data such as estimation of location and scatter, linear regression, principal component analysis, and classification. C © 2011 John Wiley & Sons, Inc. 
WIREs Data Mining Knowl Discov 2011 1 73–79 DOI: 10.1002/widm.2", "title": "" }, { "docid": "0d8f504eb7518f32c8c99e0ee9448389", "text": "Contemporary MOSFET mathematical models contain many parameters, most of which have little or no meaning to circuit designers. Designers therefore, continue to use obsolete models -such as the MOSFET square law -for circuit design calculations. However, low-voltage, lowpower systems development demands more advanced circuit design techniques. In this paper I present a brief literature review of MOSFET modeling, which has culminated in the development of the Advanced Compact MOSFET model. Next, I discuss the key ideas and equations of the ACM model, a physically based model with few parameters and equations. Additionally, I show that the ACM model can aid designers in small and large signal circuit analysis in three major respects. First, the ACM model is continuous throughout all regions of operation. Second, terms in ACM model equations appear explicitly in equations that specify circuit performance. Third, the ACM model can aid designers in neglecting MOSFET small signal components that have little influence on circuit performance. Lastly, I conclude with a brief discussion of transconductor linearity, and conclude by mentioning some promising areas of research. The Advanced Compact MOSFET Model and its Application to Inversion Coefficient Based Circuit Design Sean T. Nicolson Copyright © 2002 3", "title": "" }, { "docid": "4474a6b36b2da68b9ad2da4c782049e4", "text": "A novel stochastic adaptation of the recurrent reinforcement learning (RRL) methodology is applied to daily, weekly, and monthly stock index data, and compared to results obtained elsewhere using genetic programming (GP). The data sets used have been a considered a challenging test for algorithmic trading. It is demonstrated that RRL can reliably outperform buy-and-hold for the higher frequency data, in contrast to GP which performed best for monthly data.", "title": "" } ]
scidocsrr
25e842c602026aa56d6cc25fb005f9ad
Automatic Liver Segmentation Based on Shape Constraints and Deformable Graph Cut in CT Images
[ { "docid": "048f553914e3d7419918f6862a6eacd6", "text": "Automated retinal layer segmentation of optical coherence tomography (OCT) images has been successful for normal eyes but becomes challenging for eyes with retinal diseases if the retinal morphology experiences critical changes. We propose a method to automatically segment the retinal layers in 3-D OCT data with serous retinal pigment epithelial detachments (PED), which is a prominent feature of many chorioretinal disease processes. The proposed framework consists of the following steps: fast denoising and B-scan alignment, multi-resolution graph search based surface detection, PED region detection and surface correction above the PED region. The proposed technique was evaluated on a dataset with OCT images from 20 subjects diagnosed with PED. The experimental results showed the following. 1) The overall mean unsigned border positioning error for layer segmentation is 7.87±3.36 μm, and is comparable to the mean inter-observer variability ( 7.81±2.56 μm). 2) The true positive volume fraction (TPVF), false positive volume fraction (FPVF) and positive predicative value (PPV) for PED volume segmentation are 87.1%, 0.37%, and 81.2%, respectively. 3) The average running time is 220 s for OCT data of 512 × 64 × 480 voxels.", "title": "" }, { "docid": "5325778a57d0807e9b149108ea9e57d8", "text": "This paper presents a comparison study between 10 automatic and six interactive methods for liver segmentation from contrast-enhanced CT images. It is based on results from the \"MICCAI 2007 Grand Challenge\" workshop, where 16 teams evaluated their algorithms on a common database. A collection of 20 clinical images with reference segmentations was provided to train and tune algorithms in advance. Participants were also allowed to use additional proprietary training data for that purpose. All teams then had to apply their methods to 10 test datasets and submit the obtained results. Employed algorithms include statistical shape models, atlas registration, level-sets, graph-cuts and rule-based systems. All results were compared to reference segmentations five error measures that highlight different aspects of segmentation accuracy. All measures were combined according to a specific scoring system relating the obtained values to human expert variability. In general, interactive methods reached higher average scores than automatic approaches and featured a better consistency of segmentation quality. However, the best automatic methods (mainly based on statistical shape models with some additional free deformation) could compete well on the majority of test images. The study provides an insight in performance of different segmentation approaches under real-world conditions and highlights achievements and limitations of current image analysis techniques.", "title": "" } ]
[ { "docid": "d2e19aeb2969991ec18a71c877775c44", "text": "OBJECTIVES\nTo evaluate persistence and adherence to mirabegron and antimuscarinics in Japan using data from two administrative databases.\n\n\nMETHODS\nThe present retrospective study evaluated insurance claims for employees and dependents aged ≤75 years, and pharmacy claims for outpatients. From October 2012 to September 2014, new users of mirabegron or five individual antimuscarinics indicated for overactive bladder in Japan (fesoterodine, imidafenacin, propiverine, solifenacin and tolterodine) were identified and followed for 1 year. Persistence with mirabegron and antimuscarinics were evaluated using Kaplan-Meier methods. Any associations between baseline characteristics (age, sex and previous medication use) and persistence were explored. Adherence was assessed using the medication possession ratio.\n\n\nRESULTS\nIn total, 3970 and 16 648 patients were included from the insurance and pharmacy claims databases, respectively. Mirabegron treatment was associated with longer median persistence compared with antimuscarinics (insurance claims: 44 [95% confidence intervals 37-56] vs 21 [14-28] to 30 [30-33] days, pharmacy claims: 105 [96-113] vs 62 [56-77] to 84 [77-86] days). The results were consistent when patients were stratified by age, sex and previous medication. Persistence rate at 1 year was higher for mirabegron (insurance claims: 14.0% [11.5-16.8%] vs 5.4% [4.1-7.0%] to 9.1% [5.3-14.2%], pharmacy claims: 25.9% [24.6-27.3%] vs 16.3% [14.0-18.6%] to 21.3% [20.2-22.4%]). Compared with each antimuscarinic, a higher proportion of mirabegron-treated patients had medication possession ratios ≥0.8.\n\n\nCONCLUSIONS\nThis large nationwide Japanese study shows that persistence and adherence are greater with mirabegron compared with five antimuscarinics.", "title": "" }, { "docid": "ac08ee44179751a99db0e95fe3b0ac18", "text": "In this paper we tackle the problem of generating natural route descriptions on the basis of input obtained from a commercially available way-finding system. Our framework and architecture incorporates the use of general principles drawn from the domain of natural language generation. Through examples we demonstrate that it is possible to bridge the gap between underlying data representations and natural sounding linguistic descriptions. The work presented contributes both to the area of natural language generation and to the improvement of way-finding system interfaces.", "title": "" }, { "docid": "8140838d7ef17b3d6f6c042442de0f73", "text": "The two vascular systems of our body are the blood and lymphatic vasculature. Our understanding of the cellular and molecular processes controlling the development of the lymphatic vasculature has progressed significantly in the last decade. In mammals, this is a stepwise process that starts in the embryonic veins, where lymphatic EC (LEC) progenitors are initially specified. The differentiation and maturation of these progenitors continues as they bud from the veins to produce scattered primitive lymph sacs, from which most of the lymphatic vasculature is derived. Here, we summarize our current understanding of the key steps leading to the formation of a functional lymphatic vasculature.", "title": "" }, { "docid": "c9ea42872164e65424498c6a5c5e0c6d", "text": "Inverse problems appear in many applications, such as image deblurring and inpainting. The common approach to address them is to design a specific algorithm for each problem. 
The Plug-and-Play (P&P) framework, which has been recently introduced, allows solving general inverse problems by leveraging the impressive capabilities of existing denoising algorithms. While this fresh strategy has found many applications, a burdensome parameter tuning is often required in order to obtain high-quality results. In this paper, we propose an alternative method for solving inverse problems using off-the-shelf denoisers, which requires less parameter tuning. First, we transform a typical cost function, composed of fidelity and prior terms, into a closely related, novel optimization problem. Then, we propose an efficient minimization scheme with a P&P property, i.e., the prior term is handled solely by a denoising operation. Finally, we present an automatic tuning mechanism to set the method’s parameters. We provide a theoretical analysis of the method and empirically demonstrate its competitiveness with task-specific techniques and the P&P approach for image inpainting and deblurring.", "title": "" }, { "docid": "5f8956868216a6c85fadfaba6aed1413", "text": "Recent years have witnessed an incredibly increasing interest in the topic of incremental learning. Unlike conventional machine learning situations, data flow targeted by incremental learning becomes available continuously over time. Accordingly, it is desirable to be able to abandon the traditional assumption of the availability of representative training data during the training period to develop decision boundaries. Under scenarios of continuous data flow, the challenge is how to transform the vast amount of stream raw data into information and knowledge representation, and accumulate experience over time to support future decision-making process. In this paper, we propose a general adaptive incremental learning framework named ADAIN that is capable of learning from continuous raw data, accumulating experience over time, and using such knowledge to improve future learning and prediction performance. Detailed system level architecture and design strategies are presented in this paper. Simulation results over several real-world data sets are used to validate the effectiveness of this method.", "title": "" }, { "docid": "89d4143e7845d191433882f3fa5aaa26", "text": "There is a large variety of objects and appliances in human environments, such as stoves, coffee dispensers, juice extractors, and so on. It is challenging for a roboticist to program a robot for each of these object types and for each of their instantiations. In this work, we present a novel approach to manipulation planning based on the idea that many household objects share similarly-operated object parts. We formulate the manipulation planning as a structured prediction problem and design a deep learning model that can handle large noise in the manipulation demonstrations and learns features from three different modalities: point-clouds, language and trajectory. In order to collect a large number of manipulation demonstrations for different objects, we developed a new crowd-sourcing platform called Robobarista. We test our model on our dataset consisting of 116 objects with 249 parts along with 250 language instructions, for which there are 1225 crowd-sourced manipulation demonstrations. We further show that our robot can even manipulate objects it has never seen before. 
Keywords— Robotics and Learning, Crowd-sourcing, Manipulation", "title": "" }, { "docid": "e2988860c1e8b4aebd6c288d37d1ca4e", "text": "Numerous studies have shown that datacenter computers rarely operate at full utilization, leading to a number of proposals for creating servers that are energy proportional with respect to the computation that they are performing.\n In this paper, we show that as servers themselves become more energy proportional, the datacenter network can become a significant fraction (up to 50%) of cluster power. In this paper we propose several ways to design a high-performance datacenter network whose power consumption is more proportional to the amount of traffic it is moving -- that is, we propose energy proportional datacenter networks.\n We first show that a flattened butterfly topology itself is inherently more power efficient than the other commonly proposed topology for high-performance datacenter networks. We then exploit the characteristics of modern plesiochronous links to adjust their power and performance envelopes dynamically. Using a network simulator, driven by both synthetic workloads and production datacenter traces, we characterize and understand design tradeoffs, and demonstrate an 85% reduction in power --- which approaches the ideal energy-proportionality of the network.\n Our results also demonstrate two challenges for the designers of future network switches: 1) We show that there is a significant power advantage to having independent control of each unidirectional channel comprising a network link, since many traffic patterns show very asymmetric use, and 2) system designers should work to optimize the high-speed channel designs to be more energy efficient by choosing optimal data rate and equalization technology. Given these assumptions, we demonstrate that energy proportional datacenter communication is indeed possible.", "title": "" }, { "docid": "232b960cc16aa558538858aefd0a7651", "text": "This paper presents a video-based solution for real time vehicle detection and counting system, using a surveillance camera mounted on a relatively high place to acquire the traffic video stream.The two main methods applied in this system are: the adaptive background estimation and the Gaussian shadow elimination. The former allows a robust moving detection especially in complex scenes. The latter is based on color space HSV, which is able to deal with different size and intensity shadows. After these two operations, it obtains an image with moving vehicle extracted, and then operation counting is effected by a method called virtual detector.", "title": "" }, { "docid": "9c28badf1e53e69452c1d7aad2a87fab", "text": "While an al dente character of 5G is yet to emerge, network densification, miscellany of node types, split of control and data plane, network virtualization, heavy and localized cache, infrastructure sharing, concurrent operation at multiple frequency bands, simultaneous use of different medium access control and physical layers, and flexible spectrum allocations can be envisioned as some of the potential ingredients of 5G. It is not difficult to prognosticate that with such a conglomeration of technologies, the complexity of operation and OPEX can become the biggest challenge in 5G. To cope with similar challenges in the context of 3G and 4G networks, recently, self-organizing networks, or SONs, have been researched extensively. 
However, the ambitious quality of experience requirements and emerging multifarious vision of 5G, and the associated scale of complexity and cost, demand a significantly different, if not totally new, approach toward SONs in order to make 5G technically as well as financially feasible. In this article we first identify what challenges hinder the current self-optimizing networking paradigm from meeting the requirements of 5G. We then propose a comprehensive framework for empowering SONs with big data to address the requirements of 5G. Under this framework we first characterize big data in the context of future mobile networks, identifying its sources and future utilities. We then explicate the specific machine learning and data analytics tools that can be exploited to transform big data into the right data that provides a readily useable knowledge base to create end-to-end intelligence of the network. We then explain how a SON engine can build on the dynamic models extractable from the right data. The resultant dynamicity of a big data empowered SON (BSON) makes it more agile and can essentially transform the SON from being a reactive to proactive paradigm and hence act as a key enabler for 5G's extremely low latency requirements. Finally, we demonstrate the key concepts of our proposed BSON framework through a case study of a problem that the classic 3G/4G SON fails to solve.", "title": "" }, { "docid": "795d4e73b3236a2b968609c39ce8f417", "text": "In this paper, we are introducing an intelligent valet parking management system that guides the cars to autonomously park within a parking lot. The IPLMS for Intelligent Parking Lot Management System, consists of two modules: 1) a model car with a set of micro-controllers and sensors which can scan the environment for suitable parking spot and avoid collision to obstacles, and a Parking Lot Management System (IPLMS) which screens the parking spaces within the parking lot and offers guidelines to the car. The model car has the capability to autonomously maneuver within the parking lot using a fuzzy logic algorithm, and execute parking in the spot determined by the IPLMS, using a parking algorithm. The car receives the instructions from the IPLMS through a wireless communication link. The IPLMS has the flexibility to be adopted by any parking management system, and can potentially save the clients time to look for a parking spot, and/or to stroll from an inaccessible parking space. Moreover, the IPLMS can decrease the financial burden from the parking lot management by offering an easy-to-install system for self-guided valet parking.", "title": "" }, { "docid": "845d5fa10e3bf779ea68331022592011", "text": "Remote sensing is one of the tool which is very important for the production of Land use and land cover maps through a process called image classification. For the image classification process to be successfully, several factors should be considered including availability of quality Landsat imagery and secondary data, a precise classification process and user’s experiences and expertise of the procedures. The objective of this research was to classify and map land-use/land-cover of the study area using remote sensing and Geospatial Information System (GIS) techniques. This research includes two sections (1) Landuse/Landcover (LULC) classification and (2) accuracy assessment. In this study supervised classification was performed using Non Parametric Rule. 
The major LULC classified were agriculture (65.0%), water body (4.0%), and built up areas (18.3%), mixed forest (5.2%), shrubs (7.0%), and Barren/bare land (0.5%). The study had an overall classification accuracy of 81.7% and kappa coefficient (K) of 0.722. The kappa coefficient is rated as substantial and hence the classified image found to be fit for further research. This study present essential source of information whereby planners and decision makers can use to sustainably plan the environment.", "title": "" }, { "docid": "880d6636a2939ee232da5c293f29ae44", "text": "BACKGROUND\nMicrocannulas with blunt tips for filler injections have recently been developed for use with dermal fillers. Their utility, ease of use, cosmetic outcomes, perceived pain, and satisfaction ratings amongst patients in terms of comfort and aesthetic outcomes when compared to sharp hypodermic needles has not previously been investigated.\n\n\nOBJECTIVE\nTo compare injections of filler with microcannulas versus hypodermic needles in terms of ease of use, amount of filler required to achieve desired aesthetic outcome, perceived pain by patient, adverse events such as bleeding and bruising and to demonstrate the advantages of single-port injection technique with the blunt-tip microcannula.\n\n\nMATERIALS AND METHODS\nNinety-five patients aged 30 to 76 years with a desire to augment facial, décolleté, and hand features were enrolled in the study. Subjects were recruited in a consecutive manner from patients interested in receiving dermal filler augmentation. Each site was cleaned with alcohol before injection. Anesthesia was obtained with a topical anesthesia peel off mask of lidocaine/tetracaine. Cross-linked hyaluronic acid (20 mg to 28 mg per mL) was injected into the mid-dermis. The microcannula or a hypodermic needle was inserted the entire length of the fold, depression or lip and the filler was injected in a linear retrograde fashion. The volume injected was variable, depending on the depth and the extent of the defect. The injecting physician assessed the ease of injection. Subjects used the Visual Analog Scale (0-10) for pain assessment. Clinical efficacy was assessed by the patients and the investigators immediately after injection, and at one and six months after injection using the Global Aesthetic Improvement Scale (GAIS) and digital photography.\n\n\nRESULTS\nOverall, the Global Aesthetic Improvements Scale (GAIS) results were excellent (55%), moderate (35%), and somewhat improved (10%) one month after the procedure, decreasing to 23%, 44%, and 33%, respectively, at the six month evaluation. There was no significant differences in the GAIS score between the microcannula and the hypodermic needle. However, the Visual Analog Scale for pain assessment during the injections was quite different. The pain was described as 3 (mild) for injections with the microcannula, increasing to 6 (moderate) for injections with the hypodermic needle. Bruising and ecchymosis was more marked following use of the hypodermic needle.\n\n\nCONCLUSION\nUsing the blunt-tip microcannula as an alternative to the hypodermic needles has simplified filler injections and produced less bruising, echymosis, and pain with faster recovery.", "title": "" }, { "docid": "36d1cb90c0c94fab646ff90065b40258", "text": "This paper provides an in-depth view on nanosensor technology and electromagnetic communication among nanosensors. 
First, the state of the art in nanosensor technology is surveyed from the device perspective, by explaining the details of the architecture and components of individual nanosensors, as well as the existing manufacturing and integration techniques for nanosensor devices. Some interesting applications of wireless nanosensor networks are highlighted to emphasize the need for communication among nanosensor devices. A new network architecture for the interconnection of nanosensor deviceswith existing communicationnetworks is provided. The communication challenges in terms of terahertz channelmodeling, information encoding andprotocols for nanosensor networks are highlighted, defining a roadmap for the development of this new networking", "title": "" }, { "docid": "b419f58b8a89f5451a6e0efd8f6d5e80", "text": "Knowledge processing systems recently regained attention in the context of big \"knowledge\" processing and cloud platforms. Therefore, the development of such systems with a high software quality has to be ensured. In this paper an approach to contribute to an architectural guideline for developing such systems using the concept of design patterns is shown. The need, as well as current research in this domain is presented. Further, possible design pattern candidates are introduced that have been extracted from literature.", "title": "" }, { "docid": "49388f99a08a41d713b701cf063a71be", "text": "In this paper, we present the first-of-its-kind machine learning (ML) system, called AI Programmer, that can automatically generate full software programs requiring only minimal human guidance. At its core, AI Programmer uses genetic algorithms (GA) coupled with a tightly constrained programming language that minimizes the overhead of its ML search space. Part of AI Programmer’s novelty stems from (i) its unique system design, including an embedded, hand-crafted interpreter for efficiency and security and (ii) its augmentation of GAs to include instruction-gene randomization bindings and programming language-specific genome construction and elimination techniques. We provide a detailed examination of AI Programmer’s system design, several examples detailing how the system works, and experimental data demonstrating its software generation capabilities and performance using only mainstream CPUs.", "title": "" }, { "docid": "30a0b6c800056408b32e9ed013565ae0", "text": "This case report presents the successful use of palatal mini-implants for rapid maxillary expansion and mandibular distalization in a skeletal Class III malocclusion. The patient was a 13-year-old girl with the chief complaint of facial asymmetry and a protruded chin. Camouflage orthodontic treatment was chosen, acknowledging the possibility of need for orthognathic surgery after completion of her growth. A bone-borne rapid expander (BBRME) was used to correct the transverse discrepancy and was then used as indirect anchorage for distalization of the lower dentition with Class III elastics. As a result, a Class I occlusion with favorable inclination of the upper teeth was achieved without any adverse effects. The total treatment period was 25 months. Therefore, BBRME can be considered an alternative treatment in skeletal Class III malocclusion.", "title": "" }, { "docid": "d2d8f1079b5bab3f37ec74a9bf3ac018", "text": "This paper is focused on the design of generalized composite right/left handed (CRLH) transmission lines in a fully planar configuration, that is, without the use of surface-mount components. 
These artificial lines exhibit multiple, alternating backward and forward-transmission bands, and are therefore useful for the synthesis of multi-band microwave components. Specifically, a quad-band power splitter, a quad-band branch line hybrid coupler and a dual-bandpass filter, all of them based on fourth-order CRLH lines (i.e., lines exhibiting 2 left-handed and 2 right-handed bands alternating), are presented in this paper. The accurate circuit models, including parasitics, of the structures under consideration (based on electrically small planar resonators), as well as the detailed procedure for the synthesis of these lines using such circuit models, are given. It will be shown that satisfactory results in terms of performance and size can be obtained through the proposed approach, fully compatible with planar technology.", "title": "" }, { "docid": "a968a9842bb49f160503b24bff57cdd6", "text": "This paper addresses target discrimination in synthetic aperture radar (SAR) imagery using linear and nonlinear adaptive networks. Neural networks are extensively used for pattern classification but here the goal is discrimination. We show that the two applications require different cost functions. We start by analyzing with a pattern recognition perspective the two-parameter constant false alarm rate (CFAR) detector which is widely utilized as a target detector in SAR. Then we generalize its principle to construct the quadratic gamma discriminator (QGD), a nonparametrically trained classifier based on local image intensity. The linear processing element of the QGD is further extended with nonlinearities yielding a multilayer perceptron (MLP) which we call the NL-QGD (nonlinear QGD). MLPs are normally trained based on the L(2) norm. We experimentally show that the L(2) norm is not recommended to train MLPs for discriminating targets in SAR. Inspired by the Neyman-Pearson criterion, we create a cost function based on a mixed norm to weight the false alarms and the missed detections differently. Mixed norms can easily be incorporated into the backpropagation algorithm, and lead to better performance. Several other norms (L(8), cross-entropy) are applied to train the NL-QGD and all outperformed the L(2) norm when validated by receiver operating characteristics (ROC) curves. The data sets are constructed from TABILS 24 ISAR targets embedded in 7 km(2) of SAR imagery (MIT/LL mission 90).", "title": "" }, { "docid": "f7710fb5fad8092b8a7cc490fb50fe4d", "text": "Speech is one of the most effective ways of communication among humans. Even though audio is the most common way of transmitting speech, very important information can be found in other modalities, such as vision. Vision is particularly useful when the acoustic signal is corrupted. Multi-modal speech recognition however has not yet found wide-spread use, mostly because the temporal alignment and fusion of the different information sources is challenging. This paper presents an end-to-end audiovisual speech recognizer (AVSR), based on recurrent neural networks (RNN) with a connectionist temporal classification (CTC) [1] loss function. CTC creates sparse “peaky” output activations, and we analyze the differences in the alignments of output targets (phonemes or visemes) between audio-only, video-only, and audio-visual feature representations. 
We present the first such experiments on the large vocabulary IBM ViaVoice database, which outperform previously published approaches on phone accuracy in clean and noisy conditions.", "title": "" }, { "docid": "bf5d53e5465dd5e64385bf9204324059", "text": "A model of core losses, in which the hysteresis coefficients are variable with the frequency and induction (flux density) and the eddy-current and excess loss coefficients are variable only with the induction, is proposed. A procedure for identifying the model coefficients from multifrequency Epstein tests is described, and examples are provided for three typical grades of non-grain-oriented laminated steel suitable for electric motor manufacturing. Over a wide range of frequencies between 20-400 Hz and inductions from 0.05 to 2 T, the new model yielded much lower errors for the specific core losses than conventional models. The applicability of the model for electric machine analysis is also discussed, and examples from an interior permanent-magnet and an induction motor are included.", "title": "" } ]
scidocsrr
81414936aa5050eedd06446fa90d18e2
Human factors in cybersecurity; examining the link between Internet addiction, impulsivity, attitudes towards cybersecurity, and risky cybersecurity behaviours
[ { "docid": "99ffc7cd601d1c43bbf7e3537632e95c", "text": "Despite numerous advances in IT security, many computer users are still vulnerable to security-related risks because they do not comply with organizational policies and procedures. In a network setting, individual risk can extend to all networked users. Endpoint security refers to the set of organizational policies, procedures, and practices directed at securing the endpoint of the network connections – the individual end user. As such, the challenges facing IT managers in providing effective endpoint security are unique in that they often rely heavily on end user participation. But vulnerability can be minimized through modification of desktop security programs and increased vigilance on the part of the system administrator or CSO. The cost-prohibitive nature of these measures generally dictates targeting high-risk users on an individual basis. It is therefore important to differentiate between individuals who are most likely to pose a security risk and those who will likely follow most organizational policies and procedures.", "title": "" } ]
[ { "docid": "e9e7cb42ed686ace9e9785fafd3c72f8", "text": "We present a fully automated multimodal medical image matching technique. Our method extends the concepts used in the computer vision SIFT technique for extracting and matching distinctive scale invariant features in 2D scalar images to scalar images of arbitrary dimensionality. This extension involves using hyperspherical coordinates for gradients and multidimensional histograms to create the feature vectors. These features were successfully applied to determine accurate feature point correspondence between pairs of medical images (3D) and dynamic volumetric data (3D+time).", "title": "" }, { "docid": "4bd161b3e91dea05b728a72ade72e106", "text": "Julio Rodriguez∗ Faculté des Sciences et Techniques de l’Ingénieur (STI), Institut de Microtechnique (IMT), Laboratoire de Production Microtechnique (LPM), Ecole Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland and Fakultät für Physik, Universität Bielefeld, D-33501 Bielefeld, Germany ∗Corresponding author. Email: julio.rodriguez@epfl.ch and jrodrigu@physik.uni-bielefeld.de", "title": "" }, { "docid": "a4affb4b3a83573571e1af3009b187f6", "text": " Existing path following algorithms for graph matching can be viewed as special cases of the numerical continuation method (NCM), and correspond to particular implementation named generic predictor corrector (GPC).  The GPC approach succeeds at regular points, but may fail at singular points. Illustration of GPC and the proposed method is shown in Fig. 1.  This paper presents a branching path following (BPF) method to exploring potentially better paths at singular points to improve matching performance. Tao Wang , Haibin Ling 1,3, Congyan Lang , Jun Wu 1Meitu HiScene Lab, HiScene Information Technologies, Shanghai, China 2 School of Computer & Information Technology, Beijing Jiaotong University, Beijing 100044, China 3 Computer & Information Sciences Department, Temple University, Philadelphia 19122, USA Email: twang@bjtu.edu.cn, hbling@temple.edu, cylang@bjtu.edu.cn, wuj@bjtu.edu.cn Branching Path Following for Graph Matching", "title": "" }, { "docid": "766b726231f9d9540deb40183b49a655", "text": "This paper presents a survey of georeferenced point clouds. Concentration is, on the one hand, put on features, which originate in the measurement process themselves, and features derived by processing the point cloud. On the other hand, approaches for the processing of georeferenced point clouds are reviewed. This includes the data structures, but also spatial processing concepts. We suggest a categorization of features into levels that reflect the amount of processing. Point clouds are found across many disciplines, which is reflected in the versatility of the literature suggesting specific features.", "title": "" }, { "docid": "4702fceea318c326856cc2a7ae553e1f", "text": "The Institute of Medicine identified “timeliness” as one of six key “aims for improvement” in its most recent report on quality. Yet patient delays remain prevalent, resulting in dissatisfaction, adverse clinical consequences, and often, higher costs. This tutorial describes several areas in which patients routinely experience significant and potentially dangerous delays and presents operations research (OR) models that have been developed to help reduce these delays, often at little or no cost. I also describe the difficulties in developing and implementing models as well as the factors that increase the likelihood of success. 
Finally, I discuss the opportunities, large and small, for using OR methodologies to significantly impact practices and policies that will affect timely access to healthcare.", "title": "" }, { "docid": "20dfc70e3563d5aded0cf34000dff907", "text": "This paper presents development of a quad rotor tail-sitter VTOL UAV (Vertical Takeoff and Landing Unmanned Aerial Vehicle) which is composed of four rotors and a fixed wing. The conventional VTOL UAVs have a drawback in the accuracy of the attitude control in stationary hovering because they were developed based on a fixed-wing aircraft and they used the control surfaces, such as aileron, elevator, and rudder for the attitude control. To overcome such a drawback, we developed a quad rotor tail-sitter VTOL UAV. The quad rotor tail-sitter VTOL UAV realizes high accuracy in the attitude control with four rotors like a quad rotor helicopter and achieves level flight like a fixed-wing airplane. The remarkable characteristic of the developed quad rotor tail-sitter VTOL UAV is that it does not use any control surfaces even in the level flight. This paper shows the design concept of the developed UAV and experimental verification of all flight modes including hovering, transition flight and level flight.", "title": "" }, { "docid": "d1cacda6383211c78f8aa4138f709d5f", "text": "Sentiment analysis of reviews traditionally ignored the association between the features of the given product domain. The hierarchical relationship between the features of a product and their associated sentiment that influence the polarity of a review is not dealt with very well. In this work, we analyze the influence of the hierarchical relationship between the product attributes and their sentiments on the overall review polarity. ConceptNet is used to automatically create a product specific ontology that depicts the hierarchical relationship between the product attributes. The ontology tree is annotated with feature-specific polarities which are aggregated bottom-up, exploiting the ontological information, to find the overall review polarity. We propose a weakly supervised system that achieves a reasonable performance improvement over the baseline without requiring any tagged training data.", "title": "" }, { "docid": "3e335d336d3c9bce4dbdf24402b8eb17", "text": "Unlike traditional database management systems which are organized around a single data model, a multi-model database (MMDB) utilizes a single, integrated back-end to support multiple data models, such as document, graph, relational, and key-value. As more and more platforms are proposed to deal with multi-model data, it becomes crucial to establish a benchmark for evaluating the performance and usability of MMDBs. Previous benchmarks, however, are inadequate for such scenario because they lack a comprehensive consideration for multiple models of data. In this paper, we present a benchmark, called UniBench, with the goal of facilitating a holistic and rigorous evaluation of MMDBs. UniBench consists of a mixed data model, a synthetic multi-model data generator, and a set of core workloads. Specifically, the data model simulates an emerging application: Social Commerce, a Web-based application combining E-commerce and social media. The data generator provides diverse data format including JSON, XML, key-value, tabular, and graph. The workloads are comprised of a set of multi-model queries and transactions, aiming to cover essential aspects of multi-model data management. 
We implemented all workloads on ArangoDB and OrientDB to illustrate the feasibility of our proposed benchmarking system and show the learned lessons through the evaluation of these two multi-model databases. The source code and data of this benchmark can be downloaded at http://udbms.cs.helsinki.fi/bench/.", "title": "" }, { "docid": "375766c4ae473312c73e0487ab57acc8", "text": "There are three reasons why the asymmetric crooked nose is one of the greatest challenges in rhinoplasty surgery. First, the complexity of the problem is not appreciated by the patient nor understood by the surgeon. Patients often see the obvious deviation of the nose, but not the distinct differences between the right and left sides. Surgeons fail to understand and to emphasize to the patient that each component of the nose is asymmetric. Second, these deformities can be improved, but rarely made flawless. For this reason, patients are told that the result will be all \"-er words,\" better, straighter, cuter, but no \"t-words,\" there is no perfect nor straight. Most surgeons fail to realize that these cases represent asymmetric noses on asymmetric faces with the variable of ipsilateral and contralateral deviations. Third, these cases demand a wide range of sophisticated surgical techniques, some of which have a minimal margin of error. This article offers an in-depth look at analysis, preoperative planning, and surgical techniques available for dealing with the asymmetric crooked nose.", "title": "" }, { "docid": "565dcf584448f6724a6529c3d2147a68", "text": "People are fond of taking and sharing photos in their social life, and a large part of it is face images, especially selfies. A lot of researchers are interested in analyzing attractiveness of face images. Benefited from deep neural networks (DNNs) and training data, researchers have been developing deep learning models that can evaluate facial attractiveness of photos. However, recent development on DNNs showed that they could be easily fooled even when they are trained on a large dataset. In this paper, we used two approaches to generate adversarial examples that have high attractiveness scores but low subjective scores for face attractiveness evaluation on DNNs. In the first approach, experimental results using the SCUT-FBP dataset showed that we could increase attractiveness score of 20 test images from 2.67 to 4.99 on average (score range: [1, 5]) without noticeably changing the images. In the second approach, we could generate similar images from noise image with any target attractiveness score. Results show by using this approach, a part of attractiveness information could be manipulated artificially.", "title": "" }, { "docid": "5325672f176fd572f7be68a466538d95", "text": "The successful execution of location-based and feature-based queries on spatial databases requires the construction of spatial indexes on the spatial attributes. This is not simple when the data is unstructured as is the case when the data is a collection of documents such as news articles, which is the domain of discourse, where the spatial attribute consists of text that can be (but is not required to be) interpreted as the names of locations. In other words, spatial data is specified using text (known as a toponym) instead of geometry, which means that there is some ambiguity involved. 
The process of identifying and disambiguating references to geographic locations is known as geotagging and involves using a combination of internal document structure and external knowledge, including a document-independent model of the audience's vocabulary of geographic locations, termed its spatial lexicon. In contrast to previous work, a new spatial lexicon model is presented that distinguishes between a global lexicon of locations known to all audiences, and an audience-specific local lexicon. Generic methods for inferring audiences' local lexicons are described. Evaluations of this inference method and the overall geotagging procedure indicate that establishing local lexicons cannot be overlooked, especially given the increasing prevalence of highly local data sources on the Internet, and will enable the construction of more accurate spatial indexes.", "title": "" }, { "docid": "3cbc035529138be1d6f8f66a637584dd", "text": "Regression models such as the Cox proportional hazards model have had increasing use in modelling and estimating the prognosis of patients with a variety of diseases. Many applications involve a large number of variables to be modelled using a relatively small patient sample. Problems of overfitting and of identifying important covariates are exacerbated in analysing prognosis because the accuracy of a model is more a function of the number of events than of the sample size. We used a general index of predictive discrimination to measure the ability of a model developed on training samples of varying sizes to predict survival in an independent test sample of patients suspected of having coronary artery disease. We compared three methods of model fitting: (1) standard 'step-up' variable selection, (2) incomplete principal components regression, and (3) Cox model regression after developing clinical indices from variable clusters. We found regression using principal components to offer superior predictions in the test sample, whereas regression using indices offers easily interpretable models nearly as good as the principal components models. Standard variable selection has a number of deficiencies.", "title": "" }, { "docid": "16dc05092756ca157476b6aeb7705915", "text": "Model checkers and other finite-state verification tools allow developers to detect certain kinds of errors automatically. Nevertheless, the transition of this technology from research to practice has been slow. While there are a number of potential causes for reluctance to adopt such formal methods, we believe that a primary cause is that practitioners are unfamiliar with specification processes, notations, and strategies. In a recent paper, we proposed a pattern-based approach to the presentation, codification and reuse of property specifications for finite-state verification. Since then, we have carried out a survey of available specifications, collecting over 500 examples of property specifications. We found that most are instances of our proposed patterns. Furthermore, we have updated our pattern system to accommodate new patterns and variations of existing patterns encountered in this survey. This paper reports the results of the survey and the current status of our pattern system.", "title": "" }, { "docid": "78ee892fada4ec9ff860072d0d0ecbe3", "text": "The popularity of FPGAs is rapidly growing due to the unique advantages that they offer. However, their distinctive features also raise new questions concerning the security and communication capabilities of an FPGA-based hardware platform. 
In this paper, we explore the some of the limits of FPGA side-channel communication. Specifically, we identify a previously unexplored capability that significantly increases both the potential benefits and risks associated with side-channel communication on an FPGA: an in-device receiver. We designed and implemented three new communication mechanisms: speed modulation, timing modulation and pin hijacking. These non-traditional interfacing techniques have the potential to provide reliable communication with an estimated maximum bandwidth of 3.3 bit/sec, 8 Kbits/sec, and 3.4 Mbits/sec, respectively.", "title": "" }, { "docid": "1d8917f5faaed1531fdcd4df06ff0920", "text": "4G cellular standards are targeting aggressive spectrum reuse (frequency reuse 1) to achieve high system capacity and simplify radio network planning. The increase in system capacity comes at the expense of SINR degradation due to increased intercell interference, which severely impacts cell-edge user capacity and overall system throughput. Advanced interference management schemes are critical for achieving the required cell edge spectral efficiency targets and to provide ubiquity of user experience throughout the network. In this article we compare interference management solutions across the two main 4G standards: IEEE 802.16m (WiMAX) and 3GPP-LTE. Specifically, we address radio resource management schemes for interference mitigation, which include power control and adaptive fractional frequency reuse. Additional topics, such as interference management for multitier cellular deployments, heterogeneous architectures, and smart antenna schemes will be addressed in follow-up papers.", "title": "" }, { "docid": "b65ead6ac95bff543a5ea690caade548", "text": "Theory and experiments show that as the per-flow product of bandwidth and latency increases, TCP becomes inefficient and prone to instability, regardless of the queuing scheme. This failing becomes increasingly important as the Internet evolves to incorporate very high-bandwidth optical links and more large-delay satellite links.To address this problem, we develop a novel approach to Internet congestion control that outperforms TCP in conventional environments, and remains efficient, fair, scalable, and stable as the bandwidth-delay product increases. This new eXplicit Control Protocol, XCP, generalizes the Explicit Congestion Notification proposal (ECN). In addition, XCP introduces the new concept of decoupling utilization control from fairness control. This allows a more flexible and analytically tractable protocol design and opens new avenues for service differentiation.Using a control theory framework, we model XCP and demonstrate it is stable and efficient regardless of the link capacity, the round trip delay, and the number of sources. Extensive packet-level simulations show that XCP outperforms TCP in both conventional and high bandwidth-delay environments. Further, XCP achieves fair bandwidth allocation, high utilization, small standing queue size, and near-zero packet drops, with both steady and highly varying traffic. Additionally, the new protocol does not maintain any per-flow state in routers and requires few CPU cycles per packet, which makes it implementable in high-speed routers.", "title": "" }, { "docid": "e517370f733c10190da90c834f0f486a", "text": "The planning and organization of athletic training have historically been much discussed and debated in the coaching and sports science literature. 
Various influential periodization theorists have devised, promoted, and substantiated particular training-planning models based on interpretation of the scientific evidence and individual beliefs and experiences. Superficially, these proposed planning models appear to differ substantially. However, at a deeper level, it can be suggested that such models share a deep-rooted cultural heritage underpinned by a common set of historically pervasive planning beliefs and assumptions. A concern with certain of these formative assumptions is that, although no longer scientifically justifiable, their shaping influence remains deeply embedded. In recent years substantial evidence has emerged demonstrating that training responses vary extensively, depending upon multiple underlying factors. Such findings challenge the appropriateness of applying generic methodologies, founded in overly simplistic rule-based decision making, to the planning problems posed by inherently complex biological systems. The purpose of this review is not to suggest a whole-scale rejection of periodization theories but to promote a refined awareness of their various strengths and weaknesses. Eminent periodization theorists-and their variously proposed periodization models-have contributed substantially to the evolution of training-planning practice. However, there is a logical line of reasoning suggesting an urgent need for periodization theories to be realigned with contemporary elite practice and modern scientific conceptual models. In concluding, it is recommended that increased emphasis be placed on the design and implementation of sensitive and responsive training systems that facilitate the guided emergence of customized context-specific training-planning solutions.", "title": "" }, { "docid": "03ce79214eb7e7f269464574b1e5c208", "text": "Variable draft is shown to be an essential feature for a research and survey SWATH ship large enough for unrestricted service worldwide. An ongoing semisubmerged (variable draft) SWATH can be designed for access to shallow harbors. Speed at transit (shallow) draft can be comparable to monohulls of the same power while assuring equal or better seakeeping characteristics. Seakeeping with the ship at deeper drafts can be superior to an equivalent SWATH that is designed for all operations at a single draft. The lower hulls of the semisubmerged SWATH ship can be devoid of fins. A practical target for interior clear spacing between the lower hulls is about 50 feet. Access to the sea surface for equipment can be provided astern, over the side, or from within a centerwell amidships. One of the lower hulls can be optimized to carry acoustic sounding equipment. A design is presented in this paper for a semisubmerged ship with a trial speed in excess of 15 knots, a scientific mission payload of 300 tons, and accommodations for 50 personnel. 1. SEMISUBMERGED SWATH TECHNOLOGY A single draft for the full range of operating conditions is a comon feature of typical SWATH ship designs. This constant draft characteristic is found in the SWATH ships built by Mitsuil” , most notably the KAIY03, and the SWATH T-AGOS4 which is now under construction for the U.S. Navy. The constant draft design for ships of this size (about 3,500 tons displacement) poses two significant drawbacks. One is that the draft must be at least 25 feet to satisfy seakeeping requirements. This draft is restrictive for access to many harbors that would be useful for research and survey functions. 
The second is that hull and column (strut) hydrodynamics generally result in the SWATH being a larger ship and having greater power requirements than for an equivalent monohull. The ship size and hull configuration, together with the necessity for a. President, Blue Sea Corporation b. President, Alan C. McClure Associates, Inc. stabilizing fins, usually leads to a higher capital cost than for a rougher riding, but otherwise equivalent, monohull. The distinguishing feature of the semisubmerged SWATH ship is variable draft. Sufficient allowance for ballast transfer is made to enable the ship to vary its draft under all load conditions. The shallowest draft is well within usual harbor limits and gives the lower hulls a slight freeboard. It also permits transit in low to moderate sea conditions using less propulsion power than is needed by a constant draft SWATH. The semisubmerged SWATH gives more design flexibility to provide for deep draft conditions that strike a balance between operating requirements and seakeeping characteristics. Intermediate “storm” drafts can be selected that are a compromise between seakeeping, speed, and upper hull clearance to avoid slamming. A discussion of these and other tradeoffs in semisubmerged SWATH ship design for oceanographic applications is given in a paper by Gaul and McClure’ . A more general discussion of design tradeoffs is given in a later paper6. The semisubmerged SWATH technology gives rise to some notable contrasts with constant draft SWATH ships. For any propulsion power applied, the semisubmerged SWATH has a range of speed that depends on draft. Highest speeds are obtained at minimum (transit) draft. Because the lower hull freeboard is small at transit draft, seakeeping at service speed can be made equal to or better than an equivalent monohull. The ship is designed for maximum speed at transit draft so the lower hull form is more akin to a surface craft than a submarine. This allows use of a nearly rectangular cross section for the lower hulls which provides damping of vertical motion. For moderate speeds at deeper drafts with the highly damped lower hull form, the ship need not be equipped with stabilizing fins. Since maximum speed is achieved with the columns of the water, it is practical (struts) out to use two c. President, Omega Marine Engineering Systems, Inc. d. Joint venture of Blue Sea Corporation and Martran Consultants, Inc. columns, rather than one, on each lower hull. The four column configuration at deep drafts minimizes the variation of ship motion response with change in course relative to surface wave direction. The width of the ship and lack of appendages on the lower hulls increases the utility of a large underside deck opening (moonpool) amidship. The basic Semisubmerged SWATH Research and Survey Ship design has evolved from requirements first stated by the Institute for Geophysics of the University of Texas (UTIG) in 1984. Blue Sea McClure provided the only SWATH configuration in a set of five conceptual designs procured competitively by the University. Woods Hole Oceanographic Institution, on behalf of the University-National Oceanographic Laboratory System, subsequently contracted for a revision of the UTIG design to meet requirements for an oceanographic research ship. The design was further refined to meet requirements posed by the U.S. Navy for an oceanographic research ship. 
The intent of this paper is to use this generic design to illustrate the main features of semisubmerged SWATH ships.", "title": "" }, { "docid": "f2634c4a479e58cef42ae776390aee91", "text": "From the Division of General Medicine and Primary Care, Department of Medicine (D.W.B.), and the Department of Surgery (A.A.G.), Brigham and Women’s Hospital; the Center for Applied Medical Information Systems, Partners HealthCare System (D.W.B.); and Harvard Medical School (D.W.B., A.A.G.) — all in Boston. Address reprint requests to Dr. Bates at the Division of General Medicine and Primary Care, Brigham and Women’s Hospital, 75 Francis St., Boston, MA 02115, or at dbates@ partners.org.", "title": "" }, { "docid": "892c75c6b719deb961acfe8b67b982bb", "text": "Growing interest in data and analytics in education, teaching, and learning raises the priority for increased, high-quality research into the models, methods, technologies, and impact of analytics. Two research communities -- Educational Data Mining (EDM) and Learning Analytics and Knowledge (LAK) have developed separately to address this need. This paper argues for increased and formal communication and collaboration between these communities in order to share research, methods, and tools for data mining and analysis in the service of developing both LAK and EDM fields.", "title": "" } ]
scidocsrr
8beddac83b8e402fea1171c9f2825d94
TransmiR: a transcription factor–microRNA regulation database
[ { "docid": "b324860905b6d8c4b4a8429d53f2543d", "text": "MicroRNAs (miRNAs) are endogenous approximately 22 nt RNAs that can play important regulatory roles in animals and plants by targeting mRNAs for cleavage or translational repression. Although they escaped notice until relatively recently, miRNAs comprise one of the more abundant classes of gene regulatory molecules in multicellular organisms and likely influence the output of many protein-coding genes.", "title": "" } ]
[ { "docid": "ddb01f456d904151238ecf695483a2f4", "text": "If there were only one truth, you couldn't paint a hundred canvases on the same theme.", "title": "" }, { "docid": "ae59ef9772ea8f8277a2d91030bd6050", "text": "Modelling and exploiting teammates’ policies in cooperative multi-agent systems have long been an interest and also a big challenge for the reinforcement learning (RL) community. The interest lies in the fact that if the agent knows the teammates’ policies, it can adjust its own policy accordingly to arrive at proper cooperations; while the challenge is that the agents’ policies are changing continuously due to they are learning concurrently, which imposes difficulty to model the dynamic policies of teammates accurately. In this paper, we present ATTention Multi-Agent Deep Deterministic Policy Gradient (ATT-MADDPG) to address this challenge. ATT-MADDPG extends DDPG, a single-agent actor-critic RL method, with two special designs. First, in order to model the teammates’ policies, the agent should get access to the observations and actions of teammates. ATT-MADDPG adopts a centralized critic to collect such information. Second, to model the teammates’ policies using the collected information in an effective way, ATT-MADDPG enhances the centralized critic with an attention mechanism. This attention mechanism introduces a special structure to explicitly model the dynamic joint policy of teammates, making sure that the collected information can be processed efficiently. We evaluate ATT-MADDPG on both benchmark tasks and the real-world packet routing tasks. Experimental results show that it not only outperforms the state-of-the-art RL-based methods and rule-based methods by a large margin, but also achieves better performance in terms of scalability and robustness.", "title": "" }, { "docid": "8a09944155d35b4d1229b0778baf58a4", "text": "The recent Omnidirectional MediA Format (OMAF) standard specifies delivery of 360° video content. OMAF supports only equirectangular (ERP) and cubemap projections and their region-wise packing with a limitation on video decoding capability to the maximum resolution of 4K (e.g., 4096x2048). Streaming of 4K ERP content allows only a limited viewport resolution, which is lower than the resolution of many current head-mounted displays (HMDs). In order to take the full advantage of those HMDs, this work proposes a specific mixed-resolution packing of 6K (6144x3072) ERP content and its realization in tile-based streaming, while complying with the 4K-decoding constraint and the High Efficiency Video Coding (HEVC) standard. Experimental results indicate that, using Zonal-PSNR test methodology, the proposed layout decreases the streaming bitrate up to 32% in terms of BD-rate, when compared to mixed-quality viewport-adaptive streaming of 4K ERP as an alternative solution.", "title": "" }, { "docid": "0343f1a0be08ff53e148ef2eb22aaf14", "text": "Tables are a ubiquitous form of communication. While everyone seems to know what a table is, a precise, analytical definition of “tabularity” remains elusive because some bureaucratic forms, multicolumn text layouts, and schematic drawings share many characteristics of tables. There are significant differences between typeset tables, electronic files designed for display of tables, and tables in symbolic form intended for information retrieval. 
Most past research has addressed the extraction of low-level geometric information from raster images of tables scanned from printed documents, although there is growing interest in the processing of tables in electronic form as well. Recent research on table composition and table analysis has improved our understanding of the distinction between the logical and physical structures of tables, and has led to improved formalisms for modeling tables. This review, which is structured in terms of generalized paradigms for table processing, indicates that progress on half-a-dozen specific research issues would open the door to using existing paper and electronic tables for database update, tabular browsing, structured information retrieval through graphical and audio interfaces, multimedia table editing, and platform-independent display.", "title": "" }, { "docid": "c29a5acf052aed206d7d7a9078e66ff9", "text": "Argumentation mining aims to automatically detect, classify and structure argumentation in text. Therefore, argumentation mining is an important part of a complete argumentation analyisis, i.e. understanding the content of serial arguments, their linguistic structure, the relationship between the preceding and following arguments, recognizing the underlying conceptual beliefs, and understanding within the comprehensive coherence of the specific topic. We present different methods to aid argumentation mining, starting with plain argumentation detection and moving forward to a more structural analysis of the detected argumentation. Different state-of-the-art techniques on machine learning and context free grammars are applied to solve the challenges of argumentation mining. We also highlight fundamental questions found during our research and analyse different issues for future research on argumentation mining.", "title": "" }, { "docid": "2136c0e78cac259106d5424a2985e5d7", "text": "Stylistic composition is a creative musical activity, in which students as well as renowned composers write according to the style of another composer or period. We describe and evaluate two computational models of stylistic composition, called Racchman-Oct2010 and Racchmaninof-Oct2010. The former is a constrained Markov model and the latter embeds this model in an analogy-based design system. Racchmaninof-Oct2010 applies a pattern discovery algorithm called SIACT and a perceptually validated formula for rating pattern importance, to guide the generation of a new target design from an existing source design. A listening study is reported concerning human judgments of music excerpts that are, to varying degrees, in the style of mazurkas by Frédédric Chopin (1810-1849). The listening study acts as an evaluation of the two computational models and a third, benchmark system called Experiments in Musical Intelligence (EMI). Judges’ responses indicate that some aspects of musical style, such as phrasing and rhythm, are being modeled effectively by our algorithms. Judgments are also used to identify areas for future improvements. We discuss the broader implications of this work for the fields of engineering and design, where there is potential to make use of our models of hierarchical repetitive structure. Data and code to accompany this paper are available from www.tomcollinsresearch.net", "title": "" }, { "docid": "aec0c79ea90de753a010abfb43dc3f59", "text": "Style transfer methods have achieved significant success in recent years with the use of convolutional neural networks. 
However, many of these methods concentrate on artistic style transfer with few constraints on the output image appearance. We address the challenging problem of transferring face texture from a style face image to a content face image in a photorealistic manner without changing the identity of the original content image. Our framework for face texture transfer (FaceTex) augments the prior work of MRF-CNN with a novel facial semantic regularization that incorporates a face prior regularization smoothly suppressing the changes around facial meso-structures (e.g eyes, nose and mouth) and a facial structure loss function which implicitly preserves the facial structure so that face texture can be transferred without changing the original identity. We demonstrate results on face images and compare our approach with recent state-of-the-art methods. Our results demonstrate superior texture transfer because of the ability to maintain the identity of the original face image.", "title": "" }, { "docid": "b2a0755176f20cd8ee2ca19c091d022d", "text": "Models are among the most essential tools in robotics, such as kinematics and dynamics models of the robot's own body and controllable external objects. It is widely believed that intelligent mammals also rely on internal models in order to generate their actions. However, while classical robotics relies on manually generated models that are based on human insights into physics, future autonomous, cognitive robots need to be able to automatically generate models that are based on information which is extracted from the data streams accessible to the robot. In this paper, we survey the progress in model learning with a strong focus on robot control on a kinematic as well as dynamical level. Here, a model describes essential information about the behavior of the environment and the influence of an agent on this environment. In the context of model-based learning control, we view the model from three different perspectives. First, we need to study the different possible model learning architectures for robotics. Second, we discuss what kind of problems these architecture and the domain of robotics imply for the applicable learning methods. From this discussion, we deduce future directions of real-time learning algorithms. Third, we show where these scenarios have been used successfully in several case studies.", "title": "" }, { "docid": "17d6bcff27325d7142d520fa87fb6a88", "text": "India is a vast country depicting wide social, cultural and sexual variations. Indian concept of sexuality has evolved over time and has been immensely influenced by various rulers and religions. Indian sexuality is manifested in our attire, behavior, recreation, literature, sculptures, scriptures, religion and sports. It has influenced the way we perceive our health, disease and device remedies for the same. In modern era, with rapid globalization the unique Indian sexuality is getting diffused. The time has come to rediscover ourselves in terms of sexuality to attain individual freedom and to reinvest our energy to social issues related to sexuality.", "title": "" }, { "docid": "fcca051539729b005271e4f96563538d", "text": "This paper presents a novel methodological approach of how to design, conduct and analyse robot-assisted play. This approach is inspired by non-directive play therapy. The experimenter participates in the experiments, but the child remains the main leader for play. 
Besides, beyond inspiration from non-directive play therapy, this approach enables the experimenter to regulate the interaction under specific conditions in order to guide the child or ask her questions about reasoning or affect related to the robot. This approach has been tested in a longterm study with six children with autism in a school setting. An autonomous robot with zoomorphic, dog-like appearance was used in the studies. The children's progress was analyzed according to three dimensions, namely, Play, Reasoning and Affect. Results from the case-study evaluations have shown the capability of the method to meet each child's needs and abilities. Children who mainly played solitarily progressively experienced basic imitation games with the experimenter. Children who proactively played socially progressively experienced higher levels of play and constructed more reasoning related to the robot. They also expressed some interest in the robot, including, on occasion, affect.", "title": "" }, { "docid": "d82553a7bf94647aaf60eb36748e567f", "text": "We propose a novel image-based rendering algorithm for handling complex scenes that may include reflective surfaces. Our key contribution lies in treating the problem in the gradient domain. We use a standard technique to estimate scene depth, but assign depths to image gradients rather than pixels. A novel view is obtained by rendering the horizontal and vertical gradients, from which the final result is reconstructed through Poisson integration using an approximate solution as a data term. Our algorithm is able to handle general scenes including reflections and similar effects without explicitly separating the scene into reflective and transmissive parts, as required by previous work. Our prototype renderer is fully implemented on the GPU and runs in real time on commodity hardware.", "title": "" }, { "docid": "9096a4dac61f8a87da4f5cbfca5899a8", "text": "OBJECTIVE\nTo evaluate the CT findings of ruptured corpus luteal cysts.\n\n\nMATERIALS AND METHODS\nSix patients with a surgically proven ruptured corpus luteal cyst were included in this series. The prospective CT findings were retrospectively analyzed in terms of the size and shape of the cyst, the thickness and enhancement pattern of its wall, the attenuation of its contents, and peritoneal fluid.\n\n\nRESULTS\nThe mean diameter of the cysts was 2.8 (range, 1.5-4.8) cm; three were round and three were oval. The mean thickness of the cyst wall was 4.7 (range, 1-10) mm; in all six cases it showed strong enhancement, and in three was discontinuous. In five of six cases, the cystic contents showed high attenuation. Peritoneal fluid was present in all cases, and its attenuation was higher, especially around the uterus and adnexa, than that of urine present in the bladder.\n\n\nCONCLUSION\nIn a woman in whom CT reveals the presence of an ovarian cyst with an enhancing rim and highly attenuated contents, as well as highly attenuated peritoneal fluid, a ruptured corpus luteal cyst should be suspected. Other possible evidence of this is focal interruption of the cyst wall and the presence of peritoneal fluid around the adnexa.", "title": "" }, { "docid": "ae57246e37060c8338ad9894a19f1b6b", "text": "This paper seeks to establish the conceptual and empirical basis for an innovative instrument of corporate knowledge management: the knowledge map. It begins by briefly outlining the rationale for knowledge mapping, i.e., providing a common context to access expertise and experience in large companies. 
It then conceptualizes five types of knowledge maps that can be used in managing organizational knowledge. They are knowledge-sources, assets, -structures, -applications, and -development maps. In order to illustrate these five types of maps, a series of examples will be presented (from a multimedia agency, a consulting group, a market research firm, and a mediumsized services company) and the advantages and disadvantages of the knowledge mapping technique for knowledge management will be discussed. The paper concludes with a series of quality criteria for knowledge maps and proposes a five step procedure to implement knowledge maps in a corporate intranet.", "title": "" }, { "docid": "b151866647ad5e4cd50279bfdde4984a", "text": "Li-Fi stands for Light-Fidelity. Li-Fi innovation, which was suggested by Harald Haas, a German physicist, gives conduction of information over brightening through distribution of information via a LED light which changes in force quicker when compared to the vision of human beings which could take after. Wi-Fi is extraordinary for overall remote scope inside structures, while Li-Fi has been perfect for high thickness remote information scope in limited range besides for calming wireless impedance concerns. Smart meters are electronic devices which are used for recording consumption of electrical energy on a regular basis at an interval of an hour or less. In this paper, we motivate the need to learn and understand about the various new technologies like LiFi and its advantages. Further, we will understand the comparison between LiFi and Wi-Fi and learn about the advantages of using LiFi over WiFi. In addition to that we will also learn about the working of smart meters and its communication of the recorded information on a daily basis to the utility for monitoring and billing purposes.", "title": "" }, { "docid": "86a622185eeffc4a7ea96c307aed225a", "text": "Copyright © 2014 Massachusetts Medical Society. In light of the rapidly shifting landscape regarding the legalization of marijuana for medical and recreational purposes, patients may be more likely to ask physicians about its potential adverse and beneficial effects on health. The popular notion seems to be that marijuana is a harmless pleasure, access to which should not be regulated or considered illegal. Currently, marijuana is the most commonly used “illicit” drug in the United States, with about 12% of people 12 years of age or older reporting use in the past year and particularly high rates of use among young people.1 The most common route of administration is inhalation. The greenish-gray shredded leaves and flowers of the Cannabis sativa plant are smoked (along with stems and seeds) in cigarettes, cigars, pipes, water pipes, or “blunts” (marijuana rolled in the tobacco-leaf wrapper from a cigar). Hashish is a related product created from the resin of marijuana flowers and is usually smoked (by itself or in a mixture with tobacco) but can be ingested orally. Marijuana can also be used to brew tea, and its oil-based extract can be mixed into food products. The regular use of marijuana during adolescence is of particular concern, since use by this age group is associated with an increased likelihood of deleterious consequences2 (Table 1). Although multiple studies have reported detrimental effects, others have not, and the question of whether marijuana is harmful remains the subject of heated debate. 
Here we review the current state of the science related to the adverse health effects of the recreational use of marijuana, focusing on those areas for which the evidence is strongest.", "title": "" }, { "docid": "4ddd48db66a5951b82d5b7c2d9b8345a", "text": "In this paper we address the memory demands that come with the processing of 3-dimensional, high-resolution, multi-channeled medical images in deep learning. We exploit memory-efficient backpropagation techniques, to reduce the memory complexity of network training from being linear in the network’s depth, to being roughly constant – permitting us to elongate deep architectures with negligible memory increase. We evaluate our methodology in the paradigm of Image Quality Transfer, whilst noting its potential application to various tasks that use deep learning. We study the impact of depth on accuracy and show that deeper models have more predictive power, which may exploit larger training sets. We obtain substantially better results than the previous state-of-the-art model with a slight memory increase, reducing the rootmean-squared-error by 13%. Our code is publicly available.", "title": "" }, { "docid": "235899b940c658316693d0a481e2d954", "text": "BACKGROUND\nImmunohistochemical markers are often used to classify breast cancer into subtypes that are biologically distinct and behave differently. The aim of this study was to estimate mortality for patients with the major subtypes of breast cancer as classified using five immunohistochemical markers, to investigate patterns of mortality over time, and to test for heterogeneity by subtype.\n\n\nMETHODS AND FINDINGS\nWe pooled data from more than 10,000 cases of invasive breast cancer from 12 studies that had collected information on hormone receptor status, human epidermal growth factor receptor-2 (HER2) status, and at least one basal marker (cytokeratin [CK]5/6 or epidermal growth factor receptor [EGFR]) together with survival time data. Tumours were classified as luminal and nonluminal tumours according to hormone receptor expression. These two groups were further subdivided according to expression of HER2, and finally, the luminal and nonluminal HER2-negative tumours were categorised according to expression of basal markers. Changes in mortality rates over time differed by subtype. In women with luminal HER2-negative subtypes, mortality rates were constant over time, whereas mortality rates associated with the luminal HER2-positive and nonluminal subtypes tended to peak within 5 y of diagnosis and then decline over time. In the first 5 y after diagnosis the nonluminal tumours were associated with a poorer prognosis, but over longer follow-up times the prognosis was poorer in the luminal subtypes, with the worst prognosis at 15 y being in the luminal HER2-positive tumours. Basal marker expression distinguished the HER2-negative luminal and nonluminal tumours into different subtypes. These patterns were independent of any systemic adjuvant therapy.\n\n\nCONCLUSIONS\nThe six subtypes of breast cancer defined by expression of five markers show distinct behaviours with important differences in short term and long term prognosis. Application of these markers in the clinical setting could have the potential to improve the targeting of adjuvant chemotherapy to those most likely to benefit. 
The different patterns of mortality over time also suggest important biological differences between the subtypes that may result in differences in response to specific therapies, and that stratification of breast cancers by clinically relevant subtypes in clinical trials is urgently required.", "title": "" }, { "docid": "389a8e74f6573bd5e71b7c725ec3a4a7", "text": "Paucity of large curated hand-labeled training data forms a major bottleneck in the deployment of machine learning models in computer vision and other fields. Recent work (Data Programming) has shown how distant supervision signals in the form of labeling functions can be used to obtain labels for given data in near-constant time. In this work, we present Adversarial Data Programming (ADP), which presents an adversarial methodology to generate data as well as a curated aggregated label, given a set of weak labeling functions. We validated our method on the MNIST, Fashion MNIST, CIFAR 10 and SVHN datasets, and it outperformed many state-of-the-art models. We conducted extensive experiments to study its usefulness, as well as showed how the proposed ADP framework can be used for transfer learning as well as multi-task learning, where data from two domains are generated simultaneously using the framework along with the label information. Our future work will involve understanding the theoretical implications of this new framework from a game-theoretic perspective, as well as explore the performance of the method on more complex datasets.", "title": "" }, { "docid": "e28f51ea5a09081bd3037a26ca25aebd", "text": "Eye tracking specialists often need to understand and represent aggregate scanning strategies, but methods to identify similar scanpaths and aggregate multiple scanpaths have been elusive. A new method is proposed here to identify scanning strategies by aggregating groups of matching scanpaths automatically. A dataset of scanpaths is first converted to sequences of viewed area names, which are then represented in a dotplot. Matching sequences in the dotplot are found with linear regressions, and then used to cluster the scanpaths hierarchically. Aggregate scanning strategies are generated for each cluster and presented in an interactive dendrogram. While the clustering and aggregation method works in a bottom-up fashion, based on pair-wise matches, a top-down extension is also described, in which a scanning strategy is first input by cursor gesture, then matched against the dataset. The ability to discover both bottom-up and top-down strategy matches provides a powerful tool for scanpath analysis, and for understanding group scanning strategies.", "title": "" }, { "docid": "52faf4868f53008eec1f3ea4f39ed3f0", "text": "Hyaluronic acid (HA) soft-tissue fillers are the most popular degradable injectable products used for correcting skin depressions and restoring facial volume loss. From a rheological perspective, HA fillers are commonly characterised through their viscoelastic properties under shear-stress. However, despite the continuous mechanical pressure that the skin applies on the fillers, compression properties in static and dynamic modes are rarely considered. In this article, three different rheological tests (shear-stress test and compression tests in static and dynamic mode) were carried out on nine CE-marked cross-linked HA fillers. Corresponding shear-stress (G', tanδ) and compression (E', tanδc, normal force FN) parameters were measured. 
We show here that the tested products behave differently under shear-stress and under compression even though they are used for the same indications. G' showed the expected influence on the tissue volumising capacity, and the same influence was also observed for the compression parameters E'. In conclusion, HA soft-tissue fillers exhibit widely different biophysical characteristics and many variables contribute to their overall performance. The elastic modulus G' is not the only critical parameter to consider amongst the rheological properties: the compression parameters E' and FN also provide key information, which should be taken into account for a better prediction of clinical outcomes, especially for predicting the volumising capacity and probably the ability to stimulate collagen production by fibroblasts.", "title": "" } ]
scidocsrr
035b5b19237126eeb0a28beda02691df
Exploring the patterns of social behavior in GitHub
[ { "docid": "0153774b49121d8735cc3d33df69fc00", "text": "A common requirement of many empirical software engineering studies is the acquisition and curation of data from software repositories. During the last few years, GitHub has emerged as a popular project hosting, mirroring and collaboration platform. GitHub provides an extensive rest api, which enables researchers to retrieve both the commits to the projects' repositories and events generated through user actions on project resources. GHTorrent aims to create a scalable off line mirror of GitHub's event streams and persistent data, and offer it to the research community as a service. In this paper, we present the project's design and initial implementation and demonstrate how the provided datasets can be queried and processed.", "title": "" }, { "docid": "bac117da7b07fff75cf039165fc4e57e", "text": "The advent of distributed version control systems has led to the development of a new paradigm for distributed software development; instead of pushing changes to a central repository, developers pull them from other repositories and merge them locally. Various code hosting sites, notably Github, have tapped on the opportunity to facilitate pull-based development by offering workflow support tools, such as code reviewing systems and integrated issue trackers. In this work, we explore how pull-based software development works, first on the GHTorrent corpus and then on a carefully selected sample of 291 projects. We find that the pull request model offers fast turnaround, increased opportunities for community engagement and decreased time to incorporate contributions. We show that a relatively small number of factors affect both the decision to merge a pull request and the time to process it. We also examine the reasons for pull request rejection and find that technical ones are only a small minority.", "title": "" } ]
[ { "docid": "3b4ad43c44d824749da5487b34f31291", "text": "Recent terrorist attacks carried out on behalf of ISIS on American and European soil by lone wolf attackers or sleeper cells remind us of the importance of understanding the dynamics of radicalization mediated by social media communication channels. In this paper, we shed light on the social media activity of a group of twenty-five thousand users whose association with ISIS online radical propaganda has been manually verified. By using a computational tool known as dynamical activity-connectivity maps, based on network and temporal activity patterns, we investigate the dynamics of social influence within ISIS supporters. We finally quantify the effectiveness of ISIS propaganda by determining the adoption of extremist content in the general population and draw a parallel between radical propaganda and epidemics spreading, highlighting that information broadcasters and influential ISIS supporters generate highly-infectious cascades of information contagion. Our findings will help generate effective countermeasures to combat the group and other forms of online extremism.", "title": "" }, { "docid": "1e18d34152a15d84993124b1e689714a", "text": "Objectives\nEconomic, social, technical, and political drivers are fundamentally changing the nature of work and work environments, with profound implications for the field of occupational health. Nevertheless, researchers and practitioners entering the field are largely being trained to assess and control exposures using approaches developed under old models of work and risks.\n\n\nMethods\nA speaker series and symposium were organized to broadly explore current challenges and future directions for the occupational health field. Broad themes identified throughout these discussions are characterized and discussed to highlight important future directions of occupational health.\n\n\nFindings\nDespite the relatively diverse group of presenters and topics addressed, some important cross-cutting themes emerged. Changes in work organization and the resulting insecurity and precarious employment arrangements change the nature of risk to a large fraction of the workforce. Workforce demographics are changing, and economic disparities among working groups are growing. Globalization exacerbates the 'race to the bottom' for cheap labor, poor regulatory oversight, and limited labor rights. Largely, as a result of these phenomena, the historical distinction between work and non-work exposures has become largely artificial and less useful in understanding risks and developing effective public health intervention models. Additional changes related to climate change, governmental and regulatory limitations, and inadequate surveillance systems challenge and frustrate occupational health progress, while new biomedical and information technologies expand the opportunities for understanding and intervening to improve worker health.\n\n\nConclusion\nThe ideas and evidences discussed during this project suggest that occupational health training, professional practice, and research evolve towards a more holistic, public health-oriented model of worker health. This will require engagement with a wide network of stakeholders. 
Research and training portfolios need to be broadened to better align with the current realities of work and health and to prepare practitioners for the changing array of occupational health challenges.", "title": "" }, { "docid": "a5001e03007f3fd166e15db37dcd3bc7", "text": "Instrumental learning involves corticostriatal circuitry and the dopaminergic system. This system is typically modeled in the reinforcement learning (RL) framework by incrementally accumulating reward values of states and actions. However, human learning also implicates prefrontal cortical mechanisms involved in higher level cognitive functions. The interaction of these systems remains poorly understood, and models of human behavior often ignore working memory (WM) and therefore incorrectly assign behavioral variance to the RL system. Here we designed a task that highlights the profound entanglement of these two processes, even in simple learning problems. By systematically varying the size of the learning problem and delay between stimulus repetitions, we separately extracted WM-specific effects of load and delay on learning. We propose a new computational model that accounts for the dynamic integration of RL and WM processes observed in subjects' behavior. Incorporating capacity-limited WM into the model allowed us to capture behavioral variance that could not be captured in a pure RL framework even if we (implausibly) allowed separate RL systems for each set size. The WM component also allowed for a more reasonable estimation of a single RL process. Finally, we report effects of two genetic polymorphisms having relative specificity for prefrontal and basal ganglia functions. Whereas the COMT gene coding for catechol-O-methyl transferase selectively influenced model estimates of WM capacity, the GPR6 gene coding for G-protein-coupled receptor 6 influenced the RL learning rate. Thus, this study allowed us to specify distinct influences of the high-level and low-level cognitive functions on instrumental learning, beyond the possibilities offered by simple RL models.", "title": "" }, { "docid": "658ad1e8c3b98c1ccbaa5fe69e762246", "text": "Depth estimation is a fundamental problem for light field photography applications. Numerous methods have been proposed in recent years, which either focus on crafting cost terms for more robust matching, or on analyzing the geometry of scene structures embedded in the epipolar-plane images. Significant improvements have been made in terms of overall depth estimation error; however, current state-of-the-art methods still show limitations in handling intricate occluding structures and complex scenes with multiple occlusions. To address these challenging issues, we propose a very effective depth estimation framework which focuses on regularizing the initial label confidence map and edge strength weights. Specifically, we first detect partially occluded boundary regions (POBR) via superpixel-based regularization. Series of shrinkage/reinforcement operations are then applied on the label confidence map and edge strength weights over the POBR. 
We show that after weight manipulations, even a low-complexity weighted least squares model can produce much better depth estimation than the state-of-the-art methods in terms of average disparity error rate, occlusion boundary precision-recall rate, and the preservation of intricate visual features.", "title": "" }, { "docid": "8c4c469a3fee72e93f60fd47ef78d482", "text": "With the continuously increasing demand of cost effective, broadband wireless access, radio-over-fiber (RoF) starts to gain more and more momentum. Various techniques already exist, using analog (ARoF) or digitized (DRoF) radio signals over fiber; each with their own advantages and disadvantages. By transmitting a sigma delta modulated signal over fiber (SDoF), a similar immunity to impairments as DRoF can be obtained while maintaining the low complexity of ARoF. This letter describes a detailed experimental comparison between ARoF and SDoF that quantifies the improvement in linearity and error vector magnitude (EVM) of SDoF over ARoF. The experiments were carried out using a 16-QAM constellation with a baudrate from 20 to 125 MBd modulated on a central carrier frequency of 1 GHz. The sigma delta modulator runs at 8 or 13.5 Gbps. A high-speed vertical-cavity surface-emitting laser (VCSEL) operating at 850 nm is used to transmit the signal over 200-m multimode fiber. The receiver amplifies the electrical signals and subsequently filters to recover the original RF signal. Compared with ARoF, improvements exceeding 40 dB were measured on the third order intermodulation products when SDoF was employed, the EVM improves between 2.4 and 7.1 dB.", "title": "" }, { "docid": "da8cdee004db530e262a13e21daf4970", "text": "Arcing between the plasma and the wafer, kit, or target in PVD processes can cause significant wafer damage and foreign material contamination which limits wafer yield. Monitoring the plasma and quickly detecting this arcing phenomena is critical to ensuring that today's PVD processes run optimally and maximize product yield. This is particularly true in 300mm semiconductor manufacturing, where energies used are higher and more product is exposed to the plasma with each wafer run than in similar 200mm semiconductor manufacturing processes.", "title": "" }, { "docid": "d81d4bc4e8d2bfb0db1fd4141bf2191c", "text": "Anton 2 is a second-generation special-purpose supercomputer for molecular dynamics simulations that achieves significant gains in performance, programmability, and capacity compared to its predecessor, Anton 1. The architecture of Anton 2 is tailored for fine-grained event-driven operation, which improves performance by increasing the overlap of computation with communication, and also allows a wider range of algorithms to run efficiently, enabling many new software-based optimizations. A 512-node Anton 2 machine, currently in operation, is up to ten times faster than Anton 1 with the same number of nodes, greatly expanding the reach of all-atom biomolecular simulations. Anton 2 is the first platform to achieve simulation rates of multiple microseconds of physical time per day for systems with millions of atoms. 
Demonstrating strong scaling, the machine simulates a standard 23,558-atom benchmark system at a rate of 85 μs/day---180 times faster than any commodity hardware platform or general-purpose supercomputer.", "title": "" }, { "docid": "9167fbdd1fe4d5c17ffeaf50c6fd32b7", "text": "For many networked games, such as the Defense of the Ancients and StarCraft series, the unofficial leagues created by players themselves greatly enhance user-experience, and extend the success of each game. Understanding the social structure that players of these games implicitly form helps to create innovative gaming services to the benefit of both players and game operators. But how to extract and analyse the implicit social structure? We address this question by first proposing a formalism consisting of various ways to map interaction to social structure, and apply this to real-world data collected from three different game genres. We analyse the implications of these mappings for in-game and gaming-related services, ranging from network and socially-aware matchmaking of players, to an investigation of social network robustness against player departure.", "title": "" }, { "docid": "bbb08c98a2265c53ba590e0872e91e1d", "text": "Reinforcement learning (RL) is one of the most general approaches to learning control. Its applicability to complex motor systems, however, has been largely impossible so far due to the computational difficulties that reinforcement learning encounters in high dimensional continuous state-action spaces. In this paper, we derive a novel approach to RL for parameterized control policies based on the framework of stochastic optimal control with path integrals. While solidly grounded in optimal control theory and estimation theory, the update equations for learning are surprisingly simple and have no danger of numerical instabilities as neither matrix inversions nor gradient learning rates are required. Empirical evaluations demonstrate significant performance improvements over gradient-based policy learning and scalability to high-dimensional control problems. Finally, a learning experiment on a robot dog illustrates the functionality of our algorithm in a real-world scenario. We believe that our new algorithm, Policy Improvement with Path Integrals (PI2), offers currently one of the most efficient, numerically robust, and easy to implement algorithms for RL in robotics.", "title": "" }, { "docid": "40464db4c2deea0e4c1c3b760745c168", "text": "It is challenging to effectively check a regular property of a program. This paper presents RGSE, a regular property guided dynamic symbolic execution (DSE) engine, for finding a program path satisfying a regular property as soon as possible. The key idea is to evaluate the candidate branches based on the history and future information, and explore the branches along which the paths are more likely to satisfy the property in priority. We have applied RGSE to 16 real-world open source Java programs, totaling 270K lines of code. Compared with the state-of-the-art, RGSE achieves two orders of magnitude speedups for finding the first target path. RGSE can benefit many research topics of software testing and analysis, such as path-oriented test case generation, typestate bug finding, and performance tuning. 
The demo video is at: https://youtu.be/7zAhvRIdaUU, and RGSE can be accessed at: http://jrgse.github.io.", "title": "" }, { "docid": "97f153d8139958fd00002e6a2365d965", "text": "A method is proposed for fused three-dimensional (3-D) shape estimation and visibility analysis of an unknown, markerless, deforming object through a multicamera vision system. Complete shape estimation is defined herein as the process of 3-D reconstruction of a model through fusion of stereo triangulation data and a visual hull. The differing accuracies of both methods rely on the number and placement of the cameras. Stereo triangulation yields a high-density, high-accuracy reconstruction of a surface patch from a small surface area, while a visual hull yields a complete, low-detail volumetric approximation of the object. The resultant complete 3-D model is, then, temporally projected based on the tracked object’s deformation, yielding a robust deformed shape prediction. Visibility and uncertainty analyses, on the projected model, estimate the expected accuracy of reconstruction at the next sampling instant. In contrast to common techniques that rely on a priori known models and identities of static objects, our method is distinct in its direct application to unknown, markerless, deforming objects, where the object model and identity are unknown to the system. Extensive simulations and comparisons, some of which are presented herein, thoroughly demonstrate the proposed method and its benefits over individual reconstruction techniques. © 2016 SPIE and IS&T [DOI: 10.1117/1.JEI.25.4.041009]", "title": "" }, { "docid": "4be57bfa4e510cdf0e8ad833034d7fce", "text": "Dynamic data flow tracking (DFT) is a technique broadly used in a variety of security applications that, unfortunately, exhibits poor performance, preventing its adoption in production systems. We present ShadowReplica, a new and efficient approach for accelerating DFT and other shadow memory-based analyses, by decoupling analysis from execution and utilizing spare CPU cores to run them in parallel. Our approach enables us to run a heavyweight technique, like dynamic taint analysis (DTA), twice as fast, while concurrently consuming fewer CPU cycles than when applying it in-line. DFT is run in parallel by a second shadow thread that is spawned for each application thread, and the two communicate using a shared data structure. We avoid the problems suffered by previous approaches, by introducing an off-line application analysis phase that utilizes both static and dynamic analysis methodologies to generate optimized code for decoupling execution and implementing DFT, while it also minimizes the amount of information that needs to be communicated between the two threads. Furthermore, we use a lock-free ring buffer structure and an N-way buffering scheme to efficiently exchange data between threads and maintain high cache-hit rates on multi-core CPUs. Our evaluation shows that ShadowReplica is on average ~2.3× faster than in-line DFT (~2.75× slowdown over native execution) when running the SPEC CPU2006 benchmark, while similar speed ups were observed with command-line utilities and popular server software. Astoundingly, ShadowReplica also reduces the CPU cycles used up to 30%.", "title": "" }, { "docid": "0ee09adae30459337f8e7261165df121", "text": "Mobile malware threats (e.g., on Android) have recently become a real concern. 
In this paper, we evaluate the state-of-the-art commercial mobile anti-malware products for Android and test how resistant they are against various common obfuscation techniques (even with known malware). Such an evaluation is important for not only measuring the available defense against mobile malware threats, but also proposing effective, next-generation solutions. We developed DroidChameleon, a systematic framework with various transformation techniques, and used it for our study. Our results on 10 popular commercial anti-malware applications for Android are worrisome: none of these tools is resistant against common malware transformation techniques. In addition, a majority of them can be trivially defeated by applying slight transformation over known malware with little effort for malware authors. Finally, in light of our results, we propose possible remedies for improving the current state of malware detection on mobile devices.", "title": "" }, { "docid": "5c0d3c8962d1f18a50162bbf3dcd4658", "text": "The field of power electronics poses challenging control problems that cannot be treated in a complete manner using traditional modelling and controller design approaches. The main difficulty arises from the hybrid nature of these systems due to the presence of semiconductor switches that induce different modes of operation and operate with a high switching frequency. Since the control techniques traditionally employed in industry feature a significant potential for improving the performance and the controller design, the field of power electronics invites the application of advanced hybrid systems methodologies. The computational power available today and the recent theoretical advances in the control of hybrid systems allow one to tackle these problems in a novel way that improves the performance of the system, and is systematic and implementable. In this paper, this is illustrated by two examples, namely the Direct Torque Control of three-phase induction motors and the optimal control of switch-mode dc-dc converters.", "title": "" }, { "docid": "b27276c9743bdb33c0cb807653588521", "text": "Most previous neurophysiological studies evoked emotions by presenting visual stimuli. Models of the emotion circuits in the brain have for the most part ignored emotions arising from musical stimuli. To our knowledge, this is the first emotion brain study which examined the influence of visual and musical stimuli on brain processing. Highly arousing pictures of the International Affective Picture System and classical musical excerpts were chosen to evoke the three basic emotions of happiness, sadness and fear. The emotional stimuli modalities were presented for 70 s either alone or combined (congruent) in a counterbalanced and random order. Electroencephalogram (EEG) Alpha-Power-Density, which is inversely related to neural electrical activity, in 30 scalp electrodes from 24 right-handed healthy female subjects, was recorded. In addition, heart rate (HR), skin conductance responses (SCR), respiration, temperature and psychometrical ratings were collected. Results showed that the experienced quality of the presented emotions was most accurate in the combined conditions, intermediate in the picture conditions and lowest in the sound conditions. Furthermore, both the psychometrical ratings and the physiological involvement measurements (SCR, HR, Respiration) were significantly increased in the combined and sound conditions compared to the picture conditions. 
Finally, repeated measures ANOVA revealed the largest Alpha-Power-Density for the sound conditions, intermediate for the picture conditions, and lowest for the combined conditions, indicating the strongest activation in the combined conditions in a distributed emotion and arousal network comprising frontal, temporal, parietal and occipital neural structures. Summing up, these findings demonstrate that music can markedly enhance the emotional experience evoked by affective pictures.", "title": "" }, { "docid": "c47f251cc62b405be1eb1b105f443466", "text": "The conceptualization of gender variant populations within studies have consisted of imposed labels and a diversity of individual identities that preclude any attempt at examining the variations found among gender variant populations, while at the same time creating artificial distinctions between groups that may not actually exist. Data were collected from 90 transgender/transsexual people using confidential, self-administered questionnaires. Factors like age of transition, being out to others, and participant's race and class were associated with experiences of transphobic life events. Discrimination can have profound impact on transgender/transsexual people's lives, but different factors can influence one's experience of transphobia. Further studies are needed to examine how transphobia manifests, and how gender characteristics impact people's lives.", "title": "" }, { "docid": "21af4f870f466baa4bdb02b37c4d9656", "text": "Software maps -- linking rectangular 3D-Treemaps, software system structure, and performance indicators -- are commonly used to support informed decision making in software-engineering processes. A key aspect for this decision making is that software maps provide the structural context required for correct interpretation of these performance indicators. In parallel, source code repositories and collaboration platforms are an integral part of today's software-engineering tool set, but cannot properly incorporate software maps since implementations are only available as stand-alone applications. Hence, software maps are 'disconnected' from the main body of this tool set, rendering their use and provisioning overly complicated, which is one of the main reasons against regular use. We thus present a web-based rendering system for software maps that achieves both fast client-side page load time and interactive frame rates even with large software maps. We significantly reduce page load time by efficiently encoding hierarchy and geometry data for the net transport. Apart from that, appropriate interaction, layouting, and labeling techniques as well as common image enhancements aid evaluation of project-related quality aspects. Metrics provisioning can further be implemented by predefined attribute mappings to simplify communication of project specific quality aspects. The system is integrated into dashboards to demonstrate how our web-based approach makes software maps more accessible to many different stakeholders in software-engineering projects.", "title": "" }, { "docid": "590cf6884af6223ce4e827ba2fe18209", "text": "1. The extracellular patch clamp method, which first allowed the detection of single channel currents in biological membranes, has been further refined to enable higher current resolution, direct membrane patch potential control, and physical isolation of membrane patches. 2. 
A description of a convenient method for the fabrication of patch recording pipettes is given together with procedures followed to achieve giga-seals i.e. pipettemembrane seals with resistances of 109–1011Ω. 3. The basic patch clamp recording circuit, and designs for improved frequency response are described along with the present limitations in recording the currents from single channels. 4. Procedures for preparation and recording from three representative cell types are given. Some properties of single acetylcholine-activated channels in muscle membrane are described to illustrate the improved current and time resolution achieved with giga-seals. 5. A description is given of the various ways that patches of membrane can be physically isolated from cells. This isolation enables the recording of single channel currents with well-defined solutions on both sides of the membrane. Two types of isolated cell-free patch configurations can be formed: an inside-out patch with its cytoplasmic membrane face exposed to the bath solution, and an outside-out patch with its extracellular membrane face exposed to the bath solution. 6. The application of the method for the recording of ionic currents and internal dialysis of small cells is considered. Single channel resolution can be achieved when recording from whole cells, if the cell diameter is small (<20μm). 7. The wide range of cell types amenable to giga-seal formation is discussed. The extracellular patch clamp method, which first allowed the detection of single channel currents in biological membranes, has been further refined to enable higher current resolution, direct membrane patch potential control, and physical isolation of membrane patches. A description of a convenient method for the fabrication of patch recording pipettes is given together with procedures followed to achieve giga-seals i.e. pipettemembrane seals with resistances of 109–1011Ω. The basic patch clamp recording circuit, and designs for improved frequency response are described along with the present limitations in recording the currents from single channels. Procedures for preparation and recording from three representative cell types are given. Some properties of single acetylcholine-activated channels in muscle membrane are described to illustrate the improved current and time resolution achieved with giga-seals. A description is given of the various ways that patches of membrane can be physically isolated from cells. This isolation enables the recording of single channel currents with well-defined solutions on both sides of the membrane. Two types of isolated cell-free patch configurations can be formed: an inside-out patch with its cytoplasmic membrane face exposed to the bath solution, and an outside-out patch with its extracellular membrane face exposed to the bath solution. The application of the method for the recording of ionic currents and internal dialysis of small cells is considered. Single channel resolution can be achieved when recording from whole cells, if the cell diameter is small (<20μm). The wide range of cell types amenable to giga-seal formation is discussed.", "title": "" }, { "docid": "17a475b655134aafde0f49db06bec127", "text": "Estimating the number of persons in a public place provides useful information for video-based surveillance and monitoring applications. In the case of oblique camera setup, counting is either achieved by detecting individuals or by statistically establishing relations between values of simple image features (e.g. 
amount of moving pixels, edge density, etc.) to the number of people. While the methods of the first category exhibit poor accuracy in cases of occlusions, the second category of methods are sensitive to perspective distortions, and require people to move in order to be counted. In this paper we investigate the possibilities of developing a robust statistical method for people counting. To maximize its applicability scope, we choose-in contrast to the majority of existing methods from this category-not to require prior learning of categories corresponding to different number of people. Second, we search for a suitable way of correcting the perspective distortion. Finally, we link the estimation to a confidence value that takes into account the known factors being of influence on the result. The confidence is then used to refine final results.", "title": "" }, { "docid": "6bd3614d830cbef03c9567bf096e417a", "text": "Rehabilitation robots start to become an important tool in stroke rehabilitation. Compared to manual arm training, robot-supported training can be more intensive, of longer duration, repetitive and task-oriented. Therefore, these devices have the potential to improve the rehabilitation process in stroke patients. While in the past, most groups have been working with endeffector-based robots, exoskeleton robots become more and more important, mainly because they offer a better guidance of the single human joints, especially during movements with large ranges. Regarding the upper extremities, the shoulder is the most complex human joint and its actuation is, therefore, challenging. This paper deals with shoulder actuation principles for exoskeleton robots. First, a quantitative analysis of the human shoulder movement is presented. Based on that analysis two shoulder actuation principles that provide motion of the center of the glenohumeral joint are presented and evaluated.", "title": "" } ]
scidocsrr
dc9b28f89bc3939ec6b55eb4ce11ab84
Computer-Based Clinical Decision Support System for Prediction of Heart Diseases Using Naïve Bayes Algorithm
[ { "docid": "30d7f140a5176773611b3c1f8ec4953e", "text": "The healthcare environment is generally perceived as being ‘information rich’ yet ‘knowledge poor’. There is a wealth of data available within the healthcare systems. However, there is a lack of effective analysis tools to discover hidden relationships and trends in data. Knowledge discovery and data mining have found numerous applications in business and scientific domains. Valuable knowledge can be discovered from the application of data mining techniques in healthcare systems. In this study, we briefly examine the potential use of classification-based data mining techniques, such as rule-based methods, decision trees and Artificial Neural Networks, on the massive volume of healthcare data. In particular, we consider a case study using classification techniques on a medical data set of diabetic patients.", "title": "" } ]
[ { "docid": "eea9332a263b7e703a60c781766620e5", "text": "The use of topic models to analyze domainspecific texts often requires manual validation of the latent topics to ensure that they are meaningful. We introduce a framework to support such a large-scale assessment of topical relevance. We measure the correspondence between a set of latent topics and a set of reference concepts to quantify four types of topical misalignment: junk, fused, missing, and repeated topics. Our analysis compares 10,000 topic model variants to 200 expertprovided domain concepts, and demonstrates how our framework can inform choices of model parameters, inference algorithms, and intrinsic measures of topical quality.", "title": "" }, { "docid": "29b1aa2ead1e961ddf9ae85e4b53ffa5", "text": "Robot-assisted dressing offers an opportunity to benefit the lives of many people with disabilities, such as some older adults. However, robots currently lack common sense about the physical implications of their actions on people. The physical implications of dressing are complicated by non-rigid garments, which can result in a robot indirectly applying high forces to a person's body. We present a deep recurrent model that, when given a proposed action by the robot, predicts the forces a garment will apply to a person's body. We also show that a robot can provide better dressing assistance by using this model with model predictive control. The predictions made by our model only use haptic and kinematic observations from the robot's end effector, which are readily attainable. Collecting training data from real world physical human-robot interaction can be time consuming, costly, and put people at risk. Instead, we train our predictive model using data collected in an entirely self-supervised fashion from a physics-based simulation. We evaluated our approach with a PR2 robot that attempted to pull a hospital gown onto the arms of 10 human participants. With a 0.2s prediction horizon, our controller succeeded at high rates and lowered applied force while navigating the garment around a persons fist and elbow without getting caught. Shorter prediction horizons resulted in significantly reduced performance with the sleeve catching on the participants' fists and elbows, demonstrating the value of our model's predictions. These behaviors of mitigating catches emerged from our deep predictive model and the controller objective function, which primarily penalizes high forces.", "title": "" }, { "docid": "42c2e599dbbb00784e2a6837ebd17ade", "text": "Several real-world classification problems are example-dependent cost-sensitive in nature, where the costs due to misclassification vary between examples. However, standard classification methods do not take these costs into account, and assume a constant cost of misclassification errors. State-of-the-art example-dependent cost-sensitive techniques only introduce the cost to the algorithm, either before or after training, therefore, leaving opportunities to investigate the potential impact of algorithms that take into account the real financial example-dependent costs during an algorithm training. In this paper, we propose an example-dependent cost-sensitive decision tree algorithm, by incorporating the different example-dependent costs into a new cost-based impurity measure and a new cost-based pruning criteria. Then, using three different databases, from three real-world applications: credit card fraud detection, credit scoring and direct marketing, we evaluate the proposed method. 
The results show that the proposed algorithm is the best performing method for all databases. Furthermore, when compared against a standard decision tree, our method builds significantly smaller trees in only a fifth of the time, while having a superior performance measured by cost savings, leading to a method that not only has more business-oriented results, but also a method that creates simpler models that are easier to analyze. 2015 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "9f987cd94d103fb3d4496b7d95b6079f", "text": "In the world of sign language and gestures, a lot of research work has been done over the past three decades. This has brought about a gradual transition from isolated to continuous, and static to dynamic gesture recognition for operations on a limited vocabulary. In the present scenario, human machine interactive systems facilitate communication between the deaf and hearing people in real world situations. In order to improve the accuracy of recognition, many researchers have deployed methods such as HMM, Artificial Neural Networks, and the Kinect platform. Effective algorithms for segmentation, classification, pattern matching and recognition have evolved. The main purpose of this paper is to analyze these methods and to effectively compare them, which will enable the reader to reach an optimal solution. This creates both challenges and opportunities for sign language recognition related research. Keywords: Sign Language Recognition, Hidden Markov Model, Artificial Neural Network, Kinect Platform, Fuzzy Logic.", "title": "" }, { "docid": "14857144b52dbfb661d6ef4cd2c59b64", "text": "The candidate confirms that the work submitted is his/her own and that appropriate credit has been given where reference has been made to the work of others. ACKNOWLEDGMENT I am truly indebted and thankful to my scholarship sponsor, the National Information Technology Development Agency (NITDA), Nigeria, for giving me the rare privilege to study at the University of Leeds. I am sincerely and heartily grateful to my supervisor Dr. Des McLernon for his valuable support, patience and guidance throughout the course of this dissertation. I am sure it would not have been possible without his help. I would like to express my deep gratitude to Romero-Zurita Nabil for his enthusiastic encouragement, useful critique, recommendation and providing me with great information resources. I also acknowledge my colleague Frempong Kwadwo for his invaluable suggestions and discussion. Finally, I would like to appreciate my parents for their support and encouragement throughout my study at Leeds. Above all, special thanks to God Almighty for the gift of life. DEDICATION This thesis is dedicated to my family: to my parents for inculcating the importance of hard work and higher education; to Omobolanle for being a caring and loving sister; and to Abimbola for believing in me.", "title": "" }, { "docid": "e08e0eea0e3f3735b53f9eb76c155f9c", "text": "The temporal-difference methods TD(λ) and Sarsa(λ) form a core part of modern reinforcement learning. Their appeal comes from their good performance, low computational cost, and their simple interpretation, given by their forward view. Recently, new versions of these methods were introduced, called true online TD(λ) and true online Sarsa(λ), respectively (van Seijen and Sutton, 2014). 
Algorithmically, these true online methods only make two small changes to the update rules of the regular methods, and the extra computational cost is negligible in most cases. However, they follow the ideas underlying the forward view much more closely. In particular, they maintain an exact equivalence with the forward view at all times, whereas the traditional versions only approximate it for small step-sizes. We hypothesize that these true online methods not only have better theoretical properties, but also dominate the regular methods empirically. In this article, we put this hypothesis to the test by performing an extensive empirical comparison. Specifically, we compare the performance of true online TD(λ)/Sarsa(λ) with regular TD(λ)/Sarsa(λ) on random MRPs, a real-world myoelectric prosthetic arm, and a domain from the Arcade Learning Environment. We use linear function approximation with tabular, binary, and non-binary features. Our results suggest that the true online methods indeed dominate the regular methods. Across all domains/representations the learning speed of the true online methods are often better, but never worse than that of the regular methods. An additional advantage is that no choice between traces has to be made for the true online methods. We show that new true online temporal-difference methods can be derived by making changes to the real-time forward view and then rewriting the update equations.", "title": "" }, { "docid": "a2688a1169babed7e35a52fa875505d4", "text": "Crowdsourcing label generation has been a crucial component for many real-world machine learning applications. In this paper, we provide finite-sample exponential bounds on the error rate (in probability and in expectation) of hyperplane binary labeling rules for the Dawid-Skene (and Symmetric DawidSkene ) crowdsourcing model. The bounds can be applied to analyze many commonly used prediction methods, including the majority voting, weighted majority voting and maximum a posteriori (MAP) rules. These bound results can be used to control the error rate and design better algorithms. In particular, under the Symmetric Dawid-Skene model we use simulation to demonstrate that the data-driven EM-MAP rule is a good approximation to the oracle MAP rule which approximately optimizes our upper bound on the mean error rate for any hyperplane binary labeling rule. Meanwhile, the average error rate of the EM-MAP rule is bounded well by the upper bound on the mean error rate of the oracle MAP rule in the simulation.", "title": "" }, { "docid": "faf9c570aacd161296de180850153078", "text": "Two problems occur when bundle adjustment (BA) is applied on long image sequences: the large calculation time and the drift (or error accumulation). In recent work, the calculation time is reduced by local BAs applied in an incremental scheme. The drift may be reduced by fusion of GPS and Structure-from-Motion. An existing fusion method is BA minimizing a weighted sum of image and GPS errors. This paper introduces two constrained BAs for fusion, which enforce an upper bound for the reprojection error. These BAs are alternatives to the existing fusion BA, which does not guarantee a small reprojection error and requires a weight as input. Then the three fusion BAs are integrated in an incremental Structure-from-Motion method based on local BA. 
Lastly, we will compare the fusion results on a long monocular image sequence and a low cost GPS.", "title": "" }, { "docid": "203f34a946e00211ebc6fce8e2a061ed", "text": "We propose a new personalized document summarization method that observes a user's personal reading preferences. These preferences are inferred from the user's reading behaviors, including facial expressions, gaze positions, and reading durations that were captured during the user's past reading activities. We compare the performance of our algorithm with that of a few peer algorithms and software packages. The results of our comparative study show that our algorithm can produce more superior personalized document summaries than all the other methods in that the summaries generated by our algorithm can better satisfy a user's personal preferences.", "title": "" }, { "docid": "c6befaca710e45101b9a12dbc8110a0b", "text": "The realized strategy contents of information systems (IS) strategizing are a result of both deliberate and emergent patterns of action. In this paper, we focus on emergent patterns of action by studying the formation of strategies that build on local technology-mediated practices. This is done through case study research of the emergence of a sustainability strategy at a European automaker. Studying the practices of four organizational sub-communities, we develop a process perspective of sub-communities’ activity-based production of strategy contents. The process model explains the contextual conditions that make subcommunities initiate SI strategy contents production, the activity-based process of strategy contents production, and the IS strategy outcome. The process model, which draws on Jarzabkowski’s strategy-as-practice lens and Mintzberg’s strategy typology, contributes to the growing IS strategizing literature that examines local practices in IS efforts of strategic importance. 2013 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "d247f00420b872fb0153a343d2b44dd3", "text": "Network embedding in heterogeneous information networks (HINs) is a challenging task, due to complications of different node types and rich relationships between nodes. As a result, conventional network embedding techniques cannot work on such HINs. Recently, metapathbased approaches have been proposed to characterize relationships in HINs, but they are ineffective in capturing rich contexts and semantics between nodes for embedding learning, mainly because (1) metapath is a rather strict single path node-node relationship descriptor, which is unable to accommodate variance in relationships, and (2) only a small portion of paths can match the metapath, resulting in sparse context information for embedding learning. In this paper, we advocate a new metagraph concept to capture richer structural contexts and semantics between distant nodes. A metagraph contains multiple paths between nodes, each describing one type of relationships, so the augmentation of multiple metapaths provides an effective way to capture rich contexts and semantic relations between nodes. This greatly boosts the ability of metapath-based embedding techniques in handling very sparse HINs. We propose a new embedding learning algorithm, namely MetaGraph2Vec, which uses metagraph to guide the generation of random walks and to learn latent embeddings of multi-typed HIN nodes. 
Experimental results show that MetaGraph2Vec is able to outperform the state-of-the-art baselines in various heterogeneous network mining tasks such as node classification, node clustering, and similarity search.", "title": "" }, { "docid": "8b85dc461c11f44e27caaa8c8816a49b", "text": "In a Role-Playing Game, finding optimal trajectories is one of the most important tasks. In fact, the strategy decision system becomes a key component of a game engine. Determining the way in which decisions are taken (online, batch or simulated) and the consumed resources in decision making (e.g. execution time, memory) will influence, to a major degree, the game performance. When classical search algorithms such as A∗ can be used, they are the very first option. Nevertheless, such methods rely on precise and complete models of the search space, and there are many interesting scenarios where their application is not possible. Then, model-free methods for sequential decision making under uncertainty are the best choice. In this paper, we propose a heuristic planning strategy to incorporate the ability of heuristic-search in path-finding into a Dyna agent. The proposed Dyna-H algorithm, as A∗ does, selects branches more likely to produce outcomes than other branches. Besides, it has the advantages of being a model-free online reinforcement learning algorithm. The proposal was evaluated against the one-step Q-Learning and Dyna-Q algorithms, obtaining excellent experimental results: Dyna-H significantly outperforms both methods in all experiments. We also suggest a functional analogy between the proposed sampling from worst trajectories heuristic and the role of dreams (e.g. nightmares) in human behavior.", "title": "" }, { "docid": "7894b8eae0ceacc92ef2103f0ea8e693", "text": "In this paper, different first and second derivative filters are investigated to find the edge map after denoising a corrupted gray scale image. We have proposed a new derivative filter of first order and described a novel approach of edge finding with an aim to find a better edge map in a restored gray scale image. A subjective method has been used by visually comparing the performance of the proposed derivative filter with other existing first and second order derivative filters. The root mean square error and root mean square of signal to noise ratio have been used for objective evaluation of the derivative filters. Finally, to validate the efficiency of the filtering schemes, different algorithms are proposed and the simulation study has been carried out using MATLAB 5.0.", "title": "" }, { "docid": "c388626855099e1e9f8e5f46d4e271fc", "text": "The literature assumes that Enterprise Resource Planning (ERP) systems are complex tools. Due to this complexity, ERP systems produce negative impacts on the users’ acceptance. However, few studies have tried to identify the factors that influence the ERP users’ acceptance. This paper’s aim is to focus on decisive factors influencing the ERP users’ acceptance and use. Specifically, the authors have developed a research model based on the Technology Acceptance Model (TAM) for testing the influence of the Critical Success Factors (CSFs) on ERP implementation. The CSFs used are: (1) top management support, (2) communication, (3) cooperation, (4) training and (5) technological complexity. This research model has offered some evidence about the main acceptance factors on ERP which help to shape the users’ behavior toward ERP. 2008 Elsevier B.V. 
All rights reserved.", "title": "" }, { "docid": "df4477952bc78f9ddca6a637b0d9b990", "text": "Food preference learning is an important component of wellness applications and restaurant recommender systems as it provides personalized information for effective food targeting and suggestions. However, existing systems require some form of food journaling to create a historical record of an individual's meal selections. In addition, current interfaces for food or restaurant preference elicitation rely extensively on text-based descriptions and rating methods, which can impose high cognitive load, thereby hampering wide adoption.\n In this paper, we propose PlateClick, a novel system that bootstraps food preference using a simple, visual quiz-based user interface. We leverage a pairwise comparison approach with only visual content. Using over 10,028 recipes collected from Yummly, we design a deep convolutional neural network (CNN) to learn the similarity distance metric between food images. Our model is shown to outperform state-of-the-art CNN by 4 times in terms of mean Average Precision. We explore a novel online learning framework that is suitable for learning users' preferences across a large scale dataset based on a small number of interactions (≤ 15). Our online learning approach balances exploitation-exploration and takes advantage of food similarities using preference-propagation in locally connected graphs.\n We evaluated our system in a field study of 227 anonymous users. The results demonstrate that our method outperforms other baselines by a significant margin, and the learning process can be completed in less than one minute. In summary, PlateClick provides a light-weight, immersive user experience for efficient food preference elicitation.", "title": "" }, { "docid": "c4f851911ed4bc21d666cce45d5595eb", "text": "! ABSTRACT Purpose The lack of a security evaluation method might expose organizations to several risky situations. This paper aims at presenting a cyclical evaluation model of information security maturity. Design/methodology/approach This model was developed through the definition of a set of steps to be followed in order to obtain periodical evaluation of maturity and continuous improvement of controls. Findings – This model is based on controls present in ISO/IEC 27002, provides a means to measure the current situation of information security management through the use of a maturity model and provides a subsidy to take appropriate and feasible improvement actions, based on risks. A case study is performed and the results indicate that the method is efficient for evaluating the current state of information security, to support information security management, risks identification and business and internal control processes. Research limitations/implications It is possible that modifications to the process may be needed where there is less understanding of security requirements, such as in a less mature organization. Originality/value This paper presents a generic model applicable to all kinds of organizations. The main contribution of this paper is the use of a maturity scale allied to the cyclical process of evaluation, providing the generation of immediate indicators for the management of information security. !", "title": "" }, { "docid": "9a08871e40f477aac7b2e15fcf4ab266", "text": "Article history: Accepted 10 November 2015 Available online xxxx This paper investigates the role of heterogeneity in the insurance sector. 
Here, heterogeneity is represented by different types of insurance provided and regions served. Using a balanced panel data set on Brazilian insurance companies as a case study, results corroborate this underlying hypothesis of heterogeneity's impact on performance. The implications of this research for practitioners and academics are not only addressed in terms of market segmentation—which ones are the best performers—but also in terms of mergers and acquisitions—as long as insurance companies may increase their performance with the right balance of types of insurance offered and regions served. © 2015 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "1bdf1bfe81bf6f947df2254ae0d34227", "text": "We investigate the problem of incorporating higher-level symbolic score-like information into Automatic Music Transcription (AMT) systems to improve their performance. We use recurrent neural networks (RNNs) and their variants as music language models (MLMs) and present a generative architecture for combining these models with predictions from a frame level acoustic classifier. We also compare different neural network architectures for acoustic modeling. The proposed model computes a distribution over possible output sequences given the acoustic input signal and we present an algorithm for performing a global search for good candidate transcriptions. The performance of the proposed model is evaluated on piano music from the MAPS dataset and we observe that the proposed model consistently outperforms existing transcription methods.", "title": "" } ]
scidocsrr
85466a98cc53eb47040fad30d7570779
The multisensory perception of flavor
[ { "docid": "470265e6acd60a190401936fb7121c75", "text": "Synesthesia is a conscious experience of systematically induced sensory attributes that are not experienced by most people under comparable conditions. Recent findings from cognitive psychology, functional brain imaging and electrophysiology have shed considerable light on the nature of synesthesia and its neurocognitive underpinnings. These cognitive and physiological findings are discussed with respect to a neuroanatomical framework comprising hierarchically organized cortical sensory pathways. We advance a neurobiological theory of synesthesia that fits within this neuroanatomical framework.", "title": "" } ]
[ { "docid": "5fc02317117c3068d1409a42b025b018", "text": "Explaining the causes of infeasibility of Boolean formulas has practical applications in numerous fields, such as artificial intelligence (repairing inconsistent knowledge bases), formal verification (abstraction refinement and unbounded model checking), and electronic design (diagnosing and correcting infeasibility). Minimal unsatisfiable subformulas (MUSes) provide useful insights into the causes of infeasibility. An unsatisfiable formula often has many MUSes. Based on the application domain, however, MUSes with specific properties might be of interest. In this paper, we tackle the problem of finding a smallest-cardinality MUS (SMUS) of a given formula. An SMUS provides a succinct explanation of infeasibility and is valuable for applications that are heavily affected by the size of the explanation. We present (1) a baseline algorithm for finding an SMUS, founded on earlier work for finding all MUSes, and (2) a new branch-and-bound algorithm called Digger that computes a strong lower bound on the size of an SMUS and splits the problem into more tractable subformulas in a recursive search tree. Using two benchmark suites, we experimentally compare Digger to the baseline algorithm and to an existing incomplete genetic algorithm approach. Digger is shown to be faster in nearly all cases. It is also able to solve far more instances within a given runtime limit than either of the other approaches.", "title": "" }, { "docid": "74c48ec7adb966fc3024ed87f6102a1a", "text": "Quantitative accessibility metrics are widely used in accessibility evaluation, which synthesize a summative value to represent the accessibility level of a website. Many of these metrics are the results of a two-step process. The first step is the inspection with regard to potential barriers while different properties are reported, and the second step aggregates these fine-grained reports with varying weights for checkpoints. Existing studies indicate that finding appropriate weights for different checkpoint types is a challenging issue. Although some metrics derive the checkpoint weights from the WCAG priority levels, previous investigations reveal that the correlation between the WCAG priority levels and the user experience is not significant. Moreover, our website accessibility evaluation results also confirm the mismatches between the ranking of websites using existing metrics and the ranking based on user experience. To overcome this limitation, we propose a novel metric called the Web Accessibility Experience Metric (WAEM) that can better match the accessibility evaluation results with the user experience of people with disabilities by aligning the evaluation metric with the partial user experience order (PUEXO), i.e. pairwise comparisons between different websites. A machine learning model is developed to derive the optimal checkpoint weights from the PUEXO. Experiments on real-world web accessibility evaluation data sets validate the effectiveness of WAEM.", "title": "" }, { "docid": "f296b374b635de4f4c6fc9c6f415bf3e", "text": "People increasingly use the Internet for obtaining information regarding diseases, diagnoses and available treatments. Currently, many online health portals already provide non-personalized health information in the form of articles. However, it can be challenging to find information relevant to one's condition, interpret this in context, and understand the medical terms and relationships. 
Recommender Systems (RS) already help these systems perform precise information filtering. In this short paper, we look one step ahead and show the progress made towards RS helping users find personalized, complex medical interventions or support them with preventive healthcare measures. We identify key challenges that need to be addressed for RS to offer the kind of decision support needed in high-risk domains like healthcare.", "title": "" }, { "docid": "6e30761b695e22a29f98a051dbccac6f", "text": "This paper explores the use of clickthrough data for query spelling correction. First, large amounts of query-correction pairs are derived by analyzing users' query reformulation behavior encoded in the clickthrough data. Then, a phrase-based error model that accounts for the transformation probability between multi-term phrases is trained and integrated into a query speller system. Experiments are carried out on a human-labeled data set. Results show that the system using the phrase-based error model significantly outperforms its baseline systems.", "title": "" }, { "docid": "bb6d1f3618d8a3427f642c3db75ef1ed", "text": "In this letter, we propose a dual linearly polarized unit cell with 1-bit phase resolution for transmitarray application in X-band. It consists of two-layer metallic patterns connected by a metallized via-hole. One layer of the metallic pattern is a rectangular patch with two p-i-n diodes loaded in an O-slot along the electric field polarization direction, which is utilized as a receiver-antenna to achieve 1-bit phase tuning. The other metallic pattern is a dual linearly polarized transmitter-antenna that adopts a square ring patch with two p-i-n diodes distributed at the cross-polarization directions. The simulation results show that the designed antenna can achieve 1-bit phase tuning and linear polarization reconfiguration at 10.5 GHz with insertion loss of about 1.1 dB. The characteristic of the designed transmitarray element is then experimentally validated by an ad-hoc waveguide simulator. The measured results agree with the simulated ones.", "title": "" }, { "docid": "fe142a6a39b17aa0a901cebbd759c003", "text": "Distant supervision has been widely used in the task of relation extraction (RE). However, when we carefully examine the experimental settings of previous work, we find two issues: (i) The compared models were trained on different training datasets. (ii) The existing testing data contains noise and bias issues. These issues may affect the conclusions in previous work. In this paper, our primary aim is to re-examine the distant supervision-based approaches under the experimental settings without the above issues. We approach this by training models on the same dataset and creating a new testing dataset annotated by the workers on Amazon Mechanical Turk. We draw new conclusions based on the new testing dataset. The new testing data can be obtained from http://aka.ms/relationie.", "title": "" }, { "docid": "e6a5ce99e55594cd945a57f801bd2d35", "text": "Cloud Computing is a powerful, flexible, cost-efficient platform for providing consumer IT services over the Internet. However, Cloud Computing has various levels of risk because the most important information is outsourced to third-party vendors, which makes it harder to maintain the level of security for data. Steganography is the art of hiding information in an image. 
In this context, most of the techniques are based on the Least Significant Bit (LSB), but hackers can easily detect them because the data is embedded sequentially in all pixels. Instead of embedding data sequentially, some of the techniques choose pixels randomly. A better approach for this chooses edge pixels for embedding data. So we propose a novel technique to hide the data in the Fibonacci edge pixels of an image by extending previous edge-based algorithms. This algorithm hides the data in the Fibonacci edge pixels of an image and thus ensures better security against attackers.", "title": "" }, { "docid": "fe06ac2458e00c5447a255486189f1d1", "text": "The design and control of robots from the perspective of human safety is desired. We propose a mechanical compliance control system as a new pneumatic arm control system. However, safety against collisions with obstacles in an unpredictable environment is difficult to ensure in previous systems. The main feature of the proposed system is that the two desired pressure values are calculated by using two other desired values, the end compliance of the arm and the end position and posture of the arm.", "title": "" }, { "docid": "d2f36cc750703f5bbec2ea3ef4542902", "text": "Mixed reality (MR) is a kind of virtual reality (VR) but a broader concept than augmented reality (AR), which augments the real world with synthetic electronic data. On the opposite side, there is a term, augmented virtuality (AV), which enhances or augments the virtual environment (VE) with data from the real world. Mixed reality covers a continuum from AR to AV. This concept embraces the definition of MR stated by Paul Milgram. We participated in the Key Technology Research Project on Mixed Reality Systems (MR Project) in Japan. The Japanese government and Canon funded the Mixed Reality Systems Laboratory (MR Lab) and launched it in January 1997. We completed this national project in March 2001. At the end of the MR Project, an event called MiRai-01 (mirai means future in Japanese) was held at Yokohama, Japan, to demonstrate this emerging technology all over the world. This event was held in conjunction with two international conferences, IEEE Virtual Reality 2001 and the Second International Symposium on Mixed Reality (ISMR) and aggregated about 3,000 visitors for two days. This project aimed to produce an innovative information technology that could be used in the first decade of the 21st century while expanding the limitations of traditional VR technology. The basic policy we maintained throughout this project was to emphasize a pragmatic system development rather than a theory and to make such a system always available to people. Since MR is an advanced form of VR, the MR system inherits a VR characteristic—users can experience the world of MR interactively. According to this policy, we tried to make the system work in real time. Then, we enhanced each of our systems in their response speed and image quality in real time to increase user satisfaction. We describe the aim and research themes of the MR Project in Tamura et al. To develop MR systems along this policy, we studied the fundamental problems of AR and AV and developed several methods to solve them in addition to system development issues. For example, we created a new image-based rendering method for AV systems, hybrid registration methods, and new types of see-through head-mounted displays (ST-HMDs) for AR systems. 
Three universities in Japan—University of Tokyo (Michitaka Hirose), University of Tsukuba (Yuichi Ohta), and Hokkaido University (Tohru Ifukube)—collaborated with us to study the broad research area of MR. The side-bar, \" Four Types of MR Visual Simulation, …", "title": "" }, { "docid": "535b093171db9cfafba4fc91c4254137", "text": "Millimeter-wave communication is one way to alleviate the spectrum gridlock at lower frequencies while simultaneously providing high-bandwidth communication channels. MmWave makes use of MIMO through large antenna arrays at both the base station and the mobile station to provide sufficient received signal power. This article explains how beamforming and precoding are different in MIMO mmWave systems than in their lower-frequency counterparts, due to different hardware constraints and channel characteristics. Two potential architectures are reviewed: hybrid analog/digital precoding/combining and combining with low-resolution analog-to-digital converters. The potential gains and design challenges for these strategies are discussed, and future research directions are highlighted.", "title": "" }, { "docid": "7d09c7f94dda81e095b80736e229d00e", "text": "With the constant deepening of research on marine environment simulation and information expression, there are higher and higher requirements for the sense of reality of ocean data visualization results and the real-time interaction in the visualization process. This paper tackles the challenge of key technology of three-dimensional interaction and volume rendering technology based on GPU technology, develops large scale marine hydrological environmental data-oriented visualization software and realizes oceanographic planar graph, contour line rendering, isosurface rendering, factor field volume rendering and dynamic simulation of current field. To express the spatial characteristics and real-time update of massive marine hydrological environmental data better, this study establishes nodes in the scene for the management of geometric objects to realize high-performance dynamic rendering. The system employs CUDA (Compute Unified Device Architecture) parallel computing for the improvement of computation rate, uses NetCDF (Network Common Data Form) file format for data access and applies GPU programming technology to realize fast volume rendering of marine water environmental factors. The visualization software of marine hydrological environment developed can simulate and show properties and change process of marine water environmental factors efficiently and intuitively. © 2016 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "76262c43c175646d7a00e02a7a49ab81", "text": "Self-compassion has been linked to higher levels of psychological well-being. The current study evaluated whether this effect also extends to a more adaptive food intake process. More specifically, this study investigated the relationship between self-compassion and intuitive eating among 322 college women. In order to further clarify the nature of this relationship this research additionally examined the indirect effects of self-compassion on intuitive eating through the pathways of distress tolerance and body image acceptance and action using both parametric and non-parametric bootstrap resampling analytic procedures. 
Results based on responses to the self-report measures of the constructs of interest indicated that individual differences in body image acceptance and action (β = .31, p < .001) but not distress tolerance (β = .00, p = .94) helped explain the relationship between self-compassion and intuitive eating. This effect was retained in a subsequent model adjusted for body mass index (BMI) and self-esteem (β = .19, p < .05). Results provide preliminary support for a complementary perspective on the role of acceptance in the context of intuitive eating to that of existing theory and research. The present findings also suggest the need for additional research as it relates to the development and fostering of self-compassion as well as the potential clinical implications of using acceptance-based interventions for college-aged women currently engaging in or who are at risk for disordered eating patterns.", "title": "" }, { "docid": "7ba33d9f57a6fde047e246d154869ded", "text": "UNLABELLED\nOrthodontic camouflage in patients with slight or moderate skeletal Class III malocclusions, can be obtained through different treatment alternatives. The purpose of this paper is to present a treatment that has not been described in the literature and which consists of the extraction of lower second molars and distal movement of the posterior segments by means of mandibular cervical headgear (MCH) and fixed appliances as a camouflage alternative. The force applied by the MCH was 250 gr per side (14hr/day). The total treatment time was 1 1/2 years.\n\n\nCONCLUSION\nthe extraction of lower second molars along with the use of mandibular cervical headgear is a good treatment alternative for camouflage in moderate Class III patients in order to obtain good occlusal relationships without affecting facial esthetics or producing marked dental compensations.", "title": "" }, { "docid": "b773df87bf97191a8dd33bd81a7ee2e5", "text": "We consider the problem of recommending comment-worthy articles such as news and blog-posts. An article is defined to be comment-worthy for a particular user if that user is interested to leave a comment on it. We note that recommending comment-worthy articles calls for elicitation of commenting-interests of the user from the content of both the articles and the past comments made by users. We thus propose to develop content-driven user profiles to elicit these latent interests of users in commenting and use them to recommend articles for future commenting. The difficulty of modeling comment content and the varied nature of users' commenting interests make the problem technically challenging. The problem of recommending comment-worthy articles is resolved by leveraging article and comment content through topic modeling and the co-commenting pattern of users through collaborative filtering, combined within a novel hierarchical Bayesian modeling approach. Our solution, Collaborative Correspondence Topic Models (CCTM), generates user profiles which are leveraged to provide a personalized ranking of comment-worthy articles for each user. Through these content-driven user profiles, CCTM effectively handle the ubiquitous problem of cold-start without relying on additional meta-data. The inference problem for the model is intractable with no off-the-shelf solution and we develop an efficient Monte Carlo EM algorithm. 
CCTM is evaluated on three real world data-sets, crawled from two blogs, ArsTechnica (AT) Gadgets (102,087 comments) and AT-Science (71,640 comments), and a news site, DailyMail (33,500 comments). We show average improvement of 14% (warm-start) and 18% (cold-start) in AUC, and 80% (warm-start) and 250% (cold-start) in Hit-Rank@5, over state of the art.", "title": "" }, { "docid": "98cef46a572d3886c8a11fa55f5ff83c", "text": "Deep convolutional neural networks (CNNs) have proven highly effective for visual recognition, where learning a universal representation from activations of convolutional layer plays a fundamental problem. In this paper, we present Fisher Vector encoding with Variational Auto-Encoder (FV-VAE), a novel deep architecture that quantizes the local activations of convolutional layer in a deep generative model, by training them in an end-to-end manner. To incorporate FV encoding strategy into deep generative models, we introduce Variational Auto-Encoder model, which steers a variational inference and learning in a neural network which can be straightforwardly optimized using standard stochastic gradient method. Different from the FV characterized by conventional generative models (e.g., Gaussian Mixture Model) which parsimoniously fit a discrete mixture model to data distribution, the proposed FV-VAE is more flexible to represent the natural property of data for better generalization. Extensive experiments are conducted on three public datasets, i.e., UCF101, ActivityNet, and CUB-200-2011 in the context of video action recognition and fine-grained image classification, respectively. Superior results are reported when compared to state-of-the-art representations. Most remarkably, our proposed FV-VAE achieves to-date the best published accuracy of 94.2% on UCF101.", "title": "" }, { "docid": "938f8383d25d30b39b6cd9c78d1b3ab5", "text": "In the last two decades, the Lattice Boltzmann method (LBM) has emerged as a promising tool for modelling the Navier-Stokes equations and simulating complex fluid flows. LBM is based on microscopic models and mesoscopic kinetic equations. In some perspective, it can be viewed as a finite difference method for solving the Boltzmann transport equation. Moreover the Navier-Stokes equations can be recovered by LBM with a proper choice of the collision operator. In Section 2 and 3, we first introduce this method and describe some commonly used boundary conditions. In Section 4, the validity of this method is confirmed by comparing the numerical solution to the exact solution of the steady plane Poiseuille flow and convergence of solution is established. Some interesting numerical simulations, including the lid-driven cavity flow, flow past a circular cylinder and the Rayleigh-Bénard convection for a range of Reynolds numbers, are carried out in Section 5, 6 and 7. In Section 8, we briefly highlight the procedure of recovering the Navier-Stokes equations from LBM. A summary is provided in Section 9.", "title": "" }, { "docid": "2a58426989cbfab0be9e18b7ee272b0a", "text": "Potholes are a nuisance, especially in the developing world, and can often result in vehicle damage or physical harm to the vehicle occupants. Drivers can be warned to take evasive action if potholes are detected in real-time. Moreover, their location can be logged and shared to aid other drivers and road maintenance agencies. This paper proposes a vehicle-based computer vision approach to identify potholes using a window-mounted camera. 
Existing literature on pothole detection uses either theoretically constructed pothole models or footage taken from advantageous vantage points at low speed, rather than footage taken from within a vehicle at speed. A distinguishing feature of the work presented in this paper is that a thorough exercise was performed to create an image library of actual and representative potholes under different conditions, and results are obtained using a part of this library. A model of potholes is constructed using the image library, which is used in an algorithmic approach that combines a road colour model with simple image processing techniques such as a Canny filter and contour detection. Using this approach, it was possible to detect potholes with a precision of 81.8% and recall of 74.4.%.", "title": "" }, { "docid": "8fc05d9e26c0aa98ffafe896d8c5a01b", "text": "We describe our clinical question answering system implemented for the Text Retrieval Conference (TREC 2016) Clinical Decision Support (CDS) track. We submitted five runs using a combination of knowledge-driven (based on a curated knowledge graph) and deep learning-based (using key-value memory networks) approaches to retrieve relevant biomedical articles for answering generic clinical questions (diagnoses, treatment, and test) for each clinical scenario provided in three forms: notes, descriptions, and summaries. The submitted runs were varied based on the use of notes, descriptions, or summaries in association with different diagnostic inferencing methodologies applied prior to biomedical article retrieval. Evaluation results demonstrate that our systems achieved best or close to best scores for 20% of the topics and better than median scores for 40% of the topics across all participants considering all evaluation measures. Further analysis shows that on average our clinical question answering system performed best with summaries using diagnostic inferencing from the knowledge graph whereas our key-value memory network model with notes consistently outperformed the knowledge graph-based system for notes and descriptions. ∗The author is also affiliated with Worcester Polytechnic Institute (szhao@wpi.edu). †The author is also affiliated with Northwestern University (kathy.lee@eecs.northwestern.edu). ‡The author is also affiliated with Brandeis University (aprakash@brandeis.edu).", "title": "" }, { "docid": "5a716affe340e69dffef3cc1532f7c33", "text": "The automated separation of plastic waste fractions intended for mechanical recycling is associated with substantial investments. It is therefore essential to evaluate to what degree separation really brings value to waste plastics as raw materials for new products. The possibility of reducing separation requirements and broadening the range of possible applications for recycled materials through the addition of elastomers, mineral fillers or other additives, should also taken into consideration. Material from a Swedish collection system for rigid (non-film) plastic packaging waste was studied. The non-film polyolefin fraction, which dominated the collected material, consisted of 55% polyethylene (PE) and 45% polypropylene (PP). Mechanical tests for injection-moulded blends of varying composition showed that complete separation of PE and PP is favourable for yield strength, impact strength, tensile energy to break and tensile modulus. Yield strength exhibited a minimum at 80% PE whereas fracture toughness was lowest for blends with 80% PP. 
The PE fraction, which was dominated by blow-moulded high density polyethylene (HDPE) containers, could be made more suitable for injection-moulding by commingling with the PP fraction. Nucleating agents present in the recycled material were found to influence the microstructure by causing PP to crystallise at a higher temperature than PE in PP-rich blends but not in PE-rich blends. Studies of sheet-extruded multi-component polyolefin mixtures, containing some film plastics, showed that fracture toughness was severely disfavoured if the PE-film component was dominated by low density polyethylene (LDPE) rather than linear low density polyethylene (LLDPE). This trend was reduced when the non-film component was dominated by bottle -grade HDPE. A modifier can be added if it is desired to increase fracture toughness or if there are substantial variations in the composition of the waste-stream. A very low density polyethylene (VLDPE) was found to be a more effective modifier than poly(ethylene-co-vinyl acetate) and poly(1-butene). The addition of 20% VLDPE to multi-component polyolefin mixtures increased the tensile strength and tear propagation resistance by 30% on average, while standard deviations for mechanical properties were reduced by 50%, which would allow product quality to be kept more consistent. ABS was found to be more sensitive to contamination by small amounts of talc-filled PP than viceversa. Contamination levels over 3% of talc -filled PP in ABS gave a very brittle material whereas talcfilled PP retained a ductile behaviour in blends with up to 9% ABS. Compatibility in blends of ABS, high-impact polystyrene and talc -filled PP was poorer at high deformation rates, as opposed to blends of PE and PP from rigid packaging waste where incompatibility was lower at fast deformation. This difference was explained by a higher degree of interfacial interaction through chain entanglements in PE/PP blends.", "title": "" }, { "docid": "8cd62b12b4406db29b289a3e1bd5d05a", "text": "Humor generation is a very hard problem in the area of computational humor. In this paper, we present a joke generation model based on neural networks. The model can generate a short joke relevant to the topic that the user specifies. Inspired by the architecture of neural machine translation and neural image captioning, we use an encoder for representing user-provided topic information and an RNN decoder for joke generation. We trained the model by short jokes of Conan O’Brien with the help of POS Tagger. We evaluate the performance of our model by human ratings from five English speakers. In terms of the average score, our model outperforms a probabilistic model that puts words into slots in a fixed-structure sentence.", "title": "" } ]
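The joke-generation abstract above describes an encoder that represents the user-specified topic and an RNN decoder that generates a short joke relevant to it. Below is a minimal sketch of that general recipe, assuming PyTorch; the class name, layer sizes, toy vocabulary, and random inputs are illustrative assumptions, not details taken from that paper.

```python
# Topic-conditioned encoder-decoder sketch (illustrative only).
import torch
import torch.nn as nn

class TopicConditionedGenerator(nn.Module):
    def __init__(self, vocab_size: int, emb_dim: int = 64, hidden_dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Encoder summarises the user-provided topic words into a single state.
        self.encoder = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        # Decoder generates the joke token by token, seeded with the topic state.
        self.decoder = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, topic_ids: torch.Tensor, joke_ids: torch.Tensor) -> torch.Tensor:
        _, topic_state = self.encoder(self.embed(topic_ids))        # (1, B, H)
        dec_out, _ = self.decoder(self.embed(joke_ids), topic_state)
        return self.out(dec_out)                                     # (B, T, vocab) logits

# Toy usage with random token ids; a real setup would train with token-level
# cross-entropy on (topic, joke) pairs and sample from the decoder at test time.
model = TopicConditionedGenerator(vocab_size=1000)
topic = torch.randint(0, 1000, (2, 3))    # batch of 2 topics, 3 tokens each
joke = torch.randint(0, 1000, (2, 10))    # teacher-forced joke prefixes
print(model(topic, joke).shape)           # torch.Size([2, 10, 1000])
```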
scidocsrr
ff87137881321554168d6922bafec025
Benchmarking Database Systems A Systematic Approach
[ { "docid": "978b1e9b3a5c4c92f265795a944e575d", "text": "The currently operational (March 1976) version of the INGRES database management system is described. This multiuser system gives a relational view of data, supports two high level nonprocedural data sublanguages, and runs as a collection of user processes on top of the UNIX operating system for Digital Equipment Corporation PDP 11/40, 11/45, and 11/70 computers. Emphasis is on the design decisions and tradeoffs related to (1) structuring the system into processes, (2) embedding one command language in a general purpose programming language, (3) the algorithms implemented to process interactions, (4) the access methods implemented, (5) the concurrency and recovery control currently provided, and (6) the data structures used for system catalogs and the role of the database administrator.\nAlso discussed are (1) support for integrity constraints (which is only partly operational), (2) the not yet supported features concerning views and protection, and (3) future plans concerning the system.", "title": "" } ]
[ { "docid": "e0b85ff6cd78f1640f25215ede3a39e6", "text": "Grammatical error diagnosis is an important task in natural language processing. This paper introduces our Chinese Grammatical Error Diagnosis (CGED) system in the NLP-TEA-3 shared task for CGED. The CGED system can diagnose four types of grammatical errors which are redundant words (R), missing words (M), bad word selection (S) and disordered words (W). We treat the CGED task as a sequence labeling task and describe three models, including a CRFbased model, an LSTM-based model and an ensemble model using stacking. We also show in details how we build and train the models. Evaluation includes three levels, which are detection level, identification level and position level. On the CGED-HSK dataset of NLP-TEA-3 shared task, our system presents the best F1-scores in all the three levels and also the best recall in the last two levels.", "title": "" }, { "docid": "6af138889b6eaeaa6ea8ee4edd7f8aaf", "text": "University of Leipzig, Natural Language Processing Department, Johannisgasse 26, 04081 Leipzig, Germany robert.remus@googlemail.com, {quasthoff, heyer}@informatik.uni-leipzig.de Abstract SentimentWortschatz, or SentiWS for short, is a publicly available German-language resource for sentiment analysis, opinion mining etc. It lists positive and negative sentiment bearing words weighted within the interval of [−1; 1] plus their part of speech tag, and if applicable, their inflections. The current version of SentiWS (v1.8b) contains 1,650 negative and 1,818 positive words, which sum up to 16,406 positive and 16,328 negative word forms, respectively. It not only contains adjectives and adverbs explicitly expressing a sentiment, but also nouns and verbs implicitly containing one. The present work describes the resource’s structure, the three sources utilised to assemble it and the semi-supervised method incorporated to weight the strength of its entries. Furthermore the resource’s contents are extensively evaluated using a German-language evaluation set we constructed. The evaluation set is verified being reliable and its shown that SentiWS provides a beneficial lexical resource for German-language sentiment analysis related tasks to build on.", "title": "" }, { "docid": "77c18ca76341a691b7c0093a88583c82", "text": "Biometric characteristics can be utilized in order to enable reliable and robust-to-impostor-attacks person recognition. Speaker recognition technology is commonly utilized in various systems enabling natural human computer interaction. The majority of the speaker recognition systems rely only on acoustic information, ignoring the visual modality. However, visual information conveys correlated and complimentary information to the audio information and its integration into a recognition system can potentially increase the system's performance, especially in the presence of adverse acoustic conditions. Acoustic and visual biometric signals, such as the person's voice and face, can be obtained using unobtrusive and user-friendly procedures and low-cost sensors. Developing unobtrusive biometric systems makes biometric technology more socially acceptable and accelerates its integration into every day life. 
In this paper, we describe the main components of audio-visual biometric systems, review existing systems and their performance, and discuss future research and development directions in this area", "title": "" }, { "docid": "a78913db9636369b2d7d8cb5e5a6a351", "text": "We propose a simple but strong baseline for time series classification from scratch with deep neural networks. Our proposed baseline models are pure end-to-end without any heavy preprocessing on the raw data or feature crafting. The proposed Fully Convolutional Network (FCN) achieves premium performance to other state-of-the-art approaches and our exploration of the very deep neural networks with the ResNet structure is also competitive. The global average pooling in our convolutional model enables the exploitation of the Class Activation Map (CAM) to find out the contributing region in the raw data for the specific labels. Our models provides a simple choice for the real world application and a good starting point for the future research. An overall analysis is provided to discuss the generalization capability of our models, learned features, network structures and the classification semantics.", "title": "" }, { "docid": "f264d5b90dfb774e9ec2ad055c4ebe62", "text": "Automatic citation recommendation can be very useful for authoring a paper and is an AI-complete problem due to the challenge of bridging the semantic gap between citation context and the cited paper. It is not always easy for knowledgeable researchers to give an accurate citation context for a cited paper or to find the right paper to cite given context. To help with this problem, we propose a novel neural probabilistic model that jointly learns the semantic representations of citation contexts and cited papers. The probability of citing a paper given a citation context is estimated by training a multi-layer neural network. We implement and evaluate our model on the entire CiteSeer dataset, which at the time of this work consists of 10,760,318 citation contexts from 1,017,457 papers. We show that the proposed model significantly outperforms other stateof-the-art models in recall, MAP, MRR, and nDCG.", "title": "" }, { "docid": "f35d164bd1b19f984b10468c41f149e3", "text": "Recent technological advancements have led to a deluge of data from distinctive domains (e.g., health care and scientific sensors, user-generated data, Internet and financial companies, and supply chain systems) over the past two decades. The term big data was coined to capture the meaning of this emerging trend. In addition to its sheer volume, big data also exhibits other unique characteristics as compared with traditional data. For instance, big data is commonly unstructured and require more real-time analysis. This development calls for new system architectures for data acquisition, transmission, storage, and large-scale data processing mechanisms. In this paper, we present a literature survey and system tutorial for big data analytics platforms, aiming to provide an overall picture for nonexpert readers and instill a do-it-yourself spirit for advanced audiences to customize their own big-data solutions. First, we present the definition of big data and discuss big data challenges. Next, we present a systematic framework to decompose big data systems into four sequential modules, namely data generation, data acquisition, data storage, and data analytics. These four modules form a big data value chain. 
Following that, we present a detailed survey of numerous approaches and mechanisms from research and industry communities. In addition, we present the prevalent Hadoop framework for addressing big data challenges. Finally, we outline several evaluation benchmarks and potential research directions for big data systems.", "title": "" }, { "docid": "2baf55123171c6e2110b19b1583c3d17", "text": "A novel three-way power divider using tapered lines is presented. It has several strip resistors which are formed like a ladder between the tapered-line conductors to achieve a good output isolation. The equivalent circuits are derived with the EE/OE/OO-mode analysis based on the fundamental propagation modes in three-conductor coupled lines. The fabricated three-way power divider shows a broadband performance in input return loss which is greater than 20 dB over a 3:1 bandwidth in the C-Ku bands.", "title": "" }, { "docid": "86497dcdfd05162804091a3368176ad5", "text": "This paper reviews the current status and implementation of battery chargers, charging power levels and infrastructure for plug-in electric vehicles and hybrids. Battery performance depends both on types and design of the batteries, and on charger characteristics and charging infrastructure. Charger systems are categorized into off-board and on-board types with unidirectional or bidirectional power flow. Unidirectional charging limits hardware requirements and simplifies interconnection issues. Bidirectional charging supports battery energy injection back to the grid. Typical onboard chargers restrict the power because of weight, space and cost constraints. They can be integrated with the electric drive for avoiding these problems. The availability of a charging infrastructure reduces on-board energy storage requirements and costs. On-board charger systems can be conductive or inductive. While conductive chargers use direct contact, inductive chargers transfer power magnetically. An off-board charger can be designed for high charging rates and is less constrained by size and weight. Level 1 (convenience), Level 2 (primary), and Level 3 (fast) power levels are discussed. These system configurations vary from country to country depending on the source and plug capacity standards. Various power level chargers and infrastructure configurations are presented, compared, and evaluated based on amount of power, charging time and location, cost, equipment, effect on the grid, and other factors.", "title": "" }, { "docid": "e43242ed17a0b2fa9fca421179135ce1", "text": "Direct digital synthesis (DDS) is a useful tool for generating periodic waveforms. In this two-part article, the basic idea of this synthesis technique is presented and then focused on the quality of the sinewave a DDS can create, introducing the SFDR quality parameter. Next effective methods to increase the SFDR are presented through sinewave approximations, hardware schemes such as dithering and noise shaping, and an extensive list of reference. When the desired output is a digital signal, the signal's characteristics can be accurately predicted using the formulas given in this article. When the desired output is an analog signal, the reader should keep in mind that the performance of the DDS is eventually limited by the performance of the digital-to-analog converter and the follow-on analog filter. Hoping that this article would incite engineers to use DDS either in integrated circuits DDS or software-implemented DDS. 
From the author's experience, this technique has proven valuable when frequency resolution is the challenge, particularly when using low-cost microcontrollers.", "title": "" }, { "docid": "b8d8785968023a38d742abc15c01ee28", "text": "Cryptocurrencies (or digital tokens, digital currencies, e.g., BTC, ETH, XRP, NEO) have been rapidly gaining ground in use, value, and understanding among the public, bringing astonishing profits to investors. Unlike other money and banking systems, most digital tokens do not require central authorities. Being decentralized poses significant challenges for credit rating. Most ICOs are currently not subject to government regulations, which makes a reliable credit rating system for ICO projects necessary and urgent. In this paper, we introduce ICORATING, the first learning–based cryptocurrency rating system. We exploit natural-language processing techniques to analyze various aspects of 2,251 digital currencies to date, such as white paper content, founding teams, Github repositories, websites, etc. Supervised learning models are used to correlate the life span and the price change of cryptocurrencies with these features. For the best setting, the proposed system is able to identify scam ICO projects with 0.83 precision. We hope this work will help investors identify scam ICOs and attract more efforts in automatically evaluating and analyzing ICO projects. 1 2 Author contributions: J. Li designed research; Z. Sun, Z. Deng, F. Li and P. Shi prepared the data; S. Bian and A. Yuan contributed analytic tools; P. Shi and Z. Deng labeled the dataset; J. Li, W. Monroe and W. Wang designed the experiments; J. Li, W. Wu, Z. Deng and T. Zhang performed the experiments; J. Li and T. Zhang wrote the paper; W. Monroe and A. Yuan proofread the paper. Author Contacts: Figure 1: Market capitalization v.s. time. Figure 2: The number of new ICO projects v.s. time.", "title": "" }, { "docid": "3a95be7cbc37f20a6c41b84f78013263", "text": "We demonstrate a simple strategy to cope with missing data in sequential inputs, addressing the task of multilabel classification of diagnoses given clinical time series. Collected from the pediatric intensive care unit (PICU) at Children’s Hospital Los Angeles, our data consists of multivariate time series of observations. The measurements are irregularly spaced, leading to missingness patterns in temporally discretized sequences. While these artifacts are typically handled by imputation, we achieve superior predictive performance by treating the artifacts as features. Unlike linear models, recurrent neural networks can realize this improvement using only simple binary indicators of missingness. For linear models, we show an alternative strategy to capture this signal. Training models on missingness patterns only, we show that for some diseases, what tests are run can as predictive as the results themselves.", "title": "" }, { "docid": "27f773226c458febb313fd48b59c7222", "text": "This thesis presents extensions to the local binary pattern (LBP) texture analysis operator. The operator is defined as a gray-scale invariant texture measure, derived from a general definition of texture in a local neighborhood. It is made invariant against the rotation of the image domain, and supplemented with a rotation invariant measure of local contrast. The LBP is proposed as a unifying texture model that describes the formation of a texture with micro-textons and their statistical placement rules. 
The basic LBP is extended to facilitate the analysis of textures with multiple scales by combining neighborhoods with different sizes. The possible instability in sparse sampling is addressed with Gaussian low-pass filtering, which seems to be somewhat helpful. Cellular automata are used as texture features, presumably for the first time ever. With a straightforward inversion algorithm, arbitrarily large binary neighborhoods are encoded with an eight-bit cellular automaton rule, resulting in a very compact multi-scale texture descriptor. The performance of the new operator is shown in an experiment involving textures with multiple spatial scales. An opponent-color version of the LBP is introduced and applied to color textures. Good results are obtained in static illumination conditions. An empirical study with different color and texture measures however shows that color and texture should be treated separately. A number of different applications of the LBP operator are presented, emphasizing real-time issues. A very fast software implementation of the operator is introduced, and different ways of speeding up classification are evaluated. The operator is successfully applied to industrial visual inspection applications and to image retrieval.", "title": "" }, { "docid": "aa13ec272d10ba36ef0d7e530e5dbb39", "text": "Markov chain Monte Carlo (MCMC) methods are often deemed far too computationally intensive to be of any practical use for large datasets. This paper describes a methodology that aims to scale up the Metropolis-Hastings (MH) algorithm in this context. We propose an approximate implementation of the accept/reject step of MH that only requires evaluating the likelihood of a random subset of the data, yet is guaranteed to coincide with the accept/reject step based on the full dataset with a probability superior to a user-specified tolerance level. This adaptive subsampling technique is an alternative to the recent approach developed in (Korattikara et al., 2014), and it allows us to establish rigorously that the resulting approximate MH algorithm samples from a perturbed version of the target distribution of interest, whose total variation distance to this very target is controlled explicitly. We explore the benefits and limitations of this scheme on several examples.", "title": "" }, { "docid": "d2086d9c52ca9d4779a2e5070f9f3009", "text": "Though action recognition based on complete videos has achieved great success recently, action prediction remains a challenging task as the information provided by partial videos is not discriminative enough for classifying actions. In this paper, we propose a Deep Residual Feature Learning (DeepRFL) framework to explore more discriminative information from partial videos, achieving similar representations as those of complete videos. The proposed method is based on residual learning, which captures the salient differences between partial videos and their corresponding full videos. The partial videos can attain the missing information by learning from features of complete videos and thus improve the discriminative power. Moreover, our model can be trained efficiently in an end-to-end fashion. Extensive evaluations on the challenging UCF101 and HMDB51 datasets demonstrate that the proposed method outperforms state-of-the-art results.", "title": "" }, { "docid": "512bd1e06d0ce9c920382e1f0843ea33", "text": "— Diagnosis of the Parkinson disease through machine learning approache provides better understanding from PD dataset in the present decade. 
Orange v2.0b and Weka v3.4.10 have been used in the present experimentation for the statistical analysis, classification, evaluation and unsupervised learning methods. The voice dataset for Parkinson disease has been retrieved from the UCI Machine Learning Repository at the Center for Machine Learning and Intelligent Systems. The dataset contains name and attributes. The parallel coordinates show higher variation in the Parkinson disease dataset. SVM has shown good accuracy (88.9%) compared to Majority and k-NN algorithms. Among classification algorithms, Random Forest has shown good accuracy (90.26) and Naïve Bayes has shown the least accuracy (69.23). A higher number of clusters in the healthy dataset in Fo and a lower number in the diseased data have been predicted by hierarchical clustering and SOM.", "title": "" }, { "docid": "e7a6bb8f63e35f3fb0c60bdc26817e03", "text": "A simple mechanism is presented, based on ant-like agents, for routing and load balancing in telecommunications networks, following the initial works of Appleby and Stewart (1994) and Schoonderwoerd et al. (1997). In the present work, agents are very similar to those proposed by Schoonderwoerd et al. (1997), but are supplemented with a simplified dynamic programming capability, initially experimented by Guérin (1997) with more complex agents, which is shown to significantly improve the network's relaxation and its response to perturbations. Topic area: Intelligent agents and network management", "title": "" }, { "docid": "491f49dd73578b751f8f3e9afe64341e", "text": "Multitask learning often improves system performance for morphosyntactic and semantic tagging tasks. However, the question of when and why this is the case has yet to be answered satisfactorily. Although previous work has hypothesised that this is linked to the label distributions of the auxiliary task, we argue that this is not sufficient. We show that information-theoretic measures which consider the joint label distributions of the main and auxiliary tasks offer far more explanatory value. Our findings are empirically supported by experiments for morphosyntactic tasks on 39 languages, and are in line with findings in the literature for several semantic tasks.", "title": "" }, { "docid": "1b7f31c73dd99b6957d8b5c85240b060", "text": "We propose a novel approach to address the Simultaneous Detection and Segmentation problem introduced in [8]. Using the hierarchical structures first presented in [1] we use an efficient and accurate procedure that exploits the hierarchy feature information using Locality Sensitive Hashing. We build on recent work that utilizes convolutional neural networks to detect bounding boxes in an image (Faster R-CNN [11]) and then use the top similar hierarchical region that best fits each bounding box after hashing, we call this approach HashBox. We then refine our final segmentation results by automatic hierarchy pruning. HashBox introduces a train-free alternative to Hypercolumns [7]. We conduct extensive experiments on Pascal VOC 2012 segmentation dataset, showing that HashBox gives competitive state-of-the-art object segmentations.", "title": "" }, { "docid": "b31676e958e8345132780499e5dd968d", "text": "In the wake of corporate bankruptcies, an increasing number of prediction models have emerged since the 1960s. This study provides a critical analysis of methodologies and empirical findings of applications of these models across 10 different countries.
The study’s empirical exercise finds that predictive accuracies of different corporate bankruptcy prediction models are, generally, comparable. Artificially Intelligent Expert System (AIES) models perform marginally better than statistical and theoretical models. Overall, use of Multiple Discriminant Analysis (MDA) dominates the research followed by logit models. Study deduces useful observations and recommendations for future research in this field. JEL classification: G33; C49; C88", "title": "" }, { "docid": "d882657765647d9e84b8ad729a079833", "text": "Multiple treebanks annotated under heterogeneous standards give rise to the research question of best utilizing multiple resources for improving statistical models. Prior research has focused on discrete models, leveraging stacking and multi-view learning to address the problem. In this paper, we empirically investigate heterogeneous annotations using neural network models, building a neural network counterpart to discrete stacking and multiview learning, respectively, finding that neural models have their unique advantages thanks to the freedom from manual feature engineering. Neural model achieves not only better accuracy improvements, but also an order of magnitude faster speed compared to its discrete baseline, adding little time cost compared to a neural model trained on a single treebank.", "title": "" } ]
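The bankruptcy-prediction survey above finds that multiple discriminant analysis (MDA) dominates the literature, followed by logit models, with broadly comparable accuracy. The snippet below is a hedged, minimal comparison of those two model families on synthetic financial-ratio data; the feature choices, data generation, and train/test split are assumptions made purely for illustration and say nothing about the surveyed studies.

```python
# Comparing discriminant analysis and logit on synthetic ratios (illustrative only).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 1000
# Invented ratios, e.g. working capital / assets, retained earnings / assets, leverage.
healthy = rng.normal(loc=[0.2, 0.3, 0.4], scale=0.1, size=(n, 3))
failed = rng.normal(loc=[0.0, 0.1, 0.7], scale=0.1, size=(n, 3))
X = np.vstack([healthy, failed])
y = np.concatenate([np.zeros(n), np.ones(n)])   # 1 = bankrupt

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for name, model in [("MDA-style LDA", LinearDiscriminantAnalysis()),
                    ("Logit", LogisticRegression(max_iter=1000))]:
    model.fit(X_tr, y_tr)
    print(name, "accuracy:", round(accuracy_score(y_te, model.predict(X_te)), 3))
```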
scidocsrr
77710eebf12562ab763ec52a8fbca309
Harassment Detection on Twitter using Conversations
[ { "docid": "84d39e615b8b674cee53741f87a733da", "text": "Cyber Bullying, which often has a deeply negative impact on the victim, has grown as a serious issue among adolescents. To understand the phenomenon of cyber bullying, experts in social science have focused on personality, social relationships and psychological factors involving both the bully and the victim. Recently computer science researchers have also come up with automated methods to identify cyber bullying messages by identifying bullying-related keywords in cyber conversations. However, the accuracy of these textual feature based methods remains limited. In this work, we investigate whether analyzing social network features can improve the accuracy of cyber bullying detection. By analyzing the social network structure between users and deriving features such as number of friends, network embeddedness, and relationship centrality, we find that the detection of cyber bullying can be significantly improved by integrating the textual features with social network features.", "title": "" } ]
[ { "docid": "6b410b123925efb0dae519ab8455cc75", "text": "Attributes, or semantic features, have gained popularity in the past few years in domains ranging from activity recognition in video to face verification. Improving the accuracy of attribute classifiers is an important first step in any application which uses these attributes. In most works to date, attributes have been considered to be independent. However, we know this not to be the case. Many attributes are very strongly related, such as heavy makeup and wearing lipstick. We propose to take advantage of attribute relationships in three ways: by using a multi-task deep convolutional neural network (MCNN) sharing the lowest layers amongst all attributes, sharing the higher layers for related attributes, and by building an auxiliary network on top of the MCNN which utilizes the scores from all attributes to improve the final classification of each attribute. We demonstrate the effectiveness of our method by producing results on two challenging publicly available datasets.", "title": "" }, { "docid": "b816908582329f7959bd6918d9077074", "text": "Deep neural networks (DNNs) have gained remarkable success in speech recognition, partially attributed to the flexibility of DNN models in learning complex patterns of speech signals. This flexibility, however, may lead to serious over-fitting and hence miserable performance degradation in adverse acoustic conditions such as those with high ambient noises. We propose a noisy training approach to tackle this problem: by injecting moderate noises into the training data intentionally and randomly, more generalizable DNN models can be learned. This ‘noise injection’ technique, although known to the neural computation community already, has not been studied with DNNs which involve a highly complex objective function. The experiments presented in this paper confirm that the noisy training approach works well for the DNN model and can provide substantial performance improvement for DNN-based speech recognition.", "title": "" }, { "docid": "c2c056ae22c22e2a87b9eca39d125cc2", "text": "The web provides an unprecedented opportunity to evaluate ideas quickly using controlled experiments, also called randomized experiments, A/B tests (and their generalizations), split tests, Control/Treatment tests, MultiVariable Tests (MVT) and parallel flights. Controlled experiments embody the best scientific design for establishing a causal relationship between changes and their influence on user-observable behavior. We provide a practical guide to conducting online experiments, where end-users can help guide the development of features. Our experience indicates that significant learning and return-on-investment (ROI) are seen when development teams listen to their customers, not to the Highest Paid Person’s Opinion (HiPPO). We provide several examples of controlled experiments with surprising results. We review the important ingredients of running controlled experiments, and discuss their limitations (both technical and organizational). We focus on several areas that are critical to experimentation, including statistical power, sample size, and techniques for variance reduction. We describe common architectures for experimentation systems and analyze their advantages and disadvantages. We evaluate randomization and hashing techniques, which we show are not as simple in practice as is often assumed. 
Controlled experiments typically generate large amounts of data, which can be analyzed using data mining techniques to gain deeper understanding of the factors influencing the outcome of interest, leading to new hypotheses and creating a virtuous cycle of improvements. Organizations that embrace controlled experiments with clear evaluation criteria can evolve their systems with automated optimizations and real-time analyses. Based on our extensive practical experience with multiple systems and organizations, we share key lessons that will help practitioners in running trustworthy controlled experiments.", "title": "" }, { "docid": "38d9a18ba942e401c3d0638f88bc948c", "text": "The question whether preemptive algorithms are better than nonpreemptive ones for scheduling a set of real-time tasks has been debated for a long time in the research community. In fact, especially under fixed priority systems, each approach has advantages and disadvantages, and no one dominates the other when both predictability and efficiency have to be taken into account in the system design. Recently, limited preemption models have been proposed as a viable alternative between the two extreme cases of fully preemptive and nonpreemptive scheduling. This paper presents a survey of the existing approaches for reducing preemptions and compares them under different metrics, providing both qualitative and quantitative performance evaluations.", "title": "" }, { "docid": "9ad145cd939284ed77919b73452236c0", "text": "While WiFi-based indoor localization is attractive, the need for a significant degree of pre-deployment effort is a key challenge. In this paper, we ask the question: can we perform indoor localization with no pre-deployment effort? Our setting is an indoor space, such as an office building or a mall, with WiFi coverage but where we do not assume knowledge of the physical layout, including the placement of the APs. Users carrying WiFi-enabled devices such as smartphones traverse this space in normal course. The mobile devices record Received Signal Strength (RSS) measurements corresponding to APs in their view at various (unknown) locations and report these to a localization server. Occasionally, a mobile device will also obtain and report a location fix, say by obtaining a GPS lock at the entrance or near a window. The centerpiece of our work is the EZ Localization algorithm, which runs on the localization server. The key intuition is that all of the observations reported to the server, even the many from unknown locations, are constrained by the physics of wireless propagation. EZ models these constraints and then uses a genetic algorithm to solve them. The results from our deployment in two different buildings are promising. Despite the absence of any explicit pre-deployment calibration, EZ yields a median localization error of 2m and 7m, respectively, in a small building and a large building, which is only somewhat worse than the 0.7m and 4m yielded by the best-performing but calibration-intensive Horus scheme [29] from prior work.", "title": "" }, { "docid": "4c1da8d356e4f793d76f79d4270ecbd0", "text": "As the proportion of the ageing population in industrialized countries continues to increase, the dermatological concerns of the aged grow in medical importance. Intrinsic structural changes occur as a natural consequence of ageing and are genetically determined. 
The rate of ageing is significantly different among different populations, as well as among different anatomical sites even within a single individual. The intrinsic rate of skin ageing in any individual can also be dramatically influenced by personal and environmental factors, particularly the amount of exposure to ultraviolet light. Photodamage, which considerably accelerates the visible ageing of skin, also greatly increases the risk of cutaneous neoplasms. As the population ages, dermatological focus must shift from ameliorating the cosmetic consequences of skin ageing to decreasing the genuine morbidity associated with problems of the ageing skin. A better understanding of both the intrinsic and extrinsic influences on the ageing of the skin, as well as distinguishing the retractable aspects of cutaneous ageing (primarily hormonal and lifestyle influences) from the irretractable (primarily intrinsic ageing), is crucial to this endeavour.", "title": "" }, { "docid": "b3d8c827ac58e5e385179275a2c73b31", "text": "It is the purpose of this article to identify and review criteria that rehabilitation technology should meet in order to offer arm-hand training to stroke patients, based on recent principles of motor learning. A literature search was conducted in PubMed, MEDLINE, CINAHL, and EMBASE (1997–2007). One hundred and eighty seven scientific papers/book references were identified as being relevant. Rehabilitation approaches for upper limb training after stroke show to have shifted in the last decade from being analytical towards being focussed on environmentally contextual skill training (task-oriented training). Training programmes for enhancing motor skills use patient and goal-tailored exercise schedules and individual feedback on exercise performance. Therapist criteria for upper limb rehabilitation technology are suggested which are used to evaluate the strengths and weaknesses of a number of current technological systems. This review shows that technology for supporting upper limb training after stroke needs to align with the evolution in rehabilitation training approaches of the last decade. A major challenge for related technological developments is to provide engaging patient-tailored task oriented arm-hand training in natural environments with patient-tailored feedback to support (re) learning of motor skills.", "title": "" }, { "docid": "9c24c2372ffd9526ee5c80c69685d01f", "text": "This work explores the use of tow steered composite laminates, functionally graded metals (FGM), thickness distributions, and curvilinear rib/spar/stringer topologies for aeroelastic tailoring. Parameterized models of the Common Research Model (CRM) wing box have been developed for passive aeroelastic tailoring trade studies. Metrics of interest include the wing weight, the onset of dynamic flutter, and the static aeroelastic stresses. Compared to a baseline structure, the lowest aggregate static wing stresses could be obtained with tow steered skins (47% improvement), and many of these designs could reduce weight as well (up to 14%). For these structures, the trade-off between flutter speed and weight is generally strong, although one case showed both a 100% flutter improvement and a 3.5% weight reduction. Material grading showed no benefit in the skins, but moderate flutter speed improvements (with no weight or stress increase) could be obtained by grading the spars (4.8%) or ribs (3.2%), where the best flutter results were obtained by grading both thickness and material. 
For the topology work, large weight reductions were obtained by removing an inner spar, and performance was maintained by shifting stringers forward and/or using curvilinear ribs: 5.6% weight reduction, a 13.9% improvement in flutter speed, but a 3.0% increase in stress levels. Flutter resistance was also maintained using straightrotated ribs although the design had a 4.2% lower flutter speed than the curved ribs of similar weight and stress levels were higher. These results will guide the development of a future design optimization scheme established to exploit and combine the individual attributes of these technologies.", "title": "" }, { "docid": "09fe7cffb7871977c1cd383396c44262", "text": "We are interested in the automatic interpretation of how-to instructions, such as cooking recipes, into semantic representations that can facilitate sophisticated question answering. Recent work has shown impressive results on semantic parsing of instructions with minimal supervision, but such techniques cannot handle much of the situated and ambiguous language used in instructions found on the web. In this paper, we suggest how to extend such methods using a model of pragmatics, based on a rich representation of world state.", "title": "" }, { "docid": "f463ee2dd3a9243ed7536d88d8c2c568", "text": "A new silicon controlled rectifier-based power-rail electrostatic discharge (ESD) clamp circuit was proposed with a novel trigger circuit that has very low leakage current in a small layout area for implementation. This circuit was successfully verified in a 40-nm CMOS process by using only low-voltage devices. The novel trigger circuit uses a diode-string based level-sensing ESD detection circuit, but not using MOS capacitor, which has very large leakage current. Moreover, the leakage current on the ESD detection circuit is further reduced, adding a diode in series with the trigger transistor. By combining these two techniques, the total silicon area of the power-rail ESD clamp circuit can be reduced three times, whereas the leakage current is three orders of magnitude smaller than that of the traditional design.", "title": "" }, { "docid": "1b9bcb2ab5bc0b2b2e475066a1f78fbe", "text": "Fragility curves are becoming increasingly common components of flood risk assessments. This report introduces the concept of the fragility curve and shows how fragility curves are related to more familiar reliability concepts, such as the deterministic factor of safety and the relative reliability index. Examples of fragility curves are identified in the literature on structures and risk assessment to identify what methods have been used to develop fragility curves in practice. Four basic approaches are identified: judgmental, empirical, hybrid, and analytical. Analytical approaches are, by far, the most common method encountered in the literature. This group of methods is further decomposed based on whether the limit state equation is an explicit function or an implicit function and on whether the probability of failure is obtained using analytical solution methods or numerical solution methods. Advantages and disadvantages of the various approaches are considered. DISCLAIMER: The contents of this report are not to be used for advertising, publication, or promotional purposes. Citation of trade names does not constitute an official endorsement or approval of the use of such commercial products. All product names and trademarks cited are the property of their respective owners. 
The findings of this report are not to be construed as an official Department of the Army position unless so designated by other authorized documents. DESTROY THIS REPORT WHEN NO LONGER NEEDED. DO NOT RETURN IT TO THE ORIGINATOR.", "title": "" }, { "docid": "c2055f8366e983b45d8607c877126797", "text": "This paper proposes and investigates an offline finite-element-method (FEM)-assisted position and speed observer for brushless dc permanent-magnet (PM) (BLDC-PM) motor drive sensorless control based on the line-to-line PM flux linkage estimation. The zero crossing of the line-to-line PM flux linkage occurs right in the middle of two commutation points (CPs) and is used as a basis for the position and speed observer. The position between CPs is obtained by comparing the estimated line-to-line PM flux with the FEM-calculated line-to-line PM flux. Even if the proposed observer relies on the fundamental model of the machine, a safe starting strategy under heavy load torque, called I-f control, is used, with seamless transition to the proposed sensorless control. The I-f starting method allows low-speed sensorless control, without knowing the initial position and without machine parameter identification. Digital simulations and experimental results are shown, demonstrating the reliability of the FEM-assisted position and speed observer for BLDC-PM motor sensorless control operation.", "title": "" }, { "docid": "46d239e66c1de735f80312d8458b131d", "text": "Cloud computing is a dynamic, scalable and payper-use distributed computing model empowering designers to convey applications amid job designation and storage distribution. Cloud computing encourages to impart a pool of virtualized computer resource empowering designers to convey applications amid job designation and storage distribution. The cloud computing mainly aims to give proficient access to remote and geographically distributed resources. As cloud technology is evolving day by day and confronts numerous challenges, one of them being uncovered is scheduling. Scheduling is basically a set of constructs constructed to have a controlling hand over the order of work to be performed by a computer system. Algorithms are vital to schedule the jobs for execution. Job scheduling algorithms is one of the most challenging hypothetical problems in the cloud computing domain area. Numerous deep investigations have been carried out in the domain of job scheduling of cloud computing. This paper intends to present the performance comparison analysis of various pre-existing job scheduling algorithms considering various parameters. This paper discusses about cloud computing and its constructs in section (i). In section (ii) job scheduling concept in cloud computing has been elaborated. In section (iii) existing algorithms for job scheduling are discussed, and are compared in a tabulated form with respect to various parameters and lastly section (iv) concludes the paper giving brief summary of the work.", "title": "" }, { "docid": "2d3adb98f6b1b4e161d84314958960e5", "text": "BACKGROUND\nBright light therapy was shown to be a promising treatment for depression during pregnancy in a recent open-label study. In an extension of this work, we report findings from a double-blind placebo-controlled pilot study.\n\n\nMETHOD\nTen pregnant women with DSM-IV major depressive disorder were randomly assigned from April 2000 to January 2002 to a 5-week clinical trial with either a 7000 lux (active) or 500 lux (placebo) light box. 
At the end of the randomized controlled trial, subjects had the option of continuing in a 5-week extension phase. The Structured Interview Guide for the Hamilton Depression Scale-Seasonal Affective Disorder Version was administered to assess changes in clinical status. Salivary melatonin was used to index circadian rhythm phase for comparison with antidepressant results.\n\n\nRESULTS\nAlthough there was a small mean group advantage of active treatment throughout the randomized controlled trial, it was not statistically significant. However, in the longer 10-week trial, the presence of active versus placebo light produced a clear treatment effect (p =.001) with an effect size (0.43) similar to that seen in antidepressant drug trials. Successful treatment with bright light was associated with phase advances of the melatonin rhythm.\n\n\nCONCLUSION\nThese findings provide additional evidence for an active effect of bright light therapy for antepartum depression and underscore the need for an expanded randomized clinical trial.", "title": "" }, { "docid": "75a1c22e950ccb135c054353acb8571a", "text": "We study the problem of building generative models of natural source code (NSC); that is, source code written and understood by humans. Our primary contribution is to describe a family of generative models for NSC that have three key properties: First, they incorporate both sequential and hierarchical structure. Second, we learn a distributed representation of source code elements. Finally, they integrate closely with a compiler, which allows leveraging compiler logic and abstractions when building structure into the model. We also develop an extension that includes more complex structure, refining how the model generates identifier tokens based on what variables are currently in scope. Our models can be learned efficiently, and we show empirically that including appropriate structure greatly improves the models, measured by the probability of generating test programs.", "title": "" }, { "docid": "bf0531b03cc36a69aca1956b21243dc6", "text": "Sound of their breath fades with the light. I think about the loveless fascination, Under the milky way tonight. Lower the curtain down in memphis, Lower the curtain down all right. I got no time for private consultation, Under the milky way tonight. Wish I knew what you were looking for. Might have known what you would find. And it's something quite peculiar, Something thats shimmering and white. It leads you here despite your destination, Under the milky way tonight (chorus) Preface This Master's Thesis concludes my studies in Human Aspects of Information Technology (HAIT) at Tilburg University. It describes the development, implementation, and analysis of an automatic mood classifier for music. I would like to thank those who have contributed to and supported the contents of the thesis. Special thanks goes to my supervisor Menno van Zaanen for his dedication and support during the entire process of getting started up to the final results. Moreover, I would like to express my appreciation to Fredrik Mjelle for providing the user-tagged instances exported out of the MOODY database, which was used as the dataset for the experiments. Furthermore, I would like to thank Toine Bogers for pointing me out useful website links regarding music mood classification and sending me papers with citations and references. 
I would also like to thank Michael Voong for sending me his papers on music mood classification research, Jaap van den Herik for his support and structuring of my writing and thinking. I would like to recognise Eric Postma and Marieke van Erp for their time assessing the thesis as members of the examination committee. Finally, I would like to express my gratitude to my family for their enduring support. Abstract This research presents the outcomes of research into using the lingual part of music for building an automatic mood classification system. Using a database consisting of extracted lyrics and user-tagged mood attachments, we built a classifier based on machine learning techniques. By testing the classification system on various mood frameworks (or dimensions) we examined to what extent it is possible to attach mood tags automatically to songs based on lyrics only. Furthermore, we examined to what extent the linguistic part of music revealed adequate information for assigning a mood category and which aspects of mood can be classified best. Our results show that the use of term frequencies and tf*idf values provide a valuable source of …", "title": "" }, { "docid": "c4171bd7b870d26e0b2520fc262e7c88", "text": "Each year, the treatment decisions for more than 230, 000 breast cancer patients in the U.S. hinge on whether the cancer has metastasized away from the breast. Metastasis detection is currently performed by pathologists reviewing large expanses of biological tissues. This process is labor intensive and error-prone. We present a framework to automatically detect and localize tumors as small as 100×100 pixels in gigapixel microscopy images sized 100, 000×100, 000 pixels. Our method leverages a convolutional neural network (CNN) architecture and obtains state-of-the-art results on the Camelyon16 dataset in the challenging lesion-level tumor detection task. At 8 false positives per image, we detect 92.4% of the tumors, relative to 82.7% by the previous best automated approach. For comparison, a human pathologist attempting exhaustive search achieved 73.2% sensitivity. We achieve image-level AUC scores above 97% on both the Camelyon16 test set and an independent set of 110 slides. In addition, we discover that two slides in the Camelyon16 training set were erroneously labeled normal. Our approach could considerably reduce false negative rates in metastasis detection.", "title": "" }, { "docid": "ef598ba4f9a4df1f42debc0eabd1ead8", "text": "Software developers interact with the development environments they use by issuing commands that execute various programming tools, from source code formatters to build tools. However, developers often only use a small subset of the commands offered by modern development environments, reducing their overall development fluency. In this paper, we use several existing command recommender algorithms to suggest new commands to developers based on their existing command usage history, and also introduce several new algorithms. By running these algorithms on data submitted by several thousand Eclipse users, we describe two studies that explore the feasibility of automatically recommending commands to software developers. 
The results suggest that, while recommendation is more difficult in development environments than in other domains, it is still feasible to automatically recommend commands to developers based on their usage history, and that using patterns of past discovery is a useful way to do so.", "title": "" }, { "docid": "32dbbc1b9cc78f2a4db0cffd12cd2467", "text": "OBJECTIVE\nTo evaluate existing automatic speech-recognition (ASR) systems to measure their performance in interpreting spoken clinical questions and to adapt one ASR system to improve its performance on this task.\n\n\nDESIGN AND MEASUREMENTS\nThe authors evaluated two well-known ASR systems on spoken clinical questions: Nuance Dragon (both generic and medical versions: Nuance Gen and Nuance Med) and the SRI Decipher (the generic version SRI Gen). The authors also explored language model adaptation using more than 4000 clinical questions to improve the SRI system's performance, and profile training to improve the performance of the Nuance Med system. The authors reported the results with the NIST standard word error rate (WER) and further analyzed error patterns at the semantic level.\n\n\nRESULTS\nNuance Gen and Med systems resulted in a WER of 68.1% and 67.4% respectively. The SRI Gen system performed better, attaining a WER of 41.5%. After domain adaptation with a language model, the performance of the SRI system improved 36% to a final WER of 26.7%.\n\n\nCONCLUSION\nWithout modification, two well-known ASR systems do not perform well in interpreting spoken clinical questions. With a simple domain adaptation, one of the ASR systems improved significantly on the clinical question task, indicating the importance of developing domain/genre-specific ASR systems.", "title": "" }, { "docid": "e2b653e1d4faf4067cd791a58f48c9fa", "text": "Direct visualization of plant tissues by matrix assisted laser desorption ionization-mass spectrometry imaging (MALDI-MSI) has revealed key insights into the localization of metabolites in situ. Recent efforts have determined the spatial distribution of primary and secondary metabolites in plant tissues and cells. Strategies have been applied in many areas of metabolism including isotope flux analyses, plant interactions, and transcriptional regulation of metabolite accumulation. Technological advances have pushed achievable spatial resolution to subcellular levels and increased instrument sensitivity by several orders of magnitude. It is anticipated that MALDI-MSI and other MSI approaches will bring a new level of understanding to metabolomics as scientists will be encouraged to consider spatial heterogeneity of metabolites in descriptions of metabolic pathway regulation.", "title": "" } ]
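A note on the evaluation metric in the clinical ASR passage above: word error rate (WER) is the word-level Levenshtein (edit) distance between the recognizer output and the reference transcript, divided by the number of reference words. The sketch below is a minimal, generic implementation of that standard definition; it is not code from the cited study, and the example sentences are hypothetical.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over word tokens.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                      # delete all remaining reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j                      # insert all remaining hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # match or substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Hypothetical example: one deletion and one substitution against a 6-word reference.
print(word_error_rate("what is the dose of aspirin",
                      "what is dose of asprin"))  # ~0.333, i.e. 33.3% WER
```

Reported WER figures such as the 26.7% above are exactly this ratio, usually expressed as a percentage.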
scidocsrr
7e58060dcc5ecad17ce076b4ed098c05
Erratum to: FUGE: A joint meta-heuristic approach to cloud job scheduling algorithm using fuzzy theory and a genetic method
[ { "docid": "5bb390a0c9e95e0691ac4ba07b5eeb9d", "text": "Clearing the clouds away from the true potential and obstacles posed by this computing capability.", "title": "" } ]
[ { "docid": "8da2450cbcb9b43d07eee187e5bf07f1", "text": "We propose a unified approach for bottom-up hierarchical image segmentation and object proposal generation for recognition, called Multiscale Combinatorial Grouping (MCG). For this purpose, we first develop a fast normalized cuts algorithm. We then propose a high-performance hierarchical segmenter that makes effective use of multiscale information. Finally, we propose a grouping strategy that combines our multiscale regions into highly-accurate object proposals by exploring efficiently their combinatorial space. We also present Single-scale Combinatorial Grouping (SCG), a faster version of MCG that produces competitive proposals in under five seconds per image. We conduct an extensive and comprehensive empirical validation on the BSDS500, SegVOC12, SBD, and COCO datasets, showing that MCG produces state-of-the-art contours, hierarchical regions, and object proposals.", "title": "" }, { "docid": "b426696d7c1764502706696b0d462a34", "text": "The discriminative model learning for image denoising has been recently attracting considerable attentions due to its favorable denoising performance. In this paper, we take one step forward by investigating the construction of feed-forward denoising convolutional neural networks (DnCNNs) to embrace the progress in very deep architecture, learning algorithm, and regularization method into image denoising. Specifically, residual learning and batch normalization are utilized to speed up the training process as well as boost the denoising performance. Different from the existing discriminative denoising models which usually train a specific model for additive white Gaussian noise at a certain noise level, our DnCNN model is able to handle Gaussian denoising with unknown noise level (i.e., blind Gaussian denoising). With the residual learning strategy, DnCNN implicitly removes the latent clean image in the hidden layers. This property motivates us to train a single DnCNN model to tackle with several general image denoising tasks, such as Gaussian denoising, single image super-resolution, and JPEG image deblocking. Our extensive experiments demonstrate that our DnCNN model can not only exhibit high effectiveness in several general image denoising tasks, but also be efficiently implemented by benefiting from GPU computing.", "title": "" }, { "docid": "fe34fcd09a10c382596cffcd13f17a3c", "text": "As Granular Computing has gained interest, more research has lead into using different representations for Information Granules, i.e., rough sets, intervals, quotient space, fuzzy sets; where each representation offers different approaches to information granulation. These different representations have given more flexibility to what information granulation can achieve. In this overview paper, the focus is only on journal papers where Granular Computing is studied when fuzzy logic systems are used, covering research done with Type-1 Fuzzy Logic Systems, Interval Type-2 Fuzzy Logic Systems, as well as the usage of general concepts of Fuzzy Systems.", "title": "" }, { "docid": "26c4cded1181ce78cc9b61a668e57939", "text": "Monitoring crop condition and production estimates at the state and county level is of great interest to the U.S. Department of Agriculture. The National Agricultural Statistical Service (NASS) of the U.S. Department of Agriculture conducts field interviews with sampled farm operators and obtains crop cuttings to make crop yield estimates at regional and state levels. 
NASS needs supplemental spatial data that provides timely information on crop condition and potential yields. In this research, the crop model EPIC (Erosion Productivity Impact Calculator) was adapted for simulations at regional scales. Satellite remotely sensed data provide a real-time assessment of the magnitude and variation of crop condition parameters, and this study investigates the use of these parameters as an input to a crop growth model. This investigation was conducted in the semi-arid region of North Dakota in the southeastern part of the state. The primary objective was to evaluate a method of integrating parameters retrieved from satellite imagery in a crop growth model to simulate spring wheat yields at the sub-county and county levels. The input parameters derived from remotely sensed data provided spatial integrity, as well as a real-time calibration of model simulated parameters during the season, to ensure that the modeled and observed conditions agree. A radiative transfer model, SAIL (Scattered by Arbitrary Inclined Leaves), provided the link between the satellite data and crop model. The model parameters were simulated in a geographic information system grid, which was the platform for aggregating yields at local and regional scales. A model calibration was performed to initialize the model parameters. This calibration was performed using Landsat data over three southeast counties in North Dakota. The model was then used to simulate crop yields for the state of North Dakota with inputs derived from NOAA AVHRR data. The calibration and the state level simulations are compared with spring wheat yields reported by NASS objective yield surveys. Introduction Monitoring agricultural crop conditions during the growing season and estimating the potential crop yields are both important for the assessment of seasonal production. Accurate and timely assessment of particularly decreased production caused by a natural disaster, such as drought or pest infestation, can be critical for countries where the economy is dependent on the crop harvest. Early assessment of yield reductions could avert a disastrous situation and help in strategic planning to meet the demands. The National Agricultural Statistics Service (NASS) of the U.S. Department of Agriculture (USDA) monitors crop conditions in the U.S. and provides monthly projected estimates of crop yield and production. NASS has developed methods to assess crop growth and development from several sources of information, including several types of surveys of farm operators. Field offices in each state are responsible for monitoring the progress and health of the crop and integrating crop condition with local weather information. This crop information is also distributed in a biweekly report on regional weather conditions. NASS provides monthly information to the Agriculture Statistics Board, which assesses the potential yields of all commodities based on crop condition information acquired from different sources. This research complements efforts to independently assess crop condition at the county, agricultural statistics district, and state levels. In the early 1960s, NASS initiated “objective yield” surveys for crops such as corn, soybean, wheat, and cotton in States with the greatest acreages (Allen et al., 1994). These surveys establish small sample units in randomly selected fields which are visited monthly to determine numbers of plants, numbers of fruits (wheat heads, corn ears, soybean pods, etc.), and weight per fruit. 
Yield forecasting models are based on relationships of samples of the same maturity stage in comparable months during the past four years in each State. Additionally, the Agency implemented a midyear Area Frame that enabled creation of probabilistic based acreage estimates. For major crops, sampling errors are as low as 1 percent at the U.S. level and 2 to 3 percent in the largest producing States. Accurate crop production forecasts require accurate forecasts of acreage at harvest, its geographic distribution, and the associated crop yield determined by local growing conditions. There can be significant year-to-year variability which requires a systematic monitoring capability. To quantify the complex effects of environment, soils, and management practices, both yield and acreage must be assessed at sub-regional levels where a limited range of factors and simple interactions permit modeling and estimation. A yield forecast within homogeneous soil type, land use, crop variety, and climate preclude the necessity for use of a complex forecast model. In 1974, the Large Area Crop Inventory Experiment (LACIE), a joint effort of the National Aeronautics and Space Administration (NASA), the USDA, and the National Oceanic and Atmospheric Administration (NOAA) began to apply satellite remote sensing technology on experimental bases to forecast harvests in important wheat producing areas (MacDonald, 1979). In 1977 LACIE in-season forecasted a 30 percent shortfall in Soviet spring wheat production that came within 10 percent of the official Soviet estimate that came several months after the harvest (Myers, 1983). P H O T O G R A M M E T R I C E N G I N E E R I N G & R E M O T E S E N S I N G Photogrammetric Engineering & Remote Sensing Vol. 69, No. 6, June 2003, pp. 665–674. 0099-1112/03/6906–665$3.00/0 © 2003 American Society for Photogrammetry and Remote Sensing P.C. Doraiswamy and A. Stern are with the USDA, ARS, Hydrology and Remote Sensing Lab, Bldg 007, Rm 104/ BARC West, Beltsville, MD 20705 (pdoraiswamy@ hydrolab.arsusda.gov). Sophie Moulin is with INRA/Unite Climat–Sol–Environnement, Domaine St paul, Site Agroparc, 84914 Avignon Cedex 9, France. P.W. Cook is with the USDA, National Agricultural Statistical Service, Research and Development Division, 3251 Old Lee Highway, Rm 305, Fairfax, VA 22030-1504. IPC_Grams_03-905 4/15/03 1:19 AM Page 1", "title": "" }, { "docid": "56b58efbeab10fa95e0f16ad5924b9e5", "text": "This paper investigates (i) preplanned switching events and (ii) fault events that lead to islanding of a distribution subsystem and formation of a micro-grid. The micro-grid includes two distributed generation (DG) units. One unit is a conventional rotating synchronous machine and the other is interfaced through a power electronic converter. The interface converter of the latter unit is equipped with independent real and reactive power control to minimize islanding transients and maintain both angle stability and voltage quality within the micro-grid. The studies are performed based on a digital computer simulation approach using the PSCAD/EMTDC software package. The studies show that an appropriate control strategy for the power electronically interfaced DG unit can ensure stability of the micro-grid and maintain voltage quality at designated buses, even during islanding transients. 
This paper concludes that presence of an electronically-interfaced DG unit makes the concept of micro-grid a technically viable option for further investigations.", "title": "" }, { "docid": "13fed0d1099638f536c5a950e3d54074", "text": "Notes: (1) These questions require thought, but do not require long answers. Please be as concise as possible. (2) If you have a question about this homework, we encourage you to post your question on our Piazza forum, at https://piazza.com/stanford/autumn2016/cs229. (3) If you missed the first lecture or are unfamiliar with the collaboration or honor code policy, please read the policy on Handout #1 (available from the course website) before starting work. (4) For problems that require programming, please include in your submission a printout of your code (with comments) and any figures that you are asked to plot. If you are skipping a question, please include it on your PDF/photo, but leave the question blank and tag it appropriately on Gradescope. This includes extra credit problems. If you are scanning your document by cellphone, please check the Piazza forum for recommended cellphone scanning apps and best practices. 1. [23 points] Uniform convergence You are hired by CNN to help design the sampling procedure for making their electoral predictions for the next presidential election in the (fictitious) country of Elbania. The country of Elbania is organized into states, and there are only two candidates running in this election: One from the Elbanian Democratic party, and another from the Labor Party of Elbania. The plan for making our electorial predictions is as follows: We'll sample m voters from each state, and ask whether they're voting democrat. We'll then publish, for each state, the estimated fraction of democrat voters. In this problem, we'll work out how many voters we need to sample in order to ensure that we get good predictions with high probability. One reasonable goal might be to set m large enough that, with high probability, we obtain uniformly accurate estimates of the fraction of democrat voters in every state. But this might require surveying very many people, which would be prohibitively expensive. So, we're instead going to demand only a slightly lower degree of accuracy. Specifically, we'll say that our prediction for a state is \" highly inaccurate \" if the estimated fraction of democrat voters differs from the actual fraction of democrat voters within that state by more than a tolerance factor γ. CNN knows that their viewers will tolerate some small number of states' estimates being highly inaccurate; however, their credibility would be damaged if they reported highly inaccurate estimates for too many states. So, rather than …", "title": "" }, { "docid": "a3227034d28c2f2a0f858e1a233ecbc4", "text": "With the persistent shift towards multi-sourcing, the complexity of service delivery is continuously increasing. This presents new challenges for clients who now have to integrate interdependent services from multiple providers. As other functions, service integration is subject to make-or-buy decisions: clients can either build the required capabilities themselves or delegate service integration to external functions. To define detailed organizational models, one requires understanding of specific tasks and how to allocate them. Based on a qualitative and quantitative expert study, we analyze generic organizational models, and identify key service integration tasks. 
The allocation of these tasks to clients or their providers generates a set of granular organizational structures. We analyze drivers for delegating these tasks, and develop typical allocations in practice. Our work contributes to expanding the theoretical foundations of service integration. Moreover, our findings will assist clients to design their service integration organization, and to build more effective multi-sourcing solutions.", "title": "" }, { "docid": "ebb024bbd923d35fd86adc2351073a48", "text": "Background: Depression is a chronic condition that results in considerable disability, and particularly in later life, severely impacts the life quality of the individual with this condition. The first aim of this review article was to summarize, synthesize, and evaluate the research base concerning the use of dance-based exercises on health status, in general, and secondly, specifically for reducing depressive symptoms, in older adults. A third was to provide directives for professionals who work or are likely to work with this population in the future. Methods: All English language peer reviewed publications detailing the efficacy of dance therapy as an intervention strategy for older people in general, and specifically for minimizing depression and dependence among the elderly were analyzed.", "title": "" }, { "docid": "074de6f0c250f5c811b69598551612e4", "text": "In this paper we present a novel GPU-friendly real-time voxelization technique for rendering homogeneous media that is defined by particles, e.g. fluids obtained from particle-based simulations such as Smoothed Particle Hydrodynamics (SPH). Our method computes view-adaptive binary voxelizations with on-the-fly compression of a tiled perspective voxel grid, achieving higher resolutions than previous approaches. It allows for interactive generation of realistic images, enabling advanced rendering techniques such as ray casting-based refraction and reflection, light scattering and absorption, and ambient occlusion. In contrast to previous methods, it does not rely on preprocessing such as expensive, and often coarse, scalar field conversion or mesh generation steps. Our method directly takes unsorted particle data as input. It can be further accelerated by identifying fully populated simulation cells during simulation. The extracted surface can be filtered to achieve smooth surface appearance.", "title": "" }, { "docid": "ed9c0cdb74950bf0f1288931707b9d08", "text": "Introduction This chapter reviews the theoretical and empirical literature on the concept of credibility and its areas of application relevant to information science and technology, encompassing several disciplinary approaches. An information seeker's environment—the Internet, television, newspapers, schools, libraries, bookstores, and social networks—abounds with information resources that need to be evaluated for both their usefulness and their likely level of accuracy. As people gain access to a wider variety of information resources, they face greater uncertainty regarding who and what can be believed and, indeed, who or what is responsible for the information they encounter. Moreover, they have to develop new skills and strategies for determining how to assess the credibility of an information source. Historically, the credibility of information has been maintained largely by professional knowledge workers such as editors, reviewers, publishers, news reporters, and librarians. 
Today, quality control mechanisms are evolving in such a way that a vast amount of information accessed through a wide variety of systems and resources is out of date, incomplete, poorly organized, or simply inaccurate (Janes & Rosenfeld, 1996). Credibility has been examined across a number of fields ranging from communication, information science, psychology, marketing, and the management sciences to interdisciplinary efforts in human-computer interaction (HCI). Each field has examined the construct and its practical significance using fundamentally different approaches, goals, and presuppositions, all of which results in conflicting views of credibility and its effects. The notion of credibility has been discussed at least since Aristotle's examination of ethos and his observations of speakers' relative abilities to persuade listeners. Disciplinary approaches to investigating credibility systematically developed only in the last century, beginning within the field of communication. A landmark among these efforts was the work of Hovland and colleagues (Hovland, Jannis, & Kelley, 1953; Hovland & Weiss, 1951), who focused on the influence of various characteristics of a source on a recipient's message acceptance. This work was followed by decades of interest in the relative credibility of media involving comparisons between newspapers, radio, television, Communication researchers have tended to focus on sources and media, viewing credibility as a perceived characteristic. Within information science, the focus is on the evaluation of information, most typically instantiated in documents and statements. Here, credibility has been viewed largely as a criterion for relevance judgment, with researchers focusing on how information seekers assess a document's likely level of This brief account highlights an often implicit focus on varying objects …", "title": "" }, { "docid": "0cecb071d4358e60a113a9815272959f", "text": "Single-cell RNA-Sequencing (scRNA-Seq) has become the most widely used high-throughput method for transcription profiling of individual cells. Systematic errors, including batch effects, have been widely reported as a major challenge in high-throughput technologies. Surprisingly, these issues have received minimal attention in published studies based on scRNA-Seq technology. We examined data from five published studies and found that systematic errors can explain a substantial percentage of observed cell-to-cell expression variability. Specifically, we found that the proportion of genes reported as expressed explains a substantial part of observed variability and that this quantity varies systematically across experimental batches. Furthermore, we found that the implemented experimental designs confounded outcomes of interest with batch effects, a design that can bring into question some of the conclusions of these studies. Finally, we propose a simple experimental design that can ameliorate the effect of theses systematic errors have on downstream results. . CC-BY 4.0 International license peer-reviewed) is the author/funder. It is made available under a The copyright holder for this preprint (which was not . http://dx.doi.org/10.1101/025528 doi: bioRxiv preprint first posted online Aug. 25, 2015; Single-cell RNA-Sequencing (scRNA-Seq) has become the primary tool for profiling the transcriptomes of hundreds or even thousands of individual cells in parallel. 
Our experience with high-throughput genomic data in general, is that well thought-out data processing pipelines are essential to produce meaningful downstream results. We expect the same to be true for scRNA-seq data. Here we show that while some tools developed for analyzing bulk RNA-Seq can be used for scRNA-Seq data, such as the mapping and alignment software, other steps in the processing, such as normalization, quality control and quantification, require new methods to account for the additional variability that is specific to this technology. One of the most challenging sources of unwanted variability and systematic error in highthroughput data are what are commonly referred to as batch effects. Given the way that scRNASeq experiments are conducted, there is much room for concern regarding batch effects. Specifically, batch effects occur when cells from one biological group or condition are cultured, captured and sequenced separate from cells in a second condition. Although batch information is not always included in the experimental annotations that are publicly available, one can extract surrogate variables from the raw sequencing (FASTQ) files. Namely, the sequencing instrument used, the run number from the instrument and the flow cell lane. Although the sequencing is unlikely to be a major source of unwanted variability, it serves as a surrogate for other experimental procedures that very likely do have an effect, such as starting material, PCR amplification reagents/conditions, and cell cycle stage of the cells. Here we will refer to the resulting differences induced by different groupings of these sources of variability as batch effects. In a completely confounded study, it is not possible to determine if the biological condition or batch effects are driving the observed variation. In contrast, incorporating biological replicates across in the experimental design and processing the replicates across multiple batches permits observed variation to be attributed to biology or batch effects (Figure 1). To demonstrate the widespread problem of systematic bias, batch effects, and confounded experimental designs in scRNA-Seq studies, we surveyed several published data sets. We discuss the consequences of failing to consider the presence of this unwanted technical variability, and consider new strategies to minimize its impact on scRNA-Seq data. . CC-BY 4.0 International license peer-reviewed) is the author/funder. It is made available under a The copyright holder for this preprint (which was not . http://dx.doi.org/10.1101/025528 doi: bioRxiv preprint first posted online Aug. 25, 2015;", "title": "" }, { "docid": "f2d8e0ae632ec9970351aff34f58badc", "text": "The high potential of superquadrics as modeling elements for image segmentation tasks has been pointed out for years in the computer vision community. In this work, we employ superquadrics as modeling elements for multiple object segmentation in range images. Segmentation is executed in two stages: First, a hypothesis about the values of the segmentation parameters is generated. Second, the hypothesis is refined locally. In both stages, object boundary and region information are considered. Boundary information is derived via model-based edge detection in the input range image. Hypothesis generation uses boundary information to isolate image regions that can be accurately described by superquadrics. 
Within hypothesis refinement, a game-theoretic framework is used to fuse the two information sources by associating an objective function to each information source. Iterative optimization of the two objective functions in succession, outputs a precise description of all image objects. We demonstrate experimentally that this approach substantially improves the most established method in superquadric segmentation in terms of accuracy and computational efficiency. We demonstrate the applicability of our segmentation framework in real-world applications by constructing a novel robotic system for automatic unloading of jumbled box-like objects from platforms.", "title": "" }, { "docid": "2f2291baa6c8a74744a16f27df7231d2", "text": "Malicious programs, such as viruses and worms, are frequently related to previous programs through evolutionary relationships. Discovering those relationships and constructing a phylogeny model is expected to be helpful for analyzing new malware and for establishing a principled naming scheme. Matching permutations of code may help build better models in cases where malware evolution does not keep things in the same order. We describe methods for constructing phylogeny models that uses features called n-perms to match possibly permuted codes. An experiment was performed to compare the relative effectiveness of vector similarity measures using n-perms and n-grams when comparing permuted variants of programs. The similarity measures using n-perms maintained a greater separation between the similarity scores of permuted families of specimens versus unrelated specimens. A subsequent study using a tree generated through n-perms suggests that phylogeny models based on n-perms may help forensic analysts investigate new specimens, and assist in reconciling malware naming inconsistencies Škodlivé programy, jako viry a červy (malware), jsou zřídka psány narychlo, jen tak. Obvykle jsou výsledkem svých evolučních vztahů. Zjištěním těchto vztahů a tvorby v přesné fylogenezi se předpokládá užitečná pomoc v analýze nového malware a ve vytvoření zásad pojmenovacího schématu. Porovnávání permutací kódu uvnitř malware mů že nabídnout výhody pro fylogenní generování, protože evoluční kroky implementované autory malware nemohou uchovat posloupnosti ve sdíleném kódu. Popisujeme rodinu fylogenních generátorů, které provádějí clustering pomocí PQ stromově založených extrakčních vlastností. Byl vykonán experiment v němž výstup stromu z těchto generátorů byl vyhodnocen vzhledem k fylogenezím generovaným pomocí vážených n-gramů. Výsledky ukazují výhody přístupu založeného na permutacích ve fylogenním generování malware. Les codes malveillants, tels que les virus et les vers, sont rarement écrits de zéro; en conséquence, il existe des relations de nature évolutive entre ces différents codes. Etablir ces relations et construire une phylogénie précise permet d’espérer une meilleure capacité d’analyse de nouveaux codes malveillants et de disposer d’une méthode de fait de nommage de ces codes. La concordance de permutations de code avec des parties de codes malveillants sont susceptibles d’être très intéressante dans l’établissement d’une phylogénie, dans la mesure où les étapes évolutives réalisées par les auteurs de codes malveillants ne conservent généralement pas l’ordre des instructions présentes dans le code commun. Nous décrivons ici une famille de générateurs phylogénétiques réalisant des regroupements à l’aide de caractéristiques extraites d’arbres PQ. 
Une expérience a été réalisée, dans laquelle l’arbre produit par ces générateurs est évalué d’une part en le comparant avec les classificiations de références utilisées par les antivirus par scannage, et d’autre part en le comparant aux phylogénies produites à l’aide de polygrammes de taille n (n-grammes), pondérés. Les résultats démontrent l’intérêt de l’approche utilisant les permutations dans la génération phylogénétique des codes malveillants. Haitalliset ohjelmat, kuten tietokonevirukset ja -madot, kirjoitetaan harvoin alusta alkaen. Tämän seurauksena niistä on löydettävissä evoluution kaltaista samankaltaisuutta. Samankaltaisuuksien löytämisellä sekä rakentamalla tarkka evoluutioon perustuva malli voidaan helpottaa uusien haitallisten ohjelmien analysointia sekä toteuttaa nimeämiskäytäntöjä. Permutaatioiden etsiminen koodista saattaa antaa etuja evoluutiomallin muodostamiseen, koska haitallisten ohjelmien kirjoittajien evolutionääriset askeleet eivät välttämättä säilytä jaksoittaisuutta ohjelmakoodissa. Kuvaamme joukon evoluutiomallin muodostajia, jotka toteuttavat klusterionnin käyttämällä PQ-puuhun perustuvia ominaisuuksia. Teimme myös kokeen, jossa puun tulosjoukkoa verrattiin virustentorjuntaohjelman muodostamaan viitejoukkoon sekä evoluutiomalleihin, jotka oli muodostettu painotetuilla n-grammeilla. Tulokset viittaavat siihen, että permutaatioon perustuvaa lähestymistapaa voidaan menestyksekkäästi käyttää evoluutiomallien muodostamineen. Maliziöse Programme, wie z.B. Viren und Würmer, werden nur in den seltensten Fällen komplett neu geschrieben; als Ergebnis können zwischen verschiedenen maliziösen Codes Abhängigkeiten gefunden werden. Im Hinblick auf Klassifizierung und wissenschaftlichen Aufarbeitung neuer maliziöser Codes kann es sehr hilfreich erweisen, Abhängigkeiten zu bestehenden maliziösen Codes darzulegen und somit einen Stammbaum zu erstellen. In dem Artikel wird u.a. auf moderne Ansätze innerhalb der Staumbaumgenerierung anhand ausgewählter Win32 Viren eingegangen. I programmi maligni, quali virus e worm, sono raramente scritti da zero; questo significa che vi sono delle relazioni di evoluzione tra di loro. Scoprire queste relazioni e costruire una filogenia accurata puo’aiutare sia nell’analisi di nuovi programmi di questo tipo, sia per stabilire una nomenclatura avente una base solida. Cercare permutazioni di codice tra vari programmi puo’ dare un vantaggio per la generazione delle filogenie, dal momento che i passaggi evolutivi implementati dagli autori possono non aver preservato la sequenzialita’ del codice originario. In questo articolo descriviamo una famiglia di generatori di filogenie che effettuano clustering usando feature basate su alberi PQ. In un esperimento l’albero di output dei generatori viene confrontato con una classificazione di rifetimento ottenuta da un programma anti-virus, e con delle filogenie generate usando n-grammi pesati. I risultati indicano i risultati positivi dell’approccio basato su permutazioni nella generazione delle filogenie del malware. ", "title": "" }, { "docid": "17dfbb112878f4cf4344c5dff195fa18", "text": "Hybrid vehicle techniques have been widely studied recently because of their potential to significantly improve the fuel economy and drivability of future ground vehicles. Due to the dualpower-source nature of these vehicles, control strategies based on engineering intuition frequently fail to fully explore the potential of these advanced vehicles. 
In this paper, we will present a procedure for the design of an approximately optimal power management strategy. The design procedure starts by defining a cost function, such as minimizing a combination of fuel consumption and selected emission species over a driving cycle. Dynamic Programming (DP) is then utilized to find the optimal control actions. Through analysis of the behavior of the DP control actions, approximately optimal rules are extracted, which, unlike DP control signals, are implementable. The performance of the power management control strategy is verified by using the hybrid vehicle model HE-VESIM developed at the Automotive Research Center of the University of Michigan. A trade-off study between fuel economy and emissions was performed. It was found that significant emission reduction can be achieved at the expense of a small increase in fuel consumption. Power Management Strategy for a Parallel Hybrid Electric Truck", "title": "" }, { "docid": "0397514e0d4a87bd8b59d9b317f8c660", "text": "Formula 1 motorsport is a platform for maximum race car driving performance resulting from high-tech developments in the area of lightweight materials and aerodynamic design. In order to ensure the driver’s safety in case of high-speed crashes, special impact structures are designed to absorb the race car’s kinetic energy and limit the decelerations acting on the human body. These energy absorbing structures are made of laminated composite sandwich materials like the whole monocoque chassis and have to meet defined crash test requirements specified by the FIA. This study covers the crash behaviour of the nose cone as the F1 racing car front impact structure. Finite element models for dynamic simulations with the explicit solver LS-DYNA are developed with the emphasis on the composite material modelling. Numerical results are compared to crash test data in terms of deceleration levels, absorbed energy and crushing mechanisms. The validation led to satisfying results and the overall conclusion that dynamic simulations with LS-DYNA can be a helpful tool in the design phase of an F1 racing car front impact structure.", "title": "" }, { "docid": "3ab4b094f3e32a4f467a849347157264", "text": "Overview of geographically explicit momentary assessment research, applied to the study of mental health and well-being, which allows for cross-validation, extension, and enrichment of research on place and health. Building on the historical foundations of both ecological momentary assessment and geographic momentary assessment research, this review explores their emerging synergy into a more generalized and powerful research framework. Geographically explicit momentary assessment methods are rapidly advancing across a number of complimentary literatures that intersect but have not yet converged. Key contributions from these areas reveal tremendous potential for transdisciplinary and translational science. Mobile communication devices are revolutionizing research on mental health and well-being by physically linking momentary experience sampling to objective measures of socio-ecological context in time and place. Methodological standards are not well-established and will be required for transdisciplinary collaboration and scientific inference moving forward.", "title": "" }, { "docid": "7cd8dee294d751ec6c703d628e0db988", "text": "A major component of secondary education is learning to write effectively, a skill which is bolstered by repeated practice with formative guidance. 
However, providing focused feedback to every student on multiple drafts of each essay throughout the school year is a challenge for even the most dedicated of teachers. This paper first establishes a new ordinal essay scoring model and its state of the art performance compared to recent results in the Automated Essay Scoring field. Extending this model, we describe a method for using prediction on realistic essay variants to give rubric-specific formative feedback to writers. This method is used in Revision Assistant, a deployed data-driven educational product that provides immediate, rubric-specific, sentence-level feedback to students to supplement teacher guidance. We present initial evaluations of this feedback generation, both offline and in deployment.", "title": "" }, { "docid": "02b764f5b047e3ed6f014f6df7c1c91a", "text": "Policy learning for partially observed control tasks requires policies that can remember salient information from past observations. In this paper, we present a method for learning policies with internal memory for high-dimensional, continuous systems, such as robotic manipulators. Our approach consists of augmenting the state and action space of the system with continuous-valued memory states that the policy can read from and write to. Learning general-purpose policies with this type of memory representation directly is difficult, because the policy must automatically figure out the most salient information to memorize at each time step. We show that, by decomposing this policy search problem into a trajectory optimization phase and a supervised learning phase through a method called guided policy search, we can acquire policies with effective memorization and recall strategies. Intuitively, the trajectory optimization phase chooses the values of the memory states that will make it easier for the policy to produce the right action in future states, while the supervised learning phase encourages the policy to use memorization actions to produce those memory states. We evaluate our method on tasks involving continuous control in manipulation and navigation settings, and show that our method can learn complex policies that successfully complete a range of tasks that require memory.", "title": "" }, { "docid": "73abeef146be96d979a56a4794a5e130", "text": "Regular path queries (RPQs) are a fundamental part of recent graph query languages like SPARQL and PGQL. They allow the definition of recursive path structures through regular expressions in a declarative pattern matching environment. We study the use of the K2-tree graph compression technique to materialize RPQ results with low memory consumption for indexing. Compact index representations enable the efficient storage of multiple indexes for varying RPQs.", "title": "" }, { "docid": "038064c2998a5da8664be1ba493a0326", "text": "The bandit problem is revisited and considered under the PAC model. Our main contribution in this part is to show that given n arms, it suffices to pull the arms O( n 2 log 1 δ ) times to find an -optimal arm with probability of at least 1 − δ. This is in contrast to the naive bound of O( n 2 log n δ ). We derive another algorithm whose complexity depends on the specific setting of the rewards, rather than the worst case setting. We also provide a matching lower bound. We show how given an algorithm for the PAC model Multi-Armed Bandit problem, one can derive a batch learning algorithm for Markov Decision Processes. 
This is done essentially by simulating Value Iteration, and in each iteration invoking the multi-armed bandit algorithm. Using our PAC algorithm for the multi-armed bandit problem we improve the dependence on the number of actions.", "title": "" } ]
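The bandit passage above contrasts its O((n/&epsilon;&sup2;) log(1/&delta;)) sample complexity with the naive bound of O((n/&epsilon;&sup2;) log(n/&delta;)). For concreteness, the sketch below implements only that naive baseline mentioned in the text: pull every arm a fixed number of times and return the arm with the best empirical mean. It is not the paper's improved algorithm; the Bernoulli arms and constants are illustrative assumptions.

```python
import math
import random

def naive_pac_best_arm(arm_means, epsilon=0.1, delta=0.05):
    """Naive (epsilon, delta)-PAC selection: sample each of the n arms
    O((1/eps^2) * log(n/delta)) times, then return the best empirical arm."""
    n = len(arm_means)
    pulls = math.ceil((2.0 / epsilon ** 2) * math.log(2.0 * n / delta))
    empirical = []
    for p in arm_means:                          # Bernoulli arms for illustration
        rewards = sum(1 for _ in range(pulls) if random.random() < p)
        empirical.append(rewards / pulls)
    best = max(range(n), key=lambda i: empirical[i])
    return best, pulls

random.seed(0)
arms = [0.45, 0.50, 0.62, 0.58]                  # hypothetical true means
best, pulls_per_arm = naive_pac_best_arm(arms)
print(f"chose arm {best} after {pulls_per_arm} pulls per arm")
```

By a Hoeffding argument, this many pulls per arm makes every empirical mean accurate to within epsilon/2 with probability at least 1 - delta, so the returned arm is epsilon-optimal; the paper's contribution is removing the log n factor from the per-arm pull count.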
scidocsrr
bad06d4da6237eeb9f836e7be361c431
Arabic speech recognition using MFCC feature extraction and ANN classification
[ { "docid": "e7d3fae34553c61827b78e50c2e205ee", "text": "Speaker Identification (SI) is the process of identifying the speaker from a given utterance by comparing the voice biometrics of the utterance with those utterance models stored beforehand. SI technologies are taken a new direction due to the advances in artificial intelligence and have been used widely in various domains. Feature extraction is one of the most important aspects of SI, which significantly influences the SI process and performance. This systematic review is conducted to identify, compare, and analyze various feature extraction approaches, methods, and algorithms of SI to provide a reference on feature extraction approaches for SI applications and future studies. The review was conducted according to Kitchenham systematic review methodology and guidelines, and provides an in-depth analysis on proposals and implementations of SI feature extraction methods discussed in the literature between year 2011 and 2106. Three research questions were determined and an initial set of 535 publications were identified to answer the questions. After applying exclusion criteria 160 related publications were shortlisted and reviewed in this paper; these papers were considered to answer the research questions. Results indicate that pure Mel-Frequency Cepstral Coefficients (MFCCs) based feature extraction approaches have been used more than any other approach. Furthermore, other MFCC variations, such as MFCC fusion and cleansing approaches, are proven to be very popular as well. This study identified that the current SI research trend is to develop a robust universal SI framework to address the important problems of SI such as adaptability, complexity, multi-lingual recognition, and noise robustness. The results presented in this research are based on past publications, citations, and number of implementations with citations being most relevant. This paper also presents the general process of SI. © 2017 Elsevier Ltd. All rights reserved.", "title": "" } ]
[ { "docid": "2a7de9a210dd074caebeef62d0a56700", "text": "We describe a new algorithm to enumerate the k shortest simple (loopless) paths in a directed graph and report on its implementation. Our algorithm is based on a replacement paths algorithm proposed by Hershberger and Suri [2001], and can yield a factor Θ(n) improvement for this problem. But there is a caveat: The fast replacement paths subroutine is known to fail for some directed graphs. However, the failure is easily detected, and so our k shortest paths algorithm optimistically uses the fast subroutine, then switches to a slower but correct algorithm if a failure is detected. Thus, the algorithm achieves its Θ(n) speed advantage only when the optimism is justified. Our empirical results show that the replacement paths failure is a rare phenomenon, and the new algorithm outperforms the current best algorithms; the improvement can be substantial in large graphs. For instance, on GIS map data with about 5,000 nodes and 12,000 edges, our algorithm is 4--8 times faster. In synthetic graphs modeling wireless ad hoc networks, our algorithm is about 20 times faster.", "title": "" }, { "docid": "1f629796e9180c14668e28b83dc30675", "text": "In this article we tackle the issue of searchable encryption with a generalized query model. Departing from many previous works that focused on queries consisting of a single keyword, we consider the the case of queries consisting of arbitrary boolean expressions on keywords, that is to say conjunctions and disjunctions of keywords and their complement. Our construction of boolean symmetric searchable encryption BSSE is mainly based on the orthogonalization of the keyword field according to the Gram-Schmidt process. Each document stored in an outsourced server is associated with a label which contains all the keywords corresponding to the document, and searches are performed by way of a simple inner product. Furthermore, the queries in the BSSE scheme are randomized. This randomization hides the search pattern of the user since the search results cannot be associated deterministically to queries. We formally define an adaptive security model for the BSSE scheme. In addition, the search complexity is in $O(n)$ where $n$ is the number of documents stored in the outsourced server.", "title": "" }, { "docid": "7360c92ef44058694135338acad6838c", "text": "Modern chip multiprocessor (CMP) systems employ multiple memory controllers to control access to main memory. The scheduling algorithm employed by these memory controllers has a significant effect on system throughput, so choosing an efficient scheduling algorithm is important. The scheduling algorithm also needs to be scalable — as the number of cores increases, the number of memory controllers shared by the cores should also increase to provide sufficient bandwidth to feed the cores. Unfortunately, previous memory scheduling algorithms are inefficient with respect to system throughput and/or are designed for a single memory controller and do not scale well to multiple memory controllers, requiring significant finegrained coordination among controllers. This paper proposes ATLAS (Adaptive per-Thread Least-Attained-Service memory scheduling), a fundamentally new memory scheduling technique that improves system throughput without requiring significant coordination among memory controllers. 
The key idea is to periodically order threads based on the service they have attained from the memory controllers so far, and prioritize those threads that have attained the least service over others in each period. The idea of favoring threads with least-attained-service is borrowed from the queueing theory literature, where, in the context of a single-server queue it is known that least-attained-service optimally schedules jobs, assuming a Pareto (or any decreasing hazard rate) workload distribution. After verifying that our workloads have this characteristic, we show that our implementation of least-attained-service thread prioritization reduces the time the cores spend stalling and significantly improves system throughput. Furthermore, since the periods over which we accumulate the attained service are long, the controllers coordinate very infrequently to form the ordering of threads, thereby making ATLAS scalable to many controllers. We evaluate ATLAS on a wide variety of multiprogrammed SPEC 2006 workloads and systems with 4–32 cores and 1–16 memory controllers, and compare its performance to five previously proposed scheduling algorithms. Averaged over 32 workloads on a 24-core system with 4 controllers, ATLAS improves instruction throughput by 10.8%, and system throughput by 8.4%, compared to PAR-BS, the best previous CMP memory scheduling algorithm. ATLAS's performance benefit increases as the number of cores increases.", "title": "" }, { "docid": "154fce165c43c3e90a172ffc6864ba39", "text": "BACKGROUND CONTEXT\nSeveral studies report a favorable short-term outcome after nonoperatively treated two-column thoracic or lumbar burst fractures in patients without neurological deficits. Few reports have described the long-term clinical and radiological outcome after these fractures, and none have, to our knowledge, specifically evaluated the long-term outcome of the discs adjacent to the fractured vertebra, often damaged at injury and possibly at an increased risk of height reduction and degeneration with subsequent chronic back pain.\n\n\nPURPOSE\nTo evaluate the long-term clinical and radiological outcome after nonoperatively treated thoracic or lumbar burst fractures in adults, with special attention to posttraumatic radiological disc height reduction.\n\n\nSTUDY DESIGN\nCase series.\n\n\nPATIENT SAMPLE\nSixteen men with a mean age of 31 years (range, 19-44) and 11 women with a mean age of 40 years (range, 23-61) had sustained a thoracic or lumbar burst fracture during the years 1965 to 1973. Four had sustained a burst fracture Denis type A, 18 a Denis type B, 1 a Denis type C, and 4 a Denis type E. Seven of these patients had neurological deficits at injury, all retrospectively classified as Frankel D.\n\n\nOUTCOME MEASURES\nThe clinical outcome was evaluated subjectively with Oswestry score and questions regarding work capacity and objectively with the Frankel scale. The radiological outcome was evaluated with measurements of local kyphosis over the fractured segment, ratios of anterior and posterior vertebral body heights, adjacent disc heights, pedicle widths, sagittal width of the spinal canal, and lateral and anteroposterior displacement.\n\n\nMETHODS\nFrom the radiographical archives of an emergency hospital, all patients with a nonoperatively treated thoracic or lumbar burst fracture during the years 1965 to 1973 were registered. The fracture type, localization, primary treatment, and outcome were evaluated from the old radiographs, referrals, and reports. 
Twenty-seven individuals were clinically and radiologically evaluated a mean of 27 years (range, 23-41) after the injury.\n\n\nRESULTS\nAt follow-up, 21 former patients reported no or minimal back pain or disability (Oswestry Score mean 4; range, 0-16), whereas 6 former patients (of whom 3 were classified as Frankel D at baseline) reported moderate or severe disability (Oswestry Score mean 39; range, 26-54). Six former patients were classified as Frankel D, and the rest as Frankel E. Local kyphosis had increased by a mean of 3 degrees (p<.05), whereas the discs adjacent to the fractured vertebrae remained unchanged in height during the follow-up.\n\n\nCONCLUSIONS\nNonoperatively treated burst fractures of the thoracic or lumbar spine in adults with or without minor neurological deficits have a predominantly favorable long-term outcome, and there seems to be no increased risk for subsequent disc height reduction in the adjacent discs.", "title": "" }, { "docid": "c6d2371a165acc46029eb4ad42df3270", "text": "Video game playing is a popular activity and its enjoyment among frequent players has been associated with absorption and immersion experiences. This paper examines how immersion in the video game environment can influence the player during the game and afterwards (including fantasies, thoughts, and actions). This is what is described as Game Transfer Phenomena (GTP). GTP occurs when video game elements are associated with real life elements triggering subsequent thoughts, sensations and/or player actions. To investigate this further, a total of 42 frequent video game players aged between 15 and 21 years old were interviewed. Thematic analysis showed that many players experienced GTP, where players appeared to integrate elements of video game playing into their real lives. These GTP were then classified as either intentional or automatic experiences. Results also showed that players used video games for interacting with others as a form of amusement, modeling or mimicking video game content, and daydreaming about video games. Furthermore, the findings demonstrate how video games triggered intrusive thoughts, sensations, impulses, reflexes, visual illusions, and dissociations. DOI: 10.4018/ijcbpl.2011070102 16 International Journal of Cyber Behavior, Psychology and Learning, 1(3), 15-33, July-September 2011 Copyright © 2011, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited. 24/7 activity (e.g., Ng & Weimer-Hastings, 2005; Chappell, Eatough, Davies, & Griffiths, 2006; Grüsser, Thalemann, & Griffiths, 2007). Today’s video games have evolved due to technological advance, resulting in high levels of realism and emotional design that include diversity, experimentation, and (perhaps in some cases) sensory overload. Furthermore, video games have been considered as fantasy triggers because they offer ‘what if’ scenarios (Baranowski, Buday, Thompson, & Baranowski, 2008). What if the player could become someone else? What if the player could inhabit an improbable world? What if the player could interact with fantasy characters or situations (Woolley, 1995)? Entertainment media content can be very effective in capturing the minds and eliciting emotions in the individual. Research about novels, films, fairy tales and television programs has shown that entertainment can generate emotions such as joy, awe, compassion, fear and anger (Oatley, 1999; Tan 1996; Valkenburg Cantor & Peeters, 2000, cited in Jansz et al., 2005). 
Video games also have the capacity to generate such emotions and have the capacity for players to become both immersed in, and dissociated from, the video game. Dissociation and Immersion It is clear that dissociation is a somewhat “fuzzy” concept as there is no clear accepted definition of what it actually constitutes (Griffiths, Wood, Parke, & Parke, 2006). Most would agree that dissociation is a form of altered state of consciousness. However, dissociative behaviours lie on a continuum and range from individuals losing track of time, feeling like they are someone else, blacking out, not recalling how they got somewhere or what they did, and being in a trance like state (Griffiths et al., 2006). Studies have found that dissociation is related to an extensive involvement in fantasizing, and daydreaming (Giesbrecht, Geraerts, & Merckelbach, 2007). Dissociative phenomena of the non-pathological type include absorption and imaginative involvement (Griffith et al., 2006) and are psychological phenomena that can occur during video game playing. Anyone can, to some degree, experience dissociative states in their daily lives (Giesbrecht et al., 2007). Furthermore, these states can happen episodically and can be situationally triggered (Griffiths et al., 2006). When people become engaged in games they may experience psychological absorption. More commonly known as ‘immersion’, this refers to when individual logical integration of thoughts, feelings and experiences is suspended (Funk, Chan, Brouwer, & Curtiss, 2006; Wood, Griffiths, & Parke, 2007). This can incur an altered state of consciousness such as altered time perception and change in degree of control over cognitive functioning (Griffiths et al., 2006). Video game enjoyment has been associated with absorption and immersion experiences (IJsselsteijn, Kort, de Poels, Jurgelionis, & Belotti, 2007). How an individual can get immersed in video games has been explained by the phenomenon of ‘flow’ (Csikszentmihalyi, 1988). Flow refers to the optimum experience a person achieves when performing an activity (e.g., video game playing) and may be induced, in part, by the structural characteristics of the activity itself. Structural characteristics of video games (i.e., the game elements that are incorporated into the game by the games designers) are usually based on a balance between skill and challenge (Wood et al., 2004; King, Delfabbro, & Griffiths, 2010), and help make playing video games an intrinsically rewarding activity (Csikszentmihalyi, 1988; King, et al. 2010). Studying Video Game Playing Studying the effects of video game playing requires taking in consideration four independent dimensions suggested by Gentile and Stone (2005); amount, content, form, and mechanism. The amount is understood as the time spent playing and gaming habits. Content refers to the message and topic delivered by the video game. Form focuses on the types of activity necessary to perform in the video game. 
The mechanism refers to the input-output devices used.", "title": "" }, { "docid": "38bd1d3ef5c314b380ad6459392a7fd8", "text": "Routing Protocol for Low power and Lossy network (RPL) topology attacks can downgrade the network performance significantly by disrupting the optimal protocol structure. To detect such threats, we propose a RPL-specification, obtained by a semi-auto profiling technique that constructs a high-level abstract of operations through network simulation traces, to use as reference for verifying the node behaviors. This specification, including all the legitimate protocol states and transitions with corresponding statistics, will be implemented as a set of rules in the intrusion detection agents, in the form of the cluster heads propagated to monitor the whole network. In order to save resources, we set the cluster members to report related information about itself and other neighbors to the cluster head instead of making the head overhearing all the communication. As a result, information about a cluster member will be reported by different neighbors, which allow the cluster head to do cross-check. We propose to record the sequence in RPL Information Object (DIO) and Information Solicitation (DIS) messages to eliminate the synchronized issue created by the delay in transmitting the report, in which the cluster head only does cross-check on information that come from sources with the same sequence. Simulation results show that the proposed Intrusion Detection System (IDS) has a high accuracy rate in detecting RPL topology attacks, while only creating insignificant overhead (about 6.3%) that enable its scalability in large-scale network.", "title": "" }, { "docid": "3132db67005f04591f93e77a2855caab", "text": "Money laundering refers to activities pertaining to hiding the true income, evading taxes, or converting illegally earned money for normal use. These activities are often performed through shell companies that masquerade as real companies but where actual the purpose is to launder money. Shell companies are used in all the three phases of money laundering, namely, placement, layering, and integration, often simultaneously. In this paper, we aim to identify shell companies. We propose to use only bank transactions since that is easily available. In particular, we look at all incoming and outgoing transactions from a particular bank account along with its various attributes, and use anomaly detection techniques to identify the accounts that pertain to shell companies. Our aim is to create an initial list of potential shell company candidates which can be investigated by financial experts later.
Due to lack of real data, we propose a banking transactions simulator (BTS) to simulate both honest as well as shell company transactions by studying a host of actual real-world fraud cases. We apply anomaly detection algorithms to detect candidate shell companies. Results indicate that we are able to identify the shell companies with a high degree of precision and recall.1", "title": "" }, { "docid": "410a173b55faaad5a7ab01cf6e4d4b69", "text": "BACKGROUND\nCommunication skills training (CST) based on the Japanese SHARE model of family-centered truth telling in Asian countries has been adopted in Taiwan. However, its effectiveness in Taiwan has only been preliminarily verified. This study aimed to test the effect of SHARE model-centered CST on Taiwanese healthcare providers' truth-telling preference, to determine the effect size, and to compare the effect of 1-day and 2-day CST programs on participants' truth-telling preference.\n\n\nMETHOD\nFor this one-group, pretest-posttest study, 10 CST programs were conducted from August 2010 to November 2011 under certified facilitators and with standard patients. Participants (257 healthcare personnel from northern, central, southern, and eastern Taiwan) chose the 1-day (n = 94) or 2-day (n = 163) CST program as convenient. Participants' self-reported truth-telling preference was measured before and immediately after CST programs, with CST program assessment afterward.\n\n\nRESULTS\nThe CST programs significantly improved healthcare personnel's truth-telling preference (mean pretest and posttest scores ± standard deviation (SD): 263.8 ± 27.0 vs. 281.8 ± 22.9, p < 0.001). The CST programs effected a significant, large (d = 0.91) improvement in overall truth-telling preference and significantly improved method of disclosure, emotional support, and additional information (p < 0.001). Participation in 1-day or 2-day CST programs did not significantly affect participants' truth-telling preference (p > 0.05) except for the setting subscale. Most participants were satisfied with the CST programs (93.8%) and were willing to recommend them to colleagues (98.5%).\n\n\nCONCLUSIONS\nThe SHARE model-centered CST programs significantly improved Taiwanese healthcare personnel's truth-telling preference. Future studies should objectively assess participants' truth-telling preference, for example, by cancer patients, their families, and other medical team personnel and at longer times after CST programs.", "title": "" }, { "docid": "dea3bce3f636c87fad95f255aceec858", "text": "In recent work, conditional Markov chain models (CMM) have been used to extract information from semi-structured text (one example is the Conditional Random Field [10]). Applications range from finding the author and title in research papers to finding the phone number and street address in a web page. The CMM framework combines a priori knowledge encoded as features with a set of labeled training data to learn an efficient extraction process. We will show that similar problems can be solved more effectively by learning a discriminative context free grammar from training data. The grammar has several distinct advantages: long range, even global, constraints can be used to disambiguate entity labels; training data is used more efficiently; and a set of new more powerful features can be introduced. The grammar based approach also results in semantic information (encoded in the form of a parse tree) which could be used for IR applications like question answering. 
The specific problem we consider is of extracting personal contact, or address, information from unstructured sources such as documents and emails. While linear-chain CMMs perform reasonably well on this task, we show that a statistical parsing approach results in a 50% reduction in error rate. This system also has the advantage of being interactive, similar to the system described in [9]. In cases where there are multiple errors, a single user correction can be propagated to correct multiple errors automatically. Using a discriminatively trained grammar, 93.71% of all tokens are labeled correctly (compared to 88.43% for a CMM) and 72.87% of records have all tokens labeled correctly (compared to 45.29% for the CMM).", "title": "" }, { "docid": "8499953a543d16f321c2fd97b1edd7a4", "text": "The purpose of this phenomenological study was to identify commonly occurring factors in filicide-suicide offenders, to describe this phenomenon better, and ultimately to enhance prevention of child murder. Thirty families' files from a county coroner's office were reviewed for commonly occurring factors in cases of filicide-suicide. Parental motives for filicide-suicide included altruistic and acutely psychotic motives. Twice as many fathers as mothers committed filicide-suicide during the study period, and older children were more often victims than infants. Records indicated that parents frequently showed evidence of depression or psychosis and had prior mental health care. The data support the hypothesis that traditional risk factors for violence appear different from commonly occurring factors in filicide-suicide. This descriptive study represents a step toward understanding filicide-suicide risk.", "title": "" }, { "docid": "30155835ff3e74f0beb3c9b84ce9306f", "text": "Wireless Sensor Networks (WSNs) are gradually adopted in the industrial world due to their advantages over wired networks. In addition to saving cabling costs, WSNs widen the realm of environments feasible for monitoring. They thus add sensing and acting capabilities to objects in the physical world and allow for communication among these objects or with services in the future Internet. However, the acceptance of WSNs by the industrial automation community is impeded by open issues, such as security guarantees and provision of Quality of Service (QoS). To examine both of these perspectives, we select and survey relevant WSN technologies dedicated to industrial automation. We determine QoS requirements and carry out a threat analysis, which act as basis of our evaluation of the current state-of-the-art. According to the results of this evaluation, we identify and discuss open research issues.", "title": "" }, { "docid": "63198927563faa609e6520a01a56b20c", "text": "A 1.2 V 4 Gb DDR4 SDRAM is presented in a 30 nm CMOS technology. DDR4 SDRAM is developed to raise memory bandwidth with lower power consumption compared with DDR3 SDRAM. Various functions and circuit techniques are newly adopted to reduce power consumption and secure stable transaction. First, dual error detection scheme is proposed to guarantee the reliability of signals. It is composed of cyclic redundancy check (CRC) for DQ channel and command-address (CA) parity for command and address channel. For stable reception of high speed signals, a gain enhanced buffer and PVT tolerant data fetch scheme are adopted for CA and DQ respectively. To reduce the output jitter, the type of delay line is selected depending on data rate at initial stage. 
As a result, test measurement shows 3.3 Gb/s DDR operation at 1.14 V.", "title": "" }, { "docid": "da9ad1156191f725b1a55f7b886b7746", "text": "As the quality of natural language generated by artificial intelligence systems improves, writing interfaces can support interventions beyond grammar-checking and spell-checking, such as suggesting content to spark new ideas. To explore the possibility of machine-in-the-loop creative writing, we performed two case studies using two system prototypes, one for short story writing and one for slogan writing. Participants in our studies were asked to write with a machine in the loop or alone (control condition). They assessed their writing and experience through surveys and an open-ended interview. We collected additional assessments of the writing from Amazon Mechanical Turk crowdworkers. Our findings indicate that participants found the process fun and helpful and could envision use cases for future systems. At the same time, machine suggestions do not necessarily lead to better written artifacts. We therefore suggest novel natural language models and design choices that may better support creative writing.", "title": "" }, { "docid": "d994b23ea551f23215232c0771e7d6b3", "text": "It is said that there’s nothing so practical as good theory. It may also be said that there’s nothing so theoretically interesting as good practice1. This is particularly true of efforts to relate constructivism as a theory of learning to the practice of instruction. Our goal in this paper is to provide a clear link between the theoretical principles of constructivism, the practice of instructional design, and the practice of teaching. We will begin with a basic characterization of constructivism identifying what we believe to be the central principles in learning and understanding. We will then identify and elaborate on eight instructional principles for the design of a constructivist learning environment. Finally, we will examine what we consider to be one of the best exemplars of a constructivist learning environment -Problem Based Learning as described by Barrows (1985, 1986, 1992).", "title": "" }, { "docid": "b9bf838263410114ec85c783d26d92aa", "text": "We give a denotational framework (a “meta model”) within which certain properties of models of computation can be compared. It describes concurrent processes in general terms as sets of possible behaviors. A process is determinate if, given the constraints imposed by the inputs, there are exactly one or exactly zero behaviors. Compositions of processes are processes with behaviors in the intersection of the behaviors of the component processes. The interaction between processes is through signals, which are collections of events. Each event is a value-tag pair, where the tags can come from a partially ordered or totally ordered set. Timed models are where the set of tags is totally ordered. Synchronous events share the same tag, and synchronous signals contain events with the same set of tags. Synchronous processes have only synchronous signals as behaviors. Strict causality (in timed tag systems) and continuity (in untimed tag systems) ensure determinacy under certain technical conditions. 
The framework is used to compare certain essential features of various models of computation, including Kahn process networks, dataflow, sequential processes, concurrent sequential processes with rendezvous, Petri nets, and discrete-event systems.", "title": "" }, { "docid": "151f05c2604c60d3b779a7059ed797e6", "text": "This study used quantitative volumetric magnetic resonance imaging techniques to explore the neuroanatomic correlates of chronic, combat-related posttraumatic stress disorder (PTSD) in seven Vietnam veterans with PTSD compared with seven nonPTSD combat veterans and eight normal nonveterans. Both left and right hippocampi were significantly smaller in the PTSD subjects compared to the Combat Control and Normal subjects, even after adjusting for age, whole brain volume, and lifetime alcohol consumption. There were no statistically significant group differences in intracranial cavity, whole brain, ventricles, ventricle:brain ratio, or amygdala. Subarachnoidal cerebrospinal fluid was increased in both veteran groups. Our finding of decreased hippocampal volume in PTSD subjects is consistent with results of other investigations which utilized only trauma-unexposed control groups. Hippocampal volume was directly correlated with combat exposure, which suggests that traumatic stress may damage the hippocampus. Alternatively, smaller hippocampi volume may be a pre-existing risk factor for combat exposure and/or the development of PTSD upon combat exposure.", "title": "" }, { "docid": "ce74305a30bd322a78b3827921ae7224", "text": "While computerised tomography (CT) may have been the first imaging tool to study human brain, it has not yet been implemented into clinical decision making process for diagnosis of Alzheimer's disease (AD). On the other hand, with the nature of being prevalent, inexpensive and non-invasive, CT does present diagnostic features of AD to a great extent. This study explores the significance and impact on the application of the burgeoning deep learning techniques to the task of classification of CT brain images, in particular utilising convolutional neural network (CNN), aiming at providing supplementary information for the early diagnosis of Alzheimer's disease. Towards this end, three categories of CT images (N = 285) are clustered into three groups, which are AD, lesion (e.g. tumour) and normal ageing. In addition, considering the characteristics of this collection with larger thickness along the direction of depth (z) (~3-5 mm), an advanced CNN architecture is established integrating both 2D and 3D CNN networks. The fusion of the two CNN networks is subsequently coordinated based on the average of Softmax scores obtained from both networks consolidating 2D images along spatial axial directions and 3D segmented blocks respectively. As a result, the classification accuracy rates rendered by this elaborated CNN architecture are 85.2%, 80% and 95.3% for classes of AD, lesion and normal respectively with an average of 87.6%. Additionally, this improved CNN network appears to outperform the others when in comparison with 2D version only of CNN network as well as a number of state of the art hand-crafted approaches. As a result, these approaches deliver accuracy rates in percentage of 86.3, 85.6 ± 1.10, 86.3 ± 1.04, 85.2 ± 1.60, 83.1 ± 0.35 for 2D CNN, 2D SIFT, 2D KAZE, 3D SIFT and 3D KAZE respectively. 
The two major contributions of the paper constitute a new 3-D approach while applying deep learning technique to extract signature information rooted in both 2D slices and 3D blocks of CT images and an elaborated hand-crated approach of 3D KAZE.", "title": "" }, { "docid": "e4f31c3e7da3ad547db5fed522774f0e", "text": "Surface reconstruction from oriented points can be cast as a spatial Poisson problem. This Poisson formulation considers all the points at once, without resorting to heuristic spatial partitioning or blending, and is therefore highly resilient to data noise. Unlike radial basis function schemes, the Poisson approach allows a hierarchy of locally supported basis functions, and therefore the solution reduces to a well conditioned sparse linear system. To reconstruct detailed models in limited memory, we solve this Poisson formulation efficiently using a streaming framework. Specifically, we introduce a multilevel streaming representation, which enables efficient traversal of a sparse octree by concurrently advancing through multiple streams, one per octree level. Remarkably, for our reconstruction application, a sufficiently accurate solution to the global linear system is obtained using a single iteration of cascadic multigrid, which can be evaluated within a single multi-stream pass. Finally, we explore the application of Poisson reconstruction to the setting of multi-view stereo, to reconstruct detailed 3D models of outdoor scenes from collections of Internet images.\n This is joint work with Michael Kazhdan, Matthew Bolitho, and Randal Burns (Johns Hopkins University), and Michael Goesele, Noah Snavely, Brian Curless, and Steve Seitz (University of Washington).", "title": "" }, { "docid": "83e897a37aca4c349b4a910c9c0787f4", "text": "Computational imaging methods that can exploit multiple modalities have the potential to enhance the capabilities of traditional sensing systems. In this paper, we propose a new method that reconstructs multimodal images from their linear measurements by exploiting redundancies across different modalities. Our method combines a convolutional group-sparse representation of images with total variation (TV) regularization for high-quality multimodal imaging. We develop an online algorithm that enables the unsupervised learning of convolutional dictionaries on large-scale datasets that are typical in such applications. We illustrate the benefit of our approach in the context of joint intensity-depth imaging.", "title": "" }, { "docid": "de455ce971c40fe49d14415cd8164122", "text": "Cardiovascular disease remains the most common health problem in developed countries, and residual risk after implementing all current therapies is still high. Permanent changes in lifestyle may be hard to achieve and people may not always be motivated enough to make the recommended modifications. Emerging research has explored the application of natural food-based strategies in disease management. In recent years, much focus has been placed on the beneficial effects of fish consumption. Many of the positive effects of fish consumption on dyslipidemia and heart diseases have been attributed to n-3 polyunsaturated fatty acids (n-3 PUFAs, i.e., EPA and DHA); however, fish is also an excellent source of protein and, recently, fish protein hydrolysates containing bioactive peptides have shown promising activities for the prevention/management of cardiovascular disease and associated health complications. 
The present review will focus on n-3 PUFAs and bioactive peptides effects on cardiovascular disease risk factors. Moreover, since considerable controversy exists regarding the association between n-3 PUFAs and major cardiovascular endpoints, we have also reviewed the main clinical trials supporting or not this association.", "title": "" } ]
scidocsrr
85919d20abd30448a6b7840f8fadcbba
Active Learning of Pareto Fronts
[ { "docid": "3228d57f3d74f56444ce7fb9ed18e042", "text": "Gaussian process (GP) models are widely used to perform Bayesian nonlinear regression and classification — tasks that are central to many machine learning problems. A GP is nonparametric, meaning that the complexity of the model grows as more data points are received. Another attractive feature is the behaviour of the error bars. They naturally grow in regions away from training data where we have high uncertainty about the interpolating function. In their standard form GPs have several limitations, which can be divided into two broad categories: computational difficulties for large data sets, and restrictive modelling assumptions for complex data sets. This thesis addresses various aspects of both of these problems. The training cost for a GP hasO(N3) complexity, whereN is the number of training data points. This is due to an inversion of the N × N covariance matrix. In this thesis we develop several new techniques to reduce this complexity to O(NM2), whereM is a user chosen number much smaller thanN . The sparse approximation we use is based on a set of M ‘pseudo-inputs’ which are optimised together with hyperparameters at training time. We develop a further approximation based on clustering inputs that can be seen as a mixture of local and global approximations. Standard GPs assume a uniform noise variance. We use our sparse approximation described above as a way of relaxing this assumption. By making a modification of the sparse covariance function, we can model input dependent noise. To handle high dimensional data sets we use supervised linear dimensionality reduction. As another extension of the standard GP, we relax the Gaussianity assumption of the process by learning a nonlinear transformation of the output space. All these techniques further increase the applicability of GPs to real complex data sets. We present empirical comparisons of our algorithms with various competing techniques, and suggest problem dependent strategies to follow in practice.", "title": "" } ]
[ { "docid": "6bab9326dd38f25794525dc852ece818", "text": "The transformation from high level task speci cation to low level motion control is a fundamental issue in sensorimotor control in animals and robots. This thesis develops a control scheme called virtual model control which addresses this issue. Virtual model control is a motion control language which uses simulations of imagined mechanical components to create forces, which are applied through joint torques, thereby creating the illusion that the components are connected to the robot. Due to the intuitive nature of this technique, designing a virtual model controller requires the same skills as designing the mechanism itself. A high level control system can be cascaded with the low level virtual model controller to modulate the parameters of the virtual mechanisms. Discrete commands from the high level controller would then result in uid motion. An extension of Gardner's Partitioned Actuator Set Control method is developed. This method allows for the speci cation of constraints on the generalized forces which each serial path of a parallel mechanism can apply. Virtual model control has been applied to a bipedal walking robot. A simple algorithm utilizing a simple set of virtual components has successfully compelled the robot to walk eight consecutive steps. Thesis Supervisor: Gill A. Pratt Title: Assistant Professor of Electrical Engineering and Computer Science", "title": "" }, { "docid": "2ce9d2923b6b8be5027e23fb905e8b4d", "text": "A number of recent advances have been achieved in the study of midbrain dopaminergic neurons. Understanding these advances and how they relate to one another requires a deep understanding of the computational models that serve as an explanatory framework and guide ongoing experimental inquiry. This intertwining of theory and experiment now suggests very clearly that the phasic activity of the midbrain dopamine neurons provides a global mechanism for synaptic modification. These synaptic modifications, in turn, provide the mechanistic underpinning for a specific class of reinforcement learning mechanisms that now seem to underlie much of human and animal behavior. This review describes both the critical empirical findings that are at the root of this conclusion and the fantastic theoretical advances from which this conclusion is drawn.", "title": "" }, { "docid": "26140dbe32672dc138c46e7fd6f39b1a", "text": "The state of the art in probabilistic demand forecasting [40] minimizes Quantile Loss to predict the future demand quantiles for different horizons. However, since quantiles aren’t additive, in order to predict the total demand for any wider future interval all required intervals are usually appended to the target vector during model training. The separate optimization of these overlapping intervals can lead to inconsistent forecasts, i.e. forecasts which imply an invalid joint distribution between different horizons. As a result, inter-temporal decision making algorithms that depend on the joint or step-wise conditional distribution of future demand cannot utilize these forecasts. In this work, we address the problem by using sample paths to predict future demand quantiles in a consistent manner and propose several novel methodologies to solve this problem. 
Our work covers the use of covariance shrinkage methods, autoregressive models, generative adversarial networks and also touches on the use of variational autoencoders and Bayesian Dropout.", "title": "" }, { "docid": "f92f0a3d46eaf14e478a41f87b8ad369", "text": "The agricultural productivity of India is gradually declining due to destruction of crops by various natural calamities and the crop rotation process being affected by irregular climate patterns. Also, the interest and efforts put by farmers lessen as they grow old which forces them to sell their agricultural lands, which automatically affects the production of agricultural crops and dairy products. This paper mainly focuses on the ways by which we can protect the crops during an unavoidable natural disaster and implement technology induced smart agro-environment, which can help the farmer manage large fields with less effort. Three common issues faced during agricultural practice are shearing furrows in case of excess rain or flood, manual watering of plants and security against animal grazing. This paper provides a solution for these problems by helping farmer monitor and control various activities through his mobile via GSM and DTMF technology in which data is transmitted from various sensors placed in the agricultural field to the controller and the status of the agricultural parameters are notified to the farmer using which he can take decisions accordingly. The main advantage of this system is that it is semi-automated i.e. the decision is made by the farmer instead of fully automated decision that results in precision agriculture. It also overcomes the existing traditional practices that require high money investment, energy, labour and time.", "title": "" }, { "docid": "67da4c8ba04d3911118147b829ba9c50", "text": "A methodology for the development of a fuzzy expert system (FES) with application to earthquake prediction is presented. The idea is to reproduce the performance of a human expert in earthquake prediction. To do this, at the first step, rules provided by the human expert are used to generate a fuzzy rule base. These rules are then fed into an inference engine to produce a fuzzy inference system (FIS) and to infer the results. In this paper, we have used a Sugeno type fuzzy inference system to build the FES. At the next step, the adaptive network-based fuzzy inference system (ANFIS) is used to refine the FES parameters and improve its performance. The proposed framework is then employed to attain the performance of a human expert used to predict earthquakes in the Zagros area based on the idea of coupled earthquakes. While the prediction results are promising in parts of the testing set, the general performance indicates that prediction methodology based on coupled earthquakes needs more investigation and more complicated reasoning procedure to yield satisfactory predictions.", "title": "" }, { "docid": "d579ed125d3a051069b69f634fffe488", "text": "Culture can be thought of as a set of everyday practices and a core theme-individualism, collectivism, or honor-as well as the capacity to understand each of these themes. In one's own culture, it is easy to fail to see that a cultural lens exists and instead to think that there is no lens at all, only reality. Hence, studying culture requires stepping out of it. 
There are two main methods to do so: The first involves using between-group comparisons to highlight differences and the second involves using experimental methods to test the consequences of disruption to implicit cultural frames. These methods highlight three ways that culture organizes experience: (a) It shields reflexive processing by making everyday life feel predictable, (b) it scaffolds which cognitive procedure (connect, separate, or order) will be the default in ambiguous situations, and (c) it facilitates situation-specific accessibility of alternate cognitive procedures. Modern societal social-demographic trends reduce predictability and increase collectivism and honor-based go-to cognitive procedures.", "title": "" }, { "docid": "1971cb1d7876256ecf0342d0a51fe7e7", "text": "Senescent cells accumulate with aging and at sites of pathology in multiple chronic diseases. Senolytics are drugs that selectively promote apoptosis of senescent cells by temporarily disabling the pro-survival pathways that enable senescent cells to resist the pro-apoptotic, pro-inflammatory factors that they themselves secrete. Reducing senescent cell burden by genetic approaches or by administering senolytics delays or alleviates multiple age- and disease-related adverse phenotypes in preclinical models. Reported senolytics include dasatinib, quercetin, navitoclax (ABT263), and piperlongumine. Here we report that fisetin, a naturally-occurring flavone with low toxicity, and A1331852 and A1155463, selective BCL-XL inhibitors that may have less hematological toxicity than the less specific BCL-2 family inhibitor navitoclax, are senolytic. Fisetin selectively induces apoptosis in senescent but not proliferating human umbilical vein endothelial cells (HUVECs). It is not senolytic in senescent IMR90 cells, a human lung fibroblast strain, or primary human preadipocytes. A1331852 and A1155463 are senolytic in HUVECs and IMR90 cells, but not preadipocytes. These agents may be better candidates for eventual translation into clinical interventions than some existing senolytics, such as navitoclax, which is associated with hematological toxicity.", "title": "" }, { "docid": "941dc605dab6cf9bfe89bedb2b4f00a3", "text": "Word boundary detection in continuous speech is very common and important problem in speech synthesis and recognition. Several researches are open on this field. Since there is no sign of start of the word, end of the word and number of words in the spoken utterance of any natural language, one must study the intonation pattern of a particular language. In this paper an algorithm is proposed to detect word boundaries in continuous speech of Hindi language. A careful study of the intonation pattern of Hindi language has been done. Based on this study it is observed that, there are several suprasegmental parameters of speech signal such as pitch, F0 fundamental frequency, duration, intensity, and pause, which can play important role in finding some clues to detect the start and the end of the word from the spoken utterance of Hindi Language. The proposed algorithm is based mainly on two prosodic parameters, pitch and intensity.", "title": "" }, { "docid": "c10ac9c3117627b2abb87e268f5de6b1", "text": "Now days, the number of crime over children is increasing day by day. the implementation of School Security System(SSS) via RFID to avoid crime, illegal activates by students and reduce worries among parents. 
The project is the combination of latest Technology using RFID, GPS/GSM, image processing, WSN and web based development using Php,VB.net language apache web server and SQL. By using RFID technology it is easy track the student thus enhances the security and safety in selected zone. The information about student such as in time and out time from Bus and campus will be recorded to web based system and the GPS/GSM system automatically sends information (SMS / Phone Call) toothier parents. That the student arrived to Bus/Campus safely.", "title": "" }, { "docid": "b07ea7995bb865b226f5834a54c70aa4", "text": "The explosive growth in the usage of IEEE 802.11 network has resulted in dense deployments in diverse environments. Most recently, the IEEE working group has triggered the IEEE 802.11ax project, which aims to amend the current IEEE 802.11 standard to improve efficiency of dense WLANs. In this paper, we evaluate the Dynamic Sensitivity Control (DSC) Algorithm proposed for IEEE 802.11ax. This algorithm dynamically adjusts the Carrier Sense Threshold (CST) based on the average received signal strength. We show that the aggregate throughput of a dense network utilizing DSC is considerably improved (i.e. up to 20%) when compared with the IEEE 802.11 legacy network.", "title": "" }, { "docid": "f20c0ace77f7b325d2ae4862d300d440", "text": "http://dx.doi.org/10.1016/j.knosys.2014.02.003 0950-7051/ 2014 Elsevier B.V. All rights reserved. ⇑ Corresponding author. Address: Zhejiang University, Hangzhou 310027, China. Tel.: +86 571 87951453. E-mail addresses: xlzheng@zju.edu.cn (X. Zheng), nblin@zju.edu.cn (Z. Lin), alexwang@zju.edu.cn (X. Wang), klin@ece.uci.edu (K.-J. Lin), mnsong@bupt.edu.cn (M. Song). 1 http://www.yelp.com/. Xiaolin Zheng a,b,⇑, Zhen Lin , Xiaowei Wang , Kwei-Jay Lin , Meina Song e", "title": "" }, { "docid": "37ceb75634c9801e3f83c36a15dc879b", "text": "Semantic visualization integrates topic modeling and visualization, such that every document is associated with a topic distribution as well as visualization coordinates on a low-dimensional Euclidean space. We address the problem of semantic visualization for short texts. Such documents are increasingly common, including tweets, search snippets, news headlines, or status updates. Due to their short lengths, it is difficult to model semantics as the word co-occurrences in such a corpus are very sparse. Our approach is to incorporate auxiliary information, such as word embeddings from a larger corpus, to supplement the lack of co-occurrences. This requires the development of a novel semantic visualization model that seamlessly integrates visualization coordinates, topic distributions, and word vectors. We propose a model called GaussianSV, which outperforms pipelined baselines that derive topic models and visualization coordinates as disjoint steps, as well as semantic visualization baselines that do not consider word embeddings.", "title": "" }, { "docid": "09c209f1e36dc97458a8edc4a08e5351", "text": "We proposed neural network architecture based on Convolution Neural Network(CNN) for temporal relation classification in sentence. First, we transformed word into vector by using word embedding. In Feature Extraction, we extracted two type of features. Lexical level feature considered meaning of marked entity and Sentence level feature considered context of the sentence. Window processing was used to reflect local context and Convolution and Max-pooling operation were used for global context. 
We concatenated both feature vectors and used softmax operation to compute confidence score. Because experiment results didn't outperform the state-of-the-art methods, we suggested some future works to do.", "title": "" }, { "docid": "e23cebac640a47643b3a3249eae62f89", "text": "Objective: To assess the factors that contribute to impaired quinine clearance in acute falciparum malaria. Patients: Sixteen adult Thai patients with severe or moderately severe falciparum malaria were studied, and 12 were re-studied during convalescence. Methods: The clearance of quinine, dihydroquinine (an impurity comprising up to 10% of commercial quinine formulations), antipyrine (a measure of hepatic mixed-function oxidase activity), indocyanine green (ICG) (a measure of liver blood flow), and iothalamate (a measure of glomerular filtration rate) were measured simultaneously, and the relationship of these values to the␣biotransformation of quinine to the active metabolite 3-hydroxyquinine was assessed. Results: During acute malaria infection, the systemic clearance of quinine, antipyrine and ICG and the biotransformation of quinine to 3-hydroxyquinine were all reduced significantly when compared with values during convalescence. Iothalamate clearance was not affected significantly and did not correlate with the clearance of any of the other compounds. The clearance of total and free quinine correlated significantly with antipyrine clearance (r s = 0.70, P = 0.005 and r s = 0.67, P = 0.013, respectively), but not with ICG clearance (r s = 0.39 and 0.43 respectively, P > 0.15). In a multiple regression model, antipyrine clearance and plasma protein binding accounted for 71% of the variance in total quinine clearance in acute malaria. The pharmacokinetic properties of dihydroquinine were generally similar to those of quinine, although dihydroquinine clearance was less affected by acute malaria. The mean ratio of quinine to 3-hydroxyquinine area under the plasma concentration-time curve (AUC) values in acute malaria was 12.03 compared with 6.92 during convalescence P=0.01. The mean plasma protein binding of 3-hydroxyquinine was 46%, which was significantly lower than that of quinine (90.5%) or dihydroquinine (90.5%). Conclusion: The reduction in quinine clearance in acute malaria results predominantly from a disease-induced dysfunction in hepatic mixed-function oxidase activity (principally CYP 3A) which impairs the conversion of quinine to its major metabolite, 3-hydroxyquinine. The metabolite contributes approximately 5% of the antimalarial activity of the parent compound in malaria, but up to 10% during convalescence.", "title": "" }, { "docid": "48126a601f93eea84b157040c83f8861", "text": "Citation counts and intra-conference citations are one useful measure of the impact of prior research in a field. We have developed CiteVis, a visualization system for portraying citation data about the IEEE InfoVis Conference and its papers. Rather than use a node-link network visualization, we employ an attribute-based layout along with interaction to foster exploration and knowledge discovery.", "title": "" }, { "docid": "af7803b0061e75659f718d56ba9715b3", "text": "An emerging body of multidisciplinary literature has documented the beneficial influence of physical activity engendered through aerobic exercise on selective aspects of brain function. Human and non-human animal studies have shown that aerobic exercise can improve a number of aspects of cognition and performance. 
Lack of physical activity, particularly among children in the developed world, is one of the major causes of obesity. Exercise might not only help to improve their physical health, but might also improve their academic performance. This article examines the positive effects of aerobic physical activity on cognition and brain function, at the molecular, cellular, systems and behavioural levels. A growing number of studies support the idea that physical exercise is a lifestyle factor that might lead to increased physical and mental health throughout life.", "title": "" }, { "docid": "d40aa76e76c44da4c6237f654dcdab45", "text": "The flipped classroom pedagogy has achieved significant mention in academic circles in recent years. \"Flipping\" involves the reinvention of a traditional course so that students engage with learning materials via recorded lectures and interactive exercises prior to attending class and then use class time for more interactive activities. Proper implementation of a flipped classroom is difficult to gauge, but combines successful techniques for distance education with constructivist learning theory in the classroom. While flipped classrooms are not a novel concept, technological advances and increased comfort with distance learning have made the tools to produce and consume course materials more pervasive. Flipped classroom experiments have had both positive and less-positive results and are generally measured by a significant improvement in learning outcomes. This study, however, analyzes the opinions of students in a flipped sophomore-level information technology course by using a combination of surveys and reflective statements. The author demonstrates that at the outset students are new - and somewhat receptive - to the concept of the flipped classroom. By the conclusion of the course satisfaction with the pedagogy is significant. Finally, student feedback is provided in an effort to inform instructors in the development of their own flipped classrooms.", "title": "" }, { "docid": "e13d6cd043ea958e9731c99a83b6de18", "text": "In this article, an overview and an in-depth analysis of the most discussed 5G waveform candidates are presented. In addition to general requirements, the nature of each waveform is revealed including the motivation, the underlying methodology, and the associated advantages and disadvantages. Furthermore, these waveform candidates are categorized and compared both qualitatively and quantitatively. By doing all these, the study in this work offers not only design guidelines but also operational suggestions for the 5G waveform.", "title": "" }, { "docid": "6ab8b5bd7ce3582df99d5601225c1779", "text": "Nowadays, the number of users, speed of internet and processing power of devices are increasing at a tremendous rate. For maintaining the balance between users and company networks with product or service, our system must evolve and modify to handle the future load of data. Currently, we are using file systems, Database servers and some semi-structured file systems. But all these systems are mostly independent, differ from each other in many except and never on the single roof for easy, effective use. So, to minimize the problems for developing apps, website, game development easier, Google came with the solution as their product Firebase. 
Firebase implements a real-time database, crash reporting, authentication, cloud functions, cloud storage, hosting, a test lab, performance monitoring and analytics on a single platform for speed, security and efficiency. Systems like these have also been developed by big companies such as Facebook, IBM and LinkedIn for their own internal use. So we can say that Firebase has the capacity to handle future requirements.", "title": "" }, { "docid": "6476066913e37c88e94cc83c15b05f43", "text": "Audio-visual Speech Recognition (AVSR), which employs both video and audio information to perform Automatic Speech Recognition (ASR), is one application of multimodal learning that makes ASR systems more robust and accurate. Traditional models usually treated AVSR as inference or projection, but a strict prior limits their ability. With the revival of deep learning, Deep Neural Networks (DNNs) have become an important toolkit in many traditional classification tasks, including ASR, image classification and natural language processing. Some DNN models have been used in AVSR, such as Multimodal Deep Autoencoders (MDAEs), the Multimodal Deep Belief Network (MDBN) and the Multimodal Deep Boltzmann Machine (MDBM), and they work better than traditional methods. However, such DNN models have several shortcomings: (1) they don’t balance modal fusion and temporal fusion, or even lack temporal fusion; (2) their architecture isn’t end-to-end, which makes training and testing cumbersome. We propose a DNN model, the Auxiliary Multimodal LSTM (am-LSTM), to overcome these weaknesses. The am-LSTM can be trained and tested at one time, is easy to train, and prevents overfitting automatically. Extensibility and flexibility are also taken into consideration. The experiments show that am-LSTM is much better than traditional methods and other DNN models on three datasets: AVLetters, AVLetters2 and AVDigits.", "title": "" } ]
scidocsrr
e690711cb18766db09e76ccc5c36c03c
VisReduce: Fast and responsive incremental information visualization of large datasets
[ { "docid": "98e170b4beb59720e49916835572d1b0", "text": "Scatterplot matrices (SPLOMs), parallel coordinates, and glyphs can all be used to visualize the multiple continuous variables (i.e., dependent variables or measures) in multidimensional multivariate data. However, these techniques are not well suited to visualizing many categorical variables (i.e., independent variables or dimensions). To visualize multiple categorical variables, 'hierarchical axes' that 'stack dimensions' have been used in systems like Polaris and Tableau. However, this approach does not scale well beyond a small number of categorical variables. Emerson et al. [8] extend the matrix paradigm of the SPLOM to simultaneously visualize several categorical and continuous variables, displaying many kinds of charts in the matrix depending on the kinds of variables involved. We propose a variant of their technique, called the Generalized Plot Matrix (GPLOM). The GPLOM restricts Emerson et al.'s technique to only three kinds of charts (scatterplots for pairs of continuous variables, heatmaps for pairs of categorical variables, and barcharts for pairings of categorical and continuous variable), in an effort to make it easier to understand. At the same time, the GPLOM extends Emerson et al.'s work by demonstrating interactive techniques suited to the matrix of charts. We discuss the visual design and interactive features of our GPLOM prototype, including a textual search feature allowing users to quickly locate values or variables by name. We also present a user study that compared performance with Tableau and our GPLOM prototype, that found that GPLOM is significantly faster in certain cases, and not significantly slower in other cases.", "title": "" } ]
[ { "docid": "40b18b69a3a4011f163d06ef476d9954", "text": "Potential benefits of using online social network data for clinical studies on depression are tremendous. In this paper, we present a preliminary result on building a research framework that utilizes real-time moods of users captured in the Twitter social network and explore the use of language in describing depressive moods. First, we analyzed a random sample of tweets posted by the general Twitter population during a two-month period to explore how depression is talked about in Twitter. A large number of tweets contained detailed information about depressed feelings, status, as well as treatment history. Going forward, we conducted a study on 69 participants to determine whether the use of sentiment words of depressed users differed from a typical user. We found that the use of words related to negative emotions and anger significantly increased among Twitter users with major depressive symptoms compared to those otherwise. However, no difference was found in the use of words related to positive emotions between the two groups. Our work provides several evidences that online social networks provide meaningful data for capturing depressive moods of users.", "title": "" }, { "docid": "db6e0dff6ba7bd5a0041ef4affe50e9b", "text": "The flipped voltage follower (FVF), a variant of the common-drain transistor amplifier, comprising local feedback, finds application in circuits such as voltage buffers, current mirrors, class AB amplifiers, frequency compensation circuits and low dropout voltage regulators (LDOs). One of the most important characteristics of the FVF, is its low output impedance. In this tutorial-flavored paper, we perform a theoretical analysis of the transfer function, poles and zeros of the output impedance of the FVF and correlate it with transistor-level simulation results. Utilization of the FVF and its variants has wide application in the analog, mixed-signal and power management circuit design space.", "title": "" }, { "docid": "482ff6c78f7b203125781f5947990845", "text": "TH1 and TH17 cells mediate neuroinflammation in experimental autoimmune encephalomyelitis (EAE), a mouse model of multiple sclerosis. Pathogenic TH cells in EAE must produce the pro-inflammatory cytokine granulocyte-macrophage colony stimulating factor (GM-CSF). TH cell pathogenicity in EAE is also regulated by cell-intrinsic production of the immunosuppressive cytokine interleukin 10 (IL-10). Here we demonstrate that mice deficient for the basic helix-loop-helix (bHLH) transcription factor Bhlhe40 (Bhlhe40(-/-)) are resistant to the induction of EAE. Bhlhe40 is required in vivo in a T cell-intrinsic manner, where it positively regulates the production of GM-CSF and negatively regulates the production of IL-10. In vitro, GM-CSF secretion is selectively abrogated in polarized Bhlhe40(-/-) TH1 and TH17 cells, and these cells show increased production of IL-10. Blockade of IL-10 receptor in Bhlhe40(-/-) mice renders them susceptible to EAE. These findings identify Bhlhe40 as a critical regulator of autoreactive T-cell pathogenicity.", "title": "" }, { "docid": "2e89bc59f85b14cf40a868399a3ce351", "text": "CONTEXT\nYouth worldwide play violent video games many hours per week. Previous research suggests that such exposure can increase physical aggression.\n\n\nOBJECTIVE\nWe tested whether high exposure to violent video games increases physical aggression over time in both high- (United States) and low- (Japan) violence cultures. 
We hypothesized that the amount of exposure to violent video games early in a school year would predict changes in physical aggressiveness assessed later in the school year, even after statistically controlling for gender and previous physical aggressiveness.\n\n\nDESIGN\nIn 3 independent samples, participants' video game habits and physically aggressive behavior tendencies were assessed at 2 points in time, separated by 3 to 6 months.\n\n\nPARTICIPANTS\nOne sample consisted of 181 Japanese junior high students ranging in age from 12 to 15 years. A second Japanese sample consisted of 1050 students ranging in age from 13 to 18 years. The third sample consisted of 364 United States 3rd-, 4th-, and 5th-graders ranging in age from 9 to 12 years. RESULTS. Habitual violent video game play early in the school year predicted later aggression, even after controlling for gender and previous aggressiveness in each sample. Those who played a lot of violent video games became relatively more physically aggressive. Multisample structure equation modeling revealed that this longitudinal effect was of a similar magnitude in the United States and Japan for similar-aged youth and was smaller (but still significant) in the sample that included older youth.\n\n\nCONCLUSIONS\nThese longitudinal results confirm earlier experimental and cross-sectional studies that had suggested that playing violent video games is a significant risk factor for later physically aggressive behavior and that this violent video game effect on youth generalizes across very different cultures. As a whole, the research strongly suggests reducing the exposure of youth to this risk factor.", "title": "" }, { "docid": "58c357c0edd0dfe07ec699d4fba0514b", "text": "There exist a multitude of execution models available today for a developer to target. The choices vary from general purpose processors to fixed-function hardware accelerators with a large number of variations in-between. There is a growing demand to assess the potential benefits of porting or rewriting an application to a target architecture in order to fully exploit the benefits of performance and/or energy efficiency offered by such targets. However, as a first step of this process, it is necessary to determine whether the application has characteristics suitable for acceleration.\n In this paper, we present Peruse, a tool to characterize the features of loops in an application and to help the programmer understand the amenability of loops for acceleration. We consider a diverse set of features ranging from loop characteristics (e.g., loop exit points) and operation mixes (e.g., control vs data operations) to wider code region characteristics (e.g., idempotency, vectorizability). Peruse is language, architecture, and input independent and uses the intermediate representation of compilers to do the characterization. Using static analyses makes Peruse scalable and enables analysis of large applications to identify and extract interesting loops suitable for acceleration. We show analysis results for unmodified applications from the SPEC CPU benchmark suite, Polybench, and HPC workloads.\n For an end-user it is more desirable to get an estimate of the potential speedup due to acceleration. We use the workload characterization results of Peruse as features and develop a machine-learning based model to predict the potential speedup of a loop when off-loaded to a fixed function hardware accelerator. 
We use the model to predict the speedup of loops selected by Peruse and achieve an accuracy of 79%.", "title": "" }, { "docid": "acbdb3f3abf3e56807a4e7f60869a2ee", "text": "In this paper we present a new approach to high quality 3D object reconstruction. Starting from a calibrated sequence of color images, the algorithm is able to reconstruct both the 3D geometry and the texture. The core of the method is based on a deformable model, which defines the framework where texture and silhouette information can be fused. This is achieved by defining two external forces based on the images: a texture driven force and a silhouette driven force. The texture force is computed in two steps: a multi-stereo correlation voting approach and a gradient vector flow diffusion. Due to the high resolution of the voting approach, a multi-grid version of the gradient vector flow has been developed. Concerning the silhouette force, a new formulation of the silhouette constraint is derived. It provides a robust way to integrate the silhouettes in the evolution algorithm. As a consequence, we are able to recover the apparent contours of the model at the end of the iteration process. Finally, a texture map is computed from the original images for the reconstructed 3D model.", "title": "" }, { "docid": "1cb47f75cde728f7ba7c75b54516bc46", "text": "This paper considers the electrical actuation of aircraft wing surfaces, with particular emphasis on flap systems. It discusses existing hydraulic and electrohydraulic systems and proposes an electrical alternative, examining the potential system benefits in terms of increased functionality, maintenance, and life-cycle costs. This paper then progresses to describe a full-scale actuation demonstrator of the flap system, including the high-speed electrical drive, step-down gearbox, and flaps. Detailed descriptions of the fault-tolerant motor, power electronics, control architecture, and position sensor systems are given, along with a range of test results, demonstrating the system in operation.", "title": "" }, { "docid": "d931f6f9960e8688c2339a27148efe74", "text": "Most knowledge on the Web is encoded as natural language text, which is convenient for human users but very difficult for software agents to understand. Even with increased use of XML-encoded information, software agents still need to process the tags and literal symbols using application dependent semantics. The Semantic Web offers an approach in which knowledge can be published by and shared among agents using symbols with a well defined, machine-interpretable semantics. The Semantic Web is a “web of data” in that (i) both ontologies and instance data are published in a distributed fashion; (ii) symbols are either ‘literals’ or universally addressable ‘resources’ (URI references) each of which comes with unique semantics; and (iii) information is semi-structured. The Friend-of-a-Friend (FOAF) project (http://www.foafproject.org/) is a good application of the Semantic Web in which users publish their personal profiles by instantiating the foaf:Personclass and adding various properties drawn from any number of ontologies. The Semantic Web’s distributed nature raises significant data access problems – how can an agent discover, index, search and navigate knowledge on the Semantic Web? Swoogle (Dinget al. 2004) was developed to facilitate webscale semantic web data access by providing these services to both human and software agents. 
It focuses on two levels of knowledge granularity: URI based semantic web vocabulary and semantic web documents (SWDs), i.e., RDF and OWL documents encoded in XML, NTriples or N3. Figure 1 shows Swoogle’s architecture. The discovery component automatically discovers and revisits SWDs using a set of integrated web crawlers. The digest component computes metadata for SWDs and semantic web terms (SWTs) as well as identifies relations among them, e.g., “an SWD instantiates an SWT class”, and “an SWT class is the domain of an SWT property”. The analysis component uses cached SWDs and their metadata to derive analytical reports, such as classifying ontologies among SWDs and ranking SWDs by their importance. The service component sup-", "title": "" }, { "docid": "a20a03fcb848c310cb966f6e6bc37c86", "text": "A broad class of problems at the core of computational imaging, sensing, and low-level computer vision reduces to the inverse problem of extracting latent images that follow a prior distribution, from measurements taken under a known physical image formation model. Traditionally, hand-crafted priors along with iterative optimization methods have been used to solve such problems. In this paper we present unrolled optimization with deep priors, a principled framework for infusing knowledge of the image formation into deep networks that solve inverse problems in imaging, inspired by classical iterative methods. We show that instances of the framework outperform the state-of-the-art by a substantial margin for a wide variety of imaging problems, such as denoising, deblurring, and compressed sensing magnetic resonance imaging (MRI). Moreover, we conduct experiments that explain how the framework is best used and why it outperforms previous methods.", "title": "" }, { "docid": "45c3d3a765e565ad3b870b95f934592a", "text": "This paper describes a fully automated framework to generate realistic head motion, eye gaze, and eyelid motion simultaneously based on live (or recorded) speech input. Its central idea is to learn separate yet interrelated statistical models for each component (head motion, gaze, or eyelid motion) from a prerecorded facial motion data set: 1) Gaussian Mixture Models and gradient descent optimization algorithm are employed to generate head motion from speech features; 2) Nonlinear Dynamic Canonical Correlation Analysis model is used to synthesize eye gaze from head motion and speech features, and 3) nonnegative linear regression is used to model voluntary eye lid motion and log-normal distribution is used to describe involuntary eye blinks. Several user studies are conducted to evaluate the effectiveness of the proposed speech-driven head and eye motion generator using the well-established paired comparison methodology. Our evaluation results clearly show that this approach can significantly outperform the state-of-the-art head and eye motion generation algorithms. In addition, a novel mocap+video hybrid data acquisition technique is introduced to record high-fidelity head movement, eye gaze, and eyelid motion simultaneously.", "title": "" }, { "docid": "a7c9d58c49f1802b94395c6f12c2d6dd", "text": "Signature-based network intrusion detection systems (NIDSs) have been widely deployed in current network security infrastructure. However, these detection systems suffer from some limitations such as network packet overload, expensive signature matching and massive false alarms in a large-scale network environment. 
In this paper, we aim to develop an enhanced filter mechanism (named EFM) to comprehensively mitigate these issues, which consists of three major components: a context-aware blacklist-based packet filter, an exclusive signature matching component and a KNN-based false alarm filter. The experiments, which were conducted with two data sets and in a network environment, demonstrate that our proposed EFM can overall enhance the performance of a signature-based NIDS such as Snort in the aspects of packet filtration, signature matching improvement and false alarm reduction without affecting network security.", "title": "" }, { "docid": "7c097c95fb50750c082877ab7e277cd9", "text": "Disease Intelligence (DI) is based on the acquisition and aggregation of fragmented knowledge of diseases at multiple sources all over the world to provide valuable information to doctors, researchers and information seeking community. Some diseases have their own characteristics changed rapidly at different places of the world and are reported on documents as unrelated and heterogeneous information which may be going unnoticed and may not be quickly available. This research presents an Ontology based theoretical framework in the context of medical intelligence and country/region. Ontology is designed for storing information about rapidly spreading and changing diseases with incorporating existing disease taxonomies to genetic information of both humans and infectious organisms. It further maps disease symptoms to diseases and drug effects to disease symptoms. The machine understandable disease ontology represented as a website thus allows the drug effects to be evaluated on disease symptoms and exposes genetic involvements in the human diseases. Infectious agents which have no known place in an existing classification but have data on genetics would still be identified as organisms through the intelligence of this system. It will further facilitate researchers on the subject to try out different solutions for curing diseases.", "title": "" }, { "docid": "c5f0155b2f6ce35a9cbfa38773042833", "text": "Leishmaniasis is caused by protozoa of the genus Leishmania, with the presentation restricted to the mucosa being infrequent. Although the nasal mucosa is the main site affected in this form of the disease, involvement of the lips, mouth, pharynx and larynx is also possible. The lesions are characteristically ulcerative-vegetative, with granulation tissue formation. Patients usually complain of pain, dysphagia and odynophagia. Differential diagnosis should include cancer, infectious diseases and granulomatous diseases. We present a case of a 64-year-old male patient, coming from an endemic area for American Tegumentary Leishmaniasis (ATL), with a chief complaint of persistent dysphagia and nasal obstruction for 6 months. The lesion was ulcerative with a purulent infiltration into the soft palate and uvula. After excluding other diseases, ATL was suggested as a hypothesis, and serology and biopsy of the lesions were requested. Treatment with pentavalent antimony was started, and the patient presented regression of the lesions in 30 days, with no other complications.", "title": "" }, { "docid": "ce22073b8dbc3a910fa8811a2a8e5c87", "text": "Ethernet is going to play a major role in automotive communications, thus representing a significant paradigm shift in automotive networking. 
Ethernet technology will allow for multiple in-vehicle systems (such as, multimedia/infotainment, camera-based advanced driver assistance and on-board diagnostics) to simultaneously access information over a single unshielded twisted pair cable. The leading technology for automotive applications is the IEEE Audio Video Bridging (AVB), which offers several advantages, such as open specification, multiple sources of electronic components, high bandwidth, the compliance with the challenging EMC/EMI automotive requirements, and significant savings on cabling costs, thickness and weight. This paper surveys the state of the art on Ethernet-based automotive communications and especially on the IEEE AVB, with a particular focus on the way to provide support to the so-called scheduled traffic, that is a class of time-sensitive traffic (e.g., control traffic) that is transmitted according to a time schedule.", "title": "" }, { "docid": "d558f980b85bf970a7b57c00df361591", "text": "URL shortener services today have come to play an important role in our social media landscape. They direct user attention and disseminate information in online social media such as Twitter or Facebook. Shortener services typically provide short URLs in exchange for long URLs. These short URLs can then be shared and diffused by users via online social media, e-mail or other forms of electronic communication. When another user clicks on the shortened URL, she will be redirected to the underlying long URL. Shortened URLs can serve many legitimate purposes, such as click tracking, but can also serve illicit behavior such as fraud, deceit and spam. Although usage of URL shortener services today is ubiquituous, our research community knows little about how exactly these services are used and what purposes they serve. In this paper, we study usage logs of a URL shortener service that has been operated by our group for more than a year. We expose the extent of spamming taking place in our logs, and provide first insights into the planetary-scale of this problem. Our results are relevant for researchers and engineers interested in understanding the emerging phenomenon and dangers of spamming via URL shortener services.", "title": "" }, { "docid": "0d11c687fbf4a0834e753145fec7d7d2", "text": "A single line feed stacked microstrip antenna for 4G system is presented. The proposed antenna with two properly square patches are stacked. The top patch can perform as a driven element is design on 2.44 GHz and lower patch is also design on 2.44 GHz. The performance of proposed antenna for 4G band frequency (2400-2500 MHz). Also gating the improvement of bandwidth (15%) and antenna efficiency (95%) are very high compared to conventional antenna. Key word — Microstrip patch antenna; stacked, 4G, Antenna efficiency.", "title": "" }, { "docid": "3fe3d1f8b5e141b9044686491fffe12f", "text": "Data stream is a potentially massive, continuous, rapid sequence of data information. It has aroused great concern and research upsurge in the field of data mining. Clustering is an effective tool of data mining, so data stream clustering will undoubtedly become the focus of the study in data stream mining. In view of the characteristic of the high dimension, dynamic, real-time, many effective data stream clustering algorithms have been proposed. In addition, data stream information are not deterministic and always exist outliers and contain noises, so developing effective data stream clustering algorithm is crucial. 
This paper reviews the development and trend of data stream clustering and analyzes typical data stream clustering algorithms proposed in recent years, such as Birch algorithm, Local Search algorithm, Stream algorithm and CluStream algorithm. We also summarize the latest research achievements in this field and introduce some new strategies to deal with outliers and noise data. At last, we put forward the focal points and difficulties of future research for data stream clustering.", "title": "" }, { "docid": "133af3ba5310a05ac3bfdaf6178feb6f", "text": "A new gate drive for high-voltage, high-power IGBT has been developed for the SLAC NLC (Next Linear Collider) Solid State Induction Modulator. This paper describes the design and implementation of a driver that allows an IGBT module rated at 800 A/3300 V to switch up to 3000 A at 2200 V in 3 /spl mu/s with a rate of current rise of more than 10000 A//spl mu/s, while still being short circuit protected. Issues regarding fast turn on, high de-saturation voltage detection, and low short circuit peak current are presented. A novel approach is also used to counter the effect of unequal current sharing between parallel chips inside most high-power IGBT modules. It effectively reduces the collector-emitter peak currents and thus protects the IGBT from being destroyed during soft short circuit conditions at high di/dt.", "title": "" }, { "docid": "1830c839960f8ce9b26c906cc21e2a39", "text": "This comparative review highlights the relationships between the disciplines of bloodstain pattern analysis (BPA) in forensics and that of fluid dynamics (FD) in the physical sciences. In both the BPA and FD communities, scientists study the motion and phase change of a liquid in contact with air, or with other liquids or solids. Five aspects of BPA related to FD are discussed: the physical forces driving the motion of blood as a fluid; the generation of the drops; their flight in the air; their impact on solid or liquid surfaces; and the production of stains. For each of these topics, the relevant literature from the BPA community and from the FD community is reviewed. Comments are provided on opportunities for joint BPA and FD research, and on the development of novel FD-based tools and methods for BPA. Also, the use of dimensionless numbers is proposed to inform BPA analyses.", "title": "" }, { "docid": "a208f2a2720313479773c00a74b1cbc6", "text": "I present a web service for querying an embedding of entities in the Wikidata knowledge graph. The embedding is trained on the Wikidata dump using Gensim’s Word2Vec implementation and a simple graph walk. A REST API is implemented. Together with the Wikidata API the web service exposes a multilingual resource for over 600’000 Wikidata items and properties.", "title": "" } ]
scidocsrr
47049efc46eda3078c30357036fa2ddf
Multiple object identification with passive RFID tags
[ { "docid": "1c7251c55cf0daea9891c8a522bbd3ec", "text": "The role of computers in the modern office has divided ouractivities between virtual interactions in the realm of thecomputer and physical interactions with real objects within thetraditional office infrastructure. This paper extends previous workthat has attempted to bridge this gap, to connect physical objectswith virtual representations or computational functionality, viavarious types of tags. We discuss a variety of scenarios we haveimplemented using a novel combination of inexpensive, unobtrusiveand easy to use RFID tags, tag readers, portable computers andwireless networking. This novel combination demonstrates theutility of invisibly, seamlessly and portably linking physicalobjects to networked electronic services and actions that arenaturally associated with their form.", "title": "" }, { "docid": "9c751a7f274827e3d8687ea520c6e9a9", "text": "Radio frequency identification systems with passive tags are powerful tools for object identification. However, if multiple tags are to be identified simultaneously, messages from the tags can collide and cancel each other out. Therefore, multiple read cycles have to be performed in order to achieve a high recognition rate. For a typical stochastic anti-collision scheme, we show how to determine the optimal number of read cycles to perform under a given assurance level determining the acceptable rate of missed tags. This yields an efficient procedure for object identification. We also present results on the performance of an implementation.", "title": "" } ]
[ { "docid": "2944000757568f330b495ba2a446b0a0", "text": "In this paper, we propose Deep Alignment Network (DAN), a robust face alignment method based on a deep neural network architecture. DAN consists of multiple stages, where each stage improves the locations of the facial landmarks estimated by the previous stage. Our method uses entire face images at all stages, contrary to the recently proposed face alignment methods that rely on local patches. This is possible thanks to the use of landmark heatmaps which provide visual information about landmark locations estimated at the previous stages of the algorithm. The use of entire face images rather than patches allows DAN to handle face images with large variation in head pose and difficult initializations. An extensive evaluation on two publicly available datasets shows that DAN reduces the state-of-the-art failure rate by up to 70%. Our method has also been submitted for evaluation as part of the Menpo challenge.", "title": "" }, { "docid": "2891ce3327617e9e957488ea21e9a20c", "text": "Recently, remote healthcare systems have received increasing attention in the last decade, explaining why intelligent systems with physiology signal monitoring for e-health care are an emerging area of development. Therefore, this study adopts a system which includes continuous collection and evaluation of multiple vital signs, long-term healthcare, and a cellular connection to a medical center in emergency case and it transfers all acquired raw data by the internet in normal case. The proposed system can continuously acquire four different physiological signs, for example, ECG, SpO2, temperature, and blood pressure and further relayed them to an intelligent data analysis scheme to diagnose abnormal pulses for exploring potential chronic diseases. The proposed system also has a friendly web-based interface for medical staff to observe immediate pulse signals for remote treatment. Once abnormal event happened or the request to real-time display vital signs is confirmed, all physiological signs will be immediately transmitted to remote medical server through both cellular networks and internet. Also data can be transmitted to a family member's mobile phone or doctor's phone through GPRS. A prototype of such system has been successfully developed and implemented, which will offer high standard of healthcare with a major reduction in cost for our society.", "title": "" }, { "docid": "457f10c4c5d5b748a4f35abd89feb519", "text": "Document image binarization is an important step in the document image analysis and recognition pipeline. H-DIBCO 2014 is the International Document Image Binarization Competition which is dedicated to handwritten document images organized in conjunction with ICFHR 2014 conference. The objective of the contest is to identify current advances in handwritten document image binarization using meaningful evaluation performance measures. This paper reports on the contest details including the evaluation measures used as well as the performance of the 7 submitted methods along with a short description of each method.", "title": "" }, { "docid": "144bb8e869671843cb5d8053e2ee861d", "text": "We investigate whether physicians' financial incentives influence health care supply, technology diffusion, and resulting patient outcomes. In 1997, Medicare consolidated the geographic regions across which it adjusts physician payments, generating area-specific price shocks. 
Areas with higher payment shocks experience significant increases in health care supply. On average, a 2 percent increase in payment rates leads to a 3 percent increase in care provision. Elective procedures such as cataract surgery respond much more strongly than less discretionary services. Non-radiologists expand their provision of MRIs, suggesting effects on technology adoption. We estimate economically small health impacts, albeit with limited precision.", "title": "" }, { "docid": "39a59eac80c6f4621971399dde2fbb7f", "text": "Social media sites such as Flickr, YouTube, and Facebook host substantial amounts of user-contributed materials (e.g., photographs, videos, and textual content) for a wide variety of real-world events. These range from widely known events, such as the presidential inauguration, to smaller, community-specific events, such as annual conventions and local gatherings. By identifying these events and their associated user-contributed social media documents, which is the focus of this paper, we can greatly improve local event browsing and search in state-of-the-art search engines. To address our problem of focus, we exploit the rich “context” associated with social media content, including user-provided annotations (e.g., title, tags) and automatically generated information (e.g., content creation time). We form a variety of representations of social media documents using different context dimensions, and combine these dimensions in a principled way into a single clustering solution—where each document cluster ideally corresponds to one event—using a weighted ensemble approach. We evaluate our approach on a large-scale, real-world dataset of event images, and report promising performance with respect to several baseline approaches. Our preliminary experiments suggest that our ensemble approach identifies events, and their associated images, more effectively than the state-of-the-art strategies on which we build.", "title": "" }, { "docid": "47bae1df7bc512e8a458122892e145f8", "text": "This paper presents an inertial-measurement-unit-based pen (IMUPEN) and its associated trajectory reconstruction algorithm for motion trajectory reconstruction and handwritten digit recognition applications. The IMUPEN is composed of a triaxial accelerometer, two gyroscopes, a microcontroller, and an RF wireless transmission module. Users can hold the IMUPEN to write numerals or draw simple symbols at normal speed. During writing or drawing movements, the inertial signals generated for the movements are transmitted to a computer via the wireless module. A trajectory reconstruction algorithm composed of the procedures of data collection, signal preprocessing, and trajectory reconstruction has been developed for reconstructing the trajectories of movements. In order to minimize the cumulative errors caused by the intrinsic noise/drift of sensors, we have developed an orientation error compensation method and a multiaxis dynamic switch. The advantages of the IMUPEN include the following: 1) It is portable and can be used anywhere without any external reference device or writing ambit limitations, and 2) its trajectory reconstruction algorithm can reduce orientation and integral errors effectively and thus can reconstruct the trajectories of movements accurately. 
Our experimental results on motion trajectory reconstruction and handwritten digit recognition have successfully validated the effectiveness of the IMUPEN and its trajectory reconstruction algorithm.", "title": "" }, { "docid": "992d71459b616bfe72845493a6f8f910", "text": "Finding patterns and trends in spatial and temporal datasets has been a long studied problem in statistics and different domains of science. This paper presents a visual analytics approach for the interactive exploration and analysis of spatiotemporal correlations among multivariate datasets. Our approach enables users to discover correlations and explore potentially causal or predictive links at different spatiotemporal aggregation levels among the datasets, and allows them to understand the underlying statistical foundations that precede the analysis. Our technique utilizes the Pearson's product-moment correlation coefficient and factors in the lead or lag between different datasets to detect trends and periodic patterns amongst them.", "title": "" }, { "docid": "c2a2c29b03ee90558325df7461124092", "text": "Effective thermal conductivity of mixtures of fluids and nanometer-size particles is measured by a steady-state parallel-plate method. The tested fluids contain two types of nanoparticles, Al2O3 and CuO, dispersed in water, vacuum pump fluid, engine oil, and ethylene glycol. Experimental results show that the thermal conductivities of nanoparticle–fluid mixtures are higher than those of the base fluids. Using theoretical models of effective thermal conductivity of a mixture, we have demonstrated that the predicted thermal conductivities of nanoparticle–fluid mixtures are much lower than our measured data, indicating the deficiency in the existing models when used for nanoparticle–fluid mixtures. Possible mechanisms contributing to enhancement of the thermal conductivity of the mixtures are discussed. A more comprehensive theory is needed to fully explain the behavior of nanoparticle–fluid mixtures.", "title": "" }, { "docid": "1278d0b3ea3f06f52b2ec6b20205f8d0", "text": "The future global Internet is going to have to cater to users that will be largely mobile. Mobility is one of the main factors affecting the design and performance of wireless networks. Mobility modeling has been an active field for the past decade, mostly focusing on matching a specific mobility or encounter metric with little focus on matching protocol performance. This study investigates the adequacy of existing mobility models in capturing various aspects of human mobility behavior (including communal behavior), as well as network protocol performance. This is achieved systematically through the introduction of a framework that includes a multi-dimensional mobility metric space. We then introduce COBRA, a new mobility model capable of spanning the mobility metric space to match realistic traces. A methodical analysis using a range of protocol (epidemic, spraywait, Prophet, and Bubble Rap) dependent and independent metrics (modularity) of various mobility models (SMOOTH and TVC) and traces (university campuses, and theme parks) is done. Our results indicate significant gaps in several metric dimensions between real traces and existing mobility models. 
Our findings show that COBRA matches communal aspect and realistic protocol performance, reducing the overhead gap (w.r.t existing models) from 80% to less than 12%, showing the efficacy of our framework.", "title": "" }, { "docid": "e28c2662f3948d346a00298976d9b37c", "text": "Analysts engaged in real-time monitoring of cybersecurity incidents must quickly and accurately respond to alerts generated by intrusion detection systems. We investigated two complementary approaches to improving analyst performance on this vigilance task: a graph-based visualization of correlated IDS output and defensible recommendations based on machine learning from historical analyst behavior. We tested our approach with 18 professional cybersecurity analysts using a prototype environment in which we compared the visualization with a conventional tabular display, and the defensible recommendations with limited or no recommendations. Quantitative results showed improved analyst accuracy with the visual display and the defensible recommendations. Additional qualitative data from a \"talk aloud\" protocol illustrated the role of displays and recommendations in analysts' decision-making process. Implications for the design of future online analysis environments are discussed.", "title": "" }, { "docid": "50c762b9e01347df5be904c311e42548", "text": "This paper introduces redundant spin-transfer-torque (STT) magnetic tunnel junction (MTJ) based nonvolatile flip-flops (NVFFs) for low write-error rate (WER) operations. STT-MTJ NVFFs are key components for ultra-low power VLSI systems thanks to zero standby current, but suffers from write errors due to probabilistic switching, causing a failure backup/restore operation. To reduce the WER, redundant STT-MTJ devices are exploited in the proposed NVFFs. As one-bit information is redundantly represented, it is correctly stored upon a few bit write errors, lowering WERs compared to a conventional NVFF at the same write time. Three different redundant structures are presented and discussed in terms of WER and write energy dissipation. For performance comparisons, the proposed redundant STT-MTJ NVFFs are designed using hybrid 90nm CMOS and MTJ technologies and evaluated using NSSPICE that handles both transistors and MTJs. The simulation results show that the proposed NVFF reduces the write time to 36.2% and the write energy to 70.7% at a WER of 10-12 compared to the conventional NVFF.", "title": "" }, { "docid": "4a9ad387ad16727d9ac15ac667d2b1c3", "text": "In recent years face recognition has received substantial attention from both research communities and the market, but still remained very challenging in real applications. A lot of face recognition algorithms, along with their modifications, have been developed during the past decades. A number of typical algorithms are presented, being categorized into appearancebased and model-based schemes. For appearance-based methods, three linear subspace analysis schemes are presented, and several non-linear manifold analysis approaches for face recognition are briefly described. The model-based approaches are introduced, including Elastic Bunch Graph matching, Active Appearance Model and 3D Morphable Model methods. A number of face databases available in the public domain and several published performance evaluation results are digested. 
Future research directions based on the current recognition results are pointed out.", "title": "" }, { "docid": "31fb6df8d386f28b63140ee2ad8d11ea", "text": "The problem and the solution.The majority of the literature on creativity has focused on the individual, yet the social environment can influence both the level and frequency of creative behavior. This article reviews the literature for factors related to organizational culture and climate that act as supports and impediments to organizational creativity and innovation. The work of Amabile, Kanter, Van de Ven, Angle, and others is reviewed and synthesized to provide an integrative understanding of the existing literature. Implications for human resource development research and practice are discussed.", "title": "" }, { "docid": "3d911d6eeefefd16f898200da0e1a3ef", "text": "We introduce Reality-based User Interface System (RUIS), a virtual reality (VR) toolkit aimed for students and hobbyists, which we have used in an annually organized VR course for the past four years. RUIS toolkit provides 3D user interface building blocks for creating immersive VR applications with spatial interaction and stereo 3D graphics, while supporting affordable VR peripherals like Kinect, PlayStation Move, Razer Hydra, and Oculus Rift. We describe a novel spatial interaction scheme that combines freeform, full-body interaction with traditional video game locomotion, which can be easily implemented with RUIS. We also discuss the specific challenges associated with developing VR applications, and how they relate to the design principles behind RUIS. Finally, we validate our toolkit by comparing development difficulties experienced by users of different software toolkits, and by presenting several VR applications created with RUIS, demonstrating a variety of spatial user interfaces that it can produce.", "title": "" }, { "docid": "1c117c63455c2b674798af0e25e3947c", "text": "We are studying the manufacturing performance of semiconductor wafer fabrication plants in the US, Asia, and Europe. There are great similarities in production equipment, manufacturing processes, and products produced at semiconductor fabs around the world. However, detailed comparisons over multi-year intervals show that important quantitative indicators of productivity, including defect density (yield), major equipment production rates, wafer throughput time, and effective new process introduction to manufacturing, vary by factors of 3 to as much as 5 across an international sample of 28 fabs. We conduct on-site observations, and interviews with manufacturing personnel at all levels from operator to general manager, to better understand reasons for the observed wide variations in performance. We have identified important factors in the areas of information systems, organizational practices, process and technology improvements, and production control that correlate strongly with high productivity. Optimum manufacturing strategy is different for commodity products, high-value proprietary products, and foundry business.", "title": "" }, { "docid": "df2bc3dce076e3736a195384ae6c9902", "text": "In this paper, we present bidirectional Long Short Term Memory (LSTM) networks, and a modified, full gradient version of the LSTM learning algorithm. We evaluate Bidirectional LSTM (BLSTM) and several other network architectures on the benchmark task of framewise phoneme classification, using the TIMIT database. 
Our main findings are that bidirectional networks outperform unidirectional ones, and Long Short Term Memory (LSTM) is much faster and also more accurate than both standard Recurrent Neural Nets (RNNs) and time-windowed Multilayer Perceptrons (MLPs). Our results support the view that contextual information is crucial to speech processing, and suggest that BLSTM is an effective architecture with which to exploit it.", "title": "" }, { "docid": "83ee7b71813ead9656e2972e700ade24", "text": "In many visual domains (like fashion, furniture, etc.) the search for products on online platforms requires matching textual queries to image content. For example, the user provides a search query in natural language (e.g., pink floral top) and the results obtained are of a different modality (e.g., the set of images of pink floral tops). Recent work on multimodal representation learning enables such cross-modal matching by learning a common representation space for text and image. While such representations ensure that the n-dimensional representation of pink floral top is very close to the representation of corresponding images, they do not ensure that the first k1 (< n) dimensions correspond to color, the next k2 (< n) correspond to style and so on. In other words, they learn entangled representations where each dimension does not correspond to a specific attribute. We propose two simple variants which can learn disentangled common representations for the fashion domain wherein each dimension would correspond to a specific attribute (color, style, silhouette, etc.). Our proposed variants can be integrated with any existing multimodal representation learning method. We use a large fashion dataset of over 700K fashion items crawled from multiple fashion e-commerce portals to evaluate the learned representations on four different applications from the fashion domain, namely, cross-modal image retrieval, visual search, image tagging, and query expansion. Our experimental results show that the proposed variants lead to better performance for each of these applications while learning disentangled representations.", "title": "" }, { "docid": "cea9c1bab28363fc6f225b7843b8df99", "text": "The leaf area index (LAI), the ratio of leaf area to ground area, typically reported as square meters per square meter, is a commonly used biophysical characteristic of vegetation (Watson, 1947). The LAI can be subdivided into photosynthetically active and photosynthetically inactive components. The former, the gLAI, is a metric commonly used in climate (e.g., Buermann et al., 2001), ecological (e.g., Bulcock and Jewitt, 2010), and crop yield (e.g., Fang et al., 2011) models. Because of its wide use and applicability to modeling, there is a need for a nondestructive remote estimation of gLAI across large geographic areas. Various techniques based on remotely sensed data have been utilized for assessing gLAI (see reviews by Pinter et al., 2003; Hatfield et al., 2004, 2008; Doraiswamy et al., 2003; le Maire et al., 2008, and references therein). 
Vegetation indices, particularly the NDVI (Rouse et al., 1974) and SR (Jordan, 1969), are the most widely used. The NDVI, however, is prone to saturation at moderate to high gLAI values (Kanemasu, 1974; Curran and Steven, 1983; Asrar et al., 1984; Huete et al., 2002; Gitelson, 2004; Wu et al., 2007; González-Sanpedro et al., 2008) and requires reparameterization for different crops and species. The saturation of NDVI has been attributed to insensitivity of reflectance in the red region at moderate to high gLAI values due to the high absorption coefficient of chlorophyll. For gLAI below 3 m2/m2, total absorption by a canopy in the red range reaches 90 to 95%, and further increases in gLAI do not bring additional changes in absorption and reflectance (Hatfield et al., 2008; Gitelson, 2011). Another reason for the decrease in the sensitivity of NDVI to moderate to high gLAI values is the mathematical formulation of that index. At moderate to high gLAI, the NDVI is dominated by near-infrared (NIR) reflectance. Because scattering by the cellular or leaf structure causes the NIR reflectance to be high and the absorption by chlorophyll causes the red reflectance to be low, NIR reflectance is considerably greater than red reflectance: e.g., for gLAI >3 m2/m2, NIR reflectance is >40% while red reflectance is <5%. Thus, NDVI becomes insensitive to changes in both red and NIR reflectance. Other commonly used VIs include the Enhanced Vegetation Index, EVI (Liu and Huete, 1995; Huete et al., 1997, 2002).", "title": "" }, { "docid": "fba7801d0b187a9a5fbb00c9d4690944", "text": "Acute pulmonary embolism (PE) poses a significant burden on health and survival. Its severity ranges from asymptomatic, incidentally discovered subsegmental thrombi to massive, pressor-dependent PE complicated by cardiogenic shock and multisystem organ failure. Rapid and accurate risk stratification is therefore of paramount importance to ensure the highest quality of care. This article critically reviews currently available and emerging tools for risk-stratifying acute PE, and particularly for distinguishing between elevated (intermediate) and low risk among normotensive patients. We focus on the potential value of risk assessment strategies for optimizing severity-adjusted management. Apart from reviewing the current evidence on advanced early therapy of acute PE (thrombolysis, surgery, catheter interventions, vena cava filters), we discuss recent advances in oral anticoagulation with vitamin K antagonists, and with new direct inhibitors of factor Xa and thrombin, which may contribute to profound changes in the treatment and secondary prophylaxis of venous thrombo-embolism in the near future.", "title": "" }, { "docid": "63063c0a2b08f068c11da6d80236fa87", "text": "This paper addresses the problem of hallucinating the missing high-resolution (HR) details of a low-resolution (LR) video while maintaining the temporal coherence of the hallucinated HR details by using dynamic texture synthesis (DTS). Most existing multi-frame-based video super-resolution (SR) methods suffer from the problem of limited reconstructed visual quality due to inaccurate sub-pixel motion estimation between frames in a LR video. 
To achieve high-quality reconstruction of HR details for a LR video, we propose a texture-synthesis-based video super-resolution method, in which a novel DTS scheme is proposed to render the reconstructed HR details in a time coherent way, so as to effectively address the temporal incoherence problem caused by traditional texture synthesis based image SR methods. To further reduce the complexity of the proposed method, our method only performs the DTS-based SR on a selected set of key-frames, while the HR details of the remaining non-key-frames are simply predicted using the bi-directional overlapped block motion compensation. Experimental results demonstrate that the proposed method achieves significant subjective and objective quality improvement over state-of-the-art video SR methods.", "title": "" } ]
scidocsrr
d2c5e7e28483513056efb2c69fc35df9
SQL-IDS: a specification-based approach for SQL-injection detection
[ { "docid": "d1f771fd1b0f8e5d91bbf65bc19aeb54", "text": "Web-based systems are often a composition of infrastructure components, such as web servers and databases, and of applicationspecific code, such as HTML-embedded scripts and server-side applications. While the infrastructure components are usually developed by experienced programmers with solid security skills, the application-specific code is often developed under strict time constraints by programmers with little security training. As a result, vulnerable web-applications are deployed and made available to the Internet at large, creating easilyexploitable entry points for the compromise of entire networks. Web-based applications often rely on back-end database servers to manage application-specific persistent state. The data is usually extracted by performing queries that are assembled using input provided by the users of the applications. If user input is not sanitized correctly, it is possible to mount a variety of attacks that leverage web-based applications to compromise the security of back-end databases. Unfortunately, it is not always possible to identify these attacks using signature-based intrusion detection systems, because of the ad hoc nature of many web-based applications. Signatures are rarely written for this class of applications due to the substantial investment of time and expertise this would require. We have developed an anomaly-based system that learns the profiles of the normal database access performed by web-based applications using a number of different models. These models allow for the detection of unknown attacks with reduced false positives and limited overhead. In addition, our solution represents an improvement with respect to previous approaches because it reduces the possibility of executing SQL-based mimicry attacks.", "title": "" }, { "docid": "5025766e66589289ccc31e60ca363842", "text": "The use of web applications has become increasingly popular in our routine activities, such as reading the news, paying bills, and shopping on-line. As the availability of these services grows, we are witnessing an increase in the number and sophistication of attacks that target them. In particular, SQL injection, a class of code-injection attacks in which specially crafted input strings result in illegal queries to a database, has become one of the most serious threats to web applications. In this paper we present and evaluate a new technique for detecting and preventing SQL injection attacks. Our technique uses a model-based approach to detect illegal queries before they are executed on the database. In its static part, the technique uses program analysis to automatically build a model of the legitimate queries that could be generated by the application. In its dynamic part, the technique uses runtime monitoring to inspect the dynamically-generated queries and check them against the statically-built model. We developed a tool, AMNESIA, that implements our technique and used the tool to evaluate the technique on seven web applications. In the evaluation we targeted the subject applications with a large number of both legitimate and malicious inputs and measured how many attacks our technique detected and prevented. The results of the study show that our technique was able to stop all of the attempted attacks without generating any false positives.", "title": "" } ]
[ { "docid": "e58036f93195603cb7dc7265b9adeb25", "text": "Pseudomonas aeruginosa thrives in many aqueous environments and is an opportunistic pathogen that can cause both acute and chronic infections. Environmental conditions and host defenses cause differing stresses on the bacteria, and to survive in vastly different environments, P. aeruginosa must be able to adapt to its surroundings. One strategy for bacterial adaptation is to self-encapsulate with matrix material, primarily composed of secreted extracellular polysaccharides. P. aeruginosa has the genetic capacity to produce at least three secreted polysaccharides; alginate, Psl, and Pel. These polysaccharides differ in chemical structure and in their biosynthetic mechanisms. Since alginate is often associated with chronic pulmonary infections, its biosynthetic pathway is the best characterized. However, alginate is only produced by a subset of P. aeruginosa strains. Most environmental and other clinical isolates secrete either Pel or Psl. Little information is available on the biosynthesis of these polysaccharides. Here, we review the literature on the alginate biosynthetic pathway, with emphasis on recent findings describing the structure of alginate biosynthetic proteins. This information combined with the characterization of the domain architecture of proteins encoded on the Psl and Pel operons allowed us to make predictive models for the biosynthesis of these two polysaccharides. The results indicate that alginate and Pel share certain features, including some biosynthetic proteins with structurally or functionally similar properties. In contrast, Psl biosynthesis resembles the EPS/CPS capsular biosynthesis pathway of Escherichia coli, where the Psl pentameric subunits are assembled in association with an isoprenoid lipid carrier. These models and the environmental cues that cause the cells to produce predominantly one polysaccharide over the others are subjects of current investigation.", "title": "" }, { "docid": "188ab32548b91fd1bf1edf34ff3d39d9", "text": "With the marvelous development of wireless techniques and ubiquitous deployment of wireless systems indoors, myriad indoor location-based services (ILBSs) have permeated into numerous aspects of modern life. The most fundamental functionality is to pinpoint the location of the target via wireless devices. According to how wireless devices interact with the target, wireless indoor localization schemes roughly fall into two categories: device based and device free. In device-based localization, a wireless device (e.g., a smartphone) is attached to the target and computes its location through cooperation with other deployed wireless devices. In device-free localization, the target carries no wireless devices, while the wireless infrastructure deployed in the environment determines the target’s location by analyzing its impact on wireless signals.\n This article is intended to offer a comprehensive state-of-the-art survey on wireless indoor localization from the device perspective. In this survey, we review the recent advances in both modes by elaborating on the underlying wireless modalities, basic localization principles, and data fusion techniques, with special emphasis on emerging trends in (1) leveraging smartphones to integrate wireless and sensor capabilities and extend to the social context for device-based localization, and (2) extracting specific wireless features to trigger novel human-centric device-free localization. 
We comprehensively compare each scheme in terms of accuracy, cost, scalability, and energy efficiency. Furthermore, we take a first look at intrinsic technical challenges in both categories and identify several open research issues associated with these new challenges.", "title": "" }, { "docid": "bbd64fe2f05e53ca14ad1623fe51cd1c", "text": "Virtual assistants are the cutting edge of end user interaction, thanks to endless set of capabilities across multiple services. The natural language techniques thus need to be evolved to match the level of power and sophistication that users expect from virtual assistants. In this report we investigate an existing deep learning model for semantic parsing, and we apply it to the problem of converting natural language to trigger-action programs for the Almond virtual assistant. We implement a one layer seq2seq model with attention layer, and experiment with grammar constraints and different RNN cells. We take advantage of its existing dataset and we experiment with different ways to extend the training set. Our parser shows mixed results on the different Almond test sets, performing better than the state of the art on synthetic benchmarks by about 10% but poorer on realistic user data by about 15%. Furthermore, our parser is shown to be extensible to generalization, as well as or better than the current system employed by Almond.", "title": "" }, { "docid": "38935c773fb3163a1841fcec62b3e15a", "text": "We investigate how neural networks can learn and process languages with hierarchical, compositional semantics. To this end, we define the artificial task of processing nested arithmetic expressions, and study whether different types of neural networks can learn to compute their meaning. We find that recursive neural networks can implement a generalising solution to this problem, and we visualise this solution by breaking it up in three steps: project, sum and squash. As a next step, we investigate recurrent neural networks, and show that a gated recurrent unit, that processes its input incrementally, also performs very well on this task: the network learns to predict the outcome of the arithmetic expressions with high accuracy, although performance deteriorates somewhat with increasing length. To develop an understanding of what the recurrent network encodes, visualisation techniques alone do not suffice. Therefore, we develop an approach where we formulate and test multiple hypotheses on the information encoded and processed by the network. For each hypothesis, we derive predictions about features of the hidden state representations at each time step, and train ‘diagnostic classifiers’ to test those predictions. Our results indicate that the networks follow a strategy similar to our hypothesised ‘cumulative strategy’, which explains the high accuracy of the network on novel expressions, the generalisation to longer expressions than seen in training, and the mild deterioration with increasing length. This is turn shows that diagnostic classifiers can be a useful technique for opening up the black box of neural networks. 
We argue that diagnostic classification, unlike most visualisation techniques, does scale up from small networks in a toy domain, to larger and deeper recurrent networks dealing with real-life data, and may therefore contribute to a better understanding of the internal dynamics of current state-of-the-art models in natural language processing.", "title": "" }, { "docid": "bd8b0a2b060594d8513f43fbfe488443", "text": "Part 1 of the paper presents the detection and sizing capability based on image display of sectorial scan. Examples are given for different types of weld defects: toe cracks, internal porosity, side-wall lack of fusion, underbead crack, inner-surface breaking cracks, slag inclusions, incomplete root penetration and internal cracks. Based on combination of S-scan and B-scan plotted into 3-D isometric part, the defect features could be reconstructed and measured into a draft package. Comparison between plotted data and actual defect sizes are also presented.", "title": "" }, { "docid": "a0a73cc2b884828eb97ff8045bfe50a6", "text": "A variety of antennas have been engineered with metamaterials (MTMs) and metamaterial-inspired constructs to improve their performance characteristics. Examples include electrically small, near-field resonant parasitic (NFRP) antennas that require no matching network and have high radiation efficiencies. Experimental verification of their predicted behaviors has been obtained. Recent developments with this NFRP electrically small paradigm will be reviewed. They include considerations of increased bandwidths, as well as multiband and multifunctional extensions.", "title": "" }, { "docid": "64a345ae00db3b84fb254725bf14edb7", "text": "The research interest in unmanned aerial vehicles (UAV) has grown rapidly over the past decade. UAV applications range from purely scientific over civil to military. Technical advances in sensor and signal processing technologies enable the design of light weight and economic airborne platforms. This paper presents a complete mechatronic design process of a quadrotor UAV, including mechanical design, modeling of quadrotor and actuator dynamics and attitude stabilization control. Robust attitude estimation is achieved by fusion of low-cost MEMS accelerometer and gyroscope signals with a Kalman filter. Experiments with a gimbal mounted quadrotor testbed allow a quantitative analysis and comparision of the PID and Integral-Backstepping (IB) controller design for attitude stabilization with respect to reference signal tracking, disturbance rejection and robustness.", "title": "" }, { "docid": "6097315ac2e4475e8afd8919d390babf", "text": "This paper presents an origami-inspired technique which allows the application of 2-D fabrication methods to build 3-D robotic systems. The ability to design robots as origami structures introduces a fast and low-cost fabrication method to modern, real-world robotic applications. We employ laser-machined origami patterns to build a new class of robotic systems for mobility and manipulation. Origami robots use only a flat sheet as the base structure for building complicated bodies. An arbitrarily complex folding pattern can be used to yield an array of functionalities, in the form of actuated hinges or active spring elements. For actuation, we use compact NiTi coil actuators placed on the body to move parts of the structure on-demand. 
We demonstrate, as a proof-of-concept case study, the end-to-end fabrication and assembly of a simple mobile robot that can undergo worm-like peristaltic locomotion.", "title": "" }, { "docid": "dc71729ebd3c2a66c73b16685c8d12af", "text": "A company's bid to rally an industry ecosystem around a new competitive view is an uncertain gambit. But the right strategic approaches and the availability of modern digital infrastructures improve the odds for success.", "title": "" }, { "docid": "6384a691d3b50e252ab76a61e28f012e", "text": "We study the algorithmics of information structure design --- a.k.a. persuasion or signaling --- in a fundamental special case introduced by Arieli and Babichenko: multiple agents, binary actions, and no inter-agent externalities. Unlike prior work on this model, we allow many states of nature. We assume that the principal's objective is a monotone set function, and study the problem both in the public signal and private signal models, drawing a sharp contrast between the two in terms of both efficacy and computational complexity.\n When private signals are allowed, our results are largely positive and quite general. First, we use linear programming duality and the equivalence of separation and optimization to show polynomial-time equivalence between (exactly) optimal signaling and the problem of maximizing the objective function plus an additive function. This yields an efficient implementation of the optimal scheme when the objective is supermodular or anonymous. Second, we exhibit a (1-1/e)-approximation of the optimal private signaling scheme, modulo an additive loss of ε, when the objective function is submodular. These two results simplify, unify, and generalize results of [Arieli and Babichenko, 2016] and [Babichenko and Barman, 2016], extending them from a binary state of nature to many states (modulo the additive loss in the latter result). Third, we consider the binary-state case with a submodular objective, and simplify and slightly strengthen the result of [Babichenko and Barman, 2016] to obtain a (1-1/e)-approximation via a scheme which (i) signals independently to each receiver and (ii) is \"oblivious\" in that it does not depend on the objective function so long as it is monotone submodular.\n When only a public signal is allowed, our results are negative. First, we show that it is NP-hard to approximate the optimal public scheme, within any constant factor, even when the objective is additive. Second, we show that the optimal private scheme can outperform the optimal public scheme, in terms of maximizing the sender's objective, by a polynomial factor.", "title": "" }, { "docid": "104c9ef558234250d56ef941f09d6a7c", "text": "The first of these questions is in the province of sensory physiology, and is the only one for which appreciable understanding has been achieved. This article will be concerned primarily with the second and third questions, which are still subject to a vast amount of speculation, and where the few relevant facts currently supplied by neurophysiology have not yet been integrated into an acceptable theory. With regard to the second question, two alternative positions have been maintained. 
The first suggests that storage of sensory information is in the form of coded representations or images, with some sort of one-to-one mapping between the sensory stimulus", "title": "" }, { "docid": "ca94b1bb1f4102ed6b4506441b2431fc", "text": "It is often a difficult task to accurately segment images with intensity inhomogeneity, because most of representative algorithms are region-based that depend on intensity homogeneity of the interested object. In this paper, we present a novel level set method for image segmentation in the presence of intensity inhomogeneity. The inhomogeneous objects are modeled as Gaussian distributions of different means and variances in which a sliding window is used to map the original image into another domain, where the intensity distribution of each object is still Gaussian but better separated. The means of the Gaussian distributions in the transformed domain can be adaptively estimated by multiplying a bias field with the original signal within the window. A maximum likelihood energy functional is then defined on the whole image region, which combines the bias field, the level set function, and the piecewise constant function approximating the true image signal. The proposed level set method can be directly applied to simultaneous segmentation and bias correction for 3 and 7T magnetic resonance images. Extensive evaluation on synthetic and real-images demonstrate the superiority of the proposed method over other representative algorithms.", "title": "" }, { "docid": "322f6321bc34750344064d474206fddb", "text": "BACKGROUND AND PURPOSE\nThis study was undertaken to elucidate whether and how age influences stroke outcome.\n\n\nMETHODS\nThis prospective and community-based study comprised 515 consecutive acute stroke patients. Computed tomographic scan was performed in 79% of patients. Activities of daily living (ADL) and neurological status were assessed weekly during hospital stay using the Barthel Index (BI) and the Scandinavian Stroke Scale (SSS), respectively. Information regarding social condition and comorbidity before stroke was also registered. A multiple regression model was used to analyze the independent influence of age on stroke outcome.\n\n\nRESULTS\nAge was not related to the type of stroke lesion or infarct size. However, age independently influenced initial BI (-4 points per 10 years, P < .01), initial SSS (-2 points per 10 years, P = .01), and discharge BI (-3 points per 10 years, P < .01). No independent influence of age was found regarding mortality within 3 months, discharge SSS, length of hospital stay, and discharge placement. ADL improvement was influenced independently by age (-3 points per 10 years, P < .01), whereas age had no influence on neurological improvement or on speed of recovery.\n\n\nCONCLUSIONS\nAge independently influences stroke outcome selectively in ADL-related aspects (BI) but not in neurological aspects (SSS), suggesting a poorer compensatory ability in elderly stroke patients. Therefore, rehabilitation of elderly stroke patients should be focused more on ADL and compensation rather than on the recovery of neurological status, and age itself should not be a selection criterion for rehabilitation.", "title": "" }, { "docid": "7448b45dd5809618c3b6bb667cb1004f", "text": "We first provide criteria for assessing informed consent online. Then we examine how cookie technology and Web browser designs have responded to concerns about informed consent. 
Specifically, we document relevant design changes in Netscape Navigator and Internet Explorer over a 5-year period, starting in 1995. Our retrospective analyses leads us to conclude that while cookie technology has improved over time regarding informed consent, some startling problems remain. We specify six of these problems and offer design remedies. This work fits within the emerging field of Value-Sensitive Design.", "title": "" }, { "docid": "4e8c39eaa7444158a79573481b80a77f", "text": "Image patch classification is an important task in many different medical imaging applications. In this work, we have designed a customized Convolutional Neural Networks (CNN) with shallow convolution layer to classify lung image patches with interstitial lung disease (ILD). While many feature descriptors have been proposed over the past years, they can be quite complicated and domain-specific. Our customized CNN framework can, on the other hand, automatically and efficiently learn the intrinsic image features from lung image patches that are most suitable for the classification purpose. The same architecture can be generalized to perform other medical image or texture classification tasks.", "title": "" }, { "docid": "5fd2d67291f7957eee20495c5baeb1ef", "text": "Many interesting real-world textures are inhomogeneous and/or anisotropic. An inhomogeneous texture is one where various visual properties exhibit significant changes across the texture’s spatial domain. Examples include perceptible changes in surface color, lighting, local texture pattern and/or its apparent scale, and weathering effects, which may vary abruptly, or in a continuous fashion. An anisotropic texture is one where the local patterns exhibit a preferred orientation, which also may vary across the spatial domain. While many example-based texture synthesis methods can be highly effective when synthesizing uniform (stationary) isotropic textures, synthesizing highly non-uniform textures, or ones with spatially varying orientation, is a considerably more challenging task, which so far has remained underexplored. In this paper, we propose a new method for automatic analysis and controlled synthesis of such textures. Given an input texture exemplar, our method generates a source guidance map comprising: (i) a scalar progression channel that attempts to capture the low frequency spatial changes in color, lighting, and local pattern combined, and (ii) a direction field that captures the local dominant orientation of the texture. Having augmented the texture exemplar with this guidance map, users can exercise better control over the synthesized result by providing easily specified target guidance maps, which are used to constrain the synthesis process.", "title": "" }, { "docid": "763372dc4ebc2cd972a5b851be014bba", "text": "Parametric piecewise-cubic functions are used throughout the computer graphics industry to represent curved shapes. For many applications, it would be useful to be able to reliably derive this representation from a closely spaced set of points that approximate the desired curve, such as the input from a digitizing tablet or a scanner. This paper presents a solution to the problem of automatically generating efficient piecewise parametric cubic polynomial approximations to shapes from sampled data. 
We have developed an algorithm that takes a set of sample points, plus optional endpoint and tangent vector specifications, and iteratively derives a single parametric cubic polynomial that lies close to the data points as defined by an error metric based on least-squares. Combining this algorithm with dynamic programming techniques to determine the knot placement gives good results over a range of shapes and applications.", "title": "" }, { "docid": "221541e0ef8cf6cd493843fd53257a62", "text": "Content-based shape retrieval techniques can facilitate 3D model resource reuse, 3D model modeling, object recognition, and 3D content classification. Recently more and more researchers have attempted to solve the problems of partial retrieval in the domain of computer graphics, vision, CAD, and multimedia. Unfortunately, in the literature, there is little comprehensive discussion on the state-of-the-art methods of partial shape retrieval. In this article we focus on reviewing the partial shape retrieval methods over the last decade, and help novices to grasp latest developments in this field. We first give the definition of partial retrieval and discuss its desirable capabilities. Secondly, we classify the existing methods on partial shape retrieval into three classes by several criteria, describe the main ideas and techniques for each class, and detailedly compare their advantages and limits. We also present several relevant 3D datasets and corresponding evaluation metrics, which are necessary for evaluating partial retrieval performance. Finally, we discuss possible research directions to address partial shape retrieval.", "title": "" }, { "docid": "bf14f996f9013351aca1e9935157c0e3", "text": "Attributed graphs are becoming important tools for modeling information networks, such as the Web and various social networks (e.g. Facebook, LinkedIn, Twitter). However, it is computationally challenging to manage and analyze attributed graphs to support effective decision making. In this paper, we propose, Pagrol, a parallel graph OLAP (Online Analytical Processing) system over attributed graphs. In particular, Pagrol introduces a new conceptual Hyper Graph Cube model (which is an attributed-graph analogue of the data cube model for relational DBMS) to aggregate attributed graphs at different granularities and levels. The proposed model supports different queries as well as a new set of graph OLAP Roll-Up/Drill-Down operations. Furthermore, on the basis of Hyper Graph Cube, Pagrol provides an efficient MapReduce-based parallel graph cubing algorithm, MRGraph-Cubing, to compute the graph cube for an attributed graph. Pagrol employs numerous optimization techniques: (a) a self-contained join strategy to minimize I/O cost; (b) a scheme that groups cuboids into batches so as to minimize redundant computations; (c) a cost-based scheme to allocate the batches into bags (each with a small number of batches); and (d) an efficient scheme to process a bag using a single MapReduce job. Results of extensive experimental studies using both real Facebook and synthetic datasets on a 128-node cluster show that Pagrol is effective, efficient and scalable.", "title": "" }, { "docid": "f37d9a57fd9100323c70876cf7a1d7ad", "text": "Neural networks encounter serious catastrophic forgetting when information is learned sequentially, which is unacceptable for both a model of human memory and practical engineering applications. 
In this study, we propose a novel biologically inspired dual-network memory model that can significantly reduce catastrophic forgetting. The proposed model consists of two distinct neural networks: hippocampal and neocortical networks. Information is first stored in the hippocampal network, and thereafter, it is transferred to the neocortical network. In the hippocampal network, chaotic behavior of neurons in the CA3 region of the hippocampus and neuronal turnover in the dentate gyrus region are introduced. Chaotic recall by CA3 enables retrieval of stored information in the hippocampal network. Thereafter, information retrieved from the hippocampal network is interleaved with previously stored information and consolidated by using pseudopatterns in the neocortical network. The computer simulation results show the effectiveness of the proposed dual-network memory model. & 2014 Elsevier B.V. All rights reserved.", "title": "" } ]
scidocsrr
32262952ce4d4b250f0be1985e087814
Runtime Prediction for Scale-Out Data Analytics
[ { "docid": "66f684ba92fe735fecfbfb53571bad5f", "text": "Some empirical learning tasks are concerned with predicting values rather than the more familiar categories. This paper describes a new system, m5, that constructs tree-based piecewise linear models. Four case studies are presented in which m5 is compared to other methods.", "title": "" }, { "docid": "a50ec2ab9d5d313253c6656049d608b3", "text": "A cluster algorithm for graphs called the Markov Cluster algorithm (MCL algorithm) is introduced. The algorithm provides basically an interface to an algebraic process de ned on stochastic matrices, called the MCL process. The graphs may be both weighted (with nonnegative weight) and directed. Let G be such a graph. The MCL algorithm simulates ow in G by rst identifying G in a canonical way with a Markov graph G1. Flow is then alternatingly expanded and contracted, leading to a row of Markov Graphs G(i). Flow expansion corresponds with taking the k power of a stochastic matrix, where k 2 IN . Flow contraction corresponds with a parametrized operator r, r 0, which maps the set of (column) stochastic matrices onto itself. The image rM is obtained by raising each entry in M to the r th power and rescaling each column to have sum 1 again. The heuristic underlying this approach is the expectation that ow between dense regions which are sparsely connected will evaporate. The invariant limits of the process are easily derived and in practice the process converges very fast to such a limit, the structure of which has a generic interpretation as an overlapping clustering of the graph G. Overlap is limited to cases where the input graph has a symmetric structure inducing it. The contraction and expansion parameters of the MCL process in uence the granularity of the output. The algorithm is space and time e cient and lends itself to drastic scaling. This report describes the MCL algorithm and process, convergence towards equilibrium states, interpretation of the states as clusterings, and implementation and scalability. The algorithm is introduced by rst considering several related proposals towards graph clustering, of both combinatorial and probabilistic nature. 2000 Mathematics Subject Classi cation: 05B20, 15A48, 15A51, 62H30, 68R10, 68T10, 90C35.", "title": "" }, { "docid": "6c2a0afc5a93fe4d73661a3f50fab126", "text": "As massive data acquisition and storage becomes increasingly a↵ordable, a wide variety of enterprises are employing statisticians to engage in sophisticated data analysis. In this paper we highlight the emerging practice of Magnetic, Agile, Deep (MAD) data analysis as a radical departure from traditional Enterprise Data Warehouses and Business Intelligence. We present our design philosophy, techniques and experience providing MAD analytics for one of the world’s largest advertising networks at Fox Audience Network, using the Greenplum parallel database system. We describe database design methodologies that support the agile working style of analysts in these settings. We present dataparallel algorithms for sophisticated statistical techniques, with a focus on density methods. Finally, we reflect on database system features that enable agile design and flexible algorithm development using both SQL and MapReduce interfaces over a variety of storage mechanisms.", "title": "" } ]
[ { "docid": "df7a68ebb9bc03d8a73a54ab3474373f", "text": "We report on the implementation of a color-capable sub-pixel resolving optofluidic microscope based on the pixel super-resolution algorithm and sequential RGB illumination, for low-cost on-chip color imaging of biological samples with sub-cellular resolution.", "title": "" }, { "docid": "2f9ebb8992542b8d342642b6ea361b54", "text": "Falsifying Financial Statements involves the manipulation of financial accounts by overstating assets, sales and profit, or understating liabilities, expenses, or losses. This paper explores the effectiveness of an innovative classification methodology in detecting firms that issue falsified financial statements (FFS) and the identification of the factors associated to FFS. The methodology is based on the concepts of multicriteria decision aid (MCDA) and the application of the UTADIS classification method (UTilités Additives DIScriminantes). A sample of 76 Greek firms (38 with FFS and 38 non-FFS) described over ten financial ratios is used for detecting factors associated with FFS. A Jackknife procedure approach is employed for model validation and comparison with multivariate statistical techniques, namely discriminant and logit analysis. The results indicate that the proposed MCDA methodology outperforms traditional statistical techniques which are widely used for FFS detection purposes. Furthermore, the results indicate that the investigation of financial information can be helpful towards the identification of FFS and highlight the importance of financial ratios such as the total debt to total assets ratio, the inventories to sales ratio, the net profit to sales ratio and the sales to total assets ratio.", "title": "" }, { "docid": "e96b49a1ee9dd65bb920507d65810501", "text": "The objective of this paper is to compare the time specification performance between conventional controller PID and modern controller SMC for an inverted pendulum system. The goal is to determine which control strategy delivers better performance with respect to pendulum’s angle and cart’s position. The inverted pendulum represents a challenging control problem, which continually moves toward an uncontrolled state. Two controllers are presented such as Sliding Mode Control (SMC) and ProportionalIntegral-Derivatives (PID) controllers for controlling the highly nonlinear system of inverted pendulum model. Simulation study has been done in Matlab Mfile and simulink environment shows that both controllers are capable to control multi output inverted pendulum system successfully. The result shows that Sliding Mode Control (SMC) produced better response compared to PID control strategies and the responses are presented in time domain with the details analysis. Keywords—SMC, PID, Inverted Pendulum System.", "title": "" }, { "docid": "9f362249c508abe7f0146158d9370395", "text": "A shadow appears on an area when the light from a source cannot reach the area due to obstruction by an object. The shadows are sometimes helpful for providing useful information about objects. However, they cause problems in computer vision applications, such as segmentation, object detection and object counting. Thus shadow detection and removal is a pre-processing task in many computer vision applications. This paper proposes a simple method to detect and remove shadows from a single RGB image. A shadow detection method is selected on the basis of the mean value of RGB image in A and B planes of LAB equivalent of the image. 
The shadow removal is done by multiplying the shadow region by a constant. Shadow edge correction is done to reduce the errors due to diffusion in the shadow boundary.", "title": "" }, { "docid": "719b4c5352d94d5ae52172b3c8a2512d", "text": "Acts of violence account for an estimated 1.43 million deaths worldwide annually. While violence can occur in many contexts, individual acts of aggression account for the majority of instances. In some individuals, repetitive acts of aggression are grounded in an underlying neurobiological susceptibility that is just beginning to be understood. The failure of \"top-down\" control systems in the prefrontal cortex to modulate aggressive acts that are triggered by anger provoking stimuli appears to play an important role. An imbalance between prefrontal regulatory influences and hyper-responsivity of the amygdala and other limbic regions involved in affective evaluation are implicated. Insufficient serotonergic facilitation of \"top-down\" control, excessive catecholaminergic stimulation, and subcortical imbalances of glutamatergic/gabaminergic systems as well as pathology in neuropeptide systems involved in the regulation of affiliative behavior may contribute to abnormalities in this circuitry. Thus, pharmacological interventions such as mood stabilizers, which dampen limbic irritability, or selective serotonin reuptake inhibitors (SSRIs), which may enhance \"top-down\" control, as well as psychosocial interventions to develop alternative coping skills and reinforce reflective delays may be therapeutic.", "title": "" }, { "docid": "b57006686160241bf118c2c638971764", "text": "Reproducibility is the hallmark of good science. Maintaining a high degree of transparency in scientific reporting is essential not just for gaining trust and credibility within the scientific community but also for facilitating the development of new ideas. Sharing data and computer code associated with publications is becoming increasingly common, motivated partly in response to data deposition requirements from journals and mandates from funders. Despite this increase in transparency, it is still difficult to reproduce or build upon the findings of most scientific publications without access to a more complete workflow. Version control systems (VCS), which have long been used to maintain code repositories in the software industry, are now finding new applications in science. One such open source VCS, Git, provides a lightweight yet robust framework that is ideal for managing the full suite of research outputs such as datasets, statistical code, figures, lab notes, and manuscripts. For individual researchers, Git provides a powerful way to track and compare versions, retrace errors, explore new approaches in a structured manner, while maintaining a full audit trail. For larger collaborative efforts, Git and Git hosting services make it possible for everyone to work asynchronously and merge their contributions at any time, all the while maintaining a complete authorship trail. In this paper I provide an overview of Git along with use-cases that highlight how this tool can be leveraged to make science more reproducible and transparent, foster new collaborations, and support novel uses.", "title": "" }, { "docid": "55aa10937266b6f24157b87a9ecc6e34", "text": "For thousands of years, honey has been used for medicinal applications. The beneficial effects of honey, particularly its anti-microbial activity represent it as a useful option for management of various wounds. 
Honey contains major amounts of carbohydrates, lipids, amino acids, proteins, vitamin and minerals that have important roles in wound healing with minimum trauma during redressing. Because bees have different nutritional behavior and collect the nourishments from different and various plants, the produced honeys have different compositions. Thus different types of honey have different medicinal value leading to different effects on wound healing. This review clarifies the mechanisms and therapeutic properties of honey on wound healing. The mechanisms of action of honey in wound healing are majorly due to its hydrogen peroxide, high osmolality, acidity, non-peroxide factors, nitric oxide and phenols. Laboratory studies and clinical trials have shown that honey promotes autolytic debridement, stimulates growth of wound tissues and stimulates anti-inflammatory activities thus accelerates the wound healing processes. Compared with topical agents such as hydrofiber silver or silver sulfadiazine, honey is more effective in elimination of microbial contamination, reduction of wound area, promotion of re-epithelialization. In addition, honey improves the outcome of the wound healing by reducing the incidence and excessive scar formation. Therefore, application of honey can be an effective and economical approach in managing large and complicated wounds.", "title": "" }, { "docid": "673bf6ecf9ae6fb61f7b01ff284c0a5f", "text": "We describe a method for visual question answering which is capable of reasoning about contents of an image on the basis of information extracted from a large-scale knowledge base. The method not only answers natural language questions using concepts not contained in the image, but can provide an explanation of the reasoning by which it developed its answer. The method is capable of answering far more complex questions than the predominant long short-term memory-based approach, and outperforms it significantly in the testing. We also provide a dataset and a protocol by which to evaluate such methods, thus addressing one of the key issues in general visual question answering.", "title": "" }, { "docid": "d72f47ad136ebb9c74abe484980b212f", "text": "This paper introduces a novel architecture for reinforcement learning with deep neural networks designed to handle state and action spaces characterized by natural language, as found in text-based games. Termed a deep reinforcement relevance network (DRRN), the architecture represents action and state spaces with separate embedding vectors, which are combined with an interaction function to approximate the Q-function in reinforcement learning. We evaluate the DRRN on two popular text games, showing superior performance over other deep Qlearning architectures. Experiments with paraphrased action descriptions show that the model is extracting meaning rather than simply memorizing strings of text.", "title": "" }, { "docid": "3fb8519ca0de4871b105df5c5d8e489f", "text": "Intra-Body Communication (IBC), which modulates ionic currents over the human body as the communication medium, offers a low power and reliable signal transmission method for information exchange across the body. This paper first briefly reviews the quasi-static electromagnetic (EM) field modeling for a galvanic-type IBC human limb operating below 1 MHz and obtains the corresponding transfer function with correction factor using minimum mean square error (MMSE) technique. 
Then, the IBC channel characteristics are studied through the comparison between theoretical calculations via this transfer function and experimental measurements in both frequency domain and time domain. High pass characteristics are obtained in the channel gain analysis versus different transmission distances. In addition, harmonic distortions are analyzed in both baseband and passband transmissions for square input waves. The experimental results are consistent with the calculation results from the transfer function with correction factor. Furthermore, we also explore both theoretical and simulation results for the bit-error-rate (BER) performance of several common modulation schemes in the IBC system with a carrier frequency of 500 kHz. It is found that the theoretical results are in good agreement with the simulation results.", "title": "" }, { "docid": "5717a94b8dd53e42bc96c4e1444d5903", "text": "A spoken dialogue system (SDS) is a specialised form of computer system that operates as an interface between users and the application, using spoken natural language as the primary means of communication. The motivation for spoken interaction with such systems is that it allows for a natural and efficient means of communication. It is for this reason that the use of an SDS has been considered as a means for furthering development of DST Group’s Consensus project by providing an engaging spoken interface to high-level information fusion software. This document provides a general overview of the key issues surrounding the development of such interfaces.", "title": "" }, { "docid": "0870519536e7229f861323bd4a44c4d2", "text": "It has become increasingly common for websites and computer media to provide computer generated visual images, called avatars, to represent users and bots during online interactions. In this study, participants (N=255) evaluated a series of avatars in a static context in terms of their androgyny, anthropomorphism, credibility, homophily, attraction, and the likelihood they would choose them during an interaction. The responses to the images were consistent with what would be predicted by uncertainty reduction theory. The results show that the masculinity or femininity (lack of androgyny) of an avatar, as well as anthropomorphism, significantly influence perceptions of avatars. Further, more anthropomorphic avatars were perceived to be more attractive and credible, and people were more likely to choose to be represented by them. Participants reported masculine avatars as less attractive than feminine avatars, and most people reported a preference for human avatars that matched their gender. Practical and theoretical implications of these results for users, designers, and researchers of avatars are discussed.", "title": "" }, { "docid": "b30af7c9565effd44f433abc62e1ff14", "text": "Feedback on designs is critical for helping users iterate toward effective solutions. This paper presents Voyant, a novel system giving users access to a non-expert crowd to receive perception-oriented feedback on their designs from a selected audience. Based on a formative study, the system generates the elements seen in a design, the order in which elements are noticed, impressions formed when the design is first viewed, and interpretation of the design relative to guidelines in the domain and the user's stated goals. An evaluation of the system was conducted with users and their designs. 
Users reported the feedback about impressions and interpretation of their goals was most helpful, though the other feedback types were also valued. Users found the coordinated views in Voyant useful for analyzing relations between the crowd's perception of a design and the visual elements within it. The cost of generating the feedback was considered a reasonable tradeoff for not having to organize critiques or interrupt peers.", "title": "" }, { "docid": "96f4f77f114fec7eca22d0721c5efcbe", "text": "Aggregation structures with explicit information, such as image attributes and scene semantics, are effective and popular for intelligent systems for assessing aesthetics of visual data. However, useful information may not be available due to the high cost of manual annotation and expert design. In this paper, we present a novel multi-patch (MP) aggregation method for image aesthetic assessment. Different from state-of-the-art methods, which augment an MP aggregation network with various visual attributes, we train the model in an end-to-end manner with aesthetic labels only (i.e., aesthetically positive or negative). We achieve the goal by resorting to an attention-based mechanism that adaptively adjusts the weight of each patch during the training process to improve learning efficiency. In addition, we propose a set of objectives with three typical attention mechanisms (i.e., average, minimum, and adaptive) and evaluate their effectiveness on the Aesthetic Visual Analysis (AVA) benchmark. Numerical results show that our approach outperforms existing methods by a large margin. We further verify the effectiveness of the proposed attention-based objectives via ablation studies and shed light on the design of aesthetic assessment systems.", "title": "" }, { "docid": "a88b5c0c627643e0d7b17649ac391859", "text": "Abduction is a useful decision problem that is related to diagnostics. Given some observation in form of a set of axioms, that is not entailed by a knowledge base, we are looking for explanations, sets of axioms, that can be added to the knowledge base in order to entail the observation. ABox abduction limits both observations and explanations to ABox assertions. In this work we focus on direct tableau-based approach to answer ABox abduction. We develop an ABox abduction algorithm for the ALCHO DL, that is based on Reiter’s minimal hitting set algorithm. We focus on the class of explanations allowing atomic and negated atomic concept assertions, role assertions, and negated role assertions. The algorithm is sound and complete for this class. The algorithm was also implemented, on top of the Pellet reasoner.", "title": "" }, { "docid": "f783860e569d9f179466977db544bd01", "text": "In medical research, continuous variables are often converted into categorical variables by grouping values into two or more categories. We consider in detail issues pertaining to creating just two groups, a common approach in clinical research. We argue that the simplicity achieved is gained at a cost; dichotomization may create rather than avoid problems, notably a considerable loss of power and residual confounding. In addition, the use of a data-derived 'optimal' cutpoint leads to serious bias. We illustrate the impact of dichotomization of continuous predictor variables using as a detailed case study a randomized trial in primary biliary cirrhosis. 
Dichotomization of continuous data is unnecessary for statistical analysis and in particular should not be applied to explanatory variables in regression models.", "title": "" }, { "docid": "f14757e2e1d893b5cc0c7498f531d0e0", "text": "A new irradiation facility has been developed in the RA-3 reactor in order to perform trials for the treatment of liver metastases using boron neutron capture therapy (BNCT). RA-3 is a production research reactor that works continuously five days a week. It had a thermal column with a small cross section access tunnel that was not accessible during operation. The objective of the work was to perform the necessary modifications to obtain a facility for irradiating a portion of the human liver. This irradiation facility must be operated without disrupting the normal reactor schedule and requires a highly thermalized neutron spectrum, a thermal flux of around 10(10) n cm(-2)s(-1) that is as isotropic and uniform as possible, as well as on-line instrumentation. The main modifications consist of enlarging the access tunnel inside the thermal column to the suitable dimensions, reducing the gamma dose rate at the irradiation position, and constructing properly shielded entrance gates enabled by logical control to safely irradiate and withdraw samples with the reactor at full power. Activation foils and a neutron shielded graphite ionization chamber were used for a preliminary in-air characterization of the irradiation site. The constructed facility is very practical and easy to use. Operational authorization was obtained from radioprotection personnel after confirming radiation levels did not significantly increase after the modification. A highly thermalized and homogenous irradiation field was obtained. Measurements in the empty cavity showed a thermal flux near 10(10) n cm(-2)s(-1), a cadmium ratio of 4100 for gold foils and a gamma dose rate of approximately 5 Gy h(-1).", "title": "" }, { "docid": "799904b20f1174f01c0d2dd87c57e097", "text": "ix", "title": "" }, { "docid": "90c8deec8869977ac5e3feb9a6037569", "text": "Want to get experience? Want to get any ideas to create new things in your life? Read memory a contribution to experimental psychology now! By reading this book as soon as possible, you can renew the situation to get the inspirations. Yeah, this way will lead you to always think more and more. In this case, this book will be always right for you. When you can observe more about the book, you will know why you need this.", "title": "" }, { "docid": "723bfb5acef53d78a05660e5d9710228", "text": "Cheap micro-controllers, such as the Arduino or other controllers based on the Atmel AVR CPUs are being deployed in a wide variety of projects, ranging from sensors networks to robotic submarines. In this paper, we investigate the feasibility of using the Arduino as a true random number generator (TRNG). The Arduino Reference Manual recommends using it to seed a pseudo random number generator (PRNG) due to its ability to read random atmospheric noise from its analog pins. This is an enticing application since true bits of entropy are hard to come by. Unfortunately, we show by statistical methods that the atmospheric noise of an Arduino is largely predictable in a variety of settings, and is thus a weak source of entropy. We explore various methods to extract true randomness from the micro-controller and conclude that it should not be used to produce randomness from its analog pins.", "title": "" } ]
scidocsrr
40c24a69387dd3269018b94f2ee88032
University of Mannheim @ CLSciSumm-17: Citation-Based Summarization of Scientific Articles Using Semantic Textual Similarity
[ { "docid": "16de36d6bf6db7c294287355a44d0f61", "text": "The Computational Linguistics (CL) Summarization Pilot Task was created to encourage a community effort to address the research problem of summarizing research articles as “faceted summaries” in the domain of computational linguistics. In this pilot stage, a handannotated set of citing papers was provided for ten reference papers to help in automating the citation span and discourse facet identification problems. This paper details the corpus construction efforts by the organizers and the participating teams, who also participated in the task-based evaluation. The annotated development corpus used for this pilot task is publicly available at: https://github.com/WING-", "title": "" }, { "docid": "ce2ef27f032d30ce2bc6aa5509a58e49", "text": "Bibliometric measures are commonly used to estimate the popularity and the impact of published research. Existing bibliometric measures provide “quantitative” indicators of how good a published paper is. This does not necessarily reflect the “quality” of the work presented in the paper. For example, when hindex is computed for a researcher, all incoming citations are treated equally, ignoring the fact that some of these citations might be negative. In this paper, we propose using NLP to add a “qualitative” aspect to biblometrics. We analyze the text that accompanies citations in scientific articles (which we term citation context). We propose supervised methods for identifying citation text and analyzing it to determine the purpose (i.e. author intention) and the polarity (i.e. author sentiment) of citation.", "title": "" } ]
[ { "docid": "4dcdb2520ec5f9fc9c32f2cbb343808c", "text": "Shannon’s mathematical theory of communication defines fundamental limits on how much information can be transmitted between the different components of any man-made or biological system. This paper is an informal but rigorous introduction to the main ideas implicit in Shannon’s theory. An annotated reading list is provided for further reading.", "title": "" }, { "docid": "6dbf49c714f6e176273317d4274b93de", "text": "Categorical compositional distributional model of [9] sug gests a way to combine grammatical composition of the formal, type logi cal models with the corpus based, empirical word representations of distribut ional semantics. This paper contributes to the project by expanding the model to al so capture entailment relations. This is achieved by extending the representatio s of words from points in meaning space to density operators, which are probabilit y d stributions on the subspaces of the space. A symmetric measure of similarity an d an asymmetric measure of entailment is defined, where lexical entailment i s measured using von Neumann entropy, the quantum variant of Kullback-Leibl er divergence. Lexical entailment, combined with the composition map on wo rd representations, provides a method to obtain entailment relations on the leve l of sentences. Truth theoretic and corpus-based examples are provided.", "title": "" }, { "docid": "d350335bab7278f5c8c0d9ceb0e6b50b", "text": "New remote sensing sensors now acquire high spatial and spectral Satellite Image Time Series (SITS) of the world. These series of images are a key component of classification systems that aim at obtaining up-to-date and accurate land cover maps of the Earth’s surfaces. More specifically, the combination of the temporal, spectral and spatial resolutions of new SITS makes possible to monitor vegetation dynamics. Although traditional classification algorithms, such as Random Forest (RF), have been successfully applied for SITS classification, these algorithms do not make the most of the temporal domain. Conversely, some approaches that take into account the temporal dimension have recently been tested, especially Recurrent Neural Networks (RNNs). This paper proposes an exhaustive study of another deep learning approaches, namely Temporal Convolutional Neural Networks (TempCNNs) where convolutions are applied in the temporal dimension. The goal is to quantitatively and qualitatively evaluate the contribution of TempCNNs for SITS classification. This paper proposes a set of experiments performed on one million time series extracted from 46 Formosat-2 images. The experimental results show that TempCNNs are more accurate than RF and RNNs, that are the current state of the art for SITS classification. We also highlight some differences with results obtained in computer vision, e.g. about pooling layers. Moreover, we provide some general guidelines on the network architecture, common regularization mechanisms, and hyper-parameter values such as batch size. Finally, we assess the visual quality of the land cover maps produced by TempCNNs.", "title": "" }, { "docid": "4db9cf56991edae0f5ca34546a8052c4", "text": "This chapter presents a survey of interpolation and resampling techniques in the context of exact, separable interpolation of regularly sampled data. 
In this context, the traditional view of interpolation is to represent an arbitrary continuous function as a discrete sum of weighted and shifted synthesis functions—in other words, a mixed convolution equation. An important issue is the choice of adequate synthesis functions that satisfy interpolation properties. Examples of finite-support ones are the square pulse (nearest-neighbor interpolation), the hat function (linear interpolation), the cubic Keys' function, and various truncated or windowed versions of the sinc function. On the other hand, splines provide examples of infinite-support interpolation functions that can be realized exactly at a finite, surprisingly small computational cost. We discuss implementation issues and illustrate the performance of each synthesis function. We also highlight several artifacts that may arise when performing interpolation, such as ringing, aliasing, blocking and blurring. We explain why the approximation order inherent in the synthesis function is important to limit these interpolation artifacts, which motivates the use of splines as a tunable way to keep them in check without any significant cost penalty. I. I NTRODUCTION Interpolation is a technique that pervades many an application. Interpolation is almost never the goal in itself, yet it affects both the desired results and the ways to obtain them. Notwithstanding its nearly universal relevance, some authors give it less importance than it deserves, perhaps because considerations on interpolation are felt as being paltry when compared to the description of a more inspiring grand scheme of things of some algorithm or method. Due to this indifference, it appears as if the basic principles that underlie interpolation might be sometimes cast aside, or even misunderstood. The goal of this chapter is to refresh the notions encountered in classical interpolation, as well as to introduce the reader to more general approaches. 1.1. Definition What is interpolation? Several answers coexist. One of them defines interpolation as an informed estimate of the unknown [1]. We prefer the following—admittedly less concise—definition: modelbased recovery of continuous data from discrete data within a known range of abscissa. The reason for this preference is to allow for a clearer distinction between interpolation and extrapolation. The former postulates the existence of a known range where the model applies, and asserts that the deterministicallyrecovered continuous data is entirely described by the discrete data, while the latter authorizes the use of the model outside of the known range, with the implicit assumption that the model is \"good\" near data samples, and possibly less good elsewhere. Finally, the three most important hypothesis for interpolation are:", "title": "" }, { "docid": "25bd9169c68ff39ee3a7edbdb65f1aa2", "text": "Social networks such as Twitter and Facebook are important and widely used communication environments that exhibit scale, complexity, node interaction, and emergent behavior. In this paper, we analyze emergent behavior in Twitter and propose a definition of emergent behavior focused on the pervasiveness of a topic within a community. We extend an existing stochastic model for user behavior, focusing on advocate-follower relationships. The new user posting model includes retweets, replies, and mentions as user responses. 
To capture emergence, we propose a RPBS (Rising, Plateau, Burst and Stabilization) topic pervasiveness model with a new metric that captures how frequent and in what form the community is talking about a particular topic. Our initial validation compares our model with four Twitter datasets. Our extensive experimental analysis allows us to explore several “what-if” scenarios with respect to topic and knowledge sharing, showing how a pervasive topic evolves given various popularity scenarios.", "title": "" }, { "docid": "e9f9d022007833ab7ae928619641e1b1", "text": "BACKGROUND\nDissemination and implementation of health care interventions are currently hampered by the variable quality of reporting of implementation research. Reporting of other study types has been improved by the introduction of reporting standards (e.g. CONSORT). We are therefore developing guidelines for reporting implementation studies (StaRI).\n\n\nMETHODS\nUsing established methodology for developing health research reporting guidelines, we systematically reviewed the literature to generate items for a checklist of reporting standards. We then recruited an international, multidisciplinary panel for an e-Delphi consensus-building exercise which comprised an initial open round to revise/suggest a list of potential items for scoring in the subsequent two scoring rounds (scale 1 to 9). Consensus was defined a priori as 80% agreement with the priority scores of 7, 8, or 9.\n\n\nRESULTS\nWe identified eight papers from the literature review from which we derived 36 potential items. We recruited 23 experts to the e-Delphi panel. Open round comments resulted in revisions, and 47 items went forward to the scoring rounds. Thirty-five items achieved consensus: 19 achieved 100% agreement. Prioritised items addressed the need to: provide an evidence-based justification for implementation; describe the setting, professional/service requirements, eligible population and intervention in detail; measure process and clinical outcomes at population level (using routine data); report impact on health care resources; describe local adaptations to the implementation strategy and describe barriers/facilitators. Over-arching themes from the free-text comments included balancing the need for detailed descriptions of interventions with publishing constraints, addressing the dual aims of reporting on the process of implementation and effectiveness of the intervention and monitoring fidelity to an intervention whilst encouraging adaptation to suit diverse local contexts.\n\n\nCONCLUSIONS\nWe have identified priority items for reporting implementation studies and key issues for further discussion. An international, multidisciplinary workshop, where participants will debate the issues raised, clarify specific items and develop StaRI standards that fit within the suite of EQUATOR reporting guidelines, is planned.\n\n\nREGISTRATION\nThe protocol is registered with Equator: http://www.equator-network.org/library/reporting-guidelines-under-development/#17 .", "title": "" }, { "docid": "e2cf52f0625af866c8842fb3d5c49d04", "text": "Human immunodeficiency virus type 1 (HIV-1) can infect nondividing cells via passing through the nuclear pore complex. The nuclear membrane-imbedded protein SUN2 was recently reported to be involved in the nuclear import of HIV-1. Whether SUN1, which shares many functional similarities with SUN2, is involved in this process remained to be explored. 
Here we report that overexpression of SUN1 specifically inhibited infection by HIV-1 but not that by simian immunodeficiency virus (SIV) or murine leukemia virus (MLV). Overexpression of SUN1 did not affect reverse transcription but led to reduced accumulation of the 2-long-terminal-repeat (2-LTR) circular DNA and integrated viral DNA, suggesting a block in the process of nuclear import. HIV-1 CA was mapped as a determinant for viral sensitivity to SUN1. Treatment of SUN1-expressing cells with cyclosporine (CsA) significantly reduced the sensitivity of the virus to SUN1, and an HIV-1 mutant containing CA-G89A, which does not interact with cyclophilin A (CypA), was resistant to SUN1 overexpression. Downregulation of endogenous SUN1 inhibited the nuclear entry of the wild-type virus but not that of the G89A mutant. These results indicate that SUN1 participates in the HIV-1 nuclear entry process in a manner dependent on the interaction of CA with CypA.IMPORTANCE HIV-1 infects both dividing and nondividing cells. The viral preintegration complex (PIC) can enter the nucleus through the nuclear pore complex. It has been well known that the viral protein CA plays an important role in determining the pathways by which the PIC enters the nucleus. In addition, the interaction between CA and the cellular protein CypA has been reported to be important in the selection of nuclear entry pathways, though the underlying mechanisms are not very clear. Here we show that both SUN1 overexpression and downregulation inhibited HIV-1 nuclear entry. CA played an important role in determining the sensitivity of the virus to SUN1: the regulatory activity of SUN1 toward HIV-1 relied on the interaction between CA and CypA. These results help to explain how SUN1 is involved in the HIV-1 nuclear entry process.", "title": "" }, { "docid": "345e46da9fc01a100f10165e82d9ca65", "text": "We present a new theoretical framework for analyzing and learning artificial neural networks. Our approach simultaneously and adaptively learns both the structure of the network as well as its weights. The methodology is based upon and accompanied by strong data-dependent theoretical learning guarantees, so that the final network architecture provably adapts to the complexity of any given problem.", "title": "" }, { "docid": "fceb43462f77cf858ef9747c1c5f0728", "text": "MapReduce has become a dominant parallel computing paradigm for big data, i.e., colossal datasets at the scale of tera-bytes or higher. Ideally, a MapReduce system should achieve a high degree of load balancing among the participating machines, and minimize the space usage, CPU and I/O time, and network transfer at each machine. Although these principles have guided the development of MapReduce algorithms, limited emphasis has been placed on enforcing serious constraints on the aforementioned metrics simultaneously. This paper presents the notion of minimal algorithm, that is, an algorithm that guarantees the best parallelization in multiple aspects at the same time, up to a small constant factor. 
We show the existence of elegant minimal algorithms for a set of fundamental database problems, and demonstrate their excellent performance with extensive experiments.", "title": "" }, { "docid": "3bf37b20679ca6abd022571e3356e95d", "text": "OBJECTIVE\nOur goal is to create an ontology that will allow data integration and reasoning with subject data to classify subjects, and based on this classification, to infer new knowledge on Autism Spectrum Disorder (ASD) and related neurodevelopmental disorders (NDD). We take a first step toward this goal by extending an existing autism ontology to allow automatic inference of ASD phenotypes and Diagnostic & Statistical Manual of Mental Disorders (DSM) criteria based on subjects' Autism Diagnostic Interview-Revised (ADI-R) assessment data.\n\n\nMATERIALS AND METHODS\nKnowledge regarding diagnostic instruments, ASD phenotypes and risk factors was added to augment an existing autism ontology via Ontology Web Language class definitions and semantic web rules. We developed a custom Protégé plugin for enumerating combinatorial OWL axioms to support the many-to-many relations of ADI-R items to diagnostic categories in the DSM. We utilized a reasoner to infer whether 2642 subjects, whose data was obtained from the Simons Foundation Autism Research Initiative, meet DSM-IV-TR (DSM-IV) and DSM-5 diagnostic criteria based on their ADI-R data.\n\n\nRESULTS\nWe extended the ontology by adding 443 classes and 632 rules that represent phenotypes, along with their synonyms, environmental risk factors, and frequency of comorbidities. Applying the rules on the data set showed that the method produced accurate results: the true positive and true negative rates for inferring autistic disorder diagnosis according to DSM-IV criteria were 1 and 0.065, respectively; the true positive rate for inferring ASD based on DSM-5 criteria was 0.94.\n\n\nDISCUSSION\nThe ontology allows automatic inference of subjects' disease phenotypes and diagnosis with high accuracy.\n\n\nCONCLUSION\nThe ontology may benefit future studies by serving as a knowledge base for ASD. In addition, by adding knowledge of related NDDs, commonalities and differences in manifestations and risk factors could be automatically inferred, contributing to the understanding of ASD pathophysiology.", "title": "" }, { "docid": "7e264804d56cab24454c59fe73b51884", "text": "General Douglas MacArthur remarked that \"old soldiers never die; they just fade away.\" For decades, researchers have concluded that visual working memories, like old soldiers, fade away gradually, becoming progressively less precise as they are retained for longer periods of time. However, these conclusions were based on threshold-estimation procedures in which the complete termination of a memory could artifactually produce the appearance of lower precision. Here, we use a recall-based visual working memory paradigm that provides separate measures of the probability that a memory is available and the precision of the memory when it is available. Using this paradigm, we demonstrate that visual working memory representations may be retained for several seconds with little or no loss of precision, but that they may terminate suddenly and completely during this period.", "title": "" }, { "docid": "d19503f965e637089d9fa200329f1349", "text": "Almost a half century ago, regular endurance exercise was shown to improve the capacity of skeletal muscle to oxidize substrates to produce ATP for muscle work. 
Since then, adaptations in skeletal muscle mRNA level were shown to happen with a single bout of exercise. Protein changes occur within days if daily endurance exercise continues. Some of the mRNA and protein changes cause increases in mitochondrial concentrations. One mitochondrial adaptation that occurs is an increase in fatty acid oxidation at a given absolute, submaximal workload. Mechanisms have been described as to how endurance training increases mitochondria. Importantly, Pgc-1α is a master regulator of mitochondrial biogenesis by increasing many mitochondrial proteins. However, not all adaptations to endurance training are associated with increased mitochondrial concentrations. Recent evidence suggests that the energetic demands of muscle contraction are by themselves stronger controllers of body weight and glucose control than is muscle mitochondrial content. Endurance exercise has also been shown to regulate the processes of mitochondrial fusion and fission. Mitophagy removes damaged mitochondria, a process that maintains mitochondrial quality. Skeletal muscle fibers are composed of different phenotypes, which are based on concentrations of mitochondria and various myosin heavy chain protein isoforms. Endurance training at physiological levels increases type IIa fiber type with increased mitochondria and type IIa myosin heavy chain. Endurance training also improves capacity of skeletal muscle blood flow. Endurance athletes possess enlarged arteries, which may also exhibit decreased wall thickness. VEGF is required for endurance training-induced increases in capillary-muscle fiber ratio and capillary density.", "title": "" }, { "docid": "58b957db2e72d76e5ee1fc5102df7dc1", "text": "This paper presents a novel inverse kinematics solution for robotic arm based on artificial neural network (ANN) architecture. The motion of robotic arm is controlled by the kinematics of ANN. A new artificial neural network approach for inverse kinematics is proposed. The novelty of the proposed ANN is the inclusion of the feedback of current joint angles configuration of robotic arm as well as the desired position and orientation in the input pattern of neural network, while the traditional ANN has only the desired position and orientation of the end effector in the input pattern of neural network. In this paper, a six DOF Denso robotic arm with a gripper is controlled by ANN. The comprehensive experimental results proved the applicability and the efficiency of the proposed approach in robotic motion control. The inclusion of current configuration of joint angles in ANN significantly increased the accuracy of ANN estimation of the joint angles output. The new controller design has advantages over the existing techniques for minimizing the position error in unconventional tasks and increasing the accuracy of ANN in estimation of robot's joint angles.", "title": "" }, { "docid": "ba966c2fc67b88d26a3030763d56ed1a", "text": "Design of a long read-range, reconfigurable operating frequency radio frequency identification (RFID) metal tag is proposed in this paper. The antenna structure consists of two nonconnected load bars and two bowtie patches electrically connected through four pairs of vias to a conducting backplane to form a looped-bowtie RFID tag antenna that is suitable for mounting on metallic objects. The design offers more degrees of freedom to tune the input impedance of the proposed antenna. 
The load bars, which have a cutoff point on each bar, can be used to reconfigure the operating frequency of the tag by exciting any one of the three possible frequency modes; hence, this tag can be used worldwide for the UHF RFID frequency band. Experimental tests show that the maximum read ranges of the prototype, placed on a metallic object, are found to be 3.0, 3.2, and 3.3 m, respectively, for the three operating modes, measured with an RFID reader using only 0.4 W equivalent isotropically radiated power (EIRP). The paper shows that the simulated and measured results are in good agreement with each other.", "title": "" }, { "docid": "84963fdc37a3beb8eebc8d5626b53428", "text": "A fundamental assumption in software security is that memory contents do not change unless there is a legitimate deliberate modification. Classical fault attacks show that this assumption does not hold if the attacker has physical access. Rowhammer attacks showed that local code execution is already sufficient to break this assumption. Rowhammer exploits parasitic effects in DRAM to modify the content of a memory cell without accessing it. Instead, other memory locations are accessed at a high frequency. All Rowhammer attacks so far were local attacks, running either in a scripted language or native code. In this paper, we present Nethammer. Nethammer is the first truly remote Rowhammer attack, without a single attacker-controlled line of code on the targeted system. Systems that use uncached memory or flush instructions while handling network requests, e.g., for interaction with the network device, can be attacked using Nethammer. Other systems can still be attacked if they are protected with quality-of-service techniques like Intel CAT. We demonstrate that the frequency of the cache misses is in all three cases high enough to induce bit flips. We evaluated different bit flip scenarios. Depending on the location, the bit flip compromises either the security and integrity of the system and the data of its users, or it can leave persistent damage on the system, i.e., persistent denial of service. We investigated Nethammer on personal computers, servers, and mobile phones. Nethammer is a security landslide, making the formerly local attack a remote attack. With this work we invalidate all defenses and mitigation strategies against Rowhammer built upon the assumption of a local attacker. Consequently, this paradigm shift impacts the security of millions of devices where the attacker is not able to execute attacker-controlled code. Nethammer requires threat models to be re-evaluated for most network-connected systems. We discuss state-of-the-art countermeasures and show that most of them have no effect on our attack, including the target-row-refresh (TRR) countermeasure of modern hardware. Disclaimer: This work on Rowhammer attacks over the network was conducted independently and unaware of other research groups working on truly remote Rowhammer attacks. Experiments and observations presented in this paper predate the publication of the Throwhammer attack by Tatar et al. [81]. We will thoroughly study the differences between both papers and compare the advantages and disadvantages in a future version of this paper.", "title": "" }, { "docid": "7d7c596d334153f11098d9562753a1ee", "text": "The design of systems for intelligent control of urban traffic is important in providing a safe environment for pedestrians and motorists. 
Artificial neural networks (ANNs) (learning systems) and expert systems (knowledge-based systems) have been extensively explored as approaches for decision making. While the ANNs compute decisions by learning from successfully solved examples, the expert systems rely on a knowledge base developed by human reasoning for decision making. It is possible to integrate the learning abilities of an ANN and the knowledge-based decision-making ability of the expert system. This paper presents a real-time intelligent decision making system, IDUTC, for urban traffic control applications. The system integrates a backpropagation-based ANN that can learn and adapt to the dynamically changing environment and a fuzzy expert system for decision making. The performance of the proposed intelligent decision-making system is evaluated by mapping it to the adaptable traffic light control problem. The application is implemented using the ANN approach, the FES approach, and the proposed integrated system approach. The results of extensive simulations using the three approaches indicate that the integrated system provides better performance and leads to a more efficient implementation than the other two approaches.", "title": "" }, { "docid": "8914e1a38db6b47f4705f0c684350d38", "text": "Style transfer is the task of rephrasing the text to contain specific stylistic properties without changing the intent or affect within the context. This paper introduces a new method for automatic style transfer. We first learn a latent representation of the input sentence which is grounded in a language translation model in order to better preserve the meaning of the sentence while reducing stylistic properties. Then adversarial generation techniques are used to make the output match the desired style. We evaluate this technique on three different style transformations: sentiment, gender and political slant. Compared to two state-of-the-art style transfer modeling techniques we show improvements both in automatic evaluation of style transfer and in manual evaluation of meaning preservation and fluency.", "title": "" }, { "docid": "62d63357923c5a7b1ea21b8448e3cba3", "text": "This paper presents a monocular and purely vision based pedestrian trajectory tracking and prediction framework with integrated map-based hazard inference. In Advanced Driver Assistance systems research, a lot of effort has been put into pedestrian detection over the last decade, and several pedestrian detection systems are indeed showing impressive results. Considerably less effort has been put into processing the detections further. We present a tracking system for pedestrians, which based on detection bounding boxes tracks pedestrians and is able to predict their positions in the near future. The tracking system is combined with a module which, based on the car's GPS position acquires a map and uses the road information in the map to know where the car can drive. Then the system warns the driver about pedestrians at risk, by combining the information about hazardous areas for pedestrians with a probabilistic position prediction for all observed pedestrians.", "title": "" }, { "docid": "21822a9c37a315e6282200fe605debfe", "text": "This paper provides a survey on speech recognition and discusses the techniques and systems that enable computers to accept speech as input. This paper shows the major developments in the field of speech recognition. 
This paper highlights the speech recognition techniques and provides a brief description of the four stages into which speech recognition techniques are classified. In addition, this paper gives a description of four feature extraction techniques: Linear Predictive Coding (LPC), Mel-frequency cepstral coefficients (MFCCs), RASTA filtering and Probabilistic Linear Discriminant Analysis (PLDA). The objective of this paper is to summarize the feature extraction techniques used in speech recognition systems.", "title": "" }, { "docid": "732fd5463462d11451d78d97dc821d78", "text": "Since sensors have limited range and coverage, mobile robots often have to make decisions on where to point their sensors. A good sensing strategy allows a robot to collect information that is useful for its tasks. Most existing solutions to this active sensing problem choose the direction that maximally reduces the uncertainty in a single state variable. In more complex problem domains, however, uncertainties exist in multiple state variables, and they affect the performance of the robot in different ways. The robot thus needs to have more sophisticated sensing strategies in order to decide which uncertainties to reduce, and to make the correct trade-offs. In this work, we apply a least squares reinforcement learning method to solve this problem. We implemented and tested the learning approach in the RoboCup domain, where the robot attempts to reach a ball and accurately kick it into the goal. We present experimental results that suggest our approach is able to learn highly effective sensing strategies.", "title": "" } ]
scidocsrr
ee92ea3d8841fa379ff3ff4b3bf68fcb
Puberty suppression in gender identity disorder: the Amsterdam experience
[ { "docid": "fe2b8921623f3bcf7b8789853b45e912", "text": "OBJECTIVE\nTo establish the psychosexual outcome of gender-dysphoric children at 16 years or older and to examine childhood characteristics related to psychosexual outcome.\n\n\nMETHOD\nWe studied 77 children who had been referred in childhood to our clinic because of gender dysphoria (59 boys, 18 girls; mean age 8.4 years, age range 5-12 years). In childhood, we measured the children's cross-gender identification and discomfort with their own sex and gender roles. At follow-up 10.4 +/- 3.4 years later, 54 children (mean age 18.9 years, age range 16-28 years) agreed to participate. In this group, we assessed gender dysphoria and sexual orientation.\n\n\nRESULTS\nAt follow-up, 30% of the 77 participants (19 boys and 4 girls) did not respond to our recruiting letter or were not traceable; 27% (12 boys and 9 girls) were still gender dysphoric (persistence group), and 43% (desistance group: 28 boys and 5 girls) were no longer gender dysphoric. Both boys and girls in the persistence group were more extremely cross-gendered in behavior and feelings and were more likely to fulfill gender identity disorder (GID) criteria in childhood than the children in the other two groups. At follow-up, nearly all male and female participants in the persistence group reported having a homosexual or bisexual sexual orientation. In the desistance group, all of the girls and half of the boys reported having a heterosexual orientation. The other half of the boys in the desistance group had a homosexual or bisexual sexual orientation.\n\n\nCONCLUSIONS\nMost children with gender dysphoria will not remain gender dysphoric after puberty. Children with persistent GID are characterized by more extreme gender dysphoria in childhood than children with desisting gender dysphoria. With regard to sexual orientation, the most likely outcome of childhood GID is homosexuality or bisexuality.", "title": "" }, { "docid": "3f292307824ed0b4d7fd59824ff9dd2b", "text": "The aim of this qualitative study was to obtain a better understanding of the developmental trajectories of persistence and desistence of childhood gender dysphoria and the psychosexual outcome of gender dysphoric children. Twenty five adolescents (M age 15.88, range 14-18), diagnosed with a Gender Identity Disorder (DSM-IV or DSM-IV-TR) in childhood, participated in this study. Data were collected by means of biographical interviews. Adolescents with persisting gender dysphoria (persisters) and those in whom the gender dysphoria remitted (desisters) indicated that they considered the period between 10 and 13 years of age to be crucial. They reported that in this period they became increasingly aware of the persistence or desistence of their childhood gender dysphoria. Both persisters and desisters stated that the changes in their social environment, the anticipated and actual feminization or masculinization of their bodies, and the first experiences of falling in love and sexual attraction had influenced their gender related interests and behaviour, feelings of gender discomfort and gender identification. Although, both persisters and desisters reported a desire to be the other gender during childhood years, the underlying motives of their desire seemed to be different.", "title": "" } ]
[ { "docid": "51e78c504a3977ea7e706da7e3a06c25", "text": "This work introduces an affordance characterization employing mechanical wrenches as a metric for predicting and planning with workspace affordances. Although affordances are a commonly used high-level paradigm for robotic task-level planning and learning, the literature has been sparse regarding how to characterize the agent in this object-agent-environment framework. In this work, we propose decomposing a behavior into a vocabulary of characteristic requirements and capabilities that are suitable to predict the affordances of various parts of the workspace. Specifically, we investigate mechanical wrenches as a viable representation of these affordance requirements and capabilities. We then use this vocabulary in a planning system to compose complex motions from simple behavior types in continuous space. The utility of the framework for complex planning is demonstrated on example scenarios both in simulation and with real-world industrial manipulators.", "title": "" }, { "docid": "0eb659fd66ad677f90019f7214aae7e8", "text": "In this article a relational database schema for a bibliometric database is developed. After the introduction explaining the motivation to use relational databases in bibliometrics, an overview of the related literature is given. A review of typical bibliometric questions serves as an informal requirement analysis. The database schema is developed as an entity-relationship diagram using the structural information typically found in scientific articles. Several SQL queries for the tasks presented in the requirement analysis show the usefulness of the developed database schema.", "title": "" }, { "docid": "d74df8673db783ff80d01f2ccc0fe5bf", "text": "The search for strategies to mitigate undesirable economic, ecological, and social effects of harmful resource consumption has become an important, socially relevant topic. An obvious starting point for businesses that wish to make value creation more sustainable is to increase the utilization rates of existing resources. Modern social Internet technology is an effective means by which to achieve IT-enabled sharing services, which make idle resource capacity owned by one entity accessible to others who need them but do not want to own them. Successful sharing services require synchronized participation of providers and users of resources. The antecedents of the participation behavior of providers and users has not been systematically addressed by the extant literature. This article therefore proposes a model that explains and predicts the participation behavior in sharing services. Our search for a theoretical foundation revealed the Theory of Planned Behavior as most appropriate lens, because this theory enables us to integrate provider behavior and user behavior as constituents of participation behavior. The model is novel for that it is the first attempt to study the interdependencies between the behavior types in sharing service participation and for that it includes both general and specific determinants of the participation behavior.", "title": "" }, { "docid": "90a1fc43ee44634bce3658463503994e", "text": "Large-scale distributed training requires significant communication bandwidth for gradient exchange that limits the scalability of multi-node training, and requires expensive high-bandwidth network infrastructure. 
The situation gets even worse with distributed training on mobile devices (federated learning), which suffers from higher latency, lower throughput, and intermittent poor connections. In this paper, we find 99.9% of the gradient exchange in distributed SGD are redundant, and propose Deep Gradient Compression (DGC) to greatly reduce the communication bandwidth. To preserve accuracy during this compression, DGC employs four methods: momentum correction, local gradient clipping, momentum factor masking, and warm-up training. We have applied Deep Gradient Compression to image classification, speech recognition, and language modeling with multiple datasets including Cifar10, ImageNet, Penn Treebank, and Librispeech Corpus. On these scenarios, Deep Gradient Compression achieves a gradient compression ratio from 270× to 600× without losing accuracy, cutting the gradient size of ResNet-50 from 97MB to 0.35MB, and for DeepSpeech from 488MB to 0.74MB. Deep gradient compression enables large-scale distributed training on inexpensive commodity 1Gbps Ethernet and facilitates distributed training on mobile.", "title": "" }, { "docid": "b4c5ddab0cb3e850273275843d1f264f", "text": "The increase of malware that are exploiting the Internet daily has become a serious threat. The manual heuristic inspection of malware analysis is no longer considered effective and efficient compared against the high spreading rate of malware. Hence, automated behavior-based malware detection using machine learning techniques is considered a profound solution. The behavior of each malware on an emulated (sandbox) environment will be automatically analyzed and will generate behavior reports. These reports will be preprocessed into sparse vector models for further machine learning (classification). The classifiers used in this research are k-Nearest Neighbors (kNN), Naïve Bayes, J48 Decision Tree, Support Vector Machine (SVM), and Multilayer Perceptron Neural Network (MlP). Based on the analysis of the tests and experimental results of all the 5 classifiers, the overall best performance was achieved by J48 decision tree with a recall of 95.9%, a false positive rate of 2.4%, a precision of 97.3%, and an accuracy of 96.8%. In summary, it can be concluded that a proof-of-concept based on automatic behavior-based malware analysis and the use of machine learning techniques could detect malware quite effectively and efficiently.", "title": "" }, { "docid": "e96fddd8058e3dc98eb9f73aa387c9f9", "text": "There is often the need to perform sentiment classification in a particular domain where no labeled document is available. Although we could make use of a general-purpose off-the-shelf sentiment classifier or a pre-built one for a different domain, the effectiveness would be inferior. In this paper, we explore the possibility of building domain-specific sentiment classifiers with unlabeled documents only. Our investigation indicates that in the word embeddings learned from the unlabeled corpus of a given domain, the distributed word representations (vectors) for opposite sentiments form distinct clusters, though those clusters are not transferable across domains. Exploiting such a clustering structure, we are able to utilize machine learning algorithms to induce a quality domain-specific sentiment lexicon from just a few typical sentiment words (“seeds”). 
An important finding is that simple linear model based supervised learning algorithms (such as linear SVM) can actually work better than more sophisticated semi-supervised/transductive learning algorithms which represent the state-of-the-art technique for sentiment lexicon induction. The induced lexicon could be applied directly in a lexicon-based method for sentiment classification, but a higher performance could be achieved through a two-phase bootstrapping method which uses the induced lexicon to assign positive/negative sentiment scores to unlabeled documents first, and then uses those documents found to have clear sentiment signals as pseudo-labeled examples to train a document sentiment classifier via supervised learning algorithms (such as LSTM). On several benchmark datasets for document sentiment classification, our end-to-end pipelined approach which is overall unsupervised (except for a tiny set of seed words) outperforms existing unsupervised approaches and achieves an accuracy comparable to that of fully supervised approaches.", "title": "" }, { "docid": "5a07f2e8b28d788673800ff22a6b99b4", "text": "Recently, we introduced a software linearization technique for frequency-modulated continuous-wave (FMCW) radar applications using a nonlinear direct digital synthesizer based frequency source. In this letter, we present a method that uses this unconventional, cost-efficient, basically nonlinear synthesizer concept, but is capable of linearizing the frequency chirp directly in hardware by means of defined sweep predistortion. Additionally, the concept is extended for the generation of defined nonlinear frequency courses and verified on measurements with a 2.45-GHz FMCW radar prototype.", "title": "" }, { "docid": "b552bfedda08c1d040e34472117a15bd", "text": "Four hundred and fifty-nine students from 20 different high school classrooms in Michigan participated in focus group discussions about the character strengths included in the Values in Action Classification. Students were interested in the subject of good character and able to discuss with candor and sophistication instances of each strength. They were especially drawn to the positive traits of leadership, practical intelligence, wisdom, social intelligence, love of learning, spirituality, and the capacity to love and be loved. Students believed that strengths were largely acquired rather than innate and that these strengths developed through ongoing life experience as opposed to formal instruction. They cited an almost complete lack of contemporary role models exemplifying different strengths of character. Implications of these findings for the quantitative assessment of positive traits were discussed, as were implications for designing character education programs for adolescents. We suggest that peers can be an especially important force in encouraging the development and display of good character among youth.", "title": "" }, { "docid": "7916a261319dad5f257a0b8e0fa97fec", "text": "INTRODUCTION\nPreliminary research has indicated that recreational ketamine use may be associated with marked cognitive impairments and elevated psychopathological symptoms, although no study to date has determined how these are affected by differing frequencies of use or whether they are reversible on cessation of use. 
In this study we aimed to determine how variations in ketamine use and abstention from prior use affect neurocognitive function and psychological wellbeing.\n\n\nMETHOD\nWe assessed a total of 150 individuals: 30 frequent ketamine users, 30 infrequent ketamine users, 30 ex-ketamine users, 30 polydrug users and 30 controls who did not use illicit drugs. Cognitive tasks included spatial working memory, pattern recognition memory, the Stockings of Cambridge (a variant of the Tower of London task), simple vigilance and verbal and category fluency. Standardized questionnaires were used to assess psychological wellbeing. Hair analysis was used to verify group membership.\n\n\nRESULTS\nFrequent ketamine users were impaired on spatial working memory, pattern recognition memory, Stockings of Cambridge and category fluency but exhibited preserved verbal fluency and prose recall. There were no differences in the performance of the infrequent ketamine users or ex-users compared to the other groups. Frequent users showed increased delusional, dissociative and schizotypal symptoms which were also evident to a lesser extent in infrequent and ex-users. Delusional symptoms correlated positively with the amount of ketamine used currently by the frequent users.\n\n\nCONCLUSIONS\nFrequent ketamine use is associated with impairments in working memory, episodic memory and aspects of executive function as well as reduced psychological wellbeing. 'Recreational' ketamine use does not appear to be associated with distinct cognitive impairments although increased levels of delusional and dissociative symptoms were observed. As no performance decrements were observed in the ex-ketamine users, it is possible that the cognitive impairments observed in the frequent ketamine group are reversible upon cessation of ketamine use, although delusional symptoms persist.", "title": "" }, { "docid": "64221753135508ef3d041e0aab83039a", "text": "Cryptocurrency platforms such as Bitcoin and Ethereum have become more popular due to decentralized control and the promise of anonymity. Ethereum is particularly powerful due to its support for smart contracts which are implemented through Turing complete scripting languages and digital tokens that represent fungible tradable goods. It is necessary to understand whether de-anonymization is feasible to quantify the promise of anonymity. Cryptocurrencies are increasingly being used in online black markets like Silk Road and ransomware like CryptoLocker and WannaCry. In this paper, we propose a model for persisting transactions from Ethereum into a graph database, Neo4j. We propose leveraging graph compute or analytics against the transactions persisted into a graph database.", "title": "" }, { "docid": "7957ba93e63f753336281fcb31e35cab", "text": "This paper proposed a method that combines Polar Fourier Transform, color moments, and vein features to retrieve leaf images based on a leaf image. The method is very useful to help people in recognizing foliage plants. Foliage plants are plants that have various colors and unique patterns in the leaf. Therefore, the colors and its patterns are information that should be counted on in the processing of plant identification. To compare the performance of retrieving system to other result, the experiments used Flavia dataset, which is very popular in recognizing plants. The result shows that the method gave better performance than PNN, SVM, and Fourier Transform. The method was also tested using foliage plants with various colors. 
The accuracy was 90.80% for 50 kinds of plants.", "title": "" }, { "docid": "84b0d19d5d383ea3fd99e20740ebf5d6", "text": "We propose a robust proactive threshold signature scheme, a multisignature scheme and a blind signature scheme which work in any Gap Diffie-Hellman (GDH) group (where the Computational Diffie-Hellman problem is hard but the Decisional Diffie-Hellman problem is easy). Our constructions are based on the recently proposed GDH signature scheme of Boneh et al. [BLS]. Due to the nice properties of GDH groups and of the base scheme, it turns out that most of our constructions are much simpler, more efficient and have more useful characteristics than similar existing constructions. We support all the proposed schemes with proofs under the appropriate computational assumptions, using the corresponding notions of security.", "title": "" }, { "docid": "6570f9b4f8db85f40a99fb1911aa4967", "text": "Honey bees have played a major role in the history and development of humankind, in particular for nutrition and agriculture. The most important role of the western honey bee (Apis mellifera) is that of pollination. A large amount of crops consumed throughout the world today are pollinated by the activity of the honey bee. It is estimated that the total value of these crops stands at 155 billion euro annually. The goal of the work outlined in this paper was to use wireless sensor network technology to monitor a colony within the beehive with the aim of collecting image and audio data. These data allows the beekeeper to obtain a much more comprehensive view of the in-hive conditions, an indication of flight direction, as well as monitoring the hive outside of the traditional beekeeping times, i.e. during the night, poor weather, and winter months. This paper outlines the design of a fully autonomous beehive monitoring system which provided image and sound monitoring of the internal chambers of the hive, as well as a warning system for emergency events such as possible piping, dramatically increased hive activity, or physical damage to the hive. The final design included three wireless nodes: a digital infrared camera with processing capabilities for collecting imagery of the hive interior; an external thermal imaging camera node for monitoring the colony status and activity, and an accelerometer and a microphone connected to an off the shelf microcontroller node for processing. The system allows complex analysis and sensor fusion. Some scenarios based on sound processing, image collection, and accelerometers are presented. Power management was implemented which allowed the system to achieve energy neutrality in an outdoor deployment with a 525 × 345 mm solar panel.", "title": "" }, { "docid": "404bd4b3c7756c87805fa286415aac43", "text": "Although key techniques for next-generation wireless communication have been explored separately, relatively little work has been done to investigate their potential cooperation for performance optimization. To address this problem, we propose a holistic framework for robust 5G communication based on multiple-input-multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM). 
More specifically, we design a new framework that supports: 1) index modulation based on OFDM (OFDM-IM) [1]; 2) sub-band beamforming and channel estimation to achieve massive path gains by exploiting multiple antenna arrays [2]; and 3) sub-band pre-distortion for peak-to-average-power-ratio (PAPR) reduction [3] to significantly decrease the PAPR and communication errors in OFDM-IM by supporting a linear behavior of the power amplifier in the modem. The performance of the proposed framework is evaluated against the state-of-the-art QPSK, OFDM-IM [1] and spatiotemporal QPSK (QPSK-ST) [2] schemes. The results show that our framework reduces the bit error rate (BER), mean square error (MSE) and PAPR compared to the baselines by approximately 6–13 dB, 8–13 dB, and 50%, respectively.", "title": "" }, { "docid": "447f7e2ddc5607019cd53716abbbb4d4", "text": "In recent years, massive amounts of identified and unidentified facial data have become available—often publicly so—through Web 2.0 applications. So have also the infrastructure and technologies necessary to navigate through those data in real time, matching individuals across online services, independently of their knowledge or consent. In the literature on statistical re-identification [5, 6], an identified database is pinned against an unidentified database in order to recognize individuals in the latter and associate them with information from the former. Many online services make available to visitors identified facial images: social networks such as Facebook and LinkedIn, online services such as Amazon.com profiles, or organizational rosters. Consider Facebook, for example. Most active Facebook users (currently estimated at 1.35 billion monthly active users worldwide [7], with over 250 billion uploaded photos [8]) use photos of themselves as their primary profile image. These photos are often identifiable: Facebook has pursued a ‘real identity’ policy, under which members are expected to appear on the network under their real names under penalty of account cancellation [9]. Using tagging features and login security questions, Facebook has encouraged users to associate their and their friends’ names to uploaded photos. Facebook photos are also frequently publicly available. Primary profile photos must be shared with strangers un-", "title": "" }, { "docid": "6c5a5bc775316efc278285d96107ddc6", "text": "STUDY DESIGN\nRetrospective study of 55 consecutive patients with spinal metastases secondary to breast cancer who underwent surgery.\n\n\nOBJECTIVE\nTo evaluate the predictive value of the Tokuhashi score for life expectancy in patients with breast cancer with spinal metastases.\n\n\nSUMMARY OF BACKGROUND DATA\nThe score, composed of 6 parameters each rated from 0 to 2, has been proposed by Tokuhashi and colleagues for the prognostic assessment of patients with spinal metastases.\n\n\nMETHODS\nA total of 55 patients surgically treated for vertebral metastases secondary to breast cancer were studied. The score was calculated for each patient and, according to Tokuhashi, the patients were divided into 3 groups with different life expectancy according to their total number of scoring points. In a second step, the grouping for prognosis was modified to get a better correlation of the predicted and definitive survival.\n\n\nRESULTS\nApplying the Tokuhashi score for the estimation of life expectancy of patients with breast cancer with vertebral metastases provided very reliable results. 
However, the original analysis by Tokuhashi showed a limited correlation between predicted and real survival for each prognostic group. Therefore, our patients were divided into modified prognostic groups regarding their total number of scoring points, leading to a higher significance of the predicted prognosis in each group (P < 0.0001), and a better correlation of the predicted and real survival.\n\n\nCONCLUSION\nThe modified Tokuhashi score assists in decision making based on reliable estimators of life expectancy in patients with spinal metastases secondary to breast cancer.", "title": "" }, { "docid": "64406c6b0e45eb49743f0789dcb89029", "text": "Hand gesture is one of the typical methods used in sign language for non-verbal communication. Sign gestures are a non-verbal visual language, different from the spoken language, but serving the same function. It is often very difficult for the hearing impaired community to communicate their ideas and creativity to the normal humans. This paper presents a system that will not only automatically recognize the hand gestures but also convert it into corresponding speech output so that speaking impaired person can easily communicate with normal people. The gesture to speech system, G2S, has been developed using the skin colour segmentation. The system consists of camera attached to computer that will take images of hand gestures. Image segmentation & feature extraction algorithm is used to recognize the hand gestures of the signer. According to recognized hand gestures, corresponding pre-recorded sound track will be played.", "title": "" }, { "docid": "b5fd22854e75a29507cde380999705a2", "text": "This study presents a high-efficiency-isolated single-input multiple-output bidirectional (HISMB) converter for a power storage system. According to the power management, the proposed HISMB converter can operate at a step-up state (energy release) and a step-down state (energy storage). At the step-up state, it can boost the voltage of a low-voltage input power source to a high-voltage-side dc bus and middle-voltage terminals. When the high-voltage-side dc bus has excess energy, one can reversely transmit the energy. The high-voltage dc bus can take as the main power, and middle-voltage output terminals can supply powers for individual middle-voltage dc loads or to charge auxiliary power sources (e.g., battery modules). In this study, a coupled-inductor-based HISMB converter accomplishes the bidirectional power control with the properties of voltage clamping and soft switching, and the corresponding device specifications are adequately designed. As a result, the energy of the leakage inductor of the coupled inductor can be recycled and released to the high-voltage-side dc bus and auxiliary power sources, and the voltage stresses on power switches can be greatly reduced. Moreover, the switching losses can be significantly decreased because of all power switches with zero-voltage-switching features. Therefore, the objectives of high-efficiency power conversion, electric isolation, bidirectional energy transmission, and various output voltage with different levels can be obtained. The effectiveness of the proposed HISMB converter is verified by experimental results of a kW-level prototype in practical applications.", "title": "" }, { "docid": "a4dea5e491657e1ba042219401ebcf39", "text": "Beam scanning arrays typically suffer from scan loss; an increasing degradation in gain as the beam is scanned from broadside toward the horizon in any given scan plane. 
Here, a metasurface is presented that reduces the effects of scan loss for a leaky-wave antenna (LWA). The metasurface is simple, being composed of an ultrathin sheet of subwavelength split-ring resonators. The leaky-wave structure is balanced, scanning from the forward region, through broadside, and into the backward region, and designed to scan in the magnetic plane. The metasurface is effectively invisible at broadside, where balanced LWAs are most sensitive to external loading. It is shown that the introduction of the metasurface results in increased directivity, and hence, gain, as the beam is scanned off broadside, having an increasing effect as the beam is scanned to the horizon. Simulations show that the metasurface improves the effective aperture distribution at higher scan angles, resulting in a more directive main beam, while having a negligible impact on cross-polarization gain. Experimental validation results show that the scan range of the antenna is increased from $-39 {^{\\circ }} \\leq \\theta \\leq +32 {^{\\circ }}$ to $-64 {^{\\circ }} \\leq \\theta \\leq +70 {^{\\circ }}$ , when loaded with the metasurface, demonstrating a flattened gain profile over a 135° range centered about broadside. Moreover, this scan range occurs over a frequency band spanning from 9 to 15.5 GHz, demonstrating a relative bandwidth of 53% for the metasurface.", "title": "" }, { "docid": "77cfb72acbc2f077c3d9b909b0a79e76", "text": "In this paper, we analyze two general-purpose encoding types, trees and graphs systematically, focusing on trends over increasingly complex problems. Tree and graph encodings are similar in application but offer distinct advantages and disadvantages in genetic programming. We describe two implementations and discuss their evolvability. We then compare performance using symbolic regression on hundreds of random nonlinear target functions of both 1-dimensional and 8-dimensional cases. Results show the graph encoding has less bias for bloating solutions but is slower to converge and deleterious crossovers are more frequent. The graph encoding however is found to have computational benefits, suggesting it to be an advantageous trade-off between regression performance and computational effort.", "title": "" } ]
scidocsrr
02a3b81a7117985ca5b91ab8868070a6
Towards Neural Theorem Proving at Scale
[ { "docid": "4381ee2e578a640dda05e609ed7f6d53", "text": "We introduce neural networks for end-to-end differentiable proving of queries to knowledge bases by operating on dense vector representations of symbols. These neural networks are constructed recursively by taking inspiration from the backward chaining algorithm as used in Prolog. Specifically, we replace symbolic unification with a differentiable computation on vector representations of symbols using a radial basis function kernel, thereby combining symbolic reasoning with learning subsymbolic vector representations. By using gradient descent, the resulting neural network can be trained to infer facts from a given incomplete knowledge base. It learns to (i) place representations of similar symbols in close proximity in a vector space, (ii) make use of such similarities to prove queries, (iii) induce logical rules, and (iv) use provided and induced logical rules for multi-hop reasoning. We demonstrate that this architecture outperforms ComplEx, a state-of-the-art neural link prediction model, on three out of four benchmark knowledge bases while at the same time inducing interpretable function-free first-order logic rules.", "title": "" }, { "docid": "98cc792a4fdc23819c877634489d7298", "text": "This paper introduces a product quantization-based approach for approximate nearest neighbor search. The idea is to decompose the space into a Cartesian product of low-dimensional subspaces and to quantize each subspace separately. A vector is represented by a short code composed of its subspace quantization indices. The euclidean distance between two vectors can be efficiently estimated from their codes. An asymmetric version increases precision, as it computes the approximate distance between a vector and a code. Experimental results show that our approach searches for nearest neighbors efficiently, in particular in combination with an inverted file system. Results for SIFT and GIST image descriptors show excellent search accuracy, outperforming three state-of-the-art approaches. The scalability of our approach is validated on a data set of two billion vectors.", "title": "" } ]
[ { "docid": "9a63a5db2a40df78a436e7be87f42ff7", "text": "A quantitative, coordinate-based meta-analysis combined data from 354 participants across 22 fMRI studies and one positron emission tomography (PET) study to identify the differences in neural correlates of figurative and literal language processing, and to investigate the role of the right hemisphere (RH) in figurative language processing. Studies that reported peak activations in standard space contrasting figurative vs. literal language processing at whole brain level in healthy adults were included. The left and right IFG, large parts of the left temporal lobe, the bilateral medial frontal gyri (medFG) and an area around the left amygdala emerged for figurative language processing across studies. Conditions requiring exclusively literal language processing did not activate any selective regions in most of the cases, but if so they activated the cuneus/precuneus, right MFG and the right IPL. No general RH advantage for metaphor processing could be found. On the contrary, significant clusters of activation for metaphor conditions were mostly lateralized to the left hemisphere (LH). Subgroup comparisons between experiments on metaphors, idioms, and irony/sarcasm revealed shared activations in left frontotemporal regions for idiom and metaphor processing. Irony/sarcasm processing was correlated with activations in midline structures such as the medFG, ACC and cuneus/precuneus. To test the graded salience hypothesis (GSH, Giora, 1997), novel metaphors were contrasted against conventional metaphors. In line with the GSH, RH involvement was found for novel metaphors only. Here we show that more analytic, semantic processes are involved in metaphor comprehension, whereas irony/sarcasm comprehension involves theory of mind processes.", "title": "" }, { "docid": "57c705e710f99accab3d9242fddc5ac8", "text": "Although much research has been conducted in the area of organizational commitment, few studies have explicitly examined how organizations facilitate commitment among members. Using a sample of 291 respondents from 45 firms, the results of this study show that rigorous recruitment and selection procedures and a strong, clear organizational value system are associated with higher levels of employee commitment based on internalization and identification. Strong organizational career and reward systems are related to higher levels of instrumental or compliance-based commitment.", "title": "" }, { "docid": "f013f58d995693a79cd986a028faff38", "text": "We present the design and implementation of a system for axiomatic programming, and its application to mathematical software construction. Key novelties include a direct support for user-defined axioms establishing local equalities between types, and overload resolution based on equational theories and user-defined local axioms. We illustrate uses of axioms, and their organization into concepts, in structured generic programming as practiced in computational mathematical systems.", "title": "" }, { "docid": "f97d81a177ca629da5fe0d707aec4b8a", "text": "This paper highlights the two machine learning approaches, viz. Rough Sets and Decision Trees (DT), for the prediction of Learning Disabilities (LD) in school-age children, with an emphasis on applications of data mining. Learning disability prediction is a very complicated task. By using these two approaches, we can easily and accurately predict LD in any child and also we can determine the best classification method. 
In this study, in rough sets the attribute reduction and classification are performed using Johnson’s reduction algorithm and Naive Bayes algorithm respectively for rule mining and in construction of decision trees, J48 algorithm is used. From this study, it is concluded that, the performance of decision trees are considerably poorer in several important aspects compared to rough sets. It is found that, for selection of attributes, rough sets is very useful especially in the case of inconsistent data and it also gives the information about the attribute correlation which is very important in the case of learning disability.", "title": "" }, { "docid": "5d154a62b22415cbedd165002853315b", "text": "Unaccompanied immigrant children are a highly vulnerable population, but research into their mental health and psychosocial context remains limited. This study elicited lawyers’ perceptions of the mental health needs of unaccompanied children in U.S. deportation proceedings and their mental health referral practices with this population. A convenience sample of 26 lawyers who work with unaccompanied children completed a semi-structured, online survey. Lawyers surveyed frequently had mental health concerns about their unaccompanied child clients, used clinical and lay terminology to describe symptoms, referred for both expert testimony and treatment purposes, frequently encountered barriers to accessing appropriate services, and expressed interest in mental health training. The results of this study suggest a complex intersection between the legal and mental health needs of unaccompanied children, and the need for further research and improved service provision in support of their wellbeing.", "title": "" }, { "docid": "d6586a261e22e9044425cb27462c3435", "text": "In this work, we develop a planner for high-speed navigation in unknown environments, for example reaching a goal in an unknown building in minimum time, or flying as fast as possible through a forest. This planning task is challenging because the distribution over possible maps, which is needed to estimate the feasibility and cost of trajectories, is unknown and extremely hard to model for real-world environments. At the same time, the worst-case assumptions that a receding-horizon planner might make about the unknown regions of the map may be overly conservative, and may limit performance. Therefore, robots must make accurate predictions about what will happen beyond the map frontiers to navigate as fast as possible. To reason about uncertainty in the map, we model this problem as a POMDP and discuss why it is so difficult given that we have no accurate probability distribution over real-world environments. We then present a novel method of predicting collision probabilities based on training data, which compensates for the missing environment distribution and provides an approximate solution to the POMDP. Extending our previous work, the principal result of this paper is that by using a Bayesian non-parametric learning algorithm that encodes formal safety constraints as a prior over collision probabilities, our planner seamlessly reverts to safe behavior when it encounters a novel environment for which it has no relevant training data. This strategy generalizes our method across all environment types, including those for which we have training data as well as those for which we do not. In familiar environment types with dense training data, we show an 80% speed improvement compared to a planner that is constrained to guarantee safety. 
In experiments, our planner has reached over 8 m/s in unknown cluttered indoor spaces. Video of our experimental demonstration is available at http://groups.csail.mit.edu/rrg/bayesian_learning_high_speed_nav.", "title": "" }, { "docid": "7b385edcbb0e3fa5bfffca2e1a9ecf13", "text": "A goal of runtime software-fault monitoring is to observe software behavior to determine whether it complies with its intended behavior. Monitoring allows one to analyze and recover from detected faults, providing additional defense against catastrophic failure. Although runtime monitoring has been in use for over 30 years, there is renewed interest in its application to fault detection and recovery, largely because of the increasing complexity and ubiquitous nature of software systems. We present a taxonomy that developers and researchers can use to analyze and differentiate recent developments in runtime software fault-monitoring approaches. The taxonomy categorizes the various runtime monitoring research by classifying the elements that are considered essential for building a monitoring system, i.e., the specification language used to define properties; the monitoring mechanism that oversees the program's execution; and the event handler that captures and communicates monitoring results. After describing the taxonomy, the paper presents the classification of the software-fault monitoring systems described in the literature.", "title": "" }, { "docid": "5371c5b8e9db3334ed144be4354336cc", "text": "E-learning is related to virtualised distance learning by means of electronic communication mechanisms, using its functionality as a support in the process of teaching-learning. When the learning process becomes computerised, educational data mining employs the information generated from the electronic sources to enrich the learning model for academic purposes. To provide support to e-learning systems, cloud computing is set as a natural platform, as it can be dynamically adapted by presenting a scalable system for the changing necessities of the computer resources over time. It also eases the implementation of data mining techniques to work in a distributed scenario, regarding the large databases generated from e-learning. We give an overview of the current state of the structure of cloud computing, and we provide details of the most common infrastructures that have been developed for such a system. We also present some examples of e-learning approaches for cloud computing, and finally, we discuss the suitability of this environment for educational data mining, suggesting the migration of this approach to this computational scenario.", "title": "" }, { "docid": "768749e22e03aecb29385e39353dd445", "text": "Query logs are of great interest for scientists and companies for research, statistical and commercial purposes. However, the availability of query logs for secondary uses raises privacy issues since they allow the identification and/or revelation of sensitive information about individual users. Hence, query anonymization is crucial to avoid identity disclosure. To enable the publication of privacy-preserved (but still useful) query logs, in this paper, we present an anonymization method based on semantic microaggregation. Our proposal aims at minimizing the disclosure risk of anonymized query logs while retaining their semantics as much as possible. First, a method to map queries to their formal semantics extracted from the structured categories of the Open Directory Project is presented. 
Then, a microaggregation method is adapted to perform a semantically-grounded anonymization of query logs. To do so, appropriate semantic similarity and semantic aggregation functions are proposed. Experiments performed using real AOL query logs show that our proposal better retains the utility of anonymized query logs than other related works, while also minimizing the disclosure risk.", "title": "" }, { "docid": "85605e6617a68dff216f242f31306eac", "text": "Steered molecular dynamics (SMD) permits efficient investigations of molecular processes by focusing on selected degrees of freedom. We explain how one can, in the framework of SMD, employ Jarzynski's equality (also known as the nonequilibrium work relation) to calculate potentials of mean force (PMF). We outline the theory that serves this purpose and connects nonequilibrium processes (such as SMD simulations) with equilibrium properties (such as the PMF). We review the derivation of Jarzynski's equality, generalize it to isobaric--isothermal processes, and discuss its implications in relation to the second law of thermodynamics and computer simulations. In the relevant regime of steering by means of stiff springs, we demonstrate that the work on the system is Gaussian-distributed regardless of the speed of the process simulated. In this case, the cumulant expansion of Jarzynski's equality can be safely terminated at second order. We illustrate the PMF calculation method for an exemplary simulation and demonstrate the Gaussian nature of the resulting work distribution.", "title": "" }, { "docid": "d509cb384ecddafa0c4f866882af2c77", "text": "On 9 January 1857, a large earthquake of magnitude 7.9 occurred on the San Andreas fault, with rupture initiating at Parkfield in central California and propagating in a southeasterly direction over a distance of more than 360 km. Such a unilateral rupture produces significant directivity toward the San Fernando and Los Angeles basins. Indeed, newspaper reports of sloshing observed in the Los Angeles river point to long-duration (1–2 min) and long-period (2–8 sec) shaking. If such an earthquake were to happen today, it could impose significant seismic demand on present-day tall buildings. Using state-of-the-art computational tools in seismology and structural engineering, validated using data from the 17 January 1994, magnitude 6.7 Northridge earthquake, we determine the damage to an existing and a new 18story steel moment-frame building in southern California due to ground motion from two hypothetical magnitude 7.9 earthquakes on the San Andreas fault. Our study indicates that serious damage occurs in these buildings at many locations in the region in one of the two scenarios. For a north-to-south rupture scenario, the peak velocity is of the order of 1 m • sec 1 in the Los Angeles basin, including downtown Los Angeles, and 2 m • sec 1 in the San Fernando valley, while the peak displacements are of the order of 1 m and 2 m in the Los Angeles basin and San Fernando valley, respectively. For a south-to-north rupture scenario the peak velocities and displacements are reduced by a factor of roughly 2.", "title": "" }, { "docid": "d529b4f1992f438bb3ce4373090f8540", "text": "One conventional tool for interpolating surfaces over scattered data, the thin-plate spline, has an elegant algebra expressing the dependence of the physical bending energy of a thin metal plate on point constraints. 
For interpolation of a surface over a fixed set of nodes in the plane, the bending energy is a quadratic form in the heights assigned to the surface. The spline is the superposition of eigenvectors of the bending energy matrix, of successively larger physical scales, over a tilted flat plane having no bending energy at all. When these splines are paired, one representing the x-coordinate of another form and the other the y-coordinate, they aid greatly in the modeling of biological shape change as deformation. In this context, the pair becomes an interpolation map from R^2 to R^2 relating two sets of landmark points. The spline maps decompose, in the same way as the spline surfaces, into a linear part (an affine transformation) together with the superposition of principal warps, which are geometrically independent, affine-free deformations of progressively smaller geometrical scales. The warps decompose an empirical deformation into orthogonal features more or less as a conventional orthogonal functional analysis decomposes the single scene. This paper demonstrates the decomposition of deformations by principal warps, extends the method to deal with curving edges between landmarks, relates this formalism to other applications of splines current in computer vision, and indicates how they might aid in the extraction of features for analysis, comparison, and diagnosis of biological and medical images.", "title": "" }, { "docid": "aeaee20b184e346cd469204dcf49d815", "text": "Naresh Kumari , Nitin Malik , A. N. Jha , Gaddam Mallesham #*4 # Department of Electrical, Electronics and Communication Engineering, The NorthCap University, Gurgaon, India 1 nareshkumari@ncuindia.edu 2 nitinmalik77@gmail.com * Ex-Professor, Electrical Engineering, Indian Institute of Technology, New Delhi, India 3 anjha@ee.iitd.ac.in #* Department of Electrical Engineering, Osmania University, Hyderabad, India 4 gm.eed.cs@gmail.com", "title": "" }, { "docid": "6ebce4adb3693070cac01614078d68fc", "text": "The recent COCO object detection dataset presents several new challenges for object detection. In particular, it contains objects at a broad range of scales, less prototypical images, and requires more precise localization. To address these challenges, we test three modifications to the standard Fast R-CNN object detector: (1) skip connections that give the detector access to features at multiple network layers, (2) a foveal structure to exploit object context at multiple object resolutions, and (3) an integral loss function and corresponding network adjustment that improve localization. The result of these modifications is that information can flow along multiple paths in our network, including through features from multiple network layers and from multiple object views. We refer to our modified classifier as a ‘MultiPath’ network. We couple our MultiPath network with DeepMask object proposals, which are well suited for localization and small objects, and adapt our pipeline to predict segmentation masks in addition to bounding boxes. The combined system improves results over the baseline Fast R-CNN detector with Selective Search by 66% overall and by 4× on small objects. It placed second in both the COCO 2015 detection and segmentation challenges.", "title": "" }, { "docid": "28e8bc5b0d1fa9fa46b19c8c821a625c", "text": "This work aims to develop a smart LED lighting system, which is remotely controlled by Android apps via handheld devices, e.g., smartphones, tablets, and so forth. 
The status of energy use is reflected by readings displayed on a handheld device, and it is treated as a criterion in the lighting mode design of a system. A multimeter, a wireless light dimmer, an IR learning remote module, etc. are connected to a server by means of RS 232/485 and a human computer interface on a touch screen. The wireless data communication is designed to operate in compliance with the ZigBee standard, and signal processing on sensed data is made through a self adaptive weighted data fusion algorithm. A low variation in data fusion together with a high stability is experimentally demonstrated in this work. The wireless light dimmer as well as the IR learning remote module can be instructed directly by command given on the human computer interface, and the reading on a multimeter can be displayed thereon via the server. This proposed smart LED lighting system can be remotely controlled and self learning mode can be enabled by a single handheld device via WiFi transmission. Hence, this proposal is validated as an approach to power monitoring for home appliances, and is demonstrated as a digital home network in consideration of energy efficiency.", "title": "" }, { "docid": "645f320514b0fa5a8b122c4635bc3df6", "text": "A critical decision problem for top management, and the focus of this study, is whether the CEO (chief executive officer) and CIO (chief information officer) should commit their time to formal planning with the expectation of producing an information technology (IT)-based competitive advantage. Using the perspective of the resource-based view, a model is presented that examines how strategic IT alignment can produce enhanced organizational strategies that yield competitive advantage. One hundred sixty-one CIOs provided data using a postal survey. Results supported seven of the eight hypotheses. They showed that information intensity is an important antecedent to strategic IT alignment, that strategic IT alignment is best explained by multiple constructs which operationalize both process and content measures, and that alignment between the IT plan and the business plan is significantly related to the use of IT for competitive advantage. Study results raise questions about the effect of CEO participation, which appears to be the weak link in the process, and also about the perception of the CIO on the importance of CEO involvement. The paper contributes to our understanding of how knowledge sharing in the alignment process contributes to the creation of superior organizational strategies, provides a framework of the alignment-performance relationship, and furnishes several new constructs. Subject Areas: Competitive Advantage, Information Systems Planning, Knowledge Sharing, Resource-Based View, Strategic Planning, and Structural Equation Modeling.", "title": "" }, { "docid": "a85511bfaa47701350f4d97ec94453fd", "text": "We propose a novel expression transfer method based on an analysis of the frequency of multi-expression facial images. We locate the facial features automatically and describe the shape deformations between a neutral expression and non-neutral expressions. The subtle expression changes are important visual clues to distinguish different expressions. These changes are more salient in the frequency domain than in the image domain. We extract the subtle local expression deformations for the source subject, coded in the wavelet decomposition. This information about expressions is transferred to a target subject. 
The resulting synthesized image preserves both the facial appearance of the target subject and the expression details of the source subject. This method is extended to dynamic expression transfer to allow a more precise interpretation of facial expressions. Experiments on Japanese Female Facial Expression (JAFFE), the extended Cohn-Kanade (CK+) and PIE facial expression databases show the superiority of our method over the state-of-the-art method.", "title": "" }, { "docid": "bb0dce17b5810ebd7173ea35545c3bf6", "text": "Five studies demonstrated that highly guilt-prone people may avoid forming interdependent partnerships with others whom they perceive to be more competent than themselves, as benefitting a partner less than the partner benefits one's self could trigger feelings of guilt. Highly guilt-prone people who lacked expertise in a domain were less willing than were those low in guilt proneness who lacked expertise in that domain to create outcome-interdependent relationships with people who possessed domain-specific expertise. These highly guilt-prone people were more likely than others both to opt to be paid on their performance alone (Studies 1, 3, 4, and 5) and to opt to be paid on the basis of the average of their performance and that of others whose competence was more similar to their own (Studies 2 and 5). Guilt proneness did not predict people's willingness to form outcome-interdependent relationships with potential partners who lacked domain-specific expertise (Studies 4 and 5). It also did not predict people's willingness to form relationships when poor individual performance would not negatively affect partner outcomes (Study 4). Guilt proneness therefore predicts whether, and with whom, people develop interdependent relationships. The findings also demonstrate that highly guilt-prone people sacrifice financial gain out of concern about how their actions would influence others' welfare. As such, the findings demonstrate a novel way in which guilt proneness limits free-riding and therefore reduces the incidence of potentially unethical behavior. Lastly, the findings demonstrate that people who lack competence may not always seek out competence in others when choosing partners.", "title": "" }, { "docid": "a9a8baf6dfb2526d75b0d7e49bb9b138", "text": "Many classification problems require decisions among a large number of competing classes. These tasks, however, are not handled well by general purpose learning methods and are usually addressed in an ad-hoc fashion. We suggest a general approach – a sequential learning model that utilizes classifiers to sequentially restrict the number of competing classes while maintaining, with high probability, the presence of the true outcome in the candidates set. Some theoretical and computational properties of the model are discussed and we argue that these are important in NLP-like domains. The advantages of the model are illustrated in an experiment in partof-speech tagging.", "title": "" }, { "docid": "890236dc21eef6d0523ee1f5e91bf784", "text": "Perhaps the most amazing property of these word embeddings is that somehow these vector encodings effectively capture the semantic meanings of the words. The question one might ask is how or why? The answer is that because the vectors adhere surprisingly well to our intuition. For instance, words that we know to be synonyms tend to have similar vectors in terms of cosine similarity and antonyms tend to have dissimilar vectors. 
Even more surprisingly, word vectors tend to obey the laws of analogy. For example, consider the analogy “Woman is to queen as man is to king”. It turns out that", "title": "" } ]
scidocsrr
79746946cd66c344af505c1977c9d15d
A 12-bit 20 MS/s 56.3 mW Pipelined ADC With Interpolation-Based Nonlinear Calibration
[ { "docid": "96d0cfd6349e02a90528b40c5e3decc6", "text": "A 16-bit 125 MS/s pipeline analog-to-digital converter (ADC) implemented in a 0.18 ¿m CMOS process is presented in this paper. A SHA-less 4-bit front-end is used to achieve low power and minimize the size of the input sampling capacitance in order to ease drivability. The ADC includes foreground factory digital calibration to correct for capacitor mismatches and dithering that can be optionally enabled to improve small-signal linearity. This ADC achieves an SNR of 78.7 dB, an SNDR of 78.6 dB and an SFDR of 96 dB with a 30 MHz input signal, while maintaining an SNR > 76 dB and an SFDR > 85 dB up to 150 MHz input signals. Further, with dithering enabled the worst spur is <-98 dB for inputs below -4 dBFS at 100 MHz IF. The ADC consumes 385 mW from a 1.8 V supply.", "title": "" } ]
[ { "docid": "4d396614420b24265d05b265b7ae6cd5", "text": "The objective of this study was to characterise the antagonistic activity of cellular components of potential probiotic bacteria isolated from the gut of healthy rohu (Labeo rohita), a tropical freshwater fish, against the fish pathogen, Aeromonas hydrophila. Three potential probiotic strains (referred to as R1, R2, and R5) were screened using a well diffusion, and their antagonistic activity against A. hydrophila was determined. Biochemical tests and 16S rRNA gene analysis confirmed that R1, R2, and R5 were Lactobacillus plantarum VSG3, Pseudomonas aeruginosa VSG2, and Bacillus subtilis VSG1, respectively. Four different fractions of cellular components (i.e. the whole-cell product, heat-killed whole-cell product [HKWCP], intracellular product [ICP], and extracellular product) of these selected strains were effective in an in vitro sensitivity test against 6 A. hydrophila strains. Among the cellular components, the ICP of R1, HKWCP of R2, and ICP of R5 exhibited the strongest antagonistic activities, as evidenced by their inhibition zones. The antimicrobial compounds from these selected cellular components were partially purified by thin-layer and high-performance liquid chromatography, and their properties were analysed. The ranges of pH stability of the purified compounds were wide (3.0-10.0), and compounds were thermally stable up to 90 °C. Considering these results, isolated probiotic strains may find potential applications in the prevention and treatment of aquatic aeromonosis.", "title": "" }, { "docid": "66c49b0dbdbdf29ace0f60839b867e43", "text": "The job shop scheduling problem with the makespan criterion is a certain NP-hard case from OR theory having excellent practical applications. This problem, having been examined for years, is also regarded as an indicator of the quality of advanced scheduling algorithms. In this paper we provide a new approximate algorithm that is based on the big valley phenomenon, and uses some elements of so-called path relinking technique as well as new theoretical properties of neighbourhoods. The proposed algorithm owns, unprecedented up to now, accuracy, obtainable in a quick time on a PC, which has been confirmed after wide computer tests.", "title": "" }, { "docid": "5fe43f0b23b0cfd82b414608e60db211", "text": "The Distress Analysis Interview Corpus (DAIC) contains clinical interviews designed to support the diagnosis of psychological distress conditions such as anxiety, depression, and post traumatic stress disorder. The interviews are conducted by humans, human controlled agents and autonomous agents, and the participants include both distressed and non-distressed individuals. Data collected include audio and video recordings and extensive questionnaire responses; parts of the corpus have been transcribed and annotated for a variety of verbal and non-verbal features. The corpus has been used to support the creation of an automated interviewer agent, and for research on the automatic identification of psychological distress.", "title": "" }, { "docid": "1ae3eb81ae75f6abfad4963ee0056be5", "text": "Due to the shared responsibility model of clouds, tenants have to manage the security of their workloads and data. Developing security solutions using VMs or containers creates further problems as these resources also need to be secured. In this paper, we advocate for taking a serverless approach by proposing six serverless design patterns to build security services in the cloud. 
For each design pattern, we describe the key advantages and present applications and services utilizing the pattern. Using the proposed patterns as building blocks, we introduce a threat-intelligence platform that collects logs from various sources, alerts malicious activities, and takes actions against such behaviors. We also discuss the limitations of serverless design and how future implementations can overcome those limitations.", "title": "" }, { "docid": "c69e002a71132641947d8e30bb2e74f7", "text": "In this paper, we investigate a new stealthy attack simultaneously compromising actuators and sensors. This attack is referred to as coordinated attack. We show that the coordinated attack is capable of deriving the system states far away from the desired without being detected. Furthermore, designing such an attack practically does not require knowledge on target systems, which makes the attack much more dangerous compared to the other known attacks. Also, we present a method to detect the coordinated attack. To validate the effect of the proposed attack, we carry out experiments using a quadrotor.", "title": "" }, { "docid": "023ad4427627e7bdb63ba5e15c3dff32", "text": "Recent works have been shown effective in using neural networks for Chinese word segmentation. However, these models rely on large-scale data and are less effective for low-resource datasets because of insufficient training data. Thus, we propose a transfer learning method to improve low-resource word segmentation by leveraging high-resource corpora. First, we train a teacher model on high-resource corpora and then use the learned knowledge to initialize a student model. Second, a weighted data similarity method is proposed to train the student model on low-resource data with the help of highresource corpora. Finally, given that insufficient data puts forward higher requirements for feature extraction, we propose a novel neural network which improves feature learning. Experiment results show that our work significantly improves the performance on low-resource datasets: 2.3% and 1.5% F-score on PKU and CTB datasets. Furthermore, this paper achieves state-of-the-art results: 96.1%, and 96.2% F-score on PKU and CTB datasets1. Besides, we explore an asynchronous parallel method on neural word segmentation to speed up training. The parallel method accelerates training substantially and is almost five times faster than a serial mode.", "title": "" }, { "docid": "e68fc0a0522f7cd22c7071896263a1f4", "text": "OBJECTIVES\nThe aim of this study was to evaluate the costs of subsidized care for an adult population provided by private and public sector dentists.\n\n\nMETHODS\nA sample of 210 patients was drawn systematically from the waiting list for nonemergency dental treatment in the city of Turku. Questionnaire data covering sociodemographic background, dental care utilization and marginal time cost estimates were combined with data from patient registers on treatment given. Information was available on 104 patients (52 from each of the public and the private sectors).\n\n\nRESULTS\nThe overall time taken to provide treatment was 181 days in the public sector and 80 days in the private sector (P<0.002). On average, public sector patients had significantly (P < 0.01) more dental visits (5.33) than private sector patients (3.47), which caused higher visiting fees. In addition, patients in the public sector also had higher other out-of-pocket costs than in the private sector. 
Those who needed emergency dental treatment during the waiting time for comprehensive care had significantly more costly treatment and higher total costs than the other patients. Overall time required for dental visits significantly increased total costs. The total cost of dental care in the public sector was slightly higher (P<0.05) than in the private sector.\n\n\nCONCLUSIONS\nThere is no direct evidence of moral hazard on the provider side from this study. The observed cost differences between the two sectors may indicate that private practitioners could manage their publicly funded patients more quickly than their private paying patients. On the other hand, private dentists providing more treatment per visit could be explained by private dentists providing more than is needed by increasing the content per visit.", "title": "" }, { "docid": "d956c805ee88d1b0ca33ce3f0f838441", "text": "The task of relation classification in the biomedical domain is complex due to the presence of samples obtained from heterogeneous sources such as research articles, discharge summaries, or electronic health records. It is also a constraint for classifiers which employ manual feature engineering. In this paper, we propose a convolutional recurrent neural network (CRNN) architecture that combines RNNs and CNNs in sequence to solve this problem. The rationale behind our approach is that CNNs can effectively identify coarse-grained local features in a sentence, while RNNs are more suited for long-term dependencies. We compare our CRNN model with several baselines on two biomedical datasets, namely the i2b22010 clinical relation extraction challenge dataset, and the SemEval-2013 DDI extraction dataset. We also evaluate an attentive pooling technique and report its performance in comparison with the conventional max pooling method. Our results indicate that the proposed model achieves state-of-the-art performance on both datasets.1", "title": "" }, { "docid": "8b49149b3288b9565263b7c4d6978378", "text": "This paper produces a baseline security analysis of the Cloud Computing Operational Environment in terms of threats, vulnerabilities and impacts. An analysis is conducted and the top three threats are identified with recommendations for practitioners. The conclusion of the analysis is that the most serious threats are non-technical and can be solved via management processes rather than technical countermeasures.", "title": "" }, { "docid": "c27b61685ae43c7cd1b60ca33ab209df", "text": "The establishment of damper settings that provide an optimal compromise between wobble- and weave-mode damping is discussed. The conventional steering damper is replaced with a network of interconnected mechanical components comprised of springs, dampers and inerters - that retain the virtue of the damper, while improving the weave-mode performance. The improved performance is due to the fact that the network introduces phase compensation between the relative angular velocity of the steering system and the resulting steering technique", "title": "" }, { "docid": "7f848facaa535d53e7a6fe7aa2435473", "text": "The data structure used to represent image information can be critical to the successful completion of an image processing task. One structure that has attracted considerable attention is the image pyramid This consists of a set of lowpass or bandpass copies of an image, each representing pattern information of a different scale. 
Here we describe a variety of pyramid methods that we have developed for image data compression, enhancement, analysis and graphics. It is becoming increasingly clear that the format used to represent image data can be as critical in image processing as the algorithms applied to the data. A digital image is initially encoded as an array of pixel intensities, but this raw format is not suited to most tasks. Alternatively, an image may be represented by its Fourier transform, with operations applied to the transform coefficients rather than to the original pixel values. This is appropriate for some data compression and image enhancement tasks, but inappropriate for others. The transform representation is particularly unsuited for machine vision and computer graphics, where the spatial location of pattern elements is critical. Recently there has been a great deal of interest in representations that retain spatial localization as well as localization in the spatial-frequency domain. This is achieved by decomposing the image into a set of spatial frequency bandpass component images. Individual samples of a component image represent image pattern information that is appropriately localized, while the bandpassed image as a whole represents information about a particular fineness of detail or scale. There is evidence that the human visual system uses such a representation [1], and multiresolution schemes are becoming increasingly popular in machine vision and in image processing in general. The importance of analyzing images at many scales arises from the nature of images themselves. Scenes in the world contain objects of many sizes, and these objects contain features of many sizes. Moreover, objects can be at various distances from the viewer. As a result, any analysis procedure that is applied only at a single scale may miss information at other scales. The solution is to carry out analyses at all scales simultaneously. Convolution is the basic operation of most image analysis systems, and convolution with large weighting functions is a notoriously expensive computation. In a multiresolution system one wishes to perform convolutions with kernels of many sizes, ranging from very small to very large, and the computational problems appear forbidding. Therefore one of the main problems in working with multiresolution representations is to develop fast and efficient techniques. Members of the Advanced Image Processing Research Group have been actively involved in the development of multiresolution techniques for some time. Most of the work revolves around a representation known as a \"pyramid,\" which is versatile, convenient, and efficient to use. We have applied pyramid-based methods to some fundamental problems in image analysis, data compression, and image manipulation.", "title": "" }, { "docid": "dc418c7add2456b08bc3a6f15b31da9f", "text": "In professional search environments, such as patent search or legal search, search tasks have unique characteristics: 1) users interactively issue several queries for a topic, and 2) users are willing to examine many retrieval results, i.e., there is typically an emphasis on recall. Recent surveys have also verified that professional searchers continue to have a strong preference for Boolean queries because they provide a record of what documents were searched. 
To support this type of professional search, we propose a novel Boolean query suggestion technique. Specifically, we generate Boolean queries by exploiting decision trees learned from pseudo-labeled documents and rank the suggested queries using query quality predictors. We evaluate our algorithm in simulated patent and medical search environments. Compared with a recent effective query generation system, we demonstrate that our technique is effective and general.", "title": "" }, { "docid": "633d32667221f53def4558db23a8b8af", "text": "In this paper we present, ARCTREES, a novel way of visualizing hierarchical and non-hierarchical relations within one interactive visualization. Such a visualization is challenging because it must display hierarchical information in a way that the user can keep his or her mental map of the data set and include relational information without causing misinterpretation. We propose a hierarchical view derived from traditional Treemaps and augment this view with an arc diagram to depict relations. In addition, we present interaction methods that allow the exploration of the data set using Focus+Context techniques for navigation. The development was motivated by a need for understanding relations in structured documents but it is also useful in many other application domains such as project management and calendars.", "title": "" }, { "docid": "c2ac1c1f08e7e4ccba14ea203acba661", "text": "This paper describes an approach to determine a layout for the order picking area in warehouses, such that the average travel distance for the order pickers is minimized. We give analytical formulas by which the average length of an order picking route can be calculated for two different routing policies. The optimal layout can be determined by using such formula as an objective function in a non-linear programming model. The optimal number of aisles in an order picking area appears to depend strongly on the required storage space and the pick list size.", "title": "" }, { "docid": "c10a83c838f59adeb50608d5b96c0fbc", "text": "Robots are typically equipped with multiple complementary sensors such as cameras and laser range finders. Camera generally provides dense 2D information while range sensors give sparse and accurate depth information in the form of a set of 3D points. In order to represent the different data sources in a common coordinate system, extrinsic calibration is needed. This paper presents a pipeline for extrinsic calibration a zed setero camera with Velodyne LiDAR puck using a novel self-made 3D marker whose edges can be robustly detected in the image and 3d point cloud. Our approach first estimate the large sensor displacement using just a single frame. then we optimize the coarse results by finding the best align of edges in order to obtain a more accurate calibration. Finally, the ratio of the 3D points correctly projected onto proper image segments is used to evaluate the accuracy of calibration.", "title": "" }, { "docid": "eda3987f781263615ccf53dd9a7d1a27", "text": "The study gives a synopsis over condition monitoring methods both as a diagnostic tool and as a technique for failure identification in high voltage induction motors in industry. New running experience data for 483 motor units with 6135 unit years are registered and processed statistically, to reveal the connection between motor data, protection and condition monitoring methods, maintenance philosophy and different types of failures. 
The different types of failures are further analyzed to failure-initiators, -contributors and -underlying causes. The results have been compared with those of a previous survey, IEEE Report of Large Motor Reliability Survey of Industrial and Commercial Installations, 1985. In the present survey the motors are in the range of 100 to 1300 kW, 47% of them between 100 and 500 kW.", "title": "" }, { "docid": "f36348f2909a9642c18590fca6c9b046", "text": "This study explores the use of data mining methods to detect fraud for on e-ledgers through financial statements. For this purpose, data set were produced by rule-based control application using 72 sample e-ledger and error percentages were calculated and labeled. The financial statements created from the labeled e-ledgers were trained by different data mining methods on 9 distinguishing features. In the training process, Linear Regression, Artificial Neural Networks, K-Nearest Neighbor algorithm, Support Vector Machine, Decision Stump, M5P Tree, J48 Tree, Random Forest and Decision Table were used. The results obtained are compared and interpreted.", "title": "" }, { "docid": "7c11bd23338b6261f44319198fcdc082", "text": "Zooplankton are quite significant to the ocean ecosystem for stabilizing balance of the ecosystem and keeping the earth running normally. Considering the significance of zooplantkon, research about zooplankton has caught more and more attentions. And zooplankton recognition has shown great potential for science studies and mearsuring applications. However, manual recognition on zooplankton is labour-intensive and time-consuming, and requires professional knowledge and experiences, which can not scale to large-scale studies. Deep learning approach has achieved remarkable performance in a number of object recognition benchmarks, often achieveing the current best performance on detection or classification tasks and the method demonstrates very promising and plausible results in many applications. In this paper, we explore a deep learning architecture: ZooplanktoNet to classify zoolankton automatically and effectively. The deep network is characterized by capturing more general and representative features than previous predefined feature extraction algorithms in challenging classification. Also, we incorporate some data augmentation to aim at reducing the overfitting for lacking of zooplankton images. And we decide the zooplankton class according to the highest score in the final predictions of ZooplanktoNet. Experimental results demonstrate that ZooplanktoNet can solve the problem effectively with accuracy of 93.7% in zooplankton classification.", "title": "" }, { "docid": "c86aad62e950d7c10f93699d421492d5", "text": "Carotid intima-media thickness (CIMT) is a good surrogate for atherosclerosis. Hyperhomocysteinemia is an independent risk factor for cardiovascular diseases. We aim to investigate the relationships between homocysteine (Hcy) related biochemical indexes and CIMT, the associations between Hcy related SNPs and CIMT, as well as the potential gene–gene interactions. The present study recruited full siblings (186 eligible families with 424 individuals) with no history of cardiovascular events from a rural area of Beijing. We examined CIMT, intima-media thickness for common carotid artery (CCA-IMT) and carotid bifurcation, tested plasma levels for Hcy, vitamin B6 (VB6), vitamin B12 (VB12) and folic acid (FA), and genotyped 9 SNPs on MTHFR, MTR, MTRR, BHMT, SHMT1, CBS genes. 
Associations between SNPs and biochemical indexes and CIMT indexes were analyzed using family-based association test analysis. We used multi-level mixed-effects regression model to verify SNP-CIMT associations and to explore the potential gene–gene interactions. VB6, VB12 and FA were negatively correlated with CIMT indexes (p < 0.05). rs2851391 T allele was associated with decreased plasma VB12 levels (p = 0.036). In FABT, CBS rs2851391 was significantly associated with CCA-IMT (p = 0.021) and CIMT (p = 0.019). In multi-level mixed-effects regression model, CBS rs2851391 was positively significantly associated with CCA-IMT (Coef = 0.032, se = 0.009, raw p < 0.001) after Bonferoni correction (corrected α = 0.0056). Gene–gene interactions were found between CBS rs2851391 and BHMT rs10037045 for CCA-IMT (p = 0.011), as well as between CBS rs2851391 and MTR rs1805087 for CCA-IMT (p = 0.007) and CIMT (p = 0.022). Significant associations are found between Hcy metabolism related genetic polymorphisms, biochemical indexes and CIMT indexes. There are complex interactions between genetic polymorphisms for CCA-IMT and CIMT.", "title": "" }, { "docid": "2eebc7477084b471f9e9872ba8751359", "text": "Despite significant progress in the development of human action detection datasets and algorithms, no current dataset is representative of real-world aerial view scenarios. We present Okutama-Action, a new video dataset for aerial view concurrent human action detection. It consists of 43 minute-long fully-annotated sequences with 12 action classes. Okutama-Action features many challenges missing in current datasets, including dynamic transition of actions, significant changes in scale and aspect ratio, abrupt camera movement, as well as multi-labeled actors. As a result, our dataset is more challenging than existing ones, and will help push the field forward to enable real-world applications.", "title": "" } ]
scidocsrr
5e06328e2a74b35fe5b70d5bffb0c06c
Clone Detection Using Abstract Syntax Suffix Trees
[ { "docid": "a17052726cbf3239c3f516b51af66c75", "text": "Source code duplication occurs frequently within large software systems. Pieces of source code, functions, and data types are often duplicated in part, or in whole, for a variety of reasons. Programmers may simply be reusing a piece of code via copy and paste or they may be “reinventing the wheel”. Previous research on the detection of clones is mainly focused on identifying pieces of code with similar (or nearly similar) structure. Our approach is to examine the source code text (comments and identifiers) and identify implementations of similar high-level concepts (e.g., abstract data types). The approach uses an information retrieval technique (i.e., latent semantic indexing) to statically analyze the software system and determine semantic similarities between source code documents (i.e., functions, files, or code segments). These similarity measures are used to drive the clone detection process. The intention of our approach is to enhance and augment existing clone detection methods that are based on structural analysis. This synergistic use of methods will improve the quality of clone detection. A set of experiments is presented that demonstrate the usage of semantic similarity measure to identify clones within a version of NCSA Mosaic.", "title": "" }, { "docid": "b09eedfc1b27d5666846c18423d1ad54", "text": "Recent years have seen many significant advances in program comprehension and software maintenance automation technology. In spite of the enormous potential savings in software maintenance costs, for the most part adoption of these ideas in industry remains at the experimental prototype stage. In this paper I explore some of the practical reasons for industrial resistance to adoption of software maintenance automation. Based on the experience of six years of software maintenance automation services to the financial industry involving more than 4.5 Gloc of code at Legasys Corporation, I discuss some of the social, technical and business realities that lie at the root of this resistance, outline various Legasys attempts overcome these barriers, and suggest some approaches to software maintenance automation that may lead to higher levels of industrial acceptance in the future.", "title": "" } ]
[ { "docid": "dd634fe7f5bfb5d08d0230c3e64220a4", "text": "Living in an oxygenated environment has required the evolution of effective cellular strategies to detect and detoxify metabolites of molecular oxygen known as reactive oxygen species. Here we review evidence that the appropriate and inappropriate production of oxidants, together with the ability of organisms to respond to oxidative stress, is intricately connected to ageing and life span.", "title": "" }, { "docid": "df96263c86a36ed30e8a074354b09239", "text": "We propose three iterative superimposed-pilot based channel estimators for Orthogonal Frequency Division Multiplexing (OFDM) systems. Two are approximate maximum-likelihood, derived by using a Taylor expansion of the conditional probability density function of the received signal or by approximating the OFDM time signal as Gaussian, and one is minimum-mean square error. The complexity per iteration of these estimators is given by approximately O(NL2), O(N3) and O(NL), where N is the number of OFDM subcarriers and L is the channel length (time). Two direct (non-iterative) data detectors are also derived by averaging the log likelihood function over the channel statistics. These detectors require minimising the cost metric in an integer space, and we suggest the use of the sphere decoder for them. The Cramér--Rao bound for superimposed pilot based channel estimation is derived, and this bound is achieved by the proposed estimators. The optimal pilot placement is shown to be the equally spaced distribution of pilots. The bit error rate of the proposed estimators is simulated for N = 32 OFDM system. Our estimators perform fairly close to a separated training scheme, but without any loss of spectral efficiency. Copyright © 2011 John Wiley & Sons, Ltd. *Correspondence Chintha Tellambura, Department of Electrical and Computer Engineering, University Alberta, Edmonton, Alberta, Canada T6G 2C5. E-mail: chintha@ece.ualberta.ca Received 20 July 2009; Revised 23 July 2010; Accepted 13 October 2010", "title": "" }, { "docid": "d4ac0d6890cc89e2525b9537376cce39", "text": "Unsupervised over-segmentation of an image into regions of perceptually similar pixels, known as super pixels, is a widely used preprocessing step in segmentation algorithms. Super pixel methods reduce the number of regions that must be considered later by more computationally expensive algorithms, with a minimal loss of information. Nevertheless, as some information is inevitably lost, it is vital that super pixels not cross object boundaries, as such errors will propagate through later steps. Existing methods make use of projected color or depth information, but do not consider three dimensional geometric relationships between observed data points which can be used to prevent super pixels from crossing regions of empty space. We propose a novel over-segmentation algorithm which uses voxel relationships to produce over-segmentations which are fully consistent with the spatial geometry of the scene in three dimensional, rather than projective, space. Enforcing the constraint that segmented regions must have spatial connectivity prevents label flow across semantic object boundaries which might otherwise be violated. Additionally, as the algorithm works directly in 3D space, observations from several calibrated RGB+D cameras can be segmented jointly. 
Experiments on a large data set of human annotated RGB+D images demonstrate a significant reduction in occurrence of clusters crossing object boundaries, while maintaining speeds comparable to state-of-the-art 2D methods.", "title": "" }, { "docid": "95efc564448b3ec74842d047f94cb779", "text": "Over the past 25 years or so there has been much interest in the use of digital pre-distortion (DPD) techniques for the linearization of RF and microwave power amplifiers. In this paper, we describe the important system and hardware requirements for the four main subsystems found in the DPD linearized transmitter: RF/analog, data converters, digital signal processing, and the DPD architecture and algorithms, and illustrate how the overall DPD system architecture is influenced by the design choices that may be made in each of these subsystems. We shall also consider the challenges presented to future applications of DPD systems for wireless communications, such as higher operating frequencies, wider signal bandwidths, greater spectral efficiency signals, resulting in higher peak-to-average power ratios, multiband and multimode operation, lower power consumption requirements, faster adaption, and how these affect the system design choices.", "title": "" }, { "docid": "ed0342748fff5c1ced69700cfd922884", "text": "Many applications of histograms for the purposes of image processing are well known. However, applying this process to the transform domain by way of a transform coefficient histogram has not yet been fully explored. This paper proposes three methods of image enhancement: a) logarithmic transform histogram matching, b) logarithmic transform histogram shifting, and c) logarithmic transform histogram shaping using Gaussian distributions. They are based on the properties of the logarithmic transform domain histogram and histogram equalization. The presented algorithms use the fact that the relationship between stimulus and perception is logarithmic and afford a marriage between enhancement qualities and computational efficiency. A human visual system-based quantitative measurement of image contrast improvement is also defined. This helps choose the best parameters and transform for each enhancement. A number of experimental results are presented to illustrate the performance of the proposed algorithms", "title": "" }, { "docid": "6c5c6e201e2ae886908aff554866b9ed", "text": "HDBSCAN: Hierarchical Density-Based Spatial Clustering of Applications with Noise (Campello, Moulavi, and Sander 2013), (Campello et al. 2015). Performs DBSCAN over varying epsilon values and integrates the result to find a clustering that gives the best stability over epsilon. This allows HDBSCAN to find clusters of varying densities (unlike DBSCAN), and be more robust to parameter selection. The library also includes support for Robust Single Linkage clustering (Chaudhuri et al. 2014), (Chaudhuri and Dasgupta 2010), GLOSH outlier detection (Campello et al. 2015), and tools for visualizing and exploring cluster structures. Finally support for prediction and soft clustering is also available.", "title": "" }, { "docid": "827c9d65c2c3a2a39d07c9df7a21cfe2", "text": "A worldwide movement in advanced manufacturing countries is seeking to reinvigorate (and revolutionize) the industrial and manufacturing core competencies with the use of the latest advances in information and communications technology. Visual computing plays an important role as the \"glue factor\" in complete solutions. 
This article positions visual computing in its intrinsic crucial role for Industrie 4.0 and provides a general, broad overview and points out specific directions and scenarios for future research.", "title": "" }, { "docid": "1f3e600ce5be2a55234c11e19e11cb67", "text": "In this paper, we propose a noise robust speech recognition system built using generalized distillation framework. It is assumed that during training, in addition to the training data, some kind of ”privileged” information is available and can be used to guide the training process. This allows to obtain a system which at test time outperforms those built on regular training data alone. In the case of noisy speech recognition task, the privileged information is obtained from a model, called ”teacher”, trained on clean speech only. The regular model, called ”student”, is trained on noisy utterances and uses teacher’s output for the corresponding clean utterances. Thus, for this framework a parallel clean/noisy speech data are required. We experimented on the Aurora2 database which provides such kind of data. Our system uses hybrid DNN-HMM acoustic model where neural networks provide HMM state probabilities during decoding. The teacher DNN is trained on the clean data, while the student DNN is trained using multi-condition (various SNRs) data. The student DNN loss function combines the targets obtained from forced alignment of the training data and the outputs of the teacher DNN when fed with the corresponding clean features. Experimental results clearly show that distillation framework is effective and allows to achieve significant reduction in the word error rate.", "title": "" }, { "docid": "4c5d12c3b1254c83819eac53dd57ce40", "text": "traditional topic detection method can not be applied to the microblog topic detection directly, because the microblog text is a kind of the short, fractional and grass-roots text. In order to detect the hot topic in the microblog text effectively, we propose a microblog topic detection method based on the combination of the latent semantic analysis and the structural property. According to the dialogic property of the microblog, our proposed method firstly creates semantic space based on the replies to the thread, with the aim to solve the data sparseness problem; secondly, create the microblog model based on the latent semantic analysis; finally, propose a semantic computation method combined with the time information. We then adopt the agglomerative hierarchical clustering method as the microblog topic detection method. Experimental results show that our proposed methods improve the performances of the microblog topic detection greatly.", "title": "" }, { "docid": "a31358ffda425f8e3f7fd15646d04417", "text": "We elaborate the design and simulation of a planar antenna that is suitable for CubeSat picosatellites. The antenna operates at 436 MHz and its main features are miniature size and the built-in capability to produce circular polarization. The miniaturization procedure is given in detail, and the electrical performance of this small antenna is documented. Two main miniaturization techniques have been applied, i.e. dielectric loading and distortion of the current path. We have added an extra degree of freedom to the latter. The radiator is integrated with the chassis of the picosatellite and, at the same time, operates at the lower end of the UHF spectrum. 
In terms of electrical size, the structure presented herein is one of the smallest antennas that have been proposed for small satellites. Despite its small electrical size, the antenna maintains acceptable efficiency and gain performance in the band of interest.", "title": "" }, { "docid": "1c66d84dfc8656a23e2a4df60c88ab51", "text": "Our method aims at reasoning over natural language questions and visual images. Given a natural language question about an image, our model updates the question representation iteratively by selecting image regions relevant to the query and learns to give the correct answer. Our model contains several reasoning layers, exploiting complex visual relations in the visual question answering (VQA) task. The proposed network is end-to-end trainable through back-propagation, where its weights are initialized using pre-trained convolutional neural network (CNN) and gated recurrent unit (GRU). Our method is evaluated on challenging datasets of COCO-QA [19] and VQA [2] and yields state-of-the-art performance.", "title": "" }, { "docid": "ea05a43abee762d4b484b5027e02a03a", "text": "One essential task in information extraction from the medical corpus is drug name recognition. Compared with text sources come from other domains, the medical text mining poses more challenges, for example, more unstructured text, the fast growing of new terms addition, a wide range of name variation for the same drug, the lack of labeled dataset sources and external knowledge, and the multiple token representations for a single drug name. Although many approaches have been proposed to overwhelm the task, some problems remained with poor F-score performance (less than 0.75). This paper presents a new treatment in data representation techniques to overcome some of those challenges. We propose three data representation techniques based on the characteristics of word distribution and word similarities as a result of word embedding training. The first technique is evaluated with the standard NN model, that is, MLP. The second technique involves two deep network classifiers, that is, DBN and SAE. The third technique represents the sentence as a sequence that is evaluated with a recurrent NN model, that is, LSTM. In extracting the drug name entities, the third technique gives the best F-score performance compared to the state of the art, with its average F-score being 0.8645.", "title": "" }, { "docid": "551e890f5b62ed3fbcaef10101787120", "text": "Plagiarism detection is a sensitive field of research which has gained lot of interest in the past few years. Although plagiarism detection systems are developed to check text in a variety of languages, they perform better when they are dedicated to check a specific language as they take into account the specificity of the language which leads to better quality results. Query optimization and document reduction constitute two major processing modules which play a major role in optimizing the response time and the results quality of these systems and hence determine their efficiency and effectiveness. This paper proposes an analysis of approaches, an architecture, and a system for detecting plagiarism in Arabic documents. This analysis is particularly focused on the methods and techniques used to detect plagiarism. The proposed web-based architecture exhibits the major processing modules of a plagiarism detection system which are articulated into four layers inside a processing component. 
The architecture has been used to develop a plagiarism detection system for the Arabic language proposing a set of functions to the user for checking a text and analyzing the results through a well-designed graphical user interface. Subject Categories and Descriptors [H.3.1 Content Analysis and Indexing]: Linguistic processing; [I.2 Artificial Intelligencd]; Natural language interfaces: [I.2.7 Natural Language Processing]; Text Analysis; [I.2.3 Clustering]; Similarity Measures General Terms: Text Analysis, Arabic Language Processing, Similarity Detection", "title": "" }, { "docid": "cdc3b46933db0c88f482ded1dcdff9e6", "text": "Overvoltages in low voltage (LV) feeders with high penetration of photovoltaics (PV) are usually prevented by limiting the feeder's PV capacity to very conservative values, even if the critical periods rarely occur. This paper discusses the use of droop-based active power curtailment techniques for overvoltage prevention in radial LV feeders as a means for increasing the installed PV capacity and energy yield. Two schemes are proposed and tested in a typical 240-V/75-kVA Canadian suburban distribution feeder with 12 houses with roof-top PV systems. In the first scheme, all PV inverters have the same droop coefficients. In the second, the droop coefficients are different so as to share the total active power curtailed among all PV inverters/houses. Simulation results demonstrate the effectiveness of the proposed schemes and that the option of sharing the power curtailment among all customers comes at the cost of an overall higher amount of power curtailed.", "title": "" }, { "docid": "e0ee22a0df1c13511909cb5f7d2b4d82", "text": "Growing use of the Internet as a major means of communication has led to the formation of cyber-communities, which have become increasingly appealing to terrorist groups due to the unregulated nature of Internet communication. Online communities enable violent extremists to increase recruitment by allowing them to build personal relationships with a worldwide audience capable of accessing uncensored content. This article presents methods for identifying the recruitment activities of violent groups within extremist social media websites. Specifically, these methods apply known techniques within supervised learning and natural language processing to the untested task of automatically identifying forum posts intended to recruit new violent extremist members. We used data from the western jihadist website Ansar AlJihad Network, which was compiled by the University of Arizona’s Dark Web Project. Multiple judges manually annotated a sample of these data, marking 192 randomly sampled posts as recruiting (Yes) or non-recruiting (No). We observed significant agreement between the judges’ labels; Cohen’s κ=(0.5,0.9) at p=0.01. We tested the feasibility of using naive Bayes models, logistic regression, classification trees, boosting, and support vector machines (SVM) to classify the forum posts. Evaluation with receiver operating characteristic (ROC) curves shows that our SVM classifier achieves an 89% area under the curve (AUC), a significant improvement over the 63% AUC performance achieved by our simplest naive Bayes model (Tukey’s test at p=0.05). To our knowledge, this is the first result reported on this task, and our analysis indicates that automatic detection of online terrorist recruitment is a feasible task. 
We also identify a number of important areas of future work including classifying non-English posts and measuring how recruitment posts and current events change membership numbers over time.", "title": "" }, { "docid": "9b32c1ea81eb8d8eb3675c577cc0e2fc", "text": "Users' addiction to online social networks is discovered to be highly correlated with their social connections in the networks. Dense social connections can effectively help online social networks retain their active users and improve the social network services. Therefore, it is of great importance to make a good prediction of the social links among users. Meanwhile, to enjoy more social network services, users nowadays are usually involved in multiple online social networks simultaneously. Formally, the social networks which share a number of common users are defined as the \"aligned networks\".With the information transferred from multiple aligned social networks, we can gain a more comprehensive knowledge about the social preferences of users in the pre-specified target network, which will benefit the social link prediction task greatly. However, when transferring the knowledge from other aligned source networks to the target network, there usually exists a shift in information distribution between different networks, namely domain difference. In this paper, we study the social link prediction problem of the target network, which is aligned with multiple social networks concurrently. To accommodate the domain difference issue, we project the features extracted for links from different aligned networks into a shared lower-dimensional feature space. Moreover, users in social networks usually tend to form communities and would only connect to a small number of users. Thus, the target network structure has both the low-rank and sparse properties. We propose a novel optimization framework, SLAMPRED, to combine both these two properties aforementioned of the target network and the information of multiple aligned networks with nice domain adaptations. Since the objective function is a linear combination of convex and concave functions involving nondifferentiable regularizers, we propose a novel optimization method to iteratively solve it. Extensive experiments have been done on real-world aligned social networks, and the experimental results demonstrate the effectiveness of the proposed model.", "title": "" }, { "docid": "91f718a69532c4193d5e06bf1ea19fd3", "text": "Factorization approaches provide high accuracy in several important prediction problems, for example, recommender systems. However, applying factorization approaches to a new prediction problem is a nontrivial task and requires a lot of expert knowledge. Typically, a new model is developed, a learning algorithm is derived, and the approach has to be implemented.\n Factorization machines (FM) are a generic approach since they can mimic most factorization models just by feature engineering. This way, factorization machines combine the generality of feature engineering with the superiority of factorization models in estimating interactions between categorical variables of large domain. libFM is a software implementation for factorization machines that features stochastic gradient descent (SGD) and alternating least-squares (ALS) optimization, as well as Bayesian inference using Markov Chain Monto Carlo (MCMC). 
This article summarizes the recent research on factorization machines both in terms of modeling and learning, provides extensions for the ALS and MCMC algorithms, and describes the software tool libFM.", "title": "" }, { "docid": "48966a0436405a6656feea3ce17e87c3", "text": "Complex regional pain syndrome (CRPS) is a chronic, intensified localized pain condition that can affect children and adolescents as well as adults, but is more common among adolescent girls. Symptoms include limb pain; allodynia; hyperalgesia; swelling and/or changes in skin color of the affected limb; dry, mottled skin; hyperhidrosis and trophic changes of the nails and hair. The exact mechanism of CRPS is unknown, although several different mechanisms have been suggested. The diagnosis is clinical, with the aid of the adult criteria for CRPS. Standard care consists of a multidisciplinary approach with the implementation of intensive physical therapy in conjunction with psychological counseling. Pharmacological treatments may aid in reducing pain in order to allow the patient to participate fully in intensive physiotherapy. The prognosis in pediatric CRPS is favorable.", "title": "" }, { "docid": "b00311730b7b9b4f79cdd7bde5aa84f6", "text": "While neural networks demonstrate stronger capabilities in pattern recognition nowadays, they are also becoming larger and deeper. As a result, the effort needed to train a network also increases dramatically. In many cases, it is more practical to use a neural network intellectual property (IP) that an IP vendor has already trained. As we do not know about the training process, there can be security threats in the neural IP: the IP vendor (attacker) may embed hidden malicious functionality, i.e neural Trojans, into the neural IP. We show that this is an effective attack and provide three mitigation techniques: input anomaly detection, re-training, and input preprocessing. All the techniques are proven effective. The input anomaly detection approach is able to detect 99.8% of Trojan triggers although with 12.2% false positive. The re-training approach is able to prevent 94.1% of Trojan triggers from triggering the Trojan although it requires that the neural IP be reconfigurable. In the input preprocessing approach, 90.2% of Trojan triggers are rendered ineffective and no assumption about the neural IP is needed.", "title": "" }, { "docid": "9b17dd1fc2c7082fa8daecd850fab91c", "text": "This paper presents all the stages of development of a solar tracker for a photovoltaic panel. The system was made with a microcontroller which was design as an embedded control. It has a data base of the angles of orientation horizontal axle, therefore it has no sensor inlet signal and it function as an open loop control system. Combined of above mention characteristics in one the tracker system is a new technique of the active type. It is also a rotational robot of 1 degree of freedom.", "title": "" } ]
scidocsrr
5cca4d416eca68d5bbb65d6ef7654e16
Fast locality-sensitive hashing
[ { "docid": "7963ceddcf75f2e563ddd9501230a93f", "text": "Advances in data collection and storage capabilities during the past decades have led to an information overload in most sciences. Researchers working in domains as diverse as engineering, astronomy, biology, remote sensing, economics, and consumer transactions, face larger and larger observations and simulations on a daily basis. Such datasets, in contrast with smaller, more traditional datasets that have been studied extensively in the past, present new challenges in data analysis. Traditional statistical methods break down partly because of the increase in the number of observations, but mostly because of the increase in the number of variables associated with each observation. The dimension of the data is the number of variables that are measured on each observation. High-dimensional datasets present many mathematical challenges as well as some opportunities, and are bound to give rise to new theoretical developments [11]. One of the problems with high-dimensional datasets is that, in many cases, not all the measured variables are “important” for understanding the underlying phenomena of interest. While certain computationally expensive novel methods [4] can construct predictive models with high accuracy from high-dimensional data, it is still of interest in many applications to reduce the dimension of the original data prior to any modeling of the data. In mathematical terms, the problem we investigate can be stated as follows: given the p-dimensional random variable x = (x1, . . . , xp) T , find a lower dimensional representation of it, s = (s1, . . . , sk) T with k ≤ p, that captures the content in the original data, according to some criterion. The components of s are sometimes called the hidden components. Different fields use different names for the p multivariate vectors: the term “variable” is mostly used in statistics, while “feature” and “attribute” are alternatives commonly used in the computer science and machine learning literature. Throughout this paper, we assume that we have n observations, each being a realization of the pdimensional random variable x = (x1, . . . , xp) T with mean E(x) = μ = (μ1, . . . , μp) T and covariance matrix E{(x − μ)(x− μ) } = Σp×p. We denote such an observation matrix by X = {xi,j : 1 ≤ i ≤ p, 1 ≤ j ≤ n}. If μi and σi = √", "title": "" } ]
[ { "docid": "bc49930fa967b93ed1e39b3a45237652", "text": "In gene expression data, a bicluster is a subset of the genes exhibiting consistent patterns over a subset of the conditions. We propose a new method to detect significant biclusters in large expression datasets. Our approach is graph theoretic coupled with statistical modelling of the data. Under plausible assumptions, our algorithm is polynomial and is guaranteed to find the most significant biclusters. We tested our method on a collection of yeast expression profiles and on a human cancer dataset. Cross validation results show high specificity in assigning function to genes based on their biclusters, and we are able to annotate in this way 196 uncharacterized yeast genes. We also demonstrate how the biclusters lead to detecting new concrete biological associations. In cancer data we are able to detect and relate finer tissue types than was previously possible. We also show that the method outperforms the biclustering algorithm of Cheng and Church (2000).", "title": "" }, { "docid": "f048f684d71d811ac0a9fbd58a76d580", "text": "Frequency: The course will be offered annually beginning in the Spring 2011 semester. Points and prerequisites: The course will carry four points. The prerequisites for this course are Economic Principles II (V31.0002) and Calculus I (V63.0121). The lectures will focus mainly on conceptual material and applications. Properties: The course will meet two times each week for one hour and fifteen minutes each. No unusual audiovisual or technological aids will be used. This course serves as an introduction to game theory as the study of incentives and strategic behavior in collective and interdependent decision making. The course will develop the necessary theoretical tools for the study of game theory, while concurrently introducing applications in areas such as bargaining, competition, auction theory and strategic voting. This is a course indicated for any student with interest in learning how to apply game theoretical analysis to a variety of disciplines. The aim of the course is to provide a mostly applied overview of game theoretical concepts and emphasize their use in real world situations. By the end of the course, the student should have developed tools which will allow her/him to formally analyze outcomes in strategic situations. There will be one midterm and one final exam and approximately 8 problem sets for this class. The midterm and final exam scores count for 30%, and 60% respectively, of your course grade. The problem set score will be calculated ignoring your lowest score during the semester and will count for 10% of the final grade.", "title": "" }, { "docid": "d8d0b6d8b422b8d1369e99ff8b9dee0e", "text": "The advent of massive open online courses (MOOCs) poses new learning opportunities for learners as well as challenges for researchers and designers. MOOC students approach MOOCs in a range of fashions, based on their learning goals and preferred approaches, which creates new opportunities for learners but makes it difficult for researchers to figure out what a student’s behavior means, and makes it difficult for designers to develop MOOCs appropriate for all of their learners. Towards better understanding the learners who take MOOCs, we conduct a survey of MOOC learners’ motivations and correlate it to which students complete the course according to the pace set by the instructor/platform (which necessitates having the goal of completing the course, as well as succeeding in that goal). 
The results showed that course completers tend to be more interested in the course content, whereas non-completers tend to be more interested in MOOCs as a type of learning experience. Contrary to initial hypotheses, however, no substantial differences in mastery-goal orientation or general academic efficacy were observed between completers and non-completers. However, students who complete the course tend to have more self-efficacy for their ability to complete the course, from the beginning.", "title": "" }, { "docid": "f96bc7911cbabeddc6e6362c48e2fcb1", "text": "In order to identify vulnerable software components, developers can take software metrics as predictors or use text mining techniques to build vulnerability prediction models. A recent study reported that text mining based models have higher recall than software metrics based models. However, this conclusion was drawn without considering the sizes of individual components which affects the code inspection effort to determine whether a component is vulnerable. In this paper, we investigate the predictive power of these two kinds of prediction models in the context of effort-aware vulnerability prediction. To this end, we use the same data sets, containing 223 vulnerabilities found in three web applications, to build vulnerability prediction models. The experimental results show that: (1) in the context of effort-aware ranking scenario, text mining based models only slightly outperform software metrics based models, (2) in the context of effort-aware classification scenario, text mining based models perform similarly to software metrics based models in most cases, and (3) most of the effect sizes (i.e. the magnitude of the differences) between these two kinds of models are trivial. These results suggest that, from the viewpoint of practical application, software metrics based models are comparable to text mining based models. Therefore, for developers, software metrics based models are practical choices for vulnerability prediction, as the cost to build and apply these models is much lower.", "title": "" }, { "docid": "e7c848d4661bab87e39243834be80046", "text": "2048 is an engaging single-player nondeterministic video puzzle game, which, thanks to the simple rules and hard-to-master gameplay, has gained massive popularity in recent years. As 2048 can be conveniently embedded into the discrete-state Markov decision processes framework, we treat it as a testbed for evaluating existing and new methods in reinforcement learning. With the aim to develop a strong 2048 playing program, we employ temporal difference learning with systematic n-tuple networks. We show that this basic method can be significantly improved with temporal coherence learning, multi-stage function approximator with weight promotion, carousel shaping, and redundant encoding. In addition, we demonstrate how to take advantage of the characteristics of the n-tuple network, to improve the algorithmic effectiveness of the learning process by delaying the (decayed) update and applying lock-free optimistic parallelism to effortlessly make advantage of multiple CPU cores. This way, we were able to develop the best known 2048 playing program to date, which confirms the effectiveness of the introduced methods for discrete-state Markov decision problems.", "title": "" }, { "docid": "6f176e780d94a8fa8c5b1d6d364c4363", "text": "Current uses of smartwatches are focused solely around the wearer's content, viewed by the wearer alone. 
When worn on a wrist, however, watches are often visible to many other people, making it easy to quickly glance at their displays. We explore the possibility of extending smartwatch interactions to turn personal wearables into more public displays. We begin opening up this area by investigating fundamental aspects of this interaction form, such as the social acceptability and noticeability of looking at someone else's watch, as well as the likelihood of a watch face being visible to others. We then sketch out interaction dimensions as a design space, evaluating each aspect via a web-based study and a deployment of three potential designs. We conclude with a discussion of the findings, implications of the approach and ways in which designers in this space can approach public wrist-worn wearables.", "title": "" }, { "docid": "717988e7bada51ad5c4115f4d43de01a", "text": "I offer an overview of the rapidly growing field of mindfulness-based interventions (MBIs). A working definition of mindfulness in this context includes the brahma viharas, sampajanna and appamada, and suggests a very particular mental state which is both wholesome and capable of clear and penetrating insight into the nature of reality. The practices in mindfulness-based stress reduction (MBSR) that apply mindfulness to the four foundations are outlined, along with a brief history of the program and the original intentions of the founder, Jon Kabat-Zinn. The growth and scope of these interventions are detailed with demographics provided by the Center for Mindfulness, an overview of salient research studies and a listing of the varied MBIs that have grown out of MBSR. The question of ethics is explored, and other challenges are raised including teacher qualification and clarifying the “outer limits,” or minimum requirements, of what constitutes an MBI. Current trends are explored, including the increasing number of cohort-specific interventions as well as the publication of books, articles, and workbooks by a new generation of MBI teachers. Together, they form an emerging picture of MBIs as their own new “lineage,” which look to MBSR as their inspiration and original source. The potential to bring benefit to new fields, such as government and the military, represent exciting opportunities for MBIs, along with the real potential to transform health care. Sufficient experience in the delivery of MBIs has been garnered to offer the greater contemplative community valuable resources such as secular language, best practices, and extensive research.", "title": "" }, { "docid": "2615de62d2b2fa8a15e79ca2a3a57a3b", "text": "Recent evidence has shown that entrants into self-employment are disproportionately drawn from the tails of the earnings and ability distributions. This observation is explained by a multi-task model of occupational choice in which frictions in the labor market induces mismatches between firms and workers, and mis-assignment of workers to tasks. The model also yields distinctive predictions relating prior work histories to earnings and to the probability of entry into self-employment. 
These predictions are tested with the Korean Labor and Income Panel Study, from which we find considerable support for the model.", "title": "" }, { "docid": "33ab76f714ca23bdfddecfe436fd1ee2", "text": "A rational agent (artificial or otherwise) residing in a complex changing environment must gather information perceptually, update that information as the world changes, and combine that information with causal information to reason about the changing world. Using the system of defeasible reasoning that is incorporated into the OSCAR architecture for rational agents, a set of reason-schemas is proposed for enabling an agent to perform some of the requisite reasoning. Along the way, solutions are proposed for the Frame Problem, the Qualification Problem, and the Ramification Problem. The principles and reasoning described have all been implemented in OSCAR. keywords: defeasible reasoning, nonmonotonic logic, perception, causes, causation, time, temporal This work was supported in part by NSF grant no. IRI-9634106. An early version of some of this material appears in Pollock (1996), but it has undergone substantial change in the present paper. projection, frame problem, qualification problem, ramification problem, OSCAR.", "title": "" }, { "docid": "05127dab049ef7608932913f66db0990", "text": "This paper presents a hybrid tele-manipulation system, comprising of a sensorized 3-D-printed soft robotic gripper and a soft fabric-based haptic glove that aim at improving grasping manipulation and providing sensing feedback to the operators. The flexible 3-D-printed soft robotic gripper broadens what a robotic gripper can do, especially for grasping tasks where delicate objects, such as glassware, are involved. It consists of four pneumatic finger actuators, casings with through hole for housing the actuators, and adjustable base. The grasping length and width can be configured easily to suit a variety of objects. The soft haptic glove is equipped with flex sensors and soft pneumatic haptic actuator, which enables the users to control the grasping, to determine whether the grasp is successful, and to identify the grasped object shape. The fabric-based soft pneumatic haptic actuator can simulate haptic perception by producing force feedback to the users. Both the soft pneumatic finger actuator and haptic actuator involve simple fabrication technique, namely 3-D-printed approach and fabric-based approach, respectively, which reduce fabrication complexity as compared to the steps involved in a traditional silicone-based approach. The sensorized soft robotic gripper is capable of picking up and holding a wide variety of objects in this study, ranging from lightweight delicate object weighing less than 50 g to objects weighing 1100 g. The soft haptic actuator can produce forces of up to 2.1 N, which is more than the minimum force of 1.5 N needed to stimulate haptic perception. The subjects are able to differentiate the two objects with significant shape differences in the pilot test. Compared to the existing soft grippers, this is the first soft sensorized 3-D-printed gripper, coupled with a soft fabric-based haptic glove that has the potential to improve the robotic grasping manipulation by introducing haptic feedback to the users.", "title": "" }, { "docid": "301ce75026839f85bc15100a9a7cc5ca", "text": "This paper presents a novel visual-inertial integration system for human navigation in free-living environments, where the measurements from wearable inertial and monocular visual sensors are integrated. 
The preestimated orientation, obtained from magnet, angular rate, and gravity sensors, is used to estimate the translation based on the data from the visual and inertial sensors. This has a significant effect on the performance of the fusion sensing strategy and makes the fusion procedure much easier, because the gravitational acceleration can be correctly removed from the accelerometer measurements before the fusion procedure, where a linear Kalman filter is selected as the fusion estimator. Furthermore, the use of preestimated orientation can help to eliminate erroneous point matches based on the properties of the pure camera translation and thus the computational requirements can be significantly reduced compared with the RANdom SAmple Consensus algorithm. In addition, an adaptive-frame rate single camera is selected to not only avoid motion blur based on the angular velocity and acceleration after compensation, but also to make an effect called visual zero-velocity update for the static motion. Thus, it can recover a more accurate baseline and meanwhile reduce the computational requirements. In particular, an absolute scale factor, which is usually lost in monocular camera tracking, can be obtained by introducing it into the estimator. Simulation and experimental results are presented for different environments with different types of movement and the results from a Pioneer robot are used to demonstrate the accuracy of the proposed method.", "title": "" }, { "docid": "9bba22f8f70690bee5536820567546e6", "text": "Graph clustering involves the task of dividing nodes into clusters, so that the edge density is higher within clusters as opposed to across clusters. A natural, classic, and popular statistical setting for evaluating solutions to this problem is the stochastic block model, also referred to as the planted partition model. In this paper, we present a new algorithm-a convexified version of maximum likelihood-for graph clustering. We show that, in the classic stochastic block model setting, it outperforms existing methods by polynomial factors when the cluster size is allowed to have general scalings. In fact, it is within logarithmic factors of known lower bounds for spectral methods, and there is evidence suggesting that no polynomial time algorithm would do significantly better. We then show that this guarantee carries over to a more general extension of the stochastic block model. Our method can handle the settings of semirandom graphs, heterogeneous degree distributions, unequal cluster sizes, unaffiliated nodes, partially observed graphs, planted clique/coloring, and so on. In particular, our results provide the best exact recovery guarantees to date for the planted partition, planted k-disjoint-cliques and planted noisy coloring models with general cluster sizes; in other settings, we match the best existing results up to logarithmic factors.", "title": "" }, { "docid": "b67e6d5ee2451912ea6267cbc5274440", "text": "The paper presents theoretical analyses, simulations and design of a PTAT (proportional to absolute temperature) temperature sensor that is based on the vertical PNP structure and dedicated to CMOS VLSI circuits. Performed considerations take into account specific properties of materials that forms electronic elements. 
The electrothermal simulations are performed in order to verify the unwanted self-heating effect of the sensor", "title": "" }, { "docid": "c61e25e5896ff588764639b6a4c18d2e", "text": "Social media is continually emerging as a platform of information exchange around health challenges. We study mental health discourse on the popular social media: reddit. Building on findings about health information seeking and sharing practices in online forums, and social media like Twitter, we address three research challenges. First, we present a characterization of self-disclosure in mental illness communities on reddit. We observe individuals discussing a variety of concerns ranging from the daily grind to specific queries about diagnosis and treatment. Second, we build a statistical model to examine the factors that drive social support on mental health reddit communities. We also develop language models to characterize mental health social support, which are observed to bear emotional, informational, instrumental, and prescriptive information. Finally, we study disinhibition in the light of the dissociative anonymity that reddit’s throwaway accounts provide. Apart from promoting open conversations, such anonymity surprisingly is found to gather feedback that is more involving and emotionally engaging. Our findings reveal, for the first time, the kind of unique information needs that a social media like reddit might be fulfilling when it comes to a stigmatic illness. They also expand our understanding of the role of the social web in behavioral therapy.", "title": "" }, { "docid": "4eeb792ffb70d9ae015e806c85000cd7", "text": "Optimal instruction scheduling and register allocation are NP-complete problems that require heuristic solutions. By restricting the problem of register allocation and instruction scheduling for delayed-load architectures to expression trees we are able to nd optimal schedules quickly. This thesis presents a fast, optimal code scheduling algorithm for processors with a delayed load of 1 instruction cycle. The algorithm minimizes both execution time and register use and runs in time proportional to the size of the expression tree. In addition, the algorithm is simple; it ts on one page. The dominant paradigm in modern global register allocation is graph coloring. Unlike graph-coloring, our technique, Probabilistic Register Allocation, is unique in its ability to quantify the likelihood that a particular value might actually be allocated a register before allocation actually completes. By computing the likelihood that a value will be assigned a register by a register allocator, register candidates that are competing heavily for scarce registers can be isolated from those that have less competition. Probabilities allow the register allocator to concentrate its e orts where bene t is high and the likelihood of a successful allocation is also high. Probabilistic register allocation also avoids backtracking and complicated live-range splitting heuristics that plague graph-coloring algorithms. ii Optimal algorithms for instruction selection in tree-structured intermediate representations rely on dynamic programming techniques. Bottom-Up Rewrite System (BURS) technology produces extremely fast code generators by doing all possible dynamic programming before code generation. Thus, the dynamic programming process can be very slow. To make BURS technology more attractive, much e ort has gone into reducing the time to produce BURS code generators. 
Current techniques often require a signi cant amount of time to process a complex machine description (over 10 minutes on a fast workstation). This thesis presents an improved, faster BURS table generation algorithm that makes BURS technology more attractive for instruction selection. The optimized techniques have increased the speed to generate BURS code generators by a factor of 10 to 30. In addition, the algorithms simplify previous techniques, and were implemented in fewer than 2000 lines of C. iii Acknowledgements I have bene ted from the help and support of many people while attending the University of Wisconsin. They deserve my thanks. My mother encouraged me to pursue a PhD, and supported me, in too many ways to list, throughout the process. Professor Charles Fischer, my advisor, generously shared his time, guidance, and ideas with me. Professors Susan Horwitz and James Larus patiently read (and re-read) my thesis. Chris Fraser's zealous quest for small, simple and fast programs was a welcome change from the prevailing trend towards bloated, complex and slow software. Robert Henry explained his early BURS research and made his Codegen system available to me. Lorenz Huelsbergen distracted me with enough creative research ideas to keep graduate school fun. National Science Foundation grant CCR{8908355 provided my nancial support. Some computer resources were obtained through Digital Equipment Corporation External Research Grant 48428. iv", "title": "" }, { "docid": "0a1925251cac8d15da9bbc90627c28dc", "text": "The Madden–Julian oscillation (MJO) is the dominant mode of tropical atmospheric intraseasonal variability and a primary source of predictability for global sub-seasonal prediction. Understanding the origin and perpetuation of the MJO has eluded scientists for decades. The present paper starts with a brief review of progresses in theoretical studies of the MJO and a discussion of the essential MJO characteristics that a theory should explain. A general theoretical model framework is then described in an attempt to integrate the major existing theoretical models: the frictionally coupled Kelvin–Rossby wave, the moisture mode, the frictionally coupled dynamic moisture mode, the MJO skeleton, and the gravity wave interference, which are shown to be special cases of the general MJO model. The last part of the present paper focuses on a special form of trio-interaction theory in terms of the general model with a simplified Betts–Miller (B-M) cumulus parameterization scheme. This trio-interaction theory extends the Matsuno–Gill theory by incorporating a trio-interaction among convection, moisture, and wave-boundary layer (BL) dynamics. The model is shown to produce robust large-scale characteristics of the observed MJO, including the coupled Kelvin–Rossby wave structure, slow eastward propagation (~5 m/s) over warm pool, the planetary (zonal) scale circulation, the BL low-pressure and moisture convergence preceding major convection, and amplification/decay over warm/cold sea surface temperature (SST) regions. The BL moisture convergence feedback plays a central role in coupling equatorial Kelvin and Rossby waves with convective heating, selecting a preferred eastward propagation, and generating instability. The moisture feedback can enhance Rossby wave component, thereby substantially slowing down eastward propagation. 
With the trio-interaction theory, a number of fundamental issues of MJO dynamics are addressed: why the MJO possesses a mixed Kelvin–Rossby wave structure and how the Kelvin and Rossby waves, which propagate in opposite directions, could couple together with convection and select eastward propagation; what makes the MJO move eastward slowly in the eastern hemisphere, resulting in the 30–60-day periodicity; why MJO amplifies over the warm pool ocean and decays rapidly across the dateline. Limitation and ramifications of the model results to general circulation modeling of MJO are discussed.", "title": "" }, { "docid": "9f8ff3d7322aefafb99e5cc0dd3b33c2", "text": "We report on the use of scenario-based methods for evaluating collaborative systems. We describe the method, the case study where it was applied, and provide results of its efficacy in the field. The results suggest that scenario-based evaluation is effective in helping to focus evaluation efforts and in identifying the range of technical, human, organizational and other contextual factors that impact system success. The method also helps identify specific actions, for example, prescriptions for design to enhance system effectiveness. However, we found the method somewhat less useful for identifying the measurable benefits gained from a CSCW implementation, which was one of our primary goals. We discuss challenges faced applying the technique, suggest recommendations for future research, and point to implications for practice.", "title": "" }, { "docid": "fee504e2184570e80956ff1c8a4ec83c", "text": "The use of computed tomography (CT) in clinical practice has been increasing rapidly, with the number of CT examinations performed in adults and children rising by 10% per year in England. Because the radiology community strives to reduce the radiation dose associated with pediatric examinations, external factors, including guidelines for pediatric head injury, are raising expectations for use of cranial CT in the pediatric population. Thus, radiologists are increasingly likely to encounter pediatric head CT examinations in daily practice. The variable appearance of cranial sutures at different ages can be confusing for inexperienced readers of radiologic images. The evolution of multidetector CT with thin-section acquisition increases the clarity of some of these sutures, which may be misinterpreted as fractures. Familiarity with the normal anatomy of the pediatric skull, how it changes with age, and normal variants can assist in translating the increased resolution of multidetector CT into more accurate detection of fractures and confident determination of normality, thereby reducing prolonged hospitalization of children with normal developmental structures that have been misinterpreted as fractures. More important, the potential morbidity and mortality related to false-negative interpretation of fractures as normal sutures may be avoided. The authors describe the normal anatomy of all standard pediatric sutures, common variants, and sutural mimics, thereby providing an accurate and safe framework for CT evaluation of skull trauma in pediatric patients.", "title": "" }, { "docid": "555a0c7b435cbafa49ca6b3b365a6d68", "text": "We propose a joint framework combining speech enhancement (SE) and voice activity detection (VAD) to increase the speech intelligibility in low signal-noise-ratio (SNR) environments. Deep Neural Networks (DNN) have recently been successfully adopted as a regression model in SE. 
Nonetheless, the performance in harsh environments is not always satisfactory because the noise energy is often dominating in certain speech segments causing speech distortion. Based on the analysis of SNR information at the frame level in the training set, our approach consists of two steps, namely: (1) a DNN-based VAD model is trained to generate frame-level speech/non-speech probabilities; and (2) the final enhanced speech features are obtained by a weighted sum of the estimated clean speech features processed by incorporating VAD information. Experimental results demonstrate that the proposed SE approach effectively improves short-time objective intelligibility (STOI) by 0.161 and perceptual evaluation of speech quality (PESQ) by 0.333 over the already-good SE baseline systems at -5dB SNR of babble noise.", "title": "" } ]
scidocsrr
f649e6aff9c45d19a82cf43afa2a6cb6
Joint virtual machine and bandwidth allocation in software defined network (SDN) and cloud computing environments
[ { "docid": "7544daa81ddd9001772d48846e3097c3", "text": "In cloud computing, cloud providers can offer cloud consumers two provisioning plans for computing resources, namely reservation and on-demand plans. In general, cost of utilizing computing resources provisioned by reservation plan is cheaper than that provisioned by on-demand plan, since cloud consumer has to pay to provider in advance. With the reservation plan, the consumer can reduce the total resource provisioning cost. However, the best advance reservation of resources is difficult to be achieved due to uncertainty of consumer's future demand and providers' resource prices. To address this problem, an optimal cloud resource provisioning (OCRP) algorithm is proposed by formulating a stochastic programming model. The OCRP algorithm can provision computing resources for being used in multiple provisioning stages as well as a long-term plan, e.g., four stages in a quarter plan and twelve stages in a yearly plan. The demand and price uncertainty is considered in OCRP. In this paper, different approaches to obtain the solution of the OCRP algorithm are considered including deterministic equivalent formulation, sample-average approximation, and Benders decomposition. Numerical studies are extensively performed in which the results clearly show that with the OCRP algorithm, cloud consumer can successfully minimize total cost of resource provisioning in cloud computing environments.", "title": "" } ]
[ { "docid": "8cfa2086e1c73bae6945d1a19d52be26", "text": "We present a unified dynamics framework for real-time visual effects. Using particles connected by constraints as our fundamental building block allows us to treat contact and collisions in a unified manner, and we show how this representation is flexible enough to model gases, liquids, deformable solids, rigid bodies and cloth with two-way interactions. We address some common problems with traditional particle-based methods and describe a parallel constraint solver based on position-based dynamics that is efficient enough for real-time applications.", "title": "" }, { "docid": "5e7d5a86a007efd5d31e386c862fef5c", "text": "This systematic review examined the published scientific research on the psychosocial impact of cleft lip and palate (CLP) among children and adults. The primary objective of the review was to determine whether having CLP places an individual at greater risk of psychosocial problems. Studies that examined the psychosocial functioning of children and adults with repaired non-syndromal CLP were suitable for inclusion. The following sources were searched: Medline (January 1966-December 2003), CINAHL (January 1982-December 2003), Web of Science (January 1981-December 2003), PsycINFO (January 1887-December 2003), the reference section of relevant articles, and hand searches of relevant journals. There were 652 abstracts initially identified through database and other searches. On closer examination of these, only 117 appeared to meet the inclusion criteria. The full text of these papers was examined, with only 64 articles finally identified as suitable for inclusion in the review. Thirty of the 64 studies included a control group. The studies were longitudinal, cross-sectional, or retrospective in nature.Overall, the majority of children and adults with CLP do not appear to experience major psychosocial problems, although some specific problems may arise. For example, difficulties have been reported in relation to behavioural problems, satisfaction with facial appearance, depression, and anxiety. A few differences between cleft types have been found in relation to self-concept, satisfaction with facial appearance, depression, attachment, learning problems, and interpersonal relationships. With a few exceptions, the age of the individual with CLP does not appear to influence the occurrence or severity of psychosocial problems. However, the studies lack the uniformity and consistency required to adequately summarize the psychosocial problems resulting from CLP.", "title": "" }, { "docid": "6720ae7a531d24018bdd1d3d1c7eb28b", "text": "This study investigated the effects of mobile phone text-messaging method (predictive and multi-press) and experience (in texters and non-texters) on children’s textism use and understanding. It also examined popular claims that the use of text-message abbreviations, or textese spelling, is associated with poor literacy skills. A sample of 86 children aged 10 to 12 years read and wrote text messages in conventional English and in textese, and completed tests of spelling, reading, and non-word reading. Children took significantly longer, and made more errors, when reading messages written in textese than in conventional English. Further, they were no faster at writing messages in textese than in conventional English, regardless of texting method or experience. 
Predictive texters were faster at reading and writing messages than multi-press texters, and texting experience increased writing, but not reading, speed. General spelling and reading scores did not differ significantly with usual texting method. However, better literacy skills were associated with greater textese reading speed and accuracy. These findings add to the growing evidence for a positive relationship between texting proficiency and traditional literacy skills. Children’s text-messaging and literacy skills 3 The advent of mobile phones, and of text-messaging in particular, has changed the way that people communicate, and adolescents and children seem especially drawn to such technology. Australian surveys have revealed that 19% of 8to 11-year-olds and 76% of 12to 14-year-olds have their own mobile phone (Cupitt, 2008), and that 69% of mobile phone users aged 14 years and over use text-messaging (Australian Government, 2008), with 90% of children in Grades 7-12 sending a reported average of 11 texts per week (ABS, 2008). Text-messaging has also been the catalyst for a new writing style: textese. Described as a hybrid of spoken and written English (Plester & Wood, 2009), textese is a largely soundbased, or phonological, form of spelling that can reduce the time and cost of texting (Leung, 2007). Common abbreviations, or textisms, include letter and number homophones (c for see, 2 for to), contractions (txt for text), and non-conventional spellings (skool for school) (Plester, Wood, & Joshi, 2009; Thurlow, 2003). Estimates of the proportion of textisms that children use in their messages range from 21-47% (increasing with age) in naturalistic messages (Wood, Plester, & Bowyer, 2009), to 34% for messages elicited by a given scenario (Plester et al., 2009), to 50-58% for written messages that children ‘translated’ to and from textese (Plester, Wood, & Bell, 2008). One aim of the current study was to examine the efficiency of using textese for both the message writer and the reader, in order to understand the reasons behind (Australian) children’s use of textisms. The spread of textese has been attributed to texters’ desire to overcome the confines of the alphanumeric mobile phone keypad (Crystal, 2008). Since several letters are assigned to each number, the multi-press style of texting requires the somewhat laborious pressing of the same button one to four times to type each letter (Taylor & Vincent, 2005). The use of textese thus has obvious savings for multi-press texters, of both time and screen-space (as message character count cannot exceed 160). However, there is evidence, discussed below, that reading textese can be relatively slow and difficult for the message recipient, compared to Children’s text-messaging and literacy skills 4 reading conventional English. Since the use of textese is now widespread, it is important to examine the potential advantages and disadvantages that this form of writing may have for message senders and recipients, especially children, whose knowledge of conventional English spelling is still developing. To test the potential advantages of using textese for multi-press texters, Neville (2003) examined the speed and accuracy of textese versus conventional English in writing and reading text messages. British girls aged 11-16 years were dictated two short passages to type into a mobile phone: one using conventional English spelling, and the other “as if writing to a friend”. 
They also read two messages aloud from the mobile phone, one in conventional English, and the other in textese. The proportion of textisms produced is not reported, but no differences in textese use were observed between texters and non-texters. Writing time was significantly faster for textese than conventional English messages, with greater use of textisms significantly correlated with faster message typing times. However, participants were significantly faster at reading messages written in conventional English than in textese, regardless of their usual texting frequency. Kemp (2010) largely followed Neville’s (2003) design, but with 61 Australian undergraduates (mean age 22 years), all regular texters. These adults, too, were significantly faster at writing, but slower at reading, messages written in textese than in conventional English, regardless of their usual messaging frequency. Further, adults also made significantly more reading errors for messages written in textese than conventional English. These findings converge on the important conclusion that while the use of textisms makes writing more efficient for the message sender, it costs the receiver more time to read it. However, both Neville (2003) and Kemp (2010) examined only multi-press method texting, and not the predictive texting method now also available. Predictive texting requires only a single key-press per letter, and a dictionary-based system suggests one or more likely words Children’s text-messaging and literacy skills 5 based on the combinations entered (Taylor & Vincent, 2005). Textese may be used less by predictive texters than multi-press texters for two reasons. Firstly, predictive texting requires fewer key-presses than multi-press texting, which reduces the need to save time by taking linguistic short-cuts. Secondly, the dictionary-based predictive system makes it more difficult to type textisms that are not pre-programmed into the dictionary. Predictive texting is becoming increasingly popular, with recent studies reporting that 88% of Australian adults (Kemp, in press), 79% of Australian 13to 15-year-olds (De Jonge & Kemp, in press) and 55% of British 10to 12-year-olds (Plester et al., 2009) now use this method. Another aim of this study was thus to compare the reading and writing of textese and conventional English messages in children using their typical input method: predictive or multi-press texting, as well as in children who do not normally text. Finally, this study sought to investigate the popular assumption that exposure to unconventional word spellings might compromise children’s conventional literacy skills (e.g., Huang, 2008; Sutherland, 2002), with media articles revealing widespread disapproval of this communication style (Thurlow, 2006). In contrast, some authors have suggested that the use of textisms might actually improve children’s literacy skills (e.g., Crystal, 2008). Many textisms commonly used by children rely on the ability to distinguish, blend, and/or delete letter sounds (Plester et al., 2008, 2009). Practice at reading and creating textisms may therefore lead to improved phonological awareness (Crystal, 2008), which consistently predicts both reading and spelling prowess (e.g., Bradley & Bryant, 1983; Lundberg, Frost, & Petersen, 1988). Alternatively, children who use more textisms may do so because they have better phonological awareness, or poorer spellers may be drawn to using textisms to mask weak spelling ability (e.g., Sutherland, 2002). 
Thus, studying children’s textism use can provide further information on the links between the component skills that constitute both conventional and alternative, including textism-based, literacy. Children’s text-messaging and literacy skills 6 There is evidence for a positive link between the use of textisms and literacy skills in preteen children. Plester et al. (2008) asked 10to 12-year-old British children to translate messages from standard English to textese, and vice versa, with pen and paper. They found a significant positive correlation between textese use and verbal reasoning scores (Study 1) and spelling scores (Study 2). Plester et al. (2009) elicited text messages from a similar group of children by asking them to write messages in response to a given scenario. Again, textism use was significantly positively associated with word reading ability and phonological awareness scores (although not with spelling scores). Neville (2003) found that the number of textisms written, and the number read accurately, as well as the speed with which both conventional and textese messages were read and written, all correlated significantly with general spelling skill in 11to 16-year-old girls. The cross-sectional nature of these studies, and of the current study, means that causal relationships cannot be firmly established. However, Wood et al. (2009) report on a longitudinal study in which 8to 12-year-old children’s use of textese at the beginning of the school year predicted their skills in reading ability and phonological awareness at the end of the year, even after controlling for verbal IQ. These results provide the first support for the idea that textism use is driving the development of literacy skills, and thus that this use of technology can improve learning in the area of language and literacy. Taken together, these findings also provide important evidence against popular media claims that the use of textese is harming children’s traditional literacy skills. No similar research has yet been published with children outside the UK. The aim of the current study was thus to examine the speed and proficiency of textese use in Australian 10to 12-year-olds and, for the first time, to compare the r", "title": "" }, { "docid": "764d6f45cd9dc08963a0e4d21b23d470", "text": "Implementing and fleshing out a number of psychological and neuroscience theories of cognition, the LIDA conceptual model aims at being a cognitive “theory of everything.” With modules or processes for perception, working memory, episodic memories, “consciousness,” procedural memory, action selection, perceptual learning, episodic learning, deliberation, volition, and non-routine problem solving, the LIDA model is ideally suited to provide a working ontology that would allow for the discussion, design, and comparison of AGI systems. The LIDA architecture is based on the LIDA cognitive cycle, a sort of “cognitive atom.” The more elementary cognitive modules and processes play a role in each cognitive cycle. Higher-level processes are performed over multiple cycles. In addition to giving a quick overview of the LIDA conceptual model, and its underlying computational technology, we argue for the LIDA architecture’s role as a foundational architecture for an AGI. 
Finally, lessons For AGI researchers drawn from the model and its architecture are discussed.", "title": "" }, { "docid": "47e06f5c195d2e1ecb6199b99ef1ee2d", "text": "We study weakly-supervised video object grounding: given a video segment and a corresponding descriptive sentence, the goal is to localize objects that are mentioned from the sentence in the video. During training, no object bounding boxes are available, but the set of possible objects to be grounded is known beforehand. Existing approaches in the image domain use Multiple Instance Learning (MIL) to ground objects by enforcing matches between visual and semantic features. A naive extension of this approach to the video domain is to treat the entire segment as a bag of spatial object proposals. However, an object existing sparsely across multiple frames might not be detected completely since successfully spotting it from one single frame would trigger a satisfactory match. To this end, we propagate the weak supervisory signal from the segment level to frames that likely contain the target object. For frames that are unlikely to contain the target objects, we use an alternative penalty loss. We also leverage the interactions among objects as a textual guide for the grounding. We evaluate our model on the newlycollected benchmark YouCook2-BoundingBox and show improvements over competitive baselines.", "title": "" }, { "docid": "b1d534c6df789c45f636e69480517183", "text": "Virtual switches are a crucial component of SDN-based cloud systems, enabling the interconnection of virtual machines in a flexible and “software-defined” manner. This paper raises the alarm on the security implications of virtual switches. In particular, we show that virtual switches not only increase the attack surface of the cloud, but virtual switch vulnerabilities can also lead to attacks of much higher impact compared to traditional switches. We present a systematic security analysis and identify four design decisions which introduce vulnerabilities. Our findings motivate us to revisit existing threat models for SDN-based cloud setups, and introduce a new attacker model for SDN-based cloud systems using virtual switches. We demonstrate the practical relevance of our analysis using a case study with Open vSwitch and OpenStack. Employing a fuzzing methodology, we find several exploitable vulnerabilities in Open vSwitch. Using just one vulnerability we were able to create a worm that can compromise hundreds of servers in a matter of minutes. Our findings are applicable beyond virtual switches: NFV and high-performance fast path implementations face similar issues. This paper also studies various mitigation techniques and discusses how to redesign virtual switches for their integration. ∗Also with, Internet Network Architectures, TU Berlin. †Also with, Dept. of Computer Science, Aalborg University. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org. 
SOSR’18, March 28-29, 2018, Los Angeles, CA, USA © 2018 Copyright held by the owner/author(s). Publication rights licensed to the Association for Computing Machinery. ACM ISBN .... . . $15.00 https://doi.org/...", "title": "" }, { "docid": "fc2a45aa3ec8e4d27b9fc1a86d24b86d", "text": "Information and Communication Technologies (ICT) rapidly migrate towards the Future Internet (FI) era, which is characterized, among others, by powerful and complex network infrastructures and innovative applications, services and content. An application area that attracts immense research interest is transportation. In particular, traffic congestions, emergencies and accidents reveal inefficiencies in transportation infrastructures, which can be overcome through the exploitation of ICT findings, in designing systems that are targeted at traffic / emergency management, namely Intelligent Transportation Systems (ITS). This paper considers the potential connection of vehicles to form vehicular networks that communicate with each other at an IP-based level, exchange information either directly or indirectly (e.g. through social networking applications and web communities) and contribute to a more efficient and green future world of transportation. In particular, the paper presents the basic research areas that are associated with the concept of Internet of Vehicles (IoV) and outlines the fundamental research challenges that arise there from.", "title": "" }, { "docid": "c62dfcc83ca24450ea1a7e12a17ac93e", "text": "Lymphedema and lipedema are chronic progressive disorders for which no causal therapy exists so far. Many general practitioners will rarely see these disorders with the consequence that diagnosis is often delayed. The pathophysiological basis is edematization of the tissues. Lymphedema involves an impairment of lymph drainage with resultant fluid build-up. Lipedema arises from an orthostatic predisposition to edema in pathologically increased subcutaneous tissue. Treatment includes complex physical decongestion by manual lymph drainage and absolutely uncompromising compression therapy whether it is by bandage in the intensive phase to reduce edema or with a flat knit compression stocking to maintain volume.", "title": "" }, { "docid": "17d927926f34efbdcb542c15fcf4e442", "text": "Automated Guided Vehicles (AGVs) are now becoming popular in automated materials handling systems, flexible manufacturing systems and even containers handling applications at seaports. In the past two decades, much research and many papers have been devoted to various aspects of the AGV technology and rapid progress has been witnessed. As one of the enabling technologies, scheduling and routing of AGVs have attracted considerable attention; many algorithms about scheduling and routing of AGVs have been proposed. However, most of the existing results are applicable to systems with small number of AGVs, offering low degree of concurrency. With drastically increased number of AGVs in recent applications (e.g. in the order of a hundred in a container terminal), efficient scheduling and routing algorithms are needed to resolve the increased contention of resources (e.g. path, loading and unloading buffers) among AGVs. Because they often employ regular route topologies, the new applications also demand innovative strategies to increase system performance. This survey paper first gives an account of the emergence of the problems of AGV scheduling and routing. 
It then differentiates them from several related problems, and surveys and classifies major existing algorithms for the problems. Noting the similarities with known problems in parallel and distributed systems, it suggests to apply analogous ideas in routing and scheduling AGVs. It concludes by pointing out fertile areas for future study.", "title": "" }, { "docid": "4b5ac4095cb2695a1e5282e1afca80a4", "text": "Threeexperimentsdocument that14-month-old infants’construalofobjects (e.g.,purple animals) is influenced by naming, that they can distinguish between the grammatical form noun and adjective, and that they treat this distinction as relevant to meaning. In each experiment, infants extended novel nouns (e.g., “This one is a blicket”) specifically to object categories (e.g., animal), and not to object properties (e.g., purple things). This robust noun–category link is related to grammatical form and not to surface differences in the presentation of novel words (Experiment 3). Infants’extensions of novel adjectives (e.g., “This one is blickish”) were more fragile: They extended adjectives specifically to object properties when the property was color (Experiment 1), but revealed a less precise mapping when the property was texture (Experiment 2). These results reveal that by 14 months, infants distinguish between grammatical forms and utilize these distinctions in determining the meaning of novel words.", "title": "" }, { "docid": "146387ae8853279d21f0b4c2f9b3e400", "text": "We address a class of manipulation problems where the robot perceives the scene with a depth sensor and can move its end effector in a space with six degrees of freedom – 3D position and orientation. Our approach is to formulate the problem as a Markov decision process (MDP) with abstract yet generally applicable state and action representations. Finding a good solution to the MDP requires adding constraints on the allowed actions. We develop a specific set of constraints called hierarchical SE(3) sampling (HSE3S) which causes the robot to learn a sequence of gazes to focus attention on the task-relevant parts of the scene. We demonstrate the effectiveness of our approach on three challenging pick-place tasks (with novel objects in clutter and nontrivial places) both in simulation and on a real robot, even though all training is done in simulation.", "title": "" }, { "docid": "3c631c249254a24d9343a971a05af74e", "text": "The selection of the new requirements which should be included in the development of the release of a software product is an important issue for software companies. This problem is known in the literature as the Next Release Problem (NRP). It is an NP-hard problem which simultaneously addresses two apparently contradictory objectives: the total cost of including the selected requirements in the next release of the software package, and the overall satisfaction of a set of customers who have different opinions about the priorities which should be given to the requirements, and also have different levels of importance within the company. Moreover, in the case of managing real instances of the problem, the proposed solutions have to satisfy certain interaction constraints which arise among some requirements. In this paper, the NRP is formulated as a multiobjective optimization problem with two objectives (cost and satisfaction) and three constraints (types of interactions). A multiobjective swarm intelligence metaheuristic is proposed to solve two real instances generated from data provided by experts. 
Analysis of the results showed that the proposed algorithm can efficiently generate high quality solutions. These were evaluated by comparing them with different proposals (in terms of multiobjective metrics). The results generated by the present approach surpass those generated in other relevant work in the literature (e.g. our technique can obtain a HV of over 60% for the most complex dataset managed, while the other approaches published cannot obtain an HV of more than 40% for the same dataset). 2015 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "ce2a19f9f3ee13978845f1ede238e5b2", "text": "Optimised allocation of system architectures is a well researched area as it can greatly reduce the developmental cost of systems and increase performance and reliability in their respective applications. In conjunction with the recent shift from federated to integrated architectures in automotive, and the increasing complexity of computer systems, both in terms of software and hardware, the applications of design space exploration and optimised allocation of system architectures are of great interest. This thesis proposes a method to derive architectures and their allocations for systems with real-time constraints. The method implements integer linear programming to solve for an optimised allocation of system architectures according to a set of linear constraints while taking resource requirements, communication dependencies, and manual design choices into account. Additionally, this thesis describes and evaluates an industrial use case using the method wherein the timing characteristics of a system were evaluated, and, the method applied to simultaneously derive a system architecture, and, an optimised allocation of the system architecture. This thesis presents evidence and validations that suggest the viability of the method and its use case in an industrial setting. The work in this thesis sets precedence for future research and development, as well as future applications of the method in both industry and academia.", "title": "" }, { "docid": "7998670588bee1965fd5a18be9ccb0d9", "text": "In this letter, a hybrid visual servoing with a hierarchical task-composition control framework is described for aerial manipulation, i.e., for the control of an aerial vehicle endowed with a robot arm. The proposed approach suitably combines into a unique hybrid-control framework the main benefits of both image-based and position-based control schemes. Moreover, the underactuation of the aerial vehicle has been explicitly taken into account in a general formulation, together with a dynamic smooth activation mechanism. Both simulation case studies and experiments are presented to demonstrate the performance of the proposed technique.", "title": "" }, { "docid": "099a2ee305b703a765ff3579f0e0c1c3", "text": "To enhance the security of mobile cloud users, a few proposals have been presented recently. However we argue that most of them are not suitable for mobile cloud where mobile users might join or leave the mobile networks arbitrarily. In this paper, we design a secure mobile user-based data service mechanism (SDSM) to provide confidentiality and fine-grained access control for data stored in the cloud. This mechanism enables the mobile users to enjoy a secure outsourced data services at a minimized security management overhead. The core idea of SDSM is that SDSM outsources not only the data but also the security management to the mobile cloud in a trust way. 
Our analysis shows that the proposed mechanism has many advantages over the existing traditional methods such as lower overhead and convenient update, which could better cater the requirements in mobile cloud computing scenarios.", "title": "" }, { "docid": "84dbdf4c145fc8213424f6d51550faa9", "text": "Because acute cholangitis sometimes rapidly progresses to a severe form accompanied by organ dysfunction, caused by the systemic inflammatory response syndrome (SIRS) and/or sepsis, prompt diagnosis and severity assessment are necessary for appropriate management, including intensive care with organ support and urgent biliary drainage in addition to medical treatment. However, because there have been no standard criteria for the diagnosis and severity assessment of acute cholangitis, practical clinical guidelines have never been established. The aim of this part of the Tokyo Guidelines is to propose new criteria for the diagnosis and severity assessment of acute cholangitis based on a systematic review of the literature and the consensus of experts reached at the International Consensus Meeting held in Tokyo 2006. Acute cholangitis can be diagnosed if the clinical manifestations of Charcot's triad, i.e., fever and/or chills, abdominal pain (right upper quadrant or epigastric), and jaundice are present. When not all of the components of the triad are present, then a definite diagnosis can be made if laboratory data and imaging findings supporting the evidence of inflammation and biliary obstruction are obtained. The severity of acute cholangitis can be classified into three grades, mild (grade I), moderate (grade II), and severe (grade III), on the basis of two clinical factors, the onset of organ dysfunction and the response to the initial medical treatment. \"Severe (grade III)\" acute cholangitis is defined as acute cholangitis accompanied by at least one new-onset organ dysfunction. \"Moderate (grade II)\" acute cholangitis is defined as acute cholangitis that is unaccompanied by organ dysfunction, but that does not respond to the initial medical treatment, with the clinical manifestations and/or laboratory data not improved. \"Mild (grade I)\" acute cholangitis is defined as acute cholangitis that responds to the initial medical treatment, with the clinical findings improved.", "title": "" }, { "docid": "5f54125c0114f4fadc055e721093a49e", "text": "In this study, a fuzzy logic based autonomous vehicle control system is designed and tested in The Open Racing Car Simulator (TORCS) environment. The aim of this study is that vehicle complete the race without to get any damage and to get out of the way. In this context, an intelligent control system composed of fuzzy logic and conventional control structures has been developed such that the racing car is able to compete the race autonomously. In this proposed structure, once the vehicle's gearshifts have been automated, a fuzzy logic based throttle/brake control system has been designed such that the racing car is capable to accelerate/decelerate in a realistic manner as well as to drive at desired velocity. The steering control problem is also handled to end up with a racing car that is capable to travel on the road even in the presence of sharp curves. In this context, we have designed a fuzzy logic based positioning system that uses the knowledge of the curvature ahead to determine an appropriate position. 
The game performance of the developed fuzzy logic systems can be observed from https://youtu.be/qOvEz3-PzRo.", "title": "" }, { "docid": "319ba1d449d2b65c5c58b5cc0fdbed67", "text": "This paper introduces a new technology and tools from the field of text-based information retrieval. The authors have developed – a fingerprint-based method for a highly efficient near similarity search, and – an application of this method to identify plagiarized passages in large document collections. The contribution of our work is twofold. Firstly, it is a search technology that enables a new quality for the comparative analysis of complex and large scientific texts. Secondly, this technology gives rise to a new class of tools for plagiarism analysis, since the comparison of entire books becomes computationally feasible. The paper is organized as follows. Section 1 gives an introduction to plagiarism delicts and related detection methods, Section 2 outlines the method of fuzzy-fingerprints as a means for near similarity search, and Section 3 shows our methods in action: It gives examples for near similarity search as well as plagiarism detection and discusses results from a comprehensive performance analyses. 1 Plagiarism Analysis Plagiarism is the act of claiming to be the author of material that someone else actually wrote (Encyclopædia Britannica 2005), and, with the ubiquitousness", "title": "" }, { "docid": "98911eead8eb90ca295425917f5cd522", "text": "We provide strong evidence from multiple tests that credit lines (CLs) play special roles in syndicated loan packages. We find that CLs are associated with lower interest rate spreads on institutional term loans (ITLs) in the same loan packages. CLs also help improve secondary market liquidity of ITLs. These effects are robust to within-firm-year analysis. Using Lehman Brothers bankruptcy as a quasi-natural experiment further confirms our conclusions. These findings support the Bank Specialness Hypothesis that banks play valuable roles in alleviating information problems and that CLs are one conduit for this specialness.", "title": "" } ]
scidocsrr
e3aea73581e42c468cb3c5f58d648ad1
Reputation and social network analysis in multi-agent systems
[ { "docid": "8e70aea51194dba675d4c3e88ee6b9ad", "text": "Trust is central to all transactions and yet economists rarely discuss the notion. It is treated rather as background environment, present whenever called upon, a sort of ever-ready lubricant that permits voluntary participation in production and exchange. In the standard model of a market economy it is taken for granted that consumers meet their budget constraints: they are not allowed to spend more than their wealth. Moreover, they always deliver the goods and services they said they would. But the model is silent on the rectitude of such agents. We are not told if they are persons of honour, conditioned by their upbringing always to meet the obligations they have chosen to undertake, or if there is a background agency which enforces contracts, credibly threatening to mete out punishment if obligations are not fulfilled a punishment sufficiently stiff to deter consumers from ever failing to fulfil them. The same assumptions are made for producers. To be sure, the standard model can be extended to allow for bankruptcy in the face of an uncertain future. One must suppose that there is a special additional loss to becoming bankrupt a loss of honour when honour matters, social and economic ostracism, a term in a debtors’ prison, and so forth. Otherwise, a person may take silly risks or, to make a more subtle point, take insufficient care in managing his affairs, but claim that he ran into genuine bad luck, that it was Mother Nature’s fault and not his own lack of ability or zeal.", "title": "" } ]
[ { "docid": "16c87d75564404d52fc2abac55297931", "text": "SHADE is an adaptive DE which incorporates success-history based parameter adaptation and one of the state-of-the-art DE algorithms. This paper proposes L-SHADE, which further extends SHADE with Linear Population Size Reduction (LPSR), which continually decreases the population size according to a linear function. We evaluated the performance of L-SHADE on CEC2014 benchmarks and compared its search performance with state-of-the-art DE algorithms, as well as the state-of-the-art restart CMA-ES variants. The experimental results show that L-SHADE is quite competitive with state-of-the-art evolutionary algorithms.", "title": "" }, { "docid": "a33c723760f9870744ab004b693e8904", "text": "Portfolio analysis of the publication profile of a unit of interest, ranging from individuals, organizations, to a scientific field or interdisciplinary programs, aims to inform analysts and decision makers about the position of the unit, where it has been, and where it may go in a complex adaptive environment. A portfolio analysis may aim to identify the gap between the current position of an organization and a goal that it intends to achieve or identify competencies of multiple institutions. We introduce a new visual analytic method for analyzing, comparing, and contrasting characteristics of publication portfolios. The new method introduces a novel design of dual-map thematic overlays on global maps of science. Each publication portfolio can be added as one layer of dual-map overlays over two related but distinct global maps of science, one for citing journals and the other for cited journals. We demonstrate how the new design facilitates a portfolio analysis in terms of patterns emerging from the distributions of citation threads and the dynamics of trajectories as a function of space and time. We first demonstrate the analysis of portfolios defined on a single source article. Then we contrast publication portfolios of multiple comparable units of interest, namely, colleges in universities, corporate research organizations. We also include examples of overlays of scientific fields. We expect the new method will provide new insights to portfolio analysis.", "title": "" }, { "docid": "d597d4a1c32256b95524876218d963da", "text": "E-commerce in today's conditions has the highest dependence on network infrastructure of banking. However, when the possibility of communicating with the Banking network is not provided, business activities will suffer. This paper proposes a new approach of digital wallet based on mobile devices without the need to exchange physical money or communicate with banking network. A digital wallet is a software component that allows a user to make an electronic payment in cash (such as a credit card or a digital coin), and hides the low-level details of executing the payment protocol that is used to make the payment. The main features of proposed architecture are secure awareness, fault tolerance, and infrastructure-less protocol.", "title": "" }, { "docid": "17b85b7a5019248c4e43b4f5edc68ffb", "text": "We establish a new connection between value and policy based reinforcement learning (RL) based on a relationship between softmax temporal value consistency and policy optimality under entropy regularization. Specifically, we show that softmax consistent action values correspond to optimal entropy regularized policy probabilities along any action sequence, regardless of provenance. 
From this observation, we develop a new RL algorithm, Path Consistency Learning (PCL), that minimizes a notion of soft consistency error along multi-step action sequences extracted from both onand off-policy traces. We examine the behavior of PCL in different scenarios and show that PCL can be interpreted as generalizing both actor-critic and Q-learning algorithms. We subsequently deepen the relationship by showing how a single model can be used to represent both a policy and the corresponding softmax state values, eliminating the need for a separate critic. The experimental evaluation demonstrates that PCL significantly outperforms strong actor-critic and Q-learning baselines across several benchmarks.2", "title": "" }, { "docid": "1a9026e0e8fdcd1fab24661beb9ac400", "text": "Please check this box if you do not wish your email address to be published Acknowledgments: The authors would like to thank the anonymous reviewers for their valuable comments that have enabled the improvement of manuscript's quality. The authors would also like to acknowledge that the Before that, he served as a Researcher Grade D at the research center CERTH/ITI and at research center NCSR \" Demokritos \". He was also founder and manager of the eGovernment Unit at Archetypon SA, an international IT company. He holds a Diploma in Electrical Engineering from the National Technical University of Athens, Greece, and an MSc and PhD from Brunel University, UK. During the past years he has initiated and managed several research projects (e.g. Automation. He has about 200 research publications in the areas of software modeling and development for the domains of eGovernment, eBusiness, eLearning, eManufacturing etc. Structured Abstract: Purpose The purpose of this article is to consolidate existing knowledge and provide a deeper understanding of the use of Social Media (SM) data for predictions in various areas, such as disease outbreaks, product sales, stock market volatility, and elections outcome predictions. Design/methodology/approach The scientific literature was systematically reviewed to identify relevant empirical studies. These studies were analyzed and synthesized in the form of a proposed conceptual framework, which was thereafter applied to further analyze this literature, hence gaining new insights into the field. Findings The proposed framework reveals that all relevant studies can be decomposed into a small number of steps, and different approaches can be followed in each step. The application of the framework resulted in interesting findings. For example, most studies support SM predictive power, however more than one-third of these studies infer predictive power without employing predictive analytics. In addition, analysis suggests that there is a clear need for more advanced sentiment analysis methods as well as methods for identifying search terms for collection and filtering of raw SM data. Value The proposed framework enables researchers to classify and evaluate existing studies, to design scientifically rigorous new studies, and to identify the field's weaknesses, hence proposing future research directions. Purpose: The purpose of this article is to consolidate existing knowledge and provide a deeper understanding of the use of Social Media (SM) data for predictions in various areas, such as disease outbreaks, product sales, stock market volatility, and elections outcome predictions. 
Design/methodology/approach: The scientific literature was systematically reviewed …", "title": "" }, { "docid": "f6669d0b53dd0ca789219874d35bf14e", "text": "Saliva in the mouth is a biofluid produced mainly by three pairs of major salivary glands--the submandibular, parotid and sublingual glands--along with secretions from many minor submucosal salivary glands. Salivary gland secretion is a nerve-mediated reflex and the volume of saliva secreted is dependent on the intensity and type of taste and on chemosensory, masticatory or tactile stimulation. Long periods of low (resting or unstimulated) flow are broken by short periods of high flow, which is stimulated by taste and mastication. The nerve-mediated salivary reflex is modulated by nerve signals from other centers in the central nervous system, which is most obvious as hyposalivation at times of anxiety. An example of other neurohormonal influences on the salivary reflex is the circadian rhythm, which affects salivary flow and ionic composition. Cholinergic parasympathetic and adrenergic sympathetic autonomic nerves evoke salivary secretion, signaling through muscarinic M3 and adrenoceptors on salivary acinar cells and leading to secretion of fluid and salivary proteins. Saliva gland acinar cells are chloride and sodium secreting, and the isotonic fluid produced is rendered hypotonic by salivary gland duct cells as it flows to the mouth. The major proteins present in saliva are secreted by salivary glands, creating viscoelasticity and enabling the coating of oral surfaces with saliva. Salivary films are essential for maintaining oral health and regulating the oral microbiome. Saliva in the mouth contains a range of validated and potential disease biomarkers derived from epithelial cells, neutrophils, the microbiome, gingival crevicular fluid and serum. For example, cortisol levels are used in the assessment of stress, matrix metalloproteinases-8 and -9 appear to be promising markers of caries and periodontal disease, and a panel of mRNA and proteins has been proposed as a marker of oral squamous cell carcinoma. Understanding the mechanisms by which components enter saliva is an important aspect of validating their use as biomarkers of health and disease.", "title": "" }, { "docid": "28f1b7635b777cf278cc8d53a5afafb9", "text": "Visual Question Answering (VQA) is the task of taking as input an image and a free-form natural language question about the image, and producing an accurate answer. In this work we view VQA as a “feature extraction” module to extract image and caption representations. We employ these representations for the task of image-caption ranking. Each feature dimension captures (imagines) whether a fact (question-answer pair) could plausibly be true for the image and caption. This allows the model to interpret images and captions from a wide variety of perspectives. We propose score-level and representation-level fusion models to incorporate VQA knowledge in an existing state-of-the-art VQA-agnostic image-caption ranking model. We find that incorporating and reasoning about consistency between images and captions significantly improves performance. Concretely, our model improves state-of-the-art on caption retrieval by 7.1% and on image retrieval by 4.4% on the MSCOCO dataset.", "title": "" }, { "docid": "cf6816d0a38296a3dc2c04894a102283", "text": "This paper presents a high-efficiency positive buck- boost converter with mode-select circuits and feed-forward techniques. 
Four power transistors produce more conduction and more switching losses when the positive buck-boost converter operates in buck-boost mode. Utilizing the mode-select circuit, the proposed converter can decrease the loss of switches and let the positive buck-boost converter operate in buck, buck-boost, or boost mode. By adding feed-forward techniques, the proposed converter can improve transient response when the supply voltages are changed. The proposed converter has been fabricated with TSMC 0.35-μm CMOS 2P4M processes. The total chip area is 2.59 × 2.74 mm2 (with PADs), the output voltage is 3.3 V, and the regulated supply voltage range is from 2.5-5 V. Its switching frequency is 500 kHz and the maximum power efficiency is 91.6% as the load current equals 150 mA.", "title": "" }, { "docid": "0f4ac688367d3ea43643472b7d75ffc9", "text": "Many non-photorealistic rendering techniques exist to produce artistic ef fe ts from given images. Inspired by various artists, interesting effects can be produced b y using a minimal rendering, where the minimum refers to the number of tones as well as the nu mber and complexity of the primitives used for rendering. Our method is based on va rious computer vision techniques, and uses a combination of refined lines and blocks (po tentially simplified), as well as a small number of tones, to produce abstracted artistic re ndering with sufficient elements from the original image. We also considered a variety of methods to produce different artistic styles, such as colour and two-tone drawing s, and use semantic information to improve renderings for faces. By changing some intuitive par ameters a wide range of visually pleasing results can be produced. Our method is fully automatic. We demonstrate the effectiveness of our method with extensive experiments and a user study.", "title": "" }, { "docid": "8961d0bd4ba45849bd8fa5c53c0cfb1d", "text": "SUMMARY\nThe program MODELTEST uses log likelihood scores to establish the model of DNA evolution that best fits the data.\n\n\nAVAILABILITY\nThe MODELTEST package, including the source code and some documentation is available at http://bioag.byu. edu/zoology/crandall_lab/modeltest.html.", "title": "" }, { "docid": "d00691959822087a1bddc3b411d27239", "text": "We consider the lattice Boltzmann method for immiscible multiphase flow simulations. Classical lattice Boltzmann methods for this problem, e.g. the colour gradient method or the free energy approach, can only be applied when density and viscosity ratios are small. Moreover, they use additional fields defined on the whole domain to describe the different phases and model phase separation by special interactions at each node. In contrast, our approach simulates the flow using a single field and separates the fluid phases by a free moving interface. The scheme is based on the lattice Boltzmann method and uses the level set method to compute the evolution of the interface. To couple the fluid phases, we develop new boundary conditions which realise the macroscopic jump conditions at the interface and incorporate surface tension in the lattice Boltzmann framework. Various simulations are presented to validate the numerical scheme, e.g. two-phase channel flows, the Young-Laplace law for a bubble and viscous fingering in a Hele-Shaw cell. 
The results show that the method is feasible over a wide range of density and viscosity differences.", "title": "" }, { "docid": "704df193801e9cd282c0ce2f8a72916b", "text": "We present our preliminary work in developing augmented reali ty systems to improve methods for the construction, inspection, and renovatio n of architectural structures. Augmented reality systems add virtual computer-generated mate rial to the surrounding physical world. Our augmented reality systems use see-through headworn displays to overlay graphics and sounds on a person’s naturally occurring vision and hearing. As the person moves about, the position and orientation of his or her head is tracked, allowing the overlaid material to remai n tied to the physical world. We describe an experimental augmented reality system tha t shows the location of columns behind a finished wall, the location of re-bar s inside one of the columns, and a structural analysis of the column. We also discuss our pre liminary work in developing an augmented reality system for improving the constructio n of spaceframes. Potential uses of more advanced augmented reality systems are presented.", "title": "" }, { "docid": "c8daa2571cd7808664d3dbe775cf60ab", "text": "OBJECTIVE\nTo review the research addressing the relationship of childhood trauma to psychosis and schizophrenia, and to discuss the theoretical and clinical implications.\n\n\nMETHOD\nRelevant studies and previous review papers were identified via computer literature searches.\n\n\nRESULTS\nSymptoms considered indicative of psychosis and schizophrenia, particularly hallucinations, are at least as strongly related to childhood abuse and neglect as many other mental health problems. Recent large-scale general population studies indicate the relationship is a causal one, with a dose-effect.\n\n\nCONCLUSION\nSeveral psychological and biological mechanisms by which childhood trauma increases risk for psychosis merit attention. Integration of these different levels of analysis may stimulate a more genuinely integrated bio-psycho-social model of psychosis than currently prevails. Clinical implications include the need for staff training in asking about abuse and the need to offer appropriate psychosocial treatments to patients who have been abused or neglected as children. Prevention issues are also identified.", "title": "" }, { "docid": "5752868bb14f434ce281733f2ecf84f8", "text": "Tessellation in fundus is not only a visible feature for aged-related and myopic maculopathy but also confuse retinal vessel segmentation. The detection of tessellated images is an inevitable processing in retinal image analysis. In this work, we propose a model using convolutional neural network for detecting tessellated images. The input to the model is pre-processed fundus image, and the output indicate whether this photograph has tessellation or not. A database with 12,000 colour retinal images is collected to evaluate the classification performance. The best tessellation classifier achieves accuracy of 97.73% and AUC value of 0.9659 using pretrained GoogLeNet and transfer learning technique.", "title": "" }, { "docid": "1f7bd85c5b28f97565d8b38781e875ab", "text": "Parental socioeconomic status is among the widely cited factors that has strong association with academic performance of students. Explanatory research design was employed to assess the effects of parents’ socioeconomic status on the academic achievement of students in regional examination. 
To that end, regional examination result of 538 randomly selected students from thirteen junior secondary schools has been analysed using percentage, independent samples t-tests, Spearman’s rho correlation and one way ANOVA. The results of the analysis revealed that socioeconomic status of parents (particularly educational level and occupational status of parents) has strong association with the academic performance of students. Students from educated and better off families have scored higher result in their regional examination than their counterparts. Being a single parent student and whether parents are living together or not have also a significant impact on the academic performance of students. Parents’ age did not have a significant association with the performance of students.", "title": "" }, { "docid": "6868e3b2432d9914a9b4a4fd2b50b3ee", "text": "Nutritional deficiencies detection for coffee leaves is a task which is often undertaken manually by experts on the field known as agronomists. The process they follow to carry this task is based on observation of the different characteristics of the coffee leaves while relying on their own experience. Visual fatigue and human error in this empiric approach cause leaves to be incorrectly labeled and thus affecting the quality of the data obtained. In this context, different crowdsourcing approaches can be applied to enhance the quality of the data extracted. These approaches separately propose the use of voting systems, association rule filters and evolutive learning. In this paper, we extend the use of association rule filters and evolutive approach by combining them in a methodology to enhance the quality of the data while guiding the users during the main stages of data extraction tasks. Moreover, our methodology proposes a reward component to engage users and keep them motivated during the crowdsourcing tasks. The extracted dataset by applying our proposed methodology in a case study on Peruvian coffee leaves resulted in 93.33% accuracy with 30 instances collected by 8 experts and evaluated by 2 agronomic engineers with background on coffee leaves. The accuracy of the dataset was higher than independently implementing the evolutive feedback strategy and an empiric approach which resulted in 86.67% and 70% accuracy respectively under the same conditions.", "title": "" }, { "docid": "20f43c14feaf2da1e8999403bf350855", "text": "In this paper we propose a new approach to genetic optimization of modular neural networks with fuzzy response integration. The architecture of the modular neural network and the structure of the fuzzy system (for response integration) are designed using genetic algorithms. The proposed methodology is applied to the case of human recognition based on three biometric measures, namely iris, ear, and voice. Experimental results show that optimal modular neural networks can be designed with the use of genetic algorithms and as a consequence the recognition rates of such networks can be improved significantly. In the case of optimization of the fuzzy system for response integration, the genetic algorithm not only adjusts the number of membership functions and rules, but also allows the variation on the type of logic (type-1 or type-2) and the change in the inference model (switching to Mamdani model or Sugeno model). 
Another interesting finding of this work is that when human recognition is performed under noisy conditions, the response integrators of the modular networks constructed by the genetic algorithm are found to be optimal when using type-2 fuzzy logic. This could have been expected as there has been experimental evidence from previous works that type-2 fuzzy logic is better suited to model higher levels of uncertainty. 2012 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "a03a67b3442ef08fe378976377e76f76", "text": "The method of conjugate gradients provides a very effective way to optimize large, deterministic systems by gradient descent. In its standard form, however, it is not amenable to stochastic approximation of the gradient. Here we explore ideas from conjugate gradient in the stochastic (online) setting, using fast Hessian-gradient products to set up low-dimensional Krylov subspaces within individual mini-batches. In our benchmark experiments the resulting online learning algorithms converge orders of magnitude faster than ordinary stochastic gradient descent.", "title": "" }, { "docid": "4584a3a2b0e1cb30ba1976bd564d74b9", "text": "Deep neural networks (DNNs) have achieved great success, but the applications to mobile devices are limited due to their huge model size and low inference speed. Much effort thus has been devoted to pruning DNNs. Layer-wise neuron pruning methods have shown their effectiveness, which minimize the reconstruction error of linear response with a limited number of neurons in each single layer pruning. In this paper, we propose a new layer-wise neuron pruning approach by minimizing the reconstruction error of nonlinear units, which might be more reasonable since the error before and after activation can change significantly. An iterative optimization procedure combining greedy selection with gradient decent is proposed for single layer pruning. Experimental results on benchmark DNN models show the superiority of the proposed approach. Particularly, for VGGNet, the proposed approach can compress its disk space by 13.6× and bring a speedup of 3.7×; for AlexNet, it can achieve a compression rate of 4.1× and a speedup of 2.2×, respectively.", "title": "" }, { "docid": "f1a0ea0829f44b3ec235074521dc55c3", "text": "CONTEXT\nWithout detailed evidence of their effectiveness, pedometers have recently become popular as a tool for motivating physical activity.\n\n\nOBJECTIVE\nTo evaluate the association of pedometer use with physical activity and health outcomes among outpatient adults.\n\n\nDATA SOURCES\nEnglish-language articles from MEDLINE, EMBASE, Sport Discus, PsychINFO, Cochrane Library, Thompson Scientific (formerly known as Thompson ISI), and ERIC (1966-2007); bibliographies of retrieved articles; and conference proceedings.\n\n\nSTUDY SELECTION\nStudies were eligible for inclusion if they reported an assessment of pedometer use among adult outpatients, reported a change in steps per day, and included more than 5 participants.\n\n\nDATA EXTRACTION AND DATA SYNTHESIS\nTwo investigators independently abstracted data about the intervention; participants; number of steps per day; and presence or absence of obesity, diabetes, hypertension, or hyperlipidemia. Data were pooled using random-effects calculations, and meta-regression was performed.\n\n\nRESULTS\nOur searches identified 2246 citations; 26 studies with a total of 2767 participants met inclusion criteria (8 randomized controlled trials [RCTs] and 18 observational studies). 
The participants' mean (SD) age was 49 (9) years and 85% were women. The mean intervention duration was 18 weeks. In the RCTs, pedometer users significantly increased their physical activity by 2491 steps per day more than control participants (95% confidence interval [CI], 1098-3885 steps per day, P < .001). Among the observational studies, pedometer users significantly increased their physical activity by 2183 steps per day over baseline (95% CI, 1571-2796 steps per day, P < .0001). Overall, pedometer users increased their physical activity by 26.9% over baseline. An important predictor of increased physical activity was having a step goal such as 10,000 steps per day (P = .001). When data from all studies were combined, pedometer users significantly decreased their body mass index by 0.38 (95% CI, 0.05-0.72; P = .03). This decrease was associated with older age (P = .001) and having a step goal (P = .04). Intervention participants significantly decreased their systolic blood pressure by 3.8 mm Hg (95% CI, 1.7-5.9 mm Hg, P < .001). This decrease was associated with greater baseline systolic blood pressure (P = .009) and change in steps per day (P = .08).\n\n\nCONCLUSIONS\nThe results suggest that the use of a pedometer is associated with significant increases in physical activity and significant decreases in body mass index and blood pressure. Whether these changes are durable over the long term is undetermined.", "title": "" } ]
scidocsrr
e78085305a6078d0f412ce3784ef2718
Post-Quantum Cryptography on FPGA Based on Isogenies on Elliptic Curves
[ { "docid": "4dcc069e33f2831c7ccdd719c51607e1", "text": "We survey the progress that has been made on the arithmetic of elliptic curves in the past twenty-five years, with particular attention to the questions highlighted in Tate’s 1974 Inventiones paper.", "title": "" } ]
[ { "docid": "6399b2d75c6051d284594d327b2ad17a", "text": "System design and evaluation methodologies receive significant attention in natural language processing (NLP), with the systems typically being evaluated on a common task and against shared data sets. This enables direct system comparison and facilitates progress in the field. However, computational work on metaphor is considerably more fragmented than similar research efforts in other areas of NLP and semantics. Recent years have seen a growing interest in computational modeling of metaphor, with many new statistical techniques opening routes for improving system accuracy and robustness. However, the lack of a common task definition, shared data set, and evaluation strategy makes the methods hard to compare, and thus hampers our progress as a community in this area. The goal of this article is to review the system features and evaluation strategies that have been proposed for the metaphor processing task, and to analyze their benefits and downsides, with the aim of identifying the desired properties of metaphor processing systems and a set of requirements for their evaluation.", "title": "" }, { "docid": "72f6f6484499ccaa0188d2a795daa74c", "text": "Road detection is one of the most important research areas in driver assistance and automated driving field. However, the performance of existing methods is still unsatisfactory, especially in severe shadow conditions. To overcome those difficulties, first we propose a novel shadow-free feature extractor based on the color distribution of road surface pixels. Then we present a road detection framework based on the extractor, whose performance is more accurate and robust than that of existing extractors. Also, the proposed framework has much low-complexity, which is suitable for usage in practical systems.", "title": "" }, { "docid": "98218545bf3474b46857d828e1b86004", "text": "Blockchain-based smart contracts are considered a promising technology for handling financial agreements securely. In order to realize this vision, we need a formal language to unambiguously describe contract clauses. We introduce Findel – a purely declarative financial domain-specific language (DSL) well suited for implementation in blockchain networks. We implement an Ethereum smart contract that acts as a marketplace for Findel contracts and measure the cost of its operation. We analyze challenges in modeling financial agreements in decentralized networks and outline directions for future work.", "title": "" }, { "docid": "a2575a6a0516db2e47aab0388c5e9677", "text": "Isaac Miller and Mark Campbell Sibley School of Mechanical and Aerospace Engineering Dan Huttenlocher and Frank-Robert Kline Computer Science Department Aaron Nathan, Sergei Lupashin, and Jason Catlin School of Electrical and Computer Engineering Brian Schimpf School of Operations Research and Information Engineering Pete Moran, Noah Zych, Ephrahim Garcia, Mike Kurdziel, and Hikaru Fujishima Sibley School of Mechanical and Aerospace Engineering Cornell University Ithaca, New York 14853 e-mail: itm2@cornell.edu, mc288@cornell.edu, dph@cs.cornell.edu, amn32@cornell.edu, fk36@cornell.edu, pfm24@cornell.edu, ncz2@cornell.edu, bws22@cornell.edu, sv15@cornell.edu, eg84@cornell.edu, jac267@cornell.edu, msk244@cornell.edu, hf86@cornell.edu", "title": "" }, { "docid": "deccbb39b92e01611de6d0749f550726", "text": "As product prices become increasingly available on the World Wide Web, consumers attempt to understand how corporations vary these prices over time. 
However, corporations change prices based on proprietary algorithms and hidden variables (e.g., the number of unsold seats on a flight). Is it possible to develop data mining techniques that will enable consumers to predict price changes under these conditions?This paper reports on a pilot study in the domain of airline ticket prices where we recorded over 12,000 price observations over a 41 day period. When trained on this data, Hamlet --- our multi-strategy data mining algorithm --- generated a predictive model that saved 341 simulated passengers $198,074 by advising them when to buy and when to postpone ticket purchases. Remarkably, a clairvoyant algorithm with complete knowledge of future prices could save at most $320,572 in our simulation, thus HAMLET's savings were 61.8% of optimal. The algorithm's savings of $198,074 represents an average savings of 23.8% for the 341 passengers for whom savings are possible. Overall, HAMLET saved 4.4% of the ticket price averaged over the entire set of 4,488 simulated passengers. Our pilot study suggests that mining of price data available over the web has the potential to save consumers substantial sums of money per annum.", "title": "" }, { "docid": "5f5cf5235c10fe84e39e6725705a9940", "text": "A fully automatic method for descreening halftone images is presented based on convolutional neural networks with end-to-end learning. Incorporating context level information, the proposed method not only removes halftone artifacts but also synthesizes the fine details lost during halftone. The method consists of two main stages. In the first stage, intrinsic features of the scene are extracted, the low-frequency reconstruction of the image is estimated, and halftone patterns are removed. For the intrinsic features, the edges and object-categories are estimated and fed to the next stage as strong visual and contextual cues. In the second stage, fine details are synthesized on top of the low-frequency output based on an adversarial generative model. In addition, the novel problem of rescreening is addressed, where a natural input image is halftoned so as to be similar to a separately given reference halftone image. To this end, a two-stage convolutional neural network is also presented. Both networks are trained with millions of before-and-after example image pairs of various halftone styles. Qualitative and quantitative evaluations are provided, which demonstrates the effectiveness of the proposed methods.", "title": "" }, { "docid": "78976c627fb72db5393837169060a92a", "text": "Although many variants of language models have been proposed for information retrieval, there are two related retrieval heuristics remaining \"external\" to the language modeling approach: (1) proximity heuristic which rewards a document where the matched query terms occur close to each other; (2) passage retrieval which scores a document mainly based on the best matching passage. Existing studies have only attempted to use a standard language model as a \"black box\" to implement these heuristics, making it hard to optimize the combination parameters.\n In this paper, we propose a novel positional language model (PLM) which implements both heuristics in a unified language model. The key idea is to define a language model for each position of a document, and score a document based on the scores of its PLMs. 
The PLM is estimated based on propagated counts of words within a document through a proximity-based density function, which both captures proximity heuristics and achieves an effect of \"soft\" passage retrieval. We propose and study several representative density functions and several different PLM-based document ranking strategies. Experiment results on standard TREC test collections show that the PLM is effective for passage retrieval and performs better than a state-of-the-art proximity-based retrieval model.", "title": "" }, { "docid": "23987d01051f470e26666d6db340018b", "text": "This paper presents a device that is able to reproduce atmospheric discharges in a small scale. First, there was simulated an impulse generator circuit that could meet the main characteristics of the common lightning strokes waveform. Later, four different generator circuits were developed with the selection made by a microcontroller. Finally, the output was subject to amplification circuits that increased its amplitude. The impulses generated had a very similar shape compared to the real atmospheric discharges to the international standards for impulse testing. The apparatus is meant for application in electric grounding systems and for tests in high frequency to measure the soil impedance.", "title": "" }, { "docid": "43100f1c6563b4af125c1c6040daa437", "text": "Humans can naturally understand an image in depth with the aid of rich knowledge accumulated from daily lives or professions. For example, to achieve fine-grained image recognition (e.g., categorizing hundreds of subordinate categories of birds) usually requires a comprehensive visual concept organization including category labels and part-level attributes. In this work, we investigate how to unify rich professional knowledge with deep neural network architectures and propose a Knowledge-Embedded Representation Learning (KERL) framework for handling the problem of fine-grained image recognition. Specifically, we organize the rich visual concepts in the form of knowledge graph and employ a Gated Graph Neural Network to propagate node message through the graph for generating the knowledge representation. By introducing a novel gated mechanism, our KERL framework incorporates this knowledge representation into the discriminative image feature learning, i.e., implicitly associating the specific attributes with the feature maps. Compared with existing methods of fine-grained image classification, our KERL framework has several appealing properties: i) The embedded high-level knowledge enhances the feature representation, thus facilitating distinguishing the subtle differences among subordinate categories. ii) Our framework can learn feature maps with a meaningful configuration that the highlighted regions finely accord with the nodes (specific attributes) of the knowledge graph. Extensive experiments on the widely used CaltechUCSD bird dataset demonstrate the superiority of ∗Corresponding author is Liang Lin (Email: linliang@ieee.org). This work was supported by the National Natural Science Foundation of China under Grant 61622214, the Science and Technology Planning Project of Guangdong Province under Grant 2017B010116001, and Guangdong Natural Science Foundation Project for Research Teams under Grant 2017A030312006. head-pattern: masked Bohemian", "title": "" }, { "docid": "c82c28a44adb4a67e44e1d680b1d13ad", "text": "Cipherbase is a comprehensive database system that provides strong end-to-end data confidentiality through encryption. 
Cipherbase is based on a novel architecture that combines an industrial strength database engine (SQL Server) with lightweight processing over encrypted data that is performed in secure hardware. The overall architecture provides significant benefits over the state-of-the-art in terms of security, performance, and functionality. This paper presents a prototype of Cipherbase that uses FPGAs to provide secure processing and describes the system engineering details implemented to achieve competitive performance for transactional workloads. This includes hardware-software co-design issues (e.g. how to best offer parallelism), optimizations to hide the latency between the secure hardware and the main system, and techniques to cope with space inefficiencies. All these optimizations were carefully designed not to affect end-to-end data confidentiality. Our experiments with the TPC-C benchmark show that in the worst case when all data are strongly encrypted, Cipherbase achieves 40% of the throughput of plaintext SQL Server. In more realistic cases, if only critical data such as customer names are encrypted, the Cipherbase throughput is more than 90% of plaintext SQL Server.", "title": "" }, { "docid": "6dd5e223a54b9f812031ecff80d39445", "text": "In modern smart grid networks, the traditional power grid is enabled by the technological advances in sensing, measurement, and control devices with two-way communications between the suppliers and customers. The smart grid integration helps the power grid networks to be smarter, but it also increases the risk of adversaries because of the currently obsoleted cyber-infrastructure. Adversaries can easily paralyzes the power facility by misleading the energy management system with injecting false data. In this paper, we proposes a defense strategy to the malicious data injection attack for smart grid state estimation at the control center. The proposed “adaptive CUSUM algorithm”, is recursive in nature, and each recursion comprises two inter-leaved stages: Stage 1 introduces the linear unknown parameter solver technique, and Stage 2 applies the multi-thread CUSUM algorithm for quickest change detection. The proposed scheme is able to determine the possible existence of adversary at the control center as quickly as possible without violating the given constraints such as a certain level of detection accuracy and false alarm. The performance of the proposed algorithm is evaluated by both mathematic analysis and numerical simulation.", "title": "" }, { "docid": "60a6c8588c46fa2aa63a3348723f2bb1", "text": "An early warning system can help to identify at-risk students, or predict student learning performance by analyzing learning portfolios recorded in a learning management system (LMS). Although previous studies have shown the applicability of determining learner behaviors from an LMS, most investigated datasets are not assembled from online learning courses or from whole learning activities undertaken on courses that can be analyzed to evaluate students’ academic achievement. Previous studies generally focus on the construction of predictors for learner performance evaluation after a course has ended, and neglect the practical value of an ‘‘early warning’’ system to predict at-risk students while a course is in progress. We collected the complete learning activities of an online undergraduate course and applied data-mining techniques to develop an early warning system. 
Our results showed that, timedependent variables extracted from LMS are critical factors for online learning. After students have used an LMS for a period of time, our early warning system effectively characterizes their current learning performance. Data-mining techniques are useful in the construction of early warning systems; based on our experimental results, classification and regression tree (CART), supplemented by AdaBoost is the best classifier for the evaluation of learning performance investigated by this study. 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "4d2e2c25a32dc54219336c886b79b2ef", "text": "YouTube is one of the largest video sharing websites (with social networking features) on the Internet. The immense popularity of YouTube, anonymity and low publication barrier has resulted in several forms of misuse and video pollution such as uploading of malicious, copyright violated and spam video or content. It has been observed that the presence of opportunistic users post unrelated, promotional, pornographic videos (spam videos posted manually or using automated scripts). A method of mining YouTube to classify a video as spam or legitimate based on video attributes has been presented. The empirical analysis reveals that certain linguistic features (presence of certain terms in the title or description of the YouTube video), temporal features, popularity based features, time based features can be used to predict the video type. We identify features with discriminatory powers and use it to recognize video response spam. Keywords— video spam; spam detection; YouTube; TubeKit", "title": "" }, { "docid": "7f52960fb76c3c697ef66ffee91b13ee", "text": "The aim of this work was to explore the feasibility of combining hot melt extrusion (HME) with 3D printing (3DP) technology, with a view to producing different shaped tablets which would be otherwise difficult to produce using traditional methods. A filament extruder was used to obtain approx. 4% paracetamol loaded filaments of polyvinyl alcohol with characteristics suitable for use in fused-deposition modelling 3DP. Five different tablet geometries were successfully 3D-printed-cube, pyramid, cylinder, sphere and torus. The printing process did not affect the stability of the drug. Drug release from the tablets was not dependent on the surface area but instead on surface area to volume ratio, indicating the influence that geometrical shape has on drug release. An erosion-mediated process controlled drug release. This work has demonstrated the potential of 3DP to manufacture tablet shapes of different geometries, many of which would be challenging to manufacture by powder compaction.", "title": "" }, { "docid": "5546ec134b205144fed46a585db447b4", "text": "Historically, the control of wound infection depended on antiseptic and aseptic techniques directed at coping with the infecting organism. In the 19th century and the early part of the 20th century, wound infections had devastating consequences and a measurable mortality. Even in the 1960s, before the correct use of antibiotics and the advent of modern preoperative and postoperative care, as much as one quarter of a surgical ward might have been occupied by patients with wound complications. As a result, wound management, in itself, became an important component of ward care and of medical education. 
It is fortunate that many factors have intervened so that the so-called wound rounds have become a practice of the past.The epidemiology of wound infection has changed as surgeons have learned to control bacteria and the inoculum as well as to focus increasingly on the patient (the host) for measures that will continue to provide improved results. The following three factors are the determinants of any infectious process:", "title": "" }, { "docid": "24615e8513ce50d229b64eecaa5af8c8", "text": "Driver's gaze direction is a critical information in understanding driver state. In this paper, we present a distributed camera framework to estimate driver's coarse gaze direction using both head and eye cues. Coarse gaze direction is often sufficient in a number of applications, however, the challenge is to estimate gaze direction robustly in naturalistic real-world driving. Towards this end, we propose gaze-surrogate features estimated from eye region via eyelid and iris analysis. We present a novel iris detection computational framework. We are able to extract proposed features robustly and determine driver's gaze zone effectively. We evaluated the proposed system on a dataset, collected from naturalistic on-road driving in urban streets and freeways. A human expert annotated driver's gaze zone ground truth using information from the driver's eyes and the surrounding context. We conducted two experiments to compare the performance of the gaze zone estimation with and without eye cues. The head-alone experiment has a reasonably good result for most of the gaze zones with an overall 79.8% of weighted accuracy. By adding eye cues, the experimental result shows that the overall weighted accuracy is boosted to 94.9%, and all the individual gaze zones have a better true detection rate especially between the adjacent zones. Therefore, our experimental evaluations show efficacy of the proposed features and very promising results for robust gaze zone estimation.", "title": "" }, { "docid": "8a074cfc00239c3987c8d80480c7a2f6", "text": "The paper presents a novel approach for extracting structural features from segmented cursive handwriting. The proposed approach is based on the contour code and stroke direct ion. The contour code feature utilises the rate of change of slope along the c ontour profile in addition to other properties such as the ascender and descender count, start point and e d point. The direction feature identifies individual line segments or strokes from the character’s outer boundary or thinned representation and highlights each character's pertine nt d rection information. Each feature is investigated employing a benchmark da tabase and the experimental results using the proposed contour code based structural fea ture are very promising. A comparative evaluation with the directional feature a nd existing transition feature is included.", "title": "" }, { "docid": "1daaadeb6cfc16143788b51943deff79", "text": "sonSQL is a MySQL variant that aims to be the default database system for social network data. It uses a conceptual schema called sonSchema to translate a social network design into logical tables. This paper introduces sonSchema, shows how it can be instantiated, and illustrates social network analysis for sonSchema datasets. Experiments show such SQL-based analysis brings insight into community evolution, cluster discovery and action propagation.", "title": "" }, { "docid": "49e148ddb4c5798c157e8568c10fae3d", "text": "Aesthetic quality estimation of an image is a challenging task. 
In this paper, we introduce a deep CNN approach to tackle this problem. We adopt the sate-of-the-art object-recognition CNN as our baseline model, and adapt it for handling several high-level attributes. The networks capable of dealing with these high-level concepts are then fused by a learned logical connector for predicting the aesthetic rating. Results on the standard benchmark shows the effectiveness of our approach.", "title": "" }, { "docid": "83e897a37aca4c349b4a910c9c0787f4", "text": "Computational imaging methods that can exploit multiple modalities have the potential to enhance the capabilities of traditional sensing systems. In this paper, we propose a new method that reconstructs multimodal images from their linear measurements by exploiting redundancies across different modalities. Our method combines a convolutional group-sparse representation of images with total variation (TV) regularization for high-quality multimodal imaging. We develop an online algorithm that enables the unsupervised learning of convolutional dictionaries on large-scale datasets that are typical in such applications. We illustrate the benefit of our approach in the context of joint intensity-depth imaging.", "title": "" } ]
scidocsrr
829e1c0a7f1869c51e60d946326bf49f
Image Segmentation with Cascaded Hierarchical Models and Logistic Disjunctive Normal Networks
[ { "docid": "350c899dbd0d9ded745b70b6f5e97d19", "text": "We propose an approach to include contextual features for labeling images, in which each pixel is assigned to one of a finite set of labels. The features are incorporated into a probabilistic framework, which combines the outputs of several components. Components differ in the information they encode. Some focus on the image-label mapping, while others focus solely on patterns within the label field. Components also differ in their scale, as some focus on fine-resolution patterns while others on coarser, more global structure. A supervised version of the contrastive divergence algorithm is applied to learn these features from labeled image data. We demonstrate performance on two real-world image databases and compare it to a classifier and a Markov random field.", "title": "" } ]
[ { "docid": "fcb69bd97835da9f244841d54996f070", "text": "A conventional transverse slot substrate integrated waveguide (SIW) periodic leaky wave antenna (LWA) provides a fan beam, usually E-plane beam having narrow beam width and H-plane having wider beamwidth. The main beam direction changes with frequency sweep. In the applications requiring a pencil beam, an array of the antenna is generally used to decrease the H-plane beam width which requires long and tiring optimization steps. In this paper, it is shown that the H-plane beamwidth can be easily decreased by using two baffles with a conventional leaky wave antenna. A prototype periodic leaky wave antenna with baffles is designed and fabricated for X-band applications. The E- and H-plane 3 dB beam widths of the antenna at 10.5GHz are, respectively, 6° and 22°. Over the frequency range 8.2–14 GHz, the antenna scans from θ = −60° to θ = 15°, from backward to forward direction. The use of baffles also improves the gain of the antenna including broadside direction by approximately 4 dB.", "title": "" }, { "docid": "2800046ff82a5bc43b42c1d2e2dc6777", "text": "We develop a novel, fundamental and surprisingly simple randomized iterative method for solving consistent linear systems. Our method has six different but equivalent interpretations: sketch-and-project, constrain-and-approximate, random intersect, random linear solve, random update and random fixed point. By varying its two parameters—a positive definite matrix (defining geometry), and a random matrix (sampled in an i.i.d. fashion in each iteration)—we recover a comprehensive array of well known algorithms as special cases, including the randomized Kaczmarz method, randomized Newton method, randomized coordinate descent method and random Gaussian pursuit. We naturally also obtain variants of all these methods using blocks and importance sampling. However, our method allows for a much wider selection of these two parameters, which leads to a number of new specific methods. We prove exponential convergence of the expected norm of the error in a single theorem, from which existing complexity results for known variants can be obtained. However, we also give an exact formula for the evolution of the expected iterates, which allows us to give lower bounds on the convergence rate.", "title": "" }, { "docid": "e0c7387ae9602d3de30695a27f35c16f", "text": "Nanoscale membrane assemblies of sphingolipids, cholesterol, and certain proteins, also known as lipid rafts, play a crucial role in facilitating a broad range of important cell functions. Whereas on living cell membranes lipid rafts have been postulated to have nanoscopic dimensions and to be highly transient, the existence of a similar type of dynamic nanodomains in multicomponent lipid bilayers has been questioned. Here, we perform fluorescence correlation spectroscopy on planar plasmonic antenna arrays with different nanogap sizes to assess the dynamic nanoscale organization of mimetic biological membranes. Our approach takes advantage of the highly enhanced and confined excitation light provided by the nanoantennas together with their outstanding planarity to investigate membrane regions as small as 10 nm in size with microsecond time resolution. Our diffusion data are consistent with the coexistence of transient nanoscopic domains in both the liquid-ordered and the liquid-disordered microscopic phases of multicomponent lipid bilayers. 
These nanodomains have characteristic residence times between 30 and 150 μs and sizes around 10 nm, as inferred from the diffusion data. Thus, although microscale phase separation occurs on mimetic membranes, nanoscopic domains also coexist, suggesting that these transient assemblies might be similar to those occurring in living cells, which in the absence of raft-stabilizing proteins are poised to be short-lived. Importantly, our work underscores the high potential of photonic nanoantennas to interrogate the nanoscale heterogeneity of native biological membranes with ultrahigh spatiotemporal resolution.", "title": "" }, { "docid": "c215a497d39f4f95a9fc720debb14b05", "text": "Adding frequency reconfigurability to a compact metamaterial-inspired antenna is investigated. The antenna is a printed monopole with an incorporated slot and is fed by a coplanar waveguide (CPW) line. This antenna was originally inspired from the concept of negative-refractive-index metamaterial transmission lines and exhibits a dual-band behavior. By using a varactor diode, the lower band (narrowband) of the antenna, which is due to radiation from the incorporated slot, can be tuned over a broad frequency range, while the higher band (broadband) remains effectively constant. A detailed equivalent circuit model is developed that predicts the frequency-tuning behavior for the lower band of the antenna. The circuit model shows the involvement of both CPW even and odd modes in the operation of the antenna. Experimental results show that, for a varactor diode capacitance approximately ranging from 0.1-0.7 pF, a tuning range of 1.6-2.23 GHz is achieved. The size of the antenna at the maximum frequency is 0.056 λ0 × 0.047 λ0 and the antenna is placed over a 0.237 λ0 × 0.111 λ0 CPW ground plane (λ0 being the wavelength in vacuum).", "title": "" }, { "docid": "8017a70c73f6758b685648054201342a", "text": "Detecting samples from previously unknown classes is a crucial task in object recognition, especially when dealing with real-world applications where the closed-world assumption does not hold. We present how to apply a null space method for novelty detection, which maps all training samples of one class to a single point. Beside the possibility of modeling a single class, we are able to treat multiple known classes jointly and to detect novelties for a set of classes with a single model. In contrast to modeling the support of each known class individually, our approach makes use of a projection in a joint subspace where training samples of all known classes have zero intra-class variance. This subspace is called the null space of the training data. To decide about novelty of a test sample, our null space approach allows for solely relying on a distance measure instead of performing density estimation directly. Therefore, we derive a simple yet powerful method for multi-class novelty detection, an important problem not studied sufficiently so far. Our novelty detection approach is assessed in comprehensive multi-class experiments using the publicly available datasets Caltech-256 and Image Net. The analysis reveals that our null space approach is perfectly suited for multi-class novelty detection since it outperforms all other methods.", "title": "" }, { "docid": "983ec9cdd75d0860c96f89f3c9b2f752", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at . http://www.jstor.org/page/info/about/policies/terms.jsp. 
JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "17b68f3275ce077e6c4e9f4c0006c43c", "text": "A compact folded dipole antenna for millimeter-wave (MMW) energy harvesting is proposed in this paper. The antenna consists of two folded arms excited by a coplanar stripline (CPS). A coplanar waveguide (CPW) to coplanar stripline (CPS) transformer is introduced for wide band operation. The antenna radiates from 33 GHz to 41 GHz with fractional bandwidth about 21.6%. The proposed antenna shows good radiation characteristics and low VSWR, lower than 2, as well as average antenna gain is around 5 dBi over the whole frequency range. The proposed dipole antenna shows about 49% length reduction. The simulated results using both Ansoft HFSS and CST Microwave Studio show a very good agreement between them.", "title": "" }, { "docid": "32b4b275dc355dff2e3e168fe6355772", "text": "The management of coupon promotions is an important issue for marketing managers since it still is the major promotion medium. However, the distribution of coupons does not go without problems. Although manufacturers and retailers are investing heavily in the attempt to convince as many customers as possible, overall coupon redemption rate is low. This study improves the strategy of retailers and manufacturers concerning their target selection since both parties often end up in a battle for customers. Two separate models are built: one model makes predictions concerning redemption behavior of coupons that are distributed by the retailer while another model does the same for coupons handed out by manufacturers. By means of the feature-selection technique ‘Relief-F’ the dimensionality of the models is reduced, since it searches for the variables that are relevant for predicting the outcome. In this way, redundant variables are not used in the model-building process. The model is evaluated on real-life data provided by a retailer in FMCG. The contributions of this study for retailers as well as manufacturers are threefold. First, the possibility to classify customers concerning their coupon usage is shown. In addition, it is demonstrated that retailers and manufacturers can stay clear of each other in their marketing campaigns. Finally, the feature-selection technique ‘Relief-F’ proves to facilitate and optimize the performance of the models.", "title": "" }, { "docid": "90f188c1f021c16ad7c8515f1244c08a", "text": "Minimally invasive principles should be the driving force behind rehabilitating young individuals affected by severe dental erosion. The maxillary anterior teeth of a patient, class ACE IV, has been treated following the most conservatory approach, the Sandwich Approach. These teeth, if restored by conventional dentistry (eg, crowns) would have required elective endodontic therapy and crown lengthening. To preserve the pulp vitality, six palatal resin composite veneers and four facial ceramic veneers were delivered instead with minimal, if any, removal of tooth structure. In this article, the details about the treatment are described.", "title": "" }, { "docid": "fb9bbfc3e301cb669663a12d1f18a11f", "text": "In extensively modified landscapes, how the matrix is managed determines many conservation outcomes. 
Recent publications revise popular conceptions of a homogeneous and static matrix, yet we still lack an adequate conceptual model of the matrix. Here, we identify three core effects that influence patch-dependent species, through impacts associated with movement and dispersal, resource availability, and the abiotic environment. These core effects are modified by five 'dimensions': spatial and temporal variation in matrix quality; spatial scale; temporal scale of matrix variation; and adaptation. The conceptual domain of the matrix, defined as three core effects and their interaction with these five dimensions, provides a much-needed framework to underpin management of fragmented landscapes and highlights new research priorities.", "title": "" }, { "docid": "ca9fb43322aae64da6ce7de83b7ed5ed", "text": "We use a combination of lab and field evidence to study whether preferences for immediacy and the tendency to procrastinate are connected as in O’Donoghue and Rabin (1999a). To measure immediacy, we have participants choose between smaller-sooner and larger-later rewards. Both rewards are paid by check to control for transaction costs. To measure procrastination, we record how fast participants cash their checks and complete other tasks. We find that individuals with a preference for immediacy are more likely to procrastinate. We also find evidence that individuals differ in the degree to which they anticipate their own procrastination. First version: December 2007 JEL Codes: D01, D03, D90", "title": "" }, { "docid": "dabe8a7bff4a9d3ba910744804579b74", "text": "Charitable giving is influenced by many social, psychological, and economic factors. One common way to encourage individuals to donate to charities is by offering to match their contribution (often by their employer or by the government). Conitzer and Sandholm introduced the idea of using auctions to allow individuals to offer to match the contribution of others. We explore this idea in a social network setting, where individuals care about the contribution of their neighbors, and are allowed to specify contributions that are conditional on the contribution of their neighbors.\n We give a mechanism for this setting that raises the largest individually rational contributions given the conditional bids, and analyze the equilibria of this mechanism in the case of linear utilities. We show that if the social network is strongly connected, the mechanism always has an equilibrium that raises the maximum total contribution (which is the contribution computed according to the true utilities); in other words, the price of stability of the game defined by this mechanism is one. Interestingly, although the mechanism is not dominant strategy truthful (and in fact, truthful reporting need not even be a Nash equilibrium of this game), this result shows that the mechanism always has a full-information equilibrium which achieves the same outcome as in the truthful scenario. 
Of course, there exist cases where the maximum total contribution even with true utilities is zero: we show that the existence of non-zero equilibria can be characterized exactly in terms of the largest eigenvalue of the utility matrix associated with the social network.", "title": "" }, { "docid": "1927d9f2010bb8c49d6511c9d3dac2f0", "text": "To determine the relationships among plasma ghrelin and leptin concentrations and hypothalamic ghrelin contents, and sleep, cortical brain temperature (Tcrt), and feeding, we determined these parameters in rats in three experimental conditions: in free-feeding rats with normal diurnal rhythms, in rats with feeding restricted to the 12-h light period (RF), and in rats subjected to 5-h of sleep deprivation (SD) at the beginning of the light cycle. Plasma ghrelin and leptin displayed diurnal rhythms with the ghrelin peak preceding and the leptin peak following the major daily feeding peak in hour 1 after dark onset. RF reversed the diurnal rhythm of these hormones and the rhythm of rapid-eye-movement sleep (REMS) and significantly altered the rhythm of Tcrt. In contrast, the duration and intensity of non-REMS (NREMS) were hardly responsive to RF. SD failed to change leptin concentrations, but it promptly stimulated plasma ghrelin and induced eating. SD elicited biphasic variations in the hypothalamic ghrelin contents. SD increased plasma corticosterone, but corticosterone did not seem to influence either leptin or ghrelin. The results suggest a strong relationship between feeding and the diurnal rhythm of leptin and that feeding also fundamentally modulates the diurnal rhythm of ghrelin. The variations in hypothalamic ghrelin contents might be associated with sleep-wake activity in rats, but, unlike the previous observations in humans, obvious links could not be detected between sleep and the diurnal rhythms of plasma concentrations of either ghrelin or leptin in the rat.", "title": "" }, { "docid": "fa9571673fe848d1d119e2d49f21d28d", "text": "Convolutional Neural Networks (CNNs) trained on large scale RGB databases have become the secret sauce in the majority of recent approaches for object categorization from RGB-D data. Thanks to colorization techniques, these methods exploit the filters learned from 2D images to extract meaningful representations in 2.5D. Still, the perceptual signature of these two kind of images is very different, with the first usually strongly characterized by textures, and the second mostly by silhouettes of objects. Ideally, one would like to have two CNNs, one for RGB and one for depth, each trained on a suitable data collection, able to capture the perceptual properties of each channel for the task at hand. This has not been possible so far, due to the lack of a suitable depth database. This paper addresses this issue, proposing to opt for synthetically generated images rather than collecting by hand a 2.5D large scale database. While being clearly a proxy for real data, synthetic images allow to trade quality for quantity, making it possible to generate a virtually infinite amount of data. We show that the filters learned from such data collection, using the very same architecture typically used on visual data, learns very different filters, resulting in depth features (a) able to better characterize the different facets of depth images, and (b) complementary with respect to those derived from CNNs pre-trained on 2D datasets. 
Experiments on two publicly available databases show the power of our approach.", "title": "" }, { "docid": "e0682efd9c8807411da832b796b47da2", "text": "The rise of cloud computing is radically changing the way enterprises manage their information technology (IT) assets. Considering the benefits of cloud computing to the information technology sector, we present a review of current research initiatives and applications of the cloud computing paradigm related to product design and manufacturing. In particular, we focus on exploring the potential of utilizing cloud computing for selected aspects of collaborative design, distributed manufacturing, collective innovation, data mining, semantic web technology, and virtualization. In addition, we propose to expand the paradigm of cloud computing to the field of computer-aided design and manufacturing and propose a new concept of cloud-based design and manufacturing (CBDM). Specifically, we (1) propose a comprehensive definition of CBDM; (2) discuss its key characteristics; (3) relate current research in design and manufacture to CBDM; and (4) identify key research issues and future trends.", "title": "" }, { "docid": "6a282fbc6ee9baea673c2f9f15955a18", "text": "A 34-year-old woman suffered from significant chronic pain, depression, non-restorative sleep, chronic fatigue, severe morning stiffness, leg cramps, irritable bowel syndrome, hypersensitivity to cold, concentration difficulties, and forgetfulness. Blood tests were negative for rheumatic disorders. The patient was diagnosed with Fibromyalgia syndrome (FMS). Due to the lack of effectiveness of pharmacological therapies in FMS, she approached a novel metabolic proposal for the symptomatic remission. Its core idea is supporting serotonin synthesis by allowing a proper absorption of tryptophan assumed with food, while avoiding, or at least minimizing the presence of interfering non-absorbed molecules, such as fructose and sorbitol. Such a strategy resulted in a rapid improvement of symptoms after only few days on diet, up to the remission of most symptoms in 2 months. Depression, widespread chronic pain, chronic fatigue, non-restorative sleep, morning stiffness, and the majority of the comorbidities remitted. Energy and vitality were recovered by the patient as prior to the onset of the disease, reverting the occupational and social disabilities. The patient episodically challenged herself breaking the dietary protocol leading to its negative test and to the evaluation of its benefit. These breaks correlated with the recurrence of the symptoms, supporting the correctness of the biochemical hypothesis underlying the diet design toward remission of symptoms, but not as a final cure. We propose this as a low risk and accessible therapeutic protocol for the symptomatic remission in FMS with virtually no costs other than those related to vitamin and mineral salt supplements in case of deficiencies. A pilot study is required to further ground this metabolic approach, and to finally evaluate its inclusion in the guidelines for clinical management of FMS.", "title": "" }, { "docid": "9493b44f845bb7d37bf68a96a8ff96f6", "text": "This paper focuses on services and applications provided to mobile users using airborne computing infrastructure. We present concepts such as drones-as-a-service and fly-in, fly-out infrastructure, and note data management and system design issues that arise in these scenarios. 
Issues of Big Data arising from such applications, optimising the configuration of airborne and ground infrastructure to provide the best QoS and QoE, situation-awareness, scalability, reliability, scheduling for efficiency, interaction with users and drones using physical annotations are outlined.", "title": "" }, { "docid": "a2082f1b4154cd11e94eff18a016e91e", "text": "1 During the summer of 2005, I discovered that there was not a copy of my dissertation available from the library at McGill University. I was, however, able to obtain a copy of it on microfilm from another university that had initially obtained it on interlibrary loan. I am most grateful to Vicki Galbraith who typed this version from that copy, which except for some minor variations due to differences in type size and margins (plus this footnote, of course) is identical to that on the microfilm. ACKNOWLEDGEMENTS 1 The writer is grateful to Dr. J. T. McIlhone, Associate General Director in Charge of English Classes of the Montreal Catholic School Board, for his kind cooperation in making subjects available, and to the Principals and French teachers of each high school for their assistance and cooperation during the testing programs. advice on the statistical analysis. In addition, the writer would like to express his appreciation to Mr. K. Tunstall for his assistance in the difficult task of interviewing the parents of each student. Finally, the writer would like to express his gratitude to Janet W. Gardner for her invaluable assistance in all phases of the research program.", "title": "" }, { "docid": "02770bf28a64851bf773c56736efa537", "text": "Wearable robotics is strongly oriented to humans. New applications for wearable robots are encouraged by the lightness and portability of new devices and the progress in human-robot cooperation strategies. In this paper, we propose the different design guidelines to realize a robotic extra-finger for human grasping enhancement. Such guidelines were followed for the realization of three prototypes obtained using rapid prototyping techniques, i.e., a 3D printer and an open hardware development platform. Both fully actuated and under-actuated solutions have been explored. In the proposed wearable design, the robotic extra-finger can be worn as a bracelet in its rest position. The availability of a supplementary finger in the human hand allows to enlarge its workspace, improving grasping and manipulation capabilities. This preliminary work is a first step towards the development of robotic extra-limbs able to increase human workspace and dexterity.", "title": "" }, { "docid": "9186f1998d2c836fb1f9b95fd9122911", "text": "We introduce inScent, a wearable olfactory display that can be worn in mobile everyday situations and allows the user to receive personal scented notifications, i.e. scentifications. Olfaction, i.e. the sense of smell, is used by humans as a sensorial information channel as an element for experiencing the environment. Olfactory sensations are closely linked to emotions and memories, but also notify about personal dangers such as fire or foulness. We want to utilize the properties of smell as a notification channel by amplifying received mobile notifications with artificially emitted scents. We built a wearable olfactory display that can be worn as a pendant around the neck and contains up to eight different scent aromas that can be inserted and quickly exchanged via small scent cartridges. Upon emission, scent aroma is vaporized and blown towards the user. 
A hardware- and software framework is presented that allows developers to add scents to their mobile applications. In a qualitative user study, participants wore the inScent wearable in public. We used subsequent semi-structured interviews and grounded theory to build a common understanding of the experience and derived lessons learned for the use of scentifications in mobile situations.", "title": "" } ]
scidocsrr
2a24868023e3e3792cb16af7531021fb
Aesthetics of Interaction Design: A Literature Review
[ { "docid": "bc892fe2a369f701e0338085eaa0bdbd", "text": "In his In the blink of an eye,Walter Murch, the Oscar-awarded editor of the English Patient, Apocalypse Now, and many other outstanding movies, devises the Rule of Six—six criteria for what makes a good cut. On top of his list is \"to be true to the emotion of the moment,\" a quality more important than advancing the story or being rhythmically interesting. The cut has to deliver a meaningful, compelling, and emotion-rich \"experience\" to the audience. Because, \"what they finally remember is not the editing, not the camerawork, not the performances, not even the story—it’s how they felt.\" Technology for all the right reasons applies this insight to the design of interactive products and technologies—the domain of Human-Computer Interaction,Usability Engineering,and Interaction Design. It takes an experiential approach, putting experience before functionality and leaving behind oversimplified calls for ease, efficiency, and automation or shallow beautification. Instead, it explores what really matters to humans and what it needs to make technology more meaningful. The book clarifies what experience is, and highlights five crucial aspects and their implications for the design of interactive products. It provides reasons why we should bother with an experiential approach, and presents a detailed working model of experience useful for practitioners and academics alike. It closes with the particular challenges of an experiential approach for design. The book presents its view as a comprehensive, yet entertaining blend of scientific findings, design examples, and personal anecdotes.", "title": "" } ]
[ { "docid": "da6a74341c8b12658aea2a267b7a0389", "text": "An experiment demonstrated that false incriminating evidence can lead people to accept guilt for a crime they did not commit. Subjects in a fastor slow-paced reaction time task were accused of damaging a computer by pressing the wrong key. All were truly innocent and initially denied the charge. A confederate then said she saw the subject hit the key or did not see the subject hit the key. Compared with subjects in the slowpacelno-witness group, those in the fast-pace/witness group were more likely to sign a confession, internalize guilt for the event, and confabulate details in memory consistent with that belief Both legal and conceptual implications are discussed. In criminal law, confession evidence is a potent weapon for the prosecution and a recurring source of controversy. Whether a suspect's self-incriminating statement was voluntary or coerced and whether a suspect was of sound mind are just two of the issues that trial judges and juries consider on a routine basis. To guard citizens against violations of due process and to minimize the risk that the innocent would confess to crimes they did not commit, the courts have erected guidelines for the admissibility of confession evidence. Although there is no simple litmus test, confessions are typically excluded from triai if elicited by physical violence, a threat of harm or punishment, or a promise of immunity or leniency, or without the suspect being notified of his or her Miranda rights. To understand the psychology of criminal confessions, three questions need to be addressed: First, how do police interrogators elicit self-incriminating statements (i.e., what means of social influence do they use)? Second, what effects do these methods have (i.e., do innocent suspects ever confess to crimes they did not commit)? Third, when a coerced confession is retracted and later presented at trial, do juries sufficiently discount the evidence in accordance with the law? General reviews of relevant case law and research are available elsewhere (Gudjonsson, 1992; Wrightsman & Kassin, 1993). The present research addresses the first two questions. Informed by developments in case law, the police use various methods of interrogation—including the presentation of false evidence (e.g., fake polygraph, fingerprints, or other forensic test results; staged eyewitness identifications), appeals to God and religion, feigned friendship, and the use of prison informants. A number of manuals are available to advise detectives on how to extract confessions from reluctant crime suspects (Aubry & Caputo, 1965; O'Hara & O'Hara, 1981). The most popular manual is Inbau, Reid, and Buckley's (1986) Criminal Interrogation and Confessions, originally published in 1%2, and now in its third edition. Address correspondence to Saul Kassin, Department of Psychology, Williams College, WllUamstown, MA 01267. After advising interrogators to set aside a bare, soundproof room absent of social support and distraction, Inbau et al, (1986) describe in detail a nine-step procedure consisting of various specific ploys. In general, two types of approaches can be distinguished. One is minimization, a technique in which the detective lulls Che suspect into a false sense of security by providing face-saving excuses, citing mitigating circumstances, blaming the victim, and underplaying the charges. 
The second approach is one of maximization, in which the interrogator uses scare tactics by exaggerating or falsifying the characterization of evidence, the seriousness of the offense, and the magnitude of the charges. In a recent study (Kassin & McNall, 1991), subjects read interrogation transcripts in which these ploys were used and estimated the severity of the sentence likely to be received. The results indicated that minimization communicated an implicit offer of leniency, comparable to that estimated in an explicit-promise condition, whereas maximization implied a threat of harsh punishment, comparable to that found in an explicit-threat condition. Yet although American courts routinely exclude confessions elicited by explicit threats and promises, they admit those produced by contingencies that are pragmatically implied. Although police often use coercive methods of interrogation, research suggests that juries are prone to convict defendants who confess in these situations. In the case of Arizona v. Fulminante (1991), the U.S. Supreme Court ruled that under certain conditions, an improperly admitted coerced confession may be considered upon appeal to have been nonprejudicial, or \"harmless error.\" Yet mock-jury research shows that people find it hard to believe that anyone would confess to a crime that he or she did not commit (Kassin & Wrightsman, 1980, 1981; Sukel & Kassin, 1994). Still, it happens. One cannot estimate the prevalence of the problem, which has never been systematically examined, but there are numerous documented instances on record (Bedau & Radelet, 1987; Borchard, 1932; Rattner, 1988). Indeed, one can distinguish three types of false confession (Kassin & Wrightsman, 1985): voluntary (in which a subject confesses in the absence of external pressure), coerced-compliant (in which a suspect confesses only to escape an aversive interrogation, secure a promised benefit, or avoid a threatened harm), and coerced-internalized (in which a suspect actually comes to believe that he or she is guilty of the crime). This last type of false confession seems most unlikely, but a number of recent cases have come to light in which the police had seized a suspect who was vulnerable (by virtue of his or her youth, intelligence, personality, stress, or mental state) and used false evidence to convince the beleaguered suspect that he or she was guilty. In one case that received a great deal of attention, for example, Paul Ingram was charged with rape and a host of Satanic cult crimes that included the slaughter of newborn babies. During 6 months of interrogation, he was hypnotized", "title": "" }, { "docid": "67b2b896af777731615ac010f688bb9c", "text": "Referring expressions are natural language constructions used to identify particular objects within a scene. In this paper, we propose a unified framework for the tasks of referring expression comprehension and generation. Our model is composed of three modules: speaker, listener, and reinforcer. The speaker generates referring expressions, the listener comprehends referring expressions, and the reinforcer introduces a reward function to guide sampling of more discriminative expressions. The listener-speaker modules are trained jointly in an end-to-end learning framework, allowing the modules to be aware of one another during learning while also benefiting from the discriminative reinforcer's feedback. 
We demonstrate that this unified framework and training achieves state-of-the-art results for both comprehension and generation on three referring expression datasets.", "title": "" }, { "docid": "238adc0417c167aeb64c23b576f434d0", "text": "This paper studies the problem of matching images captured from an unmanned ground vehicle (UGV) to those from a satellite or high-flying vehicle. We focus on situations where the UGV navigates in remote areas with few man-made structures. This is a difficult problem due to the drastic change in perspective between the ground and aerial imagery and the lack of environmental features for image comparison. We do not rely on GPS, which may be jammed or uncertain. We propose a two-step approach: (1) the UGV images are warped to obtain a bird's eye view of the ground, and (2) this view is compared to a grid of satellite locations using whole-image descriptors. We analyze the performance of a variety of descriptors for different satellite map sizes and various terrain and environment types. We incorporate the air-ground matching into a particle-filter framework for localization using the best-performing descriptor. The results show that vision-based UGV localization from satellite maps is not only possible, but often provides better position estimates than GPS estimates, enabling us to improve the location estimates of Google Street View.", "title": "" }, { "docid": "71467b5ba3ef8706dc8eea80ca7d0d4e", "text": "The DiscovEHR collaboration between the Regeneron Genetics Center and Geisinger Health System couples high-throughput sequencing to an integrated health care system using longitudinal electronic health records (EHRs). We sequenced the exomes of 50,726 adult participants in the DiscovEHR study to identify ~4.2 million rare single-nucleotide variants and insertion/deletion events, of which ~176,000 are predicted to result in a loss of gene function. Linking these data to EHR-derived clinical phenotypes, we find clinical associations supporting therapeutic targets, including genes encoding drug targets for lipid lowering, and identify previously unidentified rare alleles associated with lipid levels and other blood level traits. About 3.5% of individuals harbor deleterious variants in 76 clinically actionable genes. The DiscovEHR data set provides a blueprint for large-scale precision medicine initiatives and genomics-guided therapeutic discovery.", "title": "" }, { "docid": "0f6183057c6b61cefe90e4fa048ab47f", "text": "This paper investigates the use of Deep Bidirectional Long Short-Term Memory based Recurrent Neural Networks (DBLSTM-RNNs) for voice conversion. Temporal correlations across speech frames are not directly modeled in frame-based methods using conventional Deep Neural Networks (DNNs), which results in a limited quality of the converted speech. To improve the naturalness and continuity of the speech output in voice conversion, we propose a sequence-based conversion method using DBLSTM-RNNs to model not only the frame-wised relationship between the source and the target voice, but also the long-range context-dependencies in the acoustic trajectory. Experiments show that DBLSTM-RNNs outperform DNNs where Mean Opinion Scores are 3.2 and 2.3 respectively. 
Also, DBLSTM-RNNs without dynamic features have better performance than DNNs with dynamic features.", "title": "" }, { "docid": "2a225a33dc4d8cd08d0ae4a18d8b267c", "text": "Support Vector Machines is a powerful methodology for solving problems in nonlinear classification, function estimation and density estimation which has also led recently to many new developments in kernel based learning in general. In these methods one solves convex optimization problems, typically quadratic programs. We focus on Least Squares Support Vector Machines which are reformulations to standard SVMs that lead to solving linear KKT systems. Least squares support vector machines are closely related to regularization networks and Gaussian processes but additionally emphasize and exploit primaldual interpretations from optimization theory. In view of interior point algorithms such LS-SVM KKT systems can be considered as a core problem. Where needed the obtained solutions can be robustified and/or sparsified. As an alternative to a top-down choice of the cost function, methods from robust statistics are employed in a bottom-up fashion for further improving the estimates. We explain the natural links between LS-SVM classifiers and kernel Fisher discriminant analysis. The framework is further extended towards unsupervised learning by considering PCA analysis and its kernel version as a one-class modelling problem. This leads to new primal-dual support vector machine formulations for kernel PCA and kernel canonical correlation analysis. Furthermore, LS-SVM formulations are mentioned towards recurrent networks and control, thereby extending the methods from static to dynamic problems. In general, support vector machines may pose heavy computational challenges for large data sets. For this purpose, we propose a method of Fixed Size LS-SVM where the estimation is done in the primal space in relation to a Nyström sampling with active selection of support vectors and we discuss extensions to committee networks. The methods will be illustrated by several benchmark and real-life applications.", "title": "" }, { "docid": "7a3b5ab64e9ef5cd0f0b89391bb8bee2", "text": "Quality enhancement of humanitarian assistance is far from a technical task. It is interwoven with debates on politics of principles and people are intensely committed to the various outcomes these debates might have. It is a field of strongly competing truths, each with their own rationale and appeal. The last few years have seen a rapid increase in discussions, policy paper and organisational initiatives regarding the quality of humanitarian assistance. This paper takes stock of the present initiatives and of the questions raised with regard to the quality of humanitarian assistance.", "title": "" }, { "docid": "c0f958c7bb692f8a405901796445605a", "text": "Thickening is the first step in the design of sustainable (cost effective, environmentally friendly, and socially viable) tailings management solutions for surface deposition, mine backfilling, and sub-aqueous discharge. The high water content slurries are converted to materials with superior dewatering properties by adding long-chain synthetic polymers. Given the solid and liquid composition of a slurry, a high settling rate alongside a high solids content can be achieved by optimizing the various polymers parameters: ionic type (T), charge density (C), molecular weight (M), and dosage (D). 
This paper developed a statistical model to predict field performance of a selected metal mine slurry using laboratory test data. Results of sedimentationconsolidation tests were fitted using the method of least squares. A newly devised polymer characteristic coefficient (Cp) that combined the various polymer parameters correlated well with the observed dewatering behavior as the R equalled 0.95 for void ratio and 0.84 for hydraulic conductivity. The various combinations of polymer parameters resulted in variable slurry performance during sedimentation and were found to converge during consolidation. Further, the void ratio-effective stress and the hydraulic conductivity-void ratio relationships were found to be e = a σ′ b and k = 10 (c + e , respectively.", "title": "" }, { "docid": "bebd0ea7946bbe44335b951c9c917d0b", "text": "Increasing hospital re-admission rates due to Hospital Acquired Infections (HAIs) are a concern at many healthcare facilities. To prevent the spread of HAIs, caregivers should comply with hand hygiene guidelines, which require reliable and timely hand hygiene compliance monitoring systems. The current standard practice of monitoring compliance involves the direct observation of caregivers' hand cleaning as they enter or exit a patient room by a trained observer, which can be time-consuming, resource-intensive, and subject to bias. To alleviate tedious manual effort and reduce errors, this paper describes how we applied machine learning to study the characteristics of compliance that can later be used to (1) assist direct observation by deciding when and where to station manual auditors and (2) improve compliance by providing just-in-time alerts or recommending training materials to non-compliant staff. The paper analyzes location and handwashing station activation data from a 30-bed intensive care unit study and uses machine learning to assess if location, time-based factors, or other behavior data can determine what characteristics are predictive of handwashing non-compliance events. The results of this study show that a care provider's entry compliance is highly indicative of the same provider's exit compliance. Moreover, compliance of the most recent patient room visit can also predict entry compliance of a provider's current patient room visit.", "title": "" }, { "docid": "932dc0c02047cd701e41530c42d830bc", "text": "The concept of \"extra-cortical organization of higher mental functions\" proposed by Lev Vygotsky and expanded by Alexander Luria extends cultural-historical psychology regarding the interplay of natural and cultural factors in the development of the human mind. Using the example of self-regulation, the authors explore the evolution of this idea from its origins to recent findings on the neuropsychological trajectories of the development of executive functions. Empirical data derived from the Tools of the Mind project are used to discuss the idea of using classroom intervention to study the development of self-regulation in early childhood.", "title": "" }, { "docid": "de298bb631dd0ca515c161b6e6426a85", "text": "We address the problem of sharpness enhancement of images. Existing hierarchical techniques that decompose an image into a smooth image and high frequency components based on Gaussian filter and bilateral filter suffer from halo effects, whereas techniques based on weighted least squares extract low contrast features as detail. 
Other techniques require multiple images and are not tolerant to noise.", "title": "" }, { "docid": "d63946a096b9e8a99be6d5ddfe4097da", "text": "While the first open comparative challenges in the field of paralinguistics targeted more ‘conventional’ phenomena such as emotion, age, and gender, there still exists a multiplicity of not yet covered, but highly relevant speaker states and traits. The INTERSPEECH 2011 Speaker State Challenge thus addresses two new sub-challenges to overcome the usually low compatibility of results: In the Intoxication Sub-Challenge, alcoholisation of speakers has to be determined in two classes; in the Sleepiness Sub-Challenge, another two-class classification task has to be solved. This paper introduces the conditions, the Challenge corpora “Alcohol Language Corpus” and “Sleepy Language Corpus”, and a standard feature set that may be used. Further, baseline results are given.", "title": "" }, { "docid": "244116ffa1ed424fc8519eedc7062277", "text": "This paper describes a method of automatic placement for standard cells (polycells) that yields areas within 10-20 percent of careful hand placements. The method is based on graph partitioning to identify groups of modules that ought to be close to each other, and a technique for properly accounting for external connections at each level of partitioning. The placement procedure is in production use as part of an automated design system; it has been used in the design of more than 40 chips, in CMOS, NMOS, and bipolar technologies.", "title": "" }, { "docid": "314e10ba42a13a84b40a1b0367bd556e", "text": "How do users behave in online chatrooms, where they instantaneously read and write posts? We analyzed about 2.5 million posts covering various topics in Internet relay channels, and found that user activity patterns follow known power-law and stretched exponential distributions, indicating that online chat activity is not different from other forms of communication. Analysing the emotional expressions (positive, negative, neutral) of users, we revealed a remarkable persistence both for individual users and channels. I.e. despite their anonymity, users tend to follow social norms in repeated interactions in online chats, which results in a specific emotional \"tone\" of the channels. We provide an agent-based model of emotional interaction, which recovers qualitatively both the activity patterns in chatrooms and the emotional persistence of users and channels. While our assumptions about agent's emotional expressions are rooted in psychology, the model allows to test different hypothesis regarding their emotional impact in online communication.", "title": "" }, { "docid": "aa1ce09a8ad407ce413d9e56e13e79d4", "text": "A boost-flyback converter was investigated for its ability to charge separate battery stacks from a low-voltage high-current renewable energy source. A low voltage (12V) battery was connected in the boost configuration, and a high voltage (330V) battery stack was connected in the flyback configuration. This converter works extremely well for this application because it gives charging priority to the low voltage battery and dumps the reserve energy to the high voltage stack. As the low-voltage battery approaches full charge, more power is adaptively directed to the high-voltage stack, until finally the charging of the low voltage battery stops. 
A two-secondary flyback is also capable of this adaptive charging, but the boost-flyback does it with much higher conversion efficiency, and with a simpler (less expensive) transformer design.", "title": "" }, { "docid": "a7cc7076d324f33d5e9b40756c5e1631", "text": "Social learning analytics introduces tools and methods that help improving the learning process by providing useful information about the actors and their activity in the learning system. This study examines the relation between SNA parameters and student outcomes, between network parameters and global course performance, and it shows how visualizations of social learning analytics can help observing the visible and invisible interactions occurring in online distance education. The findings from our empirical study show that future research should further investigate whether there are conditions under which social network parameters are reliable predictors of academic performance, but also advises against relying exclusively in social network parameters for predictive purposes. The findings also show that data visualization is a useful tool for social learning analytics, and how it may provide additional information about actors and their behaviors for decision making in online distance", "title": "" }, { "docid": "23d9479a38afa6e8061fe431047bed4e", "text": "We introduce cMix, a new approach to anonymous communications. Through a precomputation, the core cMix protocol eliminates all expensive realtime public-key operations—at the senders, recipients and mixnodes—thereby decreasing real-time cryptographic latency and lowering computational costs for clients. The core real-time phase performs only a few fast modular multiplications. In these times of surveillance and extensive profiling there is a great need for an anonymous communication system that resists global attackers. One widely recognized solution to the challenge of traffic analysis is a mixnet, which anonymizes a batch of messages by sending the batch through a fixed cascade of mixnodes. Mixnets can offer excellent privacy guarantees, including unlinkability of sender and receiver, and resistance to many traffic-analysis attacks that undermine many other approaches including onion routing. Existing mixnet designs, however, suffer from high latency in part because of the need for real-time public-key operations. Precomputation greatly improves the real-time performance of cMix, while its fixed cascade of mixnodes yields the strong anonymity guarantees of mixnets. cMix is unique in not requiring any real-time public-key operations by users. Consequently, cMix is the first mixing suitable for low latency chat for lightweight devices. Our presentation includes a specification of cMix, security arguments, anonymity analysis, and a performance comparison with selected other approaches. We also give benchmarks from our prototype.", "title": "" }, { "docid": "d1aa525575e33c587d86e89566c21a49", "text": "This paper investigates the problem of fault detection for nonlinear discrete-time networked systems under an event-triggered scheme. A polynomial fuzzy fault detection filter is designed to generate a residual signal and detect faults in the system. A novel polynomial event-triggered scheme is proposed to determine the transmission of the signal. A fault detection filter is designed to guarantee that the residual system is asymptotically stable and satisfies the desired performance. 
Polynomial approximated membership functions obtained by Taylor series are employed for filtering analysis. Furthermore, sufficient conditions are represented in terms of sum of squares (SOSs) and can be solved by SOS tools in MATLAB environment. A numerical example is provided to demonstrate the effectiveness of the proposed results.", "title": "" }, { "docid": "73a2b8479bb57d4e94a7fc629ee4528a", "text": "OBJECTIVES\nQuantitative olfactory assessment is often neglected in clinical practice, although olfactory loss can assist to diagnosis and may lead to significant morbidity. \"Sniffin' Sticks\" is a modern test of nasal chemosensory performance that is based on penlike odor-dispensing devices. It consists of three tests of olfactory function: odor threshold, odor discrimination, and odor identification. The results of this test may be presented as a composite threshold-discrimination-identification (TDI) score. The aim of this study was first to develop normative data of olfactory function for the Greek population using this test and second to relate olfactory performance to age, sex, and side examined.\n\n\nSTUDY DESIGN\nThe authors conducted a prospective clinical trial.\n\n\nMETHODS\nA total of 93 healthy subjects were included in the study, 48 males and 45 females, mean age of 44.5 years (range, 6-84 years).\n\n\nRESULTS\nA database of normal values for olfactory testing was established for the Greek population. Females performed better than males and older subjects performed less efficiently in all tests. We also found a right nostril advantage compared with the left. Additionally, scores obtained from bilateral presentation were similar with scores obtained from the nostril with the better performance.\n\n\nCONCLUSIONS\nThe \"Sniffin' Sticks\" can be used effectively in the Greek population to evaluate olfactory performance. Mean values of olfactory tests obtained were better in comparison with data from settings located in central and northern Europe.", "title": "" }, { "docid": "14baf30e1bdf7e31082fc2f1be8ea01c", "text": "Different concentrations (3, 30, 300, and 3000 mg/L of culture fluid) of garlic oil (GAR), diallyl sulfide (DAS), diallyl disulfide (DAD), allicin (ALL), and allyl mercaptan (ALM) were incubated for 24 h in diluted ruminal fluid with a 50:50 forage:concentrate diet (17.7% crude protein; 30.7% neutral detergent fiber) to evaluate their effects on rumen microbial fermentation. Garlic oil (30 and 300 mg/L), DAD (30 and 300 mg/L), and ALM (300 mg/L) resulted in lower molar proportion of acetate and higher proportions of propionate and butyrate. In contrast, at 300 mg/L, DAS only increased the proportion of butyrate, and ALL had no effects on volatile fatty acid proportions. In a dual-flow continuous culture of rumen fluid fed the same 50:50 forage:concentrate diet, addition of GAR (312 mg/L), DAD (31.2 and 312 mg/L), and ALM (31.2 and 312 mg/L) resulted in similar changes to those observed in batch culture, with the exception of the lack of effect of DAD on the proportion of propionate. In a third in vitro study, the potential of GAR (300 mg/L), DAD (300 mg/L), and ALM (300 mg/L) to decrease methane production was evaluated. Treatments GAR, DAD, and ALM resulted in a decrease in methane production of 73.6, 68.5, and 19.5%, respectively, compared with the control. These results confirm the ability of GAR, DAD, and ALM to decrease methane production, which may help to improve the efficiency of energy use in the rumen.", "title": "" } ]
scidocsrr
81057324736ea87689acdea7bd8296cf
Crowd-sourcing NLG Data: Pictures Elicit Better Data
[ { "docid": "675a865e7335b2c9bd0cccf1317a5d27", "text": "The relationship between financial incentives and performance, long of interest to social scientists, has gained new relevance with the advent of web-based \"crowd-sourcing\" models of production. Here we investigate the effect of compensation on performance in the context of two experiments, conducted on Amazon's Mechanical Turk (AMT). We find that increased financial incentives increase the quantity, but not the quality, of work performed by participants, where the difference appears to be due to an \"anchoring\" effect: workers who were paid more also perceived the value of their work to be greater, and thus were no more motivated than workers paid less. In contrast with compensation levels, we find the details of the compensation scheme do matter--specifically, a \"quota\" system results in better work for less pay than an equivalent \"piece rate\" system. Although counterintuitive, these findings are consistent with previous laboratory studies, and may have real-world analogs as well.", "title": "" }, { "docid": "ca0f2b3565b6479c5c3b883325bf3296", "text": "We present a simple, robust generation system which performs content selection and surface realization in a unified, domain-independent framework. In our approach, we break up the end-to-end generation process into a sequence of local decisions, arranged hierarchically and each trained discriminatively. We deployed our system in three different domains—Robocup sportscasting, technical weather forecasts, and common weather forecasts, obtaining results comparable to state-ofthe-art domain-specific systems both in terms of BLEU scores and human evaluation.", "title": "" } ]
[ { "docid": "c47525f2456de0b9b87a5ebbb5a972fb", "text": "This article reviews the potential use of visual feedback, focusing on mirror visual feedback, introduced over 15 years ago, for the treatment of many chronic neurological disorders that have long been regarded as intractable such as phantom pain, hemiparesis from stroke and complex regional pain syndrome. Apart from its clinical importance, mirror visual feedback paves the way for a paradigm shift in the way we approach neurological disorders. Instead of resulting entirely from irreversible damage to specialized brain modules, some of them may arise from short-term functional shifts that are potentially reversible. If so, relatively simple therapies can be devised--of which mirror visual feedback is an example--to restore function.", "title": "" }, { "docid": "ced8cc9329777cc01cdb3e91772a29c2", "text": "Manually annotating clinical document corpora to generate reference standards for Natural Language Processing (NLP) systems or Machine Learning (ML) is a timeconsuming and labor-intensive endeavor. Although a variety of open source annotation tools currently exist, there is a clear opportunity to develop new tools and assess functionalities that introduce efficiencies into the process of generating reference standards. These features include: management of document corpora and batch assignment, integration of machine-assisted verification functions, semi-automated curation of annotated information, and support of machine-assisted pre-annotation. The goals of reducing annotator workload and improving the quality of reference standards are important considerations for development of new tools. An infrastructure is also needed that will support largescale but secure annotation of sensitive clinical data as well as crowdsourcing which has proven successful for a variety of annotation tasks. We introduce the Extensible Human Oracle Suite of Tools (eHOST) http://code.google.com/p/ehost that provides such functionalities that when coupled with server integration offer an end-to-end solution to carry out small or large scale as well as crowd sourced annotation projects.", "title": "" }, { "docid": "5b84008df77e2ff8929cd759ae92de7d", "text": "Purpose – Organizations invest in enterprise systems (ESs) with an expectation to share digital information from disparate sources to improve organizational effectiveness. This study aims to examine how organizations realize digital business strategies using an ES. It does so by evaluating the ES data support activities for knowledge creation, particularly how ES data are transformed into corporate knowledge in relevance to business strategies sought. Further, how this knowledge leads to realization of the business benefits. The linkage between establishing digital business strategy, utilization of ES data in decision-making processes, and realized or unrealized benefits provides the reason for this study. Design/methodology/approach – This study develops and utilizes a transformational model of how ES data are transformed into knowledge and results to evaluate the role of digital business strategies in achieving benefits using an ES. Semi-structured interviews are first conducted with ES vendors, consultants and IT research firms to understand the process of ES data transformation for realizing business strategies from their perspective. This is followed by three in-depth cases (two large and one medium-sized organization) who have implemented ESs. 
The empirical data are analyzed using the condensation approach. This method condenses the data into multiple groups according to pre-defined categories, which follow the scope of the research questions. Findings – The key findings emphasize that strategic benefit realization from an ES implementation is a holistic process that not only includes the essential data and technology factors, but also includes factors such as digital business strategy deployment, people and process management, and skills and competency development. Although many companies are mature with their ES implementation, these firms have only recently started aligning their ES capabilities with digital business strategies correlating data, decisions, and actions to maximize business value from their ES investment. Research limitations/implications – The findings reflect the views of two large and one mediumsized organization in the manufacturing sector. Although the evidence of the benefit realization process success and its results is more prominent in larger organizations than medium-sized, it may not be generalized that smaller firms cannot achieve these results. Exploration of these aspects in smaller firms or a different industry sector such as retail/service would be of value. Practical implications – The paper highlights the importance of tools and practices for accessing relevant information through an integrated ES so that competent decisions can be established towards achieving digital business strategies, and optimizing organizational performance. Knowledge is a key factor in this process. Originality/value – The paper evaluates a holistic framework for utilization of ES data in realizing digital business strategies. Thus, it develops an enhanced transformational cycle model for ES data transformation into knowledge and results, which maintains to build up the transformational process success in the long term.", "title": "" }, { "docid": "a479d5f8313bc7aabe8154071706fb40", "text": "Test-Driven Development (TDD) [Beck 2002] is one of the most referenced, yet least used agile practices in industry. Its neglect is due mostly to our lack of understanding of its effects on people, processes, and products. Although most people agree that writing a test case before code promotes more robust implementation and a better design, the unknown costs associated with TDD’s effects and the inversion of the ubiquitous programmer “code-then-test” paradigm has impeded TDD’s adoption.", "title": "" }, { "docid": "2b3929da96949056bc473e8da947cebe", "text": "This paper presents “Value-Difference Based Exploration” (VDBE), a method for balancing the exploration/exploitation dilemma inherent to reinforcement learning. The proposed method adapts the exploration parameter of ε-greedy in dependence of the temporal-difference error observed from value-function backups, which is considered as a measure of the agent’s uncertainty about the environment. VDBE is evaluated on a multi-armed bandit task, which allows for insight into the behavior of the method. Preliminary results indicate that VDBE seems to be more parameter robust than commonly used ad hoc approaches such as ε-greedy or softmax.", "title": "" }, { "docid": "d40e565a2ed22af998ae60f670210f57", "text": "Research on human infants has begun to shed light on early-develpping processes for segmenting perceptual arrays into objects. Infants appear to perceive objects by analyzing three-dimensional surface arrangements and motions. 
Their perception does not accord with a general tendency to maximize figural goodness or to attend to nonaccidental geometric relations in visual arrays. Object perception does accord with principles governing the motions of material bodies: Infants divide perceptual arrays into units that move as connected wholes, that move separately from one another, that tend to maintain their size and shape over motion, and that tend to act upon each other only on contact. These findings suggest that a general representation of object unity and boundaries is interposed between representations of surfaces and representations of objects of familiar kinds. The processes that construct this representation may be related to processes of physical reasoning. This article is animated by two proposals about perception and perceptual development. One proposal is substantive: In situations where perception develops through experience, but without instruction or deliberate reflection, development tends to enrich perceptual abilities but not to change them fundamentally. The second proposal is methodological: In the above situations, studies of the origins and early development of perception can shed light on perception in its mature state. These proposals will arise from a discussion of the early development of one perceptual ability: the ability to organize arrays of surfaces into unitary, bounded, and persisting objects. PERCEIVING OBJECTS In recent years, my colleagues and I have been studying young infants' perception of objects in complex displays in which objects are adjacent to other objects, objects are partly hidden behind other objects, or objects move fully", "title": "" }, { "docid": "b96a3320940344dea37f5deccf0e16b2", "text": "This paper proposes a modulated hysteretic current control (MHCC) technique to improve the transient response of a DC-DC boost converter, which suffers from low bandwidth due to the existence of the right-half-plane (RHP) zero. The MHCC technique can automatically adjust the on-time value to rapidly increase the inductor current, as well as to shorten the transient response time. In addition, based on the characteristic of the RHP zero, the compensation poles and zero are deliberately adjusted to achieve fast transient response in case of load transient condition and adequate phase margin in steady state. Experimental results show the improvement of transient recovery time over 7.2 times in the load transient response compared with the conventional boost converter design when the load current changes from light to heavy or vice versa. The power consumption overhead is merely 1%.", "title": "" }, { "docid": "2cbae69bfb5d1379383cd1cf3e1237ef", "text": "TerraSAR-X, the first civil German synthetic aperture radar (SAR) satellite has been successfully launched in 2007, June 15th. After 4.5 days the first processed image has been obtained. The overall quality of the image was outstanding, however, suspicious features could be identified which showed precipitation related signatures. These rain-cell signatures motivated a further in-depth study of the physical background of the related propagation effects. During the commissioning phase, a total of 12000 scenes have been investigated for potential propagation effects and about 100 scenes have revealed atmospheric effects to a visible extent. 
An interesting case of a data acquisition over New York will be presented which shows typical rain-cell signatures and the SAR image will be compared with weather-radar data acquired nearly simultaneously (within the same minute). Furthermore, in this contribution we discuss the influence of the atmosphere (troposphere) on the external calibration (XCAL) of TerraSAR-X. By acquiring simultaneous weather-radar data over the test-site and the SAR-acquisition it was possible to improve the absolute calibration constant by 0.15 dB.", "title": "" }, { "docid": "2d0c28d1c23ecee1f1a08be11a49aaa2", "text": "Dictionary learning has become an increasingly important task in machine learning, as it is fundamental to the representation problem. A number of emerging techniques specifically include a codebook learning step, in which a critical knowledge abstraction process is carried out. Existing approaches in dictionary (codebook) learning are either generative (unsupervised e.g. k-means) or discriminative (supervised e.g. extremely randomized forests). In this paper, we propose a multiple instance learning (MIL) strategy (along the line of weakly supervised learning) for dictionary learning. Each code is represented by a classifier, such as a linear SVM, which naturally performs metric fusion for multi-channel features. We design a formulation to simultaneously learn mixtures of codes by maximizing classification margins in MIL. State-of-the-art results are observed in image classification benchmarks based on the learned codebooks, which observe both compactness and effectiveness.", "title": "" }, { "docid": "2d6225b20cf13d2974ce78877642a2f7", "text": "Low rank and sparse representation based methods, which make few specific assumptions about the background, have recently attracted wide attention in background modeling. With these methods, moving objects in the scene are modeled as pixel-wised sparse outliers. However, in many practical scenarios, the distributions of these moving parts are not truly pixel-wised sparse but structurally sparse. Meanwhile a robust analysis mechanism is required to handle background regions or foreground movements with varying scales. Based on these two observations, we first introduce a class of structured sparsity-inducing norms to model moving objects in videos. In our approach, we regard the observed sequence as being constituted of two terms, a low-rank matrix (background) and a structured sparse outlier matrix (foreground). Next, in virtue of adaptive parameters for dynamic videos, we propose a saliency measurement to dynamically estimate the support of the foreground. Experiments on challenging well known data sets demonstrate that the proposed approach outperforms the state-of-the-art methods and works effectively on a wide range of complex videos.", "title": "" }, { "docid": "fac5744d86f96344fe1ad9c06e354a81", "text": "Biocontrol fungi (BCF) are agents that control plant diseases. These include the well-known Trichoderma spp. and the recently described Sebacinales spp. They have the ability to control numerous foliar, root, and fruit pathogens and even invertebrates such as nematodes. However, this is only a subset of their abilities. We now know that they also have the ability to ameliorate a wide range of abiotic stresses, and some of them can also alleviate physiological stresses such as seed aging. They can also enhance nutrient uptake in plants and can substantially increase nitrogen use efficiency in crops. 
These abilities may be more important to agriculture than disease control. Some strains also have abilities to improve photosynthetic efficiency and probably respiratory activities of plants. All of these capabilities are a consequence of their abilities to reprogram plant gene expression, probably through activation of a limited number of general plant pathways.", "title": "" }, { "docid": "8e794530be184686a49e5ced6ac6521d", "text": "A key feature of the immune system is its ability to induce protective immunity against pathogens while maintaining tolerance towards self and innocuous environmental antigens. Recent evidence suggests that by guiding cells to and within lymphoid organs, CC-chemokine receptor 7 (CCR7) essentially contributes to both immunity and tolerance. This receptor is involved in organizing thymic architecture and function, lymph-node homing of naive and regulatory T cells via high endothelial venules, as well as steady state and inflammation-induced lymph-node-bound migration of dendritic cells via afferent lymphatics. Here, we focus on the cellular and molecular mechanisms that enable CCR7 and its two ligands, CCL19 and CCL21, to balance immunity and tolerance.", "title": "" }, { "docid": "58331d0d42452d615b5a20da473ef5e2", "text": "This paper introduces a new neural structure called FusionNet, which extends existing attention approaches from three perspectives. First, it puts forward a novel concept of “history of word” to characterize attention information from the lowest word-level embedding up to the highest semantic-level representation. Second, it introduces an improved attention scoring function that better utilizes the “history of word” concept. Third, it proposes a fully-aware multi-level attention mechanism to capture the complete information in one text (such as a question) and exploit it in its counterpart (such as context or passage) layer by layer. We apply FusionNet to the Stanford Question Answering Dataset (SQuAD) and it achieves the first position for both single and ensemble model on the official SQuAD leaderboard at the time of writing (Oct. 4th, 2017). Meanwhile, we verify the generalization of FusionNet with two adversarial SQuAD datasets and it sets up the new state-of-the-art on both datasets: on AddSent, FusionNet increases the best F1 metric from 46.6% to 51.4%; on AddOneSent, FusionNet boosts the best F1 metric from 56.0% to 60.7%.", "title": "" }, { "docid": "c736258623c7f977ebc00f5555d13e02", "text": "We present an important step towards the solution of the problem of inverse procedural modeling by generating parametric context-free L-systems that represent an input 2D model. The L-system rules efficiently code the regular structures and the parameters represent the properties of the structure transformations. The algorithm takes as input a 2D vector image that is composed of atomic elements, such as curves and poly-lines. Similar elements are recognized and assigned terminal symbols of an L-system alphabet. The terminal symbols’ position and orientation are pair-wise compared and the transformations are stored as points in multiple 4D transformation spaces. By careful analysis of the clusters in the transformation spaces, we detect sequences of elements and code them as L-system rules. The coded elements are then removed from the clusters, the clusters are updated, and then the analysis attempts to code groups of elements in (hierarchies) the same way. The analysis ends with a single group of elements that is coded as an L-system axiom. 
We recognize and code branching sequences of linearly translated, scaled, and rotated elements and their hierarchies. The L-system not only represents the input image, but it can also be used for various editing operations. By changing the L-system parameters, the image can be randomized, symmetrized, and groups of elements and regular structures can be edited. By changing the terminal and non-terminal symbols, elements or groups of elements can be replaced.", "title": "" }, { "docid": "56ced0e34c82f085eeba595753d423d1", "text": "The correctness of software is affected by its constant changes. For that reason, developers use change-impact analysis to identify early the potential consequences of changing their software. Dynamic impact analysis is a practical technique that identifies potential impacts of changes for representative executions. However, it is unknown how reliable its results are because their accuracy has not been studied. This paper presents the first comprehensive study of the predictive accuracy of dynamic impact analysis in two complementary ways. First, we use massive numbers of random changes across numerous Java applications to cover all possible change locations. Then, we study more than 100 changes from software repositories, which are representative of developer practices. Our experimental approach uses sensitivity analysis and execution differencing to systematically measure the precision and recall of dynamic impact analysis with respect to the actual impacts observed for these changes. Our results for both types of changes show that the most cost-effective dynamic impact analysis known is surprisingly inaccurate with an average precision of 38-50% and average recall of 50-56% in most cases. This comprehensive study offers insights on the effectiveness of existing dynamic impact analyses and motivates the future development of more accurate impact analyses.", "title": "" }, { "docid": "ea3dfb0ea22c01b670a7b11f21aa06f2", "text": "One of the classical goals of research in artificial intelligence is to construct systems that automatically recover the meaning of natural language text. Machine learning methods hold significant potential for addressing many of the challenges involved with these systems. This thesis presents new techniques for learning to map sentences to logical form — lambda-calculus representations of their meanings. We first describe an approach to the context-independent learning problem, where sentences are analyzed in isolation. We describe a learning algorithm that takes as input a training set of sentences labeled with expressions in the lambda calculus. The algorithm induces a Combinatory Categorial Grammar (CCG) for the problem, along with a log-linear model that represents a distribution over syntactic and semantic analyses conditioned on the input sentence. Next, we present an extension that addresses challenges that arise when learning to analyze spontaneous, unedited natural language input, as is commonly seen in natural language interface applications. A key idea is to introduce non-standard CCG combinators that relax certain parts of the grammar — for example allowing flexible word order, or insertion of lexical items — with learned costs. We also present a new, online algorithm for inducing a weighted CCG. Finally, we describe how to extend this learning approach to the context-dependent analysis setting, where the meaning of a sentence can depend on the context in which it appears. 
The training examples are sequences of sentences annotated with lambda-calculus meaning representations. We develop an algorithm that maintains explicit, lambda-calculus representations of discourse entities and uses a context-dependent analysis pipeline to recover logical forms. The method uses a hidden-variable variant of the perceptron algorithm to learn a linear model used to select the best analysis. Experiments demonstrate that the learning techniques we develop induce accurate models for semantic analysis while requiring less data annotation effort than previous approaches. Thesis Supervisor: Michael Collins Title: Associate Professor", "title": "" }, { "docid": "3fa16d5e442bc4a2398ba746d6aaddfe", "text": "Although many users create predictable passwords, the extent to which users realize these passwords are predictable is not well understood. We investigate the relationship between users' perceptions of the strength of specific passwords and their actual strength. In this 165-participant online study, we ask participants to rate the comparative security of carefully juxtaposed pairs of passwords, as well as the security and memorability of both existing passwords and common password-creation strategies. Participants had serious misconceptions about the impact of basing passwords on common phrases and including digits and keyboard patterns in passwords. However, in most other cases, participants' perceptions of what characteristics make a password secure were consistent with the performance of current password-cracking tools. We find large variance in participants' understanding of how passwords may be attacked, potentially explaining why users nonetheless make predictable passwords. We conclude with design directions for helping users make better passwords.", "title": "" }, { "docid": "0e94af8b40bfac3d2ebb1dfced65eadc", "text": "SimPy is a Python-based, interpreted simulation tool that offers the power and convenience of Python. It is able to launch processes and sub-processes using generators, which act autonomously and may interact using interrupts. SimPy offers other advantages over competing commercial codes in that it allows for modular development, use of a version control system such as CVS, can be made self-documenting with PyDoc, and is completely extensible. The convenience of an interpreted language, however, is offset for large models by slower than desired run times. This disadvantage can be compensated for by parallelizing the system using PyMPI, from the Lawrence Livermore National Laboratory. This work was performed under the auspices of the U.S. Department of Energy by University of California, Lawrence Livermore National Laboratory under Contract W-7405-Eng-48.", "title": "" }, { "docid": "65af21566422d9f0a11f07d43d7ead13", "text": "Scene labeling is a challenging computer vision task. It requires the use of both local discriminative features and global context information. We adopt a deep recurrent convolutional neural network (RCNN) for this task, which is originally proposed for object recognition. Different from traditional convolutional neural networks (CNN), this model has intra-layer recurrent connections in the convolutional layers. Therefore each convolutional layer becomes a two-dimensional recurrent neural network. The units receive constant feed-forward inputs from the previous layer and recurrent inputs from their neighborhoods. While recurrent iterations proceed, the region of context captured by each unit expands. 
In this way, feature extraction and context modulation are seamlessly integrated, which is different from typical methods that entail separate modules for the two steps. To further utilize the context, a multi-scale RCNN is proposed. Over two benchmark datasets, Stanford Background and SIFT Flow, the model outperforms many state-of-the-art models in accuracy and efficiency.", "title": "" } ]
scidocsrr
9ed7b32594457fb2694f1f96731a15bd
Switched flux permanent magnet machines — Innovation continues
[ { "docid": "a2b60ffe1ed8f8bd79363f4c5cff364b", "text": "The flux-switching permanent-magnet (FSPM) machine is a relatively novel brushless machine having magnets and concentrated windings in the stator instead of rotor, which exhibits inherently sinusoidal back-EMF waveform and high torque capability. However, due to the high airgap flux density produced by magnets and the salient poles in both stator and rotor, the resultant torque ripple is relatively large, which is unfavorable for high performance drive system. In this paper, instead of conventional optimization on the machine itself, a new torque ripple suppression approach is proposed in which a series of specific harmonic currents are added into q-axis reference current, resulting in additional torque components to counteract the fundamental and second-order harmonic components of cogging torque. Both the simulation and experimental results confirm that the proposed approach can effectively suppress the torque ripple. It should be emphasized that this method is applicable to all PM machines having relatively large cogging torque.", "title": "" } ]
[ { "docid": "cb5d0498db49c8421fef279aea69c367", "text": "The growing commoditization of the underground economy has given rise to malware delivery networks, which charge fees for quickly delivering malware or unwanted software to a large number of hosts. A key method to provide this service is through the orchestration of silent delivery campaigns. These campaigns involve a group of downloaders that receive remote commands and then deliver their payloads without any user interaction. These campaigns can evade detection by relying on inconspicuous downloaders on the client side and on disposable domain names on the server side. We describe Beewolf, a system for detecting silent delivery campaigns from Internet-wide records of download events. The key observation behind our system is that the downloaders involved in these campaigns frequently retrieve payloads in lockstep. Beewolf identifies such locksteps in an unsupervised and deterministic manner, and can operate on streaming data. We utilize Beewolf to study silent delivery campaigns at scale, on a data set of 33.3 million download events. This investigation yields novel findings, e.g. malware distributed through compromised software update channels, a substantial overlap between the delivery ecosystems for malware and unwanted software, and several types of business relationships within these ecosystems. Beewolf achieves over 92% true positives and fewer than 5% false positives. Moreover, Beewolf can detect suspicious downloaders a median of 165 days ahead of existing anti-virus products and payload-hosting domains a median of 196 days ahead of existing blacklists.", "title": "" }, { "docid": "d12a47e1b72532a3c2c028620eba44d6", "text": "Mel-filter banks are commonly used in speech recognition, as they are motivated from theory related to speech production and perception. While features derived from mel-filter banks are quite popular, we argue that this filter bank is not really an appropriate choice as it is not learned for the objective at hand, i.e. speech recognition. In this paper, we explore replacing the filter bank with a filter bank layer that is learned jointly with the rest of a deep neural network. Thus, the filter bank is learned to minimize cross-entropy, which is more closely tied to the speech recognition objective. On a 50-hour English Broadcast News task, we show that we can achieve a 5% relative improvement in word error rate (WER) using the filter bank learning approach, compared to having a fixed set of filters.", "title": "" }, { "docid": "cd82eb636078b633060a857a4eb2b47b", "text": "The importance of mobile application specific testing techniques and methods has been attracting much attention of software engineers over the past few years. This is due to the fact that mobile applications are different than traditional web and desktop applications, and more and more they are moving to being used in critical domains. Mobile applications require a different approach to application quality and dependability and require an effective testing approach to build high quality and more reliable software. We performed a systematic mapping study to categorize and to structure the research evidence that has been published in the area of mobile application testing techniques and challenges that they have reported. Seventy nine (79) empirical studies are mapped to a classification schema. 
Several research gaps are identified and specific key testing issues for practitioners are identified: there is a need for eliciting testing requirements early during development process; the need to conduct research in real-world development environments; specific testing techniques targeting application life-cycle conformance and mobile services testing; and comparative studies for security and usability testing.", "title": "" }, { "docid": "c059d43c51ec35ec7949b0a10d718b6f", "text": "The problem of signal recovery from its Fourier transform magnitude is of paramount importance in various fields of engineering and has been around for more than 100 years. Due to the absence of phase information, some form of additional information is required in order to be able to uniquely identify the signal of interest. In this paper, we focus our attention on discrete-time sparse signals (of length <inline-formula><tex-math notation=\"LaTeX\">$n$ </tex-math></inline-formula>). We first show that if the discrete Fourier transform dimension is greater than or equal to <inline-formula><tex-math notation=\"LaTeX\">$2n$</tex-math></inline-formula>, then almost all signals with <italic> aperiodic</italic> support can be uniquely identified by their Fourier transform magnitude (up to time shift, conjugate flip, and global phase). Then, we develop an efficient two-stage sparse-phase retrieval algorithm (TSPR), which involves: identifying the support, i.e., the locations of the nonzero components, of the signal using a combinatorial algorithm; and identifying the signal values in the support using a convex algorithm. We show that TSPR can <italic> provably</italic> recover most <inline-formula><tex-math notation=\"LaTeX\">$O(n^{1/2-{\\epsilon }})$</tex-math> </inline-formula>-sparse signals (up to a time shift, conjugate flip, and global phase). We also show that, for most <inline-formula><tex-math notation=\"LaTeX\">$O(n^{1/4-{\\epsilon }})$</tex-math></inline-formula>-sparse signals, the recovery is <italic>robust</italic> in the presence of measurement noise. These recovery guarantees are asymptotic in nature. Numerical experiments complement our theoretical analysis and verify the effectiveness of TSPR.", "title": "" }, { "docid": "ac740402c3e733af4d690e34e567fabe", "text": "We address the problem of semantic segmentation: classifying each pixel in an image according to the semantic class it belongs to (e.g. dog, road, car). Most existing methods train from fully supervised images, where each pixel is annotated by a class label. To reduce the annotation effort, recently a few weakly supervised approaches emerged. These require only image labels indicating which classes are present. Although their performance reaches a satisfactory level, there is still a substantial gap between the accuracy of fully and weakly supervised methods. We address this gap with a novel active learning method specifically suited for this setting. We model the problem as a pairwise CRF and cast active learning as finding its most informative nodes. These nodes induce the largest expected change in the overall CRF state, after revealing their true label. Our criterion is equivalent to maximizing an upper-bound on accuracy gain. 
Experiments on two data-sets show that our method achieves 97% of the accuracy of the corresponding fully supervised model, while querying less than 17% of the (super-)pixel labels.", "title": "" }, { "docid": "65d84bb6907a34f8bc8c4b3d46706e53", "text": "This study analyzes the correlation between video game usage and academic performance. Scholastic Aptitude Test (SAT) and grade-point average (GPA) scores were used to gauge academic performance. The amount of time a student spends playing video games has a negative correlation with students' GPA and SAT scores. As video game usage increases, GPA and SAT scores decrease. A chi-squared analysis found a p value for video game usage and GPA was greater than a 95% confidence level (0.005 < p < 0.01). This finding suggests that dependence exists. SAT score and video game usage also returned a p value that was significant (0.01 < p < 0.05). Chi-squared results were not significant when comparing time spent studying and an individual's SAT score. This research suggests that video games may have a detrimental effect on an individual's GPA and possibly on SAT scores. Although these results show statistical dependence, proving cause and effect remains difficult, since SAT scores represent a single test on a given day. The effects of video games may be cumulative; however, drawing a conclusion is difficult because SAT scores represent a measure of general knowledge. GPA versus video games is more reliable because both involve a continuous measurement of engaged activity and performance. The connection remains difficult because of the complex nature of student life and academic performance.", "title": "" }, { "docid": "8020c67dd790bcff7aea0e103ea672f1", "text": "Recent efforts in satellite communication research considered the exploitation of higher frequency bands as a valuable alternative to conventional spectrum portions. An example of this can be provided by the W-band (70-110 GHz). Recently, a scientific experiment carried on by the Italian Space Agency (ASI), namely the DAVID-DCE experiment, was aimed at exploring the technical feasibility of the exploitation of the W-band for broadband networking applications. Some preliminary results of DAVID research activities pointed out that phase noise and high Doppler-shift can severely compromise the efficiency of the modulation system, particularly for what concerns the aspects related to the carrier recovery. This problem becomes very critical when the use of spectrally efficient M-ary modulations is considered in order to profitably exploit the large amount of bandwidth available in the W-band. In this work, a novel carrier recovery algorithm has been proposed for a 16-QAM modulation and tested, considering the presence of phase noise and other kinds of non-ideal behaviors of the communication devices typical of W-band satellite transmission. Simulation results demonstrated the effectiveness of the proposed solution for carrier recovery and pointed out the achievable spectral efficiency of the transmission system, considering some constraints about transmitted power, data BER and receiver bandwidth", "title": "" }, { "docid": "0ca476ed89607680399604b39d76185b", "text": "Honeybee swarms and complex brains show many parallels in how they make decisions. 
In both, separate populations of units (bees or neurons) integrate noisy evidence for alternatives, and, when one population exceeds a threshold, the alternative it represents is chosen. We show that a key feature of a brain--cross inhibition between the evidence-accumulating populations--also exists in a swarm as it chooses its nesting site. Nest-site scouts send inhibitory stop signals to other scouts producing waggle dances, causing them to cease dancing, and each scout targets scouts' reporting sites other than her own. An analytic model shows that cross inhibition between populations of scout bees increases the reliability of swarm decision-making by solving the problem of deadlock over equal sites.", "title": "" }, { "docid": "bd820eea00766190675cd3e8b89477f2", "text": "Mobile Edge Computing (MEC), a new concept that emerged about a year ago, integrating the IT and the Telecom worlds will have a great impact on the openness of the Telecom market. Furthermore, the virtualization revolution that has enabled the Cloud computing success will benefit the Telecom domain, which in turn will be able to support the IaaS (Infrastructure as a Service). The main objective of MEC solution is the export of some Cloud capabilities to the user's proximity decreasing the latency, augmenting the available bandwidth and decreasing the load on the core network. On the other hand, the Internet of Things (IoT), the Internet of the future, has benefited from the proliferation in the mobile phones' usage. Many mobile applications have been developed to connect a world of things (wearables, home automation systems, sensors, RFID tags etc.) to the Internet. Even if it is not a complete solution for a scalable IoT architecture but the time sensitive IoT applications (e-healthcare, real time monitoring, etc.) will profit from the MEC architecture. Furthermore, IoT can extend this paradigm to other areas (e.g. Vehicular Ad-hoc NETworks) with the use of Software Defined Network (SDN) orchestration to cope with the challenges hindering the IoT real deployment, as we will illustrate in this paper.", "title": "" }, { "docid": "e4dd72a52d4961f8d4d8ee9b5b40d821", "text": "Social media users spend several hours a day to read, post and search for news on microblogging platforms. Social media is becoming a key means for discovering news. However, verifying the trustworthiness of this information is becoming even more challenging. In this study, we attempt to address the problem of rumor detection and belief investigation on Twitter. Our definition of rumor is an unverifiable statement, which spreads misinformation or disinformation. We adopt a supervised rumors classification task using the standard dataset. By employing the Tweet Latent Vector (TLV) feature, which creates a 100-d vector representative of each tweet, we increased the rumor retrieval task precision up to 0.972. We also introduce the belief score and study the belief change among the rumor posters between 2010 and 2016.", "title": "" }, { "docid": "b9720d1350bf89c8a94bb30276329ce2", "text": "Generative concept representations have three major advantages over discriminative ones: they can represent uncertainty, they support integration of learning and reasoning, and they are good for unsupervised and semi-supervised learning. 
We discuss probabilistic and generative deep learning, which generative concept representations are based on, and the use of variational autoencoders and generative adversarial networks for learning generative concept representations, particularly for concepts whose data are sequences, structured data or graphs.", "title": "" }, { "docid": "1eecc45f35f693cddc2b4fe972493396", "text": "In this paper, we reformulate the conventional 2-D Frangi vesselness measure into a pre-weighted neural network (“Frangi-Net”), and illustrate that the Frangi-Net is equivalent to the original Frangi filter. Furthermore, we show that, as a neural network, Frangi-Net is trainable. We evaluate the proposed method on a set of 45 high resolution fundus images. After fine-tuning, we observe both qualitative and quantitative improvements in the segmentation quality compared to the original Frangi measure, with an increase up to 17% in F1 score.", "title": "" }, { "docid": "e471e41553bf7c229a38f3d226ff8a28", "text": "Large AC machines are sometimes fed by multiple inverters. This paper presents the complete steady-state analysis of the PM synchronous machine with multiplex windings, suitable for driving by multiple independent inverters. Machines with 4, 6 and 9 phases are covered in detail. Particular attention is given to the magnetic interactions not only between individual phases, but between channels or groups of phases. This is of interest not only for determining performance and designing control systems, but also for analysing fault tolerance. It is shown how to calculate the necessary self- and mutual inductances and how to reduce them to a compact dq-axis model without loss of detail.", "title": "" }, { "docid": "d48430f65d844c92661d3eb389cdb2f2", "text": "In organizations that use DevOps practices, software changes can be deployed as fast as 500 times or more per day. Without adequate involvement of the security team, rapidly deployed software changes are more likely to contain vulnerabilities due to lack of adequate reviews. The goal of this paper is to aid software practitioners in integrating security and DevOps by summarizing experiences in utilizing security practices in a DevOps environment. We analyzed a selected set of Internet artifacts and surveyed representatives of nine organizations that are using DevOps to systematically explore experiences in utilizing security practices. We observe that the majority of the software practitioners have expressed the potential of common DevOps activities, such as automated monitoring, to improve the security of a system. Furthermore, organizations that integrate DevOps and security utilize additional security activities, such as security requirements analysis and performing security configurations. Additionally, these teams also have established collaboration between the security team and the development and operations teams.", "title": "" }, { "docid": "186c2180e7b681a350126225cd15ece0", "text": "Two lactose-fermenting Salmonella typhi strains were isolated from bile and blood specimens of a typhoid fever patient who underwent a cholecystectomy due to cholelithiasis. One lactose-fermenting S. typhi strain was also isolated from a pus specimen which was obtained at the tip of the T-shaped tube withdrawn from the operative wound of the common bile duct of the patient. 
These three lactose-fermenting isolates: GIFU 11924 from bile, GIFU 11926 from pus, and GIFU 11927 from blood, were phenotypically identical to the type strain (GIFU 11801 = ATCC 19430 = NCTC 8385) of S. typhi, except that the three strains fermented lactose and failed to blacken the butt of Kligler iron agar or triple sugar iron agar medium. All three lactose-fermenting strains were resistant to chloramphenicol, ampicillin, sulfamethoxazole, trimethoprim, gentamicin, cephaloridine, and four other antimicrobial agents. The type strain was uniformly susceptible to these 10 drugs. The strain GIFU 11925, a lactose-negative dissociant from strain GIFU 11926, was also susceptible to these drugs, with the sole exception of chloramphenicol (minimal inhibitory concentration, 100 micrograms/ml).", "title": "" }, { "docid": "5cfc2b3a740d0434cf0b3c2812bd6e7a", "text": "Well, someone can decide by themselves what they want to do and need to do but sometimes, that kind of person will need some a logical approach to discrete math references. People with open minded will always try to seek for the new things and information from many sources. On the contrary, people with closed mind will always think that they can do it by their principals. So, what kind of person are you?", "title": "" }, { "docid": "5ff7a82ec704c8fb5c1aa975aec0507c", "text": "With the increase of an ageing population and chronic diseases, society becomes more health conscious and patients become “health consumers” looking for better health management. People's perception is shifting towards patient-centered, rather than the classical, hospital-centered health services which has been propelling the evolution of telemedicine research from the classic e-Health to m-Health and now is to ubiquitous healthcare (u-Health). It is expected that mobile & ubiquitous Telemedicine, integrated with Wireless Body Area Network (WBAN), have a great potential in fostering the provision of next-generation u-Health. Despite the recent efforts and achievements, current u-Health proposed solutions still suffer from shortcomings hampering their adoption today. This paper presents a comprehensive review of up-to-date requirements in hardware, communication, and computing for next-generation u-Health systems. It compares new technological and technical trends and discusses how they address expected u-Health requirements. A thorough survey on various worldwide recent system implementations is presented in an attempt to identify shortcomings in state-of-the-art solutions. In particular, challenges in WBAN and ubiquitous computing were emphasized. The purpose of this survey is not only to help beginners with a holistic approach toward understanding u-Health systems but also present to researchers new technological trends and design challenges they have to cope with, while designing such systems.", "title": "" }, { "docid": "cb561e56e60ba0e5eef2034158c544c2", "text": "Android is a modern and popular software platform for smartphones. Among its predominant features is an advanced security model which is based on application-oriented mandatory access control and sandboxing. This allows developers and users to restrict the execution of an application to the privileges it has (mandatorily) assigned at installation time. The exploitation of vulnerabilities in program code is hence believed to be confined within the privilege boundaries of an application's sandbox. However, in this paper we show that a privilege escalation attack is possible. 
We show that a genuine application exploited at runtime or a malicious application can escalate granted permissions. Our results immediately imply that Android's security model cannot deal with a transitive permission usage attack and Android's sandbox model fails as a last resort against malware and sophisticated runtime attacks.", "title": "" }, { "docid": "3fd551696803695056dd759d8f172779", "text": "The aim of this research essay is to examine the structural nature of theory in Information Systems. Despite the importance of theory, questions relating to its form and structure are neglected in comparison with questions relating to epistemology. The essay addresses issues of causality, explanation, prediction, and generalization that underlie an understanding of theory. A taxonomy is proposed that classifies information systems theories with respect to the manner in which four central goals are addressed: analysis, explanation, prediction, and prescription. Five interrelated types of theory are distinguished: (1) theory for analyzing, (2) theory for explaining, (3) theory for predicting, (4) theory for explaining and predicting, and (5) theory for design and action. Examples illustrate the nature of each theory type. The applicability of the taxonomy is demonstrated by classifying a sample of journal articles. The paper contributes by showing that multiple views of theory exist and by exposing the assumptions underlying different viewpoints. In addition, it is suggested that the type of theory under development can influence the choice of an epistemological approach. Support is given for the legitimacy and value of each theory type. (Allen Lee was the accepting senior editor for this paper. M. Lynne Markus, Michael D. Myers, and Robert W. Zmud served as reviewers.)", "title": "" }, { "docid": "9f34152d5dd13619d889b9f6e3dfd5c3", "text": "Nichols, M. (2003). A theory for eLearning. Educational Technology & Society, 6(2), 1-10, Available at http://ifets.ieee.org/periodical/6-2/1.html ISSN 1436-4522. © International Forum of Educational Technology & Society (IFETS). The authors and the forum jointly retain the copyright of the articles. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear the full citation on the first page. Copyrights for components of this work owned by others than IFETS must be honoured. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from the editors at kinshuk@massey.ac.nz. A theory for eLearning", "title": "" } ]
scidocsrr
02f515ef921a0680dee230afc579ab4c
Small-Size LTE/WWAN Tablet Device Antenna With Two Hybrid Feeds
[ { "docid": "8e84a474e5b7f6451a6073a3e68b1c34", "text": "A small-size tablet device antenna with three wide operating bands to cover the Long Term Evolution/Wireless Wide Area Network (LTE/WWAN) operation in the 698 ~ 960-, 1710 ~ 2690-, and 3400 ~ 3800-MHz bands is presented. The antenna has a planar structure and is easy to fabricate on one surface of a thin FR4 substrate of size 10×45×0.8 mm3. The antenna is formed by adding a first branch (branch 1, an inductively coupled strip) and a second branch (branch 2, a simple branch strip) to a coupled-fed shorted strip antenna (main portion), and the two branches are configured with the main portion to achieve a compact antenna structure. The three widebands are easy to adjust and can cover the LTE/WWAN operation, which includes the most commonly commercial LTE bands and WWAN bands (698 ~ 960 and 1710 ~ 2690 MHz) and the LTE 3.5-GHz band (3400 ~ 3800 MHz).", "title": "" } ]
[ { "docid": "d4fc45837d85f3a03fa4bd76b45921a1", "text": "The importance of the road infrastructure for the society could be compared with importance of blood vessels for humans. To ensure road surface quality it should be monitored continuously and repaired as necessary. The optimal distribution of resources for road repairs is possible providing the availability of comprehensive and objective real time data about the state of the roads. Participatory sensing is a promising approach for such data collection. The paper is describing a mobile sensing system for road irregularity detection using Android OS based smart-phones. Selected data processing algorithms are discussed and their evaluation presented with true positive rate as high as 90% using real world data. The optimal parameters for the algorithms are determined as well as recommendations for their application.", "title": "" }, { "docid": "b5372d4cad87aab69356ebd72aed0e0b", "text": "Web content nowadays can also be accessed through new generation of Internet connected TVs. However, these products failed to change users’ behavior when consuming online content. Users still prefer personal computers to access Web content. Certainly, most of the online content is still designed to be accessed by personal computers or mobile devices. In order to overcome the usability problem of Web content consumption on TVs, this paper presents a knowledge graph based video generation system that automatically converts textual Web content into videos using semantic Web and computer graphics based technologies. As a use case, Wikipedia articles are automatically converted into videos. The effectiveness of the proposed system is validated empirically via opinion surveys. Fifty percent of survey users indicated that they found generated videos enjoyable and 42 % of them indicated that they would like to use our system to consume Web content on their TVs.", "title": "" }, { "docid": "72f6f6484499ccaa0188d2a795daa74c", "text": "Road detection is one of the most important research areas in driver assistance and automated driving field. However, the performance of existing methods is still unsatisfactory, especially in severe shadow conditions. To overcome those difficulties, first we propose a novel shadow-free feature extractor based on the color distribution of road surface pixels. Then we present a road detection framework based on the extractor, whose performance is more accurate and robust than that of existing extractors. Also, the proposed framework has much low-complexity, which is suitable for usage in practical systems.", "title": "" }, { "docid": "7c287295e022480314d8a2627cd12cef", "text": "The causal role of human papillomavirus infections in cervical cancer has been documented beyond reasonable doubt. The association is present in virtually all cervical cancer cases worldwide. It is the right time for medical societies and public health regulators to consider this evidence and to define its preventive and clinical implications. A comprehensive review of key studies and results is presented.", "title": "" }, { "docid": "e82459841d697a538f3ab77817ed45e7", "text": "A mm-wave digital transmitter based on a 60 GHz all-digital phase-locked loop (ADPLL) with wideband frequency modulation (FM) for FMCW radar applications is proposed. The fractional-N ADPLL employs a high-resolution 60 GHz digitally-controlled oscillator (DCO) and is capable of multi-rate two-point FM. It achieves a measured rms jitter of 590.2 fs, while the loop settles within 3 μs. 
The measured reference spur is only -74 dBc, the fractional spurs are below -62 dBc, with no other significant spurs. A closed-loop DCO gain linearization scheme realizes a GHz-level triangular chirp across multiple DCO tuning banks with a measured frequency error (i.e., nonlinearity) in the FMCW ramp of only 117 kHz rms for a 62 GHz carrier with 1.22 GHz bandwidth. The synthesizer is transformer-coupled to a 3-stage neutralized power amplifier (PA) that delivers +5 dBm to a 50 Ω load. Implemented in 65 nm CMOS, the transmitter prototype (including PA) consumes 89 mW from a 1.2 V supply.", "title": "" }, { "docid": "dd8194c7f8e28e55fbc45f0d71336112", "text": "Followers' identification with the leader and the organizational unit, dependence on the leader, and empowerment by the leader are often attributed to transformational leadership in organizations. However, these hypothesized outcomes have received very little attention in empirical studies. Using a sample of 888 bank employees working under 76 branch managers, the authors tested the relationships between transformational leadership and these outcomes. They found that transformational leadership was positively related to both followers' dependence and their empowerment and that personal identification mediated the relationship between transformational leadership and followers' dependence on the leader, whereas social identification mediated the relationship between transformational leadership and followers' empowerment. The authors discuss the implications of these findings to both theory and practice.", "title": "" }, { "docid": "659818e97cd3329d603097c122541815", "text": "A large-scale content analysis of characters in video games was employed to answer questions about their representations of gender, race and age in comparison to the US population. The sample included 150 games from a year across nine platforms, with the results weighted according to game sales. This innovation enabled the results to be analyzed in proportion to the games that were actually played by the public, and thus allowed the first statements able to be generalized about the content of popular video games. The results show a systematic over-representation of males, white and adults and a systematic under-representation of females, Hispanics, Native Americans, children and the elderly. Overall, the results are similar to those found in television research. The implications for identity, cognitive models, cultivation and game research are discussed. new media & society Copyright © 2009 SAGE Publications Los Angeles, London, New Delhi, Singapore and Washington DC Vol 11(5): 815–834 [DOI: 10.1177/1461444809105354]", "title": "" }, { "docid": "26b67fe7ee89c941d313187672b1d514", "text": "Since permanent magnet linear synchronous motor (PMLSM) has a bright future in electromagnetic launch (EML), moving-magnet PMLSM with multisegment primary is a potential choice. To overcome the end effect in the junctions of armature units, three different ring windings are proposed for the multisegment primary of PMLSM: slotted ring windings, slotless ring windings, and quasi-sinusoidal ring windings. They are designed for various demands of EML, regarding the load levels and force fluctuations. Auxiliary iron yokes are designed to reduce the mover weights, and also help restrain the end effect. PMLSM with slotted ring windings has a higher thrust for heavy load EML. 
PMLSM with slotless ring windings eliminates the cogging effect, while PMLSM with quasi-sinusoidal ring windings has very low thrust ripple; they aim to launch the light aircraft and run smooth. Structure designs of these motors are introduced; motor models and parameter optimizations are accomplished by finite-element method (FEM). Then, performance advantages of the proposed motors are investigated by comparisons of common PMLSMs. At last, the prototypes are manufactured and tested to validate the feasibilities of ring winding motors with auxiliary iron yokes. The results prove that the proposed motors can effectively satisfy the requirements of EML.", "title": "" }, { "docid": "15205e074804764a6df0bdb7186c0d8c", "text": "Lactose (milk sugar) is a fermentable substrate. It can be fermented outside of the body to produce cheeses, yoghurts and acidified milks. It can be fermented within the large intestine in those people who have insufficient expression of lactase enzyme on the intestinal mucosa to ferment this disaccharide to its absorbable, simple hexose sugars: glucose and galactose. In this way, the issues of lactose intolerance and of fermented foods are joined. It is only at the extremes of life, in infancy and old age, in which severe and life-threatening consequences from lactose maldigestion may occur. Fermentation as part of food processing can be used for preservation, for liberation of pre-digested nutrients, or to create ethanolic beverages. Almost all cultures and ethnic groups have developed some typical forms of fermented foods. Lessons from fermentation of non-dairy items may be applicable to fermentation of milk, and vice versa.", "title": "" }, { "docid": "365cadf5f980e7c99cc3c2416ca36ba1", "text": "Epidemiologic studies from numerous disparate populations reveal that individuals with the habit of daily moderate wine consumption enjoy significant reductions in all-cause and particularly cardiovascular mortality when compared with individuals who abstain or who drink alcohol to excess. Researchers are working to explain this observation in molecular and nutritional terms. Moderate ethanol intake from any type of beverage improves lipoprotein metabolism and lowers cardiovascular mortality risk. The question now is whether wine, particularly red wine with its abundant content of phenolic acids and polyphenols, confers additional health benefits. Discovering the nutritional properties of wine is a challenging task, which requires that the biological actions and bioavailability of the >200 individual phenolic compounds be documented and interpreted within the societal factors that stratify wine consumption and the myriad effects of alcohol alone. Further challenge arises because the health benefits of wine address the prevention of slowly developing diseases for which validated biomarkers are rare. Thus, although the benefits of the polyphenols from fruits and vegetables are increasingly accepted, consensus on wine is developing more slowly. Scientific research has demonstrated that the molecules present in grapes and in wine alter cellular metabolism and signaling, which is consistent mechanistically with reducing arterial disease. 
Future research must address specific mechanisms both of alcohol and of polyphenolic action and develop biomarkers of their role in disease prevention in individuals.", "title": "" }, { "docid": "ba959139c1fc6324f3c32a4e4b9bb16c", "text": "The short-term unit commitment problem is traditionally solved as a single-objective optimization problem with system operation cost as the only objective. This paper presents multi-objectivization of the short-term unit commitment problem in uncertain environment by considering reliability as an additional objective along with the economic objective. The uncertainties occurring due to unit outage and load forecast error are incorporated using loss of load probability (LOLP) and expected unserved energy (EUE) reliability indices. The multi-objectivized unit commitment problem in uncertain environment is solved using our earlier proposed multi-objective evolutionary algorithm [1]. Simulations are performed on a test system of 26 thermal generating units and the results obtained are benchmarked against the study [2] where the unit commitment problem was solved as a reliability-constrained single-objective optimization problem. The simulation results demonstrate that the proposed multi-objectivized approach can find solutions with considerably lower cost than those obtained in the benchmark. Further, the efficiency and consistency of the proposed algorithm for multi-objectivized unit commitment problem is demonstrated by quantitative performance assessment using hypervolume indicator.", "title": "" }, { "docid": "bb7bd1a00239a0b8b875ca03ccf218c3", "text": "Objectives: To assess the effect of milk with honey in childre n undergoing tonsillectomy on bleeding, pain and wound healing. Methods: The experimental study wit contol group was conduct ed out ear, nose and throat clinic and outpatient clinic in a public hospital. In the study, it were studied with children undergoing tonsillectomy who are 6-17 years of age (N=68). The standardized natural flowe r honey was applied to children in the experimental group after tonsillectomy, every day, in addition to the standard diet in clinical routine. The children wer e assigned randomly the experimental and control groups accord ing to the operation sequence. In collecting the da ta, questionnaire, pain, wound healing and visual analo g scales was used. The data were analyzed by percen tage distributions, means, chi-square test, variance ana lysis, and correlation analysis. It was depended on ethical principles. Results: In the study, it was determined that not bleeding, is significant less pain and the level of wound he aling of children in group milk with honey than children i milk group (p<.001). It has been found that a st rong negative correlation between the level of pain and wound healing of children in milk with honey and mi lk groups (p<.001). Conclusions: It has been determined that milk with honey was ef fective in prevent bleeding, reducing pain, and accelerate wound healing. Honey, which is a natural nutrient is a safe care tool that can be applied i n children undergoing tonsillectomy without diabetes and aller gic to honey and oral feeding in addition to routin e clinical the diet.", "title": "" }, { "docid": "20373fff73f01977417e9aaf1d88a53f", "text": "In recent years, Deep Learning has been successfully applied to multimodal learning problems, with the aim of learning useful joint representations in data fusion applications. 
When the available modalities consist of time series data such as video, audio and sensor signals, it becomes imperative to consider their temporal structure during the fusion process. In this paper, we propose the Correlational Recurrent Neural Network (CorrRNN), a novel temporal fusion model for fusing multiple input modalities that are inherently temporal in nature. Key features of our proposed model include: (i) simultaneous learning of the joint representation and temporal dependencies between modalities, (ii) use of multiple loss terms in the objective function, including a maximum correlation loss term to enhance learning of cross-modal information, and (iii) the use of an attention model to dynamically adjust the contribution of different input modalities to the joint representation. We validate our model via experimentation on two different tasks: video-and sensor-based activity classification, and audio-visual speech recognition. We empirically analyze the contributions of different components of the proposed CorrRNN model, and demonstrate its robustness, effectiveness and state-of-the-art performance on multiple datasets.", "title": "" }, { "docid": "131f119361582f0d538413680dfafd9d", "text": "In this paper, the problems of current web search engines are analyzed, and the need for a new design is justified. Some ideas on how to improve current web search engines are presented, and then an adaptive method for web meta-search engines with a multi-agent specially the mobile agents is presented to make search engines work more efficiently. In the method, the cooperation between stationary and mobile agents is used to make more efficiency. The meta-search engine gives the user needed documents based on the multi-stage mechanism. The merge of the results obtained from the search engines in the network is done in parallel. Using a reduction parallel algorithm, the efficiency of this method is increased. Furthermore, a feedback mechanism gives the meta-search engine the user’s suggestions about the found documents, which leads to a new query using a genetic algorithm. In the new search stage, more relevant documents are given to the user. The practical experiments were performed in Aglets programming environment. The results achieved from these experiments confirm the efficiency and adaptability of the method.", "title": "" }, { "docid": "63d301040ccb7051de18af0e2d6d93ba", "text": "Image inpainting refers to the process of restoring missing or damaged areas in an image. This field of research has been very active over recent years, boosted by numerous applications: restoring images from scratches or text overlays, loss concealment in a context of impaired image transmission, object removal in a context of editing, or disocclusion in image-based rendering (IBR) of viewpoints different from those captured by the cameras. Although earlier work dealing with disocclusion has been published in [1], the term inpainting first appeared in [2] by analogy with a process used in art restoration.", "title": "" }, { "docid": "c393b4afc1348e88edaa9eff07fdbe45", "text": "The majority of the research related to visual recognition has so far focused on bottom-up analysis, where the input is processed in a cascade of cortical regions that analyze increasingly complex information. Gradually more studies emphasize the role of top-down facilitation in cortical analysis, but it remains something of a mystery how such processing would be initiated. 
After all, top-down facilitation implies that high-level information is activated earlier than some relevant lower-level information. Building on previous studies, I propose a specific mechanism for the activation of top-down facilitation during visual object recognition. The gist of this hypothesis is that a partially analyzed version of the input image (i.e., a blurred image) is projected rapidly from early visual areas directly to the prefrontal cortex (PFC). This coarse representation activates in the PFC expectations about the most likely interpretations of the input image, which are then back-projected as an initial guess to the temporal cortex to be integrated with the bottom-up analysis. The top-down process facilitates recognition by substantially limiting the number of object representations that need to be considered. Furthermore, such a rapid mechanism may provide critical information when a quick response is necessary.", "title": "" }, { "docid": "603a4d4037ce9fc653d46473f9085d67", "text": "In different applications like Complex document image processing, Advertisement and Intelligent transportation logo recognition is an important issue. Logo Recognition is an essential sub process although there are many approaches to study logos in these fields. In this paper a robust method for recognition of a logo is proposed, which involves K-nearest neighbors distance classifier and Support Vector Machine classifier to evaluate the similarity between images under test and trained images. For test images eight set of logo image with a rotation angle of 0°, 45°, 90°, 135°, 180°, 225°, 270°, and 315° are considered. A Dual Tree Complex Wavelet Transform features were used for determining features. Final result is obtained by measuring the similarity obtained from the feature vectors of the trained image and image under test. Total of 31 classes of logo images of different organizations are considered for experimental results. An accuracy of 87.49% is obtained using KNN classifier and 92.33% from SVM classifier.", "title": "" }, { "docid": "78e4a57eff6ffc7ad012639933f8ebcc", "text": "In this paper, we describe active and semi-supervised learning methods for reducing the labeling effort for spoken language understanding. In a goal-oriented call routing system, understanding the intent of the user can be framed as a classification problem. State of the art statistical classification systems are trained using a large number of human-labeled utterances, preparation of which is labor intensive and time consuming. Active learning aims to minimize the number of labeled utterances by automatically selecting the utterances that are likely to be most informative for labeling. The method for active learning we propose, inspired by certainty-based active learning, selects the examples that the classifier is the least confident about. The examples that are classified with higher confidence scores (hence not selected by active learning) are exploited using two semi-supervised learning methods. The first method augments the training data by using the machine-labeled classes for the unlabeled utterances. The second method instead augments the classification model trained using the human-labeled utterances with the machine-labeled ones in a weighted manner. We then combine active and semi-supervised learning using selectively sampled and automatically labeled data. 
This enables us to exploit all collected data and alleviates the data imbalance problem caused by employing only active or semi-supervised learning. We have evaluated these active and semi-supervised learning methods with a call classification system used for AT&T customer care. Our results indicate that it is possible to reduce human labeling effort significantly. 2004 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "4eaa8c1af7a4f6f6c9de1e6de3f2495f", "text": "Technologies to support the Internet of Things are becoming more important as the need to better understand our environments and make them smart increases. As a result it is predicted that intelligent devices and networks, such as WSNs, will not be isolated, but connected and integrated, composing computer networks. So far, the IP-based Internet is the largest network in the world; therefore, there are great strides to connect WSNs with the Internet. To this end, the IETF has developed a suite of protocols and open standards for accessing applications and services for wireless resource constrained networks. However, many open challenges remain, mostly due to the complex deployment characteristics of such systems and the stringent requirements imposed by various services wishing to make use of such complex systems. Thus, it becomes critically important to study how the current approaches to standardization in this area can be improved, and at the same time better understand the opportunities for the research community to contribute to the IoT field. To this end, this article presents an overview of current standards and research activities in both industry and academia.", "title": "" }, { "docid": "08bef09a01414bafcbc778fea85a7c0a", "text": "The use of energy-minimizing curves, known as “snakes,” to extract features of interest in images has been introduced by Kass, Witkin & Terzopoulos (Int. J. Comput. Vision 1, 1987, 321-331). We present a model of deformation which solves some of the problems encountered with the original method. The external forces that push the curve to the edges are modified to give more stable results. The original snake, when it is not close enough to contours, is not attracted by them and straightens to a line. Our model makes the curve behave like a balloon which is inflated by an additional force. The initial curve need no longer be close to the solution to converge. The curve passes over weak edges and is stopped only if the edge is strong. We give examples of extracting a ventricle in medical images. We have also made a first step toward 3D object reconstruction, by tracking the extracted contour on a series of successive cross sections.", "title": "" } ]
scidocsrr
bde7b86f912c0b9f51107f1cdafd9552
Unsupervised Random Walk Sentence Embeddings: A Strong but Simple Baseline
[ { "docid": "f5f56d680fbecb94a08d9b8e5925228f", "text": "Semantic word embeddings represent the meaning of a word via a vector, and are created by diverse methods. Many use nonlinear operations on co-occurrence statistics, and have hand-tuned hyperparameters and reweighting methods. This paper proposes a new generative model, a dynamic version of the log-linear topic model of Mnih and Hinton (2007). The methodological novelty is to use the prior to compute closed form expressions for word statistics. This provides a theoretical justification for nonlinear models like PMI, word2vec, and GloVe, as well as some hyperparameter choices. It also helps explain why low-dimensional semantic embeddings contain linear algebraic structure that allows solution of word analogies, as shown by Mikolov et al. (2013a) and many subsequent papers. Experimental support is provided for the generative model assumptions, the most important of which is that latent word vectors are fairly uniformly dispersed in space.", "title": "" }, { "docid": "062c970a14ac0715ccf96cee464a4fec", "text": "A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts.", "title": "" } ]
[ { "docid": "beb1c8ba8809d1ac409584bea1495654", "text": "Multimodal information processing has received considerable attention in recent years. The focus of existing research in this area has been predominantly on the use of fusion technology. In this paper, we suggest that cross-modal association can provide a new set of powerful solutions in this area. We investigate different cross-modal association methods using the linear correlation model. We also introduce a novel method for cross-modal association called Cross-modal Factor Analysis (CFA). Our earlier work on Latent Semantic Indexing (LSI) is extended for applications that use off-line supervised training. As a promising research direction and practical application of cross-modal association, cross-modal information retrieval where queries from one modality are used to search for content in another modality using low-level features is then discussed in detail. Different association methods are tested and compared using the proposed cross-modal retrieval system. All these methods achieve significant dimensionality reduction. Among them CFA gives the best retrieval performance. Finally, this paper addresses the use of cross-modal association to detect talking heads. The CFA method achieves 91.1% detection accuracy, while LSI and Canonical Correlation Analysis (CCA) achieve 66.1% and 73.9% accuracy, respectively. As shown by experiments, cross-modal association provides many useful benefits, such as robust noise resistance and effective feature selection. Compared to CCA and LSI, the proposed CFA shows several advantages in analysis performance and feature usage. Its capability in feature selection and noise resistance also makes CFA a promising tool for many multimedia analysis applications.", "title": "" }, { "docid": "c6b1ad47687dbd86b28a098160f406bb", "text": "The development of a 10-item self-report scale (EPDS) to screen for Postnatal Depression in the community is described. After extensive pilot interviews a validation study was carried out on 84 mothers using the Research Diagnostic Criteria for depressive illness obtained from Goldberg's Standardised Psychiatric Interview. The EPDS was found to have satisfactory sensitivity and specificity, and was also sensitive to change in the severity of depression over time. The scale can be completed in about 5 minutes and has a simple method of scoring. The use of the EPDS in the secondary prevention of Postnatal Depression is discussed.", "title": "" }, { "docid": "246bbb92bc968d20866b8c92a10f8ac7", "text": "This survey paper provides an overview of content-based music information retrieval systems, both for audio and for symbolic music notation. Matching algorithms and indexing methods are briefly presented. The need for a TREC-like comparison of matching algorithms such as MIREX at ISMIR becomes clear from the high number of quite different methods which so far only have been used on different data collections. We placed the systems on a map showing the tasks and users for which they are suitable, and we find that existing content-based retrieval systems fail to cover a gap between the very general and the very specific retrieval tasks.", "title": "" }, { "docid": "8518dc45e3b0accfc551111489842359", "text": "PURPOSE\nRobot-assisted surgery has been rapidly adopted in the U.S. for prostate cancer. 
Its adoption has been driven by market forces and patient preference, and debate continues regarding whether it offers improved outcomes to justify the higher cost relative to open surgery. We examined the comparative effectiveness of robot-assisted vs open radical prostatectomy in cancer control and survival in a nationally representative population.\n\n\nMATERIALS AND METHODS\nThis population based observational cohort study of patients with prostate cancer undergoing robot-assisted radical prostatectomy and open radical prostatectomy during 2003 to 2012 used data captured in the SEER (Surveillance, Epidemiology, and End Results)-Medicare linked database. Propensity score matching and time to event analysis were used to compare all cause mortality, prostate cancer specific mortality and use of additional treatment after surgery.\n\n\nRESULTS\nA total of 6,430 robot-assisted radical prostatectomies and 9,161 open radical prostatectomies performed during 2003 to 2012 were identified. The use of robot-assisted radical prostatectomy increased from 13.6% in 2003 to 2004 to 72.6% in 2011 to 2012. After a median followup of 6.5 years (IQR 5.2-7.9) robot-assisted radical prostatectomy was associated with an equivalent risk of all cause mortality (HR 0.85, 0.72-1.01) and similar cancer specific mortality (HR 0.85, 0.50-1.43) vs open radical prostatectomy. Robot-assisted radical prostatectomy was also associated with less use of additional treatment (HR 0.78, 0.70-0.86).\n\n\nCONCLUSIONS\nRobot-assisted radical prostatectomy has comparable intermediate cancer control as evidenced by less use of additional postoperative cancer therapies and equivalent cancer specific and overall survival. Longer term followup is needed to assess for differences in prostate cancer specific survival, which was similar during intermediate followup. Our findings have significant quality and cost implications, and provide reassurance regarding the adoption of more expensive technology in the absence of randomized controlled trials.", "title": "" }, { "docid": "a63f9b27e27393bb432198f18c3d89e1", "text": "Accounting information system had been widely used by many organizations to automate and integrate their business operations .The main objective s of many businesses to adopt this system are to improve their business efficiency and increase competitiveness. The qualitative characteristic of any Accounting Information System can be maintained if there is a sound internal control system. Internal control is run to ensure the achievement of operational goals and performance. 
Therefore the purpose of this study is to examine the efficiency of Accounting Information System on performance measures using the secondary data in which it was found that accounting information system is of great importance to both businesses and organization in which it helps in facilitating management decision making, internal controls ,quality of the financial report ,and it facilitates the company’s transaction and it also plays an important role in economic system, and the study recommends that businesses, firms and organization should adopt the use of AIS because adequate accounting information is essential for every effective decision making process and adequate information is possible if accounting information systems are run efficiently also, efficient Accounting Information Systems ensures that all levels of management get sufficient, adequate, relevant and true information for planning and controlling activities of the business organization.", "title": "" }, { "docid": "7963adab39b58ab0334b8eef4149c59c", "text": "The aim of the present study was to gain a better understanding of the content characteristics that make online consumer reviews a useful source of consumer information. To this end, we content analyzed reviews of experience and search products posted on Amazon.com (N = 400). The insights derived from this content analysis were linked with the proportion of ‘useful’ votes that reviews received from fellow consumers. The results show that content characteristics are paramount to understanding the perceived usefulness of reviews. Specifically, argumentation (density and diversity) served as a significant predictor of perceived usefulness, as did review valence although this latter effect was contingent on the type of product (search or experience) being evaluated in reviews. The presence of expertise claims appeared to be weakly related to the perceived usefulness of reviews. The broader theoretical, methodological and practical implications of these findings are discussed.", "title": "" }, { "docid": "df0381c129339b1131897708fc00a96c", "text": "We present a novel congestion control algorithm suitable for use with cumulative, layered data streams in the MBone. Our algorithm behaves similarly to TCP congestion control algorithms, and shares bandwidth fairly with other instances of the protocol and with TCP flows. It is entirely receiver driven and requires no per-receiver status at the sender, in order to scale to large numbers of receivers. It relies on standard functionalities of multicast routers, and is suitable for continuous stream and reliable bulk data transfer. In the paper we illustrate the algorithm, characterize its response to losses both analytically and by simulations, and analyse its behaviour using simulations and experiments in real networks. We also show how error recovery can be dealt with independently from congestion control by using FEC techniques, so as to provide reliable bulk data transfer.", "title": "" }, { "docid": "65a7e691f8bb6831c269cf5770271325", "text": "Seven types of evidence are reviewed that indicate that high subjective wellbeing (such as life satisfaction, absence of negative emotions, optimism, and positive emotions) causes better health and longevity. For example, prospective longitudinal studies of normal populations provide evidence that various types of subjective well-being such as positive affect predict health and longevity, controlling for health and socioeconomic status at baseline. 
Combined with experimental human and animal research, as well as naturalistic studies of changes of subjective well-being and physiological processes over time, the case that subjective well-being influences health and longevity in healthy populations is compelling. However, the claim that subjective well-being lengthens the lives of those with certain diseases such as cancer remains controversial. Positive feelings predict longevity and health beyond negative feelings. However, intensely aroused or manic positive affect may be detrimental to health. Issues such as causality, effect size, types of subjective well-being, and statistical controls are discussed.", "title": "" }, { "docid": "c64d9727c98e8c5cdbb3445918eb32c7", "text": "This paper describes an industrial project aimed at migrating legacy COBOL programs running on an IBM-AS400 to Java for running in an open environment. The unique aspect of this migration is the reengineering of the COBOL code prior to migration. The programs were in their previous form hardwired to the AS400 screens as well as to the AS400 file system. The goal of the reengineering project was to free the code from these proprietary dependencies and to reduce them to the pure business logic. Disentangling legacy code from it's physical environment is a major prerequisite to converting that code to another environment. The goal is the virtualization of program interfaces. That was accomplished here in a multistep automated process which led to small, environment independent COBOL modules which could be readily converted over into Java packages. The pilot project has been completed for a sample subset of the production planning and control system. The conversion to Java is pending the test of the reengineered COBOL modules.", "title": "" }, { "docid": "6a1f1345a390ff886c95a57519535c40", "text": "BACKGROUND\nThe goal of this pilot study was to evaluate the effects of the cognitive-restructuring technique 'lucid dreaming treatment' (LDT) on chronic nightmares. Becoming lucid (realizing that one is dreaming) during a nightmare allows one to alter the nightmare storyline during the nightmare itself.\n\n\nMETHODS\nAfter having filled out a sleep and a posttraumatic stress disorder questionnaire, 23 nightmare sufferers were randomly divided into 3 groups; 8 participants received one 2-hour individual LDT session, 8 participants received one 2-hour group LDT session, and 7 participants were placed on the waiting list. LDT consisted of exposure, mastery, and lucidity exercises. Participants filled out the same questionnaires 12 weeks after the intervention (follow-up).\n\n\nRESULTS\nAt follow-up the nightmare frequency of both treatment groups had decreased. There were no significant changes in sleep quality and posttraumatic stress disorder symptom severity. Lucidity was not necessary for a reduction in nightmare frequency.\n\n\nCONCLUSIONS\nLDT seems effective in reducing nightmare frequency, although the primary therapeutic component (i.e. exposure, mastery, or lucidity) remains unclear.", "title": "" }, { "docid": "092239f41a6e216411174e5ed9dceee2", "text": "In this paper, we propose a simple but effective specular highlight removal method using a single input image. Our method is based on a key observation the maximum fraction of the diffuse color component (so called maximum diffuse chromaticity in the literature) in local patches in color images changes smoothly. 
Using this property, we can estimate the maximum diffuse chromaticity values of the specular pixels by directly applying low-pass filter to the maximum fraction of the color components of the original image, such that the maximum diffuse chromaticity values can be propagated from the diffuse pixels to the specular pixels. The diffuse color at each pixel can then be computed as a nonlinear function of the estimated maximum diffuse chromaticity. Our method can be directly extended for multi-color surfaces if edge-preserving filters (e.g., bilateral filter) are used such that the smoothing can be guided by the maximum diffuse chromaticity. But maximum diffuse chromaticity is to be estimated. We thus present an approximation and demonstrate its effectiveness. Recent development in fast bilateral filtering techniques enables our method to run over 200× faster than the state-of-the-art on a standard CPU and differentiates our method from previous work.", "title": "" }, { "docid": "a49c8e6f222b661447d1de32e29d0f16", "text": "The discovery of ammonia oxidation by mesophilic and thermophilic Crenarchaeota and the widespread distribution of these organisms in marine and terrestrial environments indicated an important role for them in the global nitrogen cycle. However, very little is known about their physiology or their contribution to nitrification. Here we report oligotrophic ammonia oxidation kinetics and cellular characteristics of the mesophilic crenarchaeon ‘Candidatus Nitrosopumilus maritimus’ strain SCM1. Unlike characterized ammonia-oxidizing bacteria, SCM1 is adapted to life under extreme nutrient limitation, sustaining high specific oxidation rates at ammonium concentrations found in open oceans. Its half-saturation constant (Km = 133 nM total ammonium) and substrate threshold (≤10 nM) closely resemble kinetics of in situ nitrification in marine systems and directly link ammonia-oxidizing Archaea to oligotrophic nitrification. The remarkably high specific affinity for reduced nitrogen (68,700 l per g cells per h) of SCM1 suggests that Nitrosopumilus-like ammonia-oxidizing Archaea could successfully compete with heterotrophic bacterioplankton and phytoplankton. Together these findings support the hypothesis that nitrification is more prevalent in the marine nitrogen cycle than accounted for in current biogeochemical models.", "title": "" }, { "docid": "b32286014bb7105e62fba85a9aab9019", "text": "PURPOSE\nSystemic thrombolysis for the treatment of acute pulmonary embolism (PE) carries an estimated 20% risk of major hemorrhage, including a 3%-5% risk of hemorrhagic stroke. The authors used evidence-based methods to evaluate the safety and effectiveness of modern catheter-directed therapy (CDT) as an alternative treatment for massive PE.\n\n\nMATERIALS AND METHODS\nThe systematic review was initiated by electronic literature searches (MEDLINE, EMBASE) for studies published from January 1990 through September 2008. Inclusion criteria were applied to select patients with acute massive PE treated with modern CDT. Modern techniques were defined as the use of low-profile devices (< or =10 F), mechanical fragmentation and/or aspiration of emboli including rheolytic thrombectomy, and intraclot thrombolytic injection if a local drug was infused. Relevant non-English language articles were translated into English. Paired reviewers assessed study quality and abstracted data. 
Meta-analysis was performed by using random effects models to calculate pooled estimates for complications and clinical success rates across studies. Clinical success was defined as stabilization of hemodynamics, resolution of hypoxia, and survival to hospital discharge.\n\n\nRESULTS\nFive hundred ninety-four patients from 35 studies (six prospective, 29 retrospective) met the criteria for inclusion. The pooled clinical success rate from CDT was 86.5% (95% confidence interval [CI]: 82.1%, 90.2%). Pooled risks of minor and major procedural complications were 7.9% (95% CI: 5.0%, 11.3%) and 2.4% (95% CI: 1.9%, 4.3%), respectively. Data on the use of systemic thrombolysis before CDT were available in 571 patients; 546 of those patients (95%) were treated with CDT as the first adjunct to heparin without previous intravenous thrombolysis.\n\n\nCONCLUSIONS\nModern CDT is a relatively safe and effective treatment for acute massive PE. At experienced centers, CDT should be considered as a first-line treatment for patients with massive PE.", "title": "" }, { "docid": "f1131f6f25601c32fefc09c38c7ad84b", "text": "We create a new online reduction of multiclass classification to binary classification for which training and prediction time scale logarithmically with the number of classes. We show that several simple techniques give rise to an algorithm which is superior to previous logarithmic time classification approaches while competing with one-against-all in space. The core construction is based on using a tree to select a small subset of labels with high recall, which are then scored using a one-against-some structure with high precision.", "title": "" }, { "docid": "1c90adf8ec68ff52e777b2041f8bf4c4", "text": "In many situations we have some measurement of confidence on “positiveness” for a binary label. The “positiveness” is a continuous value whose range is a bounded interval. It quantifies the affiliation of each training data to the positive class. We propose a novel learning algorithm called expectation loss SVM (eSVM) that is devoted to the problems where only the “positiveness” instead of a binary label of each training sample is available. Our e-SVM algorithm can also be readily extended to learn segment classifiers under weak supervision where the exact positiveness value of each training example is unobserved. In experiments, we show that the e-SVM algorithm can effectively address the segment proposal classification task under both strong supervision (e.g. the pixel-level annotations are available) and the weak supervision (e.g. only bounding-box annotations are available), and outperforms the alternative approaches. Besides, we further validate this method on two major tasks of computer vision: semantic segmentation and object detection. Our method achieves the state-of-the-art object detection performance on PASCAL VOC 2007 dataset.", "title": "" }, { "docid": "c955e63d5c5a30e18c008dcc51d1194b", "text": "We report, for the first time, the identification of fatty acid particles in formulations containing the surfactant polysorbate 20. These fatty acid particles were observed in multiple mAb formulations during their expected shelf life under recommended storage conditions. The fatty acid particles were granular or sand-like in morphology and were several microns in size. They could be identified by distinct IR bands, with additional confirmation from energy-dispersive X-ray spectroscopy analysis. The particles were readily distinguishable from protein particles by these methods. 
In addition, particles containing a mixture of protein and fatty acids were also identified, suggesting that the particulation pathways for the two particle types may not be distinct. The techniques and observations described will be useful for the correct identification of proteinaceous versus nonproteinaceous particles in pharmaceutical products.", "title": "" }, { "docid": "02469f669769f5c9e2a9dc49cee20862", "text": "In this work we study the use of 3D hand poses to recognize first-person dynamic hand actions interacting with 3D objects. Towards this goal, we collected RGB-D video sequences comprised of more than 100K frames of 45 daily hand action categories, involving 26 different objects in several hand configurations. To obtain hand pose annotations, we used our own mo-cap system that automatically infers the 3D location of each of the 21 joints of a hand model via 6 magnetic sensors and inverse kinematics. Additionally, we recorded the 6D object poses and provide 3D object models for a subset of hand-object interaction sequences. To the best of our knowledge, this is the first benchmark that enables the study of first-person hand actions with the use of 3D hand poses. We present an extensive experimental evaluation of RGB-D and pose-based action recognition by 18 baselines/state-of-the-art approaches. The impact of using appearance features, poses, and their combinations are measured, and the different training/testing protocols are evaluated. Finally, we assess how ready the 3D hand pose estimation field is when hands are severely occluded by objects in egocentric views and its influence on action recognition. From the results, we see clear benefits of using hand pose as a cue for action recognition compared to other data modalities. Our dataset and experiments can be of interest to communities of 3D hand pose estimation, 6D object pose, and robotics as well as action recognition.", "title": "" }, { "docid": "96471eda3162fa5bdac40220646e7697", "text": "A key step in mass spectrometry (MS)-based proteomics is the identification of peptides in sequence databases by their fragmentation spectra. Here we describe Andromeda, a novel peptide search engine using a probabilistic scoring model. On proteome data, Andromeda performs as well as Mascot, a widely used commercial search engine, as judged by sensitivity and specificity analysis based on target decoy searches. Furthermore, it can handle data with arbitrarily high fragment mass accuracy, is able to assign and score complex patterns of post-translational modifications, such as highly phosphorylated peptides, and accommodates extremely large databases. The algorithms of Andromeda are provided. Andromeda can function independently or as an integrated search engine of the widely used MaxQuant computational proteomics platform and both are freely available at www.maxquant.org. The combination enables analysis of large data sets in a simple analysis workflow on a desktop computer. For searching individual spectra Andromeda is also accessible via a web server. We demonstrate the flexibility of the system by implementing the capability to identify cofragmented peptides, significantly improving the total number of identified peptides.", "title": "" }, { "docid": "595e68cfcf7b2606f42f2ad5afb9713a", "text": "Mammalian hibernators undergo a remarkable phenotypic switch that involves profound changes in physiology, morphology, and behavior in response to periods of unfavorable environmental conditions. 
The ability to hibernate is found throughout the class Mammalia and appears to involve differential expression of genes common to all mammals, rather than the induction of novel gene products unique to the hibernating state. The hibernation season is characterized by extended bouts of torpor, during which minimal body temperature (Tb) can fall as low as -2.9 degrees C and metabolism can be reduced to 1% of euthermic rates. Many global biochemical and physiological processes exploit low temperatures to lower reaction rates but retain the ability to resume full activity upon rewarming. Other critical functions must continue at physiologically relevant levels during torpor and be precisely regulated even at Tb values near 0 degrees C. Research using new tools of molecular and cellular biology is beginning to reveal how hibernators survive repeated cycles of torpor and arousal during the hibernation season. Comprehensive approaches that exploit advances in genomic and proteomic technologies are needed to further define the differentially expressed genes that distinguish the summer euthermic from winter hibernating states. Detailed understanding of hibernation from the molecular to organismal levels should enable the translation of this information to the development of a variety of hypothermic and hypometabolic strategies to improve outcomes for human and animal health.", "title": "" }, { "docid": "8869e69647a16278d7a2ac26316ec5d0", "text": "Despite significant progress, most existing visual dictionary learning methods rely on image descriptors alone or together with class labels. However, Web images are often associated with text data which may carry substantial information regarding image semantics, and may be exploited for visual dictionary learning. This paper explores this idea by leveraging relational information between image descriptors and textual words via co-clustering, in addition to information of image descriptors. Existing co-clustering methods are not optimal for this problem because they ignore the structure of image descriptors in the continuous space, which is crucial for capturing visual characteristics of images. We propose a novel Bayesian co-clustering model to jointly estimate the underlying distributions of the continuous image descriptors as well as the relationship between such distributions and the textual words through a unified Bayesian inference. Extensive experiments on image categorization and retrieval have validated the substantial value of the proposed joint modeling in improving visual dictionary learning, where our model shows superior performance over several recent methods.", "title": "" } ]
scidocsrr
9a6cf37a84603190818d14ce86bde4ed
A Knowledge-Intensive Model for Prepositional Phrase Attachment
[ { "docid": "3ac2f2916614a4e8f6afa1c31d9f704d", "text": "This paper shows that the accuracy of learned text classifiers can be improved by augmenting a small number of labeled training documents with a large pool of unlabeled documents. This is important because in many text classification problems obtaining training labels is expensive, while large quantities of unlabeled documents are readily available. We introduce an algorithm for learning from labeled and unlabeled documents based on the combination of Expectation-Maximization (EM) and a naive Bayes classifier. The algorithm first trains a classifier using the available labeled documents, and probabilistically labels the unlabeled documents. It then trains a new classifier using the labels for all the documents, and iterates to convergence. This basic EM procedure works well when the data conform to the generative assumptions of the model. However these assumptions are often violated in practice, and poor performance can result. We present two extensions to the algorithm that improve classification accuracy under these conditions: (1) a weighting factor to modulate the contribution of the unlabeled data, and (2) the use of multiple mixture components per class. Experimental results, obtained using text from three different real-world tasks, show that the use of unlabeled data reduces classification error by up to 30%.", "title": "" } ]
[ { "docid": "e342178b5c8ee8a48add15fefa0ef5f8", "text": "A new scheme is proposed for the dual-band operation of the Wilkinson power divider/combiner. The dual band operation is achieved by attaching two central transmission line stubs to the conventional Wilkinson divider. It has simple structure and is suitable for distributed circuit implementation.", "title": "" }, { "docid": "5e2b8d3ed227b71869550d739c61a297", "text": "Dairy cattle experience a remarkable shift in metabolism after calving, after which milk production typically increases so rapidly that feed intake alone cannot meet energy requirements (Bauman and Currie, 1980; Baird, 1982). Cows with a poor adaptive response to negative energy balance may develop hyperketonemia (ketosis) in early lactation. Cows that develop ketosis in early lactation lose milk yield and are at higher risk for other postpartum diseases and early removal from the herd.", "title": "" }, { "docid": "91c0bd1c3faabc260277c407b7c6af59", "text": "In this paper, we consider the Direct Perception approach for autonomous driving. Previous efforts in this field focused more on feature extraction of the road markings and other vehicles in the scene rather than on the autonomous driving algorithm and its performance under realistic assumptions. Our main contribution in this paper is introducing a new, more robust, and more realistic Direct Perception framework and corresponding algorithm for autonomous driving. First, we compare the top 3 Convolutional Neural Networks (CNN) models in the feature extraction competitions and test their performance for autonomous driving. The experimental results showed that GoogLeNet performs the best in this application. Subsequently, we propose a deep learning based algorithm for autonomous driving, and we refer to our algorithm as GoogLenet for Autonomous Driving (GLAD). Unlike previous efforts, GLAD makes no unrealistic assumptions about the autonomous vehicle or its surroundings, and it uses only five affordance parameters to control the vehicle as compared to the 14 parameters used by prior efforts. Our simulation results show that the proposed GLAD algorithm outperforms previous Direct Perception algorithms both on empty roads and while driving with other surrounding vehicles.", "title": "" }, { "docid": "bfdfac980d1629f85f5bd57705b11b19", "text": "Deduplication is an approach of avoiding storing data blocks with identical content, and has been shown to effectively reduce the disk space for storing multi-gigabyte virtual machine (VM) images. However, it remains challenging to deploy deduplication in a real system, such as a cloud platform, where VM images are regularly inserted and retrieved. We propose LiveDFS, a live deduplication file system that enables deduplication storage of VM images in an open-source cloud that is deployed under low-cost commodity hardware settings with limited memory footprints. LiveDFS has several distinct features, including spatial locality, prefetching of metadata, and journaling. LiveDFS is POSIXcompliant and is implemented as a Linux kernel-space file system. We deploy our LiveDFS prototype as a storage layer in a cloud platform based on OpenStack, and conduct extensive experiments. Compared to an ordinary file system without deduplication, we show that LiveDFS can save at least 40% of space for storing VM images, while achieving reasonable performance in importing and retrieving VM images. 
Our work justifies the feasibility of deploying LiveDFS in an open-source cloud.", "title": "" }, { "docid": "774690eaef2d293320df0c162f44af95", "text": "Having a long historical past in traditional Chinese medicine, Ganoderma Lucidum (G. Lucidum) is a type of mushroom believed to extend life and promote health. Due to the increasing consumption pattern, it has been cultivated and marketed intensively since the 1970s. It is claimed to be effective in the prevention and treatment of many diseases, and in addition, it exerts anticancer properties. Almost all the data on the benefits of G. Lucidum are based on laboratory and preclinical studies. The few clinical studies conducted are questionable. Nevertheless, when the findings obtained from laboratory studies are considered, it turns that G. Lucidum is likely to have some benefits for cancer patients. What is important at this point is to determine the components that will provide these benefits, and use them in drug development, after testing their reliability. In conclusion, it would be the right approach to abstain from using and incentivizing this product, until its benefits and harms are set out clearly, by considering its potential side effects.", "title": "" }, { "docid": "9563b47a73e41292599c368e1dfcd40a", "text": "Non-functional requirements are an important, and often critical, aspect of any software system. However, determining the degree to which any particular software system meets such requirements and incorporating such considerations into the software design process is a difficult challenge. This paper presents a modification of the NFR framework that allows for the discovery of a set of system functionalities that optimally satisfice a given set of non-functional requirements. This new technique introduces an adaptation of softgoal interdependency graphs, denoted softgoal interdependency ruleset graphs, in which label propagation can be done consistently. This facilitates the use of optimisation algorithms to determine the best set of bottom-level operationalizing softgoals that optimally satisfice the highest-level NFR softgoals. The proposed method also introduces the capacity to incorporate both qualitative and quantitative information.", "title": "" }, { "docid": "242977c8b2a5768b18fc276309407d60", "text": "We present a parser that relies primarily on extracting information directly from surface spans rather than on propagating information through enriched grammar structure. For example, instead of creating separate grammar symbols to mark the definiteness of an NP, our parser might instead capture the same information from the first word of the NP. Moving context out of the grammar and onto surface features can greatly simplify the structural component of the parser: because so many deep syntactic cues have surface reflexes, our system can still parse accurately with context-free backbones as minimal as Xbar grammars. Keeping the structural backbone simple and moving features to the surface also allows easy adaptation to new languages and even to new tasks. On the SPMRL 2013 multilingual constituency parsing shared task (Seddah et al., 2013), our system outperforms the top single parser system of Björkelund et al. (2013) on a range of languages. In addition, despite being designed for syntactic analysis, our system also achieves stateof-the-art numbers on the structural sentiment task of Socher et al. (2013). 
Finally, we show that, in both syntactic parsing and sentiment analysis, many broad linguistic trends can be captured via surface features.", "title": "" }, { "docid": "9180fe4fc7020bee9a52aa13de3adf54", "text": "A new Depth Image Layers Separation (DILS) algorithm for synthesizing inter-view images based on disparity depth map layers representation is presented. The approach is to separate the depth map into several layers identified through histogram-based clustering. Each layer is extracted using inter-view interpolation to create objects based on location and depth. DILS is a new paradigm in selecting interesting image locations based on depth, but also in producing new image representations that allow objects or parts of an image to be described without the need of segmentation and identification. The image view synthesis can reduce the configuration complexity of multi-camera arrays in 3D imagery and free-viewpoint applications. The simulation results show that depth layer separation is able to create inter-view images that may be integrated with other techniques such as occlusion handling processes. The DILS algorithm can be implemented using both simple as well as sophisticated stereo matching methods to synthesize inter-view images.", "title": "" }, { "docid": "9e9c55a81d6fe980515c9b93dfe0d265", "text": "Single-cell RNA-seq has become routine for discovering cell types and revealing cellular diversity, but archived human brain samples still pose a challenge to current high-throughput platforms. We present STRT-seq-2i, an addressable 9600-microwell array platform, combining sampling by limiting dilution or FACS, with imaging and high throughput at competitive cost. We applied the platform to fresh single mouse cortical cells and to frozen post-mortem human cortical nuclei, matching the performance of a previous lower-throughput platform while retaining a high degree of flexibility, potentially also for other high-throughput applications.", "title": "" }, { "docid": "4f509a4fdc6bbffa45c214bc9267ea79", "text": "Memory units have been widely used to enrich the capabilities of deep networks on capturing long-term dependencies in reasoning and prediction tasks, but little investigation exists on deep generative models (DGMs) which are good at inferring high-level invariant representations from unlabeled data. This paper presents a deep generative model with a possibly large external memory and an attention mechanism to capture the local detail information that is often lost in the bottom-up abstraction process in representation learning. By adopting a smooth attention model, the whole network is trained end-to-end by optimizing a variational bound of data likelihood via auto-encoding variational Bayesian methods, where an asymmetric recognition network is learnt jointly to infer high-level invariant representations. The asymmetric architecture can reduce the competition between bottom-up invariant feature extraction and top-down generation of instance details. Our experiments on several datasets demonstrate that memory can significantly boost the performance of DGMs on various tasks, including density estimation, image generation, and missing value imputation, and DGMs with memory can achieve state-ofthe-art quantitative results.", "title": "" }, { "docid": "67d8680a41939c58a866f684caa514a3", "text": "Triboelectric effect works on the principle of triboelectrification and electrostatic induction. 
This principle is used to generate voltage by converting mechanical energy into electrical energy. This paper presents the charging behavior of different capacitors by rubbing of two different materials using mechanical motion. The numerical and simulation modeling describes the charging performance of a TENG with a bridge rectifier. It is also demonstrated that a 10 μF capacitor can be charged to a maximum of 24.04 volts in 300 seconds and that it provides a maximum energy density of 2800 μJ/cm3. Such a system can be used for ultralow power electronic devices, biomedical devices and self-powered appliances etc.", "title": "" }, { "docid": "6e016c311f77963be6ba7ec6e29a44f0", "text": "Unmanned Aerial Vehicles (UAVs) have been recently considered as means to provide enhanced coverage or relaying services to mobile users (MUs) in wireless systems with limited or no infrastructure. In this paper, a UAV-based mobile cloud computing system is studied in which a moving UAV is endowed with computing capabilities to offer computation offloading opportunities to MUs with limited local processing capabilities. The system aims at minimizing the total mobile energy consumption while satisfying quality of service requirements of the offloaded mobile application. Offloading is enabled by uplink and downlink communications between the mobile devices and the UAV that take place by means of frequency division duplex (FDD) via orthogonal or non-orthogonal multiple access (NOMA) schemes. The problem of jointly optimizing the bit allocation for uplink and downlink communication as well as for computing at the UAV, along with the cloudlet’s trajectory under latency and UAV’s energy budget constraints is formulated and addressed by leveraging successive convex approximation (SCA) strategies. Numerical results demonstrate the significant energy savings that can be accrued by means of the proposed joint optimization of bit allocation and cloudlet’s trajectory as compared to local mobile execution as well as to partial optimization approaches that design only the bit allocation or the cloudlet’s trajectory.", "title": "" }, { "docid": "df9d74df931a596b7025150d11a18364", "text": "In recent years, ''gamification'' has been proposed as a solution for engaging people in individually and socially sustainable behaviors, such as exercise, sustainable consumption, and education. This paper studies demographic differences in perceived benefits from gamification in the context of exercise. On the basis of data gathered via an online survey (N = 195) from an exercise gamification service Fitocracy, we examine the effects of gender, age, and time using the service on social, hedonic, and utilitarian benefits and facilitating features of gamifying exercise. The results indicate that perceived enjoyment and usefulness of the gamification decline with use, suggesting that users might experience novelty effects from the service. The findings show that women report greater social benefits from the use of gamification. Further, ease of use of gamification is shown to decline with age. The implications of the findings are discussed. The question of how we understand gamer demographics and gaming behaviors, along with use cultures of different demographic groups, has loomed over the last decade as games became one of the main veins of entertainment and consumer culture (Yi, 2004). The deeply established perception of games being a field of entertainment dominated by young males has been challenged.
Nowadays, digital gaming is a mainstream activity with broad demographics. The gender divide has been diminishing, the age span has been widening, and the average age is higher than before. An illustrative study commissioned by PopCap (Information Solutions Group, 2011) reveals that it is actually women in their 30s and 40s who play the popular social games on social networking services the most – outplaying men and younger people. It is clear that age and gender perspectives on gaming activities and motivations require further scrutiny. The expansion of the game industry and the increased competition within the field has also led to two parallel developments: (1) using game design as marketing (Hamari & Lehdonvirta, 2010) and (2) gamification – going beyond what traditionally are regarded as games and implementing game design there often for the benefit of users. For example, services such as Mindbloom, Fitocracy, Zombies, Run!, and Nike+ are aimed at assisting the user toward beneficial behavior related to lifestyle and health choices. However, it is unclear whether we can see age and gender discrepancies in use of gamified services similar to those in other digital gaming contexts. The main difference between games and gamification is that gamification is commonly …", "title": "" }, { "docid": "5dde43ab080f516c0b485fcd951bf9e1", "text": "Differential privacy is a framework to quantify to what extent individual privacy in a statistical database is preserved while releasing useful aggregate information about the database. In this paper, within the classes of mechanisms oblivious of the database and the queries beyond the global sensitivity, we characterize the fundamental tradeoff between privacy and utility in differential privacy, and derive the optimal ϵ-differentially private mechanism for a single real-valued query function under a very general utility-maximization (or cost-minimization) framework. The class of noise probability distributions in the optimal mechanism has staircase-shaped probability density functions which are symmetric (around the origin), monotonically decreasing and geometrically decaying. The staircase mechanism can be viewed as a geometric mixture of uniform probability distributions, providing a simple algorithmic description for the mechanism. Furthermore, the staircase mechanism naturally generalizes to discrete query output settings as well as more abstract settings. We explicitly derive the parameter of the optimal staircase mechanism for ℓ1 and ℓ2 cost functions. Comparing the optimal performances with those of the usual Laplacian mechanism, we show that in the high privacy regime (ϵ is small), the Laplacian mechanism is asymptotically optimal as ϵ → 0; in the low privacy regime (ϵ is large), the minimum magnitude and second moment of noise are Θ(Δe^(-ϵ/2)) and Θ(Δ^2 e^(-2ϵ/3)) as ϵ → +∞, respectively, while the corresponding figures when using the Laplacian mechanism are Δ/ϵ and 2Δ^2/ϵ^2, where Δ is the sensitivity of the query function. We conclude that the gains of the staircase mechanism are more pronounced in the moderate-low privacy regime.", "title": "" }, { "docid": "a7bbf188c7219ff48af391a5f8b140b8", "text": "The paper presents the results of studies concerning the designation of COD fraction in raw wastewater. The research was conducted in three mechanical-biological sewage treatment plants. The results were compared with data assumed in the ASM models.
During the investigation, the following fractions of COD were determined: dissolved non-biodegradable SI, dissolved easily biodegradable SS, in organic suspension slowly degradable XS, and in organic suspension non-biodegradable XI. The methodology for determining the COD fractions was based on the ATV-A 131 guidelines. The real concentration of fractions in raw wastewater and the percentage of each fraction in total COD are different from data reported in the literature.", "title": "" }, { "docid": "e56276ed066369ffce7fe882dfde70f8", "text": "In this paper we present a deep learning architecture for extracting word embeddings for visual speech recognition. The embeddings summarize the information of the mouth region that is relevant to the problem of word recognition, while suppressing other types of variability such as speaker, pose and illumination. The system is comprised of a spatiotemporal convolutional layer, a Residual Network and bidirectional LSTMs and is trained on the Lipreading in-the-wild database. We first show that the proposed architecture goes beyond state-of-the-art on closed-set word identification, by attaining an 11.92% error rate on a vocabulary of 500 words. We then examine the capacity of the embeddings in modelling words unseen during training. We deploy Probabilistic Linear Discriminant Analysis (PLDA) to model the embeddings and perform low-shot learning experiments on words unseen during training. The experiments demonstrate that word-level visual speech recognition is feasible even in cases where the target words are not included in the training set.", "title": "" }, { "docid": "d5758c68110a604c7af4a68faba32d1d", "text": "Two experiments explore the validity of conceptualizing musical beats as auditory structural features and the potential for increases in tempo to lead to greater sympathetic arousal, measured using skin conductance. In the first experiment, fast- and slow-paced rock and classical music excerpts were compared to silence. As expected, skin conductance response (SCR) frequency was greater during music processing than during silence. Skin conductance level (SCL) data showed that fast-paced music elicits greater activation than slow-paced music. Genre significantly interacted with tempo in SCR frequency, with faster tempo increasing activation for classical music while decreasing it for rock music. A second experiment was conducted to explore the possibility that the presumed familiarity of the genre led to this interaction. Although further evidence was found for conceptualizing musical beat onsets as auditory structure, the familiarity explanation was not supported. Music communicates many different types of messages through the combination of sound and lyric (Sellnow & Sellnow, 2001). For example, music can be used to exchange political information (e.g., Frith, 1981; Stewart, Smith, & Denton, 1989). Music can also establish and portray a self- or group-image (Arnett, 1991, 1992; Dehyle, 1998; Kendall & Carterette, 1990; Dillman Carpentier, Knobloch & Zillmann, 2003; Manuel, 1991; McLeod, 1999; see also Hansen & Hansen, 2000). Pertinent to this investigation, music can communicate emotional information (e.g., Juslin & Sloboda, 2001). In short, music is a form of “interhuman communication in which humanly organized, non-verbal sound is perceived as vehiculating primarily affective (emotional) and/or gestural (corporeal) patterns of cognition” (Tagg, 2002, p. 5).
This idea of music as communication reaches the likes of audio production students, who are taught the concept of musical underscoring, or adding music to “enhance information or emotional content” in a wide variety of ways from establishing a specific locale to intensifying action (Alten, 2005, p. 360). In this realm, music becomes a key instrument in augmenting or punctuating a given message. Given the importance of arousal and/or activation in most theories of persuasion and information processing, an understanding of how music can be harnessed to instill arousal is arguably of benefit to media producers looking to utilize every possible tool when creating messages, whether the messages are commercial appeals, promotional announcements or disease-prevention messages. It is with the motivation of harnessing the psychological response to music for practical application that two experiments were conducted to test whether message creators can rely on musical tempo as a way to increase sympathetic nervous system activation in a manner similar to other structural features of media (i.e., cuts, edits, sound effects, voice changes). Before explaining the original work, a brief description of the current state of the literature on music and emotion is offered. Different Approaches in Music Psychology Although there is little doubt that music ‘vehiculates’ emotion, several debates exist within the music psychology literature about exactly how that process is best conceptualized and empirically approached (e.g., Bever, 1988; Gaver & Mandler, 1987; Juslin & Sloboda, 2001; Lundin, 1985; Sloboda, 1991). The primary conceptual issue revolves around two different schools of thought (Scherer & Zentner, 2001). The first, the cognitivist approach, describes emotional response to music as resulting from the listener’s cognitive recognition of cues within the composition itself. Emotivists, on the other hand, eliminate the cognitive calculus required by cue recognition in the score, instead describing emotional response to music as a feeling of emotion. Although both approaches acknowledge a cultural or social influence in how the music is interpreted (e.g., Krumhansl, 1997; Peretz, 2001), the conceptual chasm between emotion as being either expressed or elicited by a piece of music is wide indeed. A second issue in the area of music psychology concerns a difference in the empirical approach present among emotion scholars writ large. Some focus their explorations on specific, discrete affective states (i.e., joy, fear, disgust, etc.), often labeled as the experience of basic emotions (Ortony et al., 1988; Thayer, 1989; Zajonc, 1980). Communication scholars such as Nabi (1999, 2003) and Newhagen (1998) have also found it fruitful to explore unique affective states resulting from mediated messages, driven by the understanding that “each emotion expresses a different relational meaning that motivates the use of mental and/or physical resources in ways consistent with the emotion’s action tendency” (Nabi, 2003, p. 226; also see Wirth & Schramm, 2005 for review). This approach is also well represented by studies exploring human reactions to music (see Juslin & Laukka, 2003 for review). Other emotion scholars design studies where the focus is placed not on the discrete identifier assigned to a certain feeling-state by a listener, but rather the extent to which different feeling-states share common factors or dimensions.
The two most commonly studied dimensions are valence—a term given to the relative positive/negative hedonic value, and arousal—the intensity or level to which that hedonic value is experienced. The centrality of these two dimensions in the published literature is due to the consistency with which they account for the largest amount of predictive variance across a wide variety of dependent variables (Osgood, Suci & Tannenbaum, 1957; Bradley, 1994; Reisenzein, 1994). This dimensional approach to emotional experience is well-represented by articles in the communication literature exploring the combined impact of valence and arousal on memory (Lang, Bolls, Potter & Kawahara, 1999; Sundar & Kalyanaraman, 2004), liking (Yoon, Bolls, & Lang, 1998), and persuasive appeal (Yoon et al., 1998; Potter, LaTour, Braun-LaTour & Reichert, 2006). When surveying the music psychology literature for studies utilizing the dimensional emotions approach, however, results show that the impact of music on hedonic valence is difficult to consistently predict—arguably due to contextual, experiential or mood-state influences of the listener combined with interpretational differences of the song composers and performers (Bigand, Filipic, & Lalitte, 2005; Cantor & Zillmann, 1973; Gabrielsson & Lindström, 2001; Kendall & Carterette, 1990; Leman, 2003; Lundin, 1985).
However, no significant between-group physiological differences were found. Rickard (2004) also combined self-reports of emotional impact, enjoyment, and familiarity with psychophysiological measures in evaluating arousal effects of music. Psychophysiological measures included skin conductance responses, chills, skin temperature, and muscle tension. Stimuli included relaxing music, music predetermined to be arousing but not emotionally powerful, self-selected emotionally-powerful music, and an emotionally-powerful film scene. Rickard found that music that participants had self-identified as emotionally powerful led to the greatest increases in skin conductance and chills, in addition to higher ratings on the self-reported measures. No correlation was found between these effects and participant gender or musical training. Krumhansl (1997) explored how music affects the peripheral nervous system in eliciting emotions in college-aged music students. Classical music selections approximately 180 seconds long were chosen which expressed sadness, happiness or fear. While listening, ha
In practice, however, the syntactic structure is computed by automatic parsers which are far from perfect and not tuned to the specifics of the task. Current recursive neural network (RNN) approaches for computing sentence meaning therefore run into a number of practical difficulties, including the need to carefully select a parser appropriate for the task, deciding how and to what extent syntactic context modifies the semantic composition function, as well as how to transform parse trees to conform to the branching settings (typically, binary branching) of the RNN. This paper introduces a new model, the Forest Convolutional Network, that avoids all of these challenges by taking a parse forest as input, rather than a single tree, and by allowing arbitrary branching factors. We report improvements over the state-of-the-art in sentiment analysis and question classification.", "title": "" } ]
scidocsrr
6482a8af53ac20d4bd6148d63200ed3c
Design a novel electronic medical record system for regional clinics and health centers in China
[ { "docid": "8ae8cb422f0f79031b8e19e49b857356", "text": "CSCW as a field has been concerned since its early days with healthcare, studying how healthcare work is collaboratively and practically achieved and designing systems to support that work. Reviewing literature from the CSCW Journal and related conferences where CSCW work is published, we reflect on the contributions that have emerged from this work. The analysis illustrates a rich range of concepts and findings towards understanding the work of healthcare but the work on the larger policy level is lacking. We argue that this presents a number of challenges for CSCW research moving forward: in having a greater impact on larger-scale health IT projects; broadening the scope of settings and perspectives that are studied; and reflecting on the relevance of the traditional methods in this field - namely workplace studies - to meet these challenges.", "title": "" } ]
[ { "docid": "94784bc9f04dbe5b83c2a9f02e005825", "text": "The optical code division multiple access (OCDMA), the most advanced multiple access technology in optical communication has become significant and gaining popularity because of its asynchronous access capability, faster speed, efficiency, security and unlimited bandwidth. Many codes are developed in spectral amplitude coding optical code division multiple access (SAC-OCDMA) with zero or minimum cross-correlation properties to reduce the multiple access interference (MAI) and Phase Induced Intensity Noise (PIIN). This paper compares two novel SAC-OCDMA codes in terms of their performances such as bit error rate (BER), number of active users that is accommodated with minimum cross-correlation property, high data rate that is achievable and the minimum power that the OCDMA system supports to achieve a minimum BER value. One of the proposed novel codes referred in this work as modified random diagonal code (MRDC) possesses cross-correlation between zero to one and the second novel code referred in this work as modified new zero cross-correlation code (MNZCC) possesses cross-correlation zero to further minimize the multiple access interference, which are found to be more scalable compared to the other existing SAC-OCDMA codes. In this work, the proposed MRDC and MNZCC codes are implemented in an optical system using the optisystem version-12 software for the SAC-OCDMA scheme. Simulation results depict that the OCDMA system based on the proposed novel MNZCC code exhibits better performance compared to the MRDC code and former existing SAC-OCDMA codes. The proposed MNZCC code accommodates maximum number of simultaneous users with higher data rate transmission, lower BER and longer traveling distance without any signal quality degradation as compared to the former existing SAC-OCDMA codes.", "title": "" }, { "docid": "3f88da8f70976c11bf5bab5f1d438d58", "text": "The task of the Emotion Recognition in the Wild (EmotiW) Challenge is to assign one of seven emotions to short video clips extracted from Hollywood style movies. The videos depict acted-out emotions under realistic conditions with a large degree of variation in attributes such as pose and illumination, making it worthwhile to explore approaches which consider combinations of features from multiple modalities for label assignment. In this paper we present our approach to learning several specialist models using deep learning techniques, each focusing on one modality. Among these are a convolutional neural network, focusing on capturing visual information in detected faces, a deep belief net focusing on the representation of the audio stream, a K-Means based “bag-of-mouths” model, which extracts visual features around the mouth region and a relational autoencoder, which addresses spatio-temporal aspects of videos. We explore multiple methods for the combination of cues from these modalities into one common classifier. This achieves a considerably greater accuracy than predictions from our strongest single-modality classifier. Our method was the winning submission in the 2013 EmotiW challenge and achieved a test set accuracy of 47.67 % on the 2014 dataset.", "title": "" }, { "docid": "57fd4b59ffb27c35faa6a5ee80001756", "text": "This paper describes a novel method for motion generation and reactive collision avoidance. 
The algorithm performs arbitrary desired velocity profiles in absence of external disturbances and reacts if virtual or physical contact is made in a unified fashion with a clear physically interpretable behavior. The method uses physical analogies for defining attractor dynamics in order to generate smooth paths even in presence of virtual and physical objects. The proposed algorithm can, due to its low complexity, run in the inner most control loop of the robot, which is absolutely crucial for safe Human Robot Interaction. The method is thought as the locally reactive real-time motion generator connecting control, collision detection and reaction, and global path planning.", "title": "" }, { "docid": "0923e899e5d7091a6da240db21eefad2", "text": "A new method was developed to acquire images automatically at a series of specimen tilts, as required for tomographic reconstruction. The method uses changes in specimen position at previous tilt angles to predict the position at the current tilt angle. Actual measurement of the position or focus is skipped if the statistical error of the prediction is low enough. This method allows a tilt series to be acquired rapidly when conditions are good but falls back toward the traditional approach of taking focusing and tracking images when necessary. The method has been implemented in a program, SerialEM, that provides an efficient environment for data acquisition. This program includes control of an energy filter as well as a low-dose imaging mode, in which tracking and focusing occur away from the area of interest. The program can automatically acquire a montage of overlapping frames, allowing tomography of areas larger than the field of the CCD camera. It also includes tools for navigating between specimen positions and finding regions of interest.", "title": "" }, { "docid": "ccfb2821c51a2fad5b34c3037497cb66", "text": "The next decade will see a deep transformation of industrial applications by big data analytics, machine learning and the internet of things. Industrial applications have a number of unique features, setting them apart from other domains. Central for many industrial applications in the internet of things is time series data generated by often hundreds or thousands of sensors at a high rate, e.g. by a turbine or a smart grid. In a first wave of applications this data is centrally collected and analyzed in Map-Reduce or streaming systems for condition monitoring, root cause analysis, or predictive maintenance. The next step is to shift from centralized analysis to distributed in-field or in situ analytics, e.g in smart cities or smart grids. The final step will be a distributed, partially autonomous decision making and learning in massively distributed environments. In this talk I will give an overview on Siemens’ journey through this transformation, highlight early successes, products and prototypes and point out future challenges on the way towards machine intelligence. I will also discuss architectural challenges for such systems from a Big Data point of view. Bio.Michael May is Head of the Technology Field Business Analytics & Monitoring at Siemens Corporate Technology, Munich, and responsible for eleven research groups in Europe, US, and Asia. Michael is driving research at Siemens in data analytics, machine learning and big data architectures. In the last two years he was responsible for creating the Sinalytics platform for Big Data applications across Siemens’ business. 
Before joining Siemens in 2013, Michael was Head of the Knowledge Discovery Department at the Fraunhofer Institute for Intelligent Analysis and Information Systems in Bonn, Germany. In cooperation with industry he developed Big Data Analytics applications in sectors ranging from telecommunication, automotive, and retail to finance and advertising. Between 2002 and 2009 Michael coordinated two Europe-wide Data Mining Research Networks (KDNet, KDubiq). He was local chair of ICML 2005, ILP 2005 and program chair of the ECML PKDD Industrial Track 2015. Michael did his PhD on machine discovery of causal relationships at the Graduate Programme for Cognitive Science at the University of Hamburg. Machine Learning Challenges at Amazon", "title": "" }, { "docid": "d07a10da23e0fc18b473f8a30adaebfb", "text": "DATA FLOW IS A POPULAR COMPUTATIONAL MODEL for visual programming languages. Data flow provides a view of computation which shows the data flowing from one filter function to another, being transformed as it goes. In addition, the data flow model easily accomodates the insertion of viewing monitors at various points to show the data to the user. Consequently, many recent visual programming languages are based on the data flow model. This paper describes many of the data flow visual programming languages. The languages are grouped according to their application domain. For each language, pertinent aspects of its appearance, and the particular design alternatives it uses, are discussed. Next, some strengths of data flow visual programming languages are mentioned. Finally, unsolved problems in the design of such languages are discussed.", "title": "" }, { "docid": "89263084f29469d1c363da55c600a971", "text": "Today when there are more than 1 billion Android users all over the world, it shows that its popularity has no equal. These days mobile phones have become so intrusive in our daily lives that when they needed can give huge amount of information to forensic examiners. Till the date of writing this paper there are many papers citing the need of mobile device forensic and ways of getting the vital artifacts through mobile devices for different purposes. With vast options of popular and less popular forensic tools and techniques available today, this papers aims to bring them together under a comparative study so that this paper could serve as a starting point for several android users, future forensic examiners and investigators. During our survey we found scarcity for papers on tools for android forensic. In this paper we have analyzed different tools and techniques used in android forensic and at the end tabulated the results and findings.", "title": "" }, { "docid": "762855af09c1f80ec85d6de63223bc53", "text": "In this paper, we propose a framework for isolating text regions from natural scene images. The main algorithm has two functions: it generates text region candidates, and it verifies of the label of the candidates (text or non-text). The text region candidates are generated through a modified K-means clustering algorithm, which references texture features, edge information and color information. The candidate labels are then verified in a global sense by the Markov Random Field model where collinearity weight is added as long as most texts are aligned. 
The proposed method achieves reasonable accuracy for text extraction from moderately difficult examples from the ICDAR 2003 database.", "title": "" }, { "docid": "8e3f8fca93ca3106b83cf85d20c061ca", "text": "KeeLoq is a 528-round lightweight block cipher which has a 64-bit secret key and a 32-bit block length. The cube attack, proposed by Dinur and Shamir, is a new type of attacking method. In this paper, we investigate the security of KeeLoq against the iterative side-channel cube attack, which is an enhanced attack scheme. Based on the structure of typical block ciphers, we give the model of the iterative side-channel cube attack. Using the traditional single-bit leakage model, we assume that the attacker can exactly possess the information of a one-bit leakage after round 23. The new attack model costs a data complexity of 2^11.00 chosen plaintexts to recover the 23-bit key of KeeLoq. Our attack will reduce the key searching space to 2^41 by considering an error-free bit from internal states.", "title": "" }, { "docid": "852c85ecbed639ea0bfe439f69fff337", "text": "In information theory, Fisher information and Shannon information (entropy) are respectively used to quantify the uncertainty associated with the distribution modeling and the uncertainty in specifying the outcome of given variables. These two quantities are complementary and are jointly applied to information behavior analysis in most cases. The uncertainty property in information asserts a fundamental trade-off between Fisher information and Shannon information, which sheds light on the relationship between the encoder and the decoder in variational auto-encoders (VAEs). In this paper, we investigate VAEs in the Fisher-Shannon plane, and demonstrate that the representation learning and the log-likelihood estimation are intrinsically related to these two information quantities. Through extensive qualitative and quantitative experiments, we provide a better comprehension of VAEs in tasks such as high-resolution reconstruction, and representation learning in the perspective of Fisher information and Shannon information. We further propose a variant of VAEs, termed the Fisher auto-encoder (FAE), for practical needs to balance Fisher information and Shannon information. Our experimental results have demonstrated its promise in improving the reconstruction accuracy and avoiding the non-informative latent code that occurred in previous works.", "title": "" }, { "docid": "838701b64b27fe1d65bd23a124ebcef7", "text": "OBJECTIVES\nThe Internet can accelerate information exchange. Social networks are the most accessed, especially Facebook. This kind of network might create dependency, with several negative consequences in people's lives. The aim of this study was to assess the potential association between Facebook dependence and poor sleep quality.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nA cross-sectional study was performed enrolling undergraduate students of the Universidad Peruana de Ciencias Aplicadas, Lima, Peru. The Internet Addiction Questionnaire, adapted to the Facebook case, and the Pittsburgh Sleep Quality Index, were used. A global score of 6 or greater was defined as the cutoff to determine poor sleep quality. Generalized linear models were used to determine prevalence ratios (PR) and 95% confidence intervals (95%CI). A total of 418 students were analyzed; of them, 322 (77.0%) were women, with a mean age of 20.1 (SD: 2.5) years.
Facebook dependence was found in 8.6% (95% CI: 5.9%-11.3%), whereas poor sleep quality was present in 55.0% (95% CI: 50.2%-59.8%). A significant association between Facebook dependence and poor sleep quality mainly explained by daytime dysfunction was found (PR = 1.31; IC95%: 1.04-1.67) after adjusting for age, sex and years in the faculty.\n\n\nCONCLUSIONS\nThere is a relationship between Facebook dependence and poor quality of sleep. More than half of students reported poor sleep quality. Strategies to moderate the use of this social network and to improve sleep quality in this population are needed.", "title": "" }, { "docid": "deed8b565b77f92d91170c001b512e96", "text": "We introduce a novel humanoid robotic platform designed to jointly address three central goals of humanoid robotics: 1) study the role of morphology in biped locomotion; 2) study full-body compliant physical human-robot interaction; 3) be robust while easy and fast to duplicate to facilitate experimentation. The taken approach relies on functional modeling of certain aspects of human morphology, optimizing materials and geometry, as well as on the use of 3D printing techniques. In this article, we focus on the presentation of the design of specific morphological parts related to biped locomotion: the hip, the thigh, the limb mesh and the knee. We present initial experiments showing properties of the robot when walking with the physical guidance of a human.", "title": "" }, { "docid": "122fe53f1e745480837a23b68e62540a", "text": "The images degraded by fog suffer from poor contrast. In order to remove fog effect, a Contrast Limited Adaptive Histogram Equalization (CLAHE)-based method is presented in this paper. This method establishes a maximum value to clip the histogram and redistributes the clipped pixels equally to each gray-level. It can limit the noise while enhancing the image contrast. In our method, firstly, the original image is converted from RGB to HSI. Secondly, the intensity component of the HSI image is processed by CLAHE. Finally, the HSI image is converted back to RGB image. To evaluate the effectiveness of the proposed method, we experiment with a color image degraded by fog and apply the edge detection to the image. The results show that our method is effective in comparison with traditional methods. KeywordsCLAHE, fog, degraded, remove, color image, HSI, edge detection.", "title": "" }, { "docid": "f060713abe9ada73c1c4521c5ca48ea9", "text": "In this paper, we revisit the classical Bayesian face recognition method by Baback Moghaddam et al. and propose a new joint formulation. The classical Bayesian method models the appearance difference between two faces. We observe that this “difference” formulation may reduce the separability between classes. Instead, we model two faces jointly with an appropriate prior on the face representation. Our joint formulation leads to an EM-like model learning at the training time and an efficient, closed-formed computation at the test time. On extensive experimental evaluations, our method is superior to the classical Bayesian face and many other supervised approaches. Our method achieved 92.4% test accuracy on the challenging Labeled Face in Wild (LFW) dataset. Comparing with current best commercial system, we reduced the error rate by 10%.", "title": "" }, { "docid": "391949a4c924c9f8e1986e4747e571c4", "text": "In this paper, we present Auto-Tuned Models, or ATM, a distributed, collaborative, scalable system for automated machine learning. 
Users of ATM can simply upload a dataset, choose a subset of modeling methods, and choose to use ATM's hybrid Bayesian and multi-armed bandit optimization system. The distributed system works in a load-balanced fashion to quickly deliver results in the form of ready-to-predict models, confusion matrices, cross-validation results, and training timings. By automating hyperparameter tuning and model selection, ATM returns the emphasis of the machine learning workflow to its most irreducible part: feature engineering. We demonstrate the usefulness of ATM on 420 datasets from OpenML and train over 3 million classifiers. Our initial results show ATM can beat human-generated solutions for 30% of the datasets, and can do so in 1/100th of the time.", "title": "" }, { "docid": "861f76c061b9eb52ed5033bdeb9a3ce5", "text": "2007S. Robson Walton Chair in Accounting, University of Arkansas 2007-2014; 2015-2016 Accounting Department Chair, University of Arkansas 2014Distinguished Professor, University of Arkansas 2005-2014 Professor, University of Arkansas 2005-2008 Ralph L. McQueen Chair in Accounting, University of Arkansas 2002-2005 Associate Professor, University of Kansas 1997-2002 Assistant Professor, University of Kansas", "title": "" }, { "docid": "76984b82e44f5790aa72f03f3804c588", "text": "LANGUAGE ASSISTANT (NLA), a web-based natural language dialog system to help users find relevant products on electronic-commerce sites. The system brings together technologies in natural language processing and human-computer interaction to create a faster and more intuitive way of interacting with web sites. By combining statistical parsing techniques with traditional AI rule-based technology, we have created a dialog system that accommodates both customer needs and business requirements. The system is currently embedded in an application for recommending laptops and was deployed as a pilot on IBM’s web site.", "title": "" }, { "docid": "ae9bc4e21d6e2524f09e5f5fbb9e4251", "text": "Arvaniti, Ladd and Mennen (1998) reported a phenomenon of ‘segmental anchoring’: the beginning and end of a linguistically significant pitch movement are anchored to specific locations in segmental structure, which means that the slope and duration of the pitch movement vary according to the segmental material with which it is associated. This finding has since been replicated and extended in several languages. One possible analysis is that autosegmental tones corresponding to the beginning and end of the pitch movement show secondary association with points in structure; however, problems with this analysis have led some authors to cast doubt on the ‘hypothesis’ of segmental anchoring. I argue here that segmental anchoring is not a hypothesis expressed in terms of autosegmental phonology, but rather an empirical phonetic finding. The difficulty of describing segmental anchoring as secondary association does not disprove the ‘hypothesis’, but shows the error of using a symbolic phonological device (secondary association) to represent gradient differences of phonetic detail that should be expressed quantitatively. 
I propose that treating pitch movements as gestures (in the sense of Articulatory Phonology) goes some way to resolving some of the theoretical questions raised by segmental anchoring, but suggest that pitch gestures have a variety of ‘domains’ which are in need of empirical study before we can successfully integrate segmental anchoring into our understanding of speech production.", "title": "" }, { "docid": "8eb62d4fdc1be402cd9216352cb7cfc3", "text": "In an attempt to better understand generalization in deep learning, we study several possible explanations. We show that implicit regularization induced by the optimization method is playing a key role in generalization and success of deep learning models. Motivated by this view, we study how different complexity measures can ensure generalization and explain how optimization algorithms can implicitly regularize complexity measures. We empirically investigate the ability of these measures to explain different observed phenomena in deep learning. We further study the invariances in neural networks, suggest complexity measures and optimization algorithms that have similar invariances to those in neural networks and evaluate them on a number of learning tasks. Thesis Advisor: Nathan Srebro Title: Professor", "title": "" } ]
scidocsrr
35f4230303b83f4c900b204e08f2b72b
SERS detection of arsenic in water: A review.
[ { "docid": "908f862dea52cd9341d2127928baa7de", "text": "Arsenic's history in science, medicine and technology has been overshadowed by its notoriety as a poison in homicides. Arsenic is viewed as being synonymous with toxicity. Dangerous arsenic concentrations in natural waters are now a worldwide problem and are often referred to as a 20th-21st century calamity. High arsenic concentrations have been reported recently from the USA, China, Chile, Bangladesh, Taiwan, Mexico, Argentina, Poland, Canada, Hungary, Japan and India. Among 21 countries in different parts of the world affected by groundwater arsenic contamination, the largest population at risk is in Bangladesh followed by West Bengal in India. Existing overviews of arsenic removal include technologies that have traditionally been used (oxidation, precipitation/coagulation/membrane separation) with far less attention paid to adsorption. No previous review is available where readers can get an overview of the sorption capacities of both available and developed sorbents used for arsenic remediation together with the traditional remediation methods. We have incorporated most of the valuable available literature on arsenic remediation by adsorption (approximately 600 references). Existing purification methods for drinking water, wastewater and industrial effluents, and technological solutions for arsenic, have been listed. Arsenic sorption by commercially available carbons and other low-cost adsorbents is surveyed and critically reviewed and their sorption efficiencies are compared. Arsenic adsorption behavior in the presence of other impurities has been discussed. Some commercially available adsorbents are also surveyed. An extensive table summarizes the sorption capacities of various adsorbents. Some low-cost adsorbents come out to be superior, including treated slags, carbons developed from agricultural waste (char carbons and coconut husk carbons), biosorbents (immobilized biomass, orange juice residue), goethite and some commercial adsorbents, which include resins, gels, silica, and treated silica tested for arsenic removal. Immobilized biomass adsorbents offered outstanding performances. Desorption of arsenic followed by regeneration of sorbents has been discussed. Strong acids and bases seem to be the best desorbing agents to produce arsenic concentrates. Treatment and disposal of the arsenic concentrates obtained is briefly addressed. This issue is very important but much less discussed.", "title": "" } ]
[ { "docid": "62e900f89427e4b97f64919a3cb0d537", "text": "This paper introduces the SpamBayes classification engine and outlines the most important features and techniques which contribute to its success. The importance of using the indeterminate ‘unsure’ classification produced by the chi-squared combining technique is explained. It outlines a Robinson/Woodhead/Peters technique of ‘tiling’ unigrams and bigrams to produce better results than relying solely on either or other methods of using both unigrams and bigrams. It discusses methods of training the classifier, and evaluates the success of different methods. The paper focuses on highlighting techniques that might aid other classification systems rather than attempting to demonstrate the effectiveness of the SpamBayes classification engine.", "title": "" }, { "docid": "9a9fd442bc7353d9cd202e9ace6e6580", "text": "The idea of developmental dyspraxia has been discussed in the research literature for almost 100 years. However, there continues to be a lack of consensus regarding both the definition and description of this disorder. This paper presents a neuropsychologically based operational definition of developmental dyspraxia that emphasizes that developmental dyspraxia is a disorder of gesture. Research that has investigated the development of praxis is discussed. Further, different types of gestural disorders displayed by children and different mechanisms that underlie developmental dyspraxia are compared to and contrasted with adult acquired apraxia. The impact of perceptual-motor, language, and cognitive impairments on children's gestural development and the possible associations between these developmental disorders and developmental dyspraxia are also examined. Also, the relationship among limb, orofacial, and verbal dyspraxia is discussed. Finally, problems that exist in the neuropsychological assessment of developmental dyspraxia are discussed and recommendations concerning what should be included in such an assessment are presented.", "title": "" }, { "docid": "a5306ca9a50e82e07d487d1ac7603074", "text": "Many modern visual recognition algorithms incorporate a step of spatial ‘pooling’, where the outputs of several nearby feature detectors are combined into a local or global ‘bag of features’, in a way that preserves task-related information while removing irrelevant details. Pooling is used to achieve invariance to image transformations, more compact representations, and better robustness to noise and clutter. Several papers have shown that the details of the pooling operation can greatly influence the performance, but studies have so far been purely empirical. In this paper, we show that the reasons underlying the performance of various pooling methods are obscured by several confounding factors, such as the link between the sample cardinality in a spatial pool and the resolution at which low-level features have been extracted. We provide a detailed theoretical analysis of max pooling and average pooling, and give extensive empirical comparisons for object recognition tasks.", "title": "" }, { "docid": "842e7c5b825669855617133b0067efc9", "text": "This research proposes a robust method for disc localization and cup segmentation that incorporates masking to avoid misclassifying areas as well as forming the structure of the cup based on edge detection. Our method has been evaluated using two fundus image datasets, namely: D-I and D-II comprising of 60 and 38 images, respectively. 
The proposed method of disc localization achieves an average F-score of 0.96 and an average boundary distance of 7.7 for D-I, and 0.96 and 9.1, respectively, for D-II. The cup segmentation method attains an average F-score of 0.88 and an average boundary distance of 13.8 for D-I, and 0.85 and 18.0, respectively, for D-II. The estimation errors (mean ± standard deviation) of our method for the value of the vertical cup-to-disc diameter ratio against the boundary result given by the expert for D-I and D-II have a similar value, namely 0.04 ± 0.04. Overall, the result of our method indicates its robustness for glaucoma evaluation.", "title": "" }, { "docid": "abdd1406266d7290166eb16b8a5045a9", "text": "Individualized manufacturing of cars requires kitting: the collection of individual sets of part variants for each car. This challenging logistic task is frequently performed manually by warehousemen. We propose a mobile manipulation robotic system for autonomous kitting, building on the Kuka Miiwa platform which consists of an omnidirectional base, a 7 DoF collaborative iiwa manipulator, cameras, and distance sensors. Software modules for detection and pose estimation of transport boxes, part segmentation in these containers, recognition of part variants, grasp generation, and arm trajectory optimization have been developed and integrated. Our system is designed for collaborative kitting, i.e. some parts are collected by warehousemen while other parts are picked by the robot. To address safe human-robot collaboration, fast arm trajectory replanning considering previously unforeseen obstacles is realized. The developed system was evaluated in the European Robotics Challenge 2, where the Miiwa robot demonstrated autonomous kitting, part variant recognition, and avoidance of unforeseen obstacles.", "title": "" }, { "docid": "1a58f72cd0f6e979a72dbc233e8c4d4a", "text": "The revolution of genome sequencing is continuing after the successful second-generation sequencing (SGS) technology. The third-generation sequencing (TGS) technology, led by Pacific Biosciences (PacBio), is progressing rapidly, moving from a technology once only capable of providing data for small genome analysis, or for performing targeted screening, to one that promises high quality de novo assembly and structural variation detection for human-sized genomes. In 2014, the MinION, the first commercial sequencer using nanopore technology, was released by Oxford Nanopore Technologies (ONT). MinION identifies DNA bases by measuring the changes in electrical conductivity generated as DNA strands pass through a biological pore. Its portability, affordability, and speed in data production make it suitable for real-time applications; the release of the long-read sequencer MinION has thus generated much excitement and interest in the genomics community. While de novo genome assemblies can be cheaply produced from SGS data, assembly continuity is often relatively poor, due to the limited ability of short reads to handle long repeats.
Assembly quality can be greatly improved by using TGS long reads, since repetitive regions can be easily spanned using longer sequencing lengths, despite the higher error rates of long reads at the base level. The potential of nanopore sequencing has been demonstrated by various studies in genome surveillance at locations where rapid and reliable sequencing is needed, but where resources are limited.", "title": "" }, { "docid": "8f0630e009fdab34a77db9780850f0f0", "text": "A wireless power transfer (WPT) system using inductive coupling for a mobile phone charger is studied. The project aims to study and fabricate a solar WPT system using inductive coupling for a mobile phone charger, which will give more information about how distance affects WPT performance and show that WPT is not much influenced by the presence of hands, books and types of plastics. The components used to build the wireless power transfer system can be divided into three parts: the transceiver for power transmission, the inductive coils acting in this case as the antenna, and the receiver with the rectifier, which converts AC to DC. Experiments have been conducted and the wireless power transfer using inductive coupling is suitable to be implemented for a mobile phone charger.", "title": "" }, { "docid": "39cb45c62b83a40f8ea42cb872a7aa59", "text": "Levy flights are employed in a lattice model of contaminant migration by bioturbation, the reworking of sediment by benthic organisms. The model couples burrowing, foraging, and conveyor-belt feeding with molecular diffusion. The model correctly predicts a square-root dependence on bioturbation rates over a wide range of biomass densities. The model is used to predict the effect of bioturbation on the redistribution of contaminants in laboratory microcosms containing pyrene-inoculated sediments and the tubificid oligochaete Limnodrilus hoffmeisteri. The model predicts the dynamic flux from the sediment and in-bed concentration profiles that are consistent with observations. The sensitivity of flux and concentration profiles to the specific mechanisms of bioturbation is explored with the model. The flux of pyrene to the overlying water was largely controlled by the simulated foraging activities.", "title": "" }, { "docid": "ca5ad8301e3a37a6d2749bb27ede1d7a", "text": "Data and connectivity between users form the core of social networks. Every status, post, friendship, tweet, re-tweet, tag or image generates a massive amount of structured and unstructured data. Deriving meaning from this data and, in particular, extracting behavior and emotions of individual users, as well as of user communities, is the goal of sentiment analysis and affective computing and represents a significant challenge. Social networks also represent a potentially infinite source of applications for both research and commercial purposes and are adaptable to many different areas, including life science. Nevertheless, collecting, sharing, storing and analyzing social networks data pose several challenges to computer scientists, such as the management of highly unstructured data, big data, and the need for real-time computation.
In this paper we give a brief overview of some concrete examples of applying sentiment analysis to social networks for healthcare purposes, present the current types of tools existing for sentiment analysis, and summarize the challenges involved in this process, focusing on the role of high performance computing.", "title": "" }, { "docid": "24ce878b5cb0c7ff62ea8e29cc7a237c", "text": "The energy-efficient tracking and precise localization of continuous objects have long been key issues in research on wireless sensor networks (WSNs). Among various techniques, significant results are reported from applying a clustering-based object tracking technique, which benefits energy efficiency and network stability in large-scale WSNs. To date, in large-scale WSNs a continuous object is tracked using a static clustering-based approach. However, due to the restriction of global information sharing among static clusters, tracking at the boundary region is a challenging issue. This paper presents a complete tracking and localization algorithm in WSNs. Considering the limitation of static clusters, an energy-efficient incremental clustering algorithm followed by Gaussian adaptive resonance theory is proposed at the boundary region. The proposed approach is able to learn, create, update, and retain clusters incrementally through online learning to adapt to incessant motion patterns. Finally, the Trilateration algorithm is applied for the precise localization of dynamic objects throughout the sensor network. The performance of the proposed system is evaluated through simulation results, demonstrating its energy-efficient tracking and stable network.", "title": "" }, { "docid": "54bf53b120f5fa1c0cdfad80e5e264c9", "text": "To ensure safety in the construction of important metallic components for roadworthiness, it is necessary to check every component thoroughly using non-destructive testing. In recent decades, X-ray testing has been adopted as the principal non-destructive testing method to identify defects within a component which are undetectable to the naked eye. Nowadays, modern computer vision techniques, such as deep learning and sparse representations, are opening new avenues in automatic object recognition in optical images. These techniques have been broadly used in object and texture recognition by the computer vision community with promising results in optical images. However, a comprehensive evaluation in X-ray testing is required. In this paper, we release a new dataset containing around 47,500 cropped X-ray images of 32 × 32 pixels with defects and without defects in automotive components. Using this dataset, we evaluate and compare 24 computer vision techniques including deep learning, sparse representations, local descriptors and texture features, among others. We show in our experiments that the best performance was achieved by a simple LBP descriptor with a linear SVM classifier, obtaining 97% precision and 94% recall. We believe that the methodology presented could be used in similar projects that have to deal with automated detection of defects.", "title": "" }, { "docid": "5a601e08824185bafeb94ac432b6e92e", "text": "Transforming a natural language (NL) question into a corresponding logical form (LF) is central to the knowledge-based question answering (KB-QA) task.
Unlike most previous methods that achieve this goal based on mappings between lexicalized phrases and logical predicates, this paper goes one step further and proposes a novel embedding-based approach that maps NL-questions into LFs for KBQA by leveraging semantic associations between lexical representations and KBproperties in the latent space. Experimental results demonstrate that our proposed method outperforms three KB-QA baseline methods on two publicly released QA data sets.", "title": "" }, { "docid": "f7ed4fb9015dad13d47dec677c469c4b", "text": "In this paper, a low-cost, power efficient and fast Differential Cascode Voltage-Switch-Logic (DCVSL) based delay cell (named DCVSL-R) is proposed. We use the DCVSL-R cell to implement high frequency and power-critical delay cells and flip-flops of ring oscillators and frequency dividers. When compared to TSPC, DCVSL circuits offer small input and clock capacitance and a symmetric differential loading for previous RF stages. When compared to CML, they offer low transistor count, no headroom limitation, rail-to-rail swing and no static current consumption. However, DCVSL circuits suffer from a large low-to-high propagation delay, which limits their speed and results in asymmetrical output waveforms. The proposed DCVSL-R circuit embodies the benefits of DCVSL while reducing the total propagation delay, achieving faster operation. DCVSL-R also generates symmetrical output waveforms which are critical for differential circuits. Another contribution of this work is a closed-form delay model that predicts the speed of DCVSL circuits with 8% worst case accuracy. We implement two ring-oscillator-based VCOs in 0.13 μm technology with DCVSL and DCVSL-R delay cells. Measurements show that the proposed DCVSL-R based VCO consumes 30% less power than the DCVSL VCO for the same oscillation frequency (2.4 GHz) and same phase noise (-113 dBc/Hz at 10 MHz). DCVSL-R circuits are also used to implement the high frequency dual modulus prescaler (DMP) of a 2.4 GHz frequency synthesizer in 0.18 μm technology. The DMP consumes only 0.8 mW at 2.48 GHz, a 40% reduction in power when compared to other reported DMPs with similar division ratios and operating frequencies. The RF buffer that drives the DMP consumes only 0.27 mW, demonstrating the lowest combined DMP and buffer power consumption among similar synthesizers in literature.", "title": "" }, { "docid": "828c54f29339e86107f1930ae2a5e77f", "text": "Artificial bee colony (ABC) algorithm is an optimization algorithm based on a particular intelligent behaviour of honeybee swarms. This work compares the performance of ABC algorithm with that of differential evolution (DE), particle swarm optimization (PSO) and evolutionary algorithm (EA) for multi-dimensional numeric problems. The simulation results show that the performance of ABC algorithm is comparable to those of the mentioned algorithms and can be efficiently employed to solve engineering problems with high dimensionality. # 2007 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "edf8d1bb84c0845dddad417a939e343b", "text": "Suicides committed by intraorally placed firecrackers are rare events. Given to the use of more powerful components such as flash powder recently, some firecrackers may cause massive life-threatening injuries in case of such misuse. Innocuous black powder firecrackers are subject to national explosives legislation and only have the potential to cause harmless injuries restricted to the soft tissue. 
We here report two cases of suicide committed by an intraoral placement of firecrackers, resulting in similar patterns of skull injury. As it was first unknown whether black powder firecrackers can potentially cause serious skull injury, we compared the potential of destruction using black powder and flash powder firecrackers in a standardized skull simulant model (Synbone, Malans, Switzerland). This was the first experiment to date simulating the impacts resulting from an intraoral burst in a skull simulant model. The intraoral burst of a “D-Böller” (an example of one of the most powerful black powder firecrackers in Germany) did not lead to any injuries of the osseous skull. In contrast, the “La Bomba” (an example of the weakest known flash powder firecrackers) caused complex fractures of both the viscero- and neurocranium. The results obtained from this experimental study indicate that black powder firecrackers are less likely to cause severe injuries as a consequence of intraoral explosions, whereas flash powder-based crackers may lead to massive life-threatening craniofacial destructions and potentially death.", "title": "" }, { "docid": "bf126b871718a5ee09f1e54ea5052d20", "text": "Deep fully convolutional neural network (FCN) based architectures have shown great potential in medical image segmentation. However, such architectures usually have millions of parameters and inadequate number of training samples leading to over-fitting and poor generalization. In this paper, we present a novel DenseNet based FCN architecture for cardiac segmentation which is parameter and memory efficient. We propose a novel up-sampling path which incorporates long skip and short-cut connections to overcome the feature map explosion in conventional FCN based architectures. In order to process the input images at multiple scales and view points simultaneously, we propose to incorporate Inception module's parallel structures. We propose a novel dual loss function whose weighting scheme allows to combine advantages of cross-entropy and Dice loss leading to qualitative improvements in segmentation. We demonstrate computational efficacy of incorporating conventional computer vision techniques for region of interest detection in an end-to-end deep learning based segmentation framework. From the segmentation maps we extract clinically relevant cardiac parameters and hand-craft features which reflect the clinical diagnostic analysis and train an ensemble system for cardiac disease classification. We validate our proposed network architecture on three publicly available datasets, namely: (i) Automated Cardiac Diagnosis Challenge (ACDC-2017), (ii) Left Ventricular segmentation challenge (LV-2011), (iii) 2015 Kaggle Data Science Bowl cardiac challenge data. Our approach in ACDC-2017 challenge stood second place for segmentation and first place in automated cardiac disease diagnosis tasks with an accuracy of 100% on a limited testing set (n=50). In the LV-2011 challenge our approach attained 0.74 Jaccard index, which is so far the highest published result in fully automated algorithms. In the Kaggle challenge our approach for LV volume gave a Continuous Ranked Probability Score (CRPS) of 0.0127, which would have placed us tenth in the original challenge. 
Our approach combined both cardiac segmentation and disease diagnosis into a fully automated framework which is computationally efficient and hence has the potential to be incorporated in computer-aided diagnosis (CAD) tools for clinical application.", "title": "" }, { "docid": "c6f4ff7072dcb55c0f86e253160479b7", "text": "In this study we extracted websites' URL features and analyzed subset based feature selection methods and classification algorithms for phishing websites detection.", "title": "" }, { "docid": "6cf4315ecce8a06d9354ca2f2684113c", "text": "BACKGROUND\nNutritional supplementation may be used to treat muscle loss with aging (sarcopenia). However, if physical activity does not increase, the elderly tend to compensate for the increased energy delivered by the supplements with reduced food intake, which results in a calorie substitution rather than supplementation. Thus, an effective supplement should stimulate muscle anabolism more efficiently than food or common protein supplements. We have shown that balanced amino acids stimulate muscle protein anabolism in the elderly, but it is unknown whether all amino acids are necessary to achieve this effect.\n\n\nOBJECTIVE\nWe assessed whether nonessential amino acids are required in a nutritional supplement to stimulate muscle protein anabolism in the elderly.\n\n\nDESIGN\nWe compared the response of muscle protein metabolism to either 18 g essential amino acids (EAA group: n = 6, age 69 +/- 2 y; +/- SD) or 40 g balanced amino acids (18 g essential amino acids + 22 g nonessential amino acids, BAA group; n = 8, age 71 +/- 2 y) given orally in small boluses every 10 min for 3 h to healthy elderly volunteers. Muscle protein metabolism was measured in the basal state and during amino acid administration via L-[ring-(2)H(5)]phenylalanine infusion, femoral arterial and venous catheterization, and muscle biopsies.\n\n\nRESULTS\nPhenylalanine net balance (in nmol x min(-1). 100 mL leg volume(-1)) increased from the basal state (P < 0.01), with no differences between groups (BAA: from -16 +/- 5 to 16 +/- 4; EAA: from -18 +/- 5 to 14 +/- 13) because of an increase (P < 0.01) in muscle protein synthesis and no change in breakdown.\n\n\nCONCLUSION\nEssential amino acids are primarily responsible for the amino acid-induced stimulation of muscle protein anabolism in the elderly.", "title": "" }, { "docid": "505e80ac2fe0ee1a34c60279b90d0ca7", "text": "In an effective e-learning game, the learner’s enjoyment acts as a catalyst to encourage his/her learning initiative. Therefore, the availability of a scale that effectively measures the enjoyment offered by e-learning games assist the game designer to understanding the strength and flaw of the game efficiently from the learner’s points of view. E-learning games are aimed at the achievement of learning objectives via the creation of a flow effect. Thus, this study is based on Sweetser’s & Wyeth’s framework to develop a more rigorous scale that assesses user enjoyment of e-learning games. The scale developed in the present study consists of eight dimensions: Immersion, social interaction, challenge, goal clarity, feedback, concentration, control, and knowledge improvement. Four learning games employed in a university’s online learning course ‘‘Introduction to Software Application” were used as the instruments of scale verification. Survey questionnaires were distributed to students taking the course and 166 valid samples were subsequently collected. 
The results showed that the validity and reliability of the scale, EGameFlow, were satisfactory. Thus, the measurement is an effective tool for evaluating the level of enjoyment provided by elearning games to their users. 2008 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "bb6314a8e6ec728d09aa37bfffe5c835", "text": "In recent years, Convolutional Neural Network (CNN) has been extensively applied in the field of computer vision, which has also made remarkable achievements. However, the CNN models are computation-intensive and memory-consuming, which hinders the deployment of CNN-based methods on resource-limited embedded platforms. Therefore, this paper gives insight into low numerical precision Convolutional Neural Networks. At first, an image classification CNN model is quantized into 8-bit dynamic fixed-point with no more than 1% accuracy drop and then the method of conducting inference on low-cost ARM processor has been proposed. Experimental results verified the effectiveness of this method. Besides, our proof-of-concept prototype implementation can obtain a frame rate of 4.54fps when running on single Cortex-A72 core under 1.8GHz working frequency and 6.48 watts of gross power consumption.", "title": "" } ]
scidocsrr
70eeaacf0ac2e76bb4ba1adfe4684a1a
Platform-tolerant PIFA-type UHF RFID tag antenna
[ { "docid": "9cef8aa700cefcbfcc6a79d530987018", "text": "This paper presents a wideband UHF RFID tag designed for operating on multiple materials including metal. We describe the antenna structure and present the comparison of modeling and simulation results with experimental data.", "title": "" } ]
[ { "docid": "bc07015b2a2624a75a656ae50d3b4e07", "text": "Current NAC technologies implement a pre-connect phase whe re t status of a device is checked against a set of policies before being granted access to a network, an d a post-connect phase that examines whether the device complies with the policies that correspond to its rol e in the network. In order to enhance current NAC technologies, we propose a new architecture based on behaviorsrather thanrolesor identity, where the policies are automatically learned and updated over time by the membe rs of the network in order to adapt to behavioral changes of the devices. Behavior profiles may be presented as identity cards that can change over time. By incorporating an Anomaly Detector (AD) to the NAC server or t each of the hosts, their behavior profile is modeled and used to determine the type of behaviors that shou ld be accepted within the network. These models constitute behavior-based policies. In our enhanced NAC ar chitecture, global decisions are made using a group voting process. Each host’s behavior profile is used to compu te a partial decision for or against the acceptance of a new profile or traffic. The aggregation of these partial vote s amounts to the model-group decision. This voting process makes the architecture more resilient to attacks. E ven after accepting a certain percentage of malicious devices, the enhanced NAC is able to compute an adequate deci sion. We provide proof-of-concept experiments of our architecture using web traffic from our department netwo rk. Our results show that the model-group decision approach based on behavior profiles has a 99% detection rate o f nomalous traffic with a false positive rate of only 0.005%. Furthermore, the architecture achieves short latencies for both the preand post-connect phases.", "title": "" }, { "docid": "1514bae0c1b47f5aaf0bfca6a63d9ce9", "text": "The persistence of racial inequality in the U.S. labor market against a general backdrop of formal equality of opportunity is a troubling phenomenon that has significant ramifications on the design of hiring policies. In this paper, we show that current group disparate outcomes may be immovable even when hiring decisions are bound by an input-output notion of “individual fairness.” Instead, we construct a dynamic reputational model of the labor market that illustrates the reinforcing nature of asymmetric outcomes resulting from groups’ divergent accesses to resources and as a result, investment choices. To address these disparities, we adopt a dual labor market composed of a Temporary Labor Market (TLM), in which firms’ hiring strategies are constrained to ensure statistical parity of workers granted entry into the pipeline, and a Permanent Labor Market (PLM), in which firms hire top performers as desired. Individual worker reputations produce externalities for their group; the corresponding feedback loop raises the collective reputation of the initially disadvantaged group via a TLM fairness intervention that need not be permanent. We show that such a restriction on hiring practices induces an equilibrium that, under particular market conditions, Pareto-dominates those arising from strategies that statistically discriminate or employ a “group-blind” criterion. The enduring nature of equilibria that are both inequitable and Pareto suboptimal suggests that fairness interventions beyond procedural checks of hiring decisions will be of critical importance in a world where machines play a greater role in the employment process. 
ACM Reference Format: Lily Hu and Yiling Chen. 2018. A Short-term Intervention for Long-term Fairness in the Labor Market. In WWW 2018: The 2018 Web Conference, April 23–27, 2018, Lyon, France. ACM, New York, NY, USA, 10 pages. https://doi.org/10.1145/3178876.3186044", "title": "" }, { "docid": "d142ad76c2c5bb1565ef539188ce7d43", "text": "The recent discovery of new classes of small RNAs has opened unknown territories to explore new regulations of physiopathological events. We have recently demonstrated that RNY (or Y RNA)-derived small RNAs (referred to as s-RNYs) are an independent class of clinical biomarkers to detect coronary artery lesions and are associated with atherosclerosis burden. Here, we have studied the role of s-RNYs in human and mouse monocytes/macrophages and have shown that in lipid-laden monocytes/macrophages s-RNY expression is timely correlated to the activation of both NF-κB and caspase 3-dependent cell death pathways. Loss- or gain-of-function experiments demonstrated that s-RNYs activate caspase 3 and NF-κB signaling pathways ultimately promoting cell death and inflammatory responses. As, in atherosclerosis, Ro60-associated s-RNYs generated by apoptotic macrophages are released in the blood of patients, we have investigated the extracellular function of the s-RNY/Ro60 complex. Our data demonstrated that s-RNY/Ro60 complex induces caspase 3-dependent cell death and NF-κB-dependent inflammation, when added to the medium of cultured monocytes/macrophages. Finally, we have shown that s-RNY function is mediated by Toll-like receptor 7 (TLR7). Indeed using chloroquine, which disrupts signaling of endosome-localized TLRs 3, 7, 8 and 9 or the more specific TLR7/9 antagonist, the phosphorothioated oligonucleotide IRS954, we blocked the effect of either intracellular or extracellular s-RNYs. These results position s-RNYs as relevant novel functional molecules that impact on macrophage physiopathology, indicating their potential role as mediators of inflammatory diseases, such as atherosclerosis.", "title": "" }, { "docid": "5318baa10a6db98a0f31c6c30fdf6104", "text": "In image analysis, the images are often represented by multiple visual features (also known as multiview features) that aim to better interpret them for achieving remarkable performance of the learning. Since the processes of feature extraction on each view are separated, the multiple visual features of images may include overlap, noise, and redundancy. Thus, learning with all the derived views of the data could decrease the effectiveness. To address this, this paper simultaneously conducts a hierarchical feature selection and a multiview multilabel (MVML) learning for multiview image classification, via embedding a proposed new block-row regularizer into the MVML framework. The block-row regularizer concatenating a Frobenius norm (F-norm) regularizer and an l2,1-norm regularizer is designed to conduct a hierarchical feature selection, in which the F-norm regularizer is used to conduct a high-level feature selection for selecting the informative views (i.e., discarding the uninformative views) and the l2,1-norm regularizer is then used to conduct a low-level feature selection on the informative views. The rationale of the use of a block-row regularizer is to avoid the issue of the over-fitting (via the block-row regularizer), to remove redundant views and to preserve the natural group structures of data (via the F-norm regularizer), and to remove noisy features (the l2,1-norm regularizer), respectively. 
We further devise a computationally efficient algorithm to optimize the derived objective function and also theoretically prove the convergence of the proposed optimization method. Finally, the results on real image datasets show that the proposed method outperforms two baseline algorithms and three state-of-the-art algorithms in terms of classification performance.", "title": "" }, { "docid": "2dde6c9387ee0a51220d92a4bc0bb8bf", "text": "We propose a generic algorithm for computation of similarity measures for sequential data. The algorithm uses generalized suffix trees for efficient calculation of various kernel, distance and non-metric similarity functions. Its worst-case run-time is linear in the length of sequences and independent of the underlying embedding language, which can cover words, k-grams or all contained subsequences. Experiments with network intrusion detection, DNA analysis and text processing applications demonstrate the utility of distances and similarity coefficients for sequences as alternatives to classical kernel functions.", "title": "" }, { "docid": "0ff7f69f341f62711b383699746452fd", "text": "Dynamic sensitivity control (DSC) is being discussed within the new IEEE 802.11ax task group as one of the potential techniques to improve the system performance for next generation Wi-Fi in high capacity and dense deployment environments, e.g. stadiums, conference venues, shopping malls, etc. However, there appears to be a lack of consensus regarding the adoption of DSC within the group. This paper reports on investigations into the performance of the baseline DSC technique proposed in the IEEE 802.11ax task group under realistic scenarios defined by the task group. Simulations were carried out and the results suggest that compared with the default case (no DSC), the use of DSC may lead to mixed results in terms of throughput and fairness with the gain varying depending on factors like inter-AP distance, node distribution, node density and the DSC margin value. Further, we also highlight avenues for mitigating the shortcomings of DSC found in this study.", "title": "" }, { "docid": "2e864dcde57ea1716847f47977af0140", "text": "I focus on the role of case studies in developing causal explanations. I distinguish between the theoretical purposes of case studies and the case selection strategies or research designs used to advance those objectives. I construct a typology of case studies based on their purposes: idiographic (inductive and theory-guided), hypothesis-generating, hypothesis-testing, and plausibility probe case studies. I then examine different case study research designs, including comparable cases, most and least likely cases, deviant cases, and process tracing, with attention to their different purposes and logics of inference. I address the issue of selection bias and the “single logic” debate, and I emphasize the utility of multi-method research.", "title": "" }, { "docid": "cac628a1f0727994969c554832f4b7e0", "text": "We have shown that it is possible to achieve artistic style transfer within a purely image processing paradigm. This is in contrast to previous work that utilized deep neural networks to learn the difference between “style” and “content” in a painting. We leverage the work by Kwatra et al. on texture synthesis to accomplish “style synthesis” from our given style images, building off the work of Elad and Milanfar. 
We have also introduced a novel “style fusion” concept that guides the algorithm to follow broader structures of style at a higher level while giving it the freedom to make its own artistic decisions at a smaller scale. Our results are comparable to the neural network approach, while improving speed and maintaining robustness to different styles and contents.", "title": "" }, { "docid": "17106095b19d87ad8883af0606714a07", "text": "Based on American Customer Satisfaction Index model ACSI and study at home and abroad, a Hotel online booking Consumer Satisfaction model (HECS) is established. After empirically testing the validity of the measurement model and structural model of Hotel online booking Consumer Satisfaction, consumer satisfaction index is calculated. Results show that Website easy usability impacts on customer satisfaction most significantly, followed by responsiveness and reliability of the website. Statistic results also show a medium consumer satisfaction index number. Suggestions are given to improve online booking consumer satisfaction, such as website designing of easier using, timely processing of orders, offering more offline personal support for online service, doing more communication with customers, providing more communication channel and so on.", "title": "" }, { "docid": "fb67e237688deb31bd684c714a49dca5", "text": "In order to mitigate investments, stock price forecasting has attracted more attention in recent years. Aiming at the discreteness, non-normality, high-noise in high-frequency data, a support vector machine regression (SVR) algorithm is introduced in this paper. However, the characteristics in different periods of the same stock, or the same periods of different stocks are significantly different. So, SVR with fixed parameters is difficult to satisfy with the constantly changing data flow. To tackle this problem, an adaptive SVR was proposed for stock data at three different time scales, including daily data, 30-min data, and 5-min data. Experiments show that the improved SVR with dynamic optimization of learning parameters by particle swarm optimization can get a better result than compared methods including SVR and back-propagation neural network.", "title": "" }, { "docid": "f1f281bce1a71c3bce99077e76197560", "text": "Probabilistic timed automata (PTA) combine discrete probabilistic choice, real time and nondeterminism. This paper presents a fully automatic tool for model checking PTA with respect to probabilistic and expected reachability properties. PTA are specified in Modest, a high-level compositional modelling language that includes features such as exception handling, dynamic parallelism and recursion, and thus enables model specification in a convenient fashion. For model checking, we use an integral semantics of time, representing clocks with bounded integer variables. This makes it possible to use the probabilistic model checker PRISM as analysis backend. We describe details of the approach and its implementation, and report results obtained for three different case studies.", "title": "" }, { "docid": "9c447f9a2b00a2e27433601fce4ab4ce", "text": "The Hypertext Transfer Protocol (HTTP) has been widely adopted and deployed as the key protocol for video streaming over the Internet. One of the consequences of leveraging traditional HTTP for video streaming is the significantly increased request overhead due to the segmentation of the video content into HTTP resources. 
The overhead becomes even more significant when non-multiplexed video and audio segments are deployed. In this paper, we investigate and address the request overhead problem by employing the server push technology in the new HTTP 2.0 protocol. In particular, we develop a set of push strategies that actively deliver video and audio content from the HTTP server without requiring a request for each individual segment. We evaluate our approach in a Dynamic Adaptive Streaming over HTTP (DASH) streaming system. We show that the request overhead can be significantly reduced by using our push strategies. Also, we validate that the server push based approach is compatible with the existing HTTP streaming features, such as adaptive bitrate switching.", "title": "" }, { "docid": "2d7a13754631206203d6618ab2a27a76", "text": "This Contrast enhancement is frequently referred to as one of the most important issues in image processing. Histogram equalization (HE) is one of the common methods used for improving contrast in digital images. Histogram equalization (HE) has proved to be a simple and effective image contrast enhancement technique. However, the conventional histogram equalization methods usually result in excessive contrast enhancement, which causes the unnatural look and visual artifacts of the processed image. This paper presents a review of new forms of histogram for image contrast enhancement. The major difference among the methods in this family is the criteria used to divide the input histogram. Brightness preserving BiHistogram Equalization (BBHE) and Quantized Bi-Histogram Equalization (QBHE) use the average intensity value as their separating point. Dual Sub-Image Histogram Equalization (DSIHE) uses the median intensity value as the separating point. Minimum Mean Brightness Error Bi-HE (MMBEBHE) uses the separating point that produces the smallest Absolute Mean Brightness Error (AMBE). Recursive Mean-Separate Histogram Equalization (RMSHE) is another improvement of BBHE. The Brightness preserving dynamic histogram equalization (BPDHE) method is actually an extension to both MPHEBP and DHE. Weighting mean-separated sub-histogram equalization (WMSHE) method is to perform the effective contrast enhancement of the digital image. Keywords-component image processing; contrast enhancement; histogram equalization; minimum mean brightness error; brightness preserving enhancement, histogram partition.", "title": "" }, { "docid": "4018814855e4cd7232d7c75636a538b8", "text": "Personalized recommendation of Points of Interest (POIs) plays a key role in satisfying users on Location-Based Social Networks (LBSNs). In this article, we propose a probabilistic model to find the mapping between user-annotated tags and locations’ taste keywords. Furthermore, we introduce a dataset on locations’ contextual appropriateness and demonstrate its usefulness in predicting the contextual relevance of locations. We investigate four approaches to use our proposed mapping for addressing the data sparsity problem: one model to reduce the dimensionality of location taste keywords and three models to predict user tags for a new location. Moreover, we present different scores calculated from multiple LBSNs and show how we incorporate new information from the mapping into a POI recommendation approach. Then, the computed scores are integrated using learning to rank techniques. 
The experiments on two TREC datasets show the effectiveness of our approach, beating state-of-the-art methods.", "title": "" }, { "docid": "f182fdd2f5bae84b5fc38284f83f0c27", "text": "We adopted an approach based on an LSTM neural network to monitor and detect faults in industrial multivariate time series data. To validate the approach we created a Modelica model of part of a real gasoil plant. By introducing hacks into the logic of the Modelica model, we were able to generate both the roots and causes of fault behavior in the plant. Having a self-consistent data set with labeled faults, we used an LSTM architecture with a forecasting error threshold to obtain precision and recall quality metrics. The dependency of the quality metric on the threshold level is considered. An appropriate mechanism such as “one handle” was introduced for filtering faults that are outside of the plant operator field of interest.", "title": "" }, { "docid": "d5b20e250e28cae54a7f3c868f342fc5", "text": "Context: Reusing software by means of copy and paste is a frequent activity in software development. The duplicated code is known as a software clone and the activity is known as code cloning. Software clones may lead to bug propagation and serious maintenance problems. Objective: This study reports an extensive systematic literature review of software clones in general and software clone detection in particular. Method: We used the standard systematic literature review method based on a comprehensive set of 213 articles from a total of 2039 articles published in 11 leading journals and 37 premier conferences and", "title": "" }, { "docid": "d1c0b58fa78ecda169d3972eae870590", "text": "Power system stability is defined as an ability of the power system to reestablish the initial steady state or come into the new steady state after any variation of the system's operation value or after system´s breakdown. The stability and reliability of the electric power system is highly actual topic nowadays, especially in the light of recent accidents like splitting of UCTE system and north-American blackouts. This paper deals with the potential of the evaluation in term of transient stability of the electric power system within the defense plan and the definition of the basic criterion for the transient stability – Critical Clearing Time (CCT).", "title": "" }, { "docid": "58b2ee3d0a4f61d4db883bc0a896f8f4", "text": "While applications for mobile devices have become extremely important in the last few years, little public information exists on mobile application usage behavior. We describe a large-scale deployment-based research study that logged detailed application usage information from over 4,100 users of Android-powered mobile devices. We present two types of results from analyzing this data: basic descriptive statistics and contextual descriptive statistics. In the case of the former, we find that the average session with an application lasts less than a minute, even though users spend almost an hour a day using their phones. Our contextual findings include those related to time of day and location. For instance, we show that news applications are most popular in the morning and games are at night, but communication applications dominate through most of the day. We also find that despite the variety of apps available, communication applications are almost always the first used upon a device's waking from sleep. 
In addition, we discuss the notion of a virtual application sensor, which we used to collect the data.", "title": "" }, { "docid": "3e6df23444ae08f65ded768c5dc8dc9d", "text": "In this paper, we propose a method for automatically detecting various types of snore sounds using image classification convolutional neural network (CNN) descriptors extracted from audio file spectrograms. The descriptors, denoted as deep spectrum features, are derived from forwarding spectrograms through very deep task-independent pre-trained CNNs. Specifically, activations of fully connected layers from two common image classification CNNs, AlexNet and VGG19, are used as feature vectors. Moreover, we investigate the impact of differing spectrogram colour maps and two CNN architectures on the performance of the system. Results presented indicate that deep spectrum features extracted from the activations of the second fully connected layer of AlexNet using a viridis colour map are well suited to the task. This feature space, when combined with a support vector classifier, outperforms the more conventional knowledge-based features of 6,373 acoustic functionals used in the INTERSPEECH ComParE 2017 Snoring sub-challenge baseline system. In comparison to the baseline, unweighted average recall is increased from 40.6% to 44.8% on the development partition, and from 58.5% to 67.0% on the test partition.", "title": "" }, { "docid": "a482218d67b0df6343f63f6d1b796c8e", "text": "Decoupling local geometric features from the spatial location of a mesh is crucial for feature-preserving mesh denoising. This paper focuses on first order features, i.e., facet normals, and presents a simple yet effective anisotropic mesh denoising framework via normal field denoising. Unlike previous denoising methods based on normal filtering, which process normals defined on the Gauss sphere, our method considers normals as a surface signal defined over the original mesh. This allows the design of a novel bilateral normal filter that depends on both spatial distance and signal distance. Our bilateral filter is a more natural extension of the elegant bilateral filter for image denoising than those used in previous bilateral mesh denoising methods. Besides applying this bilateral normal filter in a local, iterative scheme, as is common in most previous works, we present for the first time a global, noniterative scheme for anisotropic denoising. We show that the former scheme is faster and more effective for denoising extremely noisy meshes while the latter scheme is more robust to irregular surface sampling. We demonstrate that both our feature-preserving schemes generally produce visually and numerically better denoising results than previous methods, especially at challenging regions with sharp features or irregular sampling.", "title": "" } ]
scidocsrr