Dataset schema (one row per query):
  query_id: string, 32 characters
  query: string, 5 to 5.38k characters
  positive_passages: list of passages, 1 to 23 items
  negative_passages: list of passages, 4 to 100 items
  subset: string, 7 distinct values
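Each passage in the two list columns is an object with "docid", "text", and "title" fields, as the sample rows below show. As a minimal sketch of how a dump with this schema might be consumed, the Python snippet below iterates over the rows and their passage lists; it assumes the data is available as a Hugging Face datasets-compatible dataset, and the repository id used here is a hypothetical placeholder rather than the actual source of this dump.

```python
# Minimal sketch: iterating a retrieval dataset with the schema above.
# Assumption: the dump is loadable with the Hugging Face `datasets` library;
# "org/retrieval-dataset" is a hypothetical placeholder repository id.
from datasets import load_dataset

ds = load_dataset("org/retrieval-dataset", split="train")

for row in ds:
    qid = row["query_id"]                  # 32-character hex identifier
    query = row["query"]                   # free-text query, often a paper title
    positives = row["positive_passages"]   # list of {"docid", "text", "title"} dicts
    negatives = row["negative_passages"]   # list of {"docid", "text", "title"} dicts
    subset = row["subset"]                 # one of 7 source subsets, e.g. "scidocsrr"
    print(qid, subset, len(positives), len(negatives))
    break  # inspect the first row only
```

In a typical retrieval training setup, each query would be paired with its positive passages and a sample of its negatives to form contrastive examples, though the exact usage depends on the downstream model.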
query_id: 0d3bc1d1725c9bc96856f0649aae7b7e
query: Deep Learning Face Representation from Predicting 10,000 Classes
[ { "docid": "152e5d8979eb1187e98ecc0424bb1fde", "text": "Face verification remains a challenging problem in very complex conditions with large variations such as pose, illumination, expression, and occlusions. This problem is exacerbated when we rely unrealistically on a single training data source, which is often insufficient to cover the intrinsically complex face variations. This paper proposes a principled multi-task learning approach based on Discriminative Gaussian Process Latent Variable Model (DGPLVM), named GaussianFace, for face verification. In contrast to relying unrealistically on a single training data source, our model exploits additional data from multiple source-domains to improve the generalization performance of face verification in an unknown target-domain. Importantly, our model can adapt automatically to complex data distributions, and therefore can well capture complex face variations inherent in multiple sources. To enhance discriminative power, we introduced a more efficient equivalent form of Kernel Fisher Discriminant Analysis to DGPLVM. To speed up the process of inference and prediction, we exploited the low rank approximation method. Extensive experiments demonstrated the effectiveness of the proposed model in learning from diverse data sources and generalizing to unseen domains. Specifically, the accuracy of our algorithm achieved an impressive accuracy rate of 98.52% on the well-known and challenging Labeled Faces in the Wild (LFW) benchmark. For the first time, the human-level performance in face verification (97.53%) on LFW is surpassed.", "title": "" } ]
[ { "docid": "126c3c034bfd1380e0cbd115d07989a2", "text": "This paper presents a four-pole elliptic tunable combline bandpass filter with center frequency and bandwidth control. The filter is built on a Duroid substrate with εr=10.2 and h=25 mils, and the tuning is done using packaged Schottky diodes. A frequency range of 1.55-2.1 GHz with a 1-dB bandwidth tuning from 40-120 MHz (2.2-8% fractional bandwidth) is demonstrated. A pair of tunable transmission zeroes are synthesized at both passband edges and significantly improve the filter selectivity. The rejection level at both the lower and upper stopbands is >; 50 dB and no spurious response exists close to the passband. The measured third-order intermodulation intercept point (TOI) and 1-dB power compression point at midband (1.85 GHz) and a bandwidth of 110 MHz are >; 14& dBm and 6 dBm, respectively, and are limited by the Schottky diodes. It is believed that this is the first four-pole combline tunable bandpass filter with an elliptic function response and center frequency and bandwidth control. The application areas are in tunable filters for wireless systems and cognitive radios.", "title": "" }, { "docid": "845cce1a45804da160e2a4bed0469638", "text": "The adoption of game mechanics into serious contexts such as business applications (gamification) is a promising trend to improve the user’s participation and engagement with the software in question and on the job. However, this topic is mainly driven by practitioners. A theoretical model for gamification with appropriate empirical validation is missing. In this paper, we introduce a prototype for gamification using SAP ERP as example. Moreover, we have evaluated the concept within a comprehensive user study with 112 participants based on the technology acceptance model (TAM) using partial least squares (PLS) for analysis. Finally, we show that this gamification approach yields significant improvements in latent variables such as enjoyment, flow or perceived ease of use. Moreover, we outline further research requirements in the domain of gamification.", "title": "" }, { "docid": "553980e1d2432d1d27f84f8edcfc81bc", "text": "The home of the future should be a smart one, to support us in our daily life. Up to now only a few security incidents in that area are known. Depending on different security analyses, this fact is rather a result of the low spread of Smart Home products than the success of such systems security. Given that Smart Homes become more and more popular, we will consider current incidents and analyses to estimate potential security threats in the future. The definitions of a Smart Home drift widely apart. Thus we first need to define Smart Home for ourselves and additionally provide a way to categorize the big mass of products into smaller groups.", "title": "" }, { "docid": "b59f429192a680c1dc07580d21f9e374", "text": "Recently, several competing smart home programming frameworks that support third party app development have emerged. These frameworks provide tangible benefits to users, but can also expose users to significant security risks. This paper presents the first in-depth empirical security analysis of one such emerging smart home programming platform. We analyzed Samsung-owned SmartThings, which has the largest number of apps among currently available smart home platforms, and supports a broad range of devices including motion sensors, fire alarms, and door locks. 
SmartThings hosts the application runtime on a proprietary, closed-source cloud backend, making scrutiny challenging. We overcame the challenge with a static source code analysis of 499 SmartThings apps (called SmartApps) and 132 device handlers, and carefully crafted test cases that revealed many undocumented features of the platform. Our key findings are twofold. First, although SmartThings implements a privilege separation model, we discovered two intrinsic design flaws that lead to significant overprivilege in SmartApps. Our analysis reveals that over 55% of SmartApps in the store are overprivileged due to the capabilities being too coarse-grained. Moreover, once installed, a SmartApp is granted full access to a device even if it specifies needing only limited access to the device. Second, the SmartThings event subsystem, which devices use to communicate asynchronously with SmartApps via events, does not sufficiently protect events that carry sensitive information such as lock codes. We exploited framework design flaws to construct four proof-of-concept attacks that: (1) secretly planted door lock codes, (2) stole existing door lock codes, (3) disabled vacation mode of the home, and (4) induced a fake fire alarm. We conclude the paper with security lessons for the design of emerging smart home programming frameworks.", "title": "" }, { "docid": "c2869d1324181e08cc80a9ba069dead8", "text": "Human identifi cation leads to mutual trust that is essential for the proper functioning of society. We have been identifying fellow humans based on their voice, appearance, or gait for thousands of years. However, a systematic and scientifi c basis for human identifi cation started in the nineteenth century when Alphonse Bertillon (Rhodes and Henry 1956 ) introduced the use of a number of anthropomorphic measurements to identify habitual criminals. The Bertillon system was short-lived: soon after its introduction, the distinctiveness of human fi ngerprints was established. Since the early 1900s, fi ngerprints have been an accepted method in forensic investigations to identify suspects and repeat criminals. Now, virtually all law enforcement agencies worldwide use Automatic Fingerprint Identifi cation Systems (AFIS). With growing concerns about terrorist activities, security breaches, and fi nancial fraud, other physiological and behavioral human characteristics have been used for person identifi cation. These distinctive characteristics, or biometric traits, include features such as face, iris, palmprint, and voice. Biometrics (Jain et al. 2006, 2007 ) is now a mature technology that is widely used in a variety of applications ranging from border crossings (e.g., the US-VISIT program) to visiting Walt Disney Parks.", "title": "" }, { "docid": "1bd1a43a0885f33b7ea9863a656758e4", "text": "In this paper a semi-supervised deep framework is proposed for the problem of 3D shape inverse rendering from a single 2D input image. The main structure of proposed framework consists of unsupervised pre-trained components which significantly reduce the need to labeled data for training the whole framework. using labeled data has the advantage of achieving to accurate results without the need to predefined assumptions about image formation process. Three main components are used in the proposed network: an encoder which maps 2D input image to a representation space, a 3D decoder which decodes a representation to a 3D structure and a mapping component in order to map 2D to 3D representation. 
The only part that needs label for training is the mapping part with not too many parameters. The other components in the network can be pre-trained unsupervised using only 2D images or 3D data in each case. The way of reconstructing 3D shapes in the decoder component, inspired by the model based methods for 3D reconstruction, maps a low dimensional representation to 3D shape space with the advantage of extracting the basis vectors of shape space from training data itself and is not restricted to a small set of examples as used in predefined models. Therefore, the proposed framework deals directly with coordinate values of the point cloud representation which leads to achieve dense 3D shapes in the output. The experimental results on several benchmark datasets of objects and human faces and comparing with recent similar methods shows the power of proposed network in recovering more details from single 2D images.", "title": "" }, { "docid": "4bc1a78a3c9749460da218fd9d314e56", "text": "Fast and accurate side-chain conformation prediction is important for homology modeling, ab initio protein structure prediction, and protein design applications. Many methods have been presented, although only a few computer programs are publicly available. The SCWRL program is one such method and is widely used because of its speed, accuracy, and ease of use. A new algorithm for SCWRL is presented that uses results from graph theory to solve the combinatorial problem encountered in the side-chain prediction problem. In this method, side chains are represented as vertices in an undirected graph. Any two residues that have rotamers with nonzero interaction energies are considered to have an edge in the graph. The resulting graph can be partitioned into connected subgraphs with no edges between them. These subgraphs can in turn be broken into biconnected components, which are graphs that cannot be disconnected by removal of a single vertex. The combinatorial problem is reduced to finding the minimum energy of these small biconnected components and combining the results to identify the global minimum energy conformation. This algorithm is able to complete predictions on a set of 180 proteins with 34342 side chains in <7 min of computer time. The total chi(1) and chi(1 + 2) dihedral angle accuracies are 82.6% and 73.7% using a simple energy function based on the backbone-dependent rotamer library and a linear repulsive steric energy. The new algorithm will allow for use of SCWRL in more demanding applications such as sequence design and ab initio structure prediction, as well addition of a more complex energy function and conformational flexibility, leading to increased accuracy.", "title": "" }, { "docid": "8b73f2f12edde981f4e995380a5b9e0c", "text": "The detection of acoustic scenes is a challenging problem in which environmental sound events must be detected from a given audio signal. This includes classifying the events as well as estimating their onset and offset times. We approach this problem with a neural network architecture that uses the recently-proposed capsule routing mechanism. A capsule is a group of activation units representing a set of properties for an entity of interest, and the purpose of routing is to identify part-whole relationships between capsules. That is, a capsule in one layer is assumed to belong to a capsule in the layer above in terms of the entity being represented. 
Using capsule routing, we wish to train a network that can learn global coherence implicitly, thereby improving generalization performance. Our proposed method is evaluated on Task 4 of the DCASE 2017 challenge. Results show that classification performance is state-of-the-art, achieving an F-score of 58.6%. In addition, overfitting is reduced considerably compared to other architectures.", "title": "" }, { "docid": "c56d09b3c08f2cb9cc94ace3733b1c54", "text": "In this paper, we describe our microblog realtime filtering system developed and submitted for the Text Retrieval Conference (TREC 2015) microblog track. We submitted six runs for two tasks related to real-time filtering by using various Information Retrieval (IR), and Machine Learning (ML) techniques to analyze the Twitter sample live stream and match relevant tweets corresponding to specific user interest profiles. Evaluation results demonstrate the effectiveness of our approach as we achieved 3 of the top 7 best scores among automatic submissions across all participants and obtained the best (or close to best) scores in more than 25% of the evaluated topics for the real-time mobile push notification task.", "title": "" }, { "docid": "e7ed6060dcae9deea01ec24a999c2563", "text": "All organizations learn, whether they consciously choose to or not-it is a fundamental requirement for their sustained existence. Some firms deliberately advance organizational learning, developing capabilities that are consistent with their objectives; others make no focused effort and, therefore, acquire habits that are counterproductive. Nonetheless, all organizations learn. But what does it mean that an organization learns? We can think of organizational learning as a metaphor derived from our understanding of individual learning. In fact, organizations ultimately learn via their individual members. Hence, theories of individual learning are crucial for understanding organizational learning. Psychologists have studied individual learning for decades, but they are still far from fully understanding the workings of the human mind. Likewise, the theory of organizational learning is still in its embryonic stage. The purpose of this paper is to build a theory about the process through which individual learning advances organizational learning. To do this, we must address the role of individual learning and memory, differentiate between levels of learning, take into account different organizational types, and specify the transfer mechanism between individual and organizational learning. This transfer is at the heart of organizational learning: the process through which individual learning becomes embedded in an organization's memory and structure. Until now, it has received little attention and is not well understood, although a promising interaction between organization theory and psychology has begun. To contribute to our understanding of the nature of the learning organization, I present a framework that focuses on the crucial link between individual learning and organizational learning. 
Once we have a clear understanding of this transfer process, we can actively manage the learning process to make it consistent with an organization's goals, vision, and values.", "title": "" }, { "docid": "af6cd7f5448acab7cf569b88eb5b3859", "text": "Advances in wireless sensor network (WSN) technology has provided the availability of small and low-cost sensor nodes with capability of sensing various types of physical and environmental conditions, data processing, and wireless communication. Variety of sensing capabilities results in profusion of application areas. However, the characteristics of wireless sensor networks require more effective methods for data forwarding and processing. In WSN, the sensor nodes have a limited transmission range, and their processing and storage capabilities as well as their energy resources are also limited. Routing protocols for wireless sensor networks are responsible for maintaining the routes in the network and have to ensure reliable multi-hop communication under these conditions. In this paper, we give a survey of routing protocols for Wireless Sensor Network and compare their strengths and limitations.", "title": "" }, { "docid": "1ade1bea5fece2d1882c6b6fac1ef63e", "text": "Probe-based confocal laser endomicroscopy is a recent tissue imaging technology that requires placing a probe in contact with the tissue to be imaged and provides real time images with a microscopic resolution. Additionally, generating adequate probe movements to sweep the tissue surface can be used to reconstruct a wide mosaic of the scanned region while increasing the resolution which is appropriate for anatomico-pathological cancer diagnosis. However, properly controlling the motion along the scanning trajectory is a major problem. Indeed, the tissue exhibits deformations under friction forces exerted by the probe leading to deformed mosaics. In this paper we propose a visual servoing approach for controlling the probe movements relative to the tissue while rejecting the tissue deformation disturbance. The probe displacement with respect to the tissue is firstly estimated using the confocal images and an image registration real-time algorithm. Secondly, from this real-time image-based position measurement, the probe motion is controlled thanks to a simple proportional-integral compensator and a feedforward term. Ex vivo experiments using a Stäubli TX40 robot and a Mauna Kea Technologies Cellvizio imaging device demonstrate the effectiveness of the approach on liver and muscle tissue.", "title": "" }, { "docid": "2e9f2a2e9b74c4634087a664a85fef9f", "text": "Parkinson’s disease (PD) is the second most common neurodegenerative disease, which is characterized by loss of dopaminergic (DA) neurons in the substantia nigra pars compacta and the formation of Lewy bodies and Lewy neurites in surviving DA neurons in most cases. Although the cause of PD is still unclear, the remarkable advances have been made in understanding the possible causative mechanisms of PD pathogenesis. Numerous studies showed that dysfunction of mitochondria may play key roles in DA neuronal loss. Both genetic and environmental factors that are associated with PD contribute to mitochondrial dysfunction and PD pathogenesis. The induction of PD by neurotoxins that inhibit mitochondrial complex I provides direct evidence linking mitochondrial dysfunction to PD. Decrease of mitochondrial complex I activity is present in PD brain and in neurotoxin- or genetic factor-induced PD cellular and animal models. 
Moreover, PINK1 and parkin, two autosomal recessive PD gene products, have important roles in mitophagy, a cellular process to clear damaged mitochondria. PINK1 activates parkin to ubiquitinate outer mitochondrial membrane proteins to induce a selective degradation of damaged mitochondria by autophagy. In this review, we summarize the factors associated with PD and recent advances in understanding mitochondrial dysfunction in PD.", "title": "" }, { "docid": "8207c9dd4c6cdf75e666a6d982981d07", "text": "Novelty search is a recently proposed method for evolutionary computation designed to avoid the problem of deception, in which the fitness function guides the search process away from global optima. Novelty search replaces fitness-based selection with novelty-based selection, where novelty is measured by comparing an individual's behavior to that of the current population and an archive of past novel individuals. Though there is substantial evidence that novelty search can overcome the problem of deception, the critical factors in its performance remain poorly understood. This paper helps to bridge this gap by analyzing how the behavior function, which maps each genotype to a behavior, affects performance. We propose the notion of descendant fitness probability (DFP), which describes how likely a genotype's descendants are to have a certain fitness, and formulate two hypotheses about when changes to the behavior function will improve novelty search's performance, based on the effect of those changes on behavior and DFP. Experiments in both artificial and deceptive maze domains provide substantial empirical support for these hypotheses.", "title": "" }, { "docid": "bd3620816c83fae9b4a5c871927f2b73", "text": "Quantifying behavior is crucial for many applications in neuroscience. Videography provides easy methods for the observation and recording of animal behavior in diverse settings, yet extracting particular aspects of a behavior for further analysis can be highly time consuming. In motor control studies, humans or other animals are often marked with reflective markers to assist with computer-based tracking, but markers are intrusive, and the number and location of the markers must be determined a priori. Here we present an efficient method for markerless pose estimation based on transfer learning with deep neural networks that achieves excellent results with minimal training data. We demonstrate the versatility of this framework by tracking various body parts in multiple species across a broad collection of behaviors. Remarkably, even when only a small number of frames are labeled (~200), the algorithm achieves excellent tracking performance on test frames that is comparable to human accuracy. Using a deep learning approach to track user-defined body parts during various behaviors across multiple species, the authors show that their toolbox, called DeepLabCut, can achieve human accuracy with only a few hundred frames of training data.", "title": "" }, { "docid": "fdd01ae46b9c57eada917a6e74796141", "text": "This paper presents a high-level discussion of dexterity in robotic systems, focusing particularly on manipulation and hands. While it is generally accepted in the robotics community that dexterity is desirable and that end effectors with in-hand manipulation capabilities should be developed, there has been little, if any, formal description of why this is needed, particularly given the increased design and control complexity required. 
This discussion will overview various definitions of dexterity used in the literature and highlight issues related to specific metrics and quantitative analysis. It will also present arguments regarding why hand dexterity is desirable or necessary, particularly in contrast to the capabilities of a kinematically redundant arm with a simple grasper. Finally, we overview and illustrate the various classes of in-hand manipulation, and review a number of dexterous manipulators that have been previously developed. We believe this work will help to revitalize the dialogue on dexterity in the manipulation community and lead to further formalization of the concepts discussed here.", "title": "" }, { "docid": "f58a66f2caf848341b29094e9d3b0e71", "text": "Since student performance and pass rates in school reflect teaching level of the school and even all education system, it is critical to improve student pass rates and reduce dropout rates. Decision Tree (DT) algorithm and Support Vector Machine (SVM) algorithm in data mining, have been used by researchers to find important student features and predict the student pass rates, however they did not consider the coefficient of initialization, and whether there is a dependency between student features. Therefore, in this study, we propose a new concept: features dependencies, and use the grid search algorithm to optimize DT and SVM, in order to improve the accuracy of the algorithm. Furthermore, we added 10-fold cross-validation to DT and SVM algorithm. The results show the experiment can achieve better results in this work. The purpose of this study is providing assistance to students who have greater difficulties in their studies, and students who are at risk of graduating through data mining techniques.", "title": "" }, { "docid": "113c07908c1f22c7671553c7f28c0b3f", "text": "Nearly 80% of children in the United States have at least 1 sibling, indicating that the birth of a baby sibling is a normative ecological transition for most children. Many clinicians and theoreticians believe the transition is stressful, constituting a developmental crisis for most children. Yet, a comprehensive review of the empirical literature on children's adjustment over the transition to siblinghood (TTS) has not been done for several decades. The current review summarizes research examining change in first borns' adjustment to determine whether there is evidence that the TTS is disruptive for most children. Thirty studies addressing the TTS were found, and of those studies, the evidence did not support a crisis model of developmental transitions, nor was there overwhelming evidence of consistent changes in firstborn adjustment. Although there were decreases in children's affection and responsiveness toward mothers, the results were more equivocal for many other behaviors (e.g., sleep problems, anxiety, aggression, regression). An inspection of the scientific literature indicated there are large individual differences in children's adjustment and that the TTS can be a time of disruption, an occasion for developmental advances, or a period of quiescence with no noticeable changes. The TTS may be a developmental turning point for some children that portends future psychopathology or growth depending on the transactions between children and the changes in the ecological context over time. 
A developmental ecological systems framework guided the discussion of how child, parent, and contextual factors may contribute to the prediction of firstborn children's successful adaptation to the birth of a sibling.", "title": "" }, { "docid": "6bdcd13e63a4f24561f575efcd232dad", "text": "Men have called me mad,” wrote Edgar Allan Poe, “but the question is not yet settled, whether madness is or is not the loftiest intelligence— whether much that is glorious—whether all that is profound—does not spring from disease of thought—from moods of mind exalted at the expense of the general intellect.” Many people have long shared Poe’s suspicion that genius and insanity are entwined. Indeed, history holds countless examples of “that fine madness.” Scores of influential 18thand 19th-century poets, notably William Blake, Lord Byron and Alfred, Lord Tennyson, wrote about the extreme mood swings they endured. Modern American poets John Berryman, Randall Jarrell, Robert Lowell, Sylvia Plath, Theodore Roethke, Delmore Schwartz and Anne Sexton were all hospitalized for either mania or depression during their lives. And many painters and composers, among them Vincent van Gogh, Georgia O’Keeffe, Charles Mingus and Robert Schumann, have been similarly afflicted. Judging by current diagnostic criteria, it seems that most of these artists—and many others besides—suffered from one of the major mood disorders, namely, manic-depressive illness or major depression. Both are fairly common, very treatable and yet frequently lethal diseases. Major depression induces intense melancholic spells, whereas manic-depression, Manic-Depressive Illness and Creativity", "title": "" }, { "docid": "4825e492dc1b7b645a5b92dde0c766cd", "text": "This article shows how language processing is intimately tuned to input frequency. Examples are given of frequency effects in the processing of phonology, phonotactics, reading, spelling, lexis, morphosyntax, formulaic language, language comprehension, grammaticality, sentence production, and syntax. The implications of these effects for the representations and developmental sequence of SLA are discussed. Usage-based theories hold that the acquisition of language is exemplar based. It is the piecemeal learning of many thousands of constructions and the frequency-biased abstraction of regularities within them. Determinants of pattern productivity include the power law of practice, cue competition and constraint satisfaction, connectionist learning, and effects of type and token frequency. The regularities of language emerge from experience as categories and prototypical patterns. The typical route of emergence of constructions is from formula, through low-scope pattern, to construction. Frequency plays a large part in explaining sociolinguistic variation and language change. Learners’ sensitivity to frequency in all these domains has implications for theories of implicit and explicit learning and their interactions. The review concludes by considering the history of frequency as an explanatory concept in theoretical and applied linguistics, its 40 years of exile, and its necessary reinstatement as a bridging variable that binds the different schools of language acquisition research.", "title": "" } ]
subset: scidocsrr
query_id: 2bcbe92be31315c9fbab39a0684eb566
query: Exploiting Temporal and Social Factors for B2B Marketing Campaign Recommendations
[ { "docid": "13b887760a87bc1db53b16eb4fba2a01", "text": "Customer preferences for products are drifting over time. Product perception and popularity are constantly changing as new selection emerges. Similarly, customer inclinations are evolving, leading them to ever redefine their taste. Thus, modeling temporal dynamics should be a key when designing recommender systems or general customer preference models. However, this raises unique challenges. Within the eco-system intersecting multiple products and customers, many different characteristics are shifting simultaneously, while many of them influence each other and often those shifts are delicate and associated with a few data instances. This distinguishes the problem from concept drift explorations, where mostly a single concept is tracked. Classical time-window or instance-decay approaches cannot work, as they lose too much signal when discarding data instances. A more sensitive approach is required, which can make better distinctions between transient effects and long term patterns. The paradigm we offer is creating a model tracking the time changing behavior throughout the life span of the data. This allows us to exploit the relevant components of all data instances, while discarding only what is modeled as being irrelevant. Accordingly, we revamp two leading collaborative filtering recommendation approaches. Evaluation is made on a large movie rating dataset by Netflix. Results are encouraging and better than those previously reported on this dataset.", "title": "" }, { "docid": "0b6846c4dd89be21af70b144c93f7a7b", "text": "Most existing collaborative filtering models only consider the use of user feedback (e.g., ratings) and meta data (e.g., content, demographics). However, in most real world recommender systems, context information, such as time and social networks, are also very important factors that could be considered in order to produce more accurate recommendations. In this work, we address several challenges for the context aware movie recommendation tasks in CAMRa 2010: (1) how to combine multiple heterogeneous forms of user feedback? (2) how to cope with dynamic user and item characteristics? (3) how to capture and utilize social connections among users? For the first challenge, we propose a novel ranking based matrix factorization model to aggregate explicit and implicit user feedback. For the second challenge, we extend this model to a sequential matrix factorization model to enable time-aware parametrization. Finally, we introduce a network regularization function to constrain user parameters based on social connections. To the best of our knowledge, this is the first study that investigates the collective modeling of social and temporal dynamics. Experiments on the CAMRa 2010 dataset demonstrated clear improvements over many baselines.", "title": "" }, { "docid": "51dce19889df3ae51b6c12e3f2a47672", "text": "Existing recommender systems model user interests and the social influences independently. In reality, user interests may change over time, and as the interests change, new friends may be added while old friends grow apart and the new friendships formed may cause further interests change. This complex interaction requires the joint modeling of user interest and social relationships over time. In this paper, we propose a probabilistic generative model, called Receptiveness over Time Model (RTM), to capture this interaction. 
We design a Gibbs sampling algorithm to learn the receptiveness and interest distributions among users over time. The results of experiments on a real world dataset demonstrate that RTM-based recommendation outperforms the state-of-the-art recommendation methods. Case studies also show that RTM is able to discover the user interest shift and receptiveness change over time.", "title": "" }, { "docid": "8ca30cd6fd335024690837c137f0d1af", "text": "Non-negative matrix factorization (NMF) is a recently developed technique for finding parts-based, linear representations of non-negative data. Although it has successfully been applied in several applications, it does not always result in parts-based representations. In this paper, we show how explicitly incorporating the notion of ‘sparseness’ improves the found decompositions. Additionally, we provide complete MATLAB code both for standard NMF and for our extension. Our hope is that this will further the application of these methods to solving novel data-analysis problems.", "title": "" } ]
[ { "docid": "91cb5e59cb11f7d5ba3300cf4f00ff5d", "text": "Blockchain is a technology uniquely suited to support massive number of transactions and smart contracts within the Internet of Things (IoT) ecosystem, thanks to the decentralized accounting mechanism. In a blockchain network, the states of the accounts are stored and updated by the validator nodes, interconnected in a peer-to-peer fashion. IoT devices are characterized by relatively low computing capabilities and low power consumption, as well as sporadic and low-bandwidth wireless connectivity. An IoT device connects to one or more validator nodes to observe or modify the state of the accounts. In order to interact with the most recent state of accounts, a device needs to be synchronized with the blockchain copy stored by the validator nodes. In this work, we describe general architectures and synchronization protocols that enable synchronization of the IoT endpoints to the blockchain, with different communication costs and security levels. We model and analytically characterize the traffic generated by the synchronization protocols, and also investigate the power consumption and synchronization trade-off via numerical simulations. To the best of our knowledge, this is the first study that rigorously models the role of wireless connectivity in blockchain-powered IoT systems.", "title": "" }, { "docid": "ecc7f7c7c81645e7f2feeb6ac8d8f737", "text": "Worldwide, there are more than 10 million new cancer cases each year, and cancer is the cause of approximately 12% of all deaths. Given this, a large number of epidemiologic studies have been undertaken to identify potential risk factors for cancer, amongst which the association with trace elements has received considerable attention. Trace elements, such as selenium, zinc, arsenic, cadmium, and nickel, are found naturally in the environment, and human exposure derives from a variety of sources, including air, drinking water, and food. Trace elements are of particular interest given that the levels of exposure to them are potentially modifiable. In this review, we focus largely on the association between each of the trace elements noted above and risk of cancers of the lung, breast, colorectum, prostate, urinary bladder, and stomach. Overall, the evidence currently available appears to support an inverse association between selenium exposure and prostate cancer risk, and possibly also a reduction in risk with respect to lung cancer, although additional prospective studies are needed. There is also limited evidence for an inverse association between zinc and breast cancer, and again, prospective studies are needed to confirm this. Most studies have reported no association between selenium and risk of breast, colorectal, and stomach cancer, and between zinc and prostate cancer risk. There is compelling evidence in support of positive associations between arsenic and risk of both lung and bladder cancers, and between cadmium and lung cancer risk.", "title": "" }, { "docid": "d76246dfee7e2f3813e025ac34ffc354", "text": "Web usage mining is application of data mining techniques to discover usage patterns from web data, in order to better serve the needs of web based applications. The user access log files present very significant information about a web server. This paper is concerned with the in-depth analysis of Web Log Data of NASA website to find information about a web site, top errors, potential visitors of the site etc. 
which help system administrator and Web designer to improve their system by determining occurred systems errors, corrupted and broken links by using web using mining. The obtained results of the study will be used in the further development of the web site in order to increase its effectiveness.", "title": "" }, { "docid": "fcceec0849ed7f00a77b45f4297f2218", "text": "Image retargeting is a process to change the resolution of image while preserve interesting regions and avoid obvious visual distortion. In other words, it focuses on image content more than anything else that applies to filter the useful information for data analysis. Existing approaches may encounter difficulties on the various types of images since most of these approaches only consider 2D features, which are sensitive to the complexity of the contents in images. Researchers are now focusing on the RGB-D information, hoping depth information can help to promote the accuracy. However it is not easy to obtain the RGB-D image we need anywhere and how to utilize depth information is still at the exploration stage. In this paper, instead of using RGB-D data captured by 3D camera, we employ an iterative MRF learning model to predict depth information from a single still image. Then we propose our self-learning 3D saliency model based on the RGB-D data and apply it on the seam carving framework. In seam caving, the self-learning 3D saliency is combined with L1-norm of gradient for better seam searching. Experimental results demonstrate the advantages of our method using RGB-D data in the seam carving framework.", "title": "" }, { "docid": "c158e9421ec0d1265bd625b629e64dc5", "text": "This paper proposes a gateway framework for in-vehicle networks (IVNs) based on the controller area network (CAN), FlexRay, and Ethernet. The proposed gateway framework is designed to be easy to reuse and verify to reduce development costs and time. The gateway framework can be configured, and its verification environment is automatically generated by a program with a dedicated graphical user interface (GUI). The gateway framework provides state-of-the-art functionalities that include parallel reprogramming, diagnostic routing, network management (NM), dynamic routing update, multiple routing configuration, and security. The proposed gateway framework was developed, and its performance was analyzed and evaluated.", "title": "" }, { "docid": "ccd883caf9a4bc10db6ec67d033b22eb", "text": "In this paper, a quality model for object-oriented software and an automated metric tool, Reconfigurable Automated Metrics for Object-Oriented Software (RAMOOS) are proposed. The quality model is targeted at the maintainability and reusability aspects of software which can be effectively predicted from the source code. RAMOOS assists users in applying customized quality model during the development of software. In the beginning of adopting RAMOOS, a user may need to use his intuition to select or modify a system-recommended metric model to fit his specific software project needs. If the initial metrics do not meet the expectation, the user can retrive the saved intermediate results and perform further modification to the metric model. The verified model can then be applied to future similar projects.", "title": "" }, { "docid": "2282af5c9f4de5e0de2aae14c0a47840", "text": "The penetration of smart devices such as mobile phones, tabs has significantly changed the way people communicate. 
This has led to the growth of usage of social media tools such as twitter, facebook chats for communication. This has led to development of new challenges and perspectives in the language technologies research. Automatic processing of such texts requires us to develop new methodologies. Thus there is great need to develop various automatic systems such as information extraction, retrieval and summarization. Entity recognition is a very important sub task of Information extraction and finds its applications in information retrieval, machine translation and other higher Natural Language Processing (NLP) applications such as co-reference resolution. Some of the main issues in handling of such social media texts are i) Spelling errors ii) Abbreviated new language vocabulary such as “gr8” for great iii) use of symbols such as emoticons/emojis iv) use of meta tags and hash tags v) Code mixing. Entity recognition and extraction has gained increased attention in Indian research community. However there is no benchmark data available where all these systems could be compared on same data for respective languages in this new generation user generated text. Towards this we have organized the Code Mix Entity Extraction in social media text track for Indian languages (CMEE-IL) in the Forum for Information Retrieval Evaluation (FIRE). We present the overview of CMEE-IL 2016 track. This paper describes the corpus created for Hindi-English and Tamil-English. Here we also present overview of the approaches used by the participants. CCS Concepts • Computing methodologies ~ Artificial intelligence • Computing methodologies ~ Natural language processing • Information systems ~ Information extraction", "title": "" }, { "docid": "d69571c1614c3a078d36467d91a09bc6", "text": "In many species of oviparous reptiles, the first steps of gonadal sex differentiation depend on the incubation temperature of the eggs. Feminization of gonads by exogenous oestrogens at a male-producing temperature and masculinization of gonads by antioestrogens and aromatase inhibitors at a female-producing temperature have irrefutably demonstrated the involvement of oestrogens in ovarian differentiation. Nevertheless, several studies performed on the entire gonad/adrenal/mesonephros complex failed to find differences between male- and female-producing temperatures in oestrogen content, aromatase activity and aromatase gene expression during the thermosensitive period for sex determination. Thus, the key role of aromatase and oestrogens in the first steps of ovarian differentiation has been questioned, and extragonadal organs or tissues, such as adrenal, mesonephros, brain or yolk, were considered as possible targets of temperature and sources of the oestrogens acting on gonadal sex differentiation. In disagreement with this view, experiments and assays carried out on the gonads alone, i.e. separated from the adrenal/mesonephros, provide evidence that the gonads themselves respond to temperature shifts by modifying their sexual differentiation and are the site of aromatase activity and oestrogen synthesis during the thermosensitive period. Oestrogens act locally on both the cortical and the medullary part of the gonad to direct ovarian differentiation. We have concluded that there is no objective reason to search for the implication of other organs in the phenomenon of temperature-dependent sex determination in reptiles. 
From the comparison with data obtained in other vertebrates, we propose two main directions for future research: to examine how transcription of the aromatase gene is regulated and to identify molecular and cellular targets of oestrogens in gonads during sex differentiation, in species with strict genotypic sex determination and species with temperature-dependent sex determination.", "title": "" }, { "docid": "92963d6a511d5e0a767aa34f8932fe86", "text": "A 77-GHz transmit-array on dual-layer printed circuit board (PCB) is proposed for automotive radar applications. Coplanar patch unit-cells are etched on opposite sides of the PCB and connected by through-via. The unit-cells are arranged in concentric rings to form the transmit-array for 1-bit in-phase transmission. When combined with four-substrate-integrated waveguide (SIW) slot antennas as the primary feeds, the transmit-array is able to generate four beams with a specific coverage of ±15°. The simulated and measured results of the antenna prototype at 76.5 GHz agree well, with gain greater than 18.5 dBi. The coplanar structure significantly simplifies the transmit-array design and eases the fabrication, in particular, at millimeter-wave frequencies.", "title": "" }, { "docid": "4d56f134c2e2a597948bcf9b1cf37385", "text": "This paper focuses on semantic scene completion, a task for producing a complete 3D voxel representation of volumetric occupancy and semantic labels for a scene from a single-view depth map observation. Previous work has considered scene completion and semantic labeling of depth maps separately. However, we observe that these two problems are tightly intertwined. To leverage the coupled nature of these two tasks, we introduce the semantic scene completion network (SSCNet), an end-to-end 3D convolutional network that takes a single depth image as input and simultaneously outputs occupancy and semantic labels for all voxels in the camera view frustum. Our network uses a dilation-based 3D context module to efficiently expand the receptive field and enable 3D context learning. To train our network, we construct SUNCG - a manually created largescale dataset of synthetic 3D scenes with dense volumetric annotations. Our experiments demonstrate that the joint model outperforms methods addressing each task in isolation and outperforms alternative approaches on the semantic scene completion task. The dataset and code is available at http://sscnet.cs.princeton.edu.", "title": "" }, { "docid": "137b9760d265304560f1cac14edb7f21", "text": "Gallstones are solid particles formed from bile in the gall bladder. In this paper, we propose a technique to automatically detect Gallstones in ultrasound images, christened as, Automated Gallstone Segmentation (AGS) Technique. Speckle Noise in the ultrasound image is first suppressed using Anisotropic Diffusion Technique. The edges are then enhanced using Unsharp Filtering. NCUT Segmentation Technique is then put to use to segment the image. Afterwards, edges are detected using Sobel Edge Detection. Further, Edge Thickening Process is used to smoothen the edges and probability maps are generated using Floodfill Technique. Then, the image is scribbled using Automatic Scribbling Technique. 
Finally, we get the segmented gallstone within the gallbladder using the Closed Form Matting Technique.", "title": "" }, { "docid": "64122833d6fa0347f71a9abff385d569", "text": "We present a brief history and overview of statistical methods in frame-semantic parsing – the automatic analysis of text using the theory of frame semantics. We discuss how the FrameNet lexicon and frameannotated datasets have been used by statistical NLP researchers to build usable, state-of-the-art systems. We also focus on future directions in frame-semantic parsing research, and discuss NLP applications that could benefit from this line of work. 1 Frame-Semantic Parsing Frame-semantic parsing has been considered as the task of automatically finding semantically salient targets in text, disambiguating their semantic frame representing an event and scenario in discourse, and annotating arguments consisting of words or phrases in text with various frame elements (or roles). The FrameNet lexicon (Baker et al., 1998), an ontology inspired by the theory of frame semantics (Fillmore, 1982), serves as a repository of semantic frames and their roles. Figure 1 depicts a sentence with three evoked frames for the targets “million”, “created” and “pushed” with FrameNet frames and roles. Automatic analysis of text using framesemantic structures can be traced back to the pioneering work of Gildea and Jurafsky (2002). Although their experimental setup relied on a primitive version of FrameNet and only made use of “exemplars” or example usages of semantic frames (containing one target per sentence) as opposed to a “corpus” of sentences, it resulted in a flurry of work in the area of automatic semantic role labeling (Màrquez et al., 2008). However, the focus of semantic role labeling (SRL) research has mostly been on PropBank (Palmer et al., 2005) conventions, where verbal targets could evoke a “sense” frame, which is not shared across targets, making the frame disambiguation setup different from the representation in FrameNet. Furthermore, it is fair to say that early research on PropBank focused primarily on argument structure prediction, and the interaction between frame and argument structure analysis has mostly been unaddressed (Màrquez et al., 2008). There are exceptions, where the verb frame has been taken into account during SRL (Meza-Ruiz and Riedel, 2009; Watanabe et al., 2010). Moreoever, the CoNLL 2008 and 2009 shared tasks also include the verb and noun frame identification task in their evaluations, although the overall goal was to predict semantic dependencies based on PropBank, and not full argument spans (Surdeanu et al., 2008; Hajič", "title": "" }, { "docid": "6d26012bd529735410477c9f389bbf73", "text": "Most current planners assume complete domain models and focus on generating correct plans. Unfortunately, domain modeling is a laborious and error-prone task, thus real world agents have to plan with incomplete domain models. While domain experts cannot guarantee completeness, often they are able to circumscribe the incompleteness of the model by providing annotations as to which parts of the domain model may be incomplete. In this paper, we study planning problems with incomplete domain models where the annotations specify possible preconditions and effects of actions. We show that the problem of assessing the quality of a plan, or its plan robustness, is #P -complete, establishing its equivalence with the weighted model counting problems. We present two approaches to synthesizing robust plans. 
While the method based on the compilation to conformant probabilistic planning is much intuitive, its performance appears to be limited to only small problem instances. Our second approach based on stochastic heuristic search works well for much larger problems. It aims to use the robustness measure directly for estimating heuristic distance, which is then used to guide the search. Our planning system, PISA, outperforms a state-of-the-art planner handling incomplete domain models in most of the tested domains, both in terms of plan quality and planning time. Finally, we also present an extension of PISA called CPISA that is able to exploit the available of past successful plan traces to both improve the robustness of the synthesized plans and reduce the domain modeling burden.", "title": "" }, { "docid": "223d5658dee7ba628b9746937aed9bb3", "text": "A low-power receiver with a one-tap data and edge decision-feedback equalizer (DFE) and a clock recovery circuit is presented. The receiver employs analog adders for the tap-weight summation in both the data and the edge path to simultaneously optimize both the voltage and timing margins. A switched-capacitor input stage allows the receiver to be fully compatible with near-GND input levels without extra level conversion circuits. Furthermore, the critical path of the DFE is simplified to relax the timing margin. Fabricated in the 65-nm CMOS technology, a prototype DFE receiver shows that the data-path DFE extends the voltage and timing margins from 40 mVpp and 0.3 unit interval (UI), respectively, to 70 mVpp and 0.6 UI, respectively. Likewise, the edge-path equalizer reduces the uncertain sampling region (the edge region), which results in 17% reduction of the recovered clock jitter. The DFE core, including adders and samplers, consumes 1.1 mW from a 1.2-V supply while operating at 6.4 Gb/s.", "title": "" }, { "docid": "42392af599ce65f38748420353afc534", "text": "An innovative technology for the mass production ofstretchable printed circuit boards (SCBs) will bepresented in this paper. This technology makes itpossible for the first time to really integrate fine pitch,high performance electronic circuits easily into textilesand so may be the building block for a totally newgeneration of wearable electronic systems. Anoverview of the technology will be given andsubsequently a real system using SCB technology ispresented.", "title": "" }, { "docid": "aaa2c8a7367086cd762f52b6a6c30df6", "text": "Many mature term-based or pattern-based approaches have been used in the field of information filtering to generate users' information needs from a collection of documents. A fundamental assumption for these approaches is that the documents in the collection are all about one topic. However, in reality users' interests can be diverse and the documents in the collection often involve multiple topics. Topic modelling, such as Latent Dirichlet Allocation (LDA), was proposed to generate statistical models to represent multiple topics in a collection of documents, and this has been widely utilized in the fields of machine learning and information retrieval, etc. But its effectiveness in information filtering has not been so well explored. Patterns are always thought to be more discriminative than single terms for describing documents. 
However, the enormous amount of discovered patterns hinder them from being effectively and efficiently used in real applications, therefore, selection of the most discriminative and representative patterns from the huge amount of discovered patterns becomes crucial. To deal with the above mentioned limitations and problems, in this paper, a novel information filtering model, Maximum matched Pattern-based Topic Model (MPBTM), is proposed. The main distinctive features of the proposed model include: (1) user information needs are generated in terms of multiple topics; (2) each topic is represented by patterns; (3) patterns are generated from topic models and are organized in terms of their statistical and taxonomic features; and (4) the most discriminative and representative patterns, called Maximum Matched Patterns, are proposed to estimate the document relevance to the user's information needs in order to filter out irrelevant documents. Extensive experiments are conducted to evaluate the effectiveness of the proposed model by using the TREC data collection Reuters Corpus Volume 1. The results show that the proposed model significantly outperforms both state-of-the-art term-based models and pattern-based models.", "title": "" }, { "docid": "fb7961117dae98e770e0fe84c33673b9", "text": "Named-Entity Recognition (NER) aims at identifying the fragments of a given text that mention a given entity of interest. This manuscript presents our Minimal named-Entity Recognizer (MER), designed with flexibility, autonomy and efficiency in mind. To annotate a given text, MER only requires a lexicon (text file) with the list of terms representing the entities of interest; and a GNU Bash shell grep and awk tools. MER was deployed in a cloud infrastructure using multiple Virtual Machines to work as an annotation server and participate in the Technical Interoperability and Performance of annotation Servers (TIPS) task of BioCreative V.5. Preliminary results show that our solution processed each document (text retrieval and annotation) in less than 3 seconds on average without using any type of cache. MER is publicly available in a GitHub repository (https://github.com/lasigeBioTM/MER) and through a RESTful Web service (http://labs.fc.ul.pt/mer/).", "title": "" }, { "docid": "a513c25bccbeda0c4314213aea49668a", "text": "Identity recognition faces several challenges especially in extracting an individual's unique features from biometric modalities and pattern classifications. Electrocardiogram (ECG) waveforms, for instance, have unique identity properties for human recognition, and their signals are not periodic. At present, in order to generate a significant ECG feature set, nonfiducial methodologies based on an autocorrelation (AC) in conjunction with linear dimension reduction methods are used. This paper proposes a new non-fiducial framework for ECG biometric verification using kernel methods to reduce both high autocorrelation vectors' dimensionality and recognition system after denoising signals of 52 subjects with Discrete Wavelet Transform (DWT). The effects of different dimensionality reduction techniques for use in feature extraction were investigated to evaluate verification performance rates of a multi-class Support Vector Machine (SVM) with the One-Against-All (OAA) approach. 
The experimental results demonstrated higher test recognition rates of Gaussian OAA SVMs on random unknown ECG data sets with the use of the Kernel Principal Component Analysis (KPCA) as compared to the use of the Linear Discriminant Analysis (LDA) and Principal Component Analysis (PCA). Keyword: ECG biometric recognition; Non-fiducial feature extraction; Kernel methods; Dimensionality reduction; Gaussian OAA SVM", "title": "" }, { "docid": "2e0e53ff34dccd5412faab5b51a3a2f2", "text": "This study examines print and online daily newspaper journalists’ perceptions of the credibility of Internet news information, as well as the influence of several factors— most notably, professional role conceptions—on those perceptions. Credibility was measured as a multidimensional construct. The results of a survey of U.S. journalists (N = 655) show that Internet news information was viewed as moderately credible overall and that online newspaper journalists rated Internet news information as significantly more credible than did print newspaper journalists. Hierarchical regression analyses reveal that Internet reliance was a strong positive predictor of credibility. Two professional role conceptions also emerged as significant predictors. The populist mobilizer role conception was a significant positive predictor of online news credibility, while the adversarial role conception was a significant negative predictor. Demographic characteristics of print and online daily newspaper journalists did not influence their perceptions of online news credibility.", "title": "" }, { "docid": "a752279721e2bf6142a0ca34a1a708f3", "text": "Zika virus (ZIKV) is a mosquito-borne flavivirus first isolated in Uganda from a sentinel monkey in 1947. Mosquito and sentinel animal surveillance studies have demonstrated that ZIKV is endemic to Africa and Southeast Asia, yet reported human cases are rare, with <10 cases reported in the literature. In June 2007, an epidemic of fever and rash associated with ZIKV was detected in Yap State, Federated States of Micronesia. We report the genetic and serologic properties of the ZIKV associated with this epidemic.", "title": "" } ]
scidocsrr
7bd3876d9badd720037ed7ffece74b62
ARmatika: 3D game for arithmetic learning with Augmented Reality technology
[ { "docid": "ae4c9e5df340af3bd35ae5490083c72a", "text": "The massive technological advancements around the world have created significant challenging competition among companies where each of the companies tries to attract the customers using different techniques. One of the recent techniques is Augmented Reality (AR). The AR is a new technology which is capable of presenting possibilities that are difficult for other technologies to offer and meet. Nowadays, numerous augmented reality applications have been used in the industry of different kinds and disseminated all over the world. AR will really alter the way individuals view the world. The AR is yet in its initial phases of research and development at different colleges and high-tech institutes. Throughout the last years, AR apps became transportable and generally available on various devices. Besides, AR begins to occupy its place in our audio-visual media and to be used in various fields in our life in tangible and exciting ways such as news, sports and is used in many domains in our life such as electronic commerce, promotion, design, and business. In addition, AR is used to facilitate the learning whereas it enables students to access location-specific information provided through various sources. Such growth and spread of AR applications pushes organizations to compete one another, and every one of them exerts its best to gain the customers. This paper provides a comprehensive study of AR including its history, architecture, applications, current challenges and future trends.", "title": "" }, { "docid": "273153d0cf32162acb48ed989fa6d713", "text": "This article may be used for research, teaching, and private study purposes. Any substantial or systematic reproduction, redistribution, reselling, loan, sub-licensing, systematic supply, or distribution in any form to anyone is expressly forbidden. The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae, and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand, or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material.", "title": "" }, { "docid": "f1c00253a57236ead67b013e7ce94a5e", "text": "A meta-analysis of 128 studies examined the effects of extrinsic rewards on intrinsic motivation. As predicted, engagement-contingent, completion-contingent, and performance-contingent rewards significantly undermined free-choice intrinsic motivation (d = -0.40, -0.36, and -0.28, respectively), as did all rewards, all tangible rewards, and all expected rewards. Engagement-contingent and completion-contingent rewards also significantly undermined self-reported interest (d = -0.15, and -0.17), as did all tangible rewards and all expected rewards. Positive feedback enhanced both free-choice behavior (d = 0.33) and self-reported interest (d = 0.31). Tangible rewards tended to be more detrimental for children than college students, and verbal rewards tended to be less enhancing for children than college students. The authors review 4 previous meta-analyses of this literature and detail how this study's methods, analyses, and results differed from the previous ones.", "title": "" } ]
[ { "docid": "1eafc02a19766817536f3da89230b4cf", "text": "Basically, Bayesian Belief Networks (BBNs) as probabilistic tools provide suitable facilities for modelling process under uncertainty. A BBN applies a Directed Acyclic Graph (DAG) for encoding relations between all variables in state of problem. Finding the beststructure (structure learning) ofthe DAG is a classic NP-Hard problem in BBNs. In recent years, several algorithms are proposed for this task such as Hill Climbing, Greedy Thick Thinning and K2 search. In this paper, we introduced Simulated Annealing algorithm with complete details as new method for BBNs structure learning. Finally, proposed algorithm compared with other structure learning algorithms based on classification accuracy and construction time on valuable databases. Experimental results of research show that the simulated annealing algorithmis the bestalgorithmfrom the point ofconstructiontime but needs to more attention for classification process.", "title": "" }, { "docid": "82c8a692e3b39e58bd73997b2e922c2c", "text": "The traditional approaches to building survivable systems assume a framework of absolute trust requiring a provably impenetrable and incorruptible Trusted Computing Base (TCB). Unfortunately, we don’t have TCB’s, and experience suggests that we never will. We must instead concentrate on software systems that can provide useful services even when computational resource are compromised. Such a system will 1) Estimate the degree to which a computational resources may be trusted using models of possible compromises. 2) Recognize that a resource is compromised by relying on a system for long term monitoring and analysis of the computational infrastructure. 3) Engage in self-monitoring, diagnosis and adaptation to best achieve its purposes within the available infrastructure. All this, in turn, depends on the ability of the application, monitoring, and control systems to engage in rational decision making about what resources they should use in order to achieve the best ratio of expected benefit to risk.", "title": "" }, { "docid": "245204d71a7ba2f56897ccb67f26b595", "text": "The objective of the study is to describe distinguishing characteristics of commercial sexual exploitation of children/child sex trafficking victims (CSEC) who present for health care in the pediatric setting. This is a retrospective study of patients aged 12-18 years who presented to any of three pediatric emergency departments or one child protection clinic, and who were identified as suspected victims of CSEC. The sample was compared with gender and age-matched patients with allegations of child sexual abuse/sexual assault (CSA) without evidence of CSEC on variables related to demographics, medical and reproductive history, high-risk behavior, injury history and exam findings. There were 84 study participants, 27 in the CSEC group and 57 in the CSA group. Average age was 15.7 years for CSEC patients and 15.2 years for CSA patients; 100% of the CSEC and 94.6% of the CSA patients were female. The two groups significantly differed in 11 evaluated areas with the CSEC patients more likely to have had experiences with violence, substance use, running away from home, and involvement with child protective services and/or law enforcement. CSEC patients also had a longer history of sexual activity. 
Adolescent CSEC victims differ from sexual abuse victims without evidence of CSEC in their reproductive history, high risk behavior, involvement with authorities, and history of violence.", "title": "" }, { "docid": "38382c04e7dc46f5db7f2383dcae11fb", "text": "Motor schemas serve as the basic unit of behavior specification for the navigation of a mobile robot. They are multiple concurrent processes that operate in conjunction with associated perceptual schemas and contribute independently to the overall concerted action of the vehicle. The motivation behind the use of schemas for this domain is drawn from neuroscientific, psychological, and robotic sources. A variant of the potential field method is used to produce the appropriate velocity and steering commands for the robot. Simulation results and actual mobile robot experiments demonstrate the feasibility of this approach.", "title": "" }, { "docid": "fca196c6900f43cf6fd711f8748c6768", "text": "The fatigue fracture of structural details subjected to cyclic loads mostly occurs at a critical cross section with stress concentration. The welded joint is particularly dangerous location because of sinergetic harmful effects of stress concentration, tensile residual stresses, deffects, microstructural heterogeneity. Because of these reasons many methods for improving the fatigue resistance of welded joints are developed. Significant increase in fatigue strength and fatigue life was proved and could be attributed to improving weld toe profile, the material microstructure, removing deffects at the weld toe and modifying the original residual stress field. One of the most useful methods to improve fatigue behaviour of welded joints is TIG dressing. The magnitude of the improvement in fatigue performance depends on base material strength, type of welded joint and type of loading. Improvements of the fatigue behaviour of the welded joints in low-carbon structural steel treated by TIG dressing is considered in this paper.", "title": "" }, { "docid": "5b6f55af9994b2c2491344fca573502d", "text": "From times immemorial, colorants, and flavorings have been used in foods. Color and flavor are the major attributes to the quality of a food product, affecting the appearance and acceptance of the product. As a consequence of the increased demand of natural flavoring and colorant from industries, there is a renewed interest in the research on the composition and recovery of natural food flavors and colors. Over the years, numerous procedures have been proposed for the isolation of aromatic compounds and colors from plant materials. Generally, the methods of extraction followed for aroma and pigment from plant materials are solvent extraction, hydro-distillation, steam distillation, and super critical carbon dioxide extraction. The application of enzymes in the extraction of oil from oil seeds like sunflower, corn, coconut, olives, avocado etc. are reported in literature. There is a great potential for this enzyme-based extraction technology with the selection of appropriate enzymes with optimized operating conditions. Various enzyme combinations are used to loosen the structural integrity of botanical material thereby enhancing the extraction of the desired flavor and color components. Recently enzymes have been used for the extraction of flavor and color from plant materials, as a pre-treatment of the raw material before subjecting the plant material to hydro distillation/solvent extraction. 
A deep knowledge of enzymes, their mode of action, conditions for optimum activity, and selection of the right type of enzymes are essential to use them effectively for extraction. Although the enzyme hydrolases such as lipases, proteases (chymotrypsin, subtilisin, thermolysin, and papain), and esterases use water as a substrate for the reaction, they are also able to accept other nucleophiles such as alcohols, amines, thio-esters, and oximes. Advantages of enzyme-assisted extraction of flavor and color in some of the plant materials in comparison with conventional methods are dealt with in this review.", "title": "" }, { "docid": "46dc94fe4ba164ccf1cb37810112883f", "text": "The purpose of the study was to test four predictions derived from evolutionary (sexual strategies) theory. The central hypothesis was that men and women possess different emotional mechanisms that motivate and evaluate sexual activities. Consequently, even when women express indifference to emotional involvement and commitment and voluntarily engage in casual sexual relations, their goals, their feelings about the experience, and the associations between their sexual behavior and prospects for long-term investment differ significantly from those of men. Women's sexual behavior is associated with their perception of investment potential: long-term, short-term, and partners' ability and willingness to invest. For men, these associations are weaker or inversed. Regression analyses of survey data from 333 male and 363 female college students revealed the following: Greater permissiveness of sexual attitudes was positively associated with number of sex partners; this association was not moderated by sex of subject (Prediction 1); even when women deliberately engaged in casual sexual relations, thoughts that expressed worry and vulnerability crossed their minds; for females, greater number of partners was associated with increased worry-vulnerability whereas for males the trend was the opposite (Prediction 2); with increasing numbers of sex partners, marital thoughts decreased; this finding was not moderated by sex of subject; this finding did not support Prediction 3; for both males and females, greater number of partners was related to larger numbers of one-night stands, partners foreseen in the next 5 years, and deliberately casual sexual relations. This trend was significantly stronger for males than for females (Prediction 4).", "title": "" }, { "docid": "636f5002b3ced8a541df3e0568604f71", "text": "We report density functional theory (M06L) calculations including Poisson-Boltzmann solvation to determine the reaction pathways and barriers for the hydrogen evolution reaction (HER) on MoS2, using both a periodic two-dimensional slab and a Mo10S21 cluster model. We find that the HER mechanism involves protonation of the electron rich molybdenum hydride site (Volmer-Heyrovsky mechanism), leading to a calculated free energy barrier of 17.9 kcal/mol, in good agreement with the barrier of 19.9 kcal/mol estimated from the experimental turnover frequency. Hydronium protonation of the hydride on the Mo site is 21.3 kcal/mol more favorable than protonation of the hydrogen on the S site because the electrons localized on the Mo-H bond are readily transferred to form dihydrogen with hydronium. We predict the Volmer-Tafel mechanism in which hydrogen atoms bound to molybdenum and sulfur sites recombine to form H2 has a barrier of 22.6 kcal/mol. 

Starting with hydrogen atoms on adjacent sulfur atoms, the Volmer-Tafel mechanism goes instead through the M-H + S-H pathway. In discussions of metal chalcogenide HER catalysis, the S-H bond energy has been proposed as the critical parameter. However, we find that the sulfur-hydrogen species is not an important intermediate since the free energy of this species does not play a direct role in determining the effective activation barrier. Rather we suggest that the kinetic barrier should be used as a descriptor for reactivity, rather than the equilibrium thermodynamics. This is supported by the agreement between the calculated barrier and the experimental turnover frequency. These results suggest that to design a more reactive catalyst from edge exposed MoS2, one should focus on lowering the reaction barrier between the metal hydride and a proton from the hydronium in solution.", "title": "" }, { "docid": "eb3886f7e212f2921b3333a8e1b7b0ed", "text": "With the resurgence of head-mounted displays for virtual reality, users need new input devices that can accurately track their hands and fingers in motion. We introduce Finexus, a multipoint tracking system using magnetic field sensing. By instrumenting the fingertips with electromagnets, the system can track fine fingertip movements in real time using only four magnetic sensors. To keep the system robust to noise, we operate each electromagnet at a different frequency and leverage bandpass filters to distinguish signals attributed to individual sensing points. We develop a novel algorithm to efficiently calculate the 3D positions of multiple electromagnets from corresponding field strengths. In our evaluation, we report an average accuracy of 1.33 mm, as compared to results from an optical tracker. Our real-time implementation shows Finexus is applicable to a wide variety of human input tasks, such as writing in the air.", "title": "" }, { "docid": "19e070089a8495a437e81da50f3eb21c", "text": "Mobile payment refers to the use of mobile devices to conduct payment transactions. Users can use mobile devices for remote and proximity payments; moreover, they can purchase digital contents and physical goods and services. It offers an alternative payment method for consumers. However, there are relative low adoption rates in this payment method. This research aims to identify and explore key factors that affect the decision of whether to use mobile payments. Two well-established theories, the Technology Acceptance Model (TAM) and the Innovation Diffusion Theory (IDT), are applied to investigate user acceptance of mobile payments. Survey data from mobile payments users will be used to test the proposed hypothesis and the model.", "title": "" }, { "docid": "c71f3284872169d1f506927000df557b", "text": "Natural rewards and drugs of abuse can alter dopamine signaling, and ventral tegmental area (VTA) dopaminergic neurons are known to fire action potentials tonically or phasically under different behavioral conditions. However, without technology to control specific neurons with appropriate temporal precision in freely behaving mammals, the causal role of these action potential patterns in driving behavioral changes has been unclear. We used optogenetic tools to selectively stimulate VTA dopaminergic neuron action potential firing in freely behaving mammals. We found that phasic activation of these neurons was sufficient to drive behavioral conditioning and elicited dopamine transients with magnitudes not achieved by longer, lower-frequency spiking. 
These results demonstrate that phasic dopaminergic activity is sufficient to mediate mammalian behavioral conditioning.", "title": "" }, { "docid": "b1827b03bc37fde80f99b73b6547c454", "text": "When constructing the model of a word by collecting interval-valued data from a group of individuals, both interpersonal and intrapersonal uncertainties coexist. Similar to the interval type-2 fuzzy set (IT2 FS) used in the enhanced interval approach (EIA), the Cloud model characterized by only three parameters can manage both uncertainties. Thus, based on the Cloud model, this paper proposes a new representation model for a word from interval-valued data. In our proposed method, firstly, the collected data intervals are preprocessed to remove the bad ones. Secondly, the fuzzy statistical method is used to compute the histogram of the surviving intervals. Then, the generated histogram is fitted by a Gaussian curve function. Finally, the fitted results are mapped into the parameters of a Cloud model to obtain the parametric model for a word. Compared with eight or nine parameters needed by an IT2 FS, only three parameters are needed to represent a Cloud model. Therefore, we develop a much more parsimonious parametric model for a word based on the Cloud model. Generally a simpler representation model with less parameters usually means less computations and memory requirements in applications. Moreover, the comparison experiments with the recent EIA show that, our proposed method can not only obtain much thinner footprints of uncertainty (FOUs) but also capture sufficient uncertainties of words. 2013 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "4a837ccd9e392f8c7682446d9a3a3743", "text": "This paper investigates the applicability of Genetic Programming type systems to dynamic game environments. Grammatical Evolution was used to evolve Behaviour Trees, in order to create controllers for the Mario AI Benchmark. The results obtained reinforce the applicability of evolutionary programming systems to the development of artificial intelligence in games, and in dynamic systems in general, illustrating their viability as an alternative to more standard AI techniques.", "title": "" }, { "docid": "d365eceff514375d7ae19f70aec71c08", "text": "Importance\nSeveral studies now provide evidence of ketamine hydrochloride's ability to produce rapid and robust antidepressant effects in patients with mood and anxiety disorders that were previously resistant to treatment. Despite the relatively small sample sizes, lack of longer-term data on efficacy, and limited data on safety provided by these studies, they have led to increased use of ketamine as an off-label treatment for mood and other psychiatric disorders.\n\n\nObservations\nThis review and consensus statement provides a general overview of the data on the use of ketamine for the treatment of mood disorders and highlights the limitations of the existing knowledge. While ketamine may be beneficial to some patients with mood disorders, it is important to consider the limitations of the available data and the potential risk associated with the drug when considering the treatment option.\n\n\nConclusions and Relevance\nThe suggestions provided are intended to facilitate clinical decision making and encourage an evidence-based approach to using ketamine in the treatment of psychiatric disorders considering the limited information that is currently available. 
This article provides information on potentially important issues related to the off-label treatment approach that should be considered to help ensure patient safety.", "title": "" }, { "docid": "4620525bfbfd492f469e948b290d73a2", "text": "This thesis contains the complete end-to-end simulation, development, implementation, and calibration of the wide bandwidth, low-Q, Kiwi-SAS synthetic aperture sonar (SAS). Through the use of a very stable towfish, a new novel wide bandwidth transducer design, and autofocus procedures, high-resolution diffraction limited imagery is produced. As a complete system calibration was performed, this diffraction limited imagery is not only geometrically calibrated, it is also calibrated for target cross-section or target strength estimation. Is is important to note that the diffraction limited images are formed without access to any form of inertial measurement information. Previous investigations applying the synthetic aperture technique to sonar have developed processors based on exact, but inefficient, spatial-temporal domain time-delay and sum beamforming algorithms, or they have performed equivalent operations in the frequency domain using fast-correlation techniques (via the fast Fourier transform (FFT)). In this thesis, the algorithms used in the generation of synthetic aperture radar (SAR) images are derived in their wide bandwidth forms and it is shown that these more efficient algorithms can be used to form diffraction limited SAS images. Several new algorithms are developed; accelerated chirp scaling algorithm represents an efficient method for processing synthetic aperture data, while modified phase gradient autofocus and a low-Q autofocus routine based on prominent point processing are used to focus both simulated and real target data that has been corrupted by known and unknown motion or medium propagation errors.", "title": "" }, { "docid": "260fa16461d510094d810f04c333a220", "text": "We propose a novel VAE-based deep autoencoder model that can learn disentangled latent representations in a fully unsupervised manner, endowed with the ability to identify all meaningful sources of variation and their cardinality. Our model, dubbed Relevance-Factor-VAE, leverages the total correlation (TC) in the latent space to achieve the disentanglement goal, but also addresses the key issue of existing approaches which cannot distinguish between meaningful and nuisance factors of latent variation, often the source of considerable degradation in disentanglement performance. We tackle this issue by introducing the so-called relevance indicator variables that can be automatically learned from data, together with the VAE parameters. Our model effectively focuses the TC loss onto the relevant factors only by tolerating large prior KL divergences, a desideratum justified by our semi-parametric theoretical analysis. Using a suite of disentanglement metrics, including a newly proposed one, as well as qualitative evidence, we demonstrate that our model outperforms existing methods across several challenging benchmark datasets.", "title": "" }, { "docid": "4791e1e3ccde1260887d3a80ea4577b6", "text": "The fabulous results of Deep Convolution Neural Networks in computer vision and image analysis have recently attracted considerable attention from researchers of other application domains as well. In this paper we present NgramCNN, a neural network architecture we designed for sentiment analysis of long text documents. 
It uses pretrained word embeddings for dense feature representation and a very simple single-layer classifier. The complexity is encapsulated in feature extraction and selection parts that benefit from the effectiveness of convolution and pooling layers. For evaluation we utilized different kinds of emotional text datasets and achieved an accuracy of 91.2% on the popular IMDB movie reviews. NgramCNN is more accurate than similar shallow convolution networks or deeper recurrent networks that were used as baselines. In the future, we intend to generalize the architecture for state of the art results in sentiment analysis of variable-length texts.", "title": "" }, { "docid": "d82897a2778b3ef6ddfe062f2c778451", "text": "Inspired by the recent advances in deep learning, we propose a novel iterative belief propagation-convolutional neural network (BP-CNN) architecture to exploit noise correlation for channel decoding under correlated noise. The standard BP decoder is used to estimate the coded bits, followed by a CNN to remove the estimation errors of the BP decoder and obtain a more accurate estimation of the channel noise. Iterating between BP and CNN will gradually improve the decoding SNR and hence result in better decoding performance. To train a well-behaved CNN model, we define a new loss function which involves not only the accuracy of the noise estimation but also the normality test for the estimation errors, i.e., to measure how likely the estimation errors follow a Gaussian distribution. The introduction of the normality test to the CNN training shapes the residual noise distribution and further reduces the BER of the iterative decoding, compared to using the standard quadratic loss function. We carry out extensive experiments to analyze and verify the proposed framework.", "title": "" }, { "docid": "73aa720bebc5f2fa1930930fb4185490", "text": "A CMOS OTA-C notch filter for 50 Hz interference is presented in this paper. The OTAs work in the weak inversion region in order to achieve ultra low transconductance and power consumption. The circuits were designed using the SMIC mixed-signal 0.18 μm 1P6M process. The post-annotated simulation indicated an attenuation of 47.2 dB for power line interference and a 120 pW power consumption. The design achieved a dynamic range of 75.8 dB and a THD of 0.1%, whilst the input signal was a 1 Hz 20 mVpp sine wave.", "title": "" }, { "docid": "d2d39b17b4047dd43e19ac4272b31c7e", "text": "Lignocellulose is a term for plant materials that are composed of matrices of cellulose, hemicellulose, and lignin. Lignocellulose is a renewable feedstock for many industries. Lignocellulosic materials are used for the production of paper, fuels, and chemicals. Typically, industry focuses on transforming the polysaccharides present in lignocellulose into products, resulting in the incomplete use of this resource. The materials that are not completely used make up the underutilized streams of materials that contain cellulose, hemicellulose, and lignin. These underutilized streams have potential for conversion into valuable products. Treatment of these lignocellulosic streams with bacteria, which specifically degrade lignocellulose through the action of enzymes, offers a low-energy and low-cost method for biodegradation and bioconversion. This review describes lignocellulosic streams and summarizes different aspects of biological treatments including the bacteria isolated from lignocellulose-containing environments and enzymes which may be used for bioconversion. 

The chemicals produced during bioconversion can be used for a variety of products including adhesives, plastics, resins, food additives, and petrochemical replacements.", "title": "" } ]
scidocsrr
3d1eb27f60fcf8f1d45261a55471eb48
Network Intrusion Detection Using Hybrid Simplified Swarm Optimization and Random Forest Algorithm on NSL-KDD Dataset
[ { "docid": "320c7c49dd4341cca532fa02965ef953", "text": "During the last decade, anomaly detection has attracted the attention of many researchers to overcome the weakness of signature-based IDSs in detecting novel attacks, and KDDCUP'99 is the mostly widely used data set for the evaluation of these systems. Having conducted a statistical analysis on this data set, we found two important issues which highly affects the performance of evaluated systems, and results in a very poor evaluation of anomaly detection approaches. To solve these issues, we have proposed a new data set, NSL-KDD, which consists of selected records of the complete KDD data set and does not suffer from any of mentioned shortcomings.", "title": "" }, { "docid": "11a2882124e64bd6b2def197d9dc811a", "text": "1 Abstract— Clustering is the most acceptable technique to analyze the raw data. Clustering can help detect intrusions when our training data is unlabeled, as well as for detecting new and unknown types of intrusions. In this paper we are trying to analyze the NSL-KDD dataset using Simple K-Means clustering algorithm. We tried to cluster the dataset into normal and four of the major attack categories i.e. DoS, Probe, R2L, U2R. Experiments are performed in WEKA environment. Results are verified and validated using test dataset. Our main objective is to provide the complete analysis of NSL-KDD intrusion detection dataset.", "title": "" }, { "docid": "7b05751aa3257263e7f1a8a6f1e2ff7e", "text": "Intrusion Detection System (IDS) that turns to be a vital component to secure the network. The lack of regular updation, less capability to detect unknown attacks, high non adaptable false alarm rate, more consumption of network resources etc., makes IDS to compromise. This paper aims to classify the NSL-KDD dataset with respect to their metric data by using the best six data mining classification algorithms like J48, ID3, CART, Bayes Net, Naïve Bayes and SVM to find which algorithm will be able to offer more testing accuracy. NSL-KDD dataset has solved some of the inherent limitations of the available KDD’99 dataset. KeywordsIDS, KDD, Classification Algorithms, PCA etc.", "title": "" }, { "docid": "305efd1823009fe79c9f8ff52ddb5724", "text": "We explore the problem of classifying images by the object categories they contain in the case of a large number of object categories. To this end we combine three ingredients: (i) shape and appearance representations that support spatial pyramid matching over a region of interest. This generalizes the representation of Lazebnik et al., (2006) from an image to a region of interest (ROI), and from appearance (visual words) alone to appearance and local shape (edge distributions); (ii) automatic selection of the regions of interest in training. This provides a method of inhibiting background clutter and adding invariance to the object instance 's position; and (iii) the use of random forests (and random ferns) as a multi-way classifier. The advantage of such classifiers (over multi-way SVM for example) is the ease of training and testing. Results are reported for classification of the Caltech-101 and Caltech-256 data sets. We compare the performance of the random forest/ferns classifier with a benchmark multi-way SVM classifier. 
It is shown that selecting the ROI adds about 5% to the performance and, together with the other improvements, the result is about a 10% improvement over the state of the art for Caltech-256.", "title": "" }, { "docid": "035b2296835a9c4a7805ba446760071e", "text": "Intrusion detection is the process of monitoring the events occurring in a computer system or network and analyzing them for signs of intrusions, defined as attempts to compromise the confidentiality, integrity, availability, or to bypass the security mechanisms of a computer or network. This paper proposes the development of an Intrusion Detection Program (IDP) which could detect known attack patterns. An IDP does not eliminate the use of any preventive mechanism but it works as the last defensive mechanism in securing the system. Three variants of genetic programming techniques namely Linear Genetic Programming (LGP), Multi-Expression Programming (MEP) and Gene Expression Programming (GEP) were evaluated to design IDP. Several indices are used for comparisons and a detailed analysis of MEP technique is provided. Empirical results reveal that genetic programming technique could play a major role in developing IDP, which are light weight and accurate when compared to some of the conventional intrusion detection systems based on machine learning paradigms.", "title": "" } ]
[ { "docid": "2518564949f7488a7f01dff74e3b6e2d", "text": "Although it is commonly believed that women are kinder and more cooperative than men, there is conflicting evidence for this assertion. Current theories of sex differences in social behavior suggest that it may be useful to examine in what situations men and women are likely to differ in cooperation. Here, we derive predictions from both sociocultural and evolutionary perspectives on context-specific sex differences in cooperation, and we conduct a unique meta-analytic study of 272 effect sizes-sampled across 50 years of research-on social dilemmas to examine several potential moderators. The overall average effect size is not statistically different from zero (d = -0.05), suggesting that men and women do not differ in their overall amounts of cooperation. However, the association between sex and cooperation is moderated by several key features of the social context: Male-male interactions are more cooperative than female-female interactions (d = 0.16), yet women cooperate more than men in mixed-sex interactions (d = -0.22). In repeated interactions, men are more cooperative than women. Women were more cooperative than men in larger groups and in more recent studies, but these differences disappeared after statistically controlling for several study characteristics. We discuss these results in the context of both sociocultural and evolutionary theories of sex differences, stress the need for an integrated biosocial approach, and outline directions for future research.", "title": "" }, { "docid": "6d411b994567b18ea8ab9c2b9622e7f5", "text": "Nearly half a century ago, psychiatrist John Bowlby proposed that the instinctual behavioral system that underpins an infant’s attachment to his or her mother is accompanied by ‘‘internal working models’’ of the social world—models based on the infant’s own experience with his or her caregiver (Bowlby, 1958, 1969/1982). These mental models were thought to mediate, in part, the ability of an infant to use the caregiver as a buffer against the stresses of life, as well as the later development of important self-regulatory and social skills. Hundreds of studies now testify to the impact of caregivers’ behavior on infants’ behavior and development: Infants who most easily seek and accept support from their parents are considered secure in their attachments and are more likely to have received sensitive and responsive caregiving than insecure infants; over time, they display a variety of socioemotional advantages over insecure infants (Cassidy & Shaver, 1999). Research has also shown that, at least in older children and adults, individual differences in the security of attachment are indeed related to the individual’s representations of social relations (Bretherton & Munholland, 1999). Yet no study has ever directly assessed internal working models of attachment in infancy. In the present study, we sought to do so.", "title": "" }, { "docid": "4fa1054bd78a624f68a0f62840542457", "text": "The ReWalkTM powered exoskeleton assists thoracic level motor complete spinal cord injury patients who are paralyzed to walk again with an independent, functional, upright, reciprocating gait. We completed an evaluation of twelve such individuals with promising results. All subjects met basic criteria to be able to use the ReWalkTM - including items such as sufficient bone mineral density, leg passive range of motion, strength, body size and weight limits. 
All subjects received approximately the same number of training sessions. However, there was a wide distribution in walking ability. Walking velocities ranged from under 0.1 m/s to approximately 0.5 m/s. This variability was not completely explained by injury level. The remaining sources of that variability are not clear at present. This paper reports our preliminary analysis of how the walking kinematics differed across the subjects - as a first step to understand the possible contribution to the velocity range and determine if the subjects who did not walk as well could be taught to improve by mimicking the better walkers.", "title": "" }, { "docid": "cfea41d4bc6580c91ee27201360f8e17", "text": "It is common sense that cloud-native applications (CNA) are intentionally designed for the cloud. Although this understanding can be broadly used, it does not guide and explain what a cloud-native application exactly is. The term ”cloud-native” was used quite frequently in the early days of cloud computing (2006), which seems somehow obvious nowadays. But the term disappeared almost completely. Suddenly, in the last years, the term is used again more and more frequently and shows increasing momentum. This paper summarizes the outcomes of a systematic mapping study analyzing research papers covering ”cloud-native” topics, research questions and engineering methodologies. We summarize research focuses and trends dealing with cloud-native application engineering approaches. Furthermore, we provide a definition for the term ”cloud-native application” which takes all findings, insights of analyzed publications and already existing and well-defined terminology into account.", "title": "" }, { "docid": "73b150681d7de50ada8e046a3027085f", "text": "We introduce a new model, the Recurrent Entity Network (EntNet). It is equipped with a dynamic long-term memory which allows it to maintain and update a representation of the state of the world as it receives new data. For language understanding tasks, it can reason on-the-fly as it reads text, not just when it is required to answer a question or respond as is the case for a Memory Network (Sukhbaatar et al., 2015). Like a Neural Turing Machine or Differentiable Neural Computer (Graves et al., 2014; 2016) it maintains a fixed size memory and can learn to perform location and content-based read and write operations. However, unlike those models it has a simple parallel architecture in which several memory locations can be updated simultaneously. The EntNet sets a new state-of-the-art on the bAbI tasks, and is the first method to solve all the tasks in the 10k training examples setting. We also demonstrate that it can solve a reasoning task which requires a large number of supporting facts, which other methods are not able to solve, and can generalize past its training horizon. It can also be practically used on large scale datasets such as Children's Book Test, where it obtains competitive performance, reading the story in a single pass.", "title": "" }, { "docid": "290796519b7757ce7ec0bf4d37290eed", "text": "A freely available English thesaurus of related words is presented that has been automatically compiled by analyzing the distributional similarities of words in the British National Corpus. The quality of the results has been evaluated by comparison with human judgments as obtained from non-native and native speakers of English who were asked to provide rankings of word similarities. 

According to this measure, the results generated by our system are better than the judgments of the non-native speakers and come close to the native speakers’ performance. An advantage of our approach is that it does not require syntactic parsing and therefore can be more easily adapted to other languages. As an example, a similar thesaurus for German has already been completed.", "title": "" }, { "docid": "10a33d5a75419519ce1177f6711b749c", "text": "Perianal fistulizing Crohn's disease has a major negative effect on patient quality of life and is a predictor of poor long-term outcomes. Factors involved in the pathogenesis of perianal fistulizing Crohn's disease include an increased production of transforming growth factor β, TNF and IL-13 in the inflammatory infiltrate that induce epithelial-to-mesenchymal transition and upregulation of matrix metalloproteinases, leading to tissue remodelling and fistula formation. Care of patients with perianal Crohn's disease requires a multidisciplinary approach. A complete assessment of fistula characteristics is the basis for optimal management and must include the clinical evaluation of fistula openings, endoscopic assessment of the presence of proctitis, and MRI to determine the anatomy of fistula tracts and presence of abscesses. Local injection of mesenchymal stem cells can induce remission in patients not responding to medical therapies, or to avoid the exposure to systemic immunosuppression in patients naive to biologics in the absence of active luminal disease. Surgery is still required in a high proportion of patients and should not be delayed when criteria for drug failure is met. In this Review, we provide an up-to-date overview on the pathogenesis and diagnosis of fistulizing Crohn's disease, as well as therapeutic strategies.", "title": "" }, { "docid": "872f556cb441d9c8976e2bf03ebd62ee", "text": "Monitoring is an issue of primary concern in current and next generation networked systems. For ex, the objective of sensor networks is to monitor their surroundings for a variety of different applications like atmospheric conditions, wildlife behavior, and troop movements among others. Similarly, monitoring in data networks is critical not only for accounting and management, but also for detecting anomalies and attacks. Such monitoring applications are inherently continuous and distributed, and must be designed to minimize the communication overhead that they introduce. In this context we introduce and study a fundamental class of problems called \"thresholded counts\" where we must return the aggregate frequency count of an event that is continuously monitored by distributed nodes with a user-specified accuracy whenever the actual count exceeds a given threshold value.In this paper we propose to address the problem of thresholded counts by setting local thresholds at each monitoring node and initiating communication only when the locally observed data exceeds these local thresholds. We explore algorithms in two categories: static and adaptive thresholds. In the static case, we consider thresholds based on a linear combination of two alternate strategies, and show that there exists an optimal blend of the two strategies that results in minimum communication overhead. We further show that this optimal blend can be found using a steepest descent search. In the adaptive case, we propose algorithms that adjust the local thresholds based on the observed distributions of updated information. 
We use extensive simulations not only to verify the accuracy of our algorithms and validate our theoretical results, but also to evaluate the performance of our algorithms. We find that both approaches yield significant savings over the naive approach of centralized processing.", "title": "" }, { "docid": "da4699d1e358bebc822b059b568916a8", "text": "An InterCloud is an interconnected global “cloud of clouds” that enables each cloud to tap into resources of other clouds. This is the earliest work to devise an agent-based InterCloud economic model for analyzing consumer-to-cloud and cloud-to-cloud interactions. While economic encounters between consumers and cloud providers are modeled as a many-to-many negotiation, economic encounters among clouds are modeled as a coalition game. To bolster many-to-many consumer-to-cloud negotiations, this work devises a novel interaction protocol and a novel negotiation strategy that is characterized by both 1) adaptive concession rate (ACR) and 2) minimally sufficient concession (MSC). Mathematical proofs show that agents adopting the ACR-MSC strategy negotiate optimally because they make minimum amounts of concession. By automatically controlling concession rates, empirical results show that the ACR-MSC strategy is efficient because it achieves significantly higher utilities than the fixed-concession-rate time-dependent strategy. To facilitate the formation of InterCloud coalitions, this work devises a novel four-stage cloud-to-cloud interaction protocol and a set of novel strategies for InterCloud agents. Mathematical proofs show that these InterCloud coalition formation strategies 1) converge to a subgame perfect equilibrium and 2) result in every cloud agent in an InterCloud coalition receiving a payoff that is equal to its Shapley value.", "title": "" }, { "docid": "838bd8a38f9d67d768a34183c72da07d", "text": "Jacobsen syndrome (JS), a rare disorder with multiple dysmorphic features, is caused by the terminal deletion of chromosome 11q. Typical features include mild to moderate psychomotor retardation, trigonocephaly, facial dysmorphism, cardiac defects, and thrombocytopenia, though none of these features are invariably present. The estimated occurrence of JS is about 1/100,000 births. The female/male ratio is 2:1. The patient admitted to our clinic at 3.5 years of age with a cardiac murmur and facial anomalies. Facial anomalies included trigonocephaly with bulging forehead, hypertelorism, telecanthus, downward slanting palpebral fissures, and a carp-shaped mouth. The patient also had strabismus. An echocardiogram demonstrated perimembranous aneurysmatic ventricular septal defect and a secundum atrial defect. The patient was <3rd percentile for height and weight and showed some developmental delay. Magnetic resonance imaging (MRI) showed hyperintensive gliotic signal changes in periventricular cerebral white matter, and leukodystrophy was suspected. Chromosomal analysis of the patient showed terminal deletion of chromosome 11. The karyotype was designated 46, XX, del(11) (q24.1). A review of published reports shows that the severity of the observed clinical abnormalities in patients with JS is not clearly correlated with the extent of the deletion. Most of the patients with JS had short stature, and some of them had documented growth hormone deficiency, or central or primary hypothyroidism. 
In patients with the classical phenotype, the diagnosis is suspected on the basis of clinical findings: intellectual disability, facial dysmorphic features and thrombocytopenia. The diagnosis must be confirmed by cytogenetic analysis. For patients who survive the neonatal period and infancy, the life expectancy remains unknown. In this report, we describe a patient with the clinical features of JS without thrombocytopenia. To our knowledge, this is the first case reported from Turkey.", "title": "" }, { "docid": "d7635b011cef61fe6487a823c0d09301", "text": "The present letter describes the design of an energy harvesting circuit on a one-sided directional flexible planar antenna. The circuit is composed of a flexible antenna with an impedance matching circuit, a resonant circuit, and a booster circuit for converting and boosting radio frequency power into a dc voltage. The proposed one-sided directional flexible antenna has a bottom floating metal layer that enables one-sided radiation and easy connection of the booster circuit to the metal layer. The simulated output dc voltage is 2.89 V for an input of 100 mV and a 50 Ω power source at 900 MHz, and power efficiency is 58.7% for 1.0 × 107 Ω load resistance.", "title": "" }, { "docid": "57e71550633cdb4a37d3fa270f0ad3a7", "text": "Classifiers based on sparse representations have recently been shown to provide excellent results in many visual recognition and classification tasks. However, the high cost of computing sparse representations at test time is a major obstacle that limits the applicability of these methods in large-scale problems, or in scenarios where computational power is restricted. We consider in this paper a simple yet efficient alternative to sparse coding for feature extraction. We study a classification scheme that applies the soft-thresholding nonlinear mapping in a dictionary, followed by a linear classifier. A novel supervised dictionary learning algorithm tailored for this low complexity classification architecture is proposed. The dictionary learning problem, which jointly learns the dictionary and linear classifier, is cast as a difference of convex (DC) program and solved efficiently with an iterative DC solver. We conduct experiments on several datasets, and show that our learning algorithm that leverages the structure of the classification problem outperforms generic learning procedures. Our simple classifier based on soft-thresholding also competes with the recent sparse coding classifiers, when the dictionary is learned appropriately. The adopted classification scheme further requires less computational time at the testing stage, compared to other classifiers. The proposed scheme shows the potential of the adequately trained soft-thresholding mapping for classification and paves the way towards the development of very efficient classification methods for vision problems.", "title": "" }, { "docid": "88b0d223ccff042d20148abf79599102", "text": "Lack of performance when it comes to continual learning over non-stationary distributions of data remains a major challenge in scaling neural network learning to more human realistic settings. In this work we propose a new conceptualization of the continual learning problem in terms of a trade-off between transfer and interference. We then propose a new algorithm, Meta-Experience Replay (MER), that directly exploits this view by combining experience replay with optimization based meta-learning. 
This method learns parameters that make interference based on future gradients less likely and transfer based on future gradients more likely. We conduct experiments across continual lifelong supervised learning benchmarks and non-stationary reinforcement learning environments demonstrating that our approach consistently outperforms recently proposed baselines for continual learning. Our experiments show that the gap between the performance of MER and baseline algorithms grows both as the environment gets more non-stationary and as the fraction of the total experiences stored gets smaller. 1 SOLVING THE CONTINUAL LEARNING PROBLEM A long-held goal of AI is to build agents capable of operating autonomously for long periods. Such agents must incrementally learn and adapt to a changing environment while maintaining memories of what they have learned before, a setting known as lifelong learning (Thrun, 1994; 1996). In this paper we explore a variant called continual learning (Ring, 1994; Lopez-Paz & Ranzato, 2017). Continual learning assumes that the learner is exposed to a sequence of tasks, where each task is a sequence of experiences from the same distribution. We would like to develop a solution in this setting by discovering notions of tasks without supervision while learning incrementally after every experience. This is challenging because in standard offline single task and multi-task learning (Caruana, 1997) it is implicitly assumed that the data is drawn from an i.i.d. stationary distribution. Neural networks tend to struggle whenever this is not the case (Goodrich, 2015). Over the years, solutions to the continual learning problem have been largely driven by prominent conceptualizations of the issues faced by neural networks. One popular view is catastrophic forgetting (interference) (McCloskey & Cohen, 1989), in which the primary concern is the lack of stability in neural networks, and the main solution is to limit the extent of weight sharing across experiences by focusing on preserving past knowledge (Kirkpatrick et al., 2017; Zenke et al., 2017; Lee et al., 2017). Another popular and more complex conceptualization is the stability-plasticity dilemma (Carpenter & Grossberg, 1987). In this view, the primary concern is the balance between network stability (to preserve past knowledge) and plasticity (to rapidly learn the current experience). Recently proposed techniques focus on balancing limited weight sharing with some mechanism to ensure fast learning (Li & Hoiem, 2016; Riemer et al., 2016; Lopez-Paz & Ranzato, 2017; Rosenbaum et al., 2018; Lee et al., 2018; Serrà et al., 2018). In this paper, we extend this view.", "title": "" }, { "docid": "0307912d034d4cbfef7cafb79ea9f9b3", "text": "This survey focuses on recognition performed by matching models of the three-dimensional shape of the face, either alone or in combination with matching corresponding two-dimensional intensity images. Research trends to date are summarized, and challenges confronting the development of more accurate three-dimensional face recognition are identified. These challenges include the need for better sensors, improved recognition algorithms, and more rigorous experimental methodology. 

2005 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "b3f2c1736174eda75f7eedb3cee2a729", "text": "Stochastic local search (SLS) algorithms are well known for their ability to efficiently find models of random instances of the Boolean satisfiability (SAT) problem. One of the most famous SLS algorithms for SAT is WalkSAT, which is an initial algorithm that has wide influence and performs very well on random 3-SAT instances. However, the performance of WalkSAT on random k-SAT instances with k > 3 lags far behind. Indeed, there are limited works on improving SLS algorithms for such instances. This work takes a good step toward this direction. We propose a novel concept namely multilevel make. Based on this concept, we design a scoring function called linear make, which is utilized to break ties in WalkSAT, leading to a new algorithm called WalkSATlm. Our experimental results show that WalkSATlm improves WalkSAT by orders of magnitude on random k-SAT instances with k > 3 near the phase transition. Additionally, we propose an efficient implementation for WalkSATlm, which leads to a speedup of 100%. We also give some insights on different forms of linear make functions, and show the limitation of the linear make function on random 3-SAT through theoretical analysis.", "title": "" }, { "docid": "b4a784bb8eb714afc86f1eee4f0a20ed", "text": "Warthin tumor (papillary cystadenoma lymphomatosum) is a benign salivary gland tumor involving almost exclusively the parotid gland. The lip is a very unusual location for this type of tumor, which develops only rarely in minor salivary glands. The case of 42-year-old woman with Warthin tumor arising in minor salivary glands of the upper lip is reported.", "title": "" }, { "docid": "86f25f09b801d28ce32f1257a39ddd44", "text": "Modern mobile devices have access to a wealth of data suitable for learning models, which in turn can greatly improve the user experience on the device. For example, language models can improve speech recognition and text entry, and image models can automatically select good photos. However, this rich data is often privacy sensitive, large in quantity, or both, which may preclude logging to the data-center and training there using conventional approaches. We advocate an alternative that leaves the training data distributed on the mobile devices, and learns a shared model by aggregating locally-computed updates. We term this decentralized approach Federated Learning. We present a practical method for the federated learning of deep networks that proves robust to the unbalanced and non-IID data distributions that naturally arise. This method allows high-quality models to be trained in relatively few rounds of communication, the principal constraint for federated learning. The key insight is that despite the non-convex loss functions we optimize, parameter averaging over updates from multiple clients produces surprisingly good results, for example decreasing the communication needed to train an LSTM language model by two orders of magnitude.", "title": "" }, { "docid": "7e647cac9417bf70acd8c0b4ee0faa9b", "text": "Global Navigation Satellite Systems (GNSS) are applicable to deliver train locations in real time. This train localization function should comply with railway functional safety standards; thus, the GNSS performance needs to be evaluated in consistent with railway EN 50126 standard [Reliability, Availability, Maintainability, and Safety (RAMS)]. 
This paper demonstrates the performance of the GNSS receiver for train localization. First, the GNSS performance and railway RAMS properties are compared by definitions. Second, the GNSS receiver measurements are categorized into three states (i.e., up, degraded, and faulty states). The relations between the states are illustrated in a stochastic Petri net model. Finally, the performance properties are evaluated using real data collected on the railway track in High Tatra Mountains in Slovakia. The property evaluation is based on the definitions represented by the modeled states.", "title": "" }, { "docid": "1347e22f1b3afe4ce6cd40f25770a465", "text": "Contextual bandit algorithms provide principled online learning solutions to find optimal trade-offs between exploration and exploitation with companion side-information. They have been extensively used in many important practical scenarios, such as display advertising and content recommendation. A common practice estimates the unknown bandit parameters pertaining to each user independently. This unfortunately ignores dependency among users and thus leads to suboptimal solutions, especially for the applications that have strong social components.\n In this paper, we develop a collaborative contextual bandit algorithm, in which the adjacency graph among users is leveraged to share context and payoffs among neighboring users while online updating. We rigorously prove an improved upper regret bound of the proposed collaborative bandit algorithm comparing to conventional independent bandit algorithms. Extensive experiments on both synthetic and three large-scale real-world datasets verified the improvement of our proposed algorithm against several state-of-the-art contextual bandit algorithms.", "title": "" }, { "docid": "854bd77e534e0bb53953edb708c867b1", "text": "About 60-GHz millimeter wave (mmWave) unlicensed frequency band is considered as a key enabler for future multi-Gbps WLANs. IEEE 802.11ad (WiGig) standard has been ratified for 60-GHz wireless local area networks (WLANs) by only considering the use case of peer to peer (P2P) communication coordinated by a single WiGig access point (AP). However, due to 60-GHz fragile channel, multiple number of WiGig APs should be installed to fully cover a typical target environment. Nevertheless, the exhaustive search beamforming training and the maximum received power-based autonomous users association prevent WiGig APs from establishing optimal WiGig concurrent links using random access. In this paper, we formulate the problem of WiGig concurrent transmissions in random access scenarios as an optimization problem, and then we propose a greedy scheme based on (2.4/5 GHz) Wi-Fi/(60 GHz) WiGig coordination to find out a suboptimal solution for it. In the proposed WLAN, the wide coverage Wi-Fi band is used to provide the control signalling required for launching the high date rate WiGig concurrent links. Besides, statistical learning using Wi-Fi fingerprinting is utilized to estimate the suboptimal candidate AP along with its suboptimal beam direction for establishing the WiGig concurrent link without causing interference to the existing WiGig data links while maximizing the total system throughput. Numerical analysis confirms the high impact of the proposed Wi-Fi/WiGig coordinated WLAN.", "title": "" } ]
scidocsrr
0c56ff755afba097645800990f749c55
Design of a Wideband Planar Printed Quasi-Yagi Antenna Using Stepped Connection Structure
[ { "docid": "6661cc34d65bae4b09d7c236d0f5400a", "text": "In this letter, we present a novel coplanar waveguide fed quasi-Yagi antenna with broad bandwidth. The uniqueness of this design is due to its simple feed selection and despite this, its achievable bandwidth. The 10 dB return loss bandwidth of the antenna is 44% covering X-band. The antenna is realized on a high dielectric constant substrate and is compatible with microstrip circuitry and active devices. The gain of the antenna is 7.4 dBi, the front-to-back ratio is 15 dB and the nominal efficiency of the radiator is 95%.", "title": "" }, { "docid": "5f40ac6afd39e3d2fcbc5341bc3af7b4", "text": "We present a modified quasi-Yagi antenna for use in WLAN access points. The antenna uses a new microstrip-to-coplanar strip (CPS) transition, consisting of a tapered microstrip input, T-junction, conventional 50-ohm microstrip line, and three artificial transmission line (ATL) sections. The design concept, mode conversion scheme, and simulated and experimental S-parameters of the transition are discussed first. It features a compact size, and a 3dB-insertion loss bandwidth of 78.6%. Based on the transition, a modified quasi-Yagi antenna is demonstrated. In addition to the new transition, the antenna consists of a CPS feed line, a meandered dipole, and a parasitic element. The meandered dipole can substantially increase to the front-to-back ratio of the antenna without sacrificing the operating bandwidth. The parasitic element is placed in close proximity to the driven element to improve impedance bandwidth and radiation characteristics. The antenna exhibits excellent end-fire radiation with a front-to-back ratio of greater than 15 dB. It features a moderate gain around 4 dBi, and a fractional bandwidth of 38.3%. We carefully investigate the concept, methodology, and experimental results of the proposed antenna.", "title": "" } ]
[ { "docid": "d84c8302578391c909b2ac261c93c1fb", "text": "This short communication describes a case of diprosopiasis in Trachemys scripta scripta imported from Florida (USA) and farmed for about 4 months by a private owner in Palermo, Sicily, Italy. The water turtle showed the morphological and radiological features characterizing such deformity. This communication aims to advance the knowledge of the reptile's congenital anomalies and suggests the need for more detailed investigations to better understand its pathogenesis.", "title": "" }, { "docid": "b04ba2e942121b7a32451f0b0f690553", "text": "Due to the growing number of vehicles on the roads worldwide, road traffic accidents are currently recognized as a major public safety problem. In this context, connected vehicles are considered as the key enabling technology to improve road safety and to foster the emergence of next generation cooperative intelligent transport systems (ITS). Through the use of wireless communication technologies, the deployment of ITS will enable vehicles to autonomously communicate with other nearby vehicles and roadside infrastructures and will open the door for a wide range of novel road safety and driver assistive applications. However, connecting wireless-enabled vehicles to external entities can make ITS applications vulnerable to various security threats, thus impacting the safety of drivers. This article reviews the current research challenges and opportunities related to the development of secure and safe ITS applications. It first explores the architecture and main characteristics of ITS systems and surveys the key enabling standards and projects. Then, various ITS security threats are analyzed and classified, along with their corresponding cryptographic countermeasures. Finally, a detailed ITS safety application case study is analyzed and evaluated in light of the European ETSI TC ITS standard. An experimental test-bed is presented, and several elliptic curve digital signature algorithms (ECDSA) are benchmarked for signing and verifying ITS safety messages. To conclude, lessons learned, open research challenges and opportunities are discussed. Electronics 2015, 4 381", "title": "" }, { "docid": "19bb054fb4c6398df99a84a382354d59", "text": "Neural networks are vulnerable to adversarial examples and researchers have proposed many heuristic attack and defense mechanisms. We take the principled view of distributionally robust optimization, which guarantees performance under adversarial input perturbations. By considering a Lagrangian penalty formulation of perturbation of the underlying data distribution in a Wasserstein ball, we provide a training procedure that augments model parameter updates with worst-case perturbations of training data. For smooth losses, our procedure provably achieves moderate levels of robustness with little computational or statistical cost relative to empirical risk minimization. Furthermore, our statistical guarantees allow us to efficiently certify robustness for the population loss. We match or outperform heuristic approaches on supervised and reinforcement learning tasks.", "title": "" }, { "docid": "48c28572e5eafda1598a422fa1256569", "text": "Future power networks will be characterized by safe and reliable functionality against physical and cyber attacks. This paper proposes a unified framework and advanced monitoring procedures to detect and identify network components malfunction or measurements corruption caused by an omniscient adversary. 
We model a power system under cyber-physical attack as a linear time-invariant descriptor system with unknown inputs. Our attack model generalizes the prototypical stealth, (dynamic) false-data injection and replay attacks. We characterize the fundamental limitations of both static and dynamic procedures for attack detection and identification. Additionally, we design provably-correct (dynamic) detection and identification procedures based on tools from geometric control theory. Finally, we illustrate the effectiveness of our method through a comparison with existing (static) detection algorithms, and through a numerical study.", "title": "" }, { "docid": "403d54a5672037cb8adb503405845bbd", "text": "This paper introduces adaptor grammars, a class of probabilistic models of language that generalize probabilistic context-free grammars (PCFGs). Adaptor grammars augment the probabilistic rules of PCFGs with “adaptors” that can induce dependencies among successive uses. With a particular choice of adaptor, based on the Pitman-Yor process, nonparametric Bayesian models of language using Dirichlet processes and hierarchical Dirichlet processes can be written as simple grammars. We present a general-purpose inference algorithm for adaptor grammars, making it easy to define and use such models, and illustrate how several existing nonparametric Bayesian models can be expressed within this framework.", "title": "" }, { "docid": "f5d8c506c9f25bff429cea1ed4c84089", "text": "Therabot is a robotic therapy support system designed to supplement a therapist and to provide support to patients diagnosed with conditions associated with trauma and adverse events. The system takes on the form factor of a floppy-eared dog which fits in a person's lap and is designed for patients to provide support and encouragement for home therapy exercises and in counseling.", "title": "" }, { "docid": "4249c95fcd869434312524f05c013c55", "text": "The demands on visual recognition systems do not end with the complexity offered by current large-scale image datasets, such as ImageNet. In consequence, we need curious and continuously learning algorithms that actively acquire knowledge about semantic concepts which are present in available unlabeled data. As a step towards this goal, we show how to perform continuous active learning and exploration, where an algorithm actively selects relevant batches of unlabeled examples for annotation. These examples could either belong to already known or to yet undiscovered classes. Our algorithm is based on a new generalization of the Expected Model Output Change principle for deep architectures and is especially tailored to deep neural networks. Furthermore, we show easy-to-implement approximations that yield efficient techniques for active selection. Empirical experiments show that our method outperforms currently used heuristics.", "title": "" }, { "docid": "e95fa624bb3fd7ea45650213088a43b0", "text": "In recent years, much research has been conducted on image super-resolution (SR). To the best of our knowledge, however, few SR methods were concerned with compressed images. The SR of compressed images is a challenging task due to the complicated compression artifacts, while many images suffer from them in practice. The intuitive solution for this difficult task is to decouple it into two sequential but independent subproblems, i.e., compression artifacts reduction (CAR) and SR.
Nevertheless, some useful details may be removed in CAR stage, which is contrary to the goal of SR and makes the SR stage more challenging. In this paper, an end-to-end trainable deep convolutional neural network is designed to perform SR on compressed images (CISRDCNN), which reduces compression artifacts and improves image resolution jointly. Experiments on compressed images produced by JPEG (we take the JPEG as an example in this paper) demonstrate that the proposed CISRDCNN yields state-of-the-art SR performance on commonly used test images and imagesets. The results of CISRDCNN on real low quality web images are also very impressive, with obvious quality enhancement. Further, we explore the application of the proposed SR method in low bit-rate image coding, leading to better rate-distortion performance than JPEG.", "title": "" }, { "docid": "33817271f39357c4aef254ac96aab480", "text": "Evolutionary computation methods have been successfully applied to neural networks since two decades ago, while those methods cannot scale well to the modern deep neural networks due to the complicated architectures and large quantities of connection weights. In this paper, we propose a new method using genetic algorithms for evolving the architectures and connection weight initialization values of a deep convolutional neural network to address image classification problems. In the proposed algorithm, an efficient variable-length gene encoding strategy is designed to represent the different building blocks and the unpredictable optimal depth in convolutional neural networks. In addition, a new representation scheme is developed for effectively initializing connection weights of deep convolutional neural networks, which is expected to avoid networks getting stuck into local minima which is typically a major issue in the backward gradient-based optimization. Furthermore, a novel fitness evaluation method is proposed to speed up the heuristic search with substantially less computational resource. The proposed algorithm is examined and compared with 22 existing algorithms on nine widely used image classification tasks, including the stateof-the-art methods. 
The experimental results demonstrate the remarkable superiority of the proposed algorithm over the stateof-the-art algorithms in terms of classification error rate and the number of parameters (weights).", "title": "" }, { "docid": "7db989219c3c15aa90a86df84b134473", "text": "INTRODUCTION\nResearch indicated that: (i) vaginal orgasm (induced by penile-vaginal intercourse [PVI] without concurrent clitoral masturbation) consistency (vaginal orgasm consistency [VOC]; percentage of PVI occasions resulting in vaginal orgasm) is associated with mental attention to vaginal sensations during PVI, preference for a longer penis, and indices of psychological and physiological functioning, and (ii) clitoral, distal vaginal, and deep vaginal/cervical stimulation project via different peripheral nerves to different brain regions.\n\n\nAIMS\nThe aim of this study is to examine the association of VOC with: (i) sexual arousability perceived from deep vaginal stimulation (compared with middle and shallow vaginal stimulation and clitoral stimulation), and (ii) whether vaginal stimulation was present during the woman's first masturbation.\n\n\nMETHODS\nA sample of 75 Czech women (aged 18-36), provided details of recent VOC, site of genital stimulation during first masturbation, and their recent sexual arousability from the four genital sites.\n\n\nMAIN OUTCOME MEASURES\nThe association of VOC with: (i) sexual arousability perceived from the four genital sites and (ii) involvement of vaginal stimulation in first-ever masturbation.\n\n\nRESULTS\nVOC was associated with greater sexual arousability from deep vaginal stimulation but not with sexual arousability from other genital sites. VOC was also associated with women's first masturbation incorporating (or being exclusively) vaginal stimulation.\n\n\nCONCLUSIONS\nThe findings suggest (i) stimulating the vagina during early life masturbation might indicate individual readiness for developing greater vaginal responsiveness, leading to adult greater VOC, and (ii) current sensitivity of deep vaginal and cervical regions is associated with VOC, which might be due to some combination of different neurophysiological projections of the deep regions and their greater responsiveness to penile stimulation.", "title": "" }, { "docid": "28a4fd94ba02c70d6781ae38bf35ca5a", "text": "Zero-shot learning (ZSL) highly depends on a good semantic embedding to connect the seen and unseen classes. Recently, distributed word embeddings (DWE) pre-trained from large text corpus have become a popular choice to draw such a connection. Compared with human defined attributes, DWEs are more scalable and easier to obtain. However, they are designed to reflect semantic similarity rather than visual similarity and thus using them in ZSL often leads to inferior performance. To overcome this visual-semantic discrepancy, this work proposes an objective function to re-align the distributed word embeddings with visual information by learning a neural network to map it into a new representation called visually aligned word embedding (VAWE). Thus the neighbourhood structure of VAWEs becomes similar to that in the visual domain. Note that in this work we do not design a ZSL method that projects the visual features and semantic embeddings onto a shared space but just impose a requirement on the structure of the mapped word embeddings. This strategy allows the learned VAWE to generalize to various ZSL methods and visual features. 
As evaluated via four state-of-the-art ZSL methods on four benchmark datasets, the VAWE exhibit consistent performance improvement.", "title": "" }, { "docid": "17c12cc27cd66d0289fe3baa9ab4124d", "text": "In this paper we review classification algorithms used to design brain-computer interface (BCI) systems based on electroencephalography (EEG). We briefly present the commonly employed algorithms and describe their critical properties. Based on the literature, we compare them in terms of performance and provide guidelines to choose the suitable classification algorithm(s) for a specific BCI.", "title": "" }, { "docid": "59209ea750988390be9b0d0207ec06bd", "text": "In diesem Kapitel wird Kognitive Modellierung als ein interdisziplinäres Forschungsgebiet vorgestellt, das sich mit der Entwicklung von computerimplementierbaren Modellen beschäftigt, in denen wesentliche Eigenschaften des Wissens und der Informationsverarbeitung beim Menschen abgebildet sind. Nach einem allgemeinen Überblick über Zielsetzungen, Methoden und Vorgehensweisen, die sich auf den Gebieten der kognitiven Psychologie und der Künstlichen Intelligenz entwickelt haben, sowie der Darstellung eines Theorierahmens werden vier Modelle detaillierter besprochen: In einem I>crnmodcll, das in einem Intelligenten Tutoriellen System Anwendung findet und in einem Performanz-Modell der MenschComputer-Interaktion wird menschliches Handlungswissen beschrieben. Die beiden anderen Modelle zum Textverstehen und zur flexiblen Gedächtnisorganisation beziehen sich demgegenüber vor allem auf den Aufbau und Abruf deklarativen Wissens. Abschließend werden die vorgestellten Modelle in die historische Entwicklung eingeordnet. Möglichkeiten und Grenzen der Kognitiven Modellierung werden hinsichtlich interessant erscheinender Weiterentwicklungen diskutiert. 1. Einleitung und Überblick Das Gebiet der Künstlichen Intelligenz wird meist unter Bezugnahme auf ursprünglich nur beim Menschen beobachtetes Verhalten definiert. So wird die Künstliche Intelligenz oder KI als die Erforschung von jenen Verhaltensabläufen verstanden, deren Planung und Durchführung Intelligenz erfordert. Der Begriff Intelligenz wird dabei unter Bezugnahme auf den Menschen vage abgegrenzt |Siekmann_83,Winston_84]. Da auch Teilbereiche der Psychologie, vor allem die Kognitive Psychologie, Intelligenz und Denken untersuchen, könnte man vermuten, daß die KI-Forschung als die jüngere Wissenschaft direkt auf älteren psychologischen Erkenntnissen aufbauen würde. Obwohl K I und kognitive Psychologie einen ähnlichen Gegenstandsbereich erforschen, gibt es jedoch auch vielschichtige Unterschiede zwischen beiden Disziplinen. Daraus läßt sich möglicherweise erklären, daß die beiden Fächer bislang nicht in dem Maß interagiert haben, wie dies wünschenswert wäre. 1.1 Unterschiede zwischen KI und Kognitiver Psychologie Auch wenn keine klare Grenze zwischen den beiden Gebieten gezogen werden kann, so müssen wir doch feststellen, daß K I nicht gleich Kognitiver Psychologie ist. Wichtige Unterschiede bestehen in den primären Forschungszielen und Methoden, sowie in der Interpretation von Computermodellen (computational models). Zielsetzungen und Methoden Während die K I eine Modellierung von Kompetenzen anstrebt, erforscht die Psychologie die Performanz des Menschen. • Die K I sucht nach Verfahren, die zu einem intelligenten Verhalten eines Computers fuhren. Beispielsweise sollte ein Computer natürliche Sprache verstehen, neue Begriffe lernen können oder Expertenverhalten zeigen oder unterstützen. 
Die K I versucht also, intelligente Systeme zu entwickeln und deckt dabei mögliche Prinzipien von Intelligenz auf, indem sie Datenstrukturen und Algorithmen spezifiziert, die intelligentes Verhalten erwarten lassen. Entscheidend ist dabei, daß eine intelligente Leistung im Sinne eines Turing-Tests erbracht wird: Eine Implementierung des Algorithmus soll für eine Menge spezifizierter Eingaben (z. B . gesprochene Sprache) innerhalb angemessener Zeit die vergleichbare Verarbeitungsleistung erbringen wie der Mensch. Der beobachtete Systemoutput von Mensch und Computer wäre also oberflächlich betrachtet nicht voneinander unterscheidbar [Turing_63]. Ob die dabei im Computer verwendeten Strukturen, Prozesse und Heuristiken denen beim Menschen ähneln, spielt in der K I keine primäre Rolle. • Die Kognitive Psychologie hingegen untersucht eher die internen kognitiven Verarbeitungsprozesse des Menschen. Bei einer psychologischen Theorie sollte also auch das im Modell verwendete Verfahren den Heuristiken entsprechen, die der Mensch verwendet. Beispielsweise wird ein Schachprogramm nicht dadurch zu einem psychologisch adäquaten Modell, daß es die Spielstärke menschlicher Meisterspieler erreicht. Vielmehr sollten bei einem psychologischen Modell auch die Verarbeitungsprozesse von Mensch und Programm übereinstimmen (vgl. dazu [deGroot_66]).Für psychologische Forschungen sind daher empirische und gezielte experimentelle Untersuchungen der menschlichen Kognition von großer Bedeutung. In der K I steht die Entwicklung und Implementierung von Modellen im Vordergrund. Die kognitive Psychologie dagegen betont die Wichtigkeit der empirischen Evaluation von Modellen zur Absicherung von präzisen, allgemeingültigen Aussagen. Wegen dieser verschiedenen Schwerpunkt Setzung und den daraus resultierenden unterschiedlichen Forschungsmethoden ist es für die Forscher der einen Disziplin oft schwierig, den wissenschaftlichen Fortschritt der jeweils anderen Disziplin zu nutzen [Miller_78]. Interpretation von Computermodellen Die K I ist aus der Informatik hervorgegangen. Wie bei der Informatik bestehen auch bei der K I wissenschaftliche Erkenntnisse darin, daß mit ingenieurwissenschaftlichen Verfahren neue Systeme wie Computerhardund -Software konzipiert und erzeugt werden. Die genaue Beschreibung eines so geschaffenen Systems ist für den Informatiker im Prinzip unproblematisch, da er das System selbst entwickelt hat und daher über dessen Bestandteile und Funktionsweisen bestens informiert ist. Darin liegt ein Unterschied zu den empirischen Wissenschaften wie der Physik oder Psychologie. Der Erfahrungswissenschaftler muß Objektbereiche untersuchen, deren Gesetzmäßigkeiten er nie mit letzter Sicherheit feststellen kann. Er m u ß sich daher Theorien oder Modelle über den Untersuchungsgegenstand bilden, die dann empirisch überprüft werden können. Jedoch läßt sich durch eine noch so große Anzahl von Experimenten niemals die Korrektheit eines Modells beweisen [Popper_66]. E in einfaches Beispiel kann diesen Unterschied verdeutlichen. • E in Hardwarespezialist, der einen Personal Computer gebaut hat, weiß, daß die Aussage \"Der Computer ist mit 640 K B Hauptspeicher bestückt\" richtig ist, weil er ihn eben genau so bestückt hat. Dies ist also eine feststehende Tatsache, die keiner weiteren Überprüfung bedarf. • Die Behauptung eines Psychologen, daß der menschliche Kurzzeitoder Arbeitsspeicher eine Kapazität von etwa 7 Einheiten oder Chunks habe, hat jedoch einen ganz anderen Stellenwert. 
Damit wird keinesfalls eine faktische Behauptung über die Größe von Arealen im menschlichen Gehirn aufgestellt. \"Arbeitsspeicher\" wird hier als theoretischer Term eines Modells verwendet. Mit der Aussage über die Kapazität des Arbeitsspeichers ist gemeint, daß erfahrungsgemäß Modelle, die eine solche Kapazitätsbescfiränkung annehmen, menschliches Verhalten gut beschreiben können. Dadurch wird jedoch nicht ausgeschlossen, daß ein weiteres Experiment Unzulänglichkeiten oder die Inkorrektheit des Modells nachweist. In den Erfahrungswissenscharten werden theoretische Begriffe wie etwa Arbeitsspeicher innerhalb von Computermodellen zur abstrahierten und integrativen Beschreibung von empirischen Erkenntnissen verwendet. Dadurch können beim Menschen zu beobachtende Verhaltensweisen vorhergesagt werden. Aus der Sichtweise der Informatik bezeichnen genau die gleichen Tcrme jedoch tatsächliche Komponenten eines Geräts oder Programms. Diese unterschiedlichen Sichtweisen der gleichen Modelle verbieten einen unkritischen und oberflächlichen Informationstransfer zwischen K I und Kognitiver Psychologie. Aus der Integration der Zielsetzungen und Sichtweisen ergeben sich jedoch auch gerade vielversprechende Erkenntnismöglichkeiten über Intelligenz. Da theoretische wie auch empirische Untersuchungen zum Verständnis der Intelligenz beitragen, können sich die Methoden und Erkenntnisse von beiden Disziplinen (ähnlich wie Mathematik und Physik im Bereich der theoretischen Physik) ergänzen und befruchten. 1.2 Synthese von KI und Kognitiver Psychologie Im Rahmen der Kognitionswissenschaften(cognitive science) tragen viele Disziplinen (z.B. K I , Psychologie, Linguistik, Anthropologie ...) Erkenntnisse über informationsverarbeitende Systeme bei. Die Kognitive Modellierung als ein Teilgebiet von sowohl K I als auch Kognitiver Psychologie befaßt sich mit der Entwicklung von computerimplementierbaren Modellen, in denen wesentliche Eigenschaften des Wissens und der Informationsverarbeitung beim Menschen abgebildet sind. Durch Kognitive Modellierung wird also eine Synthese von K I und psychologischer Forschung angestrebt. E in Computermodell wird zu einem kognitiven Modell, indem Entitätcn des Modells psychologischen Beobachtungen und Erkenntnissen zugeordnet werden. Da ein solches Modell auch den Anspruch erhebt, menschliches Verhalten vorherzusagen, können Kognitive Modelle aufgrund empirischer Untersuchungen weiterentwickelt werden. Die Frage, ob ein KI-Modell als ein kognitives Modell anzusehen ist, kann nicht einfach bejaht oder verneint werden, sondern wird vielmehr durch die Angabe einer Zuordnung von Aspekten der menschlichen Informationsverarbeitung zu Eigenschaften des Computermodells beantwortet.", "title": "" }, { "docid": "2a36a2ab5b0e01da90859179a60cef9a", "text": "We report 3 cases of renal toxicity associated with use of the antiviral agent tenofovir. Renal failure, proximal tubular dysfunction, and nephrogenic diabetes insipidus were observed, and, in 2 cases, renal biopsy revealed severe tubular necrosis with characteristic nuclear changes. Patients receiving tenofovir must be monitored closely for early signs of tubulopathy (glycosuria, acidosis, mild increase in the plasma creatinine level, and proteinuria).", "title": "" }, { "docid": "598ffff550aa4e3a9ad1d2f5251fc03a", "text": "The now taken-for-granted notion that data lead to information, which leads to knowledge, which in turn leads to wisdom was first specified in detail by R. L. Ackoff in 1988. 
The Data-Information-KnowledgeWisdom hierarchy is based on filtration, reduction, and transformation. Besides being causal and hierarchical, the scheme is pyramidal, in that data are plentiful while wisdom is almost nonexistent. Ackoff’s formula linking these terms together this way permits us to ask what the opposite of knowledge is and whether analogous principles of hierarchy, process, and pyramiding apply to it. The inversion of the DataInformation-Knowledge-Wisdom hierarchy produces a series of opposing terms (including misinformation, error, ignorance, and stupidity) but not exactly a chain or a pyramid. Examining the connections between these phenomena contributes to our understanding of the contours and limits of knowledge. This presentation will revisit the Data-Information-Knowledge-Wisdom hierarchy linking these concepts together as stages of a single developmental process, with the aim of building a taxonomy for a postulated opposite of knowledge, which I will call ‘nonknowledge’. Concepts of data, information, knowledge, and wisdom are the building blocks of library and information science. Discussions and definitions of these terms pervade the literature from introductory textbooks to theoretical research articles (see Zins, 2007). Expressions linking some of these concepts predate the development of information science as a field of study (Sharma 2008). But the first to put all the terms into a single formula was Russell Lincoln Ackoff, in 1989. Ackoff posited a hierarchy at the top of which lay wisdom, and below that understanding, knowledge, information, and data, in that order. Furthermore, he wrote that “each of these includes the categories that fall below it,” and estimated that “on average about forty percent of the human mind consists of data, thirty percent information, twenty percent knowledge, ten percent understanding, and virtually no wisdom” (Ackoff, 1989, 3). This phraseology allows us to view his model as a pyramid, and indeed it has been likened to one ever since (Rowley, 2007; see figure 1). (‘Understanding’ is omitted, since subsequent formulations have not picked up on it.) Ackoff was a management consultant and former professor of management science at the Wharton School specializing in operations research and organizational theory. His article formulating what is now commonly called the Data-InformationKnowledge-Wisdom hierarchy (or DIKW for short) was first given in 1988 as a presidential address to the International Society for General Systems Research. This background may help explain his approach. Data in his terms are the product of observations, and are of no value until they are processed into a usable form to become information. Information is contained in answers to questions. Knowledge, the next layer, further refines information by making “possible the transformation of information into instructions. It makes control of a system possible” (Ackoff, 1989, 4), and that enables one to make it work efficiently. A managerial rather than scholarly perspective runs through Ackoff’s entire hierarchy, so that “understanding” for him", "title": "" }, { "docid": "76c7b343d2f03b64146a0d6ed2d60668", "text": "Three important stages within automated 3D object reconstruction via multi-image convergent photogrammetry are image pre-processing, interest point detection for feature-based matching and triangular mesh generation. This paper investigates approaches to each of these. 
The Wallis filter is initially examined as a candidate image pre-processor to enhance the performance of the FAST interest point operator. The FAST algorithm is then evaluated as a potential means to enhance the speed, robustness and accuracy of interest point detection for subsequent feature-based matching. Finally, the Poisson Surface Reconstruction algorithm for wireframe mesh generation of objects with potentially complex 3D surface geometry is evaluated. The outcomes of the investigation indicate that the Wallis filter, FAST interest operator and Poisson Surface Reconstruction algorithms present distinct benefits in the context of automated image-based object reconstruction. The reported investigation has advanced the development of an automatic procedure for high-accuracy point cloud generation in multi-image networks, where robust orientation and 3D point determination has enabled surface measurement and visualization to be implemented within a single software system.", "title": "" }, { "docid": "b8d63090ea7d3302c71879ea4d11fde5", "text": "We study the problem of how to distribute the training of large-scale deep learning models in the parallel computing environment. We propose a new distributed stochastic optimization method called Elastic Averaging SGD (EASGD). We analyze the convergence rate of the EASGD method in the synchronous scenario and compare its stability condition with the existing ADMM method in the round-robin scheme. An asynchronous and momentum variant of the EASGD method is applied to train deep convolutional neural networks for image classification on the CIFAR and ImageNet datasets. Our approach accelerates the training and furthermore achieves better test accuracy. It also requires a much smaller amount of communication than other common baseline approaches such as the DOWNPOUR method. We then investigate the limit in speedup of the initial and the asymptotic phase of the mini-batch SGD, the momentum SGD, and the EASGD methods. We find that the spread of the input data distribution has a big impact on their initial convergence rate and stability region. We also find a surprising connection between the momentum SGD and the EASGD method with a negative moving average rate. A non-convex case is also studied to understand when EASGD can get trapped by a saddle point. Finally, we scale up the EASGD method by using a tree structured network topology. We show empirically its advantage and challenge. We also establish a connection between the EASGD and the DOWNPOUR method with the classical Jacobi and the Gauss-Seidel method, thus unifying a class of distributed stochastic optimization methods.", "title": "" }, { "docid": "7d33ba30fd30dce2cd4a3f5558a8c0ba", "text": "It has long been conjectured that hypothesis spaces suitable for data that is compositional in nature, such as text or images, may be more efficiently represented with deep hierarchical architectures than with shallow ones. Despite the vast empirical evidence, formal arguments to date are limited and do not capture the kind of networks used in practice. Using tensor factorization, we derive a universal hypothesis space implemented by an arithmetic circuit over functions applied to local data structures (e.g. image patches). The resulting networks first pass the input through a representation layer, and then proceed with a sequence of layers comprising sum followed by product-pooling, where sum corresponds to the widely used convolution operator. 
The hierarchical structure of networks is born from factorizations of tensors based on the linear weights of the arithmetic circuits. We show that a shallow network corresponds to a rank-1 decomposition, whereas a deep network corresponds to a Hierarchical Tucker (HT) decomposition. Log-space computation for numerical stability transforms the networks into SimNets.", "title": "" }, { "docid": "d89a5b253d188c28aa64facd3fef8b95", "text": "This paper presents a method for decomposing long, complex consumer health questions. Our approach largely decomposes questions using their syntactic structure, recognizing independent questions embedded in clauses, as well as coordinations and exemplifying phrases. Additionally, we identify elements specific to disease-related consumer health questions, such as the focus disease and background information. To achieve this, our approach combines rank-and-filter machine learning methods with rule-based methods. Our results demonstrate significant improvements over the heuristic methods typically employed for question decomposition that rely only on the syntactic parse tree.", "title": "" }, { "docid": "6d0aba91efbe627d8d98c7f49c34fe3d", "text": "The R language, from the point of view of language design and implementation, is a unique combination of various programming language concepts. It has functional characteristics like lazy evaluation of arguments, but also allows expressions to have arbitrary side effects. Many runtime data structures, for example variable scopes and functions, are accessible and can be modified while a program executes. Several different object models allow for structured programming, but the object models can interact in surprising ways with each other and with the base operations of R. \n R works well in practice, but it is complex, and it is a challenge for language developers trying to improve on the current state-of-the-art, which is the reference implementation -- GNU R. The goal of this work is to demonstrate that, given the right approach and the right set of tools, it is possible to create an implementation of the R language that provides significantly better performance while keeping compatibility with the original implementation. \n In this paper we describe novel optimizations backed up by aggressive speculation techniques and implemented within FastR, an alternative R language implementation, utilizing Truffle -- a JVM-based language development framework developed at Oracle Labs. We also provide experimental evidence demonstrating effectiveness of these optimizations in comparison with GNU R, as well as Renjin and TERR implementations of the R language.", "title": "" } ]
scidocsrr
f279df399f50407436670d9821df0891
Training with Exploration Improves a Greedy Stack LSTM Parser
[ { "docid": "b5f7511566b902bc206228dc3214c211", "text": "In the imitation learning paradigm algorithms learn from expert demonstrations in order to become able to accomplish a particular task. Daumé III et al. (2009) framed structured prediction in this paradigm and developed the search-based structured prediction algorithm (Searn) which has been applied successfully to various natural language processing tasks with state-of-the-art performance. Recently, Ross et al. (2011) proposed the dataset aggregation algorithm (DAgger) and compared it with Searn in sequential prediction tasks. In this paper, we compare these two algorithms in the context of a more complex structured prediction task, namely biomedical event extraction. We demonstrate that DAgger has more stable performance and faster learning than Searn, and that these advantages are more pronounced in the parameter-free versions of the algorithms.", "title": "" } ]
[ { "docid": "73270e8140d763510d97f7bd2fdd969e", "text": "Inspired by the progress of deep neural network (DNN) in single-media retrieval, the researchers have applied the DNN to cross-media retrieval. These methods are mainly two-stage learning: the first stage is to generate the separate representation for each media type, and the existing methods only model the intra-media information but ignore the inter-media correlation with the rich complementary context to the intra-media information. The second stage is to get the shared representation by learning the cross-media correlation, and the existing methods learn the shared representation through a shallow network structure, which cannot fully capture the complex cross-media correlation. For addressing the above problems, we propose the cross-media multiple deep network (CMDN) to exploit the complex cross-media correlation by hierarchical learning. In the first stage, CMDN jointly models the intra-media and intermedia information for getting the complementary separate representation of each media type. In the second stage, CMDN hierarchically combines the inter-media and intra-media representations to further learn the rich cross-media correlation by a deeper two-level network strategy, and finally get the shared representation by a stacked network style. Experiment results show that CMDN achieves better performance comparing with several state-of-the-art methods on 3 extensively used cross-media datasets.", "title": "" }, { "docid": "a0db56f55e2d291cb7cf871c064cf693", "text": "It's being very important to listen to social media streams whether it's Twitter, Facebook, Messenger, LinkedIn, email or even company own application. As many customers may be using this streams to reach out to company because they need help. The company have setup social marketing team to monitor this stream. But due to huge volumes of users it's very difficult to analyses each and every social message and take a relevant action to solve users grievances, which lead to many unsatisfied customers or may even lose a customer. This papers proposes a system architecture which will try to overcome the above shortcoming by analyzing messages of each ejabberd users to check whether it's actionable or not. If it's actionable then an automated Chatbot will initiates conversation with that user and help the user to resolve the issue by providing a human way interactions using LUIS and cognitive services. To provide a highly robust, scalable and extensible architecture, this system is implemented on AWS public cloud.", "title": "" }, { "docid": "fe0120f7d74ad63dbee9c3cd5ff81e6f", "text": "Background: Software fault prediction is the process of developing models that can be used by the software practitioners in the early phases of software development life cycle for detecting faulty constructs such as modules or classes. There are various machine learning techniques used in the past for predicting faults. Method: In this study we perform a systematic review studies from January 1991 to October 2013 in the literature that use the machine learning techniques for software fault prediction. We assess the performance capability of the machine learning techniques in existing research for software fault prediction. 
We also compare the performance of the machine learning techniques with the", "title": "" }, { "docid": "4e8040c9336cf7d847d938b905f8f81d", "text": "Many cluster management systems (CMSs) have been proposed to share a single cluster with multiple distributed computing systems. However, none of the existing approaches can handle distributed machine learning (ML) workloads given the following criteria: high resource utilization, fair resource allocation and low sharing overhead. To solve this problem, we propose a new CMS named Dorm, incorporating a dynamically-partitioned cluster management mechanism and an utilization-fairness optimizer. Specifically, Dorm uses the container-based virtualization technique to partition a cluster, runs one application per partition, and can dynamically resize each partition at application runtime for resource efficiency and fairness. Each application directly launches its tasks on the assigned partition without petitioning for resources frequently, so Dorm imposes flat sharing overhead. Extensive performance evaluations showed that Dorm could simultaneously increase the resource utilization by a factor of up to 2.32, reduce the fairness loss by a factor of up to 1.52, and speed up popular distributed ML applications by a factor of up to 2.72, compared to existing approaches. Dorm's sharing overhead is less than 5% in most cases.", "title": "" }, { "docid": "f5a934dc200b27747d3452f5a14c24e5", "text": "Psoriasis vulgaris is a common and often chronic inflammatory skin disease. The incidence of psoriasis in Western industrialized countries ranges from 1.5% to 2%. Patients afflicted with severe psoriasis vulgaris may experience a significant reduction in quality of life. Despite the large variety of treatment options available, surveys have shown that patients still do not received optimal treatments. To optimize the treatment of psoriasis in Germany, the Deutsche Dermatologi sche Gesellschaft (DDG) and the Berufsverband Deutscher Dermatologen (BVDD) have initiated a project to develop evidence-based guidelines for the management of psoriasis. They were first published in 2006 and updated in 2011. The Guidelines focus on induction therapy in cases of mild, moderate and severe plaque-type psoriasis in adults including systemic therapy, UV therapy and topical therapies. The therapeutic recommendations were developed based on the results of a systematic literature search and were finalized during a consensus meeting using structured consensus methods (nominal group process).", "title": "" }, { "docid": "da986950f6bbad36de5e9cc55d04e798", "text": "Digital information is accumulating at an astounding rate, straining our ability to store and archive it. DNA is among the most dense and stable information media known. The development of new technologies in both DNA synthesis and sequencing make DNA an increasingly feasible digital storage medium. We developed a strategy to encode arbitrary digital information in DNA, wrote a 5.27-megabit book using DNA microchips, and read the book by using next-generation DNA sequencing.", "title": "" }, { "docid": "d1f02e2f57cffbc17387de37506fddc9", "text": "The task of matching patterns in graph-structured data has applications in such diverse areas as computer vision, biology, electronics, computer aided design, social networks, and intelligence analysis. Consequently, work on graph-based pattern matching spans a wide range of research communities. 
Due to variations in graph characteristics and application requirements, graph matching is not a single problem, but a set of related problems. This paper presents a survey of existing work on graph matching, describing variations among problems, general and specific solution approaches, evaluation techniques, and directions for further research. An emphasis is given to techniques that apply to general graphs with semantic characteristics.", "title": "" }, { "docid": "b0b2c4c321b5607cd6ebda817258921d", "text": "In recent years, classification of colon biopsy images has become an active research area. Traditionally, colon cancer is diagnosed using microscopic analysis. However, the process is subjective and leads to considerable inter/intra observer variation. Therefore, reliable computer-aided colon cancer detection techniques are in high demand. In this paper, we propose a colon biopsy image classification system, called CBIC, which benefits from discriminatory capabilities of information rich hybrid feature spaces, and performance enhancement based on ensemble classification methodology. Normal and malignant colon biopsy images differ with each other in terms of the color distribution of different biological constituents. The colors of different constituents are sharp in normal images, whereas the colors diffuse with each other in malignant images. In order to exploit this variation, two feature types, namely color components based statistical moments (CCSM) and Haralick features have been proposed, which are color components based variants of their traditional counterparts. Moreover, in normal colon biopsy images, epithelial cells possess sharp and well-defined edges. Histogram of oriented gradients (HOG) based features have been employed to exploit this information. Different combinations of hybrid features have been constructed from HOG, CCSM, and Haralick features. The minimum Redundancy Maximum Relevance (mRMR) feature selection method has been employed to select meaningful features from individual and hybrid feature sets. Finally, an ensemble classifier based on majority voting has been proposed, which classifies colon biopsy images using the selected features. Linear, RBF, and sigmoid SVM have been employed as base classifiers. The proposed system has been tested on 174 colon biopsy images, and improved performance (=98.85%) has been observed compared to previously reported studies. Additionally, the use of mRMR method has been justified by comparing the performance of CBIC on original and reduced feature sets.", "title": "" }, { "docid": "0f9ef379901c686df08dd0d1bb187e22", "text": "This paper studies the minimum achievable source coding rate as a function of blocklength <i>n</i> and probability ϵ that the distortion exceeds a given level <i>d</i> . Tight general achievability and converse bounds are derived that hold at arbitrary fixed blocklength. 
For stationary memoryless sources with separable distortion, the minimum rate achievable is shown to be closely approximated by R(d) + √(V(d)/n) Q^{-1}(ϵ), where R(d) is the rate-distortion function, V(d) is the rate dispersion, a characteristic of the source which measures its stochastic variability, and Q^{-1}(·) is the inverse of the standard Gaussian complementary cumulative distribution function.", "title": "" }, { "docid": "1348ee3316643f4269311b602b71d499", "text": "This paper describes our proposed solution for SemEval 2017 Task 1: Semantic Textual Similarity (Daniel Cer and Specia, 2017). The task aims at measuring the degree of equivalence between sentences given in English. Performance is evaluated by computing Pearson Correlation scores between the predicted scores and human judgements. Our proposed system consists of two subsystems and one regression model for predicting STS scores. The two subsystems are designed to learn Paraphrase and Event Embeddings that can take the consideration of paraphrasing characteristics and sentence structures into our system. The regression model associates these embeddings to make the final predictions. The experimental result shows that our system acquires 0.8 of Pearson Correlation Scores in this task.", "title": "" }, { "docid": "49717f07b8b4a3da892c1bb899f7a464", "text": "Single cells were recorded in the visual cortex of monkeys trained to attend to stimuli at one location in the visual field and ignore stimuli at another. When both locations were within the receptive field of a cell in prestriate area V4 or the inferior temporal cortex, the response to the unattended stimulus was dramatically reduced. Cells in the striate cortex were unaffected by attention. The filtering of irrelevant information from the receptive fields of extrastriate neurons may underlie the ability to identify and remember the properties of a particular object out of the many that may be represented on the retina.", "title": "" }, { "docid": "6421979368a138e4b21ab7d9602325ff", "text": "In recent years, despite several risk management models proposed by different researchers, software projects still have a high degree of failures. Improper risk assessment during software development was the major reason behind these unsuccessful projects as risk analysis was done on overall projects. This work attempts in identifying key risk factors and risk types for each of the development phases of SDLC, which would help in identifying the risks at a much early stage of development.", "title": "" }, { "docid": "d76b7b25bce29cdac24015f8fa8ee5bb", "text": "A circularly polarized magnetoelectric dipole antenna with high efficiency based on printed ridge gap waveguide is presented. The antenna gain is improved by using a wideband lens in front of the antennas. The lens consists of three layers dual-polarized mu-near zero (MNZ) inclusions. Each layer consists of a $3\times4$ MNZ unit cell. The measured results indicate that the magnitude of $S_{11}$ is below −10 dB in the frequency range of 29.5–37 GHz. The resulting 3-dB axial ratio is over a frequency range of 32.5–35 GHz.
The measured realized gain of the antenna is more than 10 dBi over a frequency band of 31–35 GHz achieving a radiation efficiency of 94% at 34 GHz.", "title": "" }, { "docid": "3fa30df910c964bb2bf27a885aa59495", "text": "In an Intelligent Environment, he user and the environment work together in a unique manner; the user expresses what he wishes to do, and the environment recognizes his intentions and helps out however it can. If well-implemented, such an environment allows the user to interact with it in the manner that is most natural for him personally. He should need virtually no time to learn to use it and should be more productive once he has. But to implement a useful and natural Intelligent Environment, he designers are faced with a daunting task: they must design a software system that senses what its users do, understands their intentions, and then responds appropriately. In this paper we argue that, in order to function reasonably in any of these ways, an Intelligent Environment must make use of declarative representations of what the user might do. We present our evidence in the context of the Intelligent Classroom, a facility that aids a speaker in this way and uses its understanding to produce a video of his presentation.", "title": "" }, { "docid": "5b07bc318cb0f5dd7424cdcc59290d31", "text": "The current practice used in the design of physical interactive products (such as handheld devices), often suffers from a divide between exploration of form and exploration of interactivity. This can be attributed, in part, to the fact that working prototypes are typically expensive, take a long time to manufacture, and require specialized skills and tools not commonly available in design studios.We have designed a prototyping tool that, we believe, can significantly reduce this divide. The tool allows designers to rapidly create functioning, interactive, physical prototypes early in the design process using a collection of wireless input components (buttons, sliders, etc.) and a sketch of form. The input components communicate with Macromedia Director to enable interactivity.We believe that this tool can improve the design practice by: a) Improving the designer's ability to explore both the form and interactivity of the product early in the design process, b) Improving the designer's ability to detect problems that emerge from the combination of the form and the interactivity, c) Improving users' ability to communicate their ideas, needs, frustrations and desires, and d) Improving the client's understanding of the proposed design, resulting in greater involvement and support for the design.", "title": "" }, { "docid": "ae3d959972d673d24e6d0b7a0567323e", "text": "Traditional data on influenza vaccination has several limitations: high cost, limited coverage of underrepresented groups, and low sensitivity to emerging public health issues. Social media, such as Twitter, provide an alternative way to understand a population’s vaccination-related opinions and behaviors. In this study, we build and employ several natural language classifiers to examine and analyze behavioral patterns regarding influenza vaccination in Twitter across three dimensions: temporality (by week and month), geography (by US region), and demography (by gender). Our best results are highly correlated official government data, with a correlation over 0.90, providing validation of our approach. 
We then suggest a number of directions for future work.", "title": "" }, { "docid": "ff4c069ab63ced5979cf6718eec30654", "text": "Dowser is a ‘guided’ fuzzer that combines taint tracking, program analysis and symbolic execution to find buffer overflow and underflow vulnerabilities buried deep in a program’s logic. The key idea is that analysis of a program lets us pinpoint the right areas in the program code to probe and the appropriate inputs to do so. Intuitively, for typical buffer overflows, we need consider only the code that accesses an array in a loop, rather than all possible instructions in the program. After finding all such candidate sets of instructions, we rank them according to an estimation of how likely they are to contain interesting vulnerabilities. We then subject the most promising sets to further testing. Specifically, we first use taint analysis to determine which input bytes influence the array index and then execute the program symbolically, making only this set of inputs symbolic. By constantly steering the symbolic execution along branch outcomes most likely to lead to overflows, we were able to detect deep bugs in real programs (like the nginx webserver, the inspircd IRC server, and the ffmpeg videoplayer). Two of the bugs we found were previously undocumented buffer overflows in ffmpeg and the poppler PDF rendering library.", "title": "" }, { "docid": "21925b0a193ebb3df25c676d8683d895", "text": "The use of dialogue systems in vehicles raises the problem of making sure that the dialogue does not distract the driver from the primary task of driving. Earlier studies have indicated that humans are very apt at adapting the dialogue to the traffic situation and the cognitive load of the driver. The goal of this paper is to investigate strategies for interrupting and resuming in, as well as changing topic domain of, spoken human-human in-vehicle dialogue. The results show a large variety of strategies being used, and indicate that the choice of resumption and domain-switching strategy depends partly on the topic domain being resumed, and partly on the role of the speaker (driver or passenger). These results will be used as a basis for the development of dialogue strategies for interruption, resumption and domain-switching in the DICO in-vehicle dialogue system.", "title": "" }, { "docid": "58f1ba92eb199f4d105bf262b30dbbc5", "text": "Before the big data era, scene recognition was often approached with two-step inference using localized intermediate representations (objects, topics, and so on). One of such approaches is the semantic manifold (SM), in which patches and images are modeled as points in a semantic probability simplex. Patch models are learned resorting to weak supervision via image labels, which leads to the problem of scene categories co-occurring in this semantic space. Fortunately, each category has its own co-occurrence patterns that are consistent across the images in that category. Thus, discovering and modeling these patterns are critical to improve the recognition performance in this representation. Since the emergence of large data sets, such as ImageNet and Places, these approaches have been relegated in favor of the much more powerful convolutional neural networks (CNNs), which can automatically learn multi-layered representations from the data. In this paper, we address many limitations of the original SM approach and related works. 
We propose discriminative patch representations using neural networks and further propose a hybrid architecture in which the semantic manifold is built on top of multiscale CNNs. Both representations can be computed significantly faster than the Gaussian mixture models of the original SM. To combine multiple scales, spatial relations, and multiple features, we formulate rich context models using Markov random fields. To solve the optimization problem, we analyze global and local approaches, where a top–down hierarchical algorithm has the best performance. Experimental results show that exploiting different types of contextual relations jointly consistently improves the recognition accuracy.", "title": "" }, { "docid": "bbf987eef74d76cf2916ae3080a2b174", "text": "The facial system plays an important role in human-robot interaction. EveR-4 H33 is a head system for an android face controlled by thirty-three motors. It consists of three layers: a mechanical layer, an inner cover layer and an outer cover layer. Motors are attached under the skin and some motors are correlated with each other. Some expressions cannot be shown by moving just one motor. In addition, moving just one motor can cause damage to other motors or the skin. To solve these problems, a facial muscle control method that controls motors in a correlated manner is required. We designed a facial muscle control method and applied it to EveR-4 H33. We develop the actress robot EveR-4A by applying the EveR-4 H33 to the 24 degrees of freedom upper body and mannequin legs. EveR-4A shows various facial expressions with lip synchronization using our facial muscle control method.", "title": "" } ]
scidocsrr
9964a76f995125776e2fc1a30d248fec
The dawn of the liquid biopsy in the fight against cancer
[ { "docid": "aa234355d0b0493e1d8c7a04e7020781", "text": "Cancer is associated with mutated genes, and analysis of tumour-linked genetic alterations is increasingly used for diagnostic, prognostic and treatment purposes. The genetic profile of solid tumours is currently obtained from surgical or biopsy specimens; however, the latter procedure cannot always be performed routinely owing to its invasive nature. Information acquired from a single biopsy provides a spatially and temporally limited snap-shot of a tumour and might fail to reflect its heterogeneity. Tumour cells release circulating free DNA (cfDNA) into the blood, but the majority of circulating DNA is often not of cancerous origin, and detection of cancer-associated alleles in the blood has long been impossible to achieve. Technological advances have overcome these restrictions, making it possible to identify both genetic and epigenetic aberrations. A liquid biopsy, or blood sample, can provide the genetic landscape of all cancerous lesions (primary and metastases) as well as offering the opportunity to systematically track genomic evolution. This Review will explore how tumour-associated mutations detectable in the blood can be used in the clinic after diagnosis, including the assessment of prognosis, early detection of disease recurrence, and as surrogates for traditional biopsies with the purpose of predicting response to treatments and the development of acquired resistance.", "title": "" } ]
[ { "docid": "fc9eae18a5a44ee7df22d6c7bdb5a164", "text": "In this paper, methods are shown how to adapt invertible two-dimensional chaotic maps on a torus or on a square to create new symmetric block encryption schemes. A chaotic map is first generalized by introducing parameters and then discretized to a finite square lattice of points which represent pixels or some other data items. Although the discretized map is a permutation and thus cannot be chaotic, it shares certain properties with its continuous counterpart as long as the number of iterations remains small. The discretized map is further extended to three dimensions and composed with a simple diffusion mechanism. As a result, a symmetric block product encryption scheme is obtained. To encrypt an N × N image, the ciphering map is iteratively applied to the image. The construction of the cipher and its security is explained with the two-dimensional Baker map. It is shown that the permutations induced by the Baker map behave as typical random permutations. Computer simulations indicate that the cipher has good diffusion properties with respect to the plain-text and the key. A nontraditional pseudo-random number generator based on the encryption scheme is described and studied. Examples of some other two-dimensional chaotic maps are given and their suitability for secure encryption is discussed. The paper closes with a brief discussion of a possible relationship between discretized chaos and cryptosystems.", "title": "" }, { "docid": "1bfc1972a32222a1b5816bb040040374", "text": "BACKGROUND\nSkeletal muscle is key to motor development and represents a major metabolic end organ that aids glycaemic regulation.\n\n\nOBJECTIVES\nTo create gender-specific reference curves for fat-free mass (FFM) and appendicular (limb) skeletal muscle mass (SMMa) in children and adolescents. To examine the muscle-to-fat ratio in relation to body mass index (BMI) for age and gender.\n\n\nMETHODS\nBody composition was measured by segmental bioelectrical impedance (BIA, Tanita BC418) in 1985 Caucasian children aged 5-18.8 years. Skeletal muscle mass data from the four limbs were used to derive smoothed centile curves and the muscle-to-fat ratio.\n\n\nRESULTS\nThe centile curves illustrate the developmental patterns of %FFM and SMMa. While the %FFM curves differ markedly between boys and girls, the SMMa (kg), %SMMa and %SMMa/FFM show some similarities in shape and variance, together with some gender-specific characteristics. Existing BMI curves do not reveal these gender differences. Muscle-to-fat ratio showed a very wide range with means differing between boys and girls and across fifths of BMI z-score.\n\n\nCONCLUSIONS\nBIA assessment of %FFM and SMMa represents a significant advance in nutritional assessment since these body composition components are associated with metabolic health. Muscle-to-fat ratio has the potential to provide a better index of future metabolic health.", "title": "" }, { "docid": "32817233f5aa05036ca292e7b57143fb", "text": "Asphalt pavement distresses have significant importance in roads and highways. This paper addresses the detection and localization of one of the key pavement distresses, the potholes using computer vision. Different kinds of pothole and non-pothole images from asphalt pavement are considered for experimentation. Considering the appearance-shape based nature of the potholes, Histograms of oriented gradients (HOG) features are computed for the input images. 
Features are trained and classified using a Naïve Bayes classifier, resulting in labeling of the input as a pothole or non-pothole image. To locate the pothole in the detected pothole images, a normalized graph cut segmentation scheme is employed. The proposed scheme is tested on a dataset having a broad range of pavement images. Experimental results showed 90% accuracy for the detection of pothole images and high recall for the localization of potholes in the detected images.", "title": "" }, { "docid": "6851e4355ab4825b0eb27ac76be2329f", "text": "Segmentation of novel or dynamic objects in a scene, often referred to as “background subtraction” or “foreground segmentation”, is a critical early step in most computer vision applications in domains such as surveillance and human-computer interaction. All previously described real-time methods fail to handle properly one or more common phenomena, such as global illumination changes, shadows, inter-reflections, similarity of foreground color to background, and non-static backgrounds (e.g. active video displays or trees waving in the wind). The recent advent of hardware and software for real-time computation of depth imagery makes better approaches possible. We propose a method for modeling the background that uses per-pixel, time-adaptive, Gaussian mixtures in the combined input space of depth and luminance-invariant color. This combination in itself is novel, but we further improve it by introducing the ideas of 1) modulating the background model learning rate based on scene activity, and 2) making color-based segmentation criteria dependent on depth observations. Our experiments show that the method possesses much greater robustness to problematic phenomena than the prior state-of-the-art, without sacrificing real-time performance, making it well-suited for a wide range of practical applications in video event detection and recognition.", "title": "" }, { "docid": "b72bc9ee1c32ec3d268abd1d3e51db25", "text": "As a newly developing academic domain, research on Mobile learning is still in its initial stage. Meanwhile, M-blackboard comes from Mobile learning. This study attempts to discover the factors impacting the intention to adopt mobile blackboard. Eleven selected models on Mobile learning adoption were comprehensively reviewed. From the reviewed articles, the most common factors are identified. Also, from the frequency analysis, the most frequent factors in the Mobile blackboard or Mobile learning adoption studies are performance expectancy, effort expectancy, perceived playfulness, facilitating conditions, self-management, cost and past experiences. Descriptive statistics were performed to gather the respondents' demographic information. It also shows that the respondents agreed on nearly every statement item. Pearson correlation and regression analysis were also conducted.", "title": "" }, { "docid": "0dd4f05f9bd3d582b9fb9c64f00ed697", "text": "Today, among other challenges, teaching students how to write computer programs for the first time can be an important criterion for whether students in computing will remain in their program of study, i.e. Computer Science or Information Technology. Not learning to program a computer as a computer scientist or information technologist can be compared to a mathematician not learning algebra. For a mathematician this would be an extremely limiting situation. For a computer scientist, not learning to program imposes a similar severe limitation on the budding computer scientist.
Therefore it is not a question as to whether programming should be taught rather it is a question of how to maximize aspects of teaching programming so that students are less likely to be discouraged when learning to program. Different criteria have been used to select first programming languages. Computer scientists have attempted to establish criteria for selecting the first programming language to teach a student. This paper examines the criteria used to select first programming languages and the issues that novices face when learning to program in an effort to create a more comprehensive model for selecting first programming languages.", "title": "" }, { "docid": "c26eabb377db5f1033ec6d354d890a6f", "text": "Recurrent neural networks have recently shown significant potential in different language applications, ranging from natural language processing to language modelling. This paper introduces a research effort to use such networks to develop and evaluate natural language acquisition on a humanoid robot. Here, the problem is twofold. First, the focus will be put on using the gesture-word combination stage observed in infants to transition from single to multi-word utterances. Secondly, research will be carried out in the domain of connecting action learning with language learning. In the former, the long-short term memory architecture will be implemented, whilst in the latter multiple time-scale recurrent neural networks will be used. This will allow for comparison between the two architectures, whilst highlighting the strengths and shortcomings of both with respect to the language learning problem. Here, the main research efforts, challenges and expected outcomes are described.", "title": "" }, { "docid": "665fb08aba7cc1a2d6680bccb259396f", "text": "Sample entropy (SampEn) has been proposed as a method to overcome limitations associated with approximate entropy (ApEn). The initial paper describing the SampEn metric included a characterization study comparing both ApEn and SampEn against theoretical results and concluded that SampEn is both more consistent and agrees more closely with theory for known random processes than ApEn. SampEn has been used in several studies to analyze the regularity of clinical and experimental time series. However, questions regarding how to interpret SampEn in certain clinical situations and its relationship to classical signal parameters remain unanswered. In this paper we report the results of a characterization study intended to provide additional insights regarding the interpretability of SampEn in the context of biomedical signal analysis.", "title": "" }, { "docid": "323d633995296611c903874aefa5cdb7", "text": "This paper investigates the possibility of communicating through vibrations. By modulating the vibration motors available in all mobile phones, and decoding them through accelerometers, we aim to communicate small packets of information. Of course, this will not match the bit rates available through RF modalities, such as NFC or Bluetooth, which utilize a much larger bandwidth. However, where security is vital, vibratory communication may offer advantages. We develop Ripple, a system that achieves up to 200 bits/s of secure transmission using off-the-shelf vibration motor chips, and 80 bits/s on Android smartphones. This is an outcome of designing and integrating a range of techniques, including multicarrier modulation, orthogonal vibration division, vibration braking, side-channel jamming, etc. 
Not all these techniques are novel; some are borrowed and suitably modified for our purposes, while others are unique to this relatively new platform of vibratory communication.", "title": "" }, { "docid": "ccd356a943f19024478c42b5db191293", "text": "This paper discusses the relationship between concepts of narrative, patterns of interaction within computer games constituting gameplay gestalts, and the relationship between narrative and the gameplay gestalt. The repetitive patterning involved in gameplay gestalt formation is found to undermine deep narrative immersion. The creation of stronger forms of interactive narrative in games requires the resolution of this conflict. The paper goes on to describe the Purgatory Engine, a game engine based upon more fundamentally dramatic forms of gameplay and interaction, supporting a new game genre referred to as the first-person actor. The first-person actor does not involve a repetitive gestalt mode of gameplay, but defines gameplay in terms of character development and dramatic interaction.", "title": "" }, { "docid": "34b7073f947888694053cb421544cb37", "text": "Many fundamental image-related problems involve deconvolution operators. Real blur degradation seldom complies with an ideal linear convolution model due to camera noise, saturation, image compression, to name a few. Instead of perfectly modeling outliers, which is rather challenging from a generative model perspective, we develop a deep convolutional neural network to capture the characteristics of degradation. We note directly applying existing deep neural networks does not produce reasonable results. Our solution is to establish the connection between traditional optimization-based schemes and a neural network architecture where a novel, separable structure is introduced as a reliable support for robust deconvolution against artifacts. Our network contains two submodules, both trained in a supervised manner with proper initialization. They yield decent performance on non-blind image deconvolution compared to previous generative-model based methods.", "title": "" }, { "docid": "d7a85bedea94e2e70f9ad52c6247f8d3", "text": "Little is known about the perception of artificial spatial hearing by hearing-impaired subjects. The purpose of this study was to investigate how listeners with hearing disorders perceived the effect of a spatialization feature designed for wireless microphone systems. Forty listeners took part in the experiments. They were arranged in four groups: normal-hearing, moderate, severe, and profound hearing loss. Their performance in terms of speech understanding and speaker localization was assessed with diotic and binaural stimuli. The results of the speech intelligibility experiment revealed that the subjects presenting a moderate or severe hearing impairment better understood speech with the spatialization feature. Thus, it was demonstrated that the conventional diotic binaural summation operated by current wireless systems can be transformed to reproduce the spatial cues required to localize the speaker, without any loss of intelligibility. The speaker localization experiment showed that a majority of the hearing-impaired listeners had similar performance with natural and artificial spatial hearing, contrary to the normal-hearing listeners.
This suggests that certain subjects with hearing impairment preserve their localization abilities with approximated generic head-related transfer functions in the frontal horizontal plane.", "title": "" }, { "docid": "8d071dbd68902f3bac18e61caa0828dd", "text": "This paper demonstrates that it is possible to construct the Stochastic flash ADC using standard digital cells. In order to minimize the analog circuit requirements which cost high, it is appropriate to begin the architecture with highly digital. The proposed Stochastic flash ADC uses a random comparator offset to set the trip points. Since the comparator are no longer sized for small offset, they can be shrunk down into digital cells. Using comparators that are implemented as digital cells produces a large variation of comparator offset. Typically, this is considered a disadvantage, but in our case, this large standard deviation of offset is used to set the input signal range. By designing an ADC that is made up entirely of digital cells, it is natural candidate for a synthesizable ADC. The analog comparator which is used in this ADC is constructed from standard digital NAND gates connected with SR latch to minimize the memory effects. A Wallace tree adder is used to sum the total number of comparator output, since the order of comparator output is random. Thus, all the components including the comparator and Wallace tree adder can be implemented using standard digital cells. [1] INTRODUCTION As CMOS designs are scaled to smaller technology nodes, many benefits arise, as well as challenges. There are benefits in speed and power due to decreased capacitance and lower supply voltage, yet reduction in intrinsic device gain and lower supply voltage make it difficult to migrate previous analog designs to smaller scaled processes. Moreover, as scaling trends continue, the analog portion of a mixed-signal system tends to consume proportionally more power and area and have a higher design cost than the digital counterpart. This tends to increase the overall design cost of the mixed-signal design. Automatically synthesized digital circuits get all the benefits of scaling, but analog circuits get these benefits at a large cost. The most essential component of ADC is the comparator, which translates from the analog world to digital world. Since comparator defines the boundary between analog and digital realms, the flash ADC architecture will be considered, as it places the comparator as close to the analog input signal. Flash ADCs use a reference ladder to generate the comparator trip points that correspond to each digital code. Typically the references are either generated by a resistor ladder or some form of analog interpolation, but the effect is the same: a …", "title": "" }, { "docid": "4100a10b2a03f3a1ba712901cee406d2", "text": "Traditionally, many clinicians tend to forego esthetic considerations when full-coverage restorations are indicated for pediatric patients with primary dentitions. However, the availability of new zirconia pediatric crowns and reliable techniques for cementation makes esthetic outcomes practical and consistent when restoring primary dentition. Two cases are described: a 3-year-old boy who presented with severe early childhood caries affecting both anterior and posterior teeth, and a 6-year-old boy who presented with extensive caries of his primary posterior dentition, including a molar requiring full coverage. 
The parents of both boys were concerned about esthetics, and the extent of decay indicated the need for full-coverage restorations. This led to the boys receiving treatment using a restorative procedure in which the carious teeth were prepared for and restored with esthetic tooth-colored zirconia crowns. In both cases, comfortable function and pleasing esthetics were achieved.", "title": "" }, { "docid": "b6b9e1eaf17f6cdbc9c060e467021811", "text": "Tumour-associated viruses produce antigens that, on the face of it, are ideal targets for immunotherapy. Unfortunately, these viruses are experts at avoiding or subverting the host immune response. Cervical-cancer-associated human papillomavirus (HPV) has a battery of immune-evasion mechanisms at its disposal that could confound attempts at HPV-directed immunotherapy. Other virally associated human cancers might prove similarly refractive to immuno-intervention unless we learn how to circumvent their strategies for immune evasion.", "title": "" }, { "docid": "95d624c86fcd86377e46738689bb18a8", "text": "EEG desynchronization is a reliable correlate of excited neural structures of activated cortical areas. EEG synchronization within the alpha band may be an electrophysiological correlate of deactivated cortical areas. Such areas are not processing sensory information or motor output and can be considered to be in an idling state. One example of such an idling cortical area is the enhancement of mu rhythms in the primary hand area during visual processing or during foot movement. In both circumstances, the neurons in the hand area are not needed for visual processing or preparation for foot movement. As a result of this, an enhanced hand area mu rhythm can be observed.", "title": "" }, { "docid": "827e9045f932b146a8af66224e114be6", "text": "Using a common set of attributes to determine which methodology to use in a particular data warehousing project.", "title": "" }, { "docid": "569fed958b7a471e06ce718102687a1e", "text": "The introduction of convolutional layers greatly advanced the performance of neural networks on image tasks due to innately capturing a way of encoding and learning translation-invariant operations, matching one of the underlying symmetries of the image domain. In comparison, there are a number of problems in which there are a number of different inputs which are all ’of the same type’ — multiple particles, multiple agents, multiple stock prices, etc. The corresponding symmetry to this is permutation symmetry, in that the algorithm should not depend on the specific ordering of the input data. We discuss a permutation-invariant neural network layer in analogy to convolutional layers, and show the ability of this architecture to learn to predict the motion of a variable number of interacting hard discs in 2D. In the same way that convolutional layers can generalize to different image sizes, the permutation layer we describe generalizes to different numbers of objects.", "title": "" }, { "docid": "81b5379abf3849e1ae4e233fd4955062", "text": "Three-phase dc/dc converters have the superior characteristics including lower current rating of switches, the reduced output filter requirement, and effective utilization of transformers. To further reduce the voltage stress on switches, three-phase three-level (TPTL) dc/dc converters have been investigated recently; however, numerous active power switches result in a complicated configuration in the available topologies. 
Therefore, a novel TPTL dc/dc converter adopting a symmetrical duty cycle control is proposed in this paper. Compared with the available TPTL converters, the proposed converter has fewer switches and simpler configuration. The voltage stress on all switches can be reduced to the half of the input voltage. Meanwhile, the ripple frequency of output current can be increased significantly, resulting in a reduced filter requirement. Experimental results from a 540-660-V input and 48-V/20-A output are presented to verify the theoretical analysis and the performance of the proposed converter.", "title": "" }, { "docid": "9c16f3ccaab4e668578e3eda7d452ebd", "text": "Speech is a common and effective way of communication between humans, and modern consumer devices such as smartphones and home hubs are equipped with deep learning based accurate automatic speech recognition to enable natural interaction between humans and machines. Recently, researchers have demonstrated powerful attacks against machine learning models that can fool them to produce incorrect results. However, nearly all previous research in adversarial attacks has focused on image recognition and object detection models. In this short paper, we present a first of its kind demonstration of adversarial attacks against speech classification model. Our algorithm performs targeted attacks with 87% success by adding small background noise without having to know the underlying model parameter and architecture. Our attack only changes the least significant bits of a subset of audio clip samples, and the noise does not change 89% the human listener’s perception of the audio clip as evaluated in our human study.", "title": "" } ]
scidocsrr
8b0fb060f28dee6142e3ee5ff28c5578
Community Detection in Multi-Dimensional Networks
[ { "docid": "bb2504b2275a20010c0d5f9050173d40", "text": "Clustering nodes in a graph is a useful general technique in data mining of large network data sets. In this context, Newman and Girvan [9] recently proposed an objective function for graph clustering called the Q function which allows automatic selection of the number of clusters. Empirically, higher values of the Q function have been shown to correlate well with good graph clusterings. In this paper we show how optimizing the Q function can be reformulated as a spectral relaxation problem and propose two new spectral clustering algorithms that seek to maximize Q. Experimental results indicate that the new algorithms are efficient and effective at finding both good clusterings and the appropriate number of clusters across a variety of real-world graph data sets. In addition, the spectral algorithms are much faster for large sparse graphs, scaling roughly linearly with the number of nodes n in the graph, compared to O(n) for previous clustering algorithms using the Q function.", "title": "" }, { "docid": "31873424960073962d3d8eba151f6a4b", "text": "Multiple view data, which have multiple representations from different feature spaces or graph spaces, arise in various data mining applications such as information retrieval, bioinformatics and social network analysis. Since different representations could have very different statistical properties, how to learn a consensus pattern from multiple representations is a challenging problem. In this paper, we propose a general model for multiple view unsupervised learning. The proposed model introduces the concept of mapping function to make the different patterns from different pattern spaces comparable and hence an optimal pattern can be learned from the multiple patterns of multiple representations. Under this model, we formulate two specific models for two important cases of unsupervised learning, clustering and spectral dimensionality reduction; we derive an iterating algorithm for multiple view clustering, and a simple algorithm providing a global optimum to multiple spectral dimensionality reduction. We also extend the proposed model and algorithms to evolutionary clustering and unsupervised learning with side information. Empirical evaluations on both synthetic and real data sets demonstrate the effectiveness of the proposed model and algorithms.", "title": "" } ]
[ { "docid": "5441d081eabb4ad3d96775183e603b65", "text": "We give an introduction to computation and logic tailored for algebraists, and use this as a springboard to discuss geometric models of computation and the role of cut-elimination in these models, following Girard's geometry of interaction program. We discuss how to represent programs in the λ-calculus and proofs in linear logic as linear maps between infinite-dimensional vector spaces. The interesting part of this vector space semantics is based on the cofree cocommutative coalgebra of Sweedler [71] and the recent explicit computations of liftings in [62].", "title": "" }, { "docid": "2c28d01814e0732e59d493f0ea2eafcb", "text": "Victor Frankenstein sought to create an intelligent being imbued with the r ules of civilized human conduct, who could further learn how to behave and possibly even evolve through successive g nerations into a more perfect form. Modern human composers similarly strive to create intell igent algorithmic music composition systems that can follow prespecified rules, learn appropriate patte rns from a collection of melodies, or evolve to produce output more perfectly matched to some aesthetic criteria . H re we review recent efforts aimed at each of these three types of algorithmic composition. We focus pa rticularly on evolutionary methods, and indicate how monstrous many of the results have been. We present a ne w method that uses coevolution to create linked artificial music critics and music composers , and describe how this method can attach the separate parts of rules, learning, and evolution together in to one coherent body. “Invention, it must be humbly admitted, does not consist in creating out of void, but ou t of chaos; the materials must, in the first place, be afforded...” --Mary Shelley, Frankenstein (1831/1993, p. 299)", "title": "" }, { "docid": "b21ae248eea30b91e41012ab70cb6d81", "text": "Communication technology plays an increasingly important role in the growing automated metering infrastructure (AMI) market. This paper presents a thorough analysis and comparison of four application layer protocols in the smart metering context. The inspected protocols are DLMS/COSEM, the Smart Message Language (SML), and the MMS and SOAP mappings of IEC 61850. The focus of this paper is on their use over TCP/IP. The protocols are first compared with respect to qualitative criteria such as the ability to transmit clock synchronization information. Afterwards the message size of meter reading requests and responses and the different binary encodings of the protocols are compared.", "title": "" }, { "docid": "ce5c5d0d0cb988c96f0363cfeb9610d4", "text": "Due to deep automation, the configuration of many Cloud infrastructures is static and homogeneous, which, while easing administration, significantly decreases a potential attacker's uncertainty on a deployed Cloud-based service and hence increases the chance of the service being compromised. Moving-target defense (MTD) is a promising solution to the configuration staticity and homogeneity problem. This paper presents our findings on whether and to what extent MTD is effective in protecting a Cloud-based service with heterogeneous and dynamic attack surfaces - these attributes, which match the reality of current Cloud infrastructures, have not been investigated together in previous works on MTD in general network settings. 
We 1) formulate a Cloud-based service security model that incorporates Cloud-specific features such as VM migration/snapshotting and the diversity/compatibility of migration, 2) consider the accumulative effect of the attacker's intelligence on the target service's attack surface, 3) model the heterogeneity and dynamics of the service's attack surfaces, as defined by the (dynamic) probability of the service being compromised, as an S-shaped generalized logistic function, and 4) propose a probabilistic MTD service deployment strategy that exploits the dynamics and heterogeneity of attack surfaces for protecting the service against attackers. Through simulation, we identify the conditions and extent of the proposed MTD strategy's effectiveness in protecting Cloud-based services. Namely, 1) MTD is more effective when the service deployment is dense in the replacement pool and/or when the attack is strong, and 2) attack-surface heterogeneity-and-dynamics awareness helps in improving MTD's effectiveness.", "title": "" }, { "docid": "348702d85126ed64ca24bdc62c1146d9", "text": "Autonomous Vehicles are currently being tested in a variety of scenarios. As we move towards Autonomous Vehicles, how should intersections look? To answer that question, we break down an intersection management into the different conundrums and scenarios involved in the trajectory planning and current approaches to solve them. Then, a brief analysis of current works in autonomous intersection is conducted. With a critical eye, we try to delve into the discrepancies of existing solutions while presenting some critical and important factors that have been addressed. Furthermore, open issues that have to be addressed are also emphasized. We also try to answer the question of how to benchmark intersection management algorithms by providing some factors that impact autonomous navigation at intersection.", "title": "" }, { "docid": "4bddc7bb7088c01dbc48504656b0f8d4", "text": "The basic knowledge required to do sentiment analysis of Twitter is discussed in this review paper. Sentiment Analysis can be viewed as field of text mining, natural language processing. Thus we can study sentiment analysis in various aspects. This paper presents levels of sentiment analysis, approaches to do sentiment analysis, methodologies for doing it, and features to be extracted from text and the applications. Twitter is a microblogging service to which if sentiment analysis done one has to follow explicit path. Thus this paper puts overview about tweets extraction, their preprocessing and their sentiment analysis.", "title": "" }, { "docid": "d848a684aeddd5447f17282fdd2efaf0", "text": "..........................................................................................................iii ACKNOWLEDGMENTS.........................................................................................iv TABLE OF CONTENTS .........................................................................................vi LIST OF TABLES................................................................................................viii LIST OF FIGURES ................................................................................................ix", "title": "" }, { "docid": "b4d7a8b6b24c85af9f62105194087535", "text": "New technologies provide expanded opportunities for interaction design. 
The growing number of possible ways to interact, in turn, creates a new responsibility for designers: Besides the product's visual aesthetics, one has to make choices about the aesthetics of interaction. This issue recently gained interest in Human-Computer Interaction (HCI) research. Based on a review of 19 approaches, we provide an overview of today's state of the art. We focused on approaches that feature \"qualities\", \"dimensions\" or \"parameters\" to describe interaction. Those fell into two broad categories. One group of approaches dealt with detailed spatio-temporal attributes of interaction sequences (i.e., action-reaction) on a sensomotoric level (i.e., form). The other group addressed the feelings and meanings an interaction is enveloped in rather than the interaction itself (i.e., experience). Surprisingly, only two approaches addressed both levels simultaneously, making the explicit link between form and experience. We discuss these findings and its implications for future theory building.", "title": "" }, { "docid": "33ad325fc91be339c580581107314146", "text": "Designing technological systems for personalized education is an iterative and interdisciplinary process that demands a deep understanding of the application domain, the limitations of current methods and technologies, and the computational methods and complexities behind user modeling and adaptation. We present our design process and the Socially Assistive Robot (SAR) tutoring system to support the efforts of educators in teaching number concepts to preschool children. We focus on the computational considerations of designing a SAR system for young children that may later be personalized along multiple dimensions. We conducted an initial data collection to validate that the system is at the proper challenge level for our target population, and discovered promising patterns in participants' learning styles, nonverbal behavior, and performance. We discuss our plans to leverage the data collected to learn and validate a computational, multidimensional model of number concepts learning.", "title": "" }, { "docid": "f25b9147e67bd8051852142ebd82cf20", "text": "Fossil fuels currently supply most of the world's energy needs, and however unacceptable their long-term consequences, the supplies are likely to remain adequate for the next few generations. Scientists and policy makers must make use of this period of grace to assess alternative sources of energy and determine what is scientifically possible, environmentally acceptable and technologically promising.", "title": "" }, { "docid": "a08697b03ca0b8b8ea6e037fdccb8645", "text": "Most P2P systems that provide a DHT abstraction distribute objects among “peer nodes” by choosing random identifiers for the objects. This could result in an O(log N) imbalance. Besides, P2P systems can be highly heterogeneous, i.e. they may consist of peers that range from old desktops behind modem lines to powerful servers connected to the Internet through high-bandwidth lines. In this paper, we address the problem of load balancing in such P2P systems. We explore the space of designing load-balancing algorithms that uses the notion of “virtual servers”. We present three schemes that differ primarily in the amount of information used to decide how to re-arrange load. 
Our simulation results show that even the simplest scheme is able to balance the load within 80% of the optimal value, while the most complex scheme is able to balance the load within 95% of the optimal value.", "title": "" }, { "docid": "db83931d7fef8174acdb3a1f4ef0d043", "text": "Physical fatigue has been identified as a risk factor associated with the onset of occupational injury. Muscular fatigue developed from repetitive hand-gripping tasks is of particular concern. This study examined the use of a maximal, repetitive, static power grip test of strength-endurance in detecting differences in exertions between workers with uninjured and injured hands, and workers who were asked to provide insincere exertions. The main dependent variable of interest was power grip muscular force measured with a force strain gauge. Group data showed that the power grip protocol, used in this study, provided a valid and reliable estimate of wrist-hand strength-endurance. Force fatigue curves showed both linear and curvilinear effects among the study groups. An endurance index based on force decrement during repetitive power grip was shown to differentiate between uninjured, injured, and insincere groups.", "title": "" }, { "docid": "0f969ca56c984eb573a541318884fdaa", "text": "One of the mechanisms by which the innate immune system senses the invasion of pathogenic microorganisms is through the Toll-like receptors (TLRs), which recognize specific molecular patterns that are present in microbial components. Stimulation of different TLRs induces distinct patterns of gene expression, which not only leads to the activation of innate immunity but also instructs the development of antigen-specific acquired immunity. Here, we review the rapid progress that has recently improved our understanding of the molecular mechanisms that mediate TLR signalling.", "title": "" }, { "docid": "b9261a0d56a6305602ff27da5ec160e8", "text": "In psychology the Rubber Hand Illusion (RHI) is an experiment where participants get the feeling that a fake hand is becoming their own. Recently, new testing methods using an action based paradigm have induced stronger RHI. However, these experiments are facing limitations because they are difficult to implement and lack of rigorous experimental conditions. This paper proposes a low-cost open source robotic hand which is easy to manufacture and removes these limitations. This device reproduces fingers movement of the participants in real time. A glove containing sensors is worn by the participant and records fingers flexion. Then a microcontroller drives hobby servo-motors on the robotic hand to reproduce the corresponding fingers position. A connection between the robotic device and a computer can be established, enabling the experimenters to tune precisely the desired parameters using Matlab. Since this is the first time a robotic hand is developed for the RHI, a validation study has been conducted. This study confirms previous results found in the literature. This study also illustrates the fact that the robotic hand can be used to conduct innovative experiments in the RHI field. Understanding such RHI is important because it can provide guidelines for prosthetic design.", "title": "" }, { "docid": "60a6c8588c46fa2aa63a3348723f2bb1", "text": "An early warning system can help to identify at-risk students, or predict student learning performance by analyzing learning portfolios recorded in a learning management system (LMS). 
Although previous studies have shown the applicability of determining learner behaviors from an LMS, most investigated datasets are not assembled from online learning courses or from whole learning activities undertaken on courses that can be analyzed to evaluate students' academic achievement. Previous studies generally focus on the construction of predictors for learner performance evaluation after a course has ended, and neglect the practical value of an “early warning” system to predict at-risk students while a course is in progress. We collected the complete learning activities of an online undergraduate course and applied data-mining techniques to develop an early warning system. Our results showed that time-dependent variables extracted from the LMS are critical factors for online learning. After students have used an LMS for a period of time, our early warning system effectively characterizes their current learning performance. Data-mining techniques are useful in the construction of early warning systems; based on our experimental results, classification and regression tree (CART), supplemented by AdaBoost, is the best classifier for the evaluation of learning performance investigated by this study. © 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "394c8f7a708d69ca26ab0617ab1530ab", "text": "Developing wireless sensor networks can enable information gathering, information processing and reliable monitoring of a variety of environments for both civil and military applications. It is however necessary to agree upon a basic architecture for building sensor network applications. This paper presents a general classification of sensor network applications based on their network configurations and discusses some of their architectural requirements. We propose a generic architecture for a specific subclass of sensor applications which we define as self-configurable systems where a large number of sensors coordinate amongst themselves to achieve a large sensing task. Throughout this paper we assume a certain subset of the sensors to be immobile. This paper lists the general architectural and infra-structural components necessary for building this class of sensor applications. Given the various architectural components, we present an algorithm that self-organizes the sensors into a network in a transparent manner. Some of the basic goals of our algorithm include minimizing power utilization, localizing operations and tolerating node and link failures.", "title": "" }, { "docid": "38e95632ff481471ddf38c12044257df", "text": "Retrieving object instances among cluttered scenes efficiently requires compact yet comprehensive regional image representations. Intuitively, object semantics can help build the index that focuses on the most relevant regions. However, due to the lack of bounding-box datasets for objects of interest among retrieval benchmarks, most recent work on regional representations has focused on either uniform or class-agnostic region selection. In this paper, we first fill the void by providing a new dataset of landmark bounding boxes, based on the Google Landmarks dataset, that includes 94k images with manually curated boxes from 15k unique landmarks. Then, we demonstrate how a trained landmark detector, using our new dataset, can be leveraged to index image regions and improve retrieval accuracy while being much more efficient than existing regional methods.
In addition, we further introduce a novel regional aggregated selective match kernel (R-ASMK) to effectively combine information from detected regions into an improved holistic image representation. R-ASMK boosts image retrieval accuracy substantially at no additional memory cost, while even outperforming systems that index image regions independently. Our complete image retrieval system improves upon the previous state-of-the-art by significant margins on the Revisited Oxford and Paris datasets. Code and data will be released.", "title": "" }, { "docid": "eb0e38817ff491fbe274caf5e7126d2d", "text": "At the forefront of debates on language are new data demonstrating infants' early acquisition of information about their native language. The data show that infants perceptually \"map\" critical aspects of ambient language in the first year of life before they can speak. Statistical properties of speech are picked up through exposure to ambient language. Moreover, linguistic experience alters infants' perception of speech, warping perception in the service of language. Infants' strategies are unexpected and unpredicted by historical views. A new theoretical position has emerged, and six postulates of this position are described.", "title": "" }, { "docid": "1938d1b72bbeec9cb9c2eed3f2c0a19a", "text": "Domain Name System (DNS) traffic has become a rich source of information from a security perspective. However, the volume of DNS traffic has been skyrocketing, such that security analyzers experience difficulties in collecting, retrieving, and analyzing the DNS traffic in response to modern Internet threats. More precisely, much of the research relating to DNS has been negatively affected by the dramatic increase in the number of queries and domains. This phenomenon has necessitated a scalable approach, which is not dependent on the volume of DNS traffic. In this paper, we introduce a fast and scalable approach, called PsyBoG, for detecting malicious behavior within large volumes of DNS traffic. PsyBoG leverages a signal processing technique, power spectral density (PSD) analysis, to discover the major frequencies resulting from the periodic DNS queries of botnets. The PSD analysis allows us to detect sophisticated botnets regardless of their evasive techniques, sporadic behavior, and even normal users’ traffic. Furthermore, our method allows us to deal with large-scale DNS data by only utilizing the timing information of query generation regardless of the number of queries and domains. Finally, PsyBoG discovers groups of hosts which show similar patterns of malicious behavior. PsyBoG was evaluated by conducting experiments with two different data sets, namely DNS traces generated by real malware in controlled environments and a large number of real-world DNS traces collected from a recursive DNS server, an authoritative DNS server, and Top-Level Domain (TLD) servers. We utilized the malware traces as the ground truth, and, as a result, PsyBoG performed with a detection accuracy of 95%. By using a large number of DNS traces, we were able to demonstrate the scalability and effectiveness of PsyBoG in terms of practical usage. Finally, PsyBoG detected 23 unknown and 26 known botnet groups with 0.1% false positives. © 2016 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "2a422c6047bca5a997d5c3d0ee080437", "text": "Connecting mathematical logic and computation, it ensures that some aspects of programming are absolute.", "title": "" } ]
scidocsrr
8b23a893d4cb1ebc5060bafc3c45d1bd
How to Make a Digital Currency on a Blockchain Stable
[ { "docid": "11e19b59fa2df88f3468b4e71aab8cf4", "text": "Blockchain is a distributed timestamp server technology introduced for realization of Bitcoin, a digital cash system. It has been attracting much attention especially in the areas of financial and legal applications. But such applications would fail if they are designed without knowledge of the fundamental differences in blockchain from existing technology. We show that blockchain is a probabilistic state machine in which participants can never commit on decisions, we also show that this probabilistic nature is necessarily deduced from the condition where the number of participants remains unknown. This work provides useful abstractions to think about blockchain, and raises discussion for promoting the better use of the technology.", "title": "" } ]
[ { "docid": "9a4a519023175802578dad5864b3dd01", "text": "The problem of efficiently finding the best match for a query in a given set with respect to the Euclidean distance or the cosine similarity has been extensively studied. However, the closely related problem of efficiently finding the best match with respect to the inner-product has never been explored in the general setting to the best of our knowledge. In this paper we consider this problem and contrast it with the previous problems considered. First, we propose a general branch-and-bound algorithm based on a (single) tree data structure. Subsequently, we present a dual-tree algorithm for the case where there are multiple queries. Our proposed branch-and-bound algorithms are based on novel inner-product bounds. Finally we present a new data structure, the cone tree, for increasing the efficiency of the dual-tree algorithm. We evaluate our proposed algorithms on a variety of data sets from various applications, and exhibit up to five orders of magnitude improvement in query time over the naive search technique in some cases.", "title": "" }, { "docid": "6cf97825d649a4f7518be9b72ea8f19f", "text": "This paper proposes a distributed discrete-time algorithm to solve an additive cost optimization problem over undirected deterministic or time-varying graphs. Different from most previous methods that require to exchange exact states between nodes, each node in our algorithm needs only the sign of the relative state between its neighbors, which is clearly one bit of information. Our analysis is based on optimization theory rather than Lyapunov theory or algebraic graph theory. The latter is commonly used in existing literature, especially in the continuous-time algorithm design, and is difficult to apply in our case. Besides, an optimization-theory-based analysis may make our results more extendible. In particular, our convergence proofs are based on the convergences of the subgradient method and the stochastic subgradient method. Moreover, the convergence rate of our algorithm can vary from $O(1/\\ln(k))$ to $O(1/\\sqrt{k})$, depending on the choice of the stepsize. A quantile regression problem is included to illustrate the performance of our algorithm using simulations.", "title": "" }, { "docid": "4b494016220eb5442642e34c3ed2d720", "text": "BACKGROUND\nTreatments for alopecia are in high demand, but not all are safe and reliable. Dalteparin and protamine microparticles (D/P MPs) can effectively carry growth factors (GFs) in platelet-rich plasma (PRP).\n\n\nOBJECTIVE\nTo identify the effects of PRP-containing D/P MPs (PRP&D/P MPs) on hair growth.\n\n\nMETHODS & MATERIALS\nParticipants were 26 volunteers with thin hair who received five local treatments of 3 mL of PRP&D/P MPs (13 participants) or PRP and saline (control, 13 participants) at 2- to 3-week intervals and were evaluated for 12 weeks. Injected areas comprised frontal or parietal sites with lanugo-like hair. Experimental and control areas were photographed. Consenting participants underwent biopsies for histologic examination.\n\n\nRESULTS\nD/P MPs bind to various GFs contained in PRP. Significant differences were seen in hair cross-section but not in hair numbers in PRP and PRP&D/P MP injections. The addition of D/P MPs to PRP resulted in significant stimulation in hair cross-section. 
Microscopic findings showed thickened epithelium, proliferation of collagen fibers and fibroblasts, and increased vessels around follicles.\n\n\nCONCLUSION\nPRP&D/P MPs and PRP facilitated hair growth but D/P MPs provided additional hair growth. The authors have indicated no significant interest with commercial supporters.", "title": "" }, { "docid": "97dfc67c63e7e162dd06d5cb2959912a", "text": "To examine the pattern of injuries in cases of fatal shark attack in South Australian waters, the authors examined the files of their institution for all cases of shark attack in which full autopsies had been performed over the past 25 years, from 1974 to 1998. Of the seven deaths attributed to shark attack during this period, full autopsies were performed in only two cases. In the remaining five cases, bodies either had not been found or were incomplete. Case 1 was a 27-year-old male surfer who had been attacked by a shark. At autopsy, the main areas of injury involved the right thigh, which displayed characteristic teeth marks, extensive soft tissue damage, and incision of the femoral artery. There were also incised wounds of the right wrist. Bony injury was minimal, and no shark teeth were recovered. Case 2 was a 26-year-old male diver who had been attacked by a shark. At autopsy, the main areas of injury involved the left thigh and lower leg, which displayed characteristic teeth marks, extensive soft tissue damage, and incised wounds of the femoral artery and vein. There was also soft tissue trauma to the left wrist, with transection of the radial artery and vein. Bony injury was minimal, and no shark teeth were recovered. In both cases, death resulted from exsanguination following a similar pattern of soft tissue and vascular damage to a leg and arm. This type of injury is in keeping with predator attack from underneath or behind, with the most severe injuries involving one leg. Less severe injuries to the arms may have occurred during the ensuing struggle. Reconstruction of the damaged limb in case 2 by sewing together skin, soft tissue, and muscle bundles not only revealed that no soft tissue was missing but also gave a clearer picture of the pattern of teeth marks, direction of the attack, and species of predator.", "title": "" }, { "docid": "3cd383e547b01040261dc1290d87b02e", "text": "Abnormal condition in a power system generally leads to a fall in system frequency, and it leads to system blackout in an extreme condition. This paper presents a technique to develop an auto load shedding and islanding scheme for a power system to prevent blackout and to stabilize the system under any abnormal condition. The technique proposes the sequence and conditions of the applications of different load shedding schemes and islanding strategies. It is developed based on the international current practices. It is applied to the Bangladesh Power System (BPS), and an auto load-shedding and islanding scheme is developed. The effectiveness of the developed scheme is investigated simulating different abnormal conditions in BPS.", "title": "" }, { "docid": "62c6050db8e42b1de54f8d1d54fd861f", "text": "In this paper we present our approach of solving the PAN 2016 Author Profiling Task. It involves classifying users’ gender and age using social media posts. We used SVM classifiers and neural networks on TF-IDF and verbosity features. 
Results showed that SVM classifiers are better for English datasets and neural networks perform better for Dutch and Spanish datasets.", "title": "" }, { "docid": "d477e2a2678de720c57895bf1d047c4b", "text": "Interpreting predictions from tree ensemble methods such as gradient boosting machines and random forests is important, yet feature attribution for trees is often heuristic and not individualized for each prediction. Here we show that popular feature attribution methods are inconsistent, meaning they can lower a feature’s assigned importance when the true impact of that feature actually increases. This is a fundamental problem that casts doubt on any comparison between features. To address it we turn to recent applications of game theory and develop fast exact tree solutions for SHAP (SHapley Additive exPlanation) values, which are the unique consistent and locally accurate attribution values. We then extend SHAP values to interaction effects and define SHAP interaction values. We propose a rich visualization of individualized feature attributions that improves over classic attribution summaries and partial dependence plots, and a unique “supervised” clustering (clustering based on feature attributions). We demonstrate better agreement with human intuition through a user study, exponential improvements in run time, improved clustering performance, and better identification of influential features. An implementation of our algorithm has also been merged into XGBoost and LightGBM, see http://github.com/slundberg/shap for details. ACM Reference Format: Scott M. Lundberg, Gabriel G. Erion, and Su-In Lee. 2018. Consistent Individualized Feature Attribution for Tree Ensembles. In Proceedings of ACM (KDD’18). ACM, New York, NY, USA, 9 pages. https://doi.org/none", "title": "" }, { "docid": "d29eba4f796cb642d64e73b76767e59d", "text": "In this paper, a novel segmentation and recognition approach to automatically extract street lighting poles from mobile LiDAR data is proposed. First, points on or around the ground are extracted and removed through a piecewise elevation histogram segmentation method. Then, a new graph-cut-based segmentation method is introduced to extract the street lighting poles from each cluster obtained through a Euclidean distance clustering algorithm. In addition to the spatial information, the street lighting pole's shape and the point's intensity information are also considered to formulate the energy function. Finally, a Gaussian-mixture-model-based method is introduced to recognize the street lighting poles from the candidate clusters. The proposed approach is tested on several point clouds collected by different mobile LiDAR systems. Experimental results show that the proposed method is robust to noises and achieves an overall performance of 90% in terms of true positive rate.", "title": "" }, { "docid": "3f5c761e5c5dbfd5aa1d1d9af736e5fd", "text": "In this paper, a double L-slot microstrip patch antenna array using Coplanar waveguide feed for Wireless Local Area Network (WLAN) and Worldwide Interoperability for Microwave Access (WiMAX) frequency bands are presented. The proposed antenna is fabricated on Aluminum Nitride Ceramic substrate with dielectric constant 8.8 and thickness of 1.5mm. The key feature of this substrate is that it can withstand in high temperature. The return loss is about -31dB at the operating frequency of 3.6GHz with 50Ω input impedance. 
The basic parameters of the proposed antenna such as return loss, VSWR, and radiation pattern are simulated using Ansoft HFSS. Simulation results of antenna parameters of single patch and double patch antenna array are analyzed and presented.", "title": "" }, { "docid": "0bd720d912575c0810c65d04f6b1712b", "text": "Digital painters commonly use a tablet and stylus to drive software like Adobe Photoshop. A high quality stylus with 6 degrees of freedom (DOFs: 2D position, pressure, 2D tilt, and 1D rotation) coupled to a virtual brush simulation engine allows skilled users to produce expressive strokes in their own style. However, such devices are difficult for novices to control, and many people draw with less expensive (lower DOF) input devices. This paper presents a data-driven approach for synthesizing the 6D hand gesture data for users of low-quality input devices. Offline, we collect a library of strokes with 6D data created by trained artists. Online, given a query stroke as a series of 2D positions, we synthesize the 4D hand pose data at each sample based on samples from the library that locally match the query. This framework optionally can also modify the stroke trajectory to match characteristic shapes in the style of the library. Our algorithm outputs a 6D trajectory that can be fed into any virtual brush stroke engine to make expressive strokes for novices or users of limited hardware.", "title": "" }, { "docid": "b2032f8912fac19b18bc5a836c3536e9", "text": "Electroencephalographic measurements are commonly used in medical and research areas. This review article presents an introduction into EEG measurement. Its purpose is to help with orientation in EEG field and with building basic knowledge for performing EEG recordings. The article is divided into two parts. In the first part, background of the subject, a brief historical overview, and some EEG related research areas are given. The second part explains EEG recording.", "title": "" }, { "docid": "5e64e36e76f4c0577ae3608b6e715a1f", "text": "Deep learning has recently become very popular on account of its incredible success in many complex datadriven applications, including image classification and speech recognition. The database community has worked on data-driven applications for many years, and therefore should be playing a lead role in supporting this new wave. However, databases and deep learning are different in terms of both techniques and applications. In this paper, we discuss research problems at the intersection of the two fields. In particular, we discuss possible improvements for deep learning systems from a database perspective, and analyze database applications that may benefit from deep learning techniques.", "title": "" }, { "docid": "8a50b086b61e19481cc3dee78a785f09", "text": "A new approach to the online classification of streaming data is introduced in this paper. It is based on a self-developing (evolving) fuzzy-rule-based (FRB) classifier system of Takagi-Sugeno ( eTS) type. The proposed approach, called eClass (evolving class ifier), includes different architectures and online learning methods. The family of alternative architectures includes: 1) eClass0, with the classifier consequents representing class label and 2) the newly proposed method for regression over the features using a first-order eTS fuzzy classifier, eClass1. 
An important property of eClass is that it can start learning ldquofrom scratch.rdquo Not only do the fuzzy rules not need to be prespecified, but neither do the number of classes for eClass (the number may grow, with new class labels being added by the online learning process). In the event that an initial FRB exists, eClass can evolve/develop it further based on the newly arrived data. The proposed approach addresses the practical problems of the classification of streaming data (video, speech, sensory data generated from robotic, advanced industrial applications, financial and retail chain transactions, intruder detection, etc.). It has been successfully tested on a number of benchmark problems as well as on data from an intrusion detection data stream to produce a comparison with the established approaches. The results demonstrate that a flexible (with evolving structure) FRB classifier can be generated online from streaming data achieving high classification rates and using limited computational resources.", "title": "" }, { "docid": "7ba0a2631c104e80c43aba739567b248", "text": "We consider a stochastic bandit problem with infinitely many arms. In this setting, the learner has no chance of trying all the arms even once and has to dedicate its limited number of samples only to a certain number of arms. All previous algorithms for this setting were designed for minimizing the cumulative regret of the learner. In this paper, we propose an algorithm aiming at minimizing the simple regret. As in the cumulative regret setting of infinitely many armed bandits, the rate of the simple regret will depend on a parameter β characterizing the distribution of the near-optimal arms. We prove that depending on β, our algorithm is minimax optimal either up to a multiplicative constant or up to a log(n) factor. We also provide extensions to several important cases: when β is unknown, in a natural setting where the near-optimal arms have a small variance, and in the case of unknown time horizon.", "title": "" }, { "docid": "8f876345827e55e8ff241afa99c6bb70", "text": "Reef-building corals occur as a range of colour morphs because of varying types and concentrations of pigments within the host tissues, but little is known about their physiological or ecological significance. Here, we examined whether specific host pigments act as an alternative mechanism for photoacclimation in the coral holobiont. We used the coral Montipora monasteriata (Forskål 1775) as a case study because it occurs in multiple colour morphs (tan, blue, brown, green and red) within varying light-habitat distributions. We demonstrated that two of the non-fluorescent host pigments are responsive to changes in external irradiance, with some host pigments up-regulating in response to elevated irradiance. This appeared to facilitate the retention of antennal chlorophyll by endosymbionts and hence, photosynthetic capacity. Specifically, net P(max) Chl a(-1) correlated strongly with the concentration of an orange-absorbing non-fluorescent pigment (CP-580). This had major implications for the energetics of bleached blue-pigmented (CP-580) colonies that maintained net P(max) cm(-2) by increasing P(max) Chl a(-1). 
The data suggested that blue morphs can bleach, decreasing their symbiont populations by an order of magnitude without compromising symbiont or coral health.", "title": "" }, { "docid": "d01198e88f91a47a1777337d0db41939", "text": "Ultra low quiescent, wide output current range low-dropout regulators (LDO) are in high demand in portable applications to extend battery lives. This paper presents a 500 nA quiescent, 0 to 100 mA load, 3.5–7 V input to 3 V output LDO in a digital 0.35 μm 2P3M CMOS technology. The challenges in designing with nano-ampere of quiescent current are discussed, namely the leakage, the parasitics, and the excessive DC gain. CMOS super source follower voltage buffer and input excessive gain reduction are then proposed. The LDO is internally compensated using Ahuja method with a minimum phase margin of 55° across all load conditions. The maximum transient voltage variation is less than 150 and 75 mV when used with 1 and 10 μF external capacitor. Compared with existing work, this LDO achieves the best transient flgure-of-merit with close to best dynamic current efficiency (maximum-to-quiescent current ratio).", "title": "" }, { "docid": "6fd8226482617b0997640b8783ad2445", "text": "OBJECTIVES\nThis article presents a new tool that helps systematic reviewers to extract and compare implementation data across primary trials. Currently, systematic review guidance does not provide guidelines for the identification and extraction of data related to the implementation of the underlying interventions.\n\n\nSTUDY DESIGN AND SETTING\nA team of systematic reviewers used a multistaged consensus development approach to develop this tool. First, a systematic literature search on the implementation and synthesis of clinical trial evidence was performed. The team then met in a series of subcommittees to develop an initial draft index. Drafts were presented at several research conferences and circulated to methodological experts in various health-related disciplines for feedback. The team systematically recorded, discussed, and incorporated all feedback into further revisions. A penultimate draft was discussed at the 2010 Cochrane-Campbell Collaboration Colloquium to finalize its content.\n\n\nRESULTS\nThe Oxford Implementation Index provides a checklist of implementation data to extract from primary trials. Checklist items are organized into four domains: intervention design, actual delivery by trial practitioners, uptake of the intervention by participants, and contextual factors. Systematic reviewers piloting the index at the Cochrane-Campbell Colloquium reported that the index was helpful for the identification of implementation data.\n\n\nCONCLUSION\nThe Oxford Implementation Index provides a framework to help reviewers assess implementation data across trials. Reviewers can use this tool to identify implementation data, extract relevant information, and compare features of implementation across primary trials in a systematic review. The index is a work-in-progress, and future efforts will focus on refining the index, improving usability, and integrating the index with other guidance on systematic reviewing.", "title": "" }, { "docid": "318938c2dd173a511d03380826d31bd9", "text": "The theory and construction of the HP-1430A feed-through sampling head are reviewed, and a model for the sampling head is developed from dimensional and electrical measurements in conjunction with electromagnetic, electronic, and network theory. 
The model was used to predict the sampling-head step response needed for the deconvolution of true input waveforms. The dependence of the sampling-head step response on the sampling diode bias is investigated. Calculations based on the model predict step response transition durations of 27.5 to 30.5 ps for diode reverse bias values of -1.76 to -1.63 V.", "title": "" }, { "docid": "2276f5bd8866d54128bd1782a748eb43", "text": "8.5 Printing 304 8.5.1 Overview 304 8.5.2 Inks and subtractive color calculations 304 8.5.2.1 Density 305 8.5.3 Continuous tone printing 306 8.5.4 Halftoning 307 8.5.4.1 Traditional halftoning 307 8.5.5 Digital halftoning 308 8.5.5.1 Cluster dot dither 310 8.5.5.2 Bayer dither and void and cluster dither 310 8.5.5.3 Error diffusion 311 8.5.5.4 Color digital halftoning 312 8.5.6 Print characterization 313 8.5.6.1 Transduction: the tone reproduction curve 313 8.6", "title": "" }, { "docid": "93151277f8325a15c569d77dc973c1a8", "text": "A class of binary quasi-cyclic burst error-correcting codes based upon product codes is studied. An expression for the maximum burst error-correcting capability for each code in the class is given. In certain cases the codes reduce to Gilbert codes, which are cyclic. Often codes exist in the class which have the same block length and number of check bits as the Gilbert codes but correct longer bursts of errors than Gilbert codes. By shortening the codes, it is possible to design codes which achieve the Reiger bound.", "title": "" } ]
scidocsrr
9a55767aba9c03100f383feb17188a74
Isolated Swiss-Forward Three-Phase Rectifier With Resonant Reset
[ { "docid": "ee6461f83cee5fdf409a130d2cfb1839", "text": "This paper introduces a novel three-phase buck-type unity power factor rectifier appropriate for high power Electric Vehicle battery charging mains interfaces. The characteristics of the converter, named the Swiss Rectifier, including the principle of operation, modulation strategy, suitable control structure, and dimensioning equations are described in detail. Additionally, the proposed rectifier is compared to a conventional 6-switch buck-type ac-dc power conversion. According to the results, the Swiss Rectifier is the topology of choice for a buck-type PFC. Finally, the feasibility of the Swiss Rectifier concept for buck-type rectifier applications is demonstrated by means of a hardware prototype.", "title": "" } ]
[ { "docid": "fe8f31db9c3e8cbe9d69e146c40abb49", "text": "BACKGROUND\nRegular physical activity (PA) can be beneficial to pregnant women, however, many women do not adhere to current PA guidelines during the antenatal period. Patient and public involvement is essential when designing antenatal PA interventions in order to uncover the reasons for non-adherence and non-engagement with the behaviour, as well as determining what type of intervention would be acceptable. The aim of this research was to explore women's experiences of PA during a recent pregnancy, understand the barriers and determinants of antenatal PA and explore the acceptability of antenatal walking groups for further development.\n\n\nMETHODS\nSeven focus groups were undertaken with women who had given birth within the past five years. Focus groups were transcribed and analysed using a grounded theory approach. Relevant and related behaviour change techniques (BCTs), which could be applied to future interventions, were identified using the BCT taxonomy.\n\n\nRESULTS\nWomen's opinions and experiences of PA during pregnancy were categorised into biological/physical (including tiredness and morning sickness), psychological (fear of harm to baby and self-confidence) and social/environmental issues (including access to facilities). Although antenatal walking groups did not appear popular, women identified some factors which could encourage attendance (e.g. childcare provision) and some which could discourage attendance (e.g. walking being boring). It was clear that the personality of the walk leader would be extremely important in encouraging women to join a walking group and keep attending. Behaviour change technique categories identified as potential intervention components included social support and comparison of outcomes (e.g. considering pros and cons of behaviour).\n\n\nCONCLUSIONS\nWomen's experiences and views provided a range of considerations for future intervention development, including provision of childcare, involvement of a fun and engaging leader and a range of activities rather than just walking. These experiences and views relate closely to the Health Action Process Model which, along with BCTs, could be used to develop future interventions. The findings of this study emphasise the importance of involving the target population in intervention development and present the theoretical foundation for building an antenatal PA intervention to encourage women to be physically active throughout their pregnancies.", "title": "" }, { "docid": "f6ba46b72139f61cfb098656d71553ed", "text": "This paper introduces the Voice Conversion Octave Toolbox made available to the public as open source. The first version of the toolbox features tools for VTLN-based voice conversion supporting a variety of warping functions. The authors describe the implemented functionality and how to configure the included tools.", "title": "" }, { "docid": "d92f9a08b608f895f004e69c7893f2f0", "text": "Although research has determined that reactive oxygen species (ROS) function as signaling molecules in plant development, the molecular mechanism by which ROS regulate plant growth is not well known. An aba overly sensitive mutant, abo8-1, which is defective in a pentatricopeptide repeat (PPR) protein responsible for the splicing of NAD4 intron 3 in mitochondrial complex I, accumulates more ROS in root tips than the wild type, and the ROS accumulation is further enhanced by ABA treatment. 
The ABO8 mutation reduces root meristem activity, which can be enhanced by ABA treatment and reversibly recovered by addition of certain concentrations of the reducing agent GSH. As indicated by low ProDR5:GUS expression, auxin accumulation/signaling was reduced in abo8-1. We also found that ABA inhibits the expression of PLETHORA1 (PLT1) and PLT2, and that root growth is more sensitive to ABA in the plt1 and plt2 mutants than in the wild type. The expression of PLT1 and PLT2 is significantly reduced in the abo8-1 mutant. Overexpression of PLT2 in an inducible system can largely rescue root apical meristem (RAM)-defective phenotype of abo8-1 with and without ABA treatment. These results suggest that ABA-promoted ROS in the mitochondria of root tips are important retrograde signals that regulate root meristem activity by controlling auxin accumulation/signaling and PLT expression in Arabidopsis.", "title": "" }, { "docid": "bc272e837f1071fabcc7056134bae784", "text": "Parental vaccine hesitancy is a growing problem affecting the health of children and the larger population. This article describes the evolution of the vaccine hesitancy movement and the individual, vaccine-specific and societal factors contributing to this phenomenon. In addition, potential strategies to mitigate the rising tide of parent vaccine reluctance and refusal are discussed.", "title": "" }, { "docid": "f55c9ef1e60afd326bebbb619452fd97", "text": "With the flourish of the Web, online review is becoming a more and more useful and important information resource for people. As a result, automatic review mining and summarization has become a hot research topic recently. Different from traditional text summarization, review mining and summarization aims at extracting the features on which the reviewers express their opinions and determining whether the opinions are positive or negative. In this paper, we focus on a specific domain - movie review. A multi-knowledge based approach is proposed, which integrates WordNet, statistical analysis and movie knowledge. The experimental results show the effectiveness of the proposed approach in movie review mining and summarization.", "title": "" }, { "docid": "42b6c55e48f58e3e894de84519cb6feb", "text": "What social value do Likes on Facebook hold? This research examines people’s attitudes and behaviors related to receiving one-click feedback in social media. Likes and other kinds of lightweight affirmation serve as social cues of acceptance and maintain interpersonal relationships, but may mean different things to different people. Through surveys and de-identified, aggregated behavioral Facebook data, we find that in general, people care more about who Likes their posts than how many Likes they receive, desiring feedback most from close friends, romantic partners, and family members other than their parents. While most people do not feel strongly that receiving “enough” Likes is important, roughly two-thirds of posters regularly receive more than “enough.” We also note a “Like paradox,” a phenomenon in which people’s friends receive more Likes because their friends have more friends to provide those Likes. Individuals with lower levels of self-esteem and higher levels of self-monitoring are more likely to think that Likes are important and to feel bad if they do not receive “enough” Likes. 
The results inform product design and our understanding of how lightweight interactions shape our experiences online.", "title": "" }, { "docid": "48fffb441a5e7f304554e6bdef6b659e", "text": "The massive accumulation of genome-sequences in public databases promoted the proliferation of genome-level phylogenetic analyses in many areas of biological research. However, due to diverse evolutionary and genetic processes, many loci have undesirable properties for phylogenetic reconstruction. These, if undetected, can result in erroneous or biased estimates, particularly when estimating species trees from concatenated datasets. To deal with these problems, we developed GET_PHYLOMARKERS, a pipeline designed to identify high-quality markers to estimate robust genome phylogenies from the orthologous clusters, or the pan-genome matrix (PGM), computed by GET_HOMOLOGUES. In the first context, a set of sequential filters are applied to exclude recombinant alignments and those producing anomalous or poorly resolved trees. Multiple sequence alignments and maximum likelihood (ML) phylogenies are computed in parallel on multi-core computers. A ML species tree is estimated from the concatenated set of top-ranking alignments at the DNA or protein levels, using either FastTree or IQ-TREE (IQT). The latter is used by default due to its superior performance revealed in an extensive benchmark analysis. In addition, parsimony and ML phylogenies can be estimated from the PGM. We demonstrate the practical utility of the software by analyzing 170 Stenotrophomonas genome sequences available in RefSeq and 10 new complete genomes of Mexican environmental S. maltophilia complex (Smc) isolates reported herein. A combination of core-genome and PGM analyses was used to revise the molecular systematics of the genus. An unsupervised learning approach that uses a goodness of clustering statistic identified 20 groups within the Smc at a core-genome average nucleotide identity (cgANIb) of 95.9% that are perfectly consistent with strongly supported clades on the core- and pan-genome trees. In addition, we identified 16 misclassified RefSeq genome sequences, 14 of them labeled as S. maltophilia, demonstrating the broad utility of the software for phylogenomics and geno-taxonomic studies. The code, a detailed manual and tutorials are freely available for Linux/UNIX servers under the GNU GPLv3 license at https://github.com/vinuesa/get_phylomarkers. A docker image bundling GET_PHYLOMARKERS with GET_HOMOLOGUES is available at https://hub.docker.com/r/csicunam/get_homologues/, which can be easily run on any platform.", "title": "" }, { "docid": "67136c5bd9277e0637393e9a131d7b53", "text": "BACKGROUND\nSynchronous written conversations (or \"chats\") are becoming increasingly popular as Web-based mental health interventions. Therefore, it is of utmost importance to evaluate and summarize the quality of these interventions.\n\n\nOBJECTIVE\nThe aim of this study was to review the current evidence for the feasibility and effectiveness of online one-on-one mental health interventions that use text-based synchronous chat.\n\n\nMETHODS\nA systematic search was conducted of the databases relevant to this area of research (Medical Literature Analysis and Retrieval System Online [MEDLINE], PsycINFO, Central, Scopus, EMBASE, Web of Science, IEEE, and ACM). There were no specific selection criteria relating to the participant group. 
Studies were included if they reported interventions with individual text-based synchronous conversations (ie, chat or text messaging) and a psychological outcome measure.\n\n\nRESULTS\nA total of 24 articles were included in this review. Interventions included a wide range of mental health targets (eg, anxiety, distress, depression, eating disorders, and addiction) and intervention design. Overall, compared with the waitlist (WL) condition, studies showed significant and sustained improvements in mental health outcomes following synchronous text-based intervention, and post treatment improvement equivalent but not superior to treatment as usual (TAU) (eg, face-to-face and telephone counseling).\n\n\nCONCLUSIONS\nFeasibility studies indicate substantial innovation in this area of mental health intervention with studies utilizing trained volunteers and chatbot technologies to deliver interventions. While studies of efficacy show positive post-intervention gains, further research is needed to determine whether time requirements for this mode of intervention are feasible in clinical practice.", "title": "" }, { "docid": "8f0b7554ff0d9f6bf0d1cf8579dc2893", "text": "Recent advances in Convolutional Neural Networks (CNNs) have obtained promising results in difficult deep learning tasks. However, the success of a CNN depends on finding an architecture to fit a given problem. A hand-crafted architecture is a challenging, time-consuming process that requires expert knowledge and effort, due to a large number of architectural design choices. In this article, we present an efficient framework that automatically designs a high-performing CNN architecture for a given problem. In this framework, we introduce a new optimization objective function that combines the error rate and the information learnt by a set of feature maps using deconvolutional networks (deconvnet). The new objective function allows the hyperparameters of the CNN architecture to be optimized in a way that enhances the performance by guiding the CNN through better visualization of learnt features via deconvnet. The actual optimization of the objective function is carried out via the Nelder-Mead Method (NMM). Further, our new objective function results in much faster convergence towards a better architecture. The proposed framework has the ability to explore a CNN architecture’s numerous design choices in an efficient way and also allows effective, distributed execution and synchronization via web services. Empirically, we demonstrate that the CNN architecture designed with our approach outperforms several existing approaches in terms of its error rate. Our results are also competitive with state-of-the-art results on the MNIST dataset and perform reasonably against the state-of-the-art results on CIFAR-10 and CIFAR-100 datasets. Our approach has a significant role in increasing the depth, reducing the size of strides, and constraining some convolutional layers not followed by pooling layers in order to find a CNN architecture that produces a high recognition performance.", "title": "" }, { "docid": "ccf7390abc2924e4d2136a2b82639115", "text": "The proposition of increased innovation in network applications and reduced cost for network operators has won over the networking world to the vision of software-defined networking (SDN). With the excitement of holistic visibility across the network and the ability to program network devices, developers have rushed to present a range of new SDN-compliant hardware, software, and services. 
However, amidst this frenzy of activity, one key element has only recently entered the debate: Network Security. In this paper, security in SDN is surveyed presenting both the research community and industry advances in this area. The challenges to securing the network from the persistent attacker are discussed, and the holistic approach to the security architecture that is required for SDN is described. Future research directions that will be key to providing network security in SDN are identified.", "title": "" }, { "docid": "e34815efa68cb1b7a269e436c838253d", "text": "A new mobile robot prototype for inspection of overhead transmission lines is proposed. The mobile platform is composed of 3 arms. And there is a motorized rubber wheel on the end of each arm. On the two end arms, a gripper is designed to clamp firmly onto the conductors from below to secure the robot. Each arm has a motor to achieve 2 degrees of freedom which is realized by moving along a curve. It could roll over some obstacles (compression splices, vibration dampers, etc). And the robot could clear other types of obstacles (spacers, suspension clamps, etc).", "title": "" }, { "docid": "e45c921effd9b5026f34ff738b63c48c", "text": "We consider the problem of weakly supervised learning for object localization. Given a collection of images with image-level annotations indicating the presence/absence of an object, our goal is to localize the object in each image. We propose a neural network architecture called the attention network for this problem. Given a set of candidate regions in an image, the attention network first computes an attention score on each candidate region in the image. Then these candidate regions are combined together with their attention scores to form a whole-image feature vector. This feature vector is used for classifying the image. The object localization is implicitly achieved via the attention scores on candidate regions. We demonstrate that our approach achieves superior performance on several benchmark datasets.", "title": "" }, { "docid": "db2553268fc3ccaddc3ec7077514655c", "text": "Aspect extraction is a task to abstract the common properties of objects from corpora discussing them, such as reviews of products. Recent work on aspect extraction is leveraging the hierarchical relationship between products and their categories. However, such effort focuses on the aspects of child categories but ignores those from parent categories. Hence, we propose an LDA-based generative topic model inducing the two-layer categorical information (CAT-LDA), to balance the aspects of both a parent category and its child categories. Our hypothesis is that child categories inherit aspects from parent categories, controlled by the hierarchy between them. Experimental results on 5 categories of Amazon.com products show that both common aspects of parent category and the individual aspects of subcategories can be extracted to align well with the common sense. We further evaluate the manually extracted aspects of 16 products, resulting in an average hit rate of 79.10%.", "title": "" }, { "docid": "6e07085f81dc4f6892e0f2aba7a8dcdd", "text": "With the rapid growth in the number of spiraling network users and the increase in the use of communication technologies, the multi-server environment is the most common environment for widely deployed applications. Reddy et al. 
recently showed that Lu et al.'s biometric-based authentication scheme for multi-server environment was insecure, and presented a new authentication and key-agreement scheme for the multi-server. Reddy et al. continued to assert that their scheme was more secure and practical. After a careful analysis, however, their scheme still has vulnerabilities to well-known attacks. In this paper, the vulnerabilities of Reddy et al.'s scheme such as the privileged insider and user impersonation attacks are demonstrated. A proposal is then presented of a new biometric-based user authentication scheme for a key agreement and multi-server environment. Lastly, the authors demonstrate that the proposed scheme is more secure using widely accepted AVISPA (Automated Validation of Internet Security Protocols and Applications) tool, and that it serves to satisfy all of the required security properties.", "title": "" }, { "docid": "b5b7bef8ec2d38bb2821dc380a3a49bf", "text": "Maternal uniparental disomy (UPD) 7 is found in approximately 5% of patients with Silver-Russell syndrome. By a descriptive and comparative clinical analysis of all published cases (more than 60 to date) their phenotype is updated and compared with the clinical findings in patients with Sliver-Russell syndrome (SRS) of either unexplained etiology or epimutations of the imprinting center region 1 (ICR1) on 11p15. The higher frequency of relative macrocephaly and high forehead/frontal bossing makes the face of patients with epimutations of the ICR1 on 11p15 more distinctive than the face of cases with SRS of unexplained etiology or maternal UPD 7. Because of the distinct micrognathia in the latter, their triangular facial gestalt is more pronounced than in the other groups. However, solely by clinical findings patients with maternal UPD 7 cannot be discriminated unambiguously from patients with epimutations of the ICR1 on 11p15 or SRS of unexplained etiology. Therefore, both loss of methylation of the ICR1 on 11p15 and maternal UPD 7 should be investigated for if SRS is suspected.", "title": "" }, { "docid": "82779e315cf982b56ed14396603ae251", "text": "The selection of drain current, inversion coefficient, and channel length for each MOS device in an analog circuit results in significant tradeoffs in performance. The selection of inversion coefficient, which is a numerical measure of MOS inversion, enables design freely in weak, moderate, and strong inversion and facilitates optimum design. Here, channel width required for layout is easily found and implicitly considered in performance expressions. This paper gives hand expressions motivated by the EKV MOS model and measured data for MOS device performance, inclusive of velocity saturation and other small-geometry effects. A simple spreadsheet tool is then used to predict MOS device performance and map this into complete circuit performance. Tradeoffs and optimization of performance are illustrated by the design of three, 0.18-mum CMOS operational transconductance amplifiers optimized for DC, balanced, and AC performance. Measured performance shows significant tradeoffs in voltage gain, output resistance, transconductance bandwidth, input-referred flicker noise and offset voltage, and layout area.", "title": "" }, { "docid": "b49a8894277278256b6c1430bb4e4a91", "text": "In the past years, several support vector machines (SVM) novelty detection approaches have been applied on the network intrusion detection field. 
The main advantage of these approaches is that they can characterize normal traffic even when trained with datasets containing not only normal traffic but also a number of attacks. Unfortunately, these algorithms seem to be accurate only when the normal traffic vastly outnumbers the number of attacks present in the dataset. A situation which can not be always hold This work presents an approach for autonomous labeling of normal traffic as a way of dealing with situations where class distribution does not present the imbalance required for SVM algorithms. In this case, the autonomous labeling process is made by SNORT, a misuse-based intrusion detection system. Experiments conducted on the 1998 DARPA dataset show that the use of the proposed autonomous labeling approach not only outperforms existing SVM alternatives but also, under some attack distributions, obtains improvements over SNORT itself.", "title": "" }, { "docid": "4d5e8e1c8942256088f1c5ef0e122c9f", "text": "Cybercrime and cybercriminal activities continue to impact communities as the steady growth of electronic information systems enables more online business. The collective views of sixty-six computer users and organizations, that have an exposure to cybercrime, were analyzed using concept analysis and mapping techniques in order to identify the major issues and areas of concern, and provide useful advice. The findings of the study show that a range of computing stakeholders have genuine concerns about the frequency of information security breaches and malware incursions (including the emergence of dangerous security and detection avoiding malware), the need for e-security awareness and education, the roles played by law and law enforcement, and the installation of current security software and systems. While not necessarily criminal in nature, some stakeholders also expressed deep concerns over the use of computers for cyberbullying, particularly where younger and school aged users are involved. The government’s future directions and recommendations for the technical and administrative management of cybercriminal activity were generally observed to be consistent with stakeholder concerns, with some users also taking practical steps to reduce cybercrime risks. a 2011 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "b23e141ca479abecab2b00f13141b9b3", "text": "The prediction of movement time in human-computer interfaces as undertaken using Fitts' law is reviewed. Techniques for model building are summarized and three refinements to improve the theoretical and empirical accuracy of the law are presented. Refinements include (1) the Shannon formulation for the index of task difficulty, (2) new interpretations of \"target width\" for twoand three-dimensional tasks, and (3) a technique for normalizing error rates across experimental factors . Finally, a detailed application example is developed showing the potential of Fitts' law to predict and compare the performance of user interfaces before designs are finalized.", "title": "" }, { "docid": "c034cb6e72bc023a60b54d0f8316045a", "text": "This thesis presents the design, implementation, and valid ation of a system that enables a micro air vehicle to autonomously explore and map unstruct u ed and unknown indoor environments. Such a vehicle would be of considerable use in many real-world applications such as search and rescue, civil engineering inspection, an d a host of military tasks where it is dangerous or difficult to send people. 
While mapping and exploration capabilities are common for ground vehicles today, air vehicles seeking to achieve these capabilities face unique challenges. While there has been recent progress toward sensing, control, and navigation suites for GPS-denied flight, there have been few demonstrations of stable, goal-directed flight in real environments. The main focus of this research is the development of real-time state estimation techniques that allow our quadrotor helicopter to fly autonomously in indoor, GPS-denied environments. Accomplishing this feat required the development of a large integrated system that brought together many components into a cohesive package. As such, the primary contribution is the development of the complete working system. I show experimental results that illustrate the MAV’s ability to navigate accurately in unknown environments, and demonstrate that our algorithms enable the MAV to operate autonomously in a variety of indoor environments. Thesis Supervisor: Nicholas Roy Title: Associate Professor of Aeronautics and Astronautics", "title": "" } ]
scidocsrr
5dfc521aa0b4e8ca3fe63d828d91068d
Parallel Concatenated Trellis Coded Modulation
[ { "docid": "5ef37c0620e087d3552499e2b9b4fc84", "text": "A parallel concatenated coding scheme consists of two simple constituent systematic encoders linked by an interleaver. The input bits to the first encoder are scrambled by the interleaver before entering the second encoder. The codeword of the parallel concatenated code consists of the input bits to the first encoder followed by the parity check bits of both encoders. This construction can be generalized to any number of constituent codes. Parallel concatenated schemes employing two convolutional codes as constituent codes, in connection with an iterative decoding algorithm of complexity comparable to that of the constituent codes, have been recently shown to yield remarkable coding gains close to theoretical limits. They have been named, and are known as, “turbo codes.” We propose a method to evaluate an upper bound to the bit error probability of a parallel concatenated coding scheme averaged over all interleavers of a given length. The analytical bounding technique is then used to shed some light on some crucial questions which have been floating around in the communications community since the proposal of turbo codes.", "title": "" } ]
[ { "docid": "889c8754c97db758b474a6f140b39911", "text": "Herbal toothpaste Salvadora with comprehensive effective materials for dental health ranging from antibacterial, detergent and whitening properties including benzyl isothiocyanate, alkaloids, and anions such as thiocyanate, sulfate, and nitrate with potential antibacterial feature against oral microbial flora, silica and chloride for oral disinfection and bleaching the tooth, fluoride to strengthen tooth enamel, and saponin with appropriate detergent, and resin which protects tooth enamel by placing on it and is aggregated in Salvadora has been formulated. The paste is also from other herbs extract including valerian and chamomile. Current toothpaste has antibacterial, anti-plaque, anti-tartar and whitening, and wood extract of the toothbrush strengthens the tooth and enamel, and prevents the cancellation of enamel.From the other side, resin present in toothbrush wood creates a proper covering on tooth enamel and protects it against decay and benzyl isothiocyanate and also alkaloids present in miswak wood gives Salvadora toothpaste considerable antibacterial and bactericidal effects. Anti-inflammatory effects of the toothpaste are for apigenin and alpha bisabolol available in chamomile extract and seskuiterpen components including valeric acid with sedating features give the paste sedating and calming effect to oral tissues.", "title": "" }, { "docid": "0aab0c0fa6a1b0f283478b390dece614", "text": "Hydrokinetic turbines can provide a source of electricity for remote areas located near a river or stream. The objective of this paper is to describe the design, simulation, build, and testing of a novel hydrokinetic turbine. The main components of the system are a permanent magnet synchronous generator (PMSG), a machined H-Darrieus rotor, an embedded controls system, and a cataraft. The design and construction of this device was conducted at the Oregon Institute of Technology in Wilsonville, Oregon.", "title": "" }, { "docid": "8a564e77710c118e4de86be643b061a6", "text": "SOAR is a cognitive architecture named from state, operator and result, which is adopted to portray the drivers’ guidance compliance behavior on variable message sign VMS in this paper. VMS represents traffic conditions to drivers by three colors: red, yellow, and green. Based on the multiagent platform, SOAR is introduced to design the agent with the detailed description of the working memory, long-term memory, decision cycle, and learning mechanism. With the fixed decision cycle, agent transforms state through four kinds of operators, including choosing route directly, changing the driving goal, changing the temper of driver, and changing the road condition of prediction. The agent learns from the process of state transformation by chunking and reinforcement learning. Finally, computerized simulation program is used to study the guidance compliance behavior. Experiments are simulated many times under given simulation network and conditions. The result, including the comparison between guidance and no guidance, the state transition times, and average chunking times are analyzed to further study the laws of guidance compliance and learning mechanism.", "title": "" }, { "docid": "f6669d0b53dd0ca789219874d35bf14e", "text": "Saliva in the mouth is a biofluid produced mainly by three pairs of major salivary glands--the submandibular, parotid and sublingual glands--along with secretions from many minor submucosal salivary glands. 
Salivary gland secretion is a nerve-mediated reflex and the volume of saliva secreted is dependent on the intensity and type of taste and on chemosensory, masticatory or tactile stimulation. Long periods of low (resting or unstimulated) flow are broken by short periods of high flow, which is stimulated by taste and mastication. The nerve-mediated salivary reflex is modulated by nerve signals from other centers in the central nervous system, which is most obvious as hyposalivation at times of anxiety. An example of other neurohormonal influences on the salivary reflex is the circadian rhythm, which affects salivary flow and ionic composition. Cholinergic parasympathetic and adrenergic sympathetic autonomic nerves evoke salivary secretion, signaling through muscarinic M3 and adrenoceptors on salivary acinar cells and leading to secretion of fluid and salivary proteins. Saliva gland acinar cells are chloride and sodium secreting, and the isotonic fluid produced is rendered hypotonic by salivary gland duct cells as it flows to the mouth. The major proteins present in saliva are secreted by salivary glands, creating viscoelasticity and enabling the coating of oral surfaces with saliva. Salivary films are essential for maintaining oral health and regulating the oral microbiome. Saliva in the mouth contains a range of validated and potential disease biomarkers derived from epithelial cells, neutrophils, the microbiome, gingival crevicular fluid and serum. For example, cortisol levels are used in the assessment of stress, matrix metalloproteinases-8 and -9 appear to be promising markers of caries and periodontal disease, and a panel of mRNA and proteins has been proposed as a marker of oral squamous cell carcinoma. Understanding the mechanisms by which components enter saliva is an important aspect of validating their use as biomarkers of health and disease.", "title": "" }, { "docid": "4030f6e47e7e1519f69ec9335f4f7cf6", "text": "In this work, we study the problem of scheduling parallelizable jobs online with an objective of minimizing average flow time. Each parallel job is modeled as a DAG where each node is a sequential task and each edge represents dependence between tasks. Previous work has focused on a model of parallelizability known as the arbitrary speed-up curves setting where a scalable algorithm is known. However, the DAG model is more widely used by practitioners, since many jobs generated from parallel programming languages and libraries can be represented in this model. However, little is known for this model in the online setting with multiple jobs. The DAG model and the speed-up curve models are incomparable and algorithmic results from one do not immediately imply results for the other. Previous work has left open the question of whether an online algorithm can be O(1)-competitive with O(1)-speed for average flow time in the DAG setting. In this work, we answer this question positively by giving a scalable algorithm which is (1 + ǫ)-speed O( 1 ǫ )-competitive for any ǫ > 0. We further introduce the first greedy algorithm for scheduling parallelizable jobs — our algorithm is a generalization of the shortest jobs first algorithm. Greedy algorithms are among the most useful in practice due to their simplicity. We show that this algorithm is (2 + ǫ)-speed O( 1 ǫ )competitive for any ǫ > 0. ∗Department of Computer Science and Engineering, Washington University in St. Louis, 1 Brookings Drive, St. Louis, MO 63130. {kunal, li.jing, kefulu, bmoseley}@wustl.edu. B. Moseley and K. 
Lu work was supported in part by a Google Research Award and a Yahoo Research Award. K. Agrawal and J. Li were supported in part by NSF grants CCF-1150036 and CCF-1340571.", "title": "" }, { "docid": "13748d365584ef2e680affb67cfcc882", "text": "In this paper, we discuss the development of cost effective, wireless, and wearable vibrotactile haptic device for stiffness perception during an interaction with virtual objects. Our experimental setup consists of haptic device with five vibrotactile actuators, virtual reality environment tailored in Unity 3D integrating the Oculus Rift Head Mounted Display (HMD) and the Leap Motion controller. The virtual environment is able to capture touch inputs from users. Interaction forces are then rendered at 500 Hz and fed back to the wearable setup stimulating fingertips with ERM vibrotactile actuators. Amplitude and frequency of vibrations are modulated proportionally to the interaction force to simulate the stiffness of a virtual object. A quantitative and qualitative study is done to compare the discrimination of stiffness on virtual linear spring in three sensory modalities: visual only feedback, tactile only feedback, and their combination. A common psychophysics method called the Two Alternative Forced Choice (2AFC) approach is used for quantitative analysis using Just Noticeable Difference (JND) and Weber Fractions (WF). According to the psychometric experiment result, average Weber fraction values of 0.39 for visual only feedback was improved to 0.25 by adding the tactile feedback.", "title": "" }, { "docid": "a40fab738589a9efbf3f87b6c7668601", "text": "AUTOSAR supports the re-use of software and hardware components of automotive electronic systems. Therefore, amongst other things, AUTOSAR defines a software architecture that is used to decouple software components from hardware devices. This paper gives an overview about the different layers of that architecture. In addition, the upper most layer that concerns the application specific part of automotive electronic systems is presented.", "title": "" }, { "docid": "c7a32821699ebafadb4c59e99fb3aa9e", "text": "According to the trend towards high-resolution CMOS image sensors, pixel sizes are continuously shrinking, towards and below 1.0μm, and sizes are now reaching a technological limit to meet required SNR performance [1-2]. SNR at low-light conditions, which is a key performance metric, is determined by the sensitivity and crosstalk in pixels. To improve sensitivity, pixel technology has migrated from frontside illumination (FSI) to backside illumiation (BSI) as pixel size shrinks down. In BSI technology, it is very difficult to further increase the sensitivity in a pixel of near-1.0μm size because there are no structural obstacles for incident light from micro-lens to photodiode. Therefore the only way to improve low-light SNR is to reduce crosstalk, which makes the non-diagonal elements of the color-correction matrix (CCM) close to zero and thus reduces color noise [3]. The best way to improve crosstalk is to introduce a complete physical isolation between neighboring pixels, e.g., using deep-trench isolation (DTI). So far, a few attempts using DTI have been made to suppress silicon crosstalk. A backside DTI in as small as 1.12μm-pixel, which is formed in the BSI process, is reported in [4], but it is just an intermediate step in the DTI-related technology because it cannot completely prevent silicon crosstalk, especially for long wavelengths of light. 
On the other hand, front-side DTIs for FSI pixels [5] and BSI pixels [6] are reported. In [5], however, DTI is present not only along the periphery of each pixel, but also invades into the pixel so that it is inefficient in terms of gathering incident light and providing sufficient amount of photodiode area. In [6], the pixel size is as large as 2.0μm and it is hard to scale down with this technology for near 1.0μm pitch because DTI width imposes a critical limit on the sufficient amount of photodiode area for full-well capacity. Thus, a new technological advance is necessary to realize the ideal front DTI in a small size pixel near 1.0μm.", "title": "" }, { "docid": "9841b00b0fe5b9c7112a2e98553b61b0", "text": "The market of converters connected to transmission lines continues to require insulated gate bipolar transistors (IGBTs) with higher blocking voltages to reduce the number of IGBTs connected in series in high-voltage converters. To cope with these demands, semiconductor manufactures have developed several technologies. Nowadays, IGBTs up to 6.5-kV blocking voltage and IEGTs up to 4.5-kV blocking voltage are on the market. However, these IGBTs and injection-enhanced gate transistors (IEGTs) still have very high switching losses compared to low-voltage devices, leading to a realistic switching frequency of up to 1 kHz. To reduce switching losses in high-power applications, the auxiliary resonant commutated pole inverter (ARCPI) is a possible alternative. In this paper, switching losses and on-state voltages of NPT-IGBT (3.3 kV-1200 A), FS-IGBT (6.5 kV-600 A), SPT-IGBT (2.5 kV-1200 A, 3.3 kV-1200 A and 6.5 kV-600 A) and IEGT (3.3 kV-1200 A) are measured under hard-switching and zero-voltage switching (ZVS) conditions. The aim of this selection is to evaluate the impact of ZVS on various devices of the same voltage ranges. In addition, the difference in ZVS effects among the devices with various blocking voltage levels is evaluated.", "title": "" }, { "docid": "be96da6d7a1e8348366b497f160c674e", "text": "The large availability of biomedical data brings opportunities and challenges to health care. Representation of medical concepts has been well studied in many applications, such as medical informatics, cohort selection, risk prediction, and health care quality measurement. In this paper, we propose an efficient multichannel convolutional neural network (CNN) model based on multi-granularity embeddings of medical concepts named MG-CNN, to examine the effect of individual patient characteristics including demographic factors and medical comorbidities on total hospital costs and length of stay (LOS) by using the Hospital Quality Monitoring System (HQMS) data. The proposed embedding method leverages prior medical hierarchical ontology and improves the quality of embedding for rare medical concepts. The embedded vectors are further visualized by the t-Distributed Stochastic Neighbor Embedding (t-SNE) technique to demonstrate the effectiveness of grouping related medical concepts. Experimental results demonstrate that our MG-CNN model outperforms traditional regression methods based on the one-hot representation of medical concepts, especially in the outcome prediction tasks for patients with low-frequency medical events. 
In summary, MG-CNN model is capable of mining potential knowledge from the clinical data and will be broadly applicable in medical research and inform clinical decisions.", "title": "" }, { "docid": "7442f94af36f6d317291da814e7f3676", "text": "Muscles are required to perform or absorb mechanical work under different conditions. However the ability of a muscle to do this depends on the interaction between its contractile components and its elastic components. In the present study we have used ultrasound to examine the length changes of the gastrocnemius medialis muscle fascicle along with those of the elastic Achilles tendon during locomotion under different incline conditions. Six male participants walked (at 5 km h(-1)) on a treadmill at grades of -10%, 0% and 10% and ran (at 10 km h(-1)) at grades of 0% and 10%, whilst simultaneous ultrasound, electromyography and kinematics were recorded. In both walking and running, force was developed isometrically; however, increases in incline increased the muscle fascicle length at which force was developed. Force was developed at shorter muscle lengths for running when compared to walking. Substantial levels of Achilles tendon strain were recorded in both walking and running conditions, which allowed the muscle fascicles to act at speeds more favourable for power production. In all conditions, positive work was performed by the muscle. The measurements suggest that there is very little change in the function of the muscle fascicles at different slopes or speeds, despite changes in the required external work. This may be a consequence of the role of this biarticular muscle or of the load sharing between the other muscles of the triceps surae.", "title": "" }, { "docid": "33126812301dfc04b475ecbc9c8ae422", "text": "From fishtail to princess braids, these intricately woven structures define an important and popular class of hairstyle, frequently used for digital characters in computer graphics. In addition to the challenges created by the infinite range of styles, existing modeling and capture techniques are particularly constrained by the geometric and topological complexities. We propose a data-driven method to automatically reconstruct braided hairstyles from input data obtained from a single consumer RGB-D camera. Our approach covers the large variation of repetitive braid structures using a family of compact procedural braid models. From these models, we produce a database of braid patches and use a robust random sampling approach for data fitting. We then recover the input braid structures using a multi-label optimization algorithm and synthesize the intertwining hair strands of the braids. We demonstrate that a minimal capture equipment is sufficient to effectively capture a wide range of complex braids with distinct shapes and structures.", "title": "" }, { "docid": "6cf048863ed227ea7d2188ec6b8ee107", "text": "Lane keeping is an important feature for self-driving cars. This paper presents an end-to-end learning approach to obtain the proper steering angle to maintain the car in the lane. The convolutional neural network (CNN) model takes raw image frames as input and outputs the steering angles accordingly. The model is trained and evaluated using the comma.ai dataset, which contains the front view image frames and the steering angle data captured when driving on the road. 
Unlike the traditional approach that manually decomposes the autonomous driving problem into technical components such as lane detection, path planning and steering control, the end-to-end model can directly steer the vehicle from the front view camera data after training. It learns how to keep in lane from human driving data. Further discussion of this end-to-end approach and its limitation are also provided.", "title": "" }, { "docid": "333645d1c405ae51aafe2b236c8fa3fd", "text": "Proposes a new method of personal recognition based on footprints. In this method, an input pair of raw footprints is normalized, both in direction and in position for robustness image-matching between the input pair of footprints and the pair of registered footprints. In addition to the Euclidean distance between them, the geometric information of the input footprint is used prior to the normalization, i.e., directional and positional information. In the experiment, the pressure distribution of the footprint was measured with a pressure-sensing mat. Ten volunteers contributed footprints for testing the proposed method. The recognition rate was 30.45% without any normalization (i.e., raw image), and 85.00% with the authors' method.", "title": "" }, { "docid": "c117bb1f7a25c44cbd0d75b7376022f6", "text": "Data noise is present in many machine learning problems domains, some of these are well studied but others have received less attention. In this paper we propose an algorithm for constructing a kernel Fisher discriminant (KFD) from training examples withnoisy labels. The approach allows to associate with each example a probability of the label being flipped. We utilise an expectation maximization (EM) algorithm for updating the probabilities. The E-step uses class conditional probabilities estimated as a by-product of the KFD algorithm. The M-step updates the flip probabilities and determines the parameters of the discriminant. We demonstrate the feasibility of the approach on two real-world data-sets.", "title": "" }, { "docid": "f97086d856ebb2f1c5e4167f725b5890", "text": "In this paper, an ac-linked hybrid electrical energy system comprising of photo voltaic (PV) and fuel cell (FC) with electrolyzer for standalone applications is proposed. PV is the primary power source of the system, and an FC-electrolyzer combination is used as a backup and as long-term storage system. A Fuzzy Logic controller is developed for the maximum power point tracking for the PV system. A simple power management strategy is designed for the proposed system to manage power flows among the different energy sources. A simulation model for the hybrid energy has been developed using MATLAB/Simulink.", "title": "" }, { "docid": "1bfab561c8391dad6f0493fa7614feba", "text": "Submission instructions: You should submit your answers via GradeScope and your code via Snap submission site. Submitting answers: Prepare answers to your homework into a single PDF file and submit it via http://gradescope.com. Make sure that answer to each question is on a separate page. This means you should submit a 14-page PDF (1 page for the cover sheet, 4 pages for the answers to question 1, 3 pages for answers to question 2, and 6 pages for question 3). On top of each page write the number of the question you are answering. Please find the cover sheet and the recommended templates located here: Not including the cover sheet in your submission will result in a 2-point penalty. Put all the code for a single question into a single file and upload it. 
Questions We strongly encourage you to use Snap.py for Python. However, you can use any other graph analysis tool or package you want (SNAP for C++, NetworkX for Python, JUNG for Java, etc.). A question that occupied sociologists and economists as early as the 1900's is how do innovations (e.g. ideas, products, technologies, behaviors) diffuse (spread) within a society. One of the prominent researchers in the field is Professor Mark Granovetter who among other contributions introduced along with Thomas Schelling threshold models in sociology. In Granovetter's model, there is a population of individuals (mob) and for simplicity two behaviours (riot or not riot). • Threshold model: each individual i has a threshold t i that determines her behavior in the following way. If there are at least t i individuals that are rioting, then she will join the riot, otherwise she stays inactive. Here, it is implicitly assumed that each individual has full knowledge of the behavior of all other individuals in the group. Nodes with small threshold are called innovators (early adopters) and nodes with large threshold are called laggards (late adopters). Granovetter's threshold model has been successful in explain classical empirical adoption curves by relating them to thresholds in", "title": "" }, { "docid": "5e8fbfec1ff5bf432dbaadaf13c9ca75", "text": "Multiple studies have illustrated the potential for dramatic societal, environmental and economic benefits from significant penetration of autonomous driving. However, all the current approaches to autonomous driving require the automotive manufacturers to shoulder the primary responsibility and liability associated with replacing human perception and decision making with automation, potentially slowing the penetration of autonomous vehicles, and consequently slowing the realization of the societal benefits of autonomous vehicles. We propose here a new approach to autonomous driving that will re-balance the responsibility and liabilities associated with autonomous driving between traditional automotive manufacturers, private infrastructure players, and third-party players. Our proposed distributed intelligence architecture leverages the significant advancements in connectivity and edge computing in the recent decades to partition the driving functions between the vehicle, edge computers on the road side, and specialized third-party computers that reside in the vehicle. Infrastructure becomes a critical enabler for autonomy. With this Infrastructure Enabled Autonomy (IEA) concept, the traditional automotive manufacturers will only need to shoulder responsibility and liability comparable to what they already do today, and the infrastructure and third-party players will share the added responsibility and liabilities associated with autonomous functionalities. We propose a Bayesian Network Model based framework for assessing the risk benefits of such a distributed intelligence architecture. An additional benefit of the proposed architecture is that it enables “autonomy as a service” while still allowing for private ownership of automobiles.", "title": "" }, { "docid": "648cc09e715d3a5bdc84a908f96c95d2", "text": "With the advent of battery-powered portable devices and the mandatory adoptions of power factor correction (PFC), non-inverting buck-boost converter is attracting numerous attentions. Conventional two-switch or four-switch non-inverting buck-boost converters choose their operation modes by measuring input and output voltage magnitudes. 
This can cause higher output voltage transients when input and output are close to each other. For the mode selection, the comparison of input and output voltage magnitudes is not enough due to the voltage drops raised by the parasitic components. In addition, the difference in the minimum and maximum effective duty cycle between controller output and switching device yields the discontinuity at the instant of mode change. Moreover, the different properties of output voltage versus a given duty cycle of buck and boost operating modes contribute to the output voltage transients. In this paper, the effect of the discontinuity due to the effective duty cycle derived from device switching time at the mode change is analyzed. A technique to compensate the output voltage transient due to this discontinuity is proposed. In order to attain additional mitigation of output transients and linear input/output voltage characteristic in buck and boost modes, the linearization of DC-gain of large signal model in boost operation is analyzed as well. Analytical, simulation, and experimental results are presented to validate the proposed theory.", "title": "" }, { "docid": "a45dbfbea6ff33d920781c07dac0442b", "text": "Context-aware intelligent systems employ implicit inputs, and make decisions based on complex rules and machine learning models that are rarely clear to users. Such lack of system intelligibility can lead to loss of user trust, satisfaction and acceptance of these systems. However, automatically providing explanations about a system's decision process can help mitigate this problem. In this paper we present results from a controlled study with over 200 participants in which the effectiveness of different types of explanations was examined. Participants were shown examples of a system's operation along with various automatically generated explanations, and then tested on their understanding of the system. We show, for example, that explanations describing why the system behaved a certain way resulted in better understanding and stronger feelings of trust. Explanations describing why the system did not behave a certain way, resulted in lower understanding yet adequate performance. We discuss implications for the use of our findings in real-world context-aware applications.", "title": "" } ]
scidocsrr
df6567247f9e63497797c4b6703b9f8b
Task Scheduling and Server Provisioning for Energy-Efficient Cloud-Computing Data Centers
[ { "docid": "95c41c6f901685490c912a2630c04345", "text": "Network-based cloud computing is rapidly expanding as an alternative to conventional office-based computing. As cloud computing becomes more widespread, the energy consumption of the network and computing resources that underpin the cloud will grow. This is happening at a time when there is increasing attention being paid to the need to manage energy consumption across the entire information and communications technology (ICT) sector. While data center energy use has received much attention recently, there has been less attention paid to the energy consumption of the transmission and switching networks that are key to connecting users to the cloud. In this paper, we present an analysis of energy consumption in cloud computing. The analysis considers both public and private clouds, and includes energy consumption in switching and transmission as well as data processing and data storage. We show that energy consumption in transport and switching can be a significant percentage of total energy consumption in cloud computing. Cloud computing can enable more energy-efficient use of computing power, especially when the computing tasks are of low intensity or infrequent. However, under some circumstances cloud computing can consume more energy than conventional computing where each user performs all computing on their own personal computer (PC).", "title": "" } ]
[ { "docid": "cf14e5e501cc4e5e3e97561c4932ae8f", "text": "Plug-and-play information technology (IT) infrastructure has been expanding very rapidly in recent years. With the advent of cloud computing, many ecosystem and business paradigms are encountering potential changes and may be able to eliminate their IT infrastructure maintenance processes. Real-time performance and high availability requirements have induced telecom networks to adopt the new concepts of the cloud model: software-defined networking (SDN) and network function virtualization (NFV). NFV introduces and deploys new network functions in an open and standardized IT environment, while SDN aims to transform the way networks function. SDN and NFV are complementary technologies; they do not depend on each other. However, both concepts can be merged and have the potential to mitigate the challenges of legacy networks. In this paper, our aim is to describe the benefits of using SDN in a multitude of environments such as in data centers, data center networks, and Network as Service offerings. We also present the various challenges facing SDN, from scalability to reliability and security concerns, and discuss existing solutions to these challenges. Keywords—Software-Defined Networking, OpenFlow, Datacenters, Network as a Service, Network Function Virtualization.", "title": "" }, { "docid": "3ff82fc754526e7a0255959e4b3f6301", "text": "We propose a novel statistical analysis method for functional magnetic resonance imaging (fMRI) to overcome the drawbacks of conventional data-driven methods such as the independent component analysis (ICA). Although ICA has been broadly applied to fMRI due to its capacity to separate spatially or temporally independent components, the assumption of independence has been challenged by recent studies showing that ICA does not guarantee independence of simultaneously occurring distinct activity patterns in the brain. Instead, sparsity of the signal has been shown to be more promising. This coincides with biological findings such as sparse coding in V1 simple cells, electrophysiological experiment results in the human medial temporal lobe, etc. The main contribution of this paper is, therefore, a new data driven fMRI analysis that is derived solely based upon the sparsity of the signals. A compressed sensing based data-driven sparse generalized linear model is proposed that enables estimation of spatially adaptive design matrix as well as sparse signal components that represent synchronous, functionally organized and integrated neural hemodynamics. Furthermore, a minimum description length (MDL)-based model order selection rule is shown to be essential in selecting unknown sparsity level for sparse dictionary learning. Using simulation and real fMRI experiments, we show that the proposed method can adapt individual variation better compared to the conventional ICA methods.", "title": "" }, { "docid": "c5ae1d66d31128691e7e7d8e2ccd2ba8", "text": "The scope of this paper is two-fold: firstly it proposes the application of a 1-2-3 Zones approach to Internet of Things (IoT)-related Digital Forensics (DF) investigations. Secondly, it introduces a Next-Best-Thing Triage (NBT) Model for use in conjunction with the 1-2-3 Zones approach where necessary and vice versa. These two `approaches' are essential for the DF process from an IoT perspective: the atypical nature of IoT sources of evidence (i.e. 
Objects of Forensic Interest - OOFI), the pervasiveness of the IoT environment and its other unique attributes - and the combination of these attributes - dictate the necessity for a systematic DF approach to incidents. The two approaches proposed are designed to serve as a beacon to incident responders, increasing the efficiency and effectiveness of their IoT-related investigations by maximizing the use of the available time and ensuring relevant evidence identification and acquisition. The approaches can also be applied in conjunction with existing, recognised DF models, methodologies and frameworks.", "title": "" }, { "docid": "3bf0cead54473e6b118ab8835995bc5f", "text": "A compact printed microstrip-fed monopole ultrawideband antenna with triple notched bands is presented and analyzed in detail. A straight, open-ended quarter-wavelength slot is etched in the radiating patch to create the first notched band in 3.3-3.7 GHz for the WiMAX system. In addition, three semicircular half-wavelength slots are cut in the radiating patch to generate the second and third notched bands in 5.15-5.825 GHz for WLAN and 7.25-7.75 GHz for downlink of X-band satellite communication systems. Surface current distributions and transmission line models are used to analyze the effect of these slots. The antenna is successfully fabricated and measured, showing broad band matched impedance and good omnidirectional radiation pattern. The designed antenna has a compact size of 25 × 29 mm2.", "title": "" }, { "docid": "4d69284c25e1a9a503dd1c12fde23faa", "text": "Human pose estimation has been actively studied for decades. While traditional approaches rely on 2d data like images or videos, the development of Time-of-Flight cameras and other depth sensors created new opportunities to advance the field. We give an overview of recent approaches that perform human motion analysis which includes depthbased and skeleton-based activity recognition, head pose estimation, facial feature detection, facial performance capture, hand pose estimation and hand gesture recognition. While the focus is on approaches using depth data, we also discuss traditional image based methods to provide a broad overview of recent developments in these areas.", "title": "" }, { "docid": "4357e361fd35bcbc5d6a7c195a87bad1", "text": "In an age of increasing technology, the possibility that typing on a keyboard will replace handwriting raises questions about the future usefulness of handwriting skills. Here we present evidence that brain activation during letter perception is influenced in different, important ways by previous handwriting of letters versus previous typing or tracing of those same letters. Preliterate, five-year old children printed, typed, or traced letters and shapes, then were shown images of these stimuli while undergoing functional MRI scanning. A previously documented \"reading circuit\" was recruited during letter perception only after handwriting-not after typing or tracing experience. These findings demonstrate that handwriting is important for the early recruitment in letter processing of brain regions known to underlie successful reading. 
Handwriting therefore may facilitate reading acquisition in young children.", "title": "" }, { "docid": "859c6f75ac740e311da5e68fcd093531", "text": "PURPOSE\nTo understand the effect of socioeconomic status (SES) on the risk of complications in type 1 diabetes (T1D), we explored the relationship between SES and major diabetes complications in a prospective, observational T1D cohort study.\n\n\nMETHODS\nComplete data were available for 317 T1D persons within 4 years of age 28 (ages 24-32) in the Pittsburgh Epidemiology of Diabetes Complications Study. Age 28 was selected to maximize income, education, and occupation potential and to minimize the effect of advanced diabetes complications on SES.\n\n\nRESULTS\nThe incidences over 1 to 20 years' follow-up of end-stage renal disease and coronary artery disease were two to three times greater for T1D individuals without, compared with those with a college degree (p < .05 for both), whereas the incidence of autonomic neuropathy was significantly greater for low-income and/or nonprofessional participants (p < .05 for both). HbA(1c) was inversely associated only with income level. In sex- and diabetes duration-adjusted Cox models, lower education predicted end-stage renal disease (hazard ratio [HR], 2.9; 95% confidence interval [95% CI], 1.1-7.7) and coronary artery disease (HR, 2.5, 95% CI, 1.3-4.9), whereas lower income predicted autonomic neuropathy (HR, 1.7; 95% CI, 1.0-2.9) and lower-extremity arterial disease (HR, 3.7; 95% CI, 1.1-11.9).\n\n\nCONCLUSIONS\nThese associations, partially mediated by clinical risk factors, suggest that lower SES T1D individuals may have poorer self-management and, thus, greater complications from diabetes.", "title": "" }, { "docid": "62e445cabbb5c79375f35d7b93f9a30d", "text": "The recent outbreak of indie games has popularized volumetric terrains to a new level, although video games have used them for decades. These terrains contain geological data, such as materials or cave systems. To improve the exploration experience and due to the large amount of data needed to construct volumetric terrains, industry uses procedural methods to generate them. However, they use their own methods, which are focused on their specific problem domains, lacking customization features. Besides, the evaluation of the procedural terrain generators remains an open issue in this field since no standard metrics have been established yet. In this paper, we propose a new approach to procedural volumetric terrains. It generates completely customizable volumetric terrains with layered materials and other features (e.g., mineral veins, underground caves, material mixtures and underground material flow). The method allows the designer to specify the characteristics of the terrain using intuitive parameters. Additionally, it uses a specific representation for the terrain based on stacked material structures, reducing memory requirements. To overcome the problem in the evaluation of the generators, we propose a new set of metrics for the generated content.", "title": "" }, { "docid": "4f23f9ddf35f6e2f7f5ecfcdf28edcea", "text": "OBJECTIVE\nWe quantified the range of motion (ROM) required for eight upper-extremity activities of daily living (ADLs) in healthy participants.\n\n\nMETHOD\nFifteen right-handed participants completed several bimanual and unilateral basic ADLs while joint kinematics were monitored using a motion capture system. 
Peak motions of the pelvis, trunk, shoulder, elbow, and wrist were quantified for each task.\n\n\nRESULTS\nTo complete all activities tested, participants needed a minimum ROM of -65°/0°/105° for humeral plane angle (horizontal abduction-adduction), 0°-108° for humeral elevation, -55°/0°/79° for humeral rotation, 0°-121° for elbow flexion, -53°/0°/13° for forearm rotation, -40°/0°/38° for wrist flexion-extension, and -28°/0°/38° for wrist ulnar-radial deviation. Peak trunk ROM was 23° lean, 32° axial rotation, and 59° flexion-extension.\n\n\nCONCLUSION\nFull upper-limb kinematics were calculated for several ADLs. This methodology can be used in future studies as a basis for developing normative databases of upper-extremity motions and evaluating pathology in populations.", "title": "" }, { "docid": "a3ef868300a3c036c2f8802aa6a3793d", "text": "This paper presents a manifesto directed at developers and designers of internet-of-things creation platforms. Currently, most existing creation platforms are tailored to specific types of end-users, mostly people with a substantial background in or affinity with technology. The thirteen items presented in the manifesto however, resulted from several user studies including non-technical users, and highlight aspects that should be taken into account in order to open up internet-of-things creation to a wider audience. To reach out and involve more people in internet-of-things creation, a relation is made to the social phenomenon of do-it-yourself, which provides valuable insights into how society can be encouraged to get involved in creation activities. Most importantly, the manifesto aims at providing a framework for do-it-yourself systems enabling non-technical users to create internet-of-things applications.", "title": "" }, { "docid": "5d5c3c8cc8344a8c5d18313bec9adb04", "text": "Research in reinforcement learning (RL) has thus far concentrated on two optimality criteria: the discounted framework, which has been very well-studied, and the average-reward framework, in which interest is rapidly increasing. In this paper, we present a framework called sensitive discount optimality which ooers an elegant way of linking these two paradigms. Although sensitive discount optimality has been well studied in dynamic programming, with several provably convergent algorithms, it has not received any attention in RL. This framework is based on studying the properties of the expected cumulative discounted reward, as discounting tends to 1. Under these conditions, the cumulative discounted reward can be expanded using a Laurent series expansion to yields a sequence of terms, the rst of which is the average reward, the second involves the average adjusted sum of rewards (or bias), etc. We use the sensitive discount optimality framework to derive a new model-free average reward technique, which is related to Q-learning type methods proposed by Bertsekas, Schwartz, and Singh, but which unlike these previous methods, optimizes both the rst and second terms in the Laurent series (average reward and bias values). Statement: This paper has not been submitted to any other conference.", "title": "" }, { "docid": "03dc2c32044a41715991d900bb7ec783", "text": "The analysis of large scale data logged from complex cyber-physical systems, such as microgrids, often entails the discovery of invariants capturing functional as well as operational relationships underlying such large systems. 
We describe a latent factor approach to infer invariants underlying system variables and how we can leverage these relationships to monitor a cyber-physical system. In particular we illustrate how this approach helps rapidly identify outliers during system operation.", "title": "" }, { "docid": "af3af0a4102ea0fb555cad52e4cafa50", "text": "The identification of the exact positions of the first and second heart sounds within a phonocardiogram (PCG), or heart sound segmentation, is an essential step in the automatic analysis of heart sound recordings, allowing for the classification of pathological events. While threshold-based segmentation methods have shown modest success, probabilistic models, such as hidden Markov models, have recently been shown to surpass the capabilities of previous methods. Segmentation performance is further improved when apriori information about the expected duration of the states is incorporated into the model, such as in a hidden semiMarkov model (HSMM). This paper addresses the problem of the accurate segmentation of the first and second heart sound within noisy real-world PCG recordings using an HSMM, extended with the use of logistic regression for emission probability estimation. In addition, we implement a modified Viterbi algorithm for decoding the most likely sequence of states, and evaluated this method on a large dataset of 10 172 s of PCG recorded from 112 patients (including 12 181 first and 11 627 second heart sounds). The proposed method achieved an average F1 score of 95.63 ± 0.85%, while the current state of the art achieved 86.28 ± 1.55% when evaluated on unseen test recordings. The greater discrimination between states afforded using logistic regression as opposed to the previous Gaussian distribution-based emission probability estimation as well as the use of an extended Viterbi algorithm allows this method to significantly outperform the current state-of-the-art method based on a two-sided paired t-test.", "title": "" }, { "docid": "bb240f2e536e5e5cd80fcca8c9d98171", "text": "We propose a novel metaphor interpretation method, Meta4meaning. It provides interpretations for nominal metaphors by generating a list of properties that the metaphor expresses. Meta4meaning uses word associations extracted from a corpus to retrieve an approximation to properties of concepts. Interpretations are then obtained as an aggregation or difference of the saliences of the properties to the tenor and the vehicle. We evaluate Meta4meaning using a set of humanannotated interpretations of 84 metaphors and compare with two existing methods for metaphor interpretation. Meta4meaning significantly outperforms the previous methods on this task.", "title": "" }, { "docid": "7a82c189c756e9199ae0d394ed9ade7f", "text": "Since the late 1970s, globalization has become a phenomenon that has elicited polarizing responses from scholars, politicians, activists, and the business community. Several scholars and activists, such as labor unions, see globalization as an anti-democratic movement that would weaken the nation-state in favor of the great powers. There is no doubt that globalization, no matter how it is defined, is here to stay, and is causing major changes on the globe. Given the rapid proliferation of advances in technology, communication, means of production, and transportation, globalization is a challenge to health and well-being worldwide. On an international level, the average human lifespan is increasing primarily due to advances in medicine and technology. 
The trends are a reflection of increasing health care demands along with the technological advances needed to prevent, diagnose, and treat disease (IOM, 1997). Along with this increase in longevity comes the concern of finding commonalities in the treatment of health disparities for all people. In a seminal work by Friedman (2005), it is posited that the connecting of knowledge into a global network will result in eradication of most of the healthcare translational barriers we face today. Since healthcare is a knowledge-driven profession, it is reasonable to presume that global healthcare will become more than just a buzzword. This chapter looks at all aspects or components of globalization but focuses specifically on how the movement impacts the health of the people and the nations of the world. The authors propose to use the concept of health as a measuring stick of the claims made on behalf of globalization.", "title": "" }, { "docid": "e8e2cd6e4aacbf1427a50e009bfa35cf", "text": "We present a model that, after learning on observations of (sequence, outcome) pairs, can be efficiently used to revise a new sequence in order to improve its associated outcome. Our framework requires neither example improvements, nor additional evaluation of outcomes for proposed revisions. To avoid combinatorial-search over sequence elements, we specify a generative model with continuous latent factors, which is learned via joint approximate inference using a recurrent variational autoencoder (VAE) and an outcome-predicting neural network module. Under this model, gradient methods can be used to efficiently optimize the continuous latent factors with respect to inferred outcomes. By appropriately constraining this optimization and using the VAE decoder to generate a revised sequence, we ensure the revision is fundamentally similar to the original sequence, is associated with better outcomes, and looks natural. These desiderata are proven to hold with high probability under our approach, which is empirically demonstrated for revising natural language sentences. Introduction The success of recurrent neural network (RNN) models in complex tasks like machine translation and audio synthesis has inspired immense interest in learning from sequence data (Eck & Schmidhuber, 2002; Graves, 2013; Sutskever et al., 2014; Karpathy, 2015). Comprised of elements s_t ∈ S, which are typically symbols from a discrete vocabulary, a sequence x = (s_1, ..., s_T) ∈ X has length T which can vary between different instances. Sentences are a popular example of such data, where each s_j is a word from the language. In many domains, only a tiny fraction of X (the set of possible sequences over a given vocabulary) represents sequences likely to be found in nature (i.e. those which appear realistic). For example: a random sequence of words will almost never form a coherent sentence that reads naturally, and a random amino-acid sequence is highly unlikely to specify a biologically active protein. In this work, we consider applications where each sequence x is associated with a corresponding outcome y ∈ R.
For example: a news article title or Twitter post can be associated with the number of shares it subsequently received online, or the amino-acid sequence of a synthetic protein can be associated with its clinical efficacy. We operate under the standard supervised learning setting, assuming availability of a dataset D", "title": "" }, { "docid": "e7ad934ea591d5b4a6899b5eb2fa1cb3", "text": "Increases in the size of the pupil of the eye have been found to accompany the viewing of emotionally toned or interesting visual stimuli. A technique for recording such changes has been developed, and preliminary results with cats and human beings are reported with attention being given to differences between the sexes in response to particular types of material.", "title": "" }, { "docid": "a64f1bb761ac8ee302a278df03eecaa8", "text": "We analyze StirTrace towards benchmarking face morphing forgeries and extending it by additional scaling functions for the face biometrics scenario. We benchmark a Benford's law based multi-compression-anomaly detection approach and acceptance rates of morphs for a face matcher to determine the impact of the processing on the quality of the forgeries. We use 2 different approaches for automatically creating 3940 images of morphed faces. Based on this data set, 86614 images are created using StirTrace. A manual selection of 183 high quality morphs is used to derive tendencies based on the subjective forgery quality. Our results show that the anomaly detection seems to be able to detect anomalies in the morphing regions, the multi-compression-anomaly detection performance after the processing can be differentiated into good (e.g. cropping), partially critical (e.g. rotation) and critical results (e.g. additive noise). The influence of the processing on the biometric matcher is marginal.", "title": "" }, { "docid": "9e0a28a8205120128938b52ba8321561", "text": "Modeling data with linear combinations of a few elements from a learned dictionary has been the focus of much recent research in machine learning, neuroscience, and signal processing. For signals such as natural images that admit such sparse representations, it is now well established that these models are well suited to restoration tasks. In this context, learning the dictionary amounts to solving a large-scale matrix factorization problem, which can be done efficiently with classical optimization tools. The same approach has also been used for learning features from data for other purposes, e.g., image classification, but tuning the dictionary in a supervised way for these tasks has proven to be more difficult. In this paper, we present a general formulation for supervised dictionary learning adapted to a wide variety of tasks, and present an efficient algorithm for solving the corresponding optimization problem. Experiments on handwritten digit classification, digital art identification, nonlinear inverse image problems, and compressed sensing demonstrate that our approach is effective in large-scale settings, and is well suited to supervised and semi-supervised classification, as well as regression tasks for data that admit sparse representations.", "title": "" }, { "docid": "4b7714c60749a2f945f21ca3d6d367fe", "text": "Abstractive summarization aims to generate a shorter version of the document covering all the salient points in a compact and coherent fashion. On the other hand, query-based summarization highlights those points that are relevant in the context of a given query. 
The encode-attend-decode paradigm has achieved notable success in machine translation, extractive summarization, dialog systems, etc. But it suffers from the drawback of generation of repeated phrases. In this work we propose a model for the query-based summarization task based on the encode-attend-decode paradigm with two key additions (i) a query attention model (in addition to document attention model) which learns to focus on different portions of the query at different time steps (instead of using a static representation for the query) and (ii) a new diversity based attention model which aims to alleviate the problem of repeating phrases in the summary. In order to enable the testing of this model we introduce a new query-based summarization dataset building on debatepedia. Our experiments show that with these two additions the proposed model clearly outperforms vanilla encode-attend-decode models with a gain of 28% (absolute) in ROUGE-L scores.", "title": "" } ]
scidocsrr
c857af66e1ebadea18b3b07de5b0400a
A Parallel Method for Earth Mover's Distance
[ { "docid": "872a79a47e6a4d83e7440ea5e7126dee", "text": "We propose simple and extremely efficient methods for solving the Basis Pursuit problem min{‖u‖1 : Au = f, u ∈ R}, which is used in compressed sensing. Our methods are based on Bregman iterative regularization and they give a very accurate solution after solving only a very small number of instances of the unconstrained problem min u∈Rn μ‖u‖1 + 1 2 ‖Au− f‖2, for given matrix A and vector fk. We show analytically that this iterative approach yields exact solutions in a finite number of steps, and present numerical results that demonstrate that as few as two to six iterations are sufficient in most cases. Our approach is especially useful for many compressed sensing applications where matrix-vector operations involving A and A> can be computed by fast transforms. Utilizing a fast fixed-point continuation solver that is solely based on such operations for solving the above unconstrained sub-problem, we were able to solve huge instances of compressed sensing problems quickly on a standard PC.", "title": "" } ]
[ { "docid": "ed530d8481bbfd81da4bdf5d611ad4a4", "text": "Traumatic coma was produced in 45 monkeys by accelerating the head without impact in one of three directions. The duration of coma, degree of neurological impairment, and amount of diffuse axonal injury (DAI) in the brain were directly related to the amount of coronal head motion used. Coma of less than 15 minutes (concussion) occurred in 11 of 13 animals subjected to sagittal head motion, in 2 of 6 animals with oblique head motion, and in 2 of 26 animals with full lateral head motion. All 15 concussioned animals had good recovery, and none had DAI. Conversely, coma lasting more than 6 hours occurred in one of the sagittal or oblique injury groups but was present in 20 of the laterally injured animals, all of which were severely disabled afterward. All laterally injured animals had a degree of DAI similar to that found in severe human head injury. Coma lasting 16 minutes to 6 hours occurred in 2 of 13 of the sagittal group, 4 of 6 in the oblique group, and 4 of 26 in the lateral group, these animals had less neurological disability and less DAI than when coma lasted longer than 6 hours. These experimental findings duplicate the spectrum of traumatic coma seen in human beings and include axonal damage identical to that seen in sever head injury in humans. Since the amount of DAI was directly proportional to the severity of injury (duration of coma and quality of outcome), we conclude that axonal damage produced by coronal head acceleration is a major cause of prolonged traumatic coma and its sequelae.", "title": "" }, { "docid": "84af7a01dc5486c800f1cf94832ac5a8", "text": "A technique intended to increase the diversity order of bit-interleaved coded modulations (BICM) over non Gaussian channels is presented. It introduces simple modifications to the mapper and to the corresponding demapper. They consist of a constellation rotation coupled with signal space component interleaving. Iterative processing at the receiver side can provide additional improvement to the BICM performance. This method has been shown to perform well over fading channels with or without erasures. It has been adopted for the 4-, 16-, 64- and 256-QAM constellations considered in the DVB-T2 standard. Resulting gains can vary from 0.2 dB to several dBs depending on the order of the constellation, the coding rate and the channel model.", "title": "" }, { "docid": "9d45323cd4550075d4c2569065ae583c", "text": "Research on Offline Handwritten Signature Verification explored a large variety of handcrafted feature extractors, ranging from graphology, texture descriptors to interest points. In spite of advancements in the last decades, performance of such systems is still far from optimal when we test the systems against skilled forgeries - signature forgeries that target a particular individual. In previous research, we proposed a formulation of the problem to learn features from data (signature images) in a Writer-Independent format, using Deep Convolutional Neural Networks (CNNs), seeking to improve performance on the task. In this research, we push further the performance of such method, exploring a range of architectures, and obtaining a large improvement in state-of-the-art performance on the GPDS dataset, the largest publicly available dataset on the task. In the GPDS-160 dataset, we obtained an Equal Error Rate of 2.74%, compared to 6.97% in the best result published in literature (that used a combination of multiple classifiers). 
We also present a visual analysis of the feature space learned by the model, and an analysis of the errors made by the classifier. Our analysis shows that the model is very effective in separating signatures that have a different global appearance, while being particularly vulnerable to forgeries that very closely resemble genuine signatures, even if their line quality is bad, which is the case of slowly-traced forgeries.", "title": "" }, { "docid": "17ba29c670e744d6e4f9e93ceb109410", "text": "This paper presents a novel online video recommendation system called VideoReach, which alleviates users' efforts on finding the most relevant videos according to current viewings without a sufficient collection of user profiles as required in traditional recommenders. In this system, video recommendation is formulated as finding a list of relevant videos in terms of multimodal relevance (i.e. textual, visual, and aural relevance) and user click-through. Since different videos have different intra-weights of relevance within an individual modality and inter-weights among different modalities, we adopt relevance feedback to automatically find optimal weights by user click-through, as well as an attention fusion function to fuse multimodal relevance. We use 20 clips as the representative test videos, which are searched by top 10 queries from more than 13k online videos, and report superior performance compared with an existing video site.", "title": "" }, { "docid": "e96c9bdd3f5e9710f7264cbbe02738a7", "text": "25 years ago, Lenstra, Lenstra and Lovász presented their celebrated LLL lattice reduction algorithm. Among the various applications of the LLL algorithm is a method due to Coppersmith for finding small roots of polynomial equations. We give a survey of the applications of this root finding method to the problem of inverting the RSA function and the factorization problem. As we will see, most of the results are of a dual nature, they can either be interpreted as cryptanalytic results or as hardness/security results.", "title": "" }, { "docid": "640f9ca0bec934786b49f7217e65780b", "text": "Social Networking has become today’s lifestyle and anyone can easily receive information about everyone in the world. It is very useful if a personal identity can be obtained from the mobile device and also connected to social networking. Therefore, we proposed a face recognition system on mobile devices by combining cloud computing services. Our system is designed in the form of an application developed on Android mobile devices which utilized the Face.com API as an image data processor for cloud computing services. We also applied the Augmented Reality as an information viewer to the users. The result of testing shows that the system is able to recognize face samples with the average percentage of 85% with the total computation time for the face recognition system reached 7.45 seconds, and the average augmented reality translation time is 1.03 seconds to get someone’s information.", "title": "" }, { "docid": "934bdd758626ec37241cffba8e2cbeb9", "text": "The combination of GPS/INS provides an ideal navigation system of full capability of continuously outputting position, velocity, and attitude of the host platform. However, the accuracy of INS degrades with time when GPS signals are blocked in environments such as tunnels, dense urban canyons and indoors.
To dampen down the error growth, the INS sensor errors should be properly estimated and compensated before the inertial data are involved in the navigation computation. Therefore appropriate modelling of the INS sensor errors is a necessity. Allan Variance (AV) is a simple and efficient method for verifying and modelling these errors by representing the root mean square (RMS) random drift error as a function of averaging time. The AV can be used to determine the characteristics of different random processes. This paper applies the AV to analyse and model different types of random errors residing in the measurements of MEMS inertial sensors. The derived error model will be further applied to a low-cost GPS/MEMS-INS system once the correctness of the model is verified. The paper gives the detail of the AV analysis as well as presents the test results.", "title": "" }, { "docid": "f670bd1ad43f256d5f02039ab200e1e8", "text": "This article addresses the performance of distributed database systems. Specifically, we present an algorithm for dynamic replication of an object in distributed systems. The algorithm is adaptive in the sense that it changes the replication scheme of the object (i.e., the set of processors at which the object is replicated) as changes occur in the read-write pattern of the object (i.e., the number of reads and writes issued by each processor). The algorithm continuously moves the replication scheme towards an optimal one. We show that the algorithm can be combined with the concurrency control and recovery mechanisms of a distributed database management system. The performance of the algorithm is analyzed theoretically and experimentally. On the way we provide a lower bound on the performance of any dynamic replication algorithm.", "title": "" }, { "docid": "45b90a55678a022f6c3f128d0dc7d1bf", "text": "Finding community structures in online social networks is an important methodology for understanding the internal organization of users and actions. Most previous studies have focused on structural properties to detect communities. They do not analyze the information gathered from the posting activities of members of social networks, nor do they consider overlapping communities. To tackle these two drawbacks, a new overlapping community detection method involving social activities and semantic analysis is proposed. This work applies a fuzzy membership to detect overlapping communities with different extent and runs semantic analysis to include information contained in posts. The available resource description format contributes to research in social networks. Based on this new understanding of social networks, this approach can be adopted for large online social networks and for social portals, such as forums, that are not based on network topology. The efficiency and feasibility of this method are verified by the available experimental analysis. The results obtained by the tests on real networks indicate that the proposed approach can be effective in discovering labelled and overlapping communities with a high amount of modularity. This approach is fast enough to process very large and dense social networks.", "title": "" }, { "docid": "b7521521277f944a9532dc4435a2bda7", "text": "The NDN project investigates Jacobson's proposed evolution from today's host-centric network architecture (IP) to a data-centric network architecture (NDN). This conceptually simple shift has far-reaching implications in how we design, develop, deploy and use networks and applications.
The NDN design and development has attracted significant attention from the networking community. To facilitate broader participation in addressing NDN research and development challenges, this tutorial will describe the vision of this new architecture and its basic components and operations.", "title": "" }, { "docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "e7686824a9449bf793554fcf78b66c0e", "text": "In this paper, tension propagation analysis of a newly designed multi-DOF robotic platform for single-port access surgery (SPS) is presented. The analysis is based on instantaneous kinematics of the proposed 6-DOF surgical instrument, and provides the decision criteria for estimating the payload of a surgical instrument according to its pose changes and specifications of a driving-wire. Also, the wire-tension and the number of reduction ratio to manage such a payload can be estimated, quantitatively. The analysis begins with derivation of the power transmission efficiency through wire-interfaces from each instrument joint to an actuator. Based on the energy conservation law and the capstan equation, we modeled the degradation of power transmission efficiency due to 1) the reducer called wire-reduction mechanism, 2) bending of proximal instrument joints, and 3) bending of hyper-redundant guide tube. Based on the analysis, the tension of driving-wires was computed according to various manipulation poses and loading conditions. In our experiment, a newly designed surgical instrument successfully managed the external load of 1kgf, which was applied to the end effector of a surgical manipulator.", "title": "" }, { "docid": "c78ebe9d42163142379557068b652a9c", "text": "A tumor is a mass of tissue that's formed by an accumulation of abnormal cells. Normally, the cells in your body age, die, and are replaced by new cells. With cancer and other tumors, something disrupts this cycle. Tumor cells grow, even though the body does not need them, and unlike normal old cells, they don't die. As this process goes on, the tumor continues to grow as more and more cells are added to the mass. Image processing is an active research area in which medical image processing is a highly challenging field. Brain tumor analysis is done by doctors but its grading gives different conclusions which may vary from one doctor to another. In this project, it provides a foundation of segmentation and edge detection, as the first step towards brain tumor grading. Current segmentation approaches are reviewed with an emphasis placed on revealing the advantages and disadvantages of these methods for medical imaging applications. There are dissimilar types of algorithm were developed for brain tumor detection. Comparing to the other algorithms the performance of fuzzy c-means plays a major role. The patient's stage is determined by this process, whether it can be cured with medicine or not. 
Also we study difficulty to detect Mild traumatic brain injury (mTBI) the current tools are qualitative, which can lead to poor diagnosis and treatment and to overcome these difficulties, an algorithm is proposed that takes advantage of subject information and texture information from MR images. A contextual model is developed to simulate the progression of the disease using multiple inputs, such as the time post injury and the location of injury. Textural features are used along with feature selection for a single MR modality.", "title": "" }, { "docid": "9530749d15f1f3493f920b84e6e8cebd", "text": "The view that humans comprise only two types of beings, women and men, a framework that is sometimes referred to as the \"gender binary,\" played a profound role in shaping the history of psychological science. In recent years, serious challenges to the gender binary have arisen from both academic research and social activism. This review describes 5 sets of empirical findings, spanning multiple disciplines, that fundamentally undermine the gender binary. These sources of evidence include neuroscience findings that refute sexual dimorphism of the human brain; behavioral neuroendocrinology findings that challenge the notion of genetically fixed, nonoverlapping, sexually dimorphic hormonal systems; psychological findings that highlight the similarities between men and women; psychological research on transgender and nonbinary individuals' identities and experiences; and developmental research suggesting that the tendency to view gender/sex as a meaningful, binary category is culturally determined and malleable. Costs associated with reliance on the gender binary and recommendations for future research, as well as clinical practice, are outlined. (PsycINFO Database Record", "title": "" }, { "docid": "8c679f94e31dc89787ccff8e79e624b5", "text": "This paper presents a radar sensor package specifically developed for wide-coverage sounding and imaging of polar ice sheets from a variety of aircraft. Our instruments address the need for a reliable remote sensing solution well-suited for extensive surveys at low and high altitudes and capable of making measurements with fine spatial and temporal resolution. The sensor package that we are presenting consists of four primary instruments and ancillary systems with all the associated antennas integrated into the aircraft to maintain aerodynamic performance. The instruments operate simultaneously over different frequency bands within the 160 MHz-18 GHz range. The sensor package has allowed us to sound the most challenging areas of the polar ice sheets, ice sheet margins, and outlet glaciers; to map near-surface internal layers with fine resolution; and to detect the snow-air and snow-ice interfaces of snow cover over sea ice to generate estimates of snow thickness. In this paper, we provide a succinct description of each radar and associated antenna structures and present sample results to document their performance. We also give a brief overview of our field measurement programs and demonstrate the unique capability of the sensor package to perform multifrequency coincidental measurements from a single airborne platform. Finally, we illustrate the relevance of using multispectral radar data as a tool to characterize the entire ice column and to reveal important subglacial features.", "title": "" }, { "docid": "99cb4f69fb7b6ff16c9bffacd7a42f4d", "text": "Single cell segmentation is critical and challenging in live cell imaging data analysis. 
Traditional image processing methods and tools require time-consuming and labor-intensive efforts of manually fine-tuning parameters. Slight variations of image setting may lead to poor segmentation results. Recent development of deep convolutional neural networks(CNN) provides a potentially efficient, general and robust method for segmentation. Most existing CNN-based methods treat segmentation as a pixel-wise classification problem. However, three unique problems of cell images adversely affect segmentation accuracy: lack of established training dataset, few pixels on cell boundaries, and ubiquitous blurry features. The problem becomes especially severe with densely packed cells, where a pixel-wise classification method tends to identify two neighboring cells with blurry shared boundary as one cell, leading to poor cell count accuracy and affecting subsequent analysis. Here we developed a different learning strategy that combines strengths of CNN and watershed algorithm. The method first trains a CNN to learn Euclidean distance transform of binary masks corresponding to the input images. Then another CNN is trained to detect individual cells in the Euclidean distance transform. In the third step, the watershed algorithm takes the outputs from the previous steps as inputs and performs the segmentation. We tested the combined method and various forms of the pixel-wise classification algorithm on segmenting fluorescence and transmitted light images. The new method achieves similar pixel accuracy but significant higher cell count accuracy than pixel-wise classification methods do, and the advantage is most obvious when applying on noisy images of densely packed cells.", "title": "" }, { "docid": "ef9650746ac9ab803b2a3bbdd5493fee", "text": "This paper addresses the problem of establishing correspondences between two sets of visual features using higher order constraints instead of the unary or pairwise ones used in classical methods. Concretely, the corresponding hypergraph matching problem is formulated as the maximization of a multilinear objective function over all permutations of the features. This function is defined by a tensor representing the affinity between feature tuples. It is maximized using a generalization of spectral techniques where a relaxed problem is first solved by a multidimensional power method and the solution is then projected onto the closest assignment matrix. The proposed approach has been implemented, and it is compared to state-of-the-art algorithms on both synthetic and real data.", "title": "" }, { "docid": "ab572c22a75656c19e50b311eb4985ec", "text": "With the increasingly complex electromagnetic environment of communication, as well as the gradually increased radar signal types, how to effectively identify the types of radar signals at low SNR becomes a hot topic. A radar signal recognition algorithm based on entropy features, which describes the distribution characteristics for different types of radar signals by extracting Shannon entropy, Singular spectrum Shannon entropy and Singular spectrum index entropy features, was proposed to achieve the purpose of signal identification. 
Simulation results show that, the algorithm based on entropies has good anti-noise performance, and it can still describe the characteristics of signals well even at low SNR, which can achieve the purpose of identification and classification for different radar signals.", "title": "" }, { "docid": "1de46f2eee8db2fad444faa6fbba4d1c", "text": "Hyunsook Yoon Dongguk University, Korea This paper reports on a qualitative study that investigated the changes in students’ writing process associated with corpus use over an extended period of time. The primary purpose of this study was to examine how corpus technology affects students’ development of competence as second language (L2) writers. The research was mainly based on case studies with six L2 writers in an English for Academic Purposes writing course. The findings revealed that corpus use not only had an immediate effect by helping the students solve immediate writing/language problems, but also promoted their perceptions of lexicogrammar and language awareness. Once the corpus approach was introduced to the writing process, the students assumed more responsibility for their writing and became more independent writers, and their confidence in writing increased. This study identified a wide variety of individual experiences and learning contexts that were involved in deciding the levels of the students’ willingness and success in using corpora. This paper also discusses the distinctive contributions of general corpora to English for Academic Purposes and the importance of lexical and grammatical aspects in L2 writing pedagogy.", "title": "" }, { "docid": "cb2f5ac9292df37860b02313293d2f04", "text": "How can web services that depend on user generated content discern fake social engagement activities by spammers from legitimate ones? In this paper, we focus on the social site of YouTube and the problem of identifying bad actors posting inorganic contents and inflating the count of social engagement metrics. We propose an effective method, Leas (Local Expansion at Scale), and show how the fake engagement activities on YouTube can be tracked over time by analyzing the temporal graph based on the engagement behavior pattern between users and YouTube videos. With the domain knowledge of spammer seeds, we formulate and tackle the problem in a semi-supervised manner — with the objective of searching for individuals that have similar pattern of behavior as the known seeds — based on a graph diffusion process via local spectral subspace. We offer a fast, scalable MapReduce deployment adapted from the localized spectral clustering algorithm. We demonstrate the effectiveness of our deployment at Google by achieving a manual review accuracy of 98% on YouTube Comments graph in practice. Comparing with the state-of-the-art algorithm CopyCatch, Leas achieves 10 times faster running time on average. Leas is now actively in use at Google, searching for daily deceptive practices on YouTube’s engagement graph spanning over a", "title": "" } ]
scidocsrr
57efca4f00bb10f737800d3d006c3ce9
Real-Time Data Analytics in Sensor Networks
[ { "docid": "2abd75766d4875921edd4d6d63d5d617", "text": "Wireless sensor networks typically consist of a large number of sensor nodes embedded in a physical space. Such sensors are low-power devices that are primarily used for monitoring several physical phenomena, potentially in remote harsh environments. Spatial and temporal dependencies between the readings at these nodes highly exist in such scenarios. Statistical contextual information encodes these spatio-temporal dependencies. It enables the sensors to locally predict their current readings based on their own past readings and the current readings of their neighbors. In this paper, we introduce context-aware sensors. Specifically, we propose a technique for modeling and learning statistical contextual information in sensor networks. Our approach is based on Bayesian classifiers; we map the problem of learning and utilizing contextual information to the problem of learning the parameters of a Bayes classifier, and then making inferences, respectively. We propose a scalable and energy-efficient procedure for online learning of these parameters in-network, in a distributed fashion. We discuss applications of our approach in discovering outliers and detection of faulty sensors, approximation of missing values, and in-network sampling. We experimentally analyze our approach in two applications, tracking and monitoring.", "title": "" } ]
[ { "docid": "a17bf7467da65eede493d543a335c9ae", "text": "Recently interest has grown in applying activity theory, the leading theoretical approach in Russian psychology, to issues of human-computer interaction. This chapter analyzes why experts in the field are looking for an alternative to the currently dominant cognitive approach. The basic principles of activity theory are presented and their implications for human-computer interaction are discussed. The chapter concludes with an outline of the potential impact of activity theory on studies and design of computer use in real-life settings.", "title": "" }, { "docid": "18140fdf4629a1c7528dcd6060f427c3", "text": "Central to many text analysis methods is the notion of a concept: a set of semantically related keywords characterizing a specific object, phenomenon, or theme. Advances in word embedding allow building a concept from a small set of seed terms. However, naive application of such techniques may result in false positive errors because of the polysemy of natural language. To mitigate this problem, we present a visual analytics system called ConceptVector that guides a user in building such concepts and then using them to analyze documents. Document-analysis case studies with real-world datasets demonstrate the fine-grained analysis provided by ConceptVector. To support the elaborate modeling of concepts, we introduce a bipolar concept model and support for specifying irrelevant words. We validate the interactive lexicon building interface by a user study and expert reviews. Quantitative evaluation shows that the bipolar lexicon generated with our methods is comparable to human-generated ones.", "title": "" }, { "docid": "f1d00811120f666763e56e33ad2c3b10", "text": "Fairness is a critical trait in decision making. As machine-learning models are increasingly being used in sensitive application domains (e.g. education and employment) for decision making, it is crucial that the decisions computed by such models are free of unintended bias. But how can we automatically validate the fairness of arbitrary machine-learning models? For a given machine-learning model and a set of sensitive input parameters, our Aeqitas approach automatically discovers discriminatory inputs that highlight fairness violation. At the core of Aeqitas are three novel strategies to employ probabilistic search over the input space with the objective of uncovering fairness violation. Our Aeqitas approach leverages inherent robustness property in common machine-learning models to design and implement scalable test generation methodologies. An appealing feature of our generated test inputs is that they can be systematically added to the training set of the underlying model and improve its fairness. To this end, we design a fully automated module that guarantees to improve the fairness of the model. We implemented Aeqitas and we have evaluated it on six stateof- the-art classifiers. Our subjects also include a classifier that was designed with fairness in mind. We show that Aeqitas effectively generates inputs to uncover fairness violation in all the subject classifiers and systematically improves the fairness of respective models using the generated test inputs. In our evaluation, Aeqitas generates up to 70% discriminatory inputs (w.r.t. 
the total number of inputs generated) and leverages these inputs to improve the fairness up to 94%.", "title": "" }, { "docid": "fa0c62b91643a45a5eff7c1b1fa918f1", "text": "This paper presents outdoor field experimental results to clarify the 4x4 MIMO throughput performance from applying multi-point transmission in the 15 GHz frequency band in the downlink of 5G cellular radio access system. The experimental results in large-cell scenario shows that up to 30 % throughput gain compared to non-multi-point transmission is achieved although the difference for the RSRP of two TPs is over 10 dB, so that the improvement for the antenna correlation is achievable and important aspect for the multi-point transmission in the 15 GHz frequency band as well as the improvement of the RSRP. Furthermore in small-cell scenario, the throughput gain of 70% and over 5 Gbps are achieved applying multi-point transmission in the condition of two different MIMO streams transmission from a single TP as distributed MIMO instead of four MIMO streams transmission from a single TP.", "title": "" }, { "docid": "be9d13a24f41eadc0a1d15d99e594b55", "text": "Traditionally, mobile robot design is based on wheels, tracks or legs with their respective advantages and disadvantages. Very few groups have explored designs with spherical morphology. During the past ten years, the number of robots with spherical shape and related studies has substantially increased, and a lot of work is done in this area of mobile robotics. Interest in robots with spherical morphology has also increased, in part due to NASA's search for an alternative design for a Mars rover since the wheel-based rover Spirit is now stuck for good in soft soil. This paper presents the spherical amphibious robot Groundbot, developed by Rotundus AB in Stockholm, Sweden, and describes in detail the navigation algorithm employed in this system.", "title": "" }, { "docid": "c1477b801a49df62eb978b537fd3935e", "text": "The striatum is thought to play an essential role in the acquisition of a wide range of motor, perceptual, and cognitive skills, but neuroimaging has not yet demonstrated striatal activation during nonmotor skill learning. Functional magnetic resonance imaging was performed while participants learned probabilistic classification, a cognitive task known to rely on procedural memory early in learning and declarative memory later in learning. Multiple brain regions were active during probabilistic classification compared with a perceptual-motor control task, including bilateral frontal cortices, occipital cortex, and the right caudate nucleus in the striatum. The left hippocampus was less active bilaterally during probabilistic classification than during the control task, and the time course of this hippocampal deactivation paralleled the expected involvement of medial temporal structures based on behavioral studies of amnesic patients. Findings provide initial evidence for the role of frontostriatal systems in normal cognitive skill learning.", "title": "" }, { "docid": "84f688155a92ed2196974d24b8e27134", "text": "My sincere thanks to Donald Norman and David Rumelhart for their support of many years. I also wish to acknowledge the help of The views and conclusions contained in this document are those of the author and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the sponsoring agencies. Approved for public release; distribution unlimited. 
Reproduction in whole or in part is permitted for any purpose of the United States Government Requests for reprints should be sent to the", "title": "" }, { "docid": "a7bd8b02d7a46e6b96223122f673a222", "text": "This study was conducted to identify the risk factors that are associated with neonatal mortality in lambs and kids in Jordan. The bacterial causes of mortality in lambs and kids were investigated. One hundred sheep and goat flocks were selected randomly from different areas of North Jordan at the beginning of the lambing season. The flocks were visited every other week to collect information and to take samples from freshly dead animals. By the end of the lambing season, flocks that had neonatal mortality rate ≥ 1.0% were considered as “case group” while flocks that had neonatal mortality rate less than 1.0% − as “control group”. The results indicated that neonatal mortality rate (within 4 weeks of age), in lambs and kids, was 3.2%. However, the early neonatal mortality rate (within 48 hours of age) was 2.01% and represented 62.1% of the neonatal mortalities. The following risk factors were found to be associated with the neonatal mortality in lambs and kids: not separating the neonates from adult animals; not vaccinating dams against infectious diseases (pasteurellosis, colibacillosis and enterotoxemia); walking more than 5 km and starvation-mismothering exposure. The causes of neonatal mortality in lambs and kids were: diarrhea (59.75%), respiratory diseases (13.3%), unknown causes (12.34%), and accident (8.39%). Bacteria responsible for neonatal mortality were: Escherichia coli, Pasteurella multocida, Clostridium perfringens and Staphylococcus aureus. However, E. coli was the most frequent bacterial species identified as cause of neonatal mortality in lambs and kids and represented 63.4% of all bacterial isolates. The E. coli isolates belonged to 10 serogroups, the O44 and O26 being the most frequent isolates.", "title": "" }, { "docid": "1eb4805e6874ea1882a995d0f1861b80", "text": "The Asian-Pacific Association for the Study of the Liver (APASL) convened an international working party on the \"APASL consensus statements and recommendation on management of hepatitis C\" in March, 2015, in order to revise \"APASL consensus statements and management algorithms for hepatitis C virus infection (Hepatol Int 6:409-435, 2012)\". The working party consisted of expert hepatologists from the Asian-Pacific region gathered at Istanbul Congress Center, Istanbul, Turkey on 13 March 2015. New data were presented, discussed and debated to draft a revision. Participants of the consensus meeting assessed the quality of cited studies. Finalized recommendations on treatment of hepatitis C are presented in this review.", "title": "" }, { "docid": "76ecd4ba20333333af4d09b894ff29fc", "text": "This study is an application of social identity theory to feminist consciousness and activism. For women, strong gender identifications may enhance support for equality struggles, whereas for men, they may contribute to backlashes against feminism. University students (N � 276), primarily Euroamerican, completed a measure of gender self-esteem (GSE, that part of one’s selfconcept derived from one’s gender), and two measures of feminism. High GSE in women and low GSE in men were related to support for feminism. 
Consistent with past research, women were more supportive of feminism than men, and in both genders, support for feminist ideas was greater than self-identification as a feminist.", "title": "" }, { "docid": "e5f5aa53a90f482fb46a7f02bae27b20", "text": "Machinima is a low-cost alternative to full production filmmaking. However, creating quality cinematic visualizations with existing machinima techniques still requires a high degree of talent and effort. We introduce a lightweight artificial intelligence system, Cambot, that can be used to assist in machinima production. Cambot takes a script as input and produces a cinematic visualization. Unlike other virtual cinematography systems, Cambot favors an offline algorithm coupled with an extensible library of specific modular and reusable facets of cinematic knowledge. One of the advantages of this approach to virtual cinematography is a tight coordination between the positions and movements of the camera and the actors.", "title": "" }, { "docid": "240c47d27533069f339d8eb090a637a9", "text": "This paper discusses the active and reactive power control method for a modular multilevel converter (MMC) based grid-connected PV system. The voltage vector space analysis is performed by using average value models for the feasibility analysis of reactive power compensation (RPC). The proposed double-loop control strategy enables the PV system to handle unidirectional active power flow and bidirectional reactive power flow. Experiments have been performed on a laboratory-scaled modular multilevel PV inverter. The experimental results verify the correctness and feasibility of the proposed strategy.", "title": "" }, { "docid": "eacf295c0cbd52599a1567c6d4193007", "text": "Search Ranking and Recommendations are fundamental problems of crucial interest to major Internet companies, including web search engines, content publishing websites and marketplaces. However, despite sharing some common characteristics a one-size-fits-all solution does not exist in this space. Given a large difference in content that needs to be ranked, personalized and recommended, each marketplace has a somewhat unique challenge. Correspondingly, at Airbnb, a short-term rental marketplace, search and recommendation problems are quite unique, being a two-sided marketplace in which one needs to optimize for host and guest preferences, in a world where a user rarely consumes the same item twice and one listing can accept only one guest for a certain set of dates. In this paper we describe Listing and User Embedding techniques we developed and deployed for purposes of Real-time Personalization in Search Ranking and Similar Listing Recommendations, two channels that drive 99% of conversions. The embedding models were specifically tailored for Airbnb marketplace, and are able to capture guest's short-term and long-term interests, delivering effective home listing recommendations. We conducted rigorous offline testing of the embedding models, followed by successful online tests before fully deploying them into production.", "title": "" }, { "docid": "47c88bb234a6e21e8037a67e6dd2444f", "text": "Lacking an operational theory to explain the organization and behaviour of matter in unicellular and multicellular organisms hinders progress in biology. Such a theory should address life cycles from ontogenesis to death. 
This theory would complement the theory of evolution that addresses phylogenesis, and would posit theoretical extensions to accepted physical principles and default states in order to grasp the living state of matter and define proper biological observables. Thus, we favour adopting the default state implicit in Darwin’s theory, namely, cell proliferation with variation plus motility, and a framing principle, namely, life phenomena manifest themselves as non-identical iterations of morphogenetic processes. From this perspective, organisms become a consequence of the inherent variability generated by proliferation, motility and self-organization. Morphogenesis would then be the result of the default state plus physical constraints, like gravity, and those present in living organisms, like muscular tension.", "title": "" }, { "docid": "1a1c9b8fa2b5fc3180bc1b504def5ea1", "text": "Wireless sensor networks can be deployed in any attended or unattended environments like environmental monitoring, agriculture, military, health care etc., where the sensor nodes forward the sensing data to the gateway node. As the sensor node has very limited battery power and cannot be recharged after deployment, it is very important to design a secure, effective and light weight user authentication and key agreement protocol for accessing the sensed data through the gateway node over insecure networks. Most recently, Turkanovic et al. proposed a light weight user authentication and key agreement protocol for accessing the services of the WSNs environment and claimed that the same protocol is efficient in terms of security and complexities than related existing protocols. In this paper, we have demonstrated several security weaknesses of the Turkanovic et al. protocol. Additionally, we have also illustrated that the authentication phase of the Turkanovic et al. is not efficient in terms of security parameters. In order to fix the above mentioned security pitfalls, we have primarily designed a novel architecture for the WSNs environment and basing upon which a proposed scheme has been presented for user authentication and key agreement scheme. The security validation of the proposed protocol has done by using BAN logic, which ensures that the protocol achieves mutual authentication and session key agreement property securely between the entities involved. Moreover, the proposed scheme has simulated using well popular AVISPA security tool, whose simulation results show that the protocol is SAFE under OFMC and CL-AtSe models. Besides, several security issues informally confirm that the proposed protocol is well protected in terms of relevant security attacks including the above mentioned security pitfalls. The proposed protocol not only resists the above mentioned security weaknesses, but also achieves complete security requirements including specially energy efficiency, user anonymity, mutual authentication and user-friendly password change phase. Performance comparison section ensures that the protocol is relatively efficient in terms of complexities. The security and performance analysis makes the system so efficient that the proposed protocol can be implemented in real-life application. © 2015 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "7c54cef80d345cdb10f56ca440f5fad9", "text": "SIR, Arndt–Gottron scleromyxoedema is a rare fibromucinous disorder regarded as a variant of the lichen myxoedematosus. 
The diagnostic criteria are a generalized papular and sclerodermoid eruption, a microscopic triad of mucin deposition, fibroblast proliferation and fibrosis, a monoclonal gammopathy (mostly IgG-k paraproteinaemia) and the absence of a thyroid disorder. This disease initially presents with sclerosis of the skin and clusters of small lichenoid papules with a predilection for the face, neck and the forearm. Progressively, the skin lesions can become more widespread and the induration of skin can result in a scleroderma-like condition with sclerodactyly and microstomia, reduced mobility and disability. Systemic involvement is common, e.g. upper gastrointestinal dysmotility, proximal myopathy, joint contractures, neurological complications such as psychic disturbances and encephalopathy, obstructive ⁄restrictive lung disease, as well as renal and cardiovascular involvement. Numerous treatment options have been described in the literature. These include corticosteroids, retinoids, thalidomide, extracorporeal photopheresis (ECP), psoralen plus ultraviolet A radiation, ciclosporin, cyclophosphamide, melphalan or autologous stem cell transplantation. In September 1999, a 48-year-old white female first noticed an erythematous induration with a lichenoid papular eruption on her forehead. Three months later the lesions became more widespread including her face (Fig. 1a), neck, shoulders, forearms (Fig. 2a) and legs. When the patient first presented in our department in June 2000, she had problems opening her mouth fully as well as clenching both hands or moving her wrist. The histological examination of the skin biopsy was highly characteristic of Arndt–Gottron scleromyxoedema. Full blood count, blood morphology, bone marrow biopsy, bone scintigraphy and thyroid function tests were normal. Serum immunoelectrophoresis revealed an IgG-k paraproteinaemia. Urinary Bence-Jones proteins were negative. No systemic involvement was disclosed. We initiated ECP therapy in August 2000, initially at 2-week intervals (later monthly) on two succeeding days. When there was no improvement after 3 months, we also administered cyclophosphamide (Endoxana ; Baxter Healthcare Ltd, Newbury, U.K.) at a daily dose of 100 mg with mesna 400 mg (Uromitexan ; Baxter) prophylaxis. The response to this therapy was rather moderate. In February 2003 the patient developed a change of personality and loss of orientation and was admitted to hospital. The extensive neurological, radiological and microbiological diagnostics were unremarkable at that time. A few hours later the patient had seizures and was put on artificial ventilation in an intensive care unit. The patient was comatose for several days. A repeated magnetic resonance imaging scan was still normal, but the cerebrospinal fluid tap showed a dysfunction of the blood–cerebrospinal fluid barrier. A bilateral loss of somatosensory evoked potentials was noticeable. The neurological symptoms were classified as a ‘dermatoneuro’ syndrome, a rare extracutaneous manifestation of scleromyxoedema. After initiation of treatment with methylprednisolone (Urbason ; Aventis, Frankfurt, Germany) the neurological situation normalized in the following 2 weeks. No further medical treatment was necessary. In April 2003 therapy options were re-evaluated and the patient was started and maintained on a 7-day course of melphalan 7.5 mg daily (Alkeran ; GlaxoSmithKline, Uxbridge, U.K.) in combination with prednisolone 40 mg daily (Decortin H ; Merck, Darmstadt, Germany) every 6 weeks. 
This treat(a)", "title": "" }, { "docid": "d37d6139ced4c85ff0cbc4cce018212b", "text": "We describe isone, a tool that facilitates the visual exploration of social networks. Social network analysis is a methodological approach in the social sciences using graph-theoretic concepts to describe, understand and explain social structure. The isone software is an attempt to integrate analysis and visualization of social networks and is intended to be used in research and teaching. While we are primarily focussing on users in the social sciences, several features provided in the tool will be useful in other fields as well. In contrast to more conventional mathematical software in the social sciences that aim at providing a comprehensive suite of analytical options, our emphasis is on complementing every option we provide with tailored means of graphical interaction. We attempt to make complicated types of analysis and data handling transparent, intuitive, and more readily accessible. User feedback indicates that many who usually regard data exploration and analysis complicated and unnerving enjoy the playful nature of visual interaction. Consequently, much of the tool is about graph drawing methods specifically adapted to facilitate visual data exploration. The origins of isone lie in an interdisciplinary cooperation with researchers from political science which resulted in innovative uses of graph drawing methods for social network visualization, and prototypical implementations thereof. With the growing demand for access to these methods, we started implementing an integrated tool for public use. It should be stressed, however, that isone remains a research platform and testbed for innovative methods, and is not intended to become", "title": "" }, { "docid": "742c0b15f6a466bfb4e5130b49f79e64", "text": "There has been much interest in unsupervised learning of hierarchical generative models such as deep belief networks (DBNs); however, scaling such models to full-sized, high-dimensional images remains a difficult problem. To address this problem, we present the convolutional deep belief network, a hierarchical generative model that scales to realistic image sizes. This model is translation-invariant and supports efficient bottom-up and top-down probabilistic inference. Key to our approach is probabilistic max-pooling, a novel technique that shrinks the representations of higher layers in a probabilistically sound way. Our experiments show that the algorithm learns useful high-level visual features, such as object parts, from unlabeled images of objects and natural scenes. We demonstrate excellent performance on several visual recognition tasks and show that our model can perform hierarchical (bottom-up and top-down) inference over full-sized images.", "title": "" }, { "docid": "c4aafcc0a98882de931713359e55a04a", "text": "We present a computer vision tool that analyses video from a CCTV system installed on fishing trawlers to monitor discarded fish catch. The system aims to support expert observers who review the footage and verify numbers, species and sizes of discarded fish. The operational environment presents a significant challenge for these tasks. Fish are processed below deck under fluorescent lights, they are randomly oriented and there are multiple occlusions. The scene is unstructured and complicated by the presence of fishermen processing the catch. We describe an approach to segmenting the scene and counting fish that exploits the N4-Fields algorithm. 
We performed extensive tests of the algorithm on a data set comprising 443 frames from 6 belts. Results indicate the relative count error (for individual fish) ranges from 2% to 16%. We believe this is the first system that is able to handle footage from operational trawlers.", "title": "" }, { "docid": "1e493440a61578c8c6ca8fbe63f475d6", "text": "3D object detection is an essential task in autonomous driving. Recent techniques excel with highly accurate detection rates, provided the 3D input data is obtained from precise but expensive LiDAR technology. Approaches based on cheaper monocular or stereo imagery data have, until now, resulted in drastically lower accuracies — a gap that is commonly attributed to poor image-based depth estimation. However, in this paper we argue that data representation (rather than its quality) accounts for the majority of the difference. Taking the inner workings of convolutional neural networks into consideration, we propose to convert imagebased depth maps to pseudo-LiDAR representations — essentially mimicking LiDAR signal. With this representation we can apply different existing LiDAR-based detection algorithms. On the popular KITTI benchmark, our approach achieves impressive improvements over the existing stateof-the-art in image-based performance — raising the detection accuracy of objects within 30m range from the previous state-of-the-art of 22% to an unprecedented 74%. At the time of submission our algorithm holds the highest entry on the KITTI 3D object detection leaderboard for stereo image based approaches.", "title": "" } ]
scidocsrr
a56f23de3827e0be9e6269cbd25ac03e
Wideband, Low-Profile Patch Array Antenna With Corporate Stacked Microstrip and Substrate Integrated Waveguide Feeding Structure
[ { "docid": "50bd58b07a2cf7bf51ff291b17988a2c", "text": "A wideband linearly polarized antenna element with complementary sources is proposed and exploited for array antennas. The element covers a bandwidth of 38.7% from 50 to 74 GHz with an average gain of 8.7 dBi. The four-way broad wall coupler is applied for the 2 <inline-formula> <tex-math notation=\"LaTeX\">$\\times $ </tex-math></inline-formula> 2 subarray, which suppresses the cross-polarization of a single element. Based on the designed 2 <inline-formula> <tex-math notation=\"LaTeX\">$ \\times $ </tex-math></inline-formula> 2 subarray, two larger arrays have been designed and measured. The <inline-formula> <tex-math notation=\"LaTeX\">$4 \\times 4$ </tex-math></inline-formula> array exhibits 26.7% bandwidth, fully covering the 57–71 GHz unlicensed band. The <inline-formula> <tex-math notation=\"LaTeX\">$8 \\times 8$ </tex-math></inline-formula> array antenna covers a bandwidth of 14.5 GHz (22.9%) from 56.1 to 70.6 GHz with a peak gain of 26.7 dBi, and the radiation efficiency is around 80% within the matching band. It is demonstrated that the proposed antenna element and arrays can be used for future 5G applications to cover the 22% bandwidth of the unlicensed band with high gain and low loss.", "title": "" } ]
[ { "docid": "45079629c4bc09cc8680b3d9ac325112", "text": "Power consumption is of utmost concern in sensor networks. Researchers have several ways of measuring the power consumption of a complete sensor network, but they are typically either impractical or inaccurate. To meet the need for practical and scalable measurement of power consumption of sensor networks, we have developed a cycle-accurate simulator, called COOJA/MSPsim, that enables live power estimation of systems running on MSP430 processors. This demonstration shows the ease of use and the power measurement accuracy of COOJA/MSPsim. The demo setup consists of a small sensor network and a laptop. Beside gathering software-based power measurements from the motes, the laptop runs COOJA/MSPsim to simulate the same network.We visualize the power consumption of both the simulated and the real sensor network, and show that the simulator produces matching results.", "title": "" }, { "docid": "678df42df19aa5a15ede86b4a19c49c4", "text": "This paper presents the fundamentals of Origami engineering and its application in nowadays as well as future industry. Several main cores of mathematical approaches such as HuzitaHatori axioms, Maekawa and Kawasaki’s theorems are introduced briefly. Meanwhile flaps and circle packing by Robert Lang is explained to make understood the underlying principles in designing crease pattern. Rigid origami and its corrugation patterns which are potentially applicable for creating transformable or temporary spaces is discussed to show the transition of origami from paper to thick material. Moreover, some innovative applications of origami such as eyeglass, origami stent and high tech origami based on mentioned theories and principles are showcased in section III; while some updated origami technology such as Vacuumatics, self-folding of polymer sheets and programmable matter folding which could greatlyenhance origami structureare demonstrated in Section IV to offer more insight in future origami. Keywords—Origami, origami application, origami engineering, origami technology, rigid origami.", "title": "" }, { "docid": "690544595e0fa2e5f1c40e3187598263", "text": "In this paper, a methodology is presented and employed for simulating the Internet of Things (IoT). The requirement for scalability, due to the possibly huge amount of involved sensors and devices, and the heterogeneous scenarios that might occur, impose resorting to sophisticated modeling and simulation techniques. In particular, multi-level simulation is regarded as a main framework that allows simulating large-scale IoT environments while keeping high levels of detail, when it is needed. We consider a use case based on the deployment of smart services in decentralized territories. A two level simulator is employed, which is based on a coarse agent-based, adaptive parallel and distributed simulation approach to model the general life of simulated entities. However, when needed a finer grained simulator (based on OMNeT++) is triggered on a restricted portion of the simulated area, which allows considering all issues concerned with wireless communications. Based on this use case, it is confirmed that the ad-hoc wireless networking technologies do represent a principle tool to deploy smart services over decentralized countrysides. 
Moreover, the performance evaluation confirms the viability of utilizing multi-level simulation for simulating large scale IoT environments.", "title": "" }, { "docid": "162823edcbd50579a1d386f88931d59d", "text": "Elevated liver enzymes are a common scenario encountered by physicians in clinical practice. For many physicians, however, evaluation of such a problem in patients presenting with no symptoms can be challenging. Evidence supporting a standardized approach to evaluation is lacking. Although alterations of liver enzymes could be a normal physiological phenomenon in certain cases, it may also reflect potential liver injury in others, necessitating its further assessment and management. In this article, we provide a guide to primary care clinicians to interpret abnormal elevation of liver enzymes in asymptomatic patients using a step-wise algorithm. Adopting a schematic approach that classifies enzyme alterations on the basis of pattern (hepatocellular, cholestatic and isolated hyperbilirubinemia), we review an approach to abnormal alteration of liver enzymes within each section, the most common causes of enzyme alteration, and suggest initial investigations.", "title": "" }, { "docid": "450aee5811484932e8542eb4f0eefa4d", "text": "Natural Language Generation systems in interactive settings often face a multitude of choices, given that the communicative effect of each utterance they generate depends crucially on the interplay between its physical circumstances, addressee and interaction history. This is particularly true in interactive and situated settings. In this paper we present a novel approach for situated Natural Language Generation in dialogue that is based on hierarchical reinforcement learning and learns the best utterance for a context by optimisation through trial and error. The model is trained from human–human corpus data and learns particularly to balance the trade-off between efficiency and detail in giving instructions: the user needs to be given sufficient information to execute their task, but without exceeding their cognitive load. We present results from simulation and a task-based human evaluation study comparing two different versions of hierarchical reinforcement learning: One operates using a hierarchy of policies with a large state space and local knowledge, and the other additionally shares knowledge across generation subtasks to enhance performance. Results show that sharing knowledge across subtasks achieves better performance than learning in isolation, leading to smoother and more successful interactions that are better perceived by human users.", "title": "" }, { "docid": "96344ccc2aac1a7e7fbab96c1355fa10", "text": "A highly sensitive field-effect sensor immune to environmental potential fluctuation is proposed. The sensor circuit consists of two sensors each with a charge sensing field effect transistor (FET) and an extended sensing gate (SG). By enlarging the sensing gate of an extended gate ISFET, a remarkable sensitivity of 130mV/pH is achieved, exceeding the conventional Nernst limit of 59mV/pH. The proposed differential sensing circuit consists of a pair of matching n-channel and p-channel ion sensitive sensors connected in parallel and biased at a matched transconductance bias point. Potential fluctuations in the electrolyte appear as common mode signal to the differential pair and are cancelled by the matched transistors. 
This novel differential measurement technique eliminates the need for a true reference electrode such as the bulky Ag/AgCl reference electrode and enables the use of the sensor for autonomous and implantable applications.", "title": "" }, { "docid": "8129b5aae31133afbb8a145d4ac131fc", "text": "Community health workers (CHWs) are promoted as a mechanism to increase community involvement in health promotion efforts, despite little consensus about the role and its effectiveness. This article reviews the databased literature on CHW effectiveness, which indicates preliminary support for CHWs in increasing access to care, particularly in underserved populations. There are a smaller number of studies documenting outcomes in the areas of increased health knowledge, improved health status outcomes, and behavioral changes, with inconclusive results. Although CHWs show some promise as an intervention, the role can be doomed by overly high expectations, lack of a clear focus, and lack of documentation. Further research is required with an emphasis on stronger study design, documentation of CHW activities, and carefully defined target populations.", "title": "" }, { "docid": "31404322fb03246ba2efe451191e29fa", "text": "OBJECTIVES\nThe aim of this study is to report an unusual form of penile cancer presentation associated with myiasis infestation, treatment options and outcomes.\n\n\nMATERIALS AND METHODS\nWe studied 10 patients with suspected malignant neoplasm of the penis associated with genital myiasis infestation. Diagnostic assessment was conducted through clinical history, physical examination, penile biopsy, larvae identification and computerized tomography scan of the chest, abdomen and pelvis. Clinical and pathological staging was done according to 2002 TNM classification system. Radical inguinal lymphadenectomy was conducted according to the primary penile tumor pathology and clinical lymph nodes status.\n\n\nRESULTS\nPatients age ranged from 41 to 77 years (mean=62.4). All patients presented squamous cell carcinoma of the penis in association with myiasis infestation caused by Psychoda albipennis. Tumor size ranged from 4cm to 12cm (mean=5.3). Circumcision was conducted in 1 (10%) patient, while penile partial penectomy was performed in 5 (50%). Total penectomy was conducted in 2 (20%) patients, while emasculation was the treatment option for 2 (20%). All patients underwent radical inguinal lymphadenectomy. Prophylactic lymphadenectomy was performed on 3 (30%) patients, therapeutic on 5 (50%), and palliative lymphadenectomy on 2 (20%) patients. Time elapsed from primary tumor treatment to radical inguinal lymphadenectomy was 2 to 6 weeks. The mean follow-up was 34.3 months.\n\n\nCONCLUSION\nThe occurrence of myiasis in the genitalia is more common in patients with precarious hygienic practices and low socio-economic level. The treatment option varied according to the primary tumor presentation and clinical lymph node status.", "title": "" }, { "docid": "26bd615c16b99e84b787b573d6028878", "text": "Extendible hashing is a new access technique, in which the user is guaranteed no more than two page faults to locate the data associated with a given unique identifier, or key. Unlike conventional hashing, extendible hashing has a dynamic structure that grows and shrinks gracefully as the database grows and shrinks. This approach simultaneously solves the problem of making hash tables that are extendible and of making radix search trees that are balanced. 
We study, by analysis and simulation, the performance of extendible hashing. The results indicate that extendible hashing provides an attractive alternative to other access methods, such as balanced trees.", "title": "" }, { "docid": "c4e8dbd875e35e5bd9bd55ca24cdbfc2", "text": "In this paper, we introduce a new framework for recognizing textual entailment which depends on extraction of the set of publiclyheld beliefs – known as discourse commitments – that can be ascribed to the author of a text or a hypothesis. Once a set of commitments have been extracted from a t-h pair, the task of recognizing textual entailment is reduced to the identification of the commitments from a t which support the inference of the h. Promising results were achieved: our system correctly identified more than 80% of examples from the RTE-3 Test Set correctly, without the need for additional sources of training data or other web-based resources.", "title": "" }, { "docid": "e4069b8312b8a273743b31b12b1dfbae", "text": "Automatic keyphrase extraction techniques play an important role for many tasks including indexing, categorizing, summarizing, and searching. In this paper, we develop and evaluate an automatic keyphrase extraction system for scientific documents. Compared with previous work, our system concentrates on two important issues: (1) more precise location for potential keyphrases: a new candidate phrase generation method is proposed based on the core word expansion algorithm, which can reduce the size of the candidate set by about 75% without increasing the computational complexity; (2) overlap elimination for the output list: when a phrase and its sub-phrases coexist as candidates, an inverse document frequency feature is introduced for selecting the proper granularity. Additional new features are added for phrase weighting. Experiments based on real-world datasets were carried out to evaluate the proposed system. The results show the efficiency and effectiveness of the refined candidate set and demonstrate that the new features improve the accuracy of the system. The overall performance of our system compares favorably with other state-of-the-art keyphrase extraction systems.", "title": "" }, { "docid": "122a27336317372a0d84ee353bb94a4b", "text": "Recently, many advanced machine learning approaches have been proposed for coreference resolution; however, all of the discriminatively-trained models reason over mentions rather than entities. That is, they do not explicitly contain variables indicating the “canonical” values for each attribute of an entity (e.g., name, venue, title, etc.). This canonicalization step is typically implemented as a post-processing routine to coreference resolution prior to adding the extracted entity to a database. In this paper, we propose a discriminatively-trained model that jointly performs coreference resolution and canonicalization, enabling features over hypothesized entities. We validate our approach on two different coreference problems: newswire anaphora resolution and research paper citation matching, demonstrating improvements in both tasks and achieving an error reduction of up to 62% when compared to a method that reasons about mentions only.", "title": "" }, { "docid": "d97b2b028fbfe0658e841954958aac06", "text": "Videogame control interfaces continue to evolve beyond their traditional roots, with devices encouraging more natural forms of interaction growing in number and pervasiveness. Yet little is known about their true potential for intuitive use. 
This paper proposes methods to leverage existing intuitive interaction theory for games research, specifically by examining different types of naturally mapped control interfaces for videogames using new measures for previous player experience. Three commercial control devices for a racing game were categorised using an existing typology, according to how the interface maps physical control inputs with the virtual gameplay actions. The devices were then used in a within-groups (n=64) experimental design aimed at measuring differences in intuitive use outcomes. Results from mixed design ANOVA are discussed, along with implications for the field.", "title": "" }, { "docid": "99d9dcef0e4441ed959129a2a705c88e", "text": "Wikipedia has grown to a huge, multi-lingual source of encyclopedic knowledge. Apart from textual content, a large and everincreasing number of articles feature so-called infoboxes, which provide factual information about the articles’ subjects. As the different language versions evolve independently, they provide different information on the same topics. Correspondences between infobox attributes in different language editions can be leveraged for several use cases, such as automatic detection and resolution of inconsistencies in infobox data across language versions, or the automatic augmentation of infoboxes in one language with data from other language versions. We present an instance-based schema matching technique that exploits information overlap in infoboxes across different language editions. As a prerequisite we present a graph-based approach to identify articles in different languages representing the same real-world entity using (and correcting) the interlanguage links in Wikipedia. To account for the untyped nature of infobox schemas, we present a robust similarity measure that can reliably quantify the similarity of strings with mixed types of data. The qualitative evaluation on the basis of manually labeled attribute correspondences between infoboxes in four of the largest Wikipedia editions demonstrates the effectiveness of the proposed approach. 1. Entity and Attribute Matching across Wikipedia Languages Wikipedia is a well-known public encyclopedia. While most of the information contained in Wikipedia is in textual form, the so-called infoboxes provide semi-structured, factual information. They are displayed as tables in many Wikipedia articles and state basic facts about the subject. There are different templates for infoboxes, each targeting a specific category of articles and providing fields for properties that are relevant for the respective subject type. For example, in the English Wikipedia, there is a class of infoboxes about companies, one to describe the fundamental facts about countries (such as their capital and population), one for musical artists, etc. However, each of the currently 281 language versions1 defines and maintains its own set of infobox classes with their own set of properties, as well as providing sometimes different values for corresponding attributes. Figure 1 shows extracts of the English and German infoboxes for the city of Berlin. The arrows indicate matches between properties. It is already apparent that matching purely based on property names is futile: The terms Population density and Bevölkerungsdichte or Governing parties and Reg. Parteien have no textual similarity. However, their property values are more revealing: <3,857.6/km2> and <3.875 Einw. 
je km2> or <SPD/Die Linke> and <SPD und Die Linke> have a high textual similarity, respectively. Email addresses: daniel.rinser@alumni.hpi.uni-potsdam.de (Daniel Rinser), dustin.lange@hpi.uni-potsdam.de (Dustin Lange), naumann@hpi.uni-potsdam.de (Felix Naumann) 1as of March 2011 Our overall goal is to automatically find a mapping between attributes of infobox templates across different language versions. Such a mapping can be valuable for several different use cases: First, it can be used to increase the information quality and quantity in Wikipedia infoboxes, or at least help the Wikipedia communities to do so. Inconsistencies among the data provided by different editions for corresponding attributes could be detected automatically. For example, the infobox in the English article about Germany claims that the population is 81,799,600, while the German article specifies a value of 81,768,000 for the same country. Detecting such conflicts can help the Wikipedia communities to increase consistency and information quality across language versions. Further, the detected inconsistencies could be resolved automatically by fusing the data in infoboxes, as proposed by [1]. Finally, the coverage of information in infoboxes could be increased significantly by completing missing attribute values in one Wikipedia edition with data found in other editions. An infobox template does not describe a strict schema, so that we need to collect the infobox template attributes from the template instances. For the purpose of this paper, an infobox template is determined by the set of attributes that are mentioned in any article that reference the template. The task of matching attributes of corresponding infoboxes across language versions is a specific application of schema matching. Automatic schema matching is a highly researched topic and numerous different approaches have been developed for this task as surveyed in [2] and [3]. Among these, schema-level matchers exploit attribute labels, schema constraints, and structural similarities of the schemas. However, in the setting of Wikipedia infoboxes these Preprint submitted to Information Systems October 19, 2012 Figure 1: A mapping between the English and German infoboxes for Berlin techniques are not useful, because infobox definitions only describe a rather loose list of supported properties, as opposed to a strict relational or XML schema. Attribute names in infoboxes are not always sound, often cryptic or abbreviated, and the exact semantics of the attributes are not always clear from their names alone. Moreover, due to our multi-lingual scenario, attributes are labeled in different natural languages. This latter problem might be tackled by employing bilingual dictionaries, if the previously mentioned issues were solved. Due to the flat nature of infoboxes and their lack of constraints or types, other constraint-based matching approaches must fail. On the other hand, there are instance-based matching approaches, which leverage instance data of multiple data sources. Here, the basic assumption is that similarity of the instances of the attributes reflects the similarity of the attributes. To assess this similarity, instance-based approaches usually analyze the attributes of each schema individually, collecting information about value patterns and ranges, amongst others, such as in [4]. A different, duplicate-based approach exploits information overlap across data sources [5]. 
The idea there is to find two representations of same real-world objects (duplicates) and then suggest mappings between attributes that have the same or similar values. This approach has one important requirement: The data sources need to share a sufficient amount of common instances (or tuples, in a relational setting), i.e., instances describing the same real-world entity. Furthermore, the duplicates either have to be known in advance or have to be discovered despite a lack of knowledge of corresponding attributes. The approach presented in this article is based on such duplicate-based matching. Our approach consists of three steps: Entity matching, template matching, and attribute matching. The process is visualized in Fig. 2. (1) Entity matching: First, we find articles in different language versions that describe the same real-world entity. In particular, we make use of the crosslanguage links that are present for most Wikipedia articles and provide links between same entities across different language versions. We present a graph-based approach to resolve conflicts in the linking information. (2) Template matching: We determine a cross-lingual mapping between infobox templates by analyzing template co-occurrences in the language versions. (3) Attribute matching: The infobox attribute values of the corresponding articles are compared to identify matching attributes across the language versions, assuming that the values of corresponding attributes are highly similar for the majority of article pairs. As a first step we analyze the quality of Wikipedia’s interlanguage links in Sec. 2. We show how to use those links to create clusters of semantically equivalent entities with only one entity from each language in Sec. 3. This entity matching approach is evaluated in Sec. 4. In Sec. 5, we show how a crosslingual mapping between infobox templates can be established. The infobox attribute matching approach is described in Sec. 6 and in turn evaluated in Sec. 7. Related work in the areas of ILLs, concept identification, and infobox attribute matching is discussed in Sec. 8. Finally, Sec. 9 draws conclusions and discusses future work. 2. Interlanguage Links Our basic assumption is that there is a considerable amount of information overlap across the different Wikipedia language editions. Our infobox matching approach presented later requires mappings between articles in different language editions", "title": "" }, { "docid": "a1b5821ec18904ad805c57e6b478ef92", "text": "To extract English name mentions, we apply a linear-chain CRFs model trained from ACE 20032005 corpora (Li et al., 2012a). For Chinese and Spanish, we use Stanford name tagger (Finkel et al., 2005). We also encode several regular expression based rules to extract poster name mentions in discussion forum posts. In this year’s task, person nominal mentions extraction is added. There are two major challenges: (1) Only person nominal mentions referring to specific, individual real-world entities need to be extracted. Therefore, a system should be able to distinguish specific and generic person nominal mentions; (2) within-document coreference resolution should be applied to clustering person nominial and name mentions. We apply heuristic rules to try to solve these two challenges: (1) We consider person nominal mentions that appear after indefinite articles (e.g., a/an) or conditional conjunctions (e.g., if ) as generic. The person nomnial mention extraction F1 score of this approach is around 46% for English training data. 
(2) For coreference resolution, if the closest mention of a person nominal mention is a name, then we consider they are coreferential. The accuracy of this approach is 67% using perfect mentions in English training data.", "title": "" }, { "docid": "8ea17804db874a0434bd61c55bc83aab", "text": "Some recent work in the field of Genetic Programming (GP) has been concerned with finding optimum representations for evolvable and efficient computer programs. In this paper, I describe a new GP system in which target programs run on a stack-based virtual machine. The system is shown to have certain advantages in terms of efficiency and simplicity of implementation, and for certain classes of problems, its effectiveness is shown to be comparable or superior to current methods.", "title": "" }, { "docid": "61cd88d56bcae85c12dde4c2920af2ec", "text": "“Walk east on Flinders St/State Route 30 towards Market St; Turn right onto St Kilda Rd/Swanston St” vs. “Walk east on Flinders St/State Route 30 towards Market St; Turn right onto St Kilda Rd/Swanston St after Flinders Street Station, a yellow building with a green dome.” T1: <Flinders Street Station, front, Federation Square> T2: <Flinders Street Station, color, yellow> T3: <Flinders Street Station, has, green dome> Sent: Flinders Street Station is a yellow building with a green dome roof located in front of Federation Square", "title": "" }, { "docid": "0b3291e5ddfdd51a75340b195b7ffbfe", "text": "Œe Knowledge graph (KG) uses the triples to describe the facts in the real world. It has been widely used in intelligent analysis and applications. However, possible noises and conƒicts are inevitably introduced in the process of constructing. And the KG based tasks or applications assume that the knowledge in the KG is completely correct and inevitably bring about potential deviations. In this paper, we establish a knowledge graph triple trustworthiness measurement model that quantify their semantic correctness and the true degree of the facts expressed. Œe model is a crisscrossing neural network structure. It synthesizes the internal semantic information in the triples and the global inference information of the KG to achieve the trustworthiness measurement and fusion in the three levels of entity level, relationship level, and KG global level. We analyzed the validity of the model output con€dence values, and conducted experiments in the real-world dataset FB15K (from Freebase) for the knowledge graph error detection task. Œe experimental results showed that compared with other models, our model achieved signi€cant and consistent improvements.", "title": "" }, { "docid": "a11b39c895f7a89b7d2df29126671057", "text": "A typical NURBS surface model has a large percentage of superfluous control points that significantly interfere with the design process. This paper presents an algorithm for eliminating such superfluous control points, producing a T-spline. The algorithm can remove substantially more control points than competing methods such as B-spline wavelet decomposition. The paper also presents a new T-spline local refinement algorithm and answers two fundamental open questions on T-spline theory.", "title": "" }, { "docid": "4b546f3bc34237d31c862576ecf63f9a", "text": "Optimizing the internal supply chain for direct or production goods was a major element during the implementation of enterprise resource planning systems (ERP) which has taken place since the late 1980s. 
However, supply chains to the suppliers of indirect materials were not usually included due to low transaction volumes, low product values and low strategic importance of these goods. With the advent of the Internet, systems for streamlining indirect goods supply chains emerged and were adopted by many companies. In view of the paperprone processes in many companies, the implementation of these electronic procurement systems led to substantial improvement potentials. This research reports the quantitative and qualitative results of a benchmarking study which explores the use of the Internet in procurement (eProcurement). Among the major goals are to obtain more insight on how European and North American companies used and introduced eProcurement solutions as well as how these systems enhanced the procurement function. The analysis presents a heterogeneous picture and shows that all analyzed solutions emphasize different parts of the procurement and coordination process. Based on interviews and case studies the research proposes an initial set of generalized success factors which may improve future implementations and stimulate further success factor research.", "title": "" } ]
scidocsrr
bb96da6f83753746b0a0a7f7b80623b1
A computer vision assisted system for autonomous forklift vehicles in real factory environment
[ { "docid": "dbd7b707910d2b7ba0a3c4574a01bdaa", "text": "Visual recognition for object grasping is a well-known challenge for robot automation in industrial applications. A typical example is pallet recognition in industrial environment for pick-and-place automated process. The aim of vision and reasoning algorithms is to help robots in choosing the best pallet hole locations. This work proposes an application-based approach, which fulfils all requirements, dealing with every kind of occlusions and light situations possible. Even some ”meaning noise” (or ”meaning misunderstanding”) is considered. A pallet model, with limited degrees of freedom, is described and, starting from it, a complete approach to pallet recognition is outlined. In the model we define both virtual and real corners, that are geometrical object properties computed by different image analysis operators. Real corners are perceived by processing brightness information directly from the image, while virtual corners are inferred at a higher level of abstraction. A final reasoning stage selects the best solution fitting the model. Experimental results and performance are reported in order to demonstrate the suitability of the proposed approach.", "title": "" } ]
[ { "docid": "1f02f9dae964a7e326724faa79f5ddc3", "text": "The purpose of this review was to examine published research on small-group development done in the last ten years that would constitute an empirical test of Tuckman’s (1965) hypothesis that groups go through these stages of “forming,” “storming,” “norming,” and “performing.” Of the twenty-two studies reviewed, only one set out to directly test this hypothesis, although many of the others could be related to it. Following a review of these studies, a fifth stage, “adjourning.” was added to the hypothesis, and more empirical work was recommended.", "title": "" }, { "docid": "9c3050cca4deeb2d94ae5cff883a2d68", "text": "High speed, low latency obstacle avoidance is essential for enabling Micro Aerial Vehicles (MAVs) to function in cluttered and dynamic environments. While other systems exist that do high-level mapping and 3D path planning for obstacle avoidance, most of these systems require high-powered CPUs on-board or off-board control from a ground station. We present a novel entirely on-board approach, leveraging a light-weight low power stereo vision system on FPGA. Our approach runs at a frame rate of 60 frames a second on VGA-sized images and minimizes latency between image acquisition and performing reactive maneuvers, allowing MAVs to fly more safely and robustly in complex environments. We also suggest our system as a light-weight safety layer for systems undertaking more complex tasks, like mapping the environment. Finally, we show our algorithm implemented on a lightweight, very computationally constrained platform, and demonstrate obstacle avoidance in a variety of environments.", "title": "" }, { "docid": "d43dc521d3f0f17ccd4840d6081dcbfe", "text": "In Vehicular Ad hoc NETworks (VANETs), authentication is a crucial security service for both inter-vehicle and vehicle-roadside communications. On the other hand, vehicles have to be protected from the misuse of their private data and the attacks on their privacy, as well as to be capable of being investigated for accidents or liabilities from non-repudiation. In this paper, we investigate the authentication issues with privacy preservation and non-repudiation in VANETs. We propose a novel framework with preservation and repudiation (ACPN) for VANETs. In ACPN, we introduce the public-key cryptography (PKC) to the pseudonym generation, which ensures legitimate third parties to achieve the non-repudiation of vehicles by obtaining vehicles' real IDs. The self-generated PKCbased pseudonyms are also used as identifiers instead of vehicle IDs for the privacy-preserving authentication, while the update of the pseudonyms depends on vehicular demands. The existing ID-based signature (IBS) scheme and the ID-based online/offline signature (IBOOS) scheme are used, for the authentication between the road side units (RSUs) and vehicles, and the authentication among vehicles, respectively. Authentication, privacy preservation, non-repudiation and other objectives of ACPN have been analyzed for VANETs. Typical performance evaluation has been conducted using efficient IBS and IBOOS schemes. We show that the proposed ACPN is feasible and adequate to be used efficiently in the VANET environment.", "title": "" }, { "docid": "8ccb5aeb084c9a6223dc01fa296d908e", "text": "Effective chronic disease management is essential to improve positive health outcomes, and incentive strategies are useful in promoting self-care with longevity. 
Gamification, applied with mHealth (mobile health) applications, has the potential to better facilitate patient self-management. This review article addresses a knowledge gap around the effective use of gamification design principles, or mechanics, in developing mHealth applications. Badges, leaderboards, points and levels, challenges and quests, social engagement loops, and onboarding are mechanics that comprise gamification. These mechanics are defined and explained from a design and development perspective. Health and fitness applications with gamification mechanics include: bant which uses points, levels, and social engagement, mySugr which uses challenges and quests, RunKeeper which uses leaderboards as well as social engagement loops and onboarding, Fitocracy which uses badges, and Mango Health, which uses points and levels. Specific design considerations are explored, an example of the efficacy of a gamified mHealth implementation in facilitating improved self-management is provided, limitations to this work are discussed, a link between the principles of gaming and gamification in health and wellness technologies is provided, and suggestions for future work are made. We conclude that gamification could be leveraged in developing applications with the potential to better facilitate self-management in persons with chronic conditions.", "title": "" }, { "docid": "00d44e09b62be682b902b01a3f3a56c2", "text": "A novel approach is presented to efficiently render local subsurface scattering effects. We introduce an importance sampling scheme for a practical subsurface scattering model. It leads to a simple and efficient rendering algorithm, which operates in image-space, and which is even amenable for implementation on graphics hardware. We demonstrate the applicability of our technique to the problem of skin rendering, for which the subsurface transport of light typically remains local. Our implementation shows that plausible images can be rendered interactively using hardware acceleration.", "title": "" }, { "docid": "ade9860157680b2ca6820042f0cda302", "text": "This chapter has two main objectives: to review influential ideas and findings in the literature and to outline the organization and content of the volume. The first part of the chapter lays a conceptual and empirical foundation for other chapters in the volume. Specifically, the chapter defines and distinguishes the key concepts of prejudice, stereotypes, and discrimination, highlighting how bias can occur at individual, institutional, and cultural levels. We also review different theoretical perspectives on these phenomena, including individual differences, social cognition, functional relations between groups, and identity concerns. We offer a broad overview of the field, charting how this area has developed over previous decades and identify emerging trends and future directions. The second part of the chapter focuses specifically on the coverage of the area in the present volume. It explains the organization of the book and presents a brief synopsis of the chapters in the volume. Throughout psychology’s history, researchers have evinced strong interest in understanding prejudice, stereotyping, and discrimination (Brewer & Brown, 1998; Dovidio, 2001; Duckitt, 1992; Fiske, 1998), as well as the phenomenon of intergroup bias more generally (Hewstone, Rubin, & Willis, 2002). 
Intergroup bias generally refers to the systematic tendency to evaluate one’s own membership group (the ingroup) or its members more favorably than a non-membership group (the outgroup) or its members. These topics have a long history in the disciplines of anthropology and sociology (e.g., Sumner, 1906). However, social psychologists, building on the solid foundations of Gordon Allport’s (1954) masterly volume, The Nature of Prejudice, have developed a systematic and more nuanced analysis of bias and its associated phenomena. Interest in prejudice, stereotyping, and discrimination is currently shared by allied disciplines such as sociology and political science, and emerging disciplines such as neuroscience. The practical implications of this large body of research are widely recognized in the law (Baldus, Woodworth, & Pulaski, 1990; Vidmar, 2003), medicine (Institute of Medicine, 2003), business (e.g., Brief, Dietz, Cohen, et al., 2000), the media, and education (e.g., Ben-Ari & Rich, 1997; Hagendoorn &", "title": "" }, { "docid": "a89cd3351d6a427d18a461893949e0d7", "text": "Touch is a powerful vehicle for communication between humans. The way we touch (how) embraces and mediates certain emotions such as anger, joy, fear, or love. While this phenomenon is well explored for human interaction, HCI research is only starting to uncover the fine granularity of sensory stimulation and responses in relation to certain emotions. Within this paper we present the findings from a study exploring the communication of emotions through a haptic system that uses tactile stimulation in mid-air. Here, haptic descriptions for specific emotions (e.g., happy, sad, excited, afraid) were created by one group of users to then be reviewed and validated by two other groups of users. We demonstrate the non-arbitrary mapping between emotions and haptic descriptions across three groups. This points to the huge potential for mediating emotions through mid-air haptics. We discuss specific design implications based on the spatial, directional, and haptic parameters of the created haptic descriptions and illustrate their design potential for HCI based on two design ideas.", "title": "" }, { "docid": "03e267aeeef5c59aab348775d264afce", "text": "Visual relations, such as person ride bike and bike next to car, offer a comprehensive scene understanding of an image, and have already shown their great utility in connecting computer vision and natural language. However, due to the challenging combinatorial complexity of modeling subject-predicate-object relation triplets, very little work has been done to localize and predict visual relations. Inspired by the recent advances in relational representation learning of knowledge bases and convolutional object detection networks, we propose a Visual Translation Embedding network (VTransE) for visual relation detection. VTransE places objects in a low-dimensional relation space where a relation can be modeled as a simple vector translation, i.e., subject + predicate ≈ object. We propose a novel feature extraction layer that enables object-relation knowledge transfer in a fully-convolutional fashion that supports training and inference in a single forward/backward pass. To the best of our knowledge, VTransE is the first end-to-end relation detection network. We demonstrate the effectiveness of VTransE over other state-of-the-art methods on two large-scale datasets: Visual Relationship and Visual Genome.
Note that even though VTransE is a purely visual model, it is still competitive to the Lu’s multi-modal model with language priors [27].", "title": "" }, { "docid": "c678ea5e9bc8852ec80a8315a004c7f0", "text": "Educators, researchers, and policy makers have advocated student involvement for some time as an essential aspect of meaningful learning. In the past twenty years engineering educators have implemented several means of better engaging their undergraduate students, including active and cooperative learning, learning communities, service learning, cooperative education, inquiry and problem-based learning, and team projects. This paper focuses on classroom-based pedagogies of engagement, particularly cooperative and problem-based learning. It includes a brief history, theoretical roots, research support, summary of practices, and suggestions for redesigning engineering classes and programs to include more student engagement. The paper also lays out the research ahead for advancing pedagogies aimed at more fully enhancing students’ involvement in their learning.", "title": "" }, { "docid": "ec4638bad4caf17de83ac3557254c4bf", "text": "Explaining policies of Markov Decision Processes (MDPs) is complicated due to their probabilistic and sequential nature. We present a technique to explain policies for factored MDP by populating a set of domain-independent templates. We also present a mechanism to determine a minimal set of templates that, viewed together, completely justify the policy. Our explanations can be generated automatically at run-time with no additional effort required from the MDP designer. We demonstrate our technique using the problems of advising undergraduate students in their course selection and assisting people with dementia in completing the task of handwashing. We also evaluate our explanations for course-advising through a user study involving students.", "title": "" }, { "docid": "fe3a3ffab9a98cf8f4f71c666383780c", "text": "We present a new dataset and model for textual entailment, derived from treating multiple-choice question-answering as an entailment problem. SCITAIL is the first entailment set that is created solely from natural sentences that already exist independently “in the wild” rather than sentences authored specifically for the entailment task. Different from existing entailment datasets, we create hypotheses from science questions and the corresponding answer candidates, and premises from relevant web sentences retrieved from a large corpus. These sentences are often linguistically challenging. This, combined with the high lexical similarity of premise and hypothesis for both entailed and non-entailed pairs, makes this new entailment task particularly difficult. The resulting challenge is evidenced by state-of-the-art textual entailment systems achieving mediocre performance on SCITAIL, especially in comparison to a simple majority class baseline. As a step forward, we demonstrate that one can improve accuracy on SCITAIL by 5% using a new neural model that exploits linguistic structure.", "title": "" }, { "docid": "369746e53baad6fef5df42935fb5c516", "text": "SWOT analysis is an established method for assisting the formulation of strategy. An application to strategy formulation and its incorporation into the strategic development process at the University of Warwick is described.
The application links SWOT analysis to resource-based planning, illustrates it as an iterative rather than a linear process and embeds it within the overall planning process. Lessons are drawn both for the University and for the strategy formulation process itself. 2003 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "f35007fdca9c35b4c243cb58bd6ede7a", "text": "Photovoltaic Thermal Collector (PVT) is a hybrid generator which converts solar radiation into useful electric and thermal energies simultaneously. This paper gathers all PVT sub-models in order to form a unique dynamic model that reveals PVT parameters interactions. As PVT is a multi-input/output/output system, a state space model based on energy balance equations is developed in order to analyze and assess the parameters behaviors and correlations of PVT constituents. The model simulation is performed using LabVIEW Software. The simulation shows the impact of the fluid flow rate variation on the collector efficiencies (thermal and electrical).", "title": "" }, { "docid": "634c58784820e70145b417f51414fc96", "text": "A considerable number of studies have been undertaken on using smart card data to analyse urban mobility. Most of these studies aim to identify recurrent passenger habits, reveal mobility patterns, reconstruct and predict passenger flows, etc. Forecasting mobility demand is a central problem for public transport authorities and operators alike. It is the first step to efficient allocation and optimisation of available resources. This paper explores an innovative approach to forecasting dynamic Origin-Destination (OD) matrices in a subway network using long Short-term Memory (LSTM) recurrent neural networks. A comparison with traditional approaches, such as calendar methodology or Vector Autoregression is conducted on a real smart card dataset issued from the public transport network of Rennes Métropole, France. The obtained results show that reliable short-term prediction (over a 15 minutes time horizon) of OD pairs can be achieved with the proposed approach. We also experiment with the effect of taking into account additional data about OD matrices of nearby transport systems (buses in this case) on the prediction accuracy.", "title": "" }, { "docid": "1f27caaaeae8c82db6a677f66f2dee74", "text": "State of the art visual SLAM systems have recently been presented which are capable of accurate, large-scale and real-time performance, but most of these require stereo vision. Important application areas in robotics and beyond open up if similar performance can be demonstrated using monocular vision, since a single camera will always be cheaper, more compact and easier to calibrate than a multi-camera rig. With high quality estimation, a single camera moving through a static scene of course effectively provides its own stereo geometry via frames distributed over time. However, a classic issue with monocular visual SLAM is that due to the purely projective nature of a single camera, motion estimates and map structure can only be recovered up to scale. Without the known inter-camera distance of a stereo rig to serve as an anchor, the scale of locally constructed map portions and the corresponding motion estimates is therefore liable to drift over time. In this paper we describe a new near real-time visual SLAM system which adopts the continuous keyframe optimisation approach of the best current stereo systems, but accounts for the additional challenges presented by monocular input. 
In particular, we present a new pose-graph optimisation technique which allows for the efficient correction of rotation, translation and scale drift at loop closures. Especially, we describe the Lie group of similarity transformations and its relation to the corresponding Lie algebra. We also present in detail the system’s new image processing front-end which is able accurately to track hundreds of features per frame, and a filter-based approach for feature initialisation within keyframe-based SLAM. Our approach is proven via large-scale simulation and real-world experiments where a camera completes large looped trajectories.", "title": "" }, { "docid": "71c31f41d116a51786a4e8ded2c5fb87", "text": "Targeting CTLA-4 represents a new type of immunotherapeutic approach, namely immune checkpoint inhibition. Blockade of CTLA-4 by ipilimumab was the first strategy to achieve a significant clinical benefit for late-stage melanoma patients in two phase 3 trials. These results fueled the notion of immunotherapy being the breakthrough strategy for oncology in 2013. Subsequently, many trials have been set up to test various immune checkpoint modulators in malignancies, not only in melanoma. In this review, recent new ideas about the mechanism of action of CTLA-4 blockade, its current and future therapeutic use, and the intensive search for biomarkers for response will be discussed. Immune checkpoint blockade, targeting CTLA-4 and/or PD-1/PD-L1, is currently the most promising systemic therapeutic approach to achieve long-lasting responses or even cure in many types of cancer, not just in patients with melanoma.", "title": "" }, { "docid": "176dc97bd2ce3c1fd7d3a8d6913cff70", "text": "Packet broadcasting is a form of data communications architecture which can combine the features of packet switching with those of broadcast channels for data communication networks. Much of the basic theory of packet broadcasting has been presented as a byproduct in a sequence of papers with a distinctly practical emphasis. In this paper we provide a unified presentation of packet broadcasting theory. In Section I1 we introduce the theory of packet broadcasting data networks. In Section I11 we provide some theoretical results dealing with the performance of a packet broadcasting network when the users of the network have a variety of data rates. In Section IV we deal with packet broadcasting networks distributed in space, and in Section V we derive some properties of power-limited packet broadcasting channels,showing that the throughput of such channels can approach that of equivalent point-to-point channels.", "title": "" }, { "docid": "8d350db000f7a2b1481b9cad6ce318f1", "text": "Purpose – The purpose of this research paper is to offer a solution to differentiate supply chain planning for products with different demand features and in different life-cycle phases. Design/methodology/approach – A normative framework for selecting a planning approach was developed based on a literature review of supply chain differentiation and supply chain planning. Explorative mini-cases from three companies – Vaisala, Mattel, Inc. and Zara – were investigated to identify the features of their innovative planning solutions. The selection framework was applied to the case company’s new business unit dealing with a product portfolio of highly innovative products as well as commodity items. Findings – The need for planning differentiation is essential for companies with large product portfolios operating in volatile markets. 
The complexity of market, channel and supply networks makes supply chain planning more intricate. The case company provides an example of using the framework for rough segmentation to differentiate planning. Research limitations/implications – The paper widens Fisher’s supply chain selection framework to consider the aspects of planning. Practical implications – Despite substantial resources being used, planning results are often not reliable or consistent enough to ensure cost efficiency and adequate customer service. Therefore there is a need for management to critically consider current planning solutions. Originality/value – The procedure outlined in this paper is a first illustrative example of the type of processes needed to monitor and select the right planning approach.", "title": "" }, { "docid": "4b013b69e174914aafc09100e182dd14", "text": "The network of patents connected by citations is an evolving graph, which provides a representation of the innovation process. A patent citing another implies that the cited patent reflects a piece of previously existing knowledge that the citing patent builds upon. A methodology presented here (1) identifies actual clusters of patents: i.e., technological branches, and (2) gives predictions about the temporal changes of the structure of the clusters. A predictor, called the citation vector, is defined for characterizing technological development to show how a patent cited by other patents belongs to various industrial fields. The clustering technique adopted is able to detect the new emerging recombinations, and predicts emerging new technology clusters. The predictive ability of our new method is illustrated on the example of USPTO subcategory 11, Agriculture, Food, Textiles. A cluster of patents is determined based on citation data up to 1991, which shows significant overlap of the class 442 formed at the beginning of 1997. These new tools of predictive analytics could support policy decision making processes in science and technology, and help formulate recommendations for action.", "title": "" }, { "docid": "ef8a61d3ff3aad461c57fe893e0b5bb6", "text": "In this paper, we propose an underwater wireless sensor network (UWSN) named SOUNET where sensor nodes form and maintain a tree-topological network for data gathering in a self-organized manner. After network topology discovery via packet flooding, the sensor nodes consistently update their parent node to ensure the best connectivity by referring to the timevarying neighbor tables. Such a persistent and self-adaptive method leads to high network connectivity without any centralized control, even when sensor nodes are added or unexpectedly lost. Furthermore, malfunctions that frequently happen in self-organized networks such as node isolation and closed loop are resolved in a simple way. Simulation results show that SOUNET outperforms other conventional schemes in terms of network connectivity, packet delivery ratio (PDR), and energy consumption throughout the network. In addition, we performed an experiment at the Gyeongcheon Lake in Korea using commercial underwater modems to verify that SOUNET works well in a real environment.", "title": "" } ]
scidocsrr
19e407b8d995f901f24f776c36cc6bf9
Image quality quantification for fingerprints using quality-impairment assessment
[ { "docid": "c1b79f29ce23b2d0ba97928831302e18", "text": "Quality assessment of biometric fingerprint images is necessary to ensure high biometric performance in biometric recognition systems. We relate the quality of a fingerprint sample to the biometric performance to ensure an objective and performance oriented benchmark. The proposed quality metric is based on Gabor filter responses and is evaluated against eight contemporary quality estimation methods on four datasets using sample utility derived from the separation of genuine and imposter distributions as benchmark. The proposed metric shows performance and consistency approaching that of the composite NFIQ quality assessment algorithm and is thus a candidate for inclusion in a feature vector introducing the NFIQ 2.0 metric.", "title": "" }, { "docid": "1a9be0a664da314c143ca430bd6f4502", "text": "Fingerprint image quality is an important factor in the performance of Automatic Fingerprint Identification Systems (AFIS). It is used to evaluate the system performance, assess enrollment acceptability, and evaluate fingerprint sensors. This paper presents a novel methodology for fingerprint image quality measurement. We propose limited ring-wedge spectral measure to estimate the global fingerprint image features, and inhomogeneity with directional contrast to estimate local fingerprint image features. Experimental results demonstrate the effectiveness of our proposal.", "title": "" } ]
[ { "docid": "32417703b8291a5cdcc3c9eaabbdb99c", "text": "Purpose – The aim of this paper is to identify the quality determinants for education services provided by higher education institutions (HEIs) in Greece and to measure their relative importance from the students’ points of view. Design/mthodology/approach – A multi-criteria decision-making methodology was used for assessing the relative importance of quality determinants that affect student satisfaction. More specifically, the analytical hierarchical process (AHP) was used in order to measure the relative weight of each quality factor. Findings – The relative weights of the factors that contribute to the quality of educational services as it is perceived by students was measured. Research limitations/implications – The research is based on the questionnaire of the Hellenic Quality Assurance Agency for Higher Education. This implies that the measured weights are related mainly to questions posed in this questionnaire. However, the applied method (AHP) can be used to assess different quality determinants. Practical implications – The outcome of this study can be used in order to quantify internal quality assessment of HEIs. More specifically, the outcome can be directly used by HEIs for assessing quality as perceived by students. Originality/value – The paper attempts to develop insights into comparative evaluations of quality determinants as they are perceived by students.", "title": "" }, { "docid": "f8b24b0e8b440643a5fb49166cbbd96b", "text": "A Proportional-Integral (PI) based Maximum Power Point Tracking (MPPT) control algorithm is proposed in this study where it is applied to a Buck-Boost converter. It is aimed to combine regular PI control and MPPT technique to enhance the generated power from photovoltaic PV) panels. The perturb and observe (P&O) technique is used as the MPPT control algorithm. The study proposes to reduce converter output oscillation owing to implemented MPPT control technique with additional PI observer. Furthermore aims to optimize output power using PI voltage mode closed-loop structure.", "title": "" }, { "docid": "47b4b22cee9d5693c16be296afe61982", "text": "In this work we introduce a fully end-to-end approach for action detection in videos that learns to directly predict the temporal bounds of actions. Our intuition is that the process of detecting actions is naturally one of observation and refinement: observing moments in video, and refining hypotheses about when an action is occurring. Based on this insight, we formulate our model as a recurrent neural network-based agent that interacts with a video over time. The agent observes video frames and decides both where to look next and when to emit a prediction. Since backpropagation is not adequate in this non-differentiable setting, we use REINFORCE to learn the agent's decision policy. Our model achieves state-of-the-art results on the THUMOS'14 and ActivityNet datasets while observing only a fraction (2% or less) of the video frames.", "title": "" }, { "docid": "e33b3ebfc46c371253cf7f68adbbe074", "text": "Although backward folding of the epiglottis is one of the signal events of the mammalian adult swallow, the epiglottis does not fold during the infant swallow. How this functional change occurs is unknown, but we hypothesize that a change in swallow mechanism occurs with maturation, prior to weaning. 
Using videofluoroscopy, we found three characteristic patterns of swallowing movement at different ages in the pig: an infant swallow, a transitional swallow and a post-weaning (juvenile or adult) swallow. In animals of all ages, the dorsal region of the epiglottis and larynx was held in an intranarial position by a muscular sphincter formed by the palatopharyngeal arch. In the infant swallow, increasing pressure in the oropharynx forced a liquid bolus through the piriform recesses on either side of a relatively stationary epiglottis into the esophagus. As the infant matured, the palatopharyngeal arch and the soft palate elevated at the beginning of the swallow, so exposing a larger area of the epiglottis to bolus pressure. In transitional swallows, the epiglottis was tilted backward relatively slowly by a combination of bolus pressure and squeezing of the epiglottis by closure of the palatopharyngeal sphincter. The bolus, however, traveled alongside but never over the tip of the epiglottis. In the juvenile swallow, the bolus always passed over the tip of the epiglottis. The tilting of the epiglottis resulted from several factors, including the action of the palatopharyngeal sphincter, higher bolus pressure exerted on the epiglottis and the allometry of increased size. In both transitional and juvenile swallows, the subsequent relaxation of the palatopharyngeal sphincter released the epiglottis, which sprang back to its original intranarial position.", "title": "" }, { "docid": "d1f771fd1b0f8e5d91bbf65bc19aeb54", "text": "Web-based systems are often a composition of infrastructure components, such as web servers and databases, and of applicationspecific code, such as HTML-embedded scripts and server-side applications. While the infrastructure components are usually developed by experienced programmers with solid security skills, the application-specific code is often developed under strict time constraints by programmers with little security training. As a result, vulnerable web-applications are deployed and made available to the Internet at large, creating easilyexploitable entry points for the compromise of entire networks. Web-based applications often rely on back-end database servers to manage application-specific persistent state. The data is usually extracted by performing queries that are assembled using input provided by the users of the applications. If user input is not sanitized correctly, it is possible to mount a variety of attacks that leverage web-based applications to compromise the security of back-end databases. Unfortunately, it is not always possible to identify these attacks using signature-based intrusion detection systems, because of the ad hoc nature of many web-based applications. Signatures are rarely written for this class of applications due to the substantial investment of time and expertise this would require. We have developed an anomaly-based system that learns the profiles of the normal database access performed by web-based applications using a number of different models. These models allow for the detection of unknown attacks with reduced false positives and limited overhead. 
In addition, our solution represents an improvement with respect to previous approaches because it reduces the possibility of executing SQL-based mimicry attacks.", "title": "" }, { "docid": "505a9b6139e8cbf759652dc81f989de9", "text": "SQL injection attacks, a class of injection flaw in which specially crafted input strings leads to illegal queries to databases, are one of the topmost threats to web applications. A Number of research prototypes and commercial products that maintain the queries structure in web applications have been developed. But these techniques either fail to address the full scope of the problem or have limitations. Based on our observation that the injected string in a SQL injection attack is interpreted differently on different databases. A characteristic diagnostic feature of SQL injection attacks is that they change the intended structure of queries issued. Pattern matching is a technique that can be used to identify or detect any anomaly packet from a sequential action. Injection attack is a method that can inject any kind of malicious string or anomaly string on the original string. Most of the pattern based techniques are used static analysis and patterns are generated from the attacked statements. In this paper, we proposed a detection and prevention technique for preventing SQL Injection Attack (SQLIA) using Aho–Corasick pattern matching algorithm. In this paper, we proposed an overview of the architecture. In the initial stage evaluation, we consider some sample of standard attack patterns and it shows that the proposed algorithm is works well against the SQL Injection Attack. Keywords—SQL Injection Attack; Pattern matching; Static Pattern; Dynamic Pattern", "title": "" }, { "docid": "e1d635202eb482e49ff736fd37d161ac", "text": "Can people feel worse off as the options they face increase? The present studies suggest that some people--maximizers--can. Study 1 reported a Maximization Scale, which measures individual differences in desire to maximize. Seven samples revealed negative correlations between maximization and happiness, optimism, self-esteem, and life satisfaction, and positive correlations between maximization and depression, perfectionism, and regret. Study 2 found maximizers less satisfied than nonmaximizers (satisficers) with consumer decisions, and more likely to engage in social comparison. Study 3 found maximizers more adversely affected by upward social comparison. Study 4 found maximizers more sensitive to regret and less satisfied in an ultimatum bargaining game. The interaction between maximizing and choice is discussed in terms of regret, adaptation, and self-blame.", "title": "" }, { "docid": "48036770f56e84df8b05c198e8a89018", "text": "Advances in low power VLSI design, along with the potentially low duty cycle of wireless sensor nodes open up the possibility of powering small wireless computing devices from scavenged ambient power. A broad review of potential power scavenging technologies and conventional energy sources is first presented. Low-level vibrations occurring in common household and office environments as a potential power source are studied in depth. The goal of this paper is not to suggest that the conversion of vibrations is the best or most versatile method to scavenge ambient power, but to study its potential as a viable power source for applications where vibrations are present. 
Different conversion mechanisms are investigated and evaluated leading to specific optimized designs for both capacitive MicroElectroMechancial Systems (MEMS) and piezoelectric converters. Simulations show that the potential power density from piezoelectric conversion is significantly higher. Experiments using an off-the-shelf PZT piezoelectric bimorph verify the accuracy of the models for piezoelectric converters. A power density of 70 mW/cm has been demonstrated with the PZT bimorph. Simulations show that an optimized design would be capable of 250 mW/cm from a vibration source with an acceleration amplitude of 2.5 m/s at 120 Hz. q 2002 Elsevier Science B.V.. All rights reserved.", "title": "" }, { "docid": "4acfb49be406de472af9080d3cdc6fa4", "text": "Evolution provides a creative fount of complex and subtle adaptations that often surprise the scientists who discover them. However, the creativity of evolution is not limited to the natural world: artificial organisms evolving in computational environments have also elicited surprise and wonder from the researchers studying them. The process of evolution is an algorithmic process that transcends the substrate in which it occurs. Indeed, many researchers in the field of digital evolution can provide examples of how their evolving algorithms and organisms have creatively subverted their expectations or intentions, exposed unrecognized bugs in their code, produced unexpectedly adaptations, or engaged in behaviors and outcomes uncannily convergent with ones found in nature. Such stories routinely reveal surprise and creativity by evolution in these digital worlds, but they rarely fit into the standard scientific narrative. Instead they are often treated as mere obstacles to be overcome, rather than results that warrant study in their own right. Bugs are fixed, experiments are refocused, and one-off surprises are collapsed into a single data point. The stories themselves are traded among researchers through oral tradition, but that mode of information transmission is inefficient and prone to error and outright loss. Moreover, the fact that these stories tend to be shared only among practitioners means that many natural scientists do not realize how interesting and lifelike digital organisms are and how natural their evolution can be. To our knowledge, no collection of such anecdotes has been published before. This paper is the crowd-sourced product of researchers in the fields of artificial life and evolutionary computation who have provided first-hand accounts of such cases. It thus serves as a written, fact-checked collection of scientifically important and even entertaining stories. In doing so we also present here substantial evidence that the existence and importance of evolutionary surprises extends beyond the natural world, and may indeed be a universal property of all complex evolving systems.", "title": "" }, { "docid": "059b8861a00bb0246a07fa339b565079", "text": "Recognizing facial action units (AUs) from spontaneous facial expressions is still a challenging problem. Most recently, CNNs have shown promise on facial AU recognition. However, the learned CNNs are often overfitted and do not generalize well to unseen subjects due to limited AU-coded training images. We proposed a novel Incremental Boosting CNN (IB-CNN) to integrate boosting into the CNN via an incremental boosting layer that selects discriminative neurons from the lower layer and is incrementally updated on successive mini-batches. 
In addition, a novel loss function that accounts for errors from both the incremental boosted classifier and individual weak classifiers was proposed to fine-tune the IB-CNN. Experimental results on four benchmark AU databases have demonstrated that the IB-CNN yields significant improvement over the traditional CNN and the boosting CNN without incremental learning, as well as outperforming the state-of-the-art CNN-based methods in AU recognition. The improvement is more impressive for the AUs that have the lowest frequencies in the databases.", "title": "" }, { "docid": "17321e451d7441c8a434c637237370a2", "text": "In recent years, there are increasing interests in using path identifiers (PIDs) as inter-domain routing objects. However, the PIDs used in existing approaches are static, which makes it easy for attackers to launch the distributed denial-of-service (DDoS) flooding attacks. To address this issue, in this paper, we present the design, implementation, and evaluation of dynamic PID (D-PID), a framework that uses PIDs negotiated between the neighboring domains as inter-domain routing objects. In D-PID, the PID of an inter-domain path connecting the two domains is kept secret and changes dynamically. We describe in detail how neighboring domains negotiate PIDs and how to maintain ongoing communications when PIDs change. We build a 42-node prototype comprised of six domains to verify D-PID’s feasibility and conduct extensive simulations to evaluate its effectiveness and cost. The results from both simulations and experiments show that D-PID can effectively prevent DDoS attacks.", "title": "" }, { "docid": "0ba15705fcd12cb3efa17a6878c43606", "text": "Voice has become an increasingly popular User Interaction (UI) channel, mainly contributing to the current trend of wearables, smart vehicles, and home automation systems. Voice assistants such as Alexa, Siri, and Google Now, have become our everyday fixtures, especially when/where touch interfaces are inconvenient or even dangerous to use, such as driving or exercising. The open nature of the voice channel makes voice assistants difficult to secure, and hence exposed to various threats as demonstrated by security researchers. To defend against these threats, we present VAuth, the first system that provides continuous authentication for voice assistants. VAuth is designed to fit in widely-adopted wearable devices, such as eyeglasses, earphones/buds and necklaces, where it collects the body-surface vibrations of the user and matches it with the speech signal received by the voice assistant's microphone. VAuth guarantees the voice assistant to execute only the commands that originate from the voice of the owner. We have evaluated VAuth with 18 users and 30 voice commands and find it to achieve 97% detection accuracy and less than 0.1% false positive rate, regardless of VAuth's position on the body and the user's language, accent or mobility.
VAuth successfully thwarts various practical attacks, such as replay attacks, mangled voice attacks, or impersonation attacks. It also incurs low energy and latency overheads and is compatible with most voice assistants.", "title": "" }, { "docid": "38715a7ba5efc87b47491d9ced8c8a31", "text": "We propose a new method for fusing a LIDAR point cloud and camera-captured images in the deep convolutional neural network (CNN). The proposed method constructs a new layer called non-homogeneous pooling layer to transform features between bird view map and front view map. The sparse LIDAR point cloud is used to construct the mapping between the two maps. The pooling layer allows efficient fusion of the bird view and front view features at any stage of the network. This is favorable for the 3D-object detection using camera-LIDAR fusion in autonomous driving scenarios. A corresponding deep CNN is designed and tested on the KITTI[1] bird view object detection dataset, which produces 3D bounding boxes from the bird view map. The fusion method shows particular benefit for detection of pedestrians in the bird view compared to other fusion-based object detection networks.", "title": "" }, { "docid": "2caf8a90640a98f3690785b6dd641e08", "text": "This paper presents a simple, novel, yet very powerful approach for robust rotation-invariant texture classification based on random projection. The proposed sorted random projection maintains the strengths of random projection, in being computationally efficient and low-dimensional, with the addition of a straightforward sorting step to introduce rotation invariance. At the feature extraction stage, a small set of random measurements is extracted from sorted pixels or sorted pixel differences in local image patches. The rotation invariant random features are embedded into a bag-of-words model to perform texture classification, allowing us to achieve global rotation invariance. The proposed unconventional and novel random features are very robust, yet by leveraging the sparse nature of texture images, our approach outperforms traditional feature extraction methods which involve careful design and complex steps. We report extensive experiments comparing the proposed method to six state-of-the-art methods, RP, Patch, LBP, WMFS and the methods of Lazebnik et al. and Zhang et al., in texture classification on five databases: CUReT, Brodatz, UIUC, UMD and KTH-TIPS. Our approach leads to significant improvements in classification accuracy, producing consistently good results on each database, including what we believe to be the best reported results for Brodatz, UMD and KTH-TIPS. & 2011 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "07153810148e93a0bc0b62a6de77594c", "text": "Six healthy young male volunteers at a contract research organization were enrolled in the first phase 1 clinical trial of TGN1412, a novel superagonist anti-CD28 monoclonal antibody that directly stimulates T cells. Within 90 minutes after receiving a single intravenous dose of the drug, all six volunteers had a systemic inflammatory response characterized by a rapid induction of proinflammatory cytokines and accompanied by headache, myalgias, nausea, diarrhea, erythema, vasodilatation, and hypotension. Within 12 to 16 hours after infusion, they became critically ill, with pulmonary infiltrates and lung injury, renal failure, and disseminated intravascular coagulation. Severe and unexpected depletion of lymphocytes and monocytes occurred within 24 hours after infusion. 
All six patients were transferred to the care of the authors at an intensive care unit at a public hospital, where they received intensive cardiopulmonary support (including dialysis), high-dose methylprednisolone, and an anti-interleukin-2 receptor antagonist antibody. Prolonged cardiovascular shock and acute respiratory distress syndrome developed in two patients, who required intensive organ support for 8 and 16 days. Despite evidence of the multiple cytokine-release syndrome, all six patients survived. Documentation of the clinical course occurring over the 30 days after infusion offers insight into the systemic inflammatory response syndrome in the absence of contaminating pathogens, endotoxin, or underlying disease.", "title": "" }, { "docid": "af691c2ca5d9fd1ca5109c8b2e7e7b6d", "text": "As social robots become more widely used as educational tutoring agents, it is important to study how children interact with these systems, and how effective they are as assessed by learning gains, sustained engagement, and perceptions of the robot tutoring system as a whole. In this paper, we summarize our prior work involving a long-term child-robot interaction study and outline important lessons learned regarding individual differences in children. We then discuss how these lessons inform future research in child-robot interaction.", "title": "" }, { "docid": "41c5dbb3e903c007ba4b8f37d40b06ef", "text": "BACKGROUND\nMyocardial infarction (MI) can directly cause ischemic mitral regurgitation (IMR), which has been touted as an indicator of poor prognosis in acute and early phases after MI. However, in the chronic post-MI phase, prognostic implications of IMR presence and degree are poorly defined.\n\n\nMETHODS AND RESULTS\nWe analyzed 303 patients with previous (>16 days) Q-wave MI by ECG who underwent transthoracic echocardiography: 194 with IMR quantitatively assessed in routine practice and 109 without IMR matched for baseline age (71+/-11 versus 70+/-9 years, P=0.20), sex, and ejection fraction (EF, 33+/-14% versus 34+/-11%, P=0.14). In IMR patients, regurgitant volume (RVol) and effective regurgitant orifice (ERO) area were 36+/-24 mL/beat and 21+/-12 mm(2), respectively. After 5 years, total mortality and cardiac mortality for patients with IMR (62+/-5% and 50+/-6%, respectively) were higher than for those without IMR (39+/-6% and 30+/-5%, respectively) (both P<0.001). In multivariate analysis, independently of all baseline characteristics, particularly age and EF, the adjusted relative risks of total and cardiac mortality associated with the presence of IMR (1.88, P=0.003 and 1.83, P=0.014, respectively) and quantified degree of IMR defined by RVol >/=30 mL (2.05, P=0.002 and 2.01, P=0.009) and by ERO >/=20 mm(2) (2.23, P=0.003 and 2.38, P=0.004) were high.\n\n\nCONCLUSIONS\nIn the chronic phase after MI, IMR presence is associated with excess mortality independently of baseline characteristics and degree of ventricular dysfunction. The mortality risk is related directly to the degree of IMR as defined by ERO and RVol. Therefore, IMR detection and quantification provide major information for risk stratification and clinical decision making in the chronic post-MI phase.", "title": "" }, { "docid": "4e5d46d9bb7b9edbc4fc6a42b6314703", "text": "Positive body image among adults is related to numerous indicators of well-being. However, no research has explored body appreciation among children. 
To facilitate our understanding of children’s positive body image, the current study adapts and validates the Body Appreciation Scale-2 (BAS-2; Tylka & WoodBarcalow, 2015a) for use with children. Three hundred and forty-four children (54.4% girls) aged 9–11 completed the adapted Body Appreciation Scale-2 for Children (BAS-2C) alongside measures of body esteem, media influence, body surveillance, mood, and dieting. A sub-sample of 154 participants (62.3% girls) completed the questionnaire 6-weeks later to examine stability (test-retest) reliability. The BAS-2C", "title": "" }, { "docid": "35f8b54ee1fbf153cb483fc4639102a5", "text": "This research studies the risk prediction of hospital readmissions using metaheuristic and data mining approaches. This is a critical issue in the U.S. healthcare system because a large percentage of preventable hospital readmissions derive from a low quality of care during patients’ stays in the hospital as well as poor arrangement of the discharge process. To reduce the number of hospital readmissions, the Centers for Medicare and Medicaid Services has launched a readmission penalty program in which hospitals receive reduced reimbursement for high readmission rates for Medicare beneficiaries. In the current practice, patient readmission risk is widely assessed by evaluating a LACE score including length of stay (L), acuity level of admission (A), comorbidity condition (C), and use of emergency rooms (E). However, the LACE threshold classifying highand low-risk readmitted patients is set up by clinic practitioners based on specific circumstances and experiences. This research proposed various data mining approaches to identify the risk group of a particular patient, including neural network model, random forest (RF) algorithm, and the hybrid model of swarm intelligence heuristic and support vector machine (SVM). The proposed neural network algorithm, the RF and the SVM classifiers are used to model patients’ characteristics, such as their ages, insurance payers, medication risks, etc. Experiments are conducted to compare the performance of the proposed models with previous research. Experimental results indicate that the proposed prediction SVM model with particle swarm parameter tuning outperforms other algorithms and achieves 78.4% on overall prediction accuracy, 97.3% on sensitivity. The high sensitivity shows its strength in correctly identifying readmitted patients. The outcome of this research will help reduce overall hospital readmission rates and allow hospitals to utilize their resources more efficiently to enhance interventions for high-risk patients. 2015 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "86e0c7b70de40fcd5179bf3ab67bc3a4", "text": "The development of a scale to assess drug and other treatment effects on severely mentally retarded individuals was described. In the first stage of the project, an initial scale encompassing a large number of behavior problems was used to rate 418 residents. The scale was then reduced to an intermediate version, and in the second stage, 509 moderately to profoundly retarded individuals were rated. Separate factor analyses of the data from the two samples resulted in a five-factor scale comprising 58 items. The factors of the Aberrant Behavior Checklist have been labeled as follows: (I) Irritability, Agitation, Crying; (II) Lethargy, Social Withdrawal; (III) Stereotypic Behavior; (IV) Hyperactivity, Noncompliance; and (V) Inappropriate Speech. 
Average subscale scores were presented for the instrument, and the results were compared with empirically derived rating scales of childhood psychopathology and with factor analytic work in the field of mental retardation.", "title": "" } ]
scidocsrr
a16f0041754899e1f6101f7b8a5d82a6
Agile Software Development Methodologies and Practices
[ { "docid": "2e9b2eccefe56b9cbf8d5793cc3f1cbb", "text": "This paper summarizes several classes of software cost estimation models and techniques: parametric models, expertise-based techniques, learning-oriented techniques, dynamics-based models, regression-based models, and composite-Bayesian techniques for integrating expertisebased and regression-based models. Experience to date indicates that neural-net and dynamics-based techniques are less mature than the other classes of techniques, but that all classes of techniques are challenged by the rapid pace of change in software technology. The primary conclusion is that no single technique is best for all situations, and that a careful comparison of the results of several approaches is most likely to produce realistic estimates.", "title": "" } ]
[ { "docid": "19f4100f2e1d5655edca03a269adf79a", "text": "OBJECTIVES\nTo assess the influence of conventional glass ionomer cement (GIC) vs resin-modified GIC (RMGIC) as a base material for novel, super-closed sandwich restorations (SCSR) and its effect on shrinkage-induced crack propensity and in vitro accelerated fatigue resistance.\n\n\nMETHODS\nA standardized MOD slot-type tooth preparation was applied to 30 extracted maxillary molars (5 mm depth/5 mm buccolingual width). A modified sandwich restoration was used, in which the enamel/dentin bonding agent was applied first (Optibond FL, Kerr), followed by a Ketac Molar (3M ESPE) (group KM, n = 15) or Fuji II LC (GC) (group FJ, n = 15) base, leaving 2 mm for composite resin material (Miris 2, Coltène-Whaledent). Shrinkage-induced enamel cracks were tracked with photography and transillumination. Samples were loaded until fracture or to a maximum of 185,000 cycles under isometric chewing (5 Hz), starting with a load of 200 N (5,000 X), followed by stages of 400, 600, 800, 1,000, 1,200, and 1,400 N at a maximum of 30,000 X each. Groups were compared using the life table survival analysis (α = .008, Bonferroni method).\n\n\nRESULTS\nGroup FJ showed the highest survival rate (40% intact specimens) but did not differ from group KM (20%) or traditional direct restorations (13%, previous data). SCSR generated less shrinkage-induced cracks. Most failures were re-restorable (above the cementoenamel junction [CEJ]).\n\n\nCONCLUSIONS\nInclusion of GIC/RMGIC bases under large direct SCSRs does not affect their fatigue strength but tends to decrease the shrinkage-induced crack propensity.\n\n\nCLINICAL SIGNIFICANCE\nThe use of GIC/RMGIC bases and the SCSR is an easy way to minimize polymerization shrinkage stress in large MOD defects without weakening the restoration.", "title": "" }, { "docid": "4cb25adf48328e1e9d871940a97fdff2", "text": "This article is concerned with parameters identification problems and computer modeling of thrust generation subsystem for small unmanned aerial vehicles (UAV) quadrotor type. In this paper approach for computer model generation of dynamic process of thrust generation subsystem that consists of fixed pitch propeller, EC motor and power amplifier, is considered. Due to the fact that obtainment of aerodynamic characteristics of propeller via analytical approach is quite time-consuming, and taking into account that subsystem consists of as well as propeller, motor and power converter with microcontroller control system, which operating algorithm is not always available from manufacturer, receiving trusted computer model of thrust generation subsystem via analytical approach is impossible. Identification of the system under investigation is performed from the perspective of “black box” with the known qualitative description of proceeded there dynamic processes. For parameters identification of subsystem special laboratory rig that described in this paper was designed.", "title": "" }, { "docid": "88804c0fb16e507007983108811950dc", "text": "We propose a neural probabilistic structured-prediction method for transition-based natural language processing, which integrates beam search and contrastive learning. The method uses a global optimization model, which can leverage arbitrary features over nonlocal context. Beam search is used for efficient heuristic decoding, and contrastive learning is performed for adjusting the model according to search errors.
When evaluated on both chunking and dependency parsing tasks, the proposed method achieves significant accuracy improvements over the locally normalized greedy baseline on the two tasks, respectively.", "title": "" }, { "docid": "0513ce3971cb0e438598ea6766be19ff", "text": "This paper proposes two interference mitigation strategies that adjust the maximum transmit power of femtocell users to suppress the cross-tier interference at a macrocell base station (BS). The open-loop and the closed-loop control suppress the cross-tier interference less than a fixed threshold and an adaptive threshold based on the noise and interference (NI) level at the macrocell BS, respectively. Simulation results show that both schemes effectively compensate the uplink throughput degradation of the macrocell BS due to the cross-tier interference and that the closed-loop control provides better femtocell throughput than the open-loop control at a minimal cost of macrocell throughput.", "title": "" }, { "docid": "5e5e2d038ae29b4c79c79abe3d20ae40", "text": "Article history: Received 28 February 2013 Accepted 26 July 2013 Available online 11 October 2013 Fault diagnosis of Discrete Event Systems has become an active research area in recent years. The research activity in this area is driven by the needs of many different application domains such as manufacturing, process control, control systems, transportation, communication networks, software engineering, and others. The aim of this paper is to review the state-of the art of methods and techniques for fault diagnosis of Discrete Event Systems based on models that include faulty behaviour. Theoretical and practical issues related to model description tools, diagnosis processing structure, sensor selection, fault representation and inference are discussed. 2013 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "d3f43eef5e36eb7b078b010482bdb115", "text": "This study is aimed at constructing a correlative model between Internet addiction and mobile phone addiction; the aim is to analyse the correlation (if any) between the two traits and to discuss the influence confirming that the gender has difference on this fascinating topic; taking gender into account opens a new world of scientific study to us. The study collected 448 college students on an island as study subjects, with 61.2% males and 38.8% females. Moreover, this study issued Mobile Phone Addiction Scale and Internet Addiction Scale to conduct surveys on the participants and adopts the structural equation model (SEM) to process the collected data. According to the study result, (1) mobile phone addiction and Internet addiction are positively related; (2) female college students score higher than male ones in the aspect of mobile addiction. Lastly, this study proposes relevant suggestions to serve as a reference for schools, college students, and future studies based on the study results.", "title": "" }, { "docid": "a66b5b6dea68e5460b227af4caa14ef3", "text": "This paper will discuss and compare event representations across a variety of types of event annotation: Rich Entities, Relations, and Events (Rich ERE), Light Entities, Relations, and Events (Light ERE), Event Nugget (EN), Event Argument Extraction (EAE), Richer Event Descriptions (RED), and Event-Event Relations (EER). Comparisons of event representations are presented, along with a comparison of data annotated according to each event representation. 
An event annotation experiment is also discussed, including annotation for all of these representations on the same set of sample data, with the purpose of being able to compare actual annotation across all of these approaches as directly as possible. We walk through a brief example to illustrate the various annotation approaches, and to show the intersections among the various annotated data sets.", "title": "" }, { "docid": "37d3bf208ee4e513a809fa94f93a2654", "text": "Unplanned use of fertilizers leads to inferior quality of crops. Excess of one nutrient can make it difficult for the plant to absorb the other nutrients. To deal with this problem, the quality of soil is tested using a PH sensor that indicates the percentage of macronutrients present in the soil. Conventional methods used to test soil quality, involve the use of Ion Selective Field Effect Transistors (ISFET), Ion Selective Electrode (ISE) and Optical Sensors as the sensing units which were found to be very expensive. The prototype design will allow sprinkling of fertilizers to take place in zones which are deficient in these macronutrients (Nitrogen, Phosphorous and Potassium), proving it to be a cost efficient and farmer-friendly automated fertilization unit. Cost of the proposed unit is found to be one-seventh of that of the present methods, making it affordable for farmers and also saves the manual labor. Initial analysis and intensive case studies conducted in farmland situated near Ambedkar Nagar, Sarjapur also revealed the use of above mechanism to be more prominent and verified through practical implementation and experimentation as it takes lesser time to analyze the nutrient content than the other methods which require soil testing. Sprinklers cover discrete zones in the field that automate fertilization and reduce the effort of farmers in the rural areas. This novel technique also has a fast response time as it enables real time, in-situ soil nutrient analysis, thereby maintaining proper soil pH level required for a particular crop, reducing potentially negative environmental impacts.", "title": "" }, { "docid": "20cbfe9c1d20bfd67bbcbf39641aa69a", "text": "The CIPS-SIGHAN CLP 2010 Chinese Word Segmentation Bakeoff was held in the summer of 2010 to evaluate the current state of the art in word segmentation. It focused on the crossdomain performance of Chinese word segmentation algorithms. Eighteen groups submitted 128 results over two tracks (open training and closed training), four domains (literature, computer science, medicine and finance) and two subtasks (simplified Chinese and traditional Chinese). We found that compared with the previous Chinese word segmentation bakeoffs, the performance of cross-domain Chinese word segmentation is not much lower, and the out-of-vocabulary recall is improved.", "title": "" }, { "docid": "080032ded41edee2a26320e3b2afb123", "text": "The aim of this study was to evaluate the effects of calisthenic exercises on psychological status in patients with ankylosing spondylitis (AS) and multiple sclerosis (MS). This study comprised 40 patients diagnosed with AS randomized into two exercise groups (group 1 = hospital-based, group 2 = home-based) and 40 patients diagnosed with MS randomized into two exercise groups (group 1 = hospital-based, group 2 = home-based). The exercise programme was completed by 73 participants (hospital-based = 34, home-based = 39). Mean age was 33.75 ± 5.77 years. 
After the 8-week exercise programme in the AS group, the home-based exercise group showed significant improvements in erythrocyte sedimentation rates (ESR). The hospital-based exercise group showed significant improvements in terms of the Bath AS Metrology Index (BASMI) and Hospital Anxiety and Depression Scale-Anxiety (HADS-A) scores. After the 8-week exercise programme in the MS group, the home-based and hospital-based exercise groups showed significant improvements in terms of the 10-m walking test, Berg Balance Scale (BBS), HADS-A, and MS international Quality of Life (MusiQoL) scores. There was a significant improvement in the hospital-based and a significant deterioration in the home-based MS patients according to HADS-Depression (HADS-D) score. The positive effects of exercises on neurologic and rheumatic chronic inflammatory processes associated with disability should not be underestimated. Ziel der vorliegenden Studie war die Untersuchung der Wirkungen von gymnastischen Übungen auf die psychische Verfassung von Patienten mit Spondylitis ankylosans (AS) und multipler Sklerose (MS). Die Studie umfasste 40 Patienten mit der Diagnose AS, die randomisiert in 2 Übungsgruppen aufgeteilt wurden (Gruppe 1: stationär, Gruppe 2: ambulant), und 40 Patienten mit der Diagnose MS, die ebenfalls randomisiert in 2 Übungsgruppen aufgeteilt wurden (Gruppe 1: stationär, Gruppe 2: ambulant). Vollständig absolviert wurde das Übungsprogramm von 73 Patienten (stationär: 34, ambulant: 39). Das Durchschnittsalter betrug 33,75 ± 5,77 Jahre. Nach dem 8-wöchigen Übungsprogramm in der AS-Gruppe zeigten sich bei der ambulanten Übungsgruppe signifikante Verbesserungen bei der Blutsenkungsgeschwindigkeit (BSG). Die stationäre Übungsgruppe wies signifikante Verbesserungen in Bezug auf den BASMI-Score (Bath AS Metrology Index) und den HADS-A-Score (Hospital Anxiety and Depression Scale-Anxiety) auf. Nach dem 8-wöchigen Übungsprogramm in der MS-Gruppe zeigten sich sowohl in der ambulanten als auch in der stationären Übungsgruppe signifikante Verbesserungen hinsichtlich des 10-m-Gehtests, des BBS-Ergebnisses (Berg Balance Scale), des HADS-A- sowie des MusiQoL-Scores (MS international Quality of Life). Beim HADS-D-Score (HADS-Depression) bestand eine signifikante Verbesserung bei den stationären und eine signifikante Verschlechterung bei den ambulanten MS-Patienten. Die positiven Wirkungen von gymnastischen Übungen auf neurologische und rheumatische chronisch entzündliche Prozesse mit Behinderung sollten nicht unterschätzt werden.", "title": "" }, { "docid": "af11d259a031d22f7ee595ee2a250136", "text": "Cellular networks today are designed for and operate in dedicated licensed spectrum. At the same time there are other spectrum usage authorization models for wireless communication, such as unlicensed spectrum or, as widely discussed currently but not yet implemented in practice, various forms of licensed shared spectrum. Hence, cellular technology as of today can only operate in a subset of the spectrum that is in principle available. Hence, a future wireless system may benefit from the ability to access also spectrum opportunities other than dedicated licensed spectrum. It is therefore important to identify which additional ways of authorizing spectrum usage are deemed to become relevant in the future and to analyze the resulting technical requirements. The implications of sharing spectrum between different technologies are analyzed in this paper, both from efficiency and technology neutrality perspective. 
Different known sharing techniques are outlined and their applicability to the relevant range of future spectrum regulatory regimes is discussed. Based on an assumed range of relevant (according to the views of the authors) future spectrum sharing scenarios, a toolbox of certain spectrum sharing techniques is proposed as the basis for the design of spectrum sharing related functionality in future mobile broadband systems.", "title": "" }, { "docid": "10d41334c88039e9d85ce6eb93cb9abf", "text": "nonlinear functional analysis and its applications iii variational methods and optimization PDF remote sensing second edition models and methods for image processing PDF remote sensing third edition models and methods for image processing PDF guide to signals and patterns in image processing foundations methods and applications PDF introduction to image processing and analysis PDF principles of digital image processing advanced methods undergraduate topics in computer science PDF image processing analysis and machine vision PDF image acquisition and processing with labview image processing series PDF wavelet transform techniques for image resolution PDF sparse image and signal processing wavelets and related geometric multiscale analysis PDF nonstandard methods in stochastic analysis and mathematical physics dover books on mathematics PDF solution manual wavelet tour of signal processing PDF remote sensing image fusion signal and image processing of earth observations PDF image understanding using sparse representations synthesis lectures on image video and multimedia processing PDF", "title": "" }, { "docid": "d763947e969ade3c54c18f0b792a0f7b", "text": "Recent results in compressive sampling have shown that sparse signals can be recovered from a small number of random measurements. This property raises the question of whether random measurements can provide an efficient representation of sparse signals in an information-theoretic sense. Through both theoretical and experimental results, we show that encoding a sparse signal through simple scalar quantization of random measurements incurs a significant penalty relative to direct or adaptive encoding of the sparse signal. Information theory provides alternative quantization strategies, but they come at the cost of much greater estimation complexity.", "title": "" }, { "docid": "bc6cbf7da118c01d74914d58a71157ac", "text": "Currently, there are increasing interests in text-to-speech (TTS) synthesis to use sequence-to-sequence models with attention. These models are end-to-end meaning that they learn both co-articulation and duration properties directly from text and speech. Since these models are entirely data-driven, they need large amounts of data to generate synthetic speech with good quality. However, in challenging speaking styles, such as Lombard speech, it is difficult to record sufficiently large speech corpora. Therefore, in this study we propose a transfer learning method to adapt a sequence-to-sequence based TTS system of normal speaking style to Lombard style. Moreover, we experiment with a WaveNet vocoder in synthesis of Lombard speech. We conducted subjective evaluations to assess the performance of the adapted TTS systems. 
The subjective evaluation results indicated that an adaptation system with the WaveNet vocoder clearly outperformed the conventional deep neural network based TTS system in synthesis of Lombard speech.", "title": "" }, { "docid": "3a2729b235884bddc05dbdcb6a1c8fc9", "text": "The people of Tumaco-La Tolita culture inhabited the borders of present-day Colombia and Ecuador. Already extinct by the time of the Spaniards arrival, they left a huge collection of pottery artifacts depicting everyday life; among these, disease representations were frequently crafted. In this article, we present the results of the personal examination of the largest collections of Tumaco-La Tolita pottery in Colombia and Ecuador; cases of Down syndrome, achondroplasia, mucopolysaccharidosis I H, mucopolysaccharidosis IV, a tumor of the face and a benign tumor in an old woman were found. We believe these to be among the earliest artistic representations of disease.", "title": "" }, { "docid": "950a6a611f1ceceeec49534c939b4e0f", "text": "Often signals and system parameters are most conveniently represented as complex-valued vectors. This occurs, for example, in array processing [1], as well as in communication systems [7] when processing narrowband signals using the equivalent complex baseband representation [2]. Furthermore, in many important applications one attempts to optimize a scalar real-valued measure of performance over the complex parameters defining the signal or system of interest. This is the case, for example, in LMS adaptive filtering where complex filter coefficients are adapted on line. To effect this adaption one attempts to optimize the performance measure by adjustments of the coefficients along its gradient direction [16, 23].", "title": "" }, { "docid": "a3ac978e59bdedc18c45d460dd8fc154", "text": "Searching for information in distributed ledgers is currently not an easy task, as information relating to an entity may be scattered throughout the ledger with no index. As distributed ledger technologies become more established, they will increasingly be used to represent real world transactions involving many parties and the search requirements will grow. An index providing the ability to search using domain specific terms across multiple ledgers will greatly enhance to power, usability and scope of these systems. We have implemented a semantic index to the Ethereum blockchain platform, to expose distributed ledger data as Linked Data. As well as indexing blockand transactionlevel data according to the BLONDiE ontology, we have mapped smart contracts to the Minimal Service Model ontology, to take the first steps towards connecting smart contracts with Semantic Web Services.", "title": "" }, { "docid": "0feae39f7e557a65699f686d14f4cf0f", "text": "This paper describes the design of a multi-gigabit fiber-optic receiver with integrated large-area photo detectors for plastic optical fiber applications. An integrated 250 μm diameter non-SML NW/P-sub photo detector is adopted to allow efficient light coupling. The theory of applying a fully-differential pre-amplifier with a single-ended photo current is also examined and a super-Gm transimpedance amplifier has been proposed to drive a C PD of 14 pF to multi-gigahertz frequency. Both differential and common-mode operations of the proposed super-Gm transimpedance amplifier have been analyzed and a differential noise analysis is performed. 
A digitally-controlled linear equalizer is proposed to produce a slow-rising-slope frequency response to compensate for the photo detector up to 3 GHz. The proposed POF receiver consists of an illuminated signal photo detector, a shielded dummy photo detector, a super-Gm transimpedance amplifier, a variable-gain amplifier, a linear equalizer, a post amplifier, and an output driver. A test chip is fabricated in TSMC's 65 nm low-power CMOS process, and it consumes 50 mW of DC power (excluding the output driver) from a single 1.2 V supply. A bit-error rate of less than 10-12 has been measured at a data rate of 3.125 Gbps with a 670 nm VCSEL-based electro-optical transmitter.", "title": "" }, { "docid": "5b6d68984b4f9a6e0f94e0a68768dc8c", "text": "In this paper, we focus on a major internet problem which is a huge amount of uncategorized text. We review existing techniques used for feature selection and categorization. After reviewing the existing literature, it was found that there exist some gaps in existing algorithms, one of which is a requirement of the labeled dataset for the training of the classifier. Keywords— Bayesian; KNN; PCA; SVM; TF-IDF", "title": "" }, { "docid": "6459493643eb7ff011fa0d8873382911", "text": "This paper is about the effectiveness of qualitative easing; a government policy that is designed to mitigate risk through central bank purchases of privately held risky assets and their replacement by government debt, with a return that is guaranteed by the taxpayer. Policies of this kind have recently been carried out by national central banks, backed by implicit guarantees from national treasuries. I construct a general equilibrium model where agents have rational expectations and there is a complete set of financial securities, but where agents are unable to participate in financial markets that open before they are born. I show that a change in the asset composition of the central bank’s balance sheet will change equilibrium asset prices. Further, I prove that a policy in which the central bank stabilizes fluctuations in the stock market is Pareto improving and is costless to implement.", "title": "" } ]
scidocsrr
5696d4593a6c514e4916dab560dc94f5
Chapter LVIII The Design, Play, and Experience Framework
[ { "docid": "ecddd4f80f417dcec49021065394c89a", "text": "Research in the area of educational technology has often been critiqued for a lack of theoretical grounding. In this article we propose a conceptual framework for educational technology by building on Shulman’s formulation of ‘‘pedagogical content knowledge’’ and extend it to the phenomenon of teachers integrating technology into their pedagogy. This framework is the result of 5 years of work on a program of research focused on teacher professional development and faculty development in higher education. It attempts to capture some of the essential qualities of teacher knowledge required for technology integration in teaching, while addressing the complex, multifaceted, and situated nature of this knowledge. We argue, briefly, that thoughtful pedagogical uses of technology require the development of a complex, situated form of knowledge that we call Technological Pedagogical Content Knowledge (TPCK). In doing so, we posit the complex roles of, and interplay among, three main components of learning environments: content, pedagogy, and technology. We argue that this model has much to offer to discussions of technology integration at multiple levels: theoretical, pedagogical, and methodological. In this article, we describe the theory behind our framework, provide examples of our teaching approach based upon the framework, and illustrate the methodological contributions that have resulted from this work.", "title": "" }, { "docid": "e5a3119470420024b99df2d6eb14b966", "text": "Why should wait for some days to get or receive the rules of play game design fundamentals book that you order? Why should you take it if you can get the faster one? You can find the same book that you order right here. This is it the book that you can receive directly after purchasing. This rules of play game design fundamentals is well known book in the world, of course many people will try to own it. Why don't you become the first? Still confused with the way?", "title": "" } ]
[ { "docid": "e737c117cd6e7083cd50069b70d236cb", "text": "In this article we discuss a data structure, which combines advantages of two different ways for representing graphs: adjacency matrix and collection of adjacency lists. This data structure can fast add and search edges (advantages of adjacency matrix), use linear amount of memory, let to obtain adjacency list for certain vertex (advantages of collection of adjacency lists). Basic knowledge of linked lists and hash tables is required to understand this article. The article contains examples of implementation on Java.", "title": "" }, { "docid": "9dcee1244dd71174b15df9cfaba2ebdf", "text": "In this paper, we investigate the dynamical behaviors of a Morris–Lecar neuron model. By using bifurcation methods and numerical simulations, we examine the global structure of bifurcations of the model. Results are summarized in various two-parameter bifurcation diagrams with the stimulating current as the abscissa and the other parameter as the ordinate. We also give the one-parameter bifurcation diagrams and pay much attention to the emergence of periodic solutions and bistability. Different membrane excitability is obtained by bifurcation analysis and frequency-current curves. The alteration of the membrane properties of the Morris–Lecar neurons is discussed.", "title": "" }, { "docid": "39861e2759b709883f3d37a65d13834b", "text": "BACKGROUND\nDeveloping countries account for 99 percent of maternal deaths annually. While increasing service availability and maintaining acceptable quality standards, it is important to assess maternal satisfaction with care in order to make it more responsive and culturally acceptable, ultimately leading to enhanced utilization and improved outcomes. At a time when global efforts to reduce maternal mortality have been stepped up, maternal satisfaction and its determinants also need to be addressed by developing country governments. This review seeks to identify determinants of women's satisfaction with maternity care in developing countries.\n\n\nMETHODS\nThe review followed the methodology of systematic reviews. Public health and social science databases were searched. English articles covering antenatal, intrapartum or postpartum care, for either home or institutional deliveries, reporting maternal satisfaction from developing countries (World Bank list) were included, with no year limit. Out of 154 shortlisted abstracts, 54 were included and 100 excluded. Studies were extracted onto structured formats and analyzed using the narrative synthesis approach.\n\n\nRESULTS\nDeterminants of maternal satisfaction covered all dimensions of care across structure, process and outcome. Structural elements included good physical environment, cleanliness, and availability of adequate human resources, medicines and supplies. Process determinants included interpersonal behavior, privacy, promptness, cognitive care, perceived provider competency and emotional support. Outcome related determinants were health status of the mother and newborn. Access, cost, socio-economic status and reproductive history also influenced perceived maternal satisfaction. Process of care dominated the determinants of maternal satisfaction in developing countries. Interpersonal behavior was the most widely reported determinant, with the largest body of evidence generated around provider behavior in terms of courtesy and non-abuse. 
Other aspects of interpersonal behavior included therapeutic communication, staff confidence and competence and encouragement to laboring women.\n\n\nCONCLUSIONS\nQuality improvement efforts in developing countries could focus on strengthening the process of care. Special attention is needed to improve interpersonal behavior, as evidence from the review points to the importance women attach to being treated respectfully, irrespective of socio-cultural or economic context. Further research on maternal satisfaction is required on home deliveries and relative strength of various determinants in influencing maternal satisfaction.", "title": "" }, { "docid": "1fe0a9895bca5646908efc86e019f5d3", "text": "The purpose of this study was to examine how violence from patients and visitors is related to emergency department (ED) nurses' work productivity and symptoms of post-traumatic stress disorder (PTSD). Researchers have found ED nurses experience a high prevalence of physical assaults from patients and visitors. Yet, there is little research which examines the effect violent events have on nurses' productivity, particularly their ability to provide safe and compassionate patient care. A cross-sectional design was used to gather data from ED nurses who are members of the Emergency Nurses Association in the United States. Participants were asked to complete the Impact of Events Scale-Revised and Healthcare Productivity Survey in relation to a stressful violent event. Ninety-four percent of nurses experienced at least one posttraumatic stress disorder symptom after a violent event, with 17% having scores high enough to be considered probable for PTSD. In addition, there were significant indirect relationships between stress symptoms and work productivity. Workplace violence is a significant stressor for ED nurses. Results also indicate violence has an impact on the care ED nurses provide. Interventions are needed to prevent the violence and to provide care to the ED nurse after an event.", "title": "" }, { "docid": "3e6e72747036ca7255b449f4c93e15f7", "text": "In this paper a planar antenna is studied for ultrawide-band (UWB) applications. This antenna consists of a wide-band tapered-slot feeding structure, curved radiators and a parasitic element. It is a modification of the conventional dual exponential tapered slot antenna and can be viewed as a printed dipole antenna with tapered slot feed. The design guideline is introduced, and the antenna parameters including return loss, radiation patterns and gain are investigated. To demonstrate the applicability of the proposed antenna to UWB applications, the transfer functions of a transmitting-receiving system with a pair of identical antennas are measured. Transient waveforms as the transmitting-receiving system being excited by a simulated pulse are discussed at the end of this paper.", "title": "" }, { "docid": "7cb6582bf81aea75818eef2637c95c79", "text": "Although multi-frame super resolution has been extensively studied in past decades, super resolving real-world video sequences still remains challenging. In existing systems, either the motion models are oversimplified, or important factors such as blur kernel and noise level are assumed to be known. Such models cannot deal with the scene and imaging conditions that vary from one sequence to another. 
In this paper, we propose a Bayesian approach to adaptive video super resolution via simultaneously estimating underlying motion, blur kernel and noise level while reconstructing the original high-res frames. As a result, our system not only produces very promising super resolution results that outperform the state of the art, but also adapts to a variety of noise levels and blur kernels. Theoretical analysis of the relationship between blur kernel, noise level and frequency-wise reconstruction rate is also provided, consistent with our experimental results.", "title": "" }, { "docid": "e4183c85a9f6771fa06316b002e13188", "text": "This paper provides an analysis of some argumentation in a biomedical genetics research article as a step towards developing a corpus of articles annotated to support research on argumentation. We present a specification of several argumentation schemes and inter-argument relationships to be annotated.", "title": "" }, { "docid": "b515eb759984047f46f9a0c27b106f47", "text": "Visual motion estimation is challenging, due to high data rates, fast camera motions, featureless or repetitive environments, uneven lighting, and many other issues. In this work, we propose a twolayer approach for visual odometry with stereo cameras, which runs in real-time and combines feature-based matching with semi-dense direct image alignment. Our method initializes semi-dense depth estimation, which is computationally expensive, from motion that is tracked by a fast but robust feature point-based method. By that, we are not only able to efficiently estimate the pose of the camera with a high frame rate, but also to reconstruct the 3D structure of the environment at image gradients, which is useful, e.g., for mapping and obstacle avoidance. Experiments on datasets captured by a micro aerial vehicle (MAV) show that our approach is faster than state-of-the-art methods without losing accuracy. Moreover, our combined approach achieves promising results on the KITTI dataset, which is very challenging for direct methods, because of the low frame rate in conjunction with fast motion.", "title": "" }, { "docid": "a743ac1f5b37c35bb78cf7efc3d3a3c8", "text": "Concepts concerning mediation in the causal inference literature are reviewed. Notions of direct and indirect effects from a counterfactual approach to mediation are compared with those arising from the standard regression approach to mediation of Baron and Kenny (1986), commonly utilized in the social science literature. It is shown that concepts of direct and indirect effect from causal inference generalize those described by Baron and Kenny and that under appropriate identification assumptions these more general direct and indirect effects from causal inference can be estimated using regression even when there are interactions between the primary exposure of interest and the mediator. A number of conceptual issues are discussed concerning the interpretation of identification conditions for mediation, the notion of counterfactuals based on hypothetical interventions and the so called consistency and composition assumptions.", "title": "" }, { "docid": "55610ac91c3abb52e3bbd95c289b9b95", "text": "A robot finger is developed for five-fingered robot hand having equal number of DOF to human hand. The robot hand is driven by a new method proposed by authors using ultrasonic motors and elastic elements. 
The method utilizes restoring force of elastic element as driving power for grasping an object, so that the hand can perform the soft and stable grasping motion with no power supply. In addition, all the components are placed inside the hand thanks to the ultrasonic motors with compact size and high torque at low speed. Applying the driving method to multi-DOF mechanism, a robot index finger is designed and implemented. It has equal number of joints and DOF to human index finger, and it is also equal in size to the finger of average adult male. The performance of the robot finger is confirmed by fundamental driving test.", "title": "" }, { "docid": "413c4d1115e8042cce44308583649279", "text": "With the growing popularity of microblogging services such as Twitter in recent years, an increasing number of users are using these services in their daily lives. The huge volume of information generated by users raises new opportunities in various applications and areas. Inferring user interests plays a significant role in providing personalized recommendations on microblogging services, and also on third-party applications providing social logins via these services, especially in cold-start situations. In this survey, we review user modeling strategies with respect to inferring user interests from previous studies. To this end, we focus on four dimensions of inferring user interest profiles: (1) data collection, (2) representation of user interest profiles, (3) construction and enhancement of user interest profiles, and (4) the evaluation of the constructed profiles. Through this survey, we aim to provide an overview of state-of-the-art user modeling strategies for inferring user interest profiles on microblogging social networks with respect to the four dimensions. For each dimension, we review and summarize previous studies based on specified criteria. Finally, we discuss some challenges and opportunities for future work in this research domain.", "title": "" }, { "docid": "9ffb34f554e9d31938b77a33be187014", "text": "Job recommendation systems mainly use different sources of data in order to give the better content for the end user. Developing the well-performing system requires complex hybrid approaches of representing similarity based on the content of job postings and resumes as well as interactions between them. We develop an efficient hybrid networkbased job recommendation system which uses Personalized PageRank algorithm in order to rank vacancies for the users based on the similarity between resumes and job posts as textual documents, along with previous interactions of users with vacancies. Our approach achieved the recall of 50% and generated more applies for the jobs during the online A/B test than previous algorithms.", "title": "" }, { "docid": "a9b620269c6448facfe0ae8e034f41fa", "text": "The aim of this project is to make progress towards building a machine learning agent that understands natural language and can perform basic reasoning. Towards this nebulous goal, we focus on question answering: Can an agent answer a query based on a given set of natural language facts? 
We combine LSTM sentence embedding models with an attention mechanism and obtain good results on the Facebook bAbI dataset [1], outperforming [2] on 1 task and achieving similar performance on several others.", "title": "" }, { "docid": "507a60e62e9d2086481e7a306d012e52", "text": "Health monitoring systems have rapidly evolved recently, and smart systems have been proposed to monitor patient current health conditions, in our proposed and implemented system, we focus on monitoring the patient's blood pressure, and his body temperature. Based on last decade statistics of medical records, death rates due to hypertensive heart disease, shows that the blood pressure is a crucial risk factor for atherosclerosis and ischemic heart diseases; thus, preventive measures should be taken against high blood pressure which provide the ability to track, trace and save patient's life at appropriate time is an essential need for mankind. Nowadays, Globalization demands Smart cities, which involves many attributes and services, such as government services, Intelligent Transportation Systems (ITS), energy, health care, water and waste. This paper proposes a system architecture for smart healthcare based on GSM and GPS technologies. The objective of this work is providing an effective application for Real Time Health Monitoring and Tracking. The system will track, trace, monitor patients and facilitate taking care of their health; so efficient medical services could be provided at appropriate time. By Using specific sensors, the data will be captured and compared with a configurable threshold via microcontroller which is defined by a specialized doctor who follows the patient; in any case of emergency a short message service (SMS) will be sent to the Doctor's mobile number along with the measured values through GSM module. furthermore, the GPS provides the position information of the monitored person who is under surveillance all the time. Moreover, the paper demonstrates the feasibility of realizing a complete end-to-end smart health system responding to the real health system design requirements by taking in consideration wider vital human health parameters such as respiration rate, nerves signs ... etc. The system will be able to bridge the gap between patients - in dramatic health change occasions- and health entities who response and take actions in real time fashion.", "title": "" }, { "docid": "e1d9ff28da38fcf8ea3a428e7990af25", "text": "The Autonomous car is a complex topic, different technical fields like: Automotive engineering, Control engineering, Informatics, Artificial Intelligence etc. are involved in solving the human driver replacement with an artificial (agent) driver. The problem is even more complicated because usually, nowadays, having and driving a car defines our lifestyle. This means that the mentioned (major) transformation is also a cultural issue. The paper will start with the mentioned cultural aspects related to a self-driving car and will continue with the big picture of the system.", "title": "" }, { "docid": "7ae332505306f94f8f2b4e3903188126", "text": "Clustering Web services would greatly boost the ability of Web service search engine to retrieve relevant services. The performance of traditional Web service description language (WSDL)-based Web service clustering is not satisfied, due to the singleness of data source. Recently, Web service search engines such as Seekda! 
allow users to manually annotate Web services using tags, which describe functions of Web services or provide additional contextual and semantical information. In this paper, we cluster Web services by utilizing both WSDL documents and tags. To handle the clustering performance limitation caused by uneven tag distribution and noisy tags, we propose a hybrid Web service tag recommendation strategy, named WSTRec, which employs tag co-occurrence, tag mining, and semantic relevance measurement for tag recommendation. Extensive experiments are conducted based on our real-world dataset, which consists of 15,968 Web services. The experimental results demonstrate the effectiveness of our proposed service clustering and tag recommendation strategies. Specifically, compared with traditional WSDL-based Web service clustering approaches, the proposed approach produces gains in both precision and recall for up to 14 % in most cases.", "title": "" }, { "docid": "acb0f1e123cb686b4aeab418f380bd79", "text": "Surface parameterization is necessary for many graphics tasks: texture-preserving simplification, remeshing, surface painting, and precomputation of solid textures. The stretch caused by a given parameterization determines the sampling rate on the surface. In this article, we present an automatic parameterization method for segmenting a surface into patches that are then flattened with little stretch.\n Many objects consist of regions of relatively simple shapes, each of which has a natural parameterization. Based on this observation, we describe a three-stage feature-based patch creation method for manifold surfaces. The first two stages, genus reduction and feature identification, are performed with the help of distance-based surface functions. In the last stage, we create one or two patches for each feature region based on a covariance matrix of the feature's surface points.\n To reduce stretch during patch unfolding, we notice that stretch is a 2 × 2 tensor, which in ideal situations is the identity. Therefore, we use the <i>Green-Lagrange tensor</i> to measure and to guide the optimization process. Furthermore, we allow the boundary vertices of a patch to be optimized by adding <i>scaffold triangles</i>. We demonstrate our feature-based patch creation and patch unfolding methods for several textured models.\n Finally, to evaluate the quality of a given parameterization, we describe an image-based error measure that takes into account stretch, seams, smoothness, packing efficiency, and surface visibility.", "title": "" }, { "docid": "9eabe9a867edbceee72bd20d483ad886", "text": "Inspired by recent advances of deep learning in instance segmentation and object tracking, we introduce the concept of convnet-based guidance applied to video object segmentation. Our model proceeds on a per-frame basis, guided by the output of the previous frame towards the object of interest in the next frame. We demonstrate that highly accurate object segmentation in videos can be enabled by using a convolutional neural network (convnet) trained with static images only. The key component of our approach is a combination of offline and online learning strategies, where the former produces a refined mask from the previous frame estimate and the latter allows to capture the appearance of the specific object instance. Our method can handle different types of input annotations such as bounding boxes and segments while leveraging an arbitrary amount of annotated frames. 
Therefore our system is suitable for diverse applications with different requirements in terms of accuracy and efficiency. In our extensive evaluation, we obtain competitive results on three different datasets, independently from the type of input annotation.", "title": "" }, { "docid": "a0a13e7e5ce06e5cc28a2b23ea64c8f5", "text": "The efficacy study was performed to prove the equivalent efficacy of dexibuprofen compared to the double dose of racemic ibuprofen and to show a clinical dose-response relationship of dexibuprofen. The 1-year tolerability study was carried out to investigate the tolerability of dexibuprofen. In the efficacy study 178 inpatients with osteoarthritis of the hip were assigned to 600 or 1200 mg of dexibuprofen or 2400 mg of racemic ibuprofen daily. The primary end-point was the improvement of the WOMAC OA index. A 1-year open tolerability study included 223 outpatients pooled from six studies. The main parameter was the incidence of clinical adverse events. In the efficacy study the evaluation of the improvement of the WOMAC OA index showed equivalence of dexibuprofen 400 mg t.i.d. compared to racemic ibuprofen 800 mg t.i.d., with dexibuprofen being borderline superior (P = 0.055). The comparison between the 400 mg t.i.d. and 200 mg t.i.d. doses confirmed a significant superior efficacy of dexibuprofen 400 mg (P = 0.023). In the tolerability study the overall incidence of clinical adverse events was 15.2% (GI tract 11.7%, CNS 1.3%, skin 1.3%, others 0.9%). The active enantiomer dexibuprofen proved to be an effective NSAID with a significant dose-response relationship. Compared to the double dose of racemic ibuprofen, dexibuprofen was at least equally efficient, with borderline superiority over dexibuprofen (P = 0.055). The tolerability study in 223 patients on dexibuprofen showed an incidence of clinical adverse events of 15.2% after 12 months. The results of the studies suggest that dexibuprofen is an effective NSAID with good tolerability.", "title": "" }, { "docid": "ab662b1dd07a7ae868f70784408e1ce1", "text": "We use autoencoders to create low-dimensional embeddings of underlying patient phenotypes that we hypothesize are a governing factor in determining how different patients will react to different interventions. We compare the performance of autoencoders that take fixed length sequences of concatenated timesteps as input with a recurrent sequence-to-sequence autoencoder. We evaluate our methods on around 35,500 patients from the latest MIMIC III dataset from Beth Israel Deaconess Hospital.", "title": "" } ]
scidocsrr
08783703748f4805351206e24d216c29
Development of extensible open information extraction
[ { "docid": "5f2818d3a560aa34cc6b3dbfd6b8f2cc", "text": "Open Information Extraction (IE) systems extract relational tuples from text, without requiring a pre-specified vocabulary, by identifying relation phrases and associated arguments in arbitrary sentences. However, stateof-the-art Open IE systems such as REVERB and WOE share two important weaknesses – (1) they extract only relations that are mediated by verbs, and (2) they ignore context, thus extracting tuples that are not asserted as factual. This paper presents OLLIE, a substantially improved Open IE system that addresses both these limitations. First, OLLIE achieves high yield by extracting relations mediated by nouns, adjectives, and more. Second, a context-analysis step increases precision by including contextual information from the sentence in the extractions. OLLIE obtains 2.7 times the area under precision-yield curve (AUC) compared to REVERB and 1.9 times the AUC of WOE.", "title": "" }, { "docid": "ebaeacf1c0eeb4a4818b4ac050e60b0c", "text": "Open information extraction (Open IE) systems aim to obtain relation tuples with highly scalable extraction in portable across domain by identifying a variety of relation phrases and their arguments in arbitrary sentences. The first generation of Open IE learns linear chain models based on unlexicalized features such as Part-of-Speech (POS) or shallow tags to label the intermediate words between pair of potential arguments for identifying extractable relations. Open IE currently is developed in the second generation that is able to extract instances of the most frequently observed relation types such as Verb, Noun and Prep, Verb and Prep, and Infinitive with deep linguistic analysis. They expose simple yet principled ways in which verbs express relationships in linguistics such as verb phrase-based extraction or clause-based extraction. They obtain a significantly higher performance over previous systems in the first generation. In this paper, we describe an overview of two Open IE generations including strengths, weaknesses and application areas.", "title": "" } ]
[ { "docid": "f271596a45a3104554bfe975ac8b4d6c", "text": "In many regions of the visual system, the activity of a neuron is normalized by the activity of other neurons in the same region. Here we show that a similar normalization occurs during olfactory processing in the Drosophila antennal lobe. We exploit the orderly anatomy of this circuit to independently manipulate feedforward and lateral input to second-order projection neurons (PNs). Lateral inhibition increases the level of feedforward input needed to drive PNs to saturation, and this normalization scales with the total activity of the olfactory receptor neuron (ORN) population. Increasing total ORN activity also makes PN responses more transient. Strikingly, a model with just two variables (feedforward and total ORN activity) accurately predicts PN odor responses. Finally, we show that discrimination by a linear decoder is facilitated by two complementary transformations: the saturating transformation intrinsic to each processing channel boosts weak signals, while normalization helps equalize responses to different stimuli.", "title": "" }, { "docid": "4538c5874872a0081593407d09e4c6fa", "text": "PatternAttribution is a recent method, introduced in the vision domain, that explains classifications of deep neural networks. We demonstrate that it also generates meaningful interpretations in the language domain.", "title": "" }, { "docid": "d4793c300bca8137d0da7ffdde75a72b", "text": "The expectation-maximization (EM) method can facilitate maximizing likelihood functions that arise in statistical estimation problems. In the classical EM paradigm, one iteratively maximizes the conditional log-likelihood of a single unobservable complete data space, rather than maximizing the intractable likelihood function for the measured or incomplete data. EM algorithms update all parameters simultaneously, which has two drawbacks: 1) slow convergence, and 2) difficult maximization steps due to coupling when smoothness penalties are used. This paper describes the space-alternating generalized EM (SAGE) method, which updates the parameters sequentially by alternating between several small hidden-data spaces defined by the algorithm designer. We prove that the sequence of estimates monotonically increases the penalized-likelihood objective, we derive asymptotic convergence rates, and we provide sufficient conditions for monotone convergence in norm. Two signal processing applications illustrate the method: estimation of superimposed signals in Gaussian noise, and image reconstruction from Poisson measurements. In both applications, our SAGE algorithms easily accommodate smoothness penalties and converge faster than the EM algorithms.", "title": "" }, { "docid": "3b54f22dd95670f618650f2d71e58068", "text": "This paper proposes a novel multi-view human action recognition method by discovering and sharing common knowledge among different video sets captured in multiple viewpoints. To our knowledge, we are the first to treat a specific view as target domain and the others as source domains and consequently formulate the multi-view action recognition into the cross-domain learning framework. First, the classic bag-of-visual word framework is implemented for visual feature extraction in individual viewpoints. Then, we propose a cross-domain learning method with block-wise weighted kernel function matrix to highlight the saliency components and consequently augment the discriminative ability of the model. 
Extensive experiments are implemented on IXMAS, the popular multi-view action dataset. The experimental results demonstrate that the proposed method can consistently outperform the state of the arts.", "title": "" }, { "docid": "8ad20ab4523e4cc617142a2de299dd4a", "text": "OBJECTIVE\nTo determine the reliability and internal validity of the Hypospadias Objective Penile Evaluation (HOPE)-score, a newly developed scoring system assessing the cosmetic outcome in hypospadias.\n\n\nPATIENTS AND METHODS\nThe HOPE scoring system incorporates all surgically-correctable items: position of meatus, shape of meatus, shape of glans, shape of penile skin and penile axis. Objectivity was established with standardized photographs, anonymously coded patients, independent assessment by a panel, standards for a \"normal\" penile appearance, reference pictures and assessment of the degree of abnormality. A panel of 13 pediatric urologists completed 2 questionnaires, each consisting of 45 series of photographs, at an interval of at least 1 week. The inter-observer reliability, intra-observer reliability and internal validity were analyzed.\n\n\nRESULTS\nThe correlation coefficients for the HOPE-score were as follows: intra-observer reliability 0.817, inter-observer reliability 0.790, \"non-parametric\" internal validity 0.849 and \"parametric\" internal validity 0.842. These values reflect good reproducibility, sufficient agreement among observers and a valid measurement of differences and similarities in cosmetic appearance.\n\n\nCONCLUSIONS\nThe HOPE-score is the first scoring system that fulfills the criteria of a valid measurement tool: objectivity, reliability and validity. These favorable properties support its use as an objective outcome measure of the cosmetic result after hypospadias surgery.", "title": "" }, { "docid": "5fa860515f72bca0667134bb61d2f695", "text": "In the broad field of evaluation, the importance of stakeholders is often acknowledged and different categories of stakeholders are identified. Far less frequent is careful attention to analysis of stakeholders' interests, needs, concerns, power, priorities, and perspectives and subsequent application of that knowledge to the design of evaluations. This article is meant to help readers understand and apply stakeholder identification and analysis techniques in the design of credible evaluations that enhance primary intended use by primary intended users. While presented using a utilization-focused-evaluation (UFE) lens, the techniques are not UFE-dependent. The article presents a range of the most relevant techniques to identify and analyze evaluation stakeholders. The techniques are arranged according to their ability to inform the process of developing and implementing an evaluation design and of making use of the evaluation's findings.", "title": "" }, { "docid": "f19f6c8caec01e3ca9c14981c0ea05fa", "text": "Non-invasive cuff-less Blood Pressure (BP) estimation from Photoplethysmogram (PPG) is a well known challenge in the field of affordable healthcare. This paper presents a set of improvements over an existing method that estimates BP using 2-element Windkessel model from PPG signal. A noisy PPG corpus is collected using fingertip pulse oximeter, from two different locations in India. Exhaustive pre-processing techniques, such as filtering, baseline and topline correction are performed on the noisy PPG signals, followed by the selection of consistent cycles. 
Subsequently, the most relevant PPG features and demographic features are selected through Maximal Information Coefficient (MIC) score for learning the latent parameters controlling BP. Experimental results reveal that overall error in estimating BP lies within 10% of a commercially available digital BP monitoring device. Also, use of alternative latent parameters that incorporate the variation in cardiac output, shows a better trend following for abnormally low and high BP.", "title": "" }, { "docid": "bd42bffcbb76d4aadde3df502326655a", "text": "We present a novel class of actor-critic algorithms for actors consisting of sets of interacting modules. We present, analyze theoretically, and empirically evaluate an update rule for each module, which requires only local information: the module’s input, output, and the TD error broadcast by a critic. Such updates are necessary when computation of compatible features becomes prohibitively difficult and are also desirable to increase the biological plausibility of reinforcement learning methods.", "title": "" }, { "docid": "eee5ffff364575afad1dcebbf169777b", "text": "In this paper, we proposed the multiclass support vector machine (SVM) with the error-correcting output codes for the multiclass electroencephalogram (EEG) signals classification problem. The probabilistic neural network (PNN) and multilayer perceptron neural network were also tested and benchmarked for their performance on the classification of the EEG signals. Decision making was performed in two stages: feature extraction by computing the wavelet coefficients and the Lyapunov exponents and classification using the classifiers trained on the extracted features. The purpose was to determine an optimum classification scheme for this problem and also to infer clues about the extracted features. Our research demonstrated that the wavelet coefficients and the Lyapunov exponents are the features which well represent the EEG signals and the multiclass SVM and PNN trained on these features achieved high classification accuracies", "title": "" }, { "docid": "7456ceee02f50c9e92a665d362a9a419", "text": "Visualization of dynamically changing networks (graphs) is a significant challenge for researchers. Previous work has experimentally compared animation, small multiples, and other techniques, and found trade-offs between these. One potential way to avoid such trade-offs is to combine previous techniques in a hybrid visualization. We present two taxonomies of visualizations of dynamic graphs: one of non-hybrid techniques, and one of hybrid techniques. We also describe a prototype, called DiffAni, that allows a graph to be visualized as a sequence of three kinds of tiles: diff tiles that show difference maps over some time interval, animation tiles that show the evolution of the graph over some time interval, and small multiple tiles that show the graph state at an individual time slice. This sequence of tiles is ordered by time and covers all time slices in the data. An experimental evaluation of DiffAni shows that our hybrid approach has advantages over non-hybrid techniques in certain cases.", "title": "" }, { "docid": "e680f8b83e7a2137321cc644724827de", "text": "A dual-band antenna is developed on a flexible Liquid Crystal Polymer (LCP) substrate for simultaneous operation at 2.45 and 5.8 GHz in high frequency Radio Frequency IDentification (RFID) systems. 
The response of the low profile double T-shaped slot antenna is preserved when the antenna is placed on platforms such as wood and cardboard, and when bent to conform to a cylindrical plastic box. Furthermore, experiments show that the antenna is still operational when placed at a distance of around 5cm from a metallic surface.", "title": "" }, { "docid": "fd0cfef7be75a9aa98229c25ffaea864", "text": "A capsule is a group of neurons whose activity vector represents the instantiation parameters of a specific type of entity such as an object or an object part. We use the length of the activity vector to represent the probability that the entity exists and its orientation to represent the instantiation parameters. Active capsules at one level make predictions, via transformation matrices, for the instantiation parameters of higher-level capsules. When multiple predictions agree, a higher level capsule becomes active. We show that a discrimininatively trained, multi-layer capsule system achieves state-of-the-art performance on MNIST and is considerably better than a convolutional net at recognizing highly overlapping digits. To achieve these results we use an iterative routing-by-agreement mechanism: A lower-level capsule prefers to send its output to higher level capsules whose activity vectors have a big scalar product with the prediction coming from the lower-level capsule.", "title": "" }, { "docid": "5f78f4f492b45eb5efd50d2cda340413", "text": "This study examined the anatomy of the infrapatellar fat pad (IFP) in relation to knee pathology and surgical approaches. Eight embalmed knees were dissected via semicircular parapatellar incisions and each IFP was examined. Their volume, shape and constituent features were recorded. They were found in all knees and were constant in shape, consisting of a central body with medial and lateral extensions. The ligamentum mucosum was found inferior to the central body in all eight knees, while a fat tag was located superior to the central body in seven cases. Two clefts were consistently found on the posterior aspect of the IFP, a horizontal cleft below the ligamentum mucosum in six knees and a vertical cleft above, in seven cases. Our study found that the IFP is a constant structure in the knee joint, which may play a number of roles in knee joint function and pathology. Its significance in knee surgery is discussed.", "title": "" }, { "docid": "fed23432144a6929c4f3442b10157771", "text": "Knowledge has widely been acknowledged as one of the most important factors for corporate competitiveness, and we have witnessed an explosion of IS/IT solutions claiming to provide support for knowledge management (KM). A relevant question to ask, though, is how systems and technology intended for information such as the intranet can be able to assist in the managing of knowledge. To understand this, we must examine the relationship between information and knowledge. Building on Polanyi’s theories, I argue that all knowledge is tacit, and what can be articulated and made tangible outside the human mind is merely information. However, information and knowledge affect one another. By adopting a multi-perspective of the intranet where information, awareness, and communication are all considered, this interaction can best be supported and the intranet can become a useful and people-inclusive KM environment. 1. From philosophy to IT Ever since the ancient Greek period, philosophers have discussed what knowledge is. 
Early thinkers such as Plato and Aristotle were followed by Hobbes and Locke, Kant and Hegel, and into the 20th century by the likes of Wittgenstein, Popper, and Kuhn, to name but a few of the more prominent western philosophers. In recent years, we have witnessed a booming interest in knowledge also from other disciplines; organisation theorists, information system developers, and economists have all been swept away by the knowledge management avalanche. It seems, though, that the interest is particularly strong within the IS/IT community, where new opportunities to develop computer systems are welcomed. A plausible question to ask then is how knowledge relates to information technology (IT). Can IT at all be used to handle knowledge, and if so, what sort of knowledge? What sorts of knowledge are there? What is knowledge? It seems we have little choice but to return to these eternal questions, but belonging to the IS/IT community, we should not approach knowledge from a philosophical perspective. As observed by Alavi and Leidner, the knowledge-based theory of the firm was never built on a universal truth of what knowledge really is but on a pragmatic interest in being able to manage organisational knowledge [2]. The discussion in this paper shall therefore be aimed at addressing knowledge from an IS/IT perspective, trying to answer two overarching questions: “What does the relationship between information and knowledge look like?” and “What role does an intranet have in this relationship?” The purpose is to critically review the contemporary KM literature in order to clarify the relationships between information and knowledge that commonly and implicitly are assumed within the IS/IT community. Epistemologically, this paper shall address the difference between tacit and explicit knowledge by accounting for some of the views more commonly found in the KM literature. Some of these views shall also be questioned, and the prevailing assumption that tacit and explicit are two forms of knowledge shall be criticised by returning to Polanyi's original work. My interest in the tacit side of knowledge, i.e. the aspects of knowledge that are omnipresent, taken for granted, and affecting our understanding without us being aware of it, has strongly influenced the content of this paper. Ontology-wise, knowledge may be seen to exist on different levels, i.e. individual, group, organisation and inter-organisational [23]. Here, my primary interest is on the group and organisational levels. However, these two levels are obviously made up of individuals and we are thus bound to examine the personal aspects of knowledge as well, albeit from a macro perspective. 2. Opposite traditions – and a middle way? When examining the knowledge literature, two separate tracks can be identified: the commodity view and the community view [35]. The commodity view, or the objective approach to knowledge as some absolute and universal truth, has long been the dominating view within science. Rooted in the positivism of the mid-19th century, the commodity view is still especially strong in the natural sciences. Disciples of this tradition understand knowledge as an artefact that can be handled in discrete units and that people may possess. Knowledge is a thing for which we can gain evidence, and knowledge as such is separated from the knower [33]. Metaphors such as drilling, mining, and harvesting are used to describe how knowledge is being managed. There is also another tradition that can be labelled the community view or the constructivist approach. This tradition can be traced back to Locke and Hume but is in its modern form rooted in the critique of the established quantitative approach to science that emerged primarily amongst social scientists during the 1960's, and resulted in the publication of books by Garfinkel, Bourdieu, Habermas, Berger and Luckmann, and Glaser and Strauss. These authors argued that reality (and hence also knowledge) should be understood as socially constructed. According to this tradition, it is impossible to define knowledge universally; it can only be defined in practice, in the activities of and interactions between individuals. Thus, some understand knowledge to be universal and context-independent while others conceive it as situated and based on individual experiences.", "title": "" }, { "docid": "e76afdc4a867789e6bcc92876a6b52af", "text": "An Optimal fuzzy logic guidance (OFLG) law for a surface-to-air homing missile is introduced. The introduced approach is based on the well-known proportional navigation guidance (PNG) law. Particle Swarm Optimization (PSO) is used to optimize the membership functions' (MFs) parameters of the proposed design. The distribution of the MFs is obtained by minimizing a nonlinear constrained multi-objective optimization problem where control effort and miss distance are treated as competing objectives. The performance of the introduced guidance law is compared with the classical fuzzy logic guidance (FLG) law as well as the PNG law. The simulation results show that OFLG performs better than other guidance laws. Moreover, the introduced design is shown to perform well with the existence of noisy measurements.", "title": "" }, { "docid": "15fd626d5a6eb1258b8846137c62f97d", "text": "Since leadership plays a vital role in democratic movements, understanding the nature of democratic leadership is essential. However, the definition of democratic leadership is unclear (Gastil, 1994). Also, little research has defined democratic leadership in the context of democratic movements. The leadership literature has paid no attention to democratic leadership in such movements, focusing on democratic leadership within small groups and organizations. This study proposes a framework of democratic leadership in democratic movements. The framework includes contexts, motivations, characteristics, and outcomes of democratic leadership. The study considers sacrifice, courage, symbolism, citizen participation, and vision as major characteristics in the display of democratic leadership in various political, social, and cultural contexts. Applying the framework to Nelson Mandela, Lech Walesa, and Dae Jung Kim, the study considers them as exemplary models of democratic leadership in democratic movements for achieving democracy. They have shown crucial characteristics of democratic leadership, offering lessons for democratic governance.", "title": "" }, { "docid": "74ecfe68112ba6309ac355ba1f7b9818", "text": "We present a novel approach to probabilistic time series forecasting that combines state space models with deep learning. 
By parametrizing a per-time-series linear state space model with a jointly-learned recurrent neural network, our method retains desired properties of state space models such as data efficiency and interpretability, while making use of the ability to learn complex patterns from raw data offered by deep learning approaches. Our method scales gracefully from regimes where little training data is available to regimes where data from large collection of time series can be leveraged to learn accurate models. We provide qualitative as well as quantitative results with the proposed method, showing that it compares favorably to the state-of-the-art.", "title": "" }, { "docid": "7100b0adb93419a50bbaeb1b7e32edf5", "text": "Fractals have been very successful in quantifying the visual complexity exhibited by many natural patterns, and have captured the imagination of scientists and artists alike. Our research has shown that the poured patterns of the American abstract painter Jackson Pollock are also fractal. This discovery raises an intriguing possibility - are the visual characteristics of fractals responsible for the long-term appeal of Pollock's work? To address this question, we have conducted 10 years of scientific investigation of human response to fractals and here we present, for the first time, a review of this research that examines the inter-relationship between the various results. The investigations include eye tracking, visual preference, skin conductance, and EEG measurement techniques. We discuss the artistic implications of the positive perceptual and physiological responses to fractal patterns.", "title": "" }, { "docid": "2cfc7eeae3259a43a24ef56932d8b27f", "text": "This paper presents Platener, a system that allows quickly fabricating intermediate design iterations of 3D models, a process also known as low-fidelity fabrication. Platener achieves its speed-up by extracting straight and curved plates from the 3D model and substituting them with laser cut parts of the same size and thickness. Only the regions that are of relevance to the current design iteration are executed as full-detail 3D prints. Platener connects the parts it has created by automatically inserting joints. To help fast assembly it engraves instructions. Platener allows users to customize substitution results by (1) specifying fidelity-speed tradeoffs, (2) choosing whether or not to convert curved surfaces to plates bent using heat, and (3) specifying the conversion of individual plates and joints interactively. Platener is designed to best preserve the fidelity of func-tional objects, such as casings and mechanical tools, all of which contain a large percentage of straight/rectilinear elements. Compared to other low-fab systems, such as faBrickator and WirePrint, Platener better preserves the stability and functionality of such objects: the resulting assemblies have fewer parts and the parts have the same size and thickness as in the 3D model. To validate our system, we converted 2.250 3D models downloaded from a 3D model site (Thingiverse). Platener achieves a speed-up of 10 or more for 39.5% of all objects.", "title": "" } ]
scidocsrr
811485a5cf46d72e029480ba51b2cbbe
Determining the Chemical Compositions of Garlic Plant and its Existing Active Element
[ { "docid": "85e63b1689e6fd77cdfc1db191ba78ee", "text": "Singh VK, Singh DK. Pharmacological Effects of Garlic (Allium sativum L.). ARBS Annu Rev Biomed Sci 2008;10:6-26. Garlic (Allium sativum L.) is a bulbous herb used as a food item, spice and medicine in different parts of the world. Its medicinal use is based on traditional experience passed from generation to generation. Researchers from various disciplines are now directing their efforts towards discovering the effects of garlic on human health. Interest in garlic among researchers, particularly those in medical profession, has stemmed from the search for a drug that has a broad-spectrum therapeutic effect with minimal toxicity. Recent studies indicate that garlic extract has antimicrobial activity against many genera of bacteria, fungi and viruses. The role of garlic in preventing cardiovascular disease has been acclaimed by several authors. Chemical constituents of garlic have been investigated for treatment of hyperlipidemia, hypertension, platelet aggregation and blood fibrinolytic activity. Experimental data indicate that garlic may have anticarcinogenic effect. Recent researches in the area of pest control show that garlic has strong insecticidal, nematicidal, rodenticidal and molluscicidal activity. Despite field trials and laboratory experiments on the pesticidal activity of garlic have been conducted, more studies on the way of delivery in environment and mode of action are still recommended for effective control of pest. Adverse effects of oral ingestion and topical exposure of garlic include body odor, allergic reactions, acceleration in the effects of anticoagulants and reduction in the efficacy of anti-AIDS drug Saquinavir. ©by São Paulo State University ISSN 1806-8774", "title": "" } ]
[ { "docid": "2b3335d6fb1469c4848a201115a78e2c", "text": "Laser grooving is used for the singulation of advanced CMOS wafers since it is believed that it exerts lower mechanical stress than traditional blade dicing. The very local heating of wafers, however, might result in high thermal stress around the heat affected zone. In this work we present a model to predict the temperature distribution, material removal, and the resulting stress, in a sandwiched structure of metals and dielectric materials that are commonly found in the back-end of line of semiconductor wafers. Simulation results on realistic three dimensional back-end structures reveal that the presence of metals clearly affects both the ablation depth, and the stress in the material. Experiments showed a similar observation for the ablation depth. The shape of the crater, however, was found to be more uniform than predicted by simulations, which is probably due to the redistribution of molten metal.", "title": "" }, { "docid": "e273298153872073e463662b5d6d8931", "text": "The lack of readily-available large corpora of aligned monolingual sentence pairs is a major obstacle to the development of Statistical Machine Translation-based paraphrase models. In this paper, we describe the use of annotated datasets and Support Vector Machines to induce larger monolingual paraphrase corpora from a comparable corpus of news clusters found on the World Wide Web. Features include: morphological variants; WordNet synonyms and hypernyms; loglikelihood-based word pairings dynamically obtained from baseline sentence alignments; and formal string features such as word-based edit distance. Use of this technique dramatically reduces the Alignment Error Rate of the extracted corpora over heuristic methods based on position of the sentences in the text.", "title": "" }, { "docid": "52c9ee7e057ff9ade5daf44ea713e889", "text": "In this work, we present a novel peak-piloted deep network (PPDN) that uses a sample with peak expression (easy sample) to supervise the intermediate feature responses for a sample of non-peak expression (hard sample) of the same type and from the same subject. The expression evolving process from nonpeak expression to peak expression can thus be implicitly embedded in the network to achieve the invariance to expression intensities.", "title": "" }, { "docid": "2a827e858bf93cd5edba7feb3c0448f9", "text": "Kinetic analyses (joint moments, powers and work) of the lower limbs were performed during normal walking to determine what further information can be gained from a three-dimensional model over planar models. It was to be determined whether characteristic moment and power profiles exist in the frontal and transverse planes across subjects and how much work was performed in these planes. Kinetic profiles from nine subjects were derived using a three-dimensional inverse dynamics model of the lower limbs and power profiles were then calculated by a dot product of the angular velocities and joint moments resolved in a global reference system. Characteristic joint moment profiles across subjects were found for the hip, knee and ankle joints in all planes except for the ankle frontal moment. As expected, the major portion of work was performed in the plane of progression since the goal of locomotion is to support the body against gravity while generating movements which propel the body forward. 
However, the results also showed that substantial work was done in the frontal plane by the hip during walking (23% of the total work at that joint). The characteristic joint profiles suggest defined motor patterns and functional roles in the frontal and transverse planes. Kinetic analysis in three dimensions is necessary particularly if the hip joint is being examined as a substantial amount of work was done in the frontal plane of the hip to control the pelvis and trunk against gravitational forces.", "title": "" }, { "docid": "34bd41f7384d6ee4d882a39aec167b3e", "text": "This paper presents a robust feedback controller for ball and beam system (BBS). The BBS is a nonlinear system in which a ball has to be balanced on a particular beam position. The proposed nonlinear controller designed for the BBS is based upon Backstepping control technique which guarantees the boundedness of tracking error. To tackle the unknown disturbances, an external disturbance estimator (EDE) has been employed. The stability analysis of the overall closed loop robust control system has been worked out in the sense of Lyapunov theory. Finally, the simulation studies have been done to demonstrate the suitability of proposed scheme.", "title": "" }, { "docid": "4a837ccd9e392f8c7682446d9a3a3743", "text": "This paper investigates the applicability of Genetic Programming type systems to dynamic game environments. Grammatical Evolution was used to evolve Behaviour Trees, in order to create controllers for the Mario AI Benchmark. The results obtained reinforce the applicability of evolutionary programming systems to the development of artificial intelligence in games, and in dynamic systems in general, illustrating their viability as an alternative to more standard AI techniques.", "title": "" }, { "docid": "d563b025b084b53c30afba4211870f2d", "text": "Collaborative filtering (CF) techniques recommend items to users based on their historical ratings. In real-world scenarios, user interests may drift over time since they are affected by moods, contexts, and pop culture trends. This leads to the fact that a user’s historical ratings comprise many aspects of user interests spanning a long time period. However, at a certain time slice, one user’s interest may only focus on one or a couple of aspects. Thus, CF techniques based on the entire historical ratings may recommend inappropriate items. In this paper, we consider modeling user-interest drift over time based on the assumption that each user has multiple counterparts over temporal domains and successive counterparts are closely related. We adopt the cross-domain CF framework to share the static group-level rating matrix across temporal domains, and let user-interest distribution over item groups drift slightly between successive temporal domains. The derived method is based on a Bayesian latent factor model which can be inferred using Gibbs sampling. Our experimental results show that our method can achieve state-of-the-art recommendation performance as well as explicitly track and visualize user-interest drift over time.", "title": "" }, { "docid": "5399b924cdf1d034a76811360b6c018d", "text": "Psychological construction models of emotion state that emotions are variable concepts constructed by fundamental psychological processes, whereas according to basic emotion theory, emotions cannot be divided into more fundamental units and each basic emotion is represented by a unique and innate neural circuitry. 
In a previous study, we found evidence for the psychological construction account by showing that several brain regions were commonly activated when perceiving different emotions (i.e. a general emotion network). Moreover, this set of brain regions included areas associated with core affect, conceptualization and executive control, as predicted by psychological construction models. Here we investigate directed functional brain connectivity in the same dataset to address two questions: 1) is there a common pathway within the general emotion network for the perception of different emotions and 2) if so, does this common pathway contain information to distinguish between different emotions? We used generalized psychophysiological interactions and information flow indices to examine the connectivity within the general emotion network. The results revealed a general emotion pathway that connects neural nodes involved in core affect, conceptualization, language and executive control. Perception of different emotions could not be accurately classified based on the connectivity patterns from the nodes of the general emotion pathway. Successful classification was achieved when connections outside the general emotion pathway were included. We propose that the general emotion pathway functions as a common pathway within the general emotion network and is involved in shared basic psychological processes across emotions. However, additional connections within the general emotion network are required to classify different emotions, consistent with a constructionist account.", "title": "" }, { "docid": "485b48bb7b489d2be73de84994a16e42", "text": "This paper presents Conflux, a fast, scalable and decentralized blockchain system that optimistically process concurrent blocks without discarding any as forks. The Conflux consensus protocol represents relationships between blocks as a direct acyclic graph and achieves consensus on a total order of the blocks. Conflux then, from the block order, deterministically derives a transaction total order as the blockchain ledger. We evaluated Conflux on Amazon EC2 clusters with up to 20k full nodes. Conflux achieves a transaction throughput of 5.76GB/h while confirming transactions in 4.5-7.4 minutes. The throughput is equivalent to 6400 transactions per second for typical Bitcoin transactions. Our results also indicate that when running Conflux, the consensus protocol is no longer the throughput bottleneck. The bottleneck is instead at the processing capability of individual nodes.", "title": "" }, { "docid": "73e398a5ae434dbd2a10ddccd2cfb813", "text": "Face alignment aims to estimate the locations of a set of landmarks for a given image. This problem has received much attention as evidenced by the recent advancement in both the methodology and performance. However, most of the existing works neither explicitly handle face images with arbitrary poses, nor perform large-scale experiments on non-frontal and profile face images. In order to address these limitations, this paper proposes a novel face alignment algorithm that estimates both 2D and 3D landmarks and their 2D visibilities for a face image with an arbitrary pose. By integrating a 3D point distribution model, a cascaded coupled-regressor approach is designed to estimate both the camera projection matrix and the 3D landmarks. Furthermore, the 3D model also allows us to automatically estimate the 2D landmark visibilities via surface normal. 
We use a substantially larger collection of all-pose face images to evaluate our algorithm and demonstrate superior performances than the state-of-the-art methods.", "title": "" }, { "docid": "e7b7c37a340b4a22dddff59fc6651218", "text": "Different types of printing methods have recently attracted interest as emerging technologies for fabrication of drug delivery systems. If printing is combined with different oral film manufacturing technologies such as solvent casting and other techniques, multifunctional structures can be created to enable further complexity and high level of sophistication. This review paper intends to provide profound understanding and future perspectives for the potential use of printing technologies in the preparation of oral film formulations as novel drug delivery systems. The described concepts include advanced multi-layer coatings, stacked systems, and integrated bioactive multi-compartments, which comprise of integrated combinations of diverse materials to form sophisticated bio-functional constructs. The advanced systems enable tailored dosing for individual drug therapy, easy and safe manufacturing of high-potent drugs, development and manufacturing of fixed-dose combinations and product tracking for anti-counterfeiting strategies.", "title": "" }, { "docid": "6082c0252dffe7903512e36f13da94eb", "text": "Thousands of storage tanks in oil refineries have to be inspected manually to prevent leakage and/or any other potential catastrophe. A wall climbing robot with permanent magnet adhesion mechanism equipped with nondestructive sensor has been designed. The robot can be operated autonomously or manually. In autonomous mode the robot uses an ingenious coverage algorithm based on distance transform function to navigate itself over the tank surface in a back and forth motion to scan the external wall for the possible faults using sensors without any human intervention. In manual mode the robot can be navigated wirelessly from the ground station to any location of interest. Preliminary experiment has been carried out to test the prototype.", "title": "" }, { "docid": "45a15455945fdd03ee726b285b8dd75a", "text": "The nonequispaced Fourier transform arises in a variety of application areas, from medical imaging to radio astronomy to the numerical solution of partial differential equations. In a typical problem, one is given an irregular sampling of N data in the frequency domain and one is interested in reconstructing the corresponding function in the physical domain. When the sampling is uniform, the fast Fourier transform (FFT) allows this calculation to be computed in O(N logN) operations rather than O(N2) operations. Unfortunately, when the sampling is nonuniform, the FFT does not apply. Over the last few years, a number of algorithms have been developed to overcome this limitation and are often referred to as nonuniform FFTs (NUFFTs). These rely on a mixture of interpolation and the judicious use of the FFT on an oversampled grid [A. Dutt and V. Rokhlin, SIAM J. Sci. Comput., 14 (1993), pp. 1368–1383]. In this paper, we observe that one of the standard interpolation or “gridding” schemes, based on Gaussians, can be accelerated by a significant factor without precomputation and storage of the interpolation weights. 
This is of particular value in two- and three-dimensional settings, saving either 10^d N in storage in d dimensions or a factor of about 5–10 in CPU time (independent of dimension).", "title": "" }, { "docid": "2f23d51ffd54a6502eea07883709d016", "text": "Named entity recognition (NER) is a popular domain of natural language processing. For this reason, many tools exist to perform this task. Amongst other points, they differ in the processing method they rely upon, the entity types they can detect, the nature of the text they can handle, and their input/output formats. This makes it difficult for a user to select an appropriate NER tool for a specific situation. In this article, we try to answer this question in the context of biographic texts. For this matter, we first constitute a new corpus by annotating 247 Wikipedia articles. We then select 4 publicly available, well-known and free-for-research NER tools for comparison: Stanford NER, Illinois NET, OpenCalais NER WS and Alias-i LingPipe. We apply them to our corpus, assess their performances and compare them. When considering overall performances, a clear hierarchy emerges: Stanford has the best results, followed by LingPipe, Illinois and OpenCalais. However, a more detailed evaluation performed relative to entity types and article categories highlights the fact that their performances are diversely influenced by those factors. This complementarity opens an interesting perspective regarding the combination of these individual tools in order to improve performance.", "title": "" }, { "docid": "ce72785681a085be7f947ab6fa787b79", "text": "A computationally implemented model of the transmission of linguistic behavior over time is presented. In this model [the iterated learning model (ILM)], there is no biological evolution, natural selection, nor any measurement of the success of the agents at communicating (except for results-gathering purposes). Nevertheless, counter to intuition, significant evolution of linguistic behavior is observed. From an initially unstructured communication system (a protolanguage), a fully compositional syntactic meaning-string mapping emerges. Furthermore, given a nonuniform frequency distribution over a meaning space and a production mechanism that prefers short strings, a realistic distribution of string lengths and patterns of stable irregularity emerges, suggesting that the ILM is a good model for the evolution of some of the fundamental features of human language.", "title": "" }, { "docid": "7ba37f2dcf95f36727e1cd0f06e31cc0", "text": "The neonate receiving parenteral nutrition (PN) therapy requires a physiologically appropriate solution in quantity and quality given according to a timely, cost-effective strategy. Maintaining tissue integrity, metabolism, and growth in a neonate is challenging. To support infant growth and influence subsequent development requires critical timing for nutrition assessment and intervention. Providing amino acids to neonates has been shown to improve nitrogen balance, glucose metabolism, and amino acid profiles. In contrast, supplying the lipid emulsions (currently available in the United States) to provide essential fatty acids is not the optimal composition to help attenuate inflammation. Recent investigations with an omega-3 fish oil IV emulsion are promising, but there is a need for further research and development. Complications from PN, however, remain problematic and include infection, hepatic dysfunction, and cholestasis. 
These complications in the neonate can affect morbidity and mortality, thus emphasizing the preference to provide early enteral feedings, as well as medication therapy to improve liver health and outcome. Potential strategies aimed at enhancing PN therapy in the neonate are highlighted in this review, and a summary of guidelines for practical management is included.", "title": "" }, { "docid": "343115505ad21c973475c12c3657d82c", "text": "New transportation fuels are badly needed to reduce our heavy dependence on imported oil and to reduce the release of greenhouse gases that cause global climate change; cellulosic biomass is the only inexpensive resource that can be used for sustainable production of the large volumes of liquid fuels that our transportation sector has historically favored. Furthermore, biological conversion of cellulosic biomass can take advantage of the power of biotechnology to take huge strides toward making biofuels cost competitive. Ethanol production is particularly well suited to marrying this combination of need, resource, and technology. In fact, major advances have already been realized to competitively position cellulosic ethanol with corn ethanol. However, although biotechnology presents important opportunities to achieve very low costs, pretreatment of naturally resistant cellulosic materials is essential if we are to achieve high yields from biological operations; this operation is projected to be the single most expensive processing step, representing about 20% of the total cost. In addition, pretreatment has pervasive impacts on all other major operations in the overall conversion scheme from choice of feedstock through to size reduction, hydrolysis, and fermentation, and on to product recovery, residue processing, and co-product potential. A number of different pretreatments involving biological, chemical, physical, and thermal approaches have been investigated over the years, but only those that employ chemicals currently offer the high yields and low costs vital to economic success. Among the most promising are pretreatments using dilute acid, sulfur dioxide, near-neutral pH control, ammonia expansion, aqueous ammonia, and lime, with significant differences among the sugar-release patterns. Although projected costs for these options are similar when applied to corn stover, a key need now is to dramatically improve our knowledge of these systems with the goal of advancing pretreatment to substantially reduce costs and to accelerate commercial applications. © 2007 Society of Chemical Industry and John Wiley & Sons, Ltd", "title": "" }, { "docid": "0cccb226bb72be281ead8c614bd46293", "text": "We introduce a model for incorporating contextual information (such as geography) in learning vector-space representations of situated language. In contrast to approaches to multimodal representation learning that have used properties of the object being described (such as its color), our model includes information about the subject (i.e., the speaker), allowing us to learn the contours of a word's meaning that are shaped by the context in which it is uttered. 
In a quantitative evaluation on the task of judging geographically informed semantic similarity between representations learned from 1.1 billion words of geo-located tweets, our joint model outperforms comparable independent models that learn meaning in isolation.", "title": "" }, { "docid": "33c5ddb4633cc09c87b8ee26d7c54e51", "text": "INTRODUCTION\nAdvances in technology have revolutionized the medical field and changed the way healthcare is delivered. Unmanned aerial vehicles (UAVs) are the next wave of technological advancements that have the potential to make a huge splash in clinical medicine. UAVs, originally developed for military use, are making their way into the public and private sector. Because they can be flown autonomously and can reach almost any geographical location, the significance of UAVs are becoming increasingly apparent in the medical field.\n\n\nMATERIALS AND METHODS\nWe conducted a comprehensive review of the English language literature via the PubMed and Google Scholar databases using search terms \"unmanned aerial vehicles,\" \"UAVs,\" and \"drone.\" Preference was given to clinical trials and review articles that addressed the keywords and clinical medicine.\n\n\nRESULTS\nPotential applications of UAVs in medicine are broad. Based on articles identified, we grouped UAV application in medicine into three categories: (1) Prehospital Emergency Care; (2) Expediting Laboratory Diagnostic Testing; and (3) Surveillance. Currently, UAVs have been shown to deliver vaccines, automated external defibrillators, and hematological products. In addition, they are also being studied in the identification of mosquito habitats as well as drowning victims at beaches as a public health surveillance modality.\n\n\nCONCLUSIONS\nThese preliminary studies shine light on the possibility that UAVs may help to increase access to healthcare for patients who may be otherwise restricted from proper care due to cost, distance, or infrastructure. As with any emerging technology and due to the highly regulated healthcare environment, the safety and effectiveness of this technology need to be thoroughly discussed. Despite the many questions that need to be answered, the application of drones in medicine appears to be promising and can both increase the quality and accessibility of healthcare.", "title": "" } ]
scidocsrr
37daee87cefd6eabae129bc0df7338dd
Blockchain distributed ledger technologies for biomedical and health care applications
[ { "docid": "9e65315d4e241dc8d4ea777247f7c733", "text": "A long-standing focus on compliance has traditionally constrained development of fundamental design changes for Electronic Health Records (EHRs). We now face a critical need for such innovation, as personalization and data science prompt patients to engage in the details of their healthcare and restore agency over their medical data. In this paper, we propose MedRec: a novel, decentralized record management system to handle EHRs, using blockchain technology. Our system gives patients a comprehensive, immutable log and easy access to their medical information across providers and treatment sites. Leveraging unique blockchain properties, MedRec manages authentication, confidentiality, accountability and data sharing—crucial considerations when handling sensitive information. A modular design integrates with providers' existing, local data storage solutions, facilitating interoperability and making our system convenient and adaptable. We incentivize medical stakeholders (researchers, public health authorities, etc.) to participate in the network as blockchain “miners”. This provides them with access to aggregate, anonymized data as mining rewards, in return for sustaining and securing the network via Proof of Work. MedRec thus enables the emergence of data economics, supplying big data to empower researchers while engaging patients and providers in the choice to release metadata. The purpose of this paper is to expose, in preparation for field tests, a working prototype through which we analyze and discuss our approach and the potential for blockchain in health IT and research.", "title": "" }, { "docid": "8780b620d228498447c4f1a939fa5486", "text": "A new mechanism is proposed for securing a blockchain applied to contracts management such as digital rights management. This mechanism includes a new consensus method using a credibility score and creates a hybrid blockchain by alternately using this new method and proof-of-stake. This makes it possible to prevent an attacker from monopolizing resources and to keep securing blockchains.", "title": "" } ]
[ { "docid": "91c0bd1c3faabc260277c407b7c6af59", "text": "In this paper, we consider the Direct Perception approach for autonomous driving. Previous efforts in this field focused more on feature extraction of the road markings and other vehicles in the scene rather than on the autonomous driving algorithm and its performance under realistic assumptions. Our main contribution in this paper is introducing a new, more robust, and more realistic Direct Perception framework and corresponding algorithm for autonomous driving. First, we compare the top 3 Convolutional Neural Networks (CNN) models in the feature extraction competitions and test their performance for autonomous driving. The experimental results showed that GoogLeNet performs the best in this application. Subsequently, we propose a deep learning based algorithm for autonomous driving, and we refer to our algorithm as GoogLenet for Autonomous Driving (GLAD). Unlike previous efforts, GLAD makes no unrealistic assumptions about the autonomous vehicle or its surroundings, and it uses only five affordance parameters to control the vehicle as compared to the 14 parameters used by prior efforts. Our simulation results show that the proposed GLAD algorithm outperforms previous Direct Perception algorithms both on empty roads and while driving with other surrounding vehicles.", "title": "" }, { "docid": "45a098c09a3803271f218fafd4d951cd", "text": "Recent years have seen a tremendous increase in the demand for wireless bandwidth. To support this demand by innovative and resourceful use of technology, future communication systems will have to shift towards higher carrier frequencies. Due to the tight regulatory situation, frequencies in the atmospheric attenuation window around 300 GHz appear very attractive to facilitate an indoor, short range, ultra high speed THz communication system. In this paper, we investigate the influence of diffuse scattering at such high frequencies on the characteristics of the communication channel and its implications on the non-line-of-sight propagation path. The Kirchhoff approach is verified by an experimental study of diffuse scattering from randomly rough surfaces commonly encountered in indoor environments using a fiber-coupled terahertz time-domain spectroscopy system to perform angle- and frequency-dependent measurements. Furthermore, we integrate the Kirchhoff approach into a self-developed ray tracing algorithm to model the signal coverage of a typical office scenario.", "title": "" }, { "docid": "a9595ea31ebfe07ac9d3f7fccf0d1c05", "text": "The growing movement of biologically inspired design is driven in part by the need for sustainable development and in part by the recognition that nature could be a source of innovation. Biologically inspired design by definition entails cross-domain analogies from biological systems to problems in engineering and other design domains. However, the practice of biologically inspired design at present typically is ad hoc, with little systemization of either biological knowledge for the purposes of engineering design or the processes of transferring knowledge of biological designs to engineering problems. In this paper we present an intricate episode of biologically inspired engineering design that unfolded over an extended period of time. We then analyze our observations in terms of why, what, how, and when questions of analogy. 
This analysis contributes toward a content theory of creative analogies in the context of biologically inspired design.", "title": "" }, { "docid": "96363ec5134359b5bf7c8b67f67971db", "text": "Self adaptive video games are important for rehabilitation at home. Recent works have explored different techniques with satisfactory results but these have a poor use of game design concepts like Challenge and Conservative Handling of Failure. Dynamic Difficult Adjustment with Help (DDA-Help) approach is presented as a new point of view for self adaptive video games for rehabilitation. Procedural Content Generation (PCG) and automatic helpers are used to a different work on Conservative Handling of Failure and Challenge. An experience with amblyopic children showed the proposal effectiveness, increasing the visual acuity 2-3 level following the Snellen Vision Test and improving the performance curve during the game time.", "title": "" }, { "docid": "6b19d08c9aa6ecfec27452a298353e1f", "text": "This paper presents the recent development in automatic vision based technology. Use of this technology is increasing in agriculture and fruit industry. An automatic fruit quality inspection system for sorting and grading of tomato fruit and defected tomato detection discussed here. The main aim of this system is to replace the manual inspection system. This helps in speed up the process improve accuracy and efficiency and reduce time. This system collect image from camera which is placed on conveyor belt. Then image processing is done to get required features of fruits such as texture, color and size. Defected fruit is detected based on blob detection, color detection is done based on thresholding and size detection is based on binary image of tomato. Sorting is done based on color and grading is done based on size.", "title": "" }, { "docid": "1d11060907f0a2c856fdda9152b107e5", "text": "NOTICE This report was prepared by Columbia University in the course of performing work contracted for and sponsored by the New York State Energy Research and Development Authority (hereafter \" NYSERDA \"). The opinions expressed in this report do not necessarily reflect those of NYSERDA or the State of New York, and reference to any specific product, service, process, or method does not constitute an implied or expressed recommendation or endorsement of it. Further, NYSERDA, the State of New York, and the contractor make no warranties or representations, expressed or implied, as to the fitness for particular purpose or merchantability of any product, apparatus, or service, or the usefulness, completeness, or accuracy of any processes, methods, or other information contained, described, disclosed, or referred to in this report. NYSERDA, the State of New York, and the contractor make no representation that the use of any product, apparatus, process, method, or other information will not infringe privately owned rights and will assume no liability for any loss, injury, or damage resulting from, or occurring in connection with, the use of information contained, described, disclosed, or referred to in this report. iii ABSTRACT A research project was conducted to develop a concrete material that contains recycled waste glass and reprocessed carpet fibers and would be suitable for precast concrete wall panels. Post-consumer glass and used carpets constitute major solid waste components. Therefore their beneficial use will reduce the pressure on scarce landfills and the associated costs to taxpayers. 
By identifying and utilizing the special properties of these recycled materials, it is also possible to produce concrete elements with improved esthetic and thermal insulation properties. Using recycled waste glass as substitute for natural aggregate in commodity products such as precast basement wall panels brings only modest economic benefits at best, because sand, gravel, and crushed stone are fairly inexpensive. However, if the esthetic properties of the glass are properly exploited, such as in building façade elements with architectural finishes, the resulting concrete panels can compete very effectively with other building materials such as natural stone. As for recycled carpet fibers, the intent of this project was to exploit their thermal properties in order to increase the thermal insulation of concrete wall panels. In this regard, only partial success was achieved, because commercially reprocessed carpet fibers improve the thermal properties of concrete only marginally, as compared with other methods, such as the use of …", "title": "" }, { "docid": "ba29af46fd410829c450eed631aa9280", "text": "We address the problem of dense visual-semantic embedding that maps not only full sentences and whole images but also phrases within sentences and salient regions within images into a multimodal embedding space. Such dense embeddings, when applied to the task of image captioning, enable us to produce several region-oriented and detailed phrases rather than just an overview sentence to describe an image. Specifically, we present a hierarchical structured recurrent neural network (RNN), namely Hierarchical Multimodal LSTM (HM-LSTM). Compared with chain structured RNN, our proposed model exploits the hierarchical relations between sentences and phrases, and between whole images and image regions, to jointly establish their representations. Without the need of any supervised labels, our proposed model automatically learns the fine-grained correspondences between phrases and image regions towards the dense embedding. Extensive experiments on several datasets validate the efficacy of our method, which compares favorably with the state-of-the-art methods.", "title": "" }, { "docid": "2c39f8c440a89f72db8814e633cb5c04", "text": "There is increasing evidence that gardening provides substantial human health benefits. However, no formal statistical assessment has been conducted to test this assertion. Here, we present the results of a meta-analysis of research examining the effects of gardening, including horticultural therapy, on health. We performed a literature search to collect studies that compared health outcomes in control (before participating in gardening or non-gardeners) and treatment groups (after participating in gardening or gardeners) in January 2016. The mean difference in health outcomes between the two groups was calculated for each study, and then the weighted effect size determined both across all and sets of subgroup studies. Twenty-two case studies (published after 2001) were included in the meta-analysis, which comprised 76 comparisons between control and treatment groups. Most studies came from the United States, followed by Europe, Asia, and the Middle East. Studies reported a wide range of health outcomes, such as reductions in depression, anxiety, and body mass index, as well as increases in life satisfaction, quality of life, and sense of community. 
Meta-analytic estimates showed a significant positive effect of gardening on the health outcomes both for all and sets of subgroup studies, whilst effect sizes differed among eight subgroups. Although Egger's test indicated the presence of publication bias, significant positive effects of gardening remained after adjusting for this using trim and fill analysis. This study has provided robust evidence for the positive effects of gardening on health. A regular dose of gardening can improve public health.", "title": "" }, { "docid": "b2f1ec4d8ac0a8447831df4287271c35", "text": "We present a new, robust and computationally efficient Hierarchical Bayesian model for effective topic correlation modeling. We model the prior distribution of topics by a Generalized Dirichlet distribution (GD) rather than a Dirichlet distribution as in Latent Dirichlet Allocation (LDA). We define this model as GD-LDA. This framework captures correlations between topics, as in the Correlated Topic Model (CTM) and Pachinko Allocation Model (PAM), and is faster to infer than CTM and PAM. GD-LDA is effective to avoid over-fitting as the number of topics is increased. As a tree model, it accommodates the most important set of topics in the upper part of the tree based on their probability mass. Thus, GD-LDA provides the ability to choose significant topics effectively. To discover topic relationships, we perform hyper-parameter estimation based on Monte Carlo EM Estimation. We provide results using Empirical Likelihood(EL) in 4 public datasets from TREC and NIPS. Then, we present the performance of GD-LDA in ad hoc information retrieval (IR) based on MAP, P@10, and Discounted Gain. We discuss an empirical comparison of the fitting time. We demonstrate significant improvement over CTM, LDA, and PAM for EL estimation. For all the IR measures, GD-LDA shows higher performance than LDA, the dominant topic model in IR. All these improvements with a small increase in fitting time than LDA, as opposed to CTM and PAM.", "title": "" }, { "docid": "5c05ad44ac2bf3fb26cea62d563435f8", "text": "We investigate the training and performance of generative adversarial networks using the Maximum Mean Discrepancy (MMD) as critic, termed MMD GANs. As our main theoretical contribution, we clarify the situation with bias in GAN loss functions raised by recent work: we show that gradient estimators used in the optimization process for both MMD GANs and Wasserstein GANs are unbiased, but learning a discriminator based on samples leads to biased gradients for the generator parameters. We also discuss the issue of kernel choice for the MMD critic, and characterize the kernel corresponding to the energy distance used for the Cramér GAN critic. Being an integral probability metric, the MMD benefits from training strategies recently developed for Wasserstein GANs. In experiments, the MMD GAN is able to employ a smaller critic network than the Wasserstein GAN, resulting in a simpler and faster-training algorithm with matching performance. We also propose an improved measure of GAN convergence, the Kernel Inception Distance, and show how to use it to dynamically adapt learning rates during GAN training.", "title": "" }, { "docid": "c4387f3c791acc54d0a0655221947c8b", "text": "An emerging Internet application, IPTV, has the potential to flood Internet access and backbone ISPs with massive amounts of new traffic. 
Although many architectures are possible for IPTV video distribution, several mesh-pull P2P architectures have been successfully deployed on the Internet. In order to gain insights into mesh-pull P2P IPTV systems and the traffic loads they place on ISPs, we have undertaken an in-depth measurement study of one of the most popular IPTV systems, namely, PPLive. We have developed a dedicated PPLive crawler, which enables us to study the global characteristics of the mesh-pull PPLive system. We have also collected extensive packet traces for various different measurement scenarios, including both campus access networks and residential access networks. The measurement results obtained through these platforms bring important insights into P2P IPTV systems. Specifically, our results show the following. 1) P2P IPTV users have similar viewing behaviors to regular TV users. 2) During its session, a peer exchanges video data dynamically with a large number of peers. 3) A small set of super peers act as video proxies and contribute significantly to video data uploading. 4) Users in the measured P2P IPTV system still suffer from long start-up delays and playback lags, ranging from several seconds to a couple of minutes. Insights obtained in this study will be valuable for the development and deployment of future P2P IPTV systems.", "title": "" }, { "docid": "31c0dc8f0a839da9260bb9876f635702", "text": "The application of a recently developed broadband beamformer to distinguish audio signals received from different directions is experimentally tested. The beamformer combines spatial and temporal subsampling using a nested array and multirate techniques, which leads to the same region of support in the frequency domain for all subbands. This allows using the same beamformer for all subbands. The experimental set-up is presented and the recorded signals are analyzed. Results indicate that the proposed approach can be used to distinguish plane waves propagating with different directions of arrival.", "title": "" }, { "docid": "7f6b4a74f88d5ae1a4d21948aac2e260", "text": "The PEP-R (psychoeducational profile revised) is an instrument that has been used in many countries to assess abilities and formulate treatment programs for children with autism and related developmental disorders. To the end of providing further information on the PEP-R's psychometric properties, a large sample (N = 137) of children presenting Autistic Disorder symptoms under the age of 12 years, including low-functioning individuals, was examined. Results yielded data of interest especially in terms of: Cronbach's alpha, interrater reliability, and validation with the Vineland Adaptive Behavior Scales. These findings help complete the instrument's statistical description and augment its usefulness, not only in designing treatment programs for these individuals, but also as an instrument for verifying the efficacy of intervention.", "title": "" }, { "docid": "a81e4507632505b64f4839a1a23fa440", "text": "Pro Unity Game Development with C# Alan Thorn In Pro Unity Game Development with C#, Alan Thorn, author of Learn Unity for 2D Game Development and experienced game developer, takes you through the complete C# workflow for developing a cross-platform first person shooter in Unity. C# is the most popular programming language for experienced Unity developers, helping them get the most out of what Unity offers. 
If you’re already using C# with Unity and you want to take the next step in becoming an experienced, professional-level game developer, this is the book you need. Whether you are a student, an indie developer, or a seasoned game dev professional, you’ll find helpful C# examples of how to build intelligent enemies, create event systems and GUIs, develop save-game states, and lots more. You’ll understand and apply powerful programming concepts such as singleton classes, component based design, resolution independence, delegates, and event driven programming.", "title": "" }, { "docid": "45f1964932b06f23b7b0556bfb4d2d24", "text": "We present a real-time deep learning framework for video-based facial performance capture---the dense 3D tracking of an actor's face given a monocular video. Our pipeline begins with accurately capturing a subject using a high-end production facial capture pipeline based on multi-view stereo tracking and artist-enhanced animations. With 5--10 minutes of captured footage, we train a convolutional neural network to produce high-quality output, including self-occluded regions, from a monocular video sequence of that subject. Since this 3D facial performance capture is fully automated, our system can drastically reduce the amount of labor involved in the development of modern narrative-driven video games or films involving realistic digital doubles of actors and potentially hours of animated dialogue per character. We compare our results with several state-of-the-art monocular real-time facial capture techniques and demonstrate compelling animation inference in challenging areas such as eyes and lips.", "title": "" }, { "docid": "66cde02bdf134923ca7ef3ec5c4f0fb8", "text": "In this paper a method for holographic localization of passive UHF-RFID transponders is presented. It is shown how persons or devices that are equipped with a RFID reader and that are moving along a trajectory can be enabled to locate tagged objects reliably. The localization method is based on phase values sampled from a synthetic aperture by a RFID reader. The calculated holographic image is a spatial probability density function that reveals the actual RFID tag position. Experimental results are presented which show that the holographically measured positions are in good agreement with the real position of the tag. Additional simulations have been carried out to investigate the positioning accuracy of the proposed method depending on different distortion parameters and measuring conditions. The effect of antenna phase center displacement is briefly discussed and measurements are shown that quantify the influence on the phase measurement.", "title": "" }, { "docid": "7eea90d85df0245eac0de51702efdbfd", "text": "Mobile wellness application is widely used for assisting self-monitoring practice to monitor user's daily food intake and physical activities. Although these mostly free downloadable mobile application is easy to use and covers many aspects of wellness routines, there is no proof of prolonged use. Previous research reported that user will stop using the application and turned back into their old attitude of food consumptions. The purpose of this study is to examine the factors that influence the continuance intention to adopt a mobile phone wellness application. Review of Information System Continuance Model in the areas such as mobile health, mobile phone wellness application, social network and web 2.0, were done to examine the existing factors. 
From the critical review, two external factors namely Social Norm and Perceive Interactivity is believed to have the ability to explain the social perspective behavior and also the effect of perceiving interactivity towards prolong usage of wellness mobile application. These findings contribute to the development of the Mobile Phones Wellness Application Continuance Use theoretical model.", "title": "" }, { "docid": "3cdca28361b7c2b9525b476e9073fc10", "text": "The proliferation of MP3 players and the exploding amount of digital music content call for novel ways of music organization and retrieval to meet the ever-increasing demand for easy and effective information access. As almost every music piece is created to convey emotion, music organization and retrieval by emotion is a reasonable way of accessing music information. A good deal of effort has been made in the music information retrieval community to train a machine to automatically recognize the emotion of a music signal. A central issue of machine recognition of music emotion is the conceptualization of emotion and the associated emotion taxonomy. Different viewpoints on this issue have led to the proposal of different ways of emotion annotation, model training, and result visualization. This article provides a comprehensive review of the methods that have been proposed for music emotion recognition. Moreover, as music emotion recognition is still in its infancy, there are many open issues. We review the solutions that have been proposed to address these issues and conclude with suggestions for further research.", "title": "" }, { "docid": "89e88b92adc44176f0112a66ec92515a", "text": "Computer programming is being introduced in schools worldwide as part of a movement that promotes Computational Thinking (CT) skills among young learners. In general, learners use visual, block-based programming languages to acquire these skills, with Scratch being one of the most popular ones. Similar to professional developers, learners also copy and paste their code, resulting in duplication. In this paper we present the findings of correlating the assessment of the CT skills of learners with the presence of software clones in over 230,000 projects obtained from the Scratch platform. Specifically, we investigate i) if software cloning is an extended practice in Scratch projects, ii) if the presence of code cloning is independent of the programming mastery of learners, iii) if code cloning can be found more frequently in Scratch projects that require specific skills (as parallelism or logical thinking), and iv) if learners who have the skills to avoid software cloning really do so. The results show that i) software cloning can be commonly found in Scratch projects, that ii) it becomes more frequent as learners work on projects that require advanced skills, that iii) no CT dimension is to be found more related to the absence of software clones than others, and iv) that learners -even if they potentially know how to avoid cloning- still copy and paste frequently. The insights from this paper could be used by educators and learners to determine when it is pedagogically more effective to address software cloning, by educational programming platform developers to adapt their systems, and by learning assessment tools to provide better evaluations.", "title": "" }, { "docid": "e8215231e8eb26241d5ac8ac5be4b782", "text": "This research is on the use of a decision tree approach for predicting students‟ academic performance. 
Education is the platform on which a society improves the quality of its citizens. To improve the quality of education, there is a need to be able to predict the academic performance of the students. The IBM Statistical Package for the Social Sciences (SPSS) is used to apply the Chi-Square Automatic Interaction Detection (CHAID) in producing the decision tree structure. Factors such as the financial status of the students, motivation to learn, and gender were discovered to affect the performance of the students. 66.8% of the students were predicted to have passed while 33.2% were predicted to fail. It is observed that a much larger percentage of the students were likely to pass and that there is also a higher likelihood of male students passing than female students.", "title": "" } ]
scidocsrr
bc66ec751e7ce368347c821c4b761d56
Smart Cars on Smart Roads: Problems of Control
[ { "docid": "436900539406faa9ff34c1af12b6348d", "text": "The accomplishments to date on the development of automatic vehicle control (AVC) technology in the Program on Advanced Technology for the Highway (PATH) at the University of California, Berkeley, are summarized. The basic prqfiiples and assumptions underlying the PATH work are identified, ‘followed by explanations of the work on automating vehicle lateral (steering) and longitudinal (spacing and speed) control. For both lateral and longitudinal control, the modeling of plant dynamics is described first, followed by development of the additional subsystems needed (communications, reference/sensor systems) and the derivation of the control laws. Plans for testing on vehicles in both near and long term are then discussed.", "title": "" } ]
[ { "docid": "dfbe5a92d45d4081910b868d78a904d0", "text": "Actuation is essential for artificial machines to interact with their surrounding environment and to accomplish the functions for which they are designed. Over the past few decades, there has been considerable progress in developing new actuation technologies. However, controlled motion still represents a considerable bottleneck for many applications and hampers the development of advanced robots, especially at small length scales. Nature has solved this problem using molecular motors that, through living cells, are assembled into multiscale ensembles with integrated control systems. These systems can scale force production from piconewtons up to kilonewtons. By leveraging the performance of living cells and tissues and directly interfacing them with artificial components, it should be possible to exploit the intricacy and metabolic efficiency of biological actuation within artificial machines. We provide a survey of important advances in this biohybrid actuation paradigm.", "title": "" }, { "docid": "5f7aa812dc718de9508b083320c67e8a", "text": "High power multi-level converters are deemed as the mainstay power conversion technology for renewable energy systems including the PV farm, energy storage system and electrical vehicle charge station. This paper is focused on the modeling and design of coupled and integrated magnetics in three-level DC/DC converter with multi-phase interleaved structure. The interleaved phase legs offer the benefit of output current ripple reduction, while inversed coupled inductors can suppress the circulating current between phase legs. To further reduce the magnetic volume, the four inductors in two-phase three-level DC/DC converter are integrated into one common structure, incorporating the negative coupling effects. Because of the nonlinearity of the inductor coupling, the equivalent circuit model is developed for the proposed interleaving structure to facilitate the design optimization of the integrated system. The model identifies the existence of multiple equivalent inductances during one switching cycle. A combination of them determines the inductor current ripple and dynamics of the system. By virtue of inverse coupling and means of controlling the coupling coefficients, one can minimize the current ripple and the unwanted circulating current. The fabricated prototype of the integrated coupled inductors is tested with a two-phase three-level DC/DC converter hardware, showing its good current ripple reduction performance as designed.", "title": "" }, { "docid": "7b89e1ac1dcdcc1f3897e672fd934a40", "text": "A 61-year-old female with long-standing constipation presented with increasing abdominal distention, pain, nausea and weight loss. She had been previously treated with intermittent fiber supplements and osmotic laxatives for chronic constipation. She did not use medications known to cause delayed bowel transit. Examination revealed a distended abdomen, hard stool in the rectum, and audible heart sounds throughout the abdomen. A CT scan showed severe colonic distention from stool (Fig. 1). She had no mechanical, infectious, metabolic, or endocrine-related etiology for constipation. After failing conservative management including laxative suppositories, enemas, manual disimpaction, methylnaltrexone and neostigmine, the patient underwent a colectomy with Hartmann pouch and terminal ileostomy. The removed colon measured 25.5 cm in largest diameter and weighed over 15 kg (Fig. 2). 
The histopathological examination demonstrated no neuronal degeneration, apoptosis or aganglionosis to suggest Hirschsprung's disease or another intrinsic neuro-muscular disorder. Idiopathic megacolon is a relatively uncommon condition usually associated with slow-transit constipation. Although medical therapy is frequently ineffective, rectal laxatives, gentle enemas, and manual disimpaction can be attempted. Oral osmotic or secretory laxatives as well as unprepped lower endoscopy are relative contraindications as they may precipitate a perforation. Surgical therapy is often required as most cases are refractory to medical therapy.", "title": "" }, { "docid": "8f7a27b88a29fd915e198962d8cd17ad", "text": "For embedded high resolution successive approximation ADCs, it is necessary to determine the performance limitation of the CMOS process used for the design. This paper presents a modelling technique for major limitations, i.e. capacitor mismatch and non-linearity effects. The model is based on Monte Carlo simulations applied to an analytical description of the ADC. Additional effects like charge injection and parasitic capacitance are included. The analytical basis covers different architectures with a fully binary weighted or series-split capacitor array. When comparing our analysis and measurement results to several conventional approaches, a significantly more realistic estimation of the attainable resolution is achieved. The presented results provide guidance in choosing process and circuit structure for the design of SAR ADCs. The model also enables reliable capacitor sizing early in the design process, i.e. well before actual layout implementation.", "title": "" }, { "docid": "c1d436c01088c2295b35a1a37e922bee", "text": "Tourism is an important part of the national economy. On the other hand, it can also be a source of some negative externalities. These are mainly environmental externalities, resulting in increased pollution and aesthetic or architectural damage. A high concentration of visitors may also lead to increased crime or aggressiveness. These may have negative effects on the quality of life of residents and on the experience of visitors. The paper deals with the influence of tourism on the destination environment. It highlights the necessity of sustainable forms of tourism and of activities to prevent the negative implications of tourism, such as education activities and tourism monitoring. Key-words: Tourism, Mass Tourism, Development, Sustainability, Tourism Impact, Monitoring.", "title": "" }, { "docid": "afe44962393bf0d250571f7cd7e82677", "text": "Analytics is a field of research and practice that aims to reveal new patterns of information through the collection of large sets of data held in previously distinct sources. Growing interest in data and analytics in education, teaching, and learning raises the priority for increased, high-quality research into the models, methods, technologies, and impact of analytics. The challenges of applying analytics range from academic and ethical reliability to control over data. The other challenge is that the educational landscape is extremely turbulent at present, and a key challenge is the appropriate collection, protection and use of large data sets.
This paper brings out challenges of multi various pertaining to the domain by offering a big data model for higher education system.", "title": "" }, { "docid": "004da753abb6cb84f1ba34cfb4dacc67", "text": "The aim of this study was to present a method for endodontic management of a maxillary first molar with unusual C-shaped morphology of the buccal root verified by cone-beam computed tomography (CBCT) images. This rare anatomical variation was confirmed using CBCT, and nonsurgical endodontic treatment was performed by meticulous evaluation of the pulpal floor. Posttreatment image revealed 3 independent canals in the buccal root obturated efficiently to the accepted lengths in all 3 canals. Our study describes a unique C-shaped variation of the root canal system in a maxillary first molar, involving the 3 buccal canals. In addition, our study highlights the usefulness of CBCT imaging for accurate diagnosis and management of this unusual canal morphology.", "title": "" }, { "docid": "89432b112f153319d3a2a816c59782e3", "text": "The Eyelink Toolbox software supports the measurement of eye movements. The toolbox provides an interface between a high-level interpreted language (MATLAB), a visual display programming toolbox (Psychophysics Toolbox), and a video-based eyetracker (Eyelink). The Eyelink Toolbox enables experimenters to measure eye movements while simultaneously executing the stimulus presentation routines provided by the Psychophysics Toolbox. Example programs are included with the toolbox distribution. Information on the Eyelink Toolbox can be found at http://psychtoolbox.org/.", "title": "" }, { "docid": "2d6225b20cf13d2974ce78877642a2f7", "text": "Low rank and sparse representation based methods, which make few specific assumptions about the background, have recently attracted wide attention in background modeling. With these methods, moving objects in the scene are modeled as pixel-wised sparse outliers. However, in many practical scenarios, the distributions of these moving parts are not truly pixel-wised sparse but structurally sparse. Meanwhile a robust analysis mechanism is required to handle background regions or foreground movements with varying scales. Based on these two observations, we first introduce a class of structured sparsity-inducing norms to model moving objects in videos. In our approach, we regard the observed sequence as being constituted of two terms, a low-rank matrix (background) and a structured sparse outlier matrix (foreground). Next, in virtue of adaptive parameters for dynamic videos, we propose a saliency measurement to dynamically estimate the support of the foreground. Experiments on challenging well known data sets demonstrate that the proposed approach outperforms the state-of-the-art methods and works effectively on a wide range of complex videos.", "title": "" }, { "docid": "f53d8be1ec89cb8a323388496d45dcd0", "text": "While Processing-in-Memory has been investigated for decades, it has not been embraced commercially. A number of emerging technologies have renewed interest in this topic. In particular, the emergence of 3D stacking and the imminent release of Micron's Hybrid Memory Cube device have made it more practical to move computation near memory. However, the literature is missing a detailed analysis of a killer application that can leverage a Near Data Computing (NDC) architecture. 
This paper focuses on in-memory MapReduce workloads that are commercially important and are especially suitable for NDC because of their embarrassing parallelism and largely localized memory accesses. The NDC architecture incorporates several simple processing cores on a separate, non-memory die in a 3D-stacked memory package; these cores can perform Map operations with efficient memory access and without hitting the bandwidth wall. This paper describes and evaluates a number of key elements necessary in realizing efficient NDC operation: (i) low-EPI cores, (ii) long daisy chains of memory devices, (iii) the dynamic activation of cores and SerDes links. Compared to a baseline that is heavily optimized for MapReduce execution, the NDC design yields up to 15X reduction in execution time and 18X reduction in system energy.", "title": "" }, { "docid": "08c26880862b09e81acc1cd99904fded", "text": "Efficient use of high speed hardware requires operating system components be customized to the application workload. Our general purpose operating systems are ill-suited for this task. We present EbbRT, a framework for constructing per-application library operating systems for cloud applications. The primary objective of EbbRT is to enable highperformance in a tractable and maintainable fashion. This paper describes the design and implementation of EbbRT, and evaluates its ability to improve the performance of common cloud applications. The evaluation of the EbbRT prototype demonstrates memcached, run within a VM, can outperform memcached run on an unvirtualized Linux. The prototype evaluation also demonstrates an 14% performance improvement of a V8 JavaScript engine benchmark, and a node.js webserver that achieves a 50% reduction in 99th percentile latency compared to it run on Linux.", "title": "" }, { "docid": "52dbfe369d1875c402220692ef985bec", "text": "Geographically annotated social media is extremely valuable for modern information retrieval. However, when researchers can only access publicly-visible data, one quickly finds that social media users rarely publish location information. In this work, we provide a method which can geolocate the overwhelming majority of active Twitter users, independent of their location sharing preferences, using only publicly-visible Twitter data. Our method infers an unknown user's location by examining their friend's locations. We frame the geotagging problem as an optimization over a social network with a total variation-based objective and provide a scalable and distributed algorithm for its solution. Furthermore, we show how a robust estimate of the geographic dispersion of each user's ego network can be used as a per-user accuracy measure which is effective at removing outlying errors. Leave-many-out evaluation shows that our method is able to infer location for 101, 846, 236 Twitter users at a median error of 6.38 km, allowing us to geotag over 80% of public tweets.", "title": "" }, { "docid": "1967de1be0b095b4a59a5bb0fdc403c0", "text": "As the popularity of content sharing websites has increased, they have become targets for spam, phishing and the distribution of malware. On YouTube, the facility for users to post comments can be used by spam campaigns to direct unsuspecting users to malicious third-party websites. In this paper, we demonstrate how such campaigns can be tracked over time using network motif profiling, i.e. by tracking counts of indicative network motifs. 
By considering all motifs of up to five nodes, we identify discriminating motifs that reveal two distinctly different spam campaign strategies, and present an evaluation that tracks two corresponding active campaigns.", "title": "" }, { "docid": "33c449dc56b7f844e1582bd61d87a8a4", "text": "We can determine whether two texts are paraphrases of each other by finding out the extent to which the texts are similar. The typical lexical matching technique works by matching the sequence of tokens between the texts to recognize paraphrases, and fails when different words are used to convey the same meaning. We can improve this simple method by combining lexical with syntactic or semantic representations of the input texts. The present work makes use of syntactical information in the texts and computes the similarity between them using word similarity measures based on WordNet and lexical databases. The texts are converted into a unified semantic structural model through which the semantic similarity of the texts is obtained. An approach is presented to assess the semantic similarity and the results of applying this approach is evaluated using the Microsoft Research Paraphrase (MSRP) Corpus.", "title": "" }, { "docid": "5621d7df640dbe3d757ebb600486def9", "text": "Dynamic spectrum access is the key to solving worldwide spectrum shortage. The open wireless medium subjects DSA systems to unauthorized spectrum use by illegitimate users. This paper presents SpecGuard, the first crowdsourced spectrum misuse detection framework for DSA systems. In SpecGuard, a transmitter is required to embed a spectrum permit into its physical-layer signals, which can be decoded and verified by ubiquitous mobile users. We propose three novel schemes for embedding and detecting a spectrum permit at the physical layer. Detailed theoretical analyses, MATLAB simulations, and USRP experiments confirm that our schemes can achieve correct, low-intrusive, and fast spectrum misuse detection.", "title": "" }, { "docid": "bab246f8b15931501049862066fde77f", "text": "The upcoming Internet of Things will introduce large sensor networks including devices with very different propagation characteristics and power consumption demands. 5G aims to fulfill these requirements by demanding a battery lifetime of at least 10 years. To integrate smart devices that are located in challenging propagation conditions, IoT communication technologies furthermore have to support very deep coverage. NB-IoT and eMTC are designed to meet these requirements and thus paving the way to 5G. With the power saving options extended Discontinuous Reception and Power Saving Mode as well as the usage of large numbers of repetitions, NB-IoT and eMTC introduce new techniques to meet the 5G IoT requirements. In this paper, the performance of NB-IoT and eMTC is evaluated. Therefore, data rate, power consumption, latency and spectral efficiency are examined in different coverage conditions. Although both technologies use the same power saving techniques as well as repetitions to extend the communication range, the analysis reveals a different performance in the context of data size, rate and coupling loss. While eMTC comes with a 4% better battery lifetime than NB-IoT when considering 144 dB coupling loss, NB-IoT battery lifetime raises to 18% better performance in 164 dB coupling loss scenarios. The overall analysis shows that in coverage areas with a coupling loss of 155 dB or less, eMTC performs better, but requires much more bandwidth. 
Taking the spectral efficiency into account, NB-IoT is in all evaluated scenarios the better choice and more suitable for future networks with massive numbers of devices.", "title": "" }, { "docid": "ac82ad870c787e759d08b1a80dc51bd2", "text": "We consider supervised learning in the presence of very many irrelevant features, and study two different regularization methods for preventing overfitting. Focusing on logistic regression, we show that using L1 regularization of the parameters, the sample complexity (i.e., the number of training examples required to learn \"well,\") grows only logarithmically in the number of irrelevant features. This logarithmic rate matches the best known bounds for feature selection, and indicates that L1 regularized logistic regression can be effective even if there are exponentially many irrelevant features as there are training examples. We also give a lower-bound showing that any rotationally invariant algorithm---including logistic regression with L2 regularization, SVMs, and neural networks trained by backpropagation---has a worst case sample complexity that grows at least linearly in the number of irrelevant features.", "title": "" }, { "docid": "87c09def017d5e32f06a887e5d06b0ff", "text": "A blade element momentum theory propeller model is coupled with a commercial RANS solver. This allows the fully appended self propulsion of the autonomous underwater vehicle Autosub 3 to be considered. The quasi-steady propeller model has been developed to allow for circumferential and radial variations in axial and tangential inflow. The non-uniform inflow is due to control surface deflections and the bow-down pitch of the vehicle in cruise condition. The influence of propeller blade Reynolds number is included through the use of appropriate sectional lift and drag coefficients. Simulations have been performed over the vehicles operational speed range (Re = 6.8× 10 to 13.5× 10). A workstation is used for the calculations with mesh sizes up to 2x10 elements. Grid uncertainty is calculated to be 3.07% for the wake fraction. The initial comparisons with in service data show that the coupled RANS-BEMT simulation under predicts the drag of the vehicle and consequently the required propeller rpm. However, when an appropriate correction is made for the effect on resistance of various protruding sensors the predicted propulsor rpm matches well with that of in-service rpm measurements for vessel speeds (1m/s 2m/s). The developed analysis captures the important influence of the propeller blade and hull Reynolds number on overall system efficiency.", "title": "" }, { "docid": "57bec1f2ee904f953463e4e41e2cb688", "text": "Graph embedding is an important branch in Data Mining and Machine Learning, and most of recent studies are focused on preserving the hierarchical structure with less dimensions. One of such models, called Poincare Embedding, achieves the goal by using Poincare Ball model to embed hierarchical structure in hyperbolic space instead of traditionally used Euclidean space. However, Poincare Embedding suffers from two major problems: (1) performance drops as depth of nodes increases since nodes tend to lay at the boundary; (2) the embedding model is constrained with pre-constructed structures and cannot be easily extended. In this paper, we first raise several techniques to overcome the problem of low performance for deep nodes, such as using partial structure, adding regularization, and exploring sibling relations in the structure. 
Then we also extend the Poincare Embedding model by extracting information from text corpus and propose a joint embedding model with Poincare Embedding and Word2vec.", "title": "" }, { "docid": "6228498fed5b26c0def578251aa1c749", "text": "Observation-Level Interaction (OLI) is a sensemaking technique relying upon the interactive semantic exploration of data. By manipulating data items within a visualization, users provide feedback to an underlying mathematical model that projects multidimensional data into a meaningful two-dimensional representation. In this work, we propose, implement, and evaluate an OLI model which explicitly defines clusters within this data projection. These clusters provide targets against which data values can be manipulated. The result is a cooperative framework in which the layout of the data affects the clusters, while user-driven interactions with the clusters affect the layout of the data points. Additionally, this model addresses the OLI \"with respect to what\" problem by providing a clear set of clusters against which interaction targets are judged and computed.", "title": "" } ]
scidocsrr
111e970b027530331ee4320b8ecbc49f
Selection of K in K-means clustering
[ { "docid": "ed9e22167d3e9e695f67e208b891b698", "text": "ÐIn k-means clustering, we are given a set of n data points in d-dimensional space R and an integer k and the problem is to determine a set of k points in R, called centers, so as to minimize the mean squared distance from each data point to its nearest center. A popular heuristic for k-means clustering is Lloyd's algorithm. In this paper, we present a simple and efficient implementation of Lloyd's k-means clustering algorithm, which we call the filtering algorithm. This algorithm is easy to implement, requiring a kd-tree as the only major data structure. We establish the practical efficiency of the filtering algorithm in two ways. First, we present a data-sensitive analysis of the algorithm's running time, which shows that the algorithm runs faster as the separation between clusters increases. Second, we present a number of empirical studies both on synthetically generated data and on real data sets from applications in color quantization, data compression, and image segmentation. Index TermsÐPattern recognition, machine learning, data mining, k-means clustering, nearest-neighbor searching, k-d tree, computational geometry, knowledge discovery.", "title": "" }, { "docid": "3e44a5c966afbeabff11b54bafcefdce", "text": "In this paper, we aim to compare empirically four initialization methods for the K-Means algorithm: random, Forgy, MacQueen and Kaufman. Although this algorithm is known for its robustness, it is widely reported in literature that its performance depends upon two key points: initial clustering and instance order. We conduct a series of experiments to draw up (in terms of mean, maximum, minimum and standard deviation) the probability distribution of the square-error values of the nal clusters returned by the K-Means algorithm independently on any initial clustering and on any instance order when each of the four initialization methods is used. The results of our experiments illustrate that the random and the Kauf-man initialization methods outperform the rest of the compared methods as they make the K-Means more eeective and more independent on initial clustering and on instance order. In addition, we compare the convergence speed of the K-Means algorithm when using each of the four initialization methods. Our results suggest that the Kaufman initialization method induces to the K-Means algorithm a more desirable behaviour with respect to the convergence speed than the random initial-ization method.", "title": "" }, { "docid": "651d048aaae1ce1608d3d9f0f09d4b9b", "text": "We investigate here the behavior of the standard k-means clustering algorithm and several alternatives to it: the k-harmonic means algorithm due to Zhang and colleagues, fuzzy k-means, Gaussian expectation-maximization, and two new variants of k-harmonic means. Our aim is to find which aspects of these algorithms contribute to finding good clusterings, as opposed to converging to a low-quality local optimum. We describe each algorithm in a unified framework that introduces separate cluster membership and data weight functions. We then show that the algorithms do behave very differently from each other on simple low-dimensional synthetic datasets and image segmentation tasks, and that the k-harmonic means method is superior. Having a soft membership function is essential for finding high-quality clusterings, but having a non-constant data weight function is useful also.", "title": "" } ]
[ { "docid": "1e042aca14a3412a4772761109cb6c10", "text": "With increasing quality requirements for multimedia communications, audio codecs must maintain both high quality and low delay. Typically, audio codecs offer either low delay or high quality, but rarely both. We propose a codec that simultaneously addresses both these requirements, with a delay of only 8.7 ms at 44.1 kHz. It uses gain-shape algebraic vector quantization in the frequency domain with time-domain pitch prediction. We demonstrate that the proposed codec operating at 48 kb/s and 64 kb/s out-performs both G.722.1C and MP3 and has quality comparable to AAC-LD, despite having less than one fourth of the algorithmic delay of these codecs.", "title": "" }, { "docid": "0dc3c4e628053e8f7c32c0074a2d1a59", "text": "Understanding inter-character relationships is fundamental for understanding character intentions and goals in a narrative. This paper addresses unsupervised modeling of relationships between characters. We model relationships as dynamic phenomenon, represented as evolving sequences of latent states empirically learned from data. Unlike most previous work our approach is completely unsupervised. This enables data-driven inference of inter-character relationship types beyond simple sentiment polarities, by incorporating lexical and semantic representations, and leveraging large quantities of raw text. We present three models based on rich sets of linguistic features that capture various cues about relationships. We compare these models with existing techniques and also demonstrate that relationship categories learned by our model are semantically coherent.", "title": "" }, { "docid": "9c7d3937b25c6be6480d52dec14bb4d5", "text": "Worldwide the pros and cons of games and social behaviour are discussed. In Western countries the discussion is focussing on violent game and media content; in Japan on intensive game usage and the impact on the intellectual development of children. A lot is already discussed on the harmful and negative effects of entertainment technology on human behaviour, therefore we decided to focus primarily on the positive effects. Based on an online document search we could find and select 393 online available publications according the following categories: meta review (N=34), meta analysis (N=13), literature review (N=38), literature survey (N=36), empirical study (N=91), survey study (N=44), design study (N=91), any other document (N=46). In this paper a first preliminary overview over positive effects of entertainment technology on human behaviour is presented and discussed. The drawn recommendations can support developers and designers in entertainment industry.", "title": "" }, { "docid": "9a86609ecefc5780a49ca638be4de64c", "text": "In this paper, we propose an end-to-end capsule network for pixel level localization of actors and actions present in a video. The localization is performed based on a natural language query through which an actor and action are specified. We propose to encode both the video as well as textual input in the form of capsules, which provide more effective representation in comparison with standard convolution based features. We introduce a novel capsule based attention mechanism for fusion of video and text capsules for text selected video segmentation. The attention mechanism is performed via joint EM routing over video and text capsules for text selected actor and action localization. 
The existing works on actor-action localization are mainly focused on localization in a single frame instead of the full video. Different from existing works, we propose to perform the localization on all frames of the video. To validate the potential of the proposed network for actor and action localization on all the frames of a video, we extend an existing actor-action dataset (A2D) with annotations for all the frames. The experimental evaluation demonstrates the effectiveness of the proposed capsule network for text selective actor and action localization in videos, and it also improves upon the performance of the existing state-of-the art works on single frame-based localization. Figure 1: Overview of the proposed approach. For a given video, we want to localize the actor and action which are described by an input textual query. Capsules are extracted from both the video and the textual query, and a joint EM routing algorithm creates high level capsules, which are further used for localization of selected actors and actions.", "title": "" }, { "docid": "5208762a8142de095c21824b0a395b52", "text": "Battery storage (BS) systems are static energy conversion units that convert the chemical energy directly into electrical energy. They exist in our cars, laptops, electronic appliances, micro electricity generation systems and in many other mobile to stationary power supply systems. The economic advantages, partial sustainability and the portability of these units pose promising substitutes for backup power systems for hybrid vehicles and hybrid electricity generation systems. Dynamic behaviour of these systems can be analysed by using mathematical modeling and simulation software programs. Though, there have been many mathematical models presented in the literature and proved to be successful, dynamic simulation of these systems are still very exhaustive and time consuming as they do not behave according to specific mathematical models or functions. The charging and discharging of battery functions are a combination of exponential and non-linear nature. The aim of this research paper is to present a suitable convenient, dynamic battery model that can be used to model a general BS system. Proposed model is a new modified dynamic Lead-Acid battery model considering the effect of temperature and cyclic charging and discharging effects. Simulink has been used to study the characteristics of the system and the proposed system has proved to be very successful as the simulation results have been very good. Keywords—Simulink Matlab, Battery Model, Simulation, BS Lead-Acid, Dynamic modeling, Temperature effect, Hybrid Vehicles.", "title": "" }, { "docid": "e1f531740891d47387a2fc2ef4f71c46", "text": "Multi-dimensional arrays, or tensors, are increasingly found in fields such as signal processing and recommender systems. Real-world tensors can be enormous in size and often very sparse. There is a need for efficient, high-performance tools capable of processing the massive sparse tensors of today and the future. This paper introduces SPLATT, a C library with shared-memory parallelism for three-mode tensors. SPLATT contains algorithmic improvements over competing state of the art tools for sparse tensor factorization. SPLATT has a fast, parallel method of multiplying a matricide tensor by a Khatri-Rao product, which is a key kernel in tensor factorization methods. SPLATT uses a novel data structure that exploits the sparsity patterns of tensors. 
This data structure has a small memory footprint similar to competing methods and allows for the computational improvements featured in our work. We also present a method of finding cache-friendly reordering and utilizing them with a novel form of cache tiling. To our knowledge, this is the first work to investigate reordering and cache tiling in this context. SPLATT averages almost 30x speedup compared to our baseline when using 16 threads and reaches over 80x speedup on NELL-2.", "title": "" }, { "docid": "6c4944ebd75404a0f3b2474e346677f1", "text": "Wireless industry nowadays is facing two major challenges: 1) how to support the vertical industry applications so that to expand the wireless industry market and 2) how to further enhance device capability and user experience. In this paper, we propose a technology framework to address these challenges. The proposed technology framework is based on end-to-end vertical and horizontal slicing, where vertical slicing enables vertical industry and services and horizontal slicing improves system capacity and user experience. The technology development on vertical slicing has already started in late 4G and early 5G and is mostly focused on slicing the core network. We envision this trend to continue with the development of vertical slicing in the radio access network and the air interface. Moving beyond vertical slicing, we propose to horizontally slice the computation and communication resources to form virtual computation platforms for solving the network capacity scaling problem and enhancing device capability and user experience. In this paper, we explain the concept of vertical and horizontal slicing and illustrate the slicing techniques in the air interface, the radio access network, the core network and the computation platform. This paper aims to initiate the discussion on the long-range technology roadmap and spur development on the solutions for E2E network slicing in 5G and beyond.", "title": "" }, { "docid": "bc6fc806fefc8298b8969f7a5f5b9e8b", "text": "Short text is usually expressed in refined slightly, insufficient information, which makes text classification difficult. But we can try to introduce some information from the existing knowledge base to strengthen the performance of short text classification. Wikipedia [2,13,15] is now the largest human-edited knowledge base of high quality. It would benefit to short text classification if we can make full use of Wikipedia information in short text classification. This paper presents a new concept based [22] on Wikipedia short text representation method, by identifying the concept of Wikipedia mentioned in short text, and then expand the concept of wiki correlation and short text messages to the feature vector representation.", "title": "" }, { "docid": "50e7ca7394db235909d657495bb11de2", "text": "Radar is an attractive technology for long term monitoring of human movement as it operates remotely, can be placed behind walls and is able to monitor a large area depending on its operating parameters. A radar signal reflected off a moving person carries rich information on his or her activity pattern in the form of a set of Doppler frequency signatures produced by the specific combination of limbs and torso movements. To enable classification and efficient storage and transmission of movement data, unique parameters have to be extracted from the Doppler signatures. 
Two of the most important human movement parameters for activity identification and classification are the velocity profile and the fundamental cadence frequency of the movement pattern. However, the complicated pattern of limbs and torso movement worsened by multipath propagation in indoor environment poses a challenge for the extraction of these human movement parameters. In this paper, three new approaches for the estimation of human walking velocity profile in indoor environment are proposed and discussed. The first two methods are based on spectrogram estimates whereas the third method is based on phase difference computation. In addition, a method to estimate the fundamental cadence frequency of the gait is suggested and discussed. The accuracy of the methods are evaluated and compared in an indoor experiment using a flexible and low-cost software defined radar platform. The results obtained indicate that the velocity estimation methods are able to estimate the velocity profile of the person’s translational motion with an error of less than 10%. The results also showed that the fundamental cadence is estimated with an error of 7%.", "title": "" }, { "docid": "90d5aca626d61806c2af3cc551b28c90", "text": "This paper presents two novel approaches to increase performance bounds of image steganography under the criteria of minimizing distortion. First, in order to efficiently use the images’ capacities, we propose using parallel images in the embedding stage. The result is then used to prove sub-optimality of the message distribution technique used by all cost based algorithms including HUGO, S-UNIWARD, and HILL. Second, a new distribution approach is presented to further improve the security of these algorithms. Experiments show that this distribution method avoids embedding in smooth regions and thus achieves a better performance, measured by state-of-the-art steganalysis, when compared with the current used distribution.", "title": "" }, { "docid": "a70475e2799b0a439e63382abcd90bd4", "text": "Nonabelian group-based public key cryptography is a relatively new and exciting research field. Rapidly increasing computing power and the futurity quantum computers [52] that have since led to, the security of public key cryptosystems in use today, will be questioned. Research in new cryptographic methods is also imperative. Research on nonabelian group-based cryptosystems will become one of contemporary research priorities. Many innovative ideas for them have been presented for the past two decades, and many corresponding problems remain to be resolved. The purpose of this paper, is to present a survey of the nonabelian group-based public key cryptosystems with the corresponding problems of security. We hope that readers can grasp the trend that is examined in this study.", "title": "" }, { "docid": "1a59bf4467e73a6cae050e5670dbf4fa", "text": "BACKGROUND\nNivolumab combined with ipilimumab resulted in longer progression-free survival and a higher objective response rate than ipilimumab alone in a phase 3 trial involving patients with advanced melanoma. 
We now report 3-year overall survival outcomes in this trial.\n\n\nMETHODS\nWe randomly assigned, in a 1:1:1 ratio, patients with previously untreated advanced melanoma to receive nivolumab at a dose of 1 mg per kilogram of body weight plus ipilimumab at a dose of 3 mg per kilogram every 3 weeks for four doses, followed by nivolumab at a dose of 3 mg per kilogram every 2 weeks; nivolumab at a dose of 3 mg per kilogram every 2 weeks plus placebo; or ipilimumab at a dose of 3 mg per kilogram every 3 weeks for four doses plus placebo, until progression, the occurrence of unacceptable toxic effects, or withdrawal of consent. Randomization was stratified according to programmed death ligand 1 (PD-L1) status, BRAF mutation status, and metastasis stage. The two primary end points were progression-free survival and overall survival in the nivolumab-plus-ipilimumab group and in the nivolumab group versus the ipilimumab group.\n\n\nRESULTS\nAt a minimum follow-up of 36 months, the median overall survival had not been reached in the nivolumab-plus-ipilimumab group and was 37.6 months in the nivolumab group, as compared with 19.9 months in the ipilimumab group (hazard ratio for death with nivolumab plus ipilimumab vs. ipilimumab, 0.55 [P<0.001]; hazard ratio for death with nivolumab vs. ipilimumab, 0.65 [P<0.001]). The overall survival rate at 3 years was 58% in the nivolumab-plus-ipilimumab group and 52% in the nivolumab group, as compared with 34% in the ipilimumab group. The safety profile was unchanged from the initial report. Treatment-related adverse events of grade 3 or 4 occurred in 59% of the patients in the nivolumab-plus-ipilimumab group, in 21% of those in the nivolumab group, and in 28% of those in the ipilimumab group.\n\n\nCONCLUSIONS\nAmong patients with advanced melanoma, significantly longer overall survival occurred with combination therapy with nivolumab plus ipilimumab or with nivolumab alone than with ipilimumab alone. (Funded by Bristol-Myers Squibb and others; CheckMate 067 ClinicalTrials.gov number, NCT01844505 .).", "title": "" }, { "docid": "3b2ddbef9ee3e5db60e2b315064a02c3", "text": "It is indispensable to understand and analyze industry structure and company relations from documents, such as news articles, in order to make management decisions concerning supply chains, selection of business partners, etc. Analysis of company relations from news articles requires both a macro-viewpoint, e.g., overviewing competitor groups, and a micro-viewpoint, e.g., grasping the descriptions of the relationship between a specific pair of companies collaborating. Research has typically focused on only the macro-viewpoint, classifying each company pair into a specific relation type. In this paper, to support company relation analysis from both macro-and micro-viewpoints, we propose a method that extracts collaborative/competitive company pairs from individual sentences in Web news articles by applying a Markov logic network and gather extracted relations from each company pair. By this method, we are able not only to perform clustering of company pairs into competitor groups based on the dominant relations of each pair (macro-viewpoint) but also to know how each company pair is described in individual sentences (micro-viewpoint). 
We empirically confirmed that the proposed method is feasible through analysis of 4,661 Web news articles on the semiconductor and related industries.", "title": "" }, { "docid": "d0bb31d79a7c93f67f7d11d6abee50cb", "text": "The chapter introduces the book explaining its purposes and significance, framing it within the current literature related to Location-Based Mobile Games. It further clarifies the methodology of the study on the ground of this work and summarizes the content of each chapter.", "title": "" }, { "docid": "b73c1b51f0f74c3b27b8d3d58c14e600", "text": "Water balance of the terrestrial isopod, Armadillidium vulgare, was investigated during conglobation (rolling-up behavior). Water loss and metabolic rates were measured at 18 +/- 1 degrees C in dry air using flow-through respirometry. Water-loss rates decreased 34.8% when specimens were in their conglobated form, while CO2 release decreased by 37.1%. Water loss was also measured gravimetrically at humidities ranging from 6 to 75 %RH. Conglobation was associated with a decrease in water-loss rates up to 53 %RH, but no significant differences were observed at higher humidities. Our findings suggest that conglobation behavior may help to conserve water, in addition to its demonstrated role in protection from predation.", "title": "" }, { "docid": "1d04def7d22e9f915d825551aa10b077", "text": "Recent advances in wireless networking technologies and the growing success of mobile computing devices, such as laptop computers, third generation mobile phones, personal digital assistants, watches and the like, are enabling new classes of applications that present challenging problems to designers. Mobile devices face temporary loss of network connectivity when they move; they are likely to have scarce resources, such as low battery power, slow CPU speed and little memory; they are required to react to frequent and unannounced changes in the environment, such as high variability of network bandwidth, and in the remote resources availability, and so on. To support designers building mobile applications, research in the field of middleware systems has proliferated. Middleware aims at facilitating communication and coordination of distributed components, concealing difficulties raised by mobility from application engineers as much as possible. In this survey, we examine characteristics of mobile distributed systems and distinguish them from their fixed counterpart. We introduce a framework and a categorization of the various middleware systems designed to support mobility, and we present a detailed and comparative review of the major results reached in this field. An analysis of current trends inside the mobile middleware community and a discussion of further directions of research conclude the survey.", "title": "" }, { "docid": "27ea4d25d672b04632c53c711afe0ceb", "text": "Many advancements have been taking place in unmanned aerial vehicle (UAV) technology lately. This is leading towards the design and development of UAVs with various sizes that possess increased on-board processing, memory, storage, and communication capabilities. Consequently, UAVs are increasingly being used in a vast amount of commercial, military, civilian, agricultural, and environmental applications. However, to take full advantages of their services, these UAVs must be able to communicate efficiently with each other using UAV-to-UAV (U2U) communication and with existing networking infrastructures using UAV-to-Infrastructure (U2I) communication. 
In this paper, we identify the functions, services and requirements of UAV-based communication systems. We also present networking architectures, underlying frameworks, and data traffic requirements in these systems as well as outline the various protocols and technologies that can be used at different UAV communication links and networking layers. In addition, the paper discusses middleware layer services that can be provided in order to provide seamless communication and support heterogeneous network interfaces. Furthermore, we discuss a new important area of research, which involves the use of UAVs in collecting data from wireless sensor networks (WSNs). We discuss and evaluate several approaches that can be used to collect data from different types of WSNs including topologies such as linear sensor networks (LSNs), geometric and clustered WSNs. We outline the benefits of using UAVs for this function, which include significantly decreasing sensor node energy consumption, lower interference, and offers considerably increased flexibility in controlling the density of the deployed nodes since the need for the multihop approach for sensor-tosink communication is either eliminated or significantly reduced. Consequently, UAVs can provide good connectivity to WSN clusters.", "title": "" }, { "docid": "c9398b3dad75ba85becbec379a65a219", "text": "Passwords are still the predominant mode of authentication in contemporary information systems, despite a long list of problems associated with their insecurity. Their primary advantage is the ease of use and the price of implementation, compared to other systems of authentication (e.g. two-factor, biometry, …). In this paper we present an analysis of passwords used by students of one of universities and their resilience against brute force and dictionary attacks. The passwords were obtained from a university's computing center in plaintext format for a very long period - first passwords were created before 1980. The results show that early passwords are extremely easy to crack: the percentage of cracked passwords is above 95 % for those created before 2006. Surprisingly, more than 40 % of passwords created in 2014 were easily broken within a few hours. The results show that users - in our case students, despite positive trends, still choose easy to break passwords. This work contributes to loud warnings that a shift from traditional password schemes to more elaborate systems is needed.", "title": "" }, { "docid": "ae8f26a5ab75e11f86d295c2beaa2189", "text": "BACKGROUND\nThe neonatal and pediatric antimicrobial point prevalence survey (PPS) of the Antibiotic Resistance and Prescribing in European Children project (http://www.arpecproject.eu/) aims to standardize a method for surveillance of antimicrobial use in children and neonates admitted to the hospital within Europe. This article describes the audit criteria used and reports overall country-specific proportions of antimicrobial use. An analytical review presents methodologies on antimicrobial use.\n\n\nMETHODS\nA 1-day PPS on antimicrobial use in hospitalized children was organized in September 2011, using a previously validated and standardized method. The survey included all inpatient pediatric and neonatal beds and identified all children receiving an antimicrobial treatment on the day of survey. Mandatory data were age, gender, (birth) weight, underlying diagnosis, antimicrobial agent, dose and indication for treatment. 
Data were entered through a web-based system for data-entry and reporting, based on the WebPPS program developed for the European Surveillance of Antimicrobial Consumption project.\n\n\nRESULTS\nThere were 2760 and 1565 pediatric versus 1154 and 589 neonatal inpatients reported among 50 European (n = 14 countries) and 23 non-European hospitals (n = 9 countries), respectively. Overall, antibiotic pediatric and neonatal use was significantly higher in non-European (43.8%; 95% confidence interval [CI]: 41.3-46.3% and 39.4%; 95% CI: 35.5-43.4%) compared with that in European hospitals (35.4; 95% CI: 33.6-37.2% and 21.8%; 95% CI: 19.4-24.2%). Proportions of antibiotic use were highest in hematology/oncology wards (61.3%; 95% CI: 56.2-66.4%) and pediatric intensive care units (55.8%; 95% CI: 50.3-61.3%).\n\n\nCONCLUSIONS\nAn Antibiotic Resistance and Prescribing in European Children standardized web-based method for a 1-day PPS was successfully developed and conducted in 73 hospitals worldwide. It offers a simple, feasible and sustainable way of data collection that can be used globally.", "title": "" } ]
scidocsrr
882aa388eedca0c6c969b96359cac93b
Swarm Intelligence Algorithms for Data Clustering
[ { "docid": "3293e4e0d7dd2e29505db0af6fbb13d1", "text": "A new heuristic approach for minimizing possibly nonlinear and non-differentiable continuous space functions is presented. By means of an extensive testbed it is demonstrated that the new method converges faster and with more certainty than many other acclaimed global optimization methods. The new method requires few control variables, is robust, easy to use, and lends itself very well to parallel computation.", "title": "" }, { "docid": "96b270cf4799d041217ee3e071383ab1", "text": "Cluster analysis has been widely used in several disciplines, such as statistics, software engineering, biology, psychology and other social sciences, in order to identify natural groups in large amounts of data. Clustering has also been widely adopted by researchers within computer science and especially the database community. K-means is the most famous clustering algorithms. In this paper, the performance of basic k means algorithm is evaluated using various distance metrics for iris dataset, wine dataset, vowel dataset, ionosphere dataset and crude oil dataset by varying no of clusters. From the result analysis we can conclude that the performance of k means algorithm is based on the distance metrics for selected database. Thus, this work will help to select suitable distance metric for particular application.", "title": "" } ]
[ { "docid": "bbf5561f88f31794ca95dd991c074b98", "text": "O CTO B E R 2014 | Volume 18, Issue 4 GetMobile Every time you use a voice command on your smartphone, you are benefitting from a technique called cloud offload. Your speech is captured by a microphone, pre-processed, then sent over a wireless network to a cloud service that converts speech to text. The result is then forwarded to another cloud service or sent back to your mobile device, depending on the application. Speech recognition and many other resource-intensive mobile services require cloud offload. Otherwise, the service would be too slow and drain too much of your battery. Research projects on cloud offload are hot today, with MAUI [4] in 2010, Odessa [13] and CloneCloud [2] in 2011, and COMET [8] in 2012. These build on a rich heritage of work dating back to the mid-1990s on a theme that is broadly characterized as cyber foraging. They are also relevant to the concept of cloudlets [18] that has emerged as an important theme in mobile-cloud convergence. Reflecting my participation in this evolution from its origins, this article is a personal account of the key developments in this research area. It focuses on mobile computing, ignoring many other uses of remote execution since the 1980s such as distributed processing, query processing, distributed object systems, and distributed partitioning.", "title": "" }, { "docid": "e5241f16c4bebf7c87d8dcc99ff38bc4", "text": "Several techniques for estimating the reliability of estimated error rates and for estimating the signicance of observed dierences in error rates are explored in this paper. Textbook formulas which assume a large test set, i.e., a normal distribution, are commonly used to approximate the condence limits of error rates or as an approximate signicance test for comparing error rates. Expressions for determining more exact limits and signicance levels for small samples are given here, and criteria are also given for determining when these more exact methods should be used. The assumed normal distribution gives a poor approximation to the condence interval in most cases, but is usually useful for signicance tests when the proper mean and variance expressions are used. A commonly used 62 signicance test uses an improper expression for , which is too low and leads to a high likelihood of Type I errors. Common machine learning methods for estimating signicance from observations on a single sample may be unreliable.", "title": "" }, { "docid": "8df0689ffe5c730f7a6ef6da65bec57e", "text": "Image-based reconstruction of 3D shapes is inherently biased under the occurrence of interreflections, since the observed intensity at surface concavities consists of direct and global illumination components. This issue is commonly not considered in a Photometric Stereo (PS) framework. Under the usual assumption of only direct reflections, this corrupts the normal estimation process in concave regions and thus leads to inaccurate results. For this reason, global illumination effects need to be considered for the correct reconstruction of surfaces affected by interreflections. While there is ongoing research in the field of inverse lighting (i.e. separation of global and direct illumination components), the interreflection aspect remains oftentimes neglected in the field of 3D shape reconstruction. In this study, we present a computationally driven approach for iteratively solving that problem. 
Initially, we introduce a photometric stereo approach that roughly reconstructs a surface with at first unknown reflectance properties. Then, we show that the initial surface reconstruction result can be refined iteratively regarding non-distant light sources and, especially, interreflections. The benefit for the reconstruction accuracy is evaluated on real Lambertian surfaces using laser range scanner data as ground truth.", "title": "" }, { "docid": "e2427ff836c8b83a75d8f7074656a025", "text": "With the rapid growth of smartphone and tablet users, Device-to-Device (D2D) communications have become an attractive solution for enhancing the performance of traditional cellular networks. However, relevant security issues involved in D2D communications have not been addressed yet. In this paper, we investigate the security requirements and challenges for D2D communications, and present a secure and efficient key agreement protocol, which enables two mobile devices to establish a shared secret key for D2D communications without prior knowledge. Our approach is based on the Diffie-Hellman key agreement protocol and commitment schemes. Compared to previous work, our proposed protocol introduces less communication and computation overhead. We present the design details and security analysis of the proposed protocol. We also integrate our proposed protocol into the existing Wi-Fi Direct protocol, and implement it using Android smartphones.", "title": "" }, { "docid": "1052a1454d421290dfdd8fdb448a50cc", "text": "Viola and Jones [9] introduced a method to accurately and rapidly detect faces within an image. This technique can be adapted to accurately detect facial features. However, the area of the image being analyzed for a facial feature needs to be regionalized to the location with the highest probability of containing the feature. By regionalizing the detection area, false positives are eliminated and the speed of detection is increased due to the reduction of the area examined. INTRODUCTION The human face poses even more problems than other objects since the human face is a dynamic object that comes in many forms and colors [7]. However, facial detection and tracking provides many benefits. Facial recognition is not possible if the face is not isolated from the background. Human Computer Interaction (HCI) could greatly be improved by using emotion, pose, and gesture recognition, all of which require face and facial feature detection and tracking [2]. Although many different algorithms exist to perform face detection, each has its own weaknesses and strengths. Some use flesh tones, some use contours, and others are even more complex involving templates, neural networks, or filters. These algorithms suffer from the same problem; they are computationally expensive [2]. An image is only a collection of color and/or light intensity values. Analyzing these pixels for face detection is time consuming and difficult to accomplish because of the wide variations of shape and pigmentation within a human face. [Figure 1: Common Haar features.] Pixels often require reanalysis for scaling and precision. Viola and Jones devised an algorithm, called Haar Classifiers, to rapidly detect any object, including human faces, using AdaBoost classifier cascades that are based on Haar-like features and not pixels [9]. HAAR CASCADE CLASSIFIERS The core basis for Haar classifier object detection is the Haar-like features.
These features, rather than using the intensity values of a pixel, use the change in contrast values between adjacent rectangular groups of pixels. The contrast variances between the pixel groups are used to determine relative light and dark areas. Two or three adjacent groups with a relative contrast variance form a Haar-like feature. Haar-like features, as shown in Figure 1, are used to detect an image [8]. Haar features can easily be scaled by increasing or decreasing the size of the pixel group being examined. This allows features to be used to detect objects of various sizes. Integral Image: The simple rectangular features of an image are calculated using an intermediate representation of an image, called the integral image [9]. The integral image is an array containing the sums of the pixels’ intensity values located directly to the left of a pixel and directly above the pixel at location (x, y) inclusive. So if A[x,y] is the original image and AI[x,y] is the integral image, then the integral image is computed as shown in equation 1 and illustrated in Figure 2: $AI[x, y] = \sum_{x' \le x, y' \le y} A[x', y']$ (1). (A toy version of this computation is sketched in the code example after this passage list.)", "title": "" }, { "docid": "d21e4e55966bac19bbed84b23360b66d", "text": "Smart growth is an approach to urban planning that provides a framework for making community development decisions. Despite its growing use, it is not known whether smart growth can impact physical activity. This review utilizes existing built environment research on factors that have been used in smart growth planning to determine whether they are associated with physical activity or body mass. Searching the MEDLINE, Psycinfo and Web-of-Knowledge databases, 204 articles were identified for descriptive review, and 44 for a more in-depth review of studies that evaluated four or more smart growth planning principles. Five smart growth factors (diverse housing types, mixed land use, housing density, compact development patterns and levels of open space) were associated with increased levels of physical activity, primarily walking. Associations with other forms of physical activity were less common. Results varied by gender and method of environmental assessment. Body mass was largely unaffected. This review suggests that several features of the built environment associated with smart growth planning may promote important forms of physical activity. Future smart growth community planning could focus more directly on health, and future research should explore whether combinations or a critical mass of smart growth features is associated with better population health outcomes.", "title": "" }, { "docid": "c29a2429d6dd7bef7761daf96a29daaf", "text": "In this meta-analysis, we synthesized data from published journal articles that investigated viewers’ enjoyment of fright and violence. Given the limited research on this topic, this analysis was primarily a way of summarizing the current state of knowledge and developing directions for future research. The studies selected (a) examined frightening or violent media content; (b) used self-report measures of enjoyment or preference for such content (the dependent variable); and (c) included independent variables that were given theoretical consideration in the literature. The independent variables examined were negative affect and arousal during viewing, empathy, sensation seeking, aggressiveness, and the respondents’ gender and age.
The analysis confirmed that male viewers, individuals lower in empathy, and those higher in sensation seeking and aggressiveness reported more enjoyment of fright and violence. Some support emerged for Zillmann’s (1980, 1996) model of suspense enjoyment. Overall, the results demonstrate the importance of considering how viewers interpret or appraise their reactions to fright and violence. However, the studies were so diverse in design and measurement methods that it was difficult to identify the underlying processes. Suggestions are proposed for future research that will move toward the integration of separate lines of inquiry in a unified approach to understanding entertainment.", "title": "" }, { "docid": "8700c7f150c00013990c837a4bf7b655", "text": "The rule of thumb that logistic and Cox models should be used with a minimum of 10 outcome events per predictor variable (EPV), based on two simulation studies, may be too conservative. The authors conducted a large simulation study of other influences on confidence interval coverage, type I error, relative bias, and other model performance measures. They found a range of circumstances in which coverage and bias were within acceptable levels despite less than 10 EPV, as well as other factors that were as influential as or more influential than EPV. They conclude that this rule can be relaxed, in particular for sensitivity analyses undertaken to demonstrate adequate control of confounding.", "title": "" }, { "docid": "4d136b60209ef625c09a15e3e5abb7f7", "text": "Alterations in the bidirectional interactions between the intestine and the nervous system have important roles in the pathogenesis of irritable bowel syndrome (IBS). A body of largely preclinical evidence suggests that the gut microbiota can modulate these interactions. A small and poorly defined role for dysbiosis in the development of IBS symptoms has been established through characterization of altered intestinal microbiota in IBS patients and reported improvement of subjective symptoms after its manipulation with prebiotics, probiotics, or antibiotics. It remains to be determined whether IBS symptoms are caused by alterations in brain signaling from the intestine to the microbiota or primary disruption of the microbiota, and whether they are involved in altered interactions between the brain and intestine during development. We review the potential mechanisms involved in the pathogenesis of IBS in different groups of patients. Studies are needed to better characterize alterations to the intestinal microbiome in large cohorts of well-phenotyped patients, and to correlate intestinal metabolites with specific abnormalities in gut-brain interactions.", "title": "" }, { "docid": "d5f8c9f7a495d9ebc5517b18ced3e784", "text": "BACKGROUND\nFor some adolescents feeling lonely can be a protracted and painful experience. It has been suggested that engaging in health risk behaviours such as substance use and sexual behaviour may be a way of coping with the distress arising from loneliness during adolescence. However, the association between loneliness and health risk behaviour has been little studied to date. To address this research gap, the current study examined this relation among Russian and U.S. adolescents.\n\n\nMETHODS\nData were used from the Social and Health Assessment (SAHA), a school-based survey conducted in 2003. A total of 1995 Russian and 2050 U.S.
students aged 13-15 years old were included in the analysis. Logistic regression was used to examine the association between loneliness and substance use, sexual risk behaviour, and violence.\n\n\nRESULTS\nAfter adjusting for demographic characteristics and depressive symptoms, loneliness was associated with a significantly increased risk of adolescent substance use in both Russia and the United States. Lonely Russian girls were significantly more likely to have used marijuana (odds ratio [OR]: 2.28; confidence interval [CI]: 1.17-4.45), while lonely Russian boys had higher odds for past 30-day smoking (OR, 1.87; CI, 1.08-3.24). In the U.S. loneliness was associated with the lifetime use of illicit drugs (excepting marijuana) among boys (OR, 3.09; CI, 1.41-6.77) and with lifetime marijuana use (OR, 1.79; CI, 1.26-2.55), past 30-day alcohol consumption (OR, 1.80; CI, 1.18-2.75) and past 30-day binge drinking (OR, 2.40; CI, 1.56-3.70) among girls. The only relation between loneliness and sexual risk behaviour was among Russian girls, where loneliness was associated with significantly higher odds for ever having been pregnant (OR, 1.69; CI: 1.12-2.54). Loneliness was not associated with violent behaviour among boys or girls in either country.\n\n\nCONCLUSION\nLoneliness is associated with adolescent health risk behaviour among boys and girls in both Russia and the United States. Further research is now needed in both settings using quantitative and qualitative methods to better understand the association between loneliness and health risk behaviours so that effective interventions can be designed and implemented to mitigate loneliness and its effects on adolescent well-being.", "title": "" }, { "docid": "e2950089f76e1509ad2aa74ea5c738eb", "text": "In this review the knowledge status of and future research options on a green gas supply based on biogas production by co-digestion is explored. Applications and developments of the (bio)gas supply in The Netherlands have been considered, whereafter literature research has been done into the several stages from production of dairy cattle manure and biomass to green gas injection into the gas grid. An overview of a green gas supply chain has not been made before. In this study it is concluded that on installation level (micro-level) much practical knowledge is available and on macro-level knowledge about availability of biomass. But on meso-level (operations level of a green gas supply) very little research has been done until now. Future research should include the modeling of a green gas supply chain on an operations level, i.e. questions must be answered as where to build digesters based on availability of biomass. Such a model should also advise on technology of upgrading depending on scale factors. Future research might also give insight in the usability of mixing (partly upgraded) biogas with natural gas. The preconditions for mixing would depend on composition of the gas, the ratio of gases to be mixed and the requirements on the mixture.", "title": "" }, { "docid": "a981db3aa149caec10b1824c82840782", "text": "It has been suggested that the performance of a team is determined by the team members’ roles. An analysis of the performance of 342 individuals organised into 33 teams indicates that team roles characterised by creativity, co-ordination and cooperation are positively correlated with team performance. Members of developed teams exhibit certain performance enhancing characteristics and behaviours. 
Amongst the more developed teams there is a positive relationship between Specialist Role characteristics and team performance. While the characteristics associated with the Coordinator Role are also positively correlated with performance, these can impede the performance of less developed teams.", "title": "" }, { "docid": "43184dfe77050618402900bc309203d5", "text": "A prototype of Air Gap RLSA has been designed and simulated using hybrid air gap and FR4 dielectric material. The 28% wide bandwidth has been recorded through this approach. A 12.35dBi directive gain also recorded from the simulation. The 13.3 degree beamwidth of the radiation pattern is sufficient for high directional application. Since the proposed application was for Point to Point Link, this study concluded the Air Gap RLSA is a new candidate for this application.", "title": "" }, { "docid": "5fa6f8a5ee1d458ca79c18d7b9d2e6de", "text": "Automotive radars, along with other sensors such as lidar, (which stands for \"light detection and ranging\"), ultrasound, and cameras, form the backbone of self-driving cars and advanced driver assistant systems (ADASs). These technological advancements are enabled by extremely complex systems with a long signal processing path from radars/sensors to the controller. Automotive radar systems are responsible for the detection of objects and obstacles, their position, and speed relative to the vehicle. The development of signal processing techniques along with progress in the millimeter-wave (mm-wave) semiconductor technology plays a key role in automotive radar systems. Various signal processing techniques have been developed to provide better resolution and estimation performance in all measurement dimensions: range, azimuth-elevation angles, and velocity of the targets surrounding the vehicles. This article summarizes various aspects of automotive radar signal processing techniques, including waveform design, possible radar architectures, estimation algorithms, implementation complexity-resolution trade off, and adaptive processing for complex environments, as well as unique problems associated with automotive radars such as pedestrian detection. We believe that this review article will combine the several contributions scattered in the literature to serve as a primary starting point to new researchers and to give a bird's-eye view to the existing research community.", "title": "" }, { "docid": "cbaf7cd4e17c420b7546d132959b3283", "text": "User mobility has given rise to a variety of Web applications, in which the global positioning system (GPS) plays many important roles in bridging between these applications and end users. As a kind of human behavior, transportation modes, such as walking and driving, can provide pervasive computing systems with more contextual information and enrich a user's mobility with informative knowledge. In this article, we report on an approach based on supervised learning to automatically infer users' transportation modes, including driving, walking, taking a bus and riding a bike, from raw GPS logs. Our approach consists of three parts: a change point-based segmentation method, an inference model and a graph-based post-processing algorithm. First, we propose a change point-based segmentation method to partition each GPS trajectory into separate segments of different transportation modes. 
Second, from each segment, we identify a set of sophisticated features, which are not affected by differing traffic conditions (e.g., a person's direction when in a car is constrained more by the road than any change in traffic conditions). Later, these features are fed to a generative inference model to classify the segments of different modes. Third, we conduct graph-based postprocessing to further improve the inference performance. This postprocessing algorithm considers both the commonsense constraints of the real world and typical user behaviors based on locations in a probabilistic manner. The advantages of our method over the related works include three aspects. (1) Our approach can effectively segment trajectories containing multiple transportation modes. (2) Our work mined the location constraints from user-generated GPS logs, while being independent of additional sensor data and map information like road networks and bus stops. (3) The model learned from the dataset of some users can be applied to infer GPS data from others. Using the GPS logs collected by 65 people over a period of 10 months, we evaluated our approach via a set of experiments. As a result, based on the change-point-based segmentation method and Decision Tree-based inference model, we achieved prediction accuracy greater than 71 percent. Further, using the graph-based post-processing algorithm, the performance attained a 4-percent enhancement.", "title": "" }, { "docid": "7516f24dad8441f6e13d211047c93f36", "text": "The growth of the software game development industry is enormous and is gaining importance day by day. This growth imposes severe pressure and a number of issues and challenges on the game development community. Game development is a complex process, and one important game development choice is to consider the developer perspective to produce good-quality software games by improving the game development process. The objective of this study is to provide a better understanding of the developer’s dimension as a factor in software game success. It focusses mainly on an empirical investigation of the effect of key developer factors on the software game development process and eventually on the quality of the resulting game. A quantitative survey was developed and conducted to identify key developer factors for an enhanced game development process. For this study, the developed survey was used to test the research model and hypotheses. The results provide evidence that game development organizations must deal with multiple key factors to remain competitive and to handle high pressure in the software game industry. The main contribution of this paper is to investigate empirically the influence of key developer factors on the game development process.", "title": "" }, { "docid": "d34be0ce0f9894d6e219d12630166308", "text": "The need for curricular reform in K-4 mathematics is clear. Such reform must address both the content and emphasis of the curriculum as well as approaches to instruction. A longstanding preoccupation with computation and other traditional skills has dominated both what mathematics is taught and the way mathematics is taught at this level. As a result, the present K-4 curriculum is narrow in scope; fails to foster mathematical insight, reasoning, and problem solving; and emphasizes rote activities. Even more significant is that children begin to lose their belief that learning mathematics is a sense-making experience. 
They become passive receivers of rules and procedures rather than active participants in creating knowledge.", "title": "" }, { "docid": "6bc3114cc800446f4d28eb47f40adc1e", "text": "We propose a novel computer-aided detection (CAD) framework of breast masses in mammography. To increase detection sensitivity for various types of mammographic masses, we propose the combined use of different detection algorithms. In particular, we develop a region-of-interest combination mechanism that integrates detection information gained from unsupervised and supervised detection algorithms. Also, to significantly reduce the number of false-positive (FP) detections, the new ensemble classification algorithm is developed. Extensive experiments have been conducted on a benchmark mammogram database. Results show that our combined detection approach can considerably improve the detection sensitivity with a small loss of FP rate, compared to representative detection algorithms previously developed for mammographic CAD systems. The proposed ensemble classification solution also has a dramatic impact on the reduction of FP detections; as much as 70% (from 15 to 4.5 per image) at only cost of 4.6% sensitivity loss (from 90.0% to 85.4%). Moreover, our proposed CAD method performs as well or better (70.7% and 80.0% per 1.5 and 3.5 FPs per image respectively) than the results of mammography CAD algorithms previously reported in the literature.", "title": "" }, { "docid": "2372c664173be9aa8c2497b42703a80e", "text": "Medical devices have a great impact but rigorous production and quality norms to meet, which pushes manufacturing technology to its limits in several fields, such as electronics, optics, communications, among others. This paper briefly explores how the medical industry is absorbing many of the technological developments from other industries, and making an effort to translate them into the healthcare requirements. An example is discussed in depth: implantable neural microsystems used for brain circuits mapping and modulation. Conventionally, light sources and electrical recording points are placed on silicon neural probes for optogenetic applications. The active sites of the probe must provide enough light power to modulate connectivity between neural networks, and simultaneously ensure reliable recordings of action potentials and local field activity. These devices aim at being a flexible and scalable technology capable of acquiring knowledge about neural mechanisms. Moreover, this paper presents a fabrication method for 2-D LED-based microsystems with high aspect-ratio shafts, capable of reaching up to 20 mm deep neural structures. In addition, PDMS $\\mu $ lenses on LEDs top surface are presented for focusing and increasing light intensity on target structures.", "title": "" } ]
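The Viola–Jones passage in the list above defines the integral image AI[x, y] as the sum of all pixel values at or above and to the left of (x, y), which lets any rectangular Haar feature be evaluated with a handful of array lookups. A minimal sketch of that computation (NumPy-based; the function names and the tiny test array are invented for this illustration, not taken from the paper) is:

```python
import numpy as np

def integral_image(A):
    """AI[x, y] = sum of A[x', y'] for all x' <= x and y' <= y (inclusive)."""
    return A.cumsum(axis=0).cumsum(axis=1)

def rect_sum(AI, x0, y0, x1, y1):
    """Sum of the rectangle with corners (x0, y0)-(x1, y1), inclusive,
    using at most four lookups into the integral image."""
    total = AI[x1, y1]
    if x0 > 0:
        total -= AI[x0 - 1, y1]
    if y0 > 0:
        total -= AI[x1, y0 - 1]
    if x0 > 0 and y0 > 0:
        total += AI[x0 - 1, y0 - 1]
    return total

A = np.arange(16).reshape(4, 4)
AI = integral_image(A)
assert rect_sum(AI, 1, 1, 2, 2) == A[1:3, 1:3].sum()
```

A two-rectangle Haar-like feature is then simply the difference of two such rectangle sums, regardless of the rectangle size.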
scidocsrr
5d0c7a76bcf5ff7fb4c681a1bd5496d1
GPS Spoofing Detection Based on Decision Fusion with a K-out-of-N Rule
[ { "docid": "a9bc9d9098fe852d13c3355ab6f81edb", "text": "The area under the ROC curve, or the equivalent Gini index, is a widely used measure of performance of supervised classification rules. It has the attractive property that it side-steps the need to specify the costs of the different kinds of misclassification. However, the simple form is only applicable to the case of two classes. We extend the definition to the case of more than two classes by averaging pairwise comparisons. This measure reduces to the standard form in the two class case. We compare its properties with the standard measure of proportion correct and an alternative definition of proportion correct based on pairwise comparison of classes for a simple artificial case and illustrate its application on eight data sets. On the data sets we examined, the measures produced similar, but not identical results, reflecting the different aspects of performance that they were measuring. Like the area under the ROC curve, the measure we propose is useful in those many situations where it is impossible to give costs for the different kinds of misclassification.", "title": "" }, { "docid": "531d387a14eefa6a8c45ad64039f29be", "text": "This paper presents an S-Transform based probabilistic neural network (PNN) classifier for recognition of power quality (PQ) disturbances. The proposed method requires less number of features as compared to wavelet based approach for the identification of PQ events. The features extracted through the S-Transform are trained by a PNN for automatic classification of the PQ events. Since the proposed methodology can reduce the features of the disturbance signal to a great extent without losing its original property, less memory space and learning PNN time are required for classification. Eleven types of disturbances are considered for the classification problem. The simulation results reveal that the combination of S-Transform and PNN can effectively detect and classify different PQ events. The classification performance of PNN is compared with a feedforward multilayer (FFML) neural network (NN) and learning vector quantization (LVQ) NN. It is found that the classification performance of PNN is better than both FFML and LVQ.", "title": "" } ]
[ { "docid": "f905016b422d9c16ac11b85182f196c7", "text": "The random forest (RF) classifier is an ensemble classifier derived from decision tree idea. However the parallel operations of several classifiers along with use of randomness in sample and feature selection has made the random forest a very strong classifier with accuracy rates comparable to most of currently used classifiers. Although, the use of random forest on handwritten digits has been considered before, in this paper RF is applied in recognizing Persian handwritten characters. Trying to improve the recognition rate, we suggest converting the structure of decision trees from a binary tree to a multi branch tree. The improvement gained this way proves the applicability of the idea.", "title": "" }, { "docid": "fb5e9a15429c9361dbe577ca8db18e46", "text": "Most experiments are done in laboratories. However, there is also a theory and practice of field experimentation. It has had its successes and failures over the past four decades but is now increasingly used for answering causal questions. This is true for both randomized and-perhaps more surprisingly-nonrandomized experiments. In this article, we review the history of the use of field experiments, discuss some of the reasons for their current renaissance, and focus the bulk of the article on the particular technical developments that have made this renaissance possible across four kinds of widely used experimental and quasi-experimental designs-randomized experiments, regression discontinuity designs in which those units above a cutoff get one treatment and those below get another, short interrupted time series, and nonrandomized experiments using a nonequivalent comparison group. We focus this review on some of the key technical developments addressing problems that previously stymied accurate effect estimation, the solution of which opens the way for accurate estimation of effects under the often difficult conditions of field implementation-the estimation of treatment effects under partial treatment implementation, the prevention and analysis of attrition, analysis of nested designs, new analytic developments for both regression discontinuity designs and short interrupted time series, and propensity score analysis. We also cover the key empirical evidence showing the conditions under which some nonrandomized experiments may be able to approximate results from randomized experiments.", "title": "" }, { "docid": "9efa0ff0743edacc4e9421ed45441fde", "text": "Perception of universal facial beauty has long been debated amongst psychologists and anthropologists. In this paper, we perform experiments to evaluate the extent of universal beauty by surveying a number of diverse human referees to grade a collection of female facial images. Results obtained show that there exists a strong central tendency in the human grades, thus exhibiting agreement on beauty assessment. We then trained an automated classifier using the average human grades as the ground truth and used it to classify an independent test set of facial images. The high accuracy achieved proves that this classifier can be used as a general, automated tool for objective classification of female facial beauty. 
Potential applications exist in the entertainment industry, cosmetic industry, virtual media, and plastic surgery.", "title": "" }, { "docid": "361bc333d47d2e1d4b6a6e8654d2659d", "text": "Both the industrial organization theory (IO) and the resource-based view of the firm (RBV) have advanced our understanding of the antecedents of competitive advantage but few have attempted to verify the outcome variables of competitive advantage and the persistence of such outcome variables. Here by integrating both IO and RBV perspectives in the analysis of competitive advantage at the firm level, our study clarifies a conceptual distinction between two types of competitive advantage: temporary competitive advantage and sustainable competitive advantage, and explores how firms transform temporary competitive advantage into sustainable competitive advantage. Testing of the developed hypotheses, based on a survey of 165 firms from Taiwan’s information and communication technology industry, suggests that firms with a stronger market position can only attain a better outcome of temporary competitive advantage whereas firms possessing a superior position in technological resources or capabilities can attain a better outcome of sustainable competitive advantage. More importantly, firms can leverage a temporary competitive advantage as an outcome of market position, to improving their technological resource and capability position, which in turn can enhance their sustainable competitive advantage.", "title": "" }, { "docid": "0b0e935d88fb5eb6b964e7e0853a7f2f", "text": "Skill prerequisite information is useful for tutoring systems that assess student knowledge or that provide remediation. These systems often encode prerequisites as graphs designed by subject matter experts in a costly and time-consuming process. In this paper, we introduce Combined student Modeling and prerequisite Discovery (COMMAND), a novel algorithm for jointly inferring a prerequisite graph and a student model from data. Learning a COMMAND model requires student performance data and a mapping of items to skills (Q-matrix). COMMAND learns the skill prerequisite relations as a Bayesian network (an encoding of the probabilistic dependence among the skills) via a two-stage learning process. In the first stage, it uses an algorithm called Structural Expectation Maximization to select a class of equivalent Bayesian networks; in the second stage, it uses curriculum information to select a single Bayesian network. Our experiments on simulations and real student data suggest that COMMAND is better than prior methods in the literature.", "title": "" }, { "docid": "6ad344c7049abad62cd53dacc694c651", "text": "Primary syphilis with oropharyngeal manifestations should be kept in mind, though. Lips and tongue ulcers are the most frequently reported lesions and tonsillar ulcers are much more rare. We report the case of a 24-year-old woman with a syphilitic ulcer localized in her left tonsil.", "title": "" }, { "docid": "6325188ee21b6baf65dbce6855c19bc2", "text": "A knowledgeable observer of a game of football (soccer) can make a subjective evaluation of the quality of passes made between players during the game, such as rating them as Good, OK, or Bad. 
In this article, we consider the problem of producing an automated system to make the same evaluation of passes and present a model to solve this problem.\n Recently, many professional football leagues have installed object tracking systems in their stadiums that generate high-resolution and high-frequency spatiotemporal trajectories of the players and the ball. Beginning with the thesis that much of the information required to make the pass ratings is available in the trajectory signal, we further postulated that using complex data structures derived from computational geometry would enable domain football knowledge to be included in the model by computing metric variables in a principled and efficient manner. We designed a model that computes a vector of predictor variables for each pass made and uses machine learning techniques to determine a classification function that can accurately rate passes based only on the predictor variable vector.\n Experimental results show that the learned classification functions can rate passes with 90.2% accuracy. The agreement between the classifier ratings and the ratings made by a human observer is comparable to the agreement between the ratings made by human observers, and suggests that significantly higher accuracy is unlikely to be achieved. Furthermore, we show that the predictor variables computed using methods from computational geometry are among the most important to the learned classifiers.", "title": "" }, { "docid": "57f5b00d796489b7f5caee701ce3116b", "text": "SR-IOV capable network devices offer the benefits of direct I/O throughput and reduced CPU utilization while greatly increasing the scalability and sharing capabilities of the device. SR-IOV allows the benefits of the paravirtualized driver’s throughput increase and additional CPU usage reductions in HVMs (Hardware Virtual Machines). SR-IOV uses direct I/O assignment of a network device to multiple VMs, maximizing the potential for using the full bandwidth capabilities of the network device, as well as enabling unmodified guest OS based device drivers which will work for different underlying VMMs. Drawing on our recent experience in developing an SR-IOV capable networking solution for the Xen hypervisor we discuss the system level requirements and techniques for SR-IOV enablement on the platform. We discuss PCI configuration considerations, direct MMIO, interrupt handling and DMA into an HVM using an IOMMU (I/O Memory Management Unit). We then explain the architectural, design and implementation considerations for SR-IOV networking in Xen in which the Physical Function has a driver running in the driver domain that serves as a “master” and each Virtual Function exposed to a guest VM has its own virtual driver.", "title": "" }, { "docid": "ae151d8ed9b8f99cfe22e593f381dd3b", "text": "A common assumption in studies of interruptions is that one is focused in an activity and then distracted by other stimuli. We take the reverse perspective and examine whether one might first be in an attentional state that makes one susceptible to communications typically associated with distraction. We explore the confluence of multitasking and workplace communications from three temporal perspectives -- prior to an interaction, when tasks and communications are interleaved, and at the end of the day. Using logging techniques and experience sampling, we observed 32 employees in situ for five days. 
We found that certain attentional states lead people to be more susceptible to particular types of interaction. Rote work is followed by more Facebook or face-to-face interaction. Focused and aroused states are followed by more email. The more time in email and face-fo-face interaction, and the more total screen switches, the less productive people feel at the day's end. We present the notion of emotional homeostasis along with new directions for multitasking research.", "title": "" }, { "docid": "4621856b479672433f9f9dff86d4f4da", "text": "Reproducibility of computational studies is a hallmark of scientific methodology. It enables researchers to build with confidence on the methods and findings of others, reuse and extend computational pipelines, and thereby drive scientific progress. Since many experimental studies rely on computational analyses, biologists need guidance on how to set up and document reproducible data analyses or simulations. In this paper, we address several questions about reproducibility. For example, what are the technical and non-technical barriers to reproducible computational studies? What opportunities and challenges do computational notebooks offer to overcome some of these barriers? What tools are available and how can they be used effectively? We have developed a set of rules to serve as a guide to scientists with a specific focus on computational notebook systems, such as Jupyter Notebooks, which have become a tool of choice for many applications. Notebooks combine detailed workflows with narrative text and visualization of results. Combined with software repositories and open source licensing, notebooks are powerful tools for transparent, collaborative, reproducible, and reusable data analyses.", "title": "" }, { "docid": "6d262139067d030c3ebb1169e93c6422", "text": "In this paper, we present a study on learning visual recognition models from large scale noisy web data. We build a new database called WebVision, which contains more than 2.4 million web images crawled from the Internet by using queries generated from the 1, 000 semantic concepts of the ILSVRC 2012 benchmark. Meta information along with those web images (e.g., title, description, tags, etc.) are also crawled. A validation set and test set containing human annotated images are also provided to facilitate algorithmic development. Based on our new database, we obtain a few interesting observations: 1) the noisy web images are sufficient for training a good deep CNN model for visual recognition; 2) the model learnt from our WebVision database exhibits comparable or even better generalization ability than the one trained from the ILSVRC 2012 dataset when being transferred to new datasets and tasks; 3) a domain adaptation issue (a.k.a., dataset bias) is observed, which means the dataset can be used as the largest benchmark dataset for visual domain adaptation. Our new WebVision database and relevant studies in this work would benefit the advance of learning state-of-the-art visual models with minimum supervision based on web data.", "title": "" }, { "docid": "f825dbbc9ff17178a81be71c5b9312ae", "text": "Skills like computational thinking, problem solving, handling complexity, team-work and project management are essential for future careers and needs to be taught to students at the elementary level itself. Computer programming knowledge and skills, experiencing technology and conducting science and engineering experiments are also important for students at elementary level. 
However, teaching such skills effectively through active learning can be challenging for educators. In this paper, we present our approach and experiences in teaching such skills to several elementary level children using Lego Mindstorms EV3 robotics education kit. We describe our learning environment consisting of lessons, worksheets, hands-on activities and assessment. We taught students how to design, construct and program robots using components such as motors, sensors, wheels, axles, beams, connectors and gears. Students also gained knowledge on basic programming constructs such as control flow, loops, branches and conditions using a visual programming environment. We carefully observed how students performed various tasks and solved problems. We present experimental results which demonstrates that our teaching methodology consisting of both the course content and pedagogy was effective in imparting the desired skills and knowledge to elementary level children. The students also participated in a competitive World Robot Olympiad India event and qualified during the regional round which is an evidence of the effectiveness of the approach.", "title": "" }, { "docid": "1a834cb0c5d72c6bc58c4898d318cfc2", "text": "This paper proposes a novel single-stage high-power-factor ac/dc converter with symmetrical topology. The circuit topology is derived from the integration of two buck-boost power-factor-correction (PFC) converters and a full-bridge series resonant dc/dc converter. The switch-utilization factor is improved by using two active switches to serve in the PFC circuits. A high power factor at the input line is assured by operating the buck-boost converters at discontinuous conduction mode. With symmetrical operation and elaborately designed circuit parameters, zero-voltage switching on all the active power switches of the converter can be retained to achieve high circuit efficiency. The operation modes, design equations, and design steps for the circuit parameters are proposed. A prototype circuit designed for a 200-W dc output was built and tested to verify the analytical predictions. Satisfactory performances are obtained from the experimental results.", "title": "" }, { "docid": "9bf99d48bc201147a9a9ad5af547a002", "text": "Consider a biped evolving in the sagittal plane. The unexpected rotation of the supporting foot can be avoided by controlling the zero moment point (ZMP). The objective of this study is to propose and analyze a control strategy for simultaneously regulating the position of the ZMP and the joints of the robot. If the tracking requirements were posed in the time domain, the problem would be underactuated in the sense that the number of inputs would be less than the number of outputs. To get around this issue, the proposed controller is based on a path-following control strategy, previously developed for dealing with the underactuation present in planar robots without actuated ankles. In particular, the control law is defined in such a way that only the kinematic evolution of the robot's state is regulated, but not its temporal evolution. The asymptotic temporal evolution of the robot is completely defined through a one degree-of-freedom subsystem of the closed-loop model. Since the ZMP is controlled, bipedal walking that includes a prescribed rotation of the foot about the toe can also be considered. 
Simple analytical conditions are deduced that guarantee the existence of a periodic motion and the convergence toward this motion.", "title": "" }, { "docid": "fdb0c8d2a4c4bbe68b7cffe58adbd074", "text": "Endowing a chatbot with personality is challenging but significant to deliver more realistic and natural conversations. In this paper, we address the issue of generating responses that are coherent to a pre-specified personality or profile. We present a method that uses generic conversation data from social media (without speaker identities) to generate profile-coherent responses. The central idea is to detect whether a profile should be used when responding to a user post (by a profile detector), and if necessary, select a key-value pair from the profile to generate a response forward and backward (by a bidirectional decoder) so that a personalitycoherent response can be generated. Furthermore, in order to train the bidirectional decoder with generic dialogue data, a position detector is designed to predict a word position from which decoding should start given a profile value. Manual and automatic evaluation shows that our model can deliver more coherent, natural, and diversified responses.", "title": "" }, { "docid": "055c9fad6d2f246fc1b6cbb1bce26a92", "text": "This work uses deep learning models for daily directional movements prediction of a stock price using financial news titles and technical indicators as input. A comparison is made between two different sets of technical indicators, set 1: Stochastic %K, Stochastic %D, Momentum, Rate of change, William’s %R, Accumulation/Distribution (A/D) oscillator and Disparity 5; set 2: Exponential Moving Average, Moving Average Convergence-Divergence, Relative Strength Index, On Balance Volume and Bollinger Bands. Deep learning methods can detect and analyze complex patterns and interactions in the data allowing a more precise trading process. Experiments has shown that Convolutional Neural Network (CNN) can be better than Recurrent Neural Networks (RNN) on catching semantic from texts and RNN is better on catching the context information and modeling complex temporal characteristics for stock market forecasting. So, there are two models compared in this paper: a hybrid model composed by a CNN for the financial news and a Long Short-Term Memory (LSTM) for technical indicators, named as SI-RCNN; and a LSTM network only for technical indicators, named as I-RNN. The output of each model is used as input for a trading agent that buys stocks on the current day and sells the next day when the model predicts that the price is going up, otherwise the agent sells stocks on the current day and buys the next day. The proposed method shows a major role of financial news in stabilizing the results and almost no improvement when comparing different sets of technical indicators.", "title": "" }, { "docid": "43db7c431cac1afd33f48774ee0dbc61", "text": "We present a diff algorithm for XML data. This work is motivated by the support for change control in the context of the Xyleme project that is investigating dynamic warehouses capable of storing massive volume of XML data. Because of the context, our algorithm has to be very efficient in terms of speed and memory space even at the cost of some loss of “quality”. Also, it considers, besides insertions, deletions and updates (standard in diffs), a move operation on subtrees that is essential in the context of XML. 
Intuitively, our diff algorithm uses signatures to match (large) subtrees that were left unchanged between the old and new versions (a toy sketch of this signature-based matching appears after this passage list). Such exact matchings are then possibly propagated to ancestors and descendants to obtain more matchings. It also uses XML-specific information such as ID attributes. We provide a performance analysis of the algorithm. We show that it runs on average in linear time vs. quadratic time for previous algorithms. We present experiments on synthetic data that confirm the analysis. Since this problem is NP-hard, the linear time is obtained by trading some quality. We present experiments (again on synthetic data) that show that the output of our algorithm is reasonably close to the “optimal” in terms of quality. Finally we present experiments on a small sample of XML pages found on the Web.", "title": "" }, { "docid": "04ed876237214c1366f966b80ebb7fd4", "text": "Load balancing is essential for efficient operations in distributed environments. As Cloud Computing is growing rapidly and clients are demanding more services and better results, load balancing for the Cloud has become a very interesting and important research area. Many algorithms were suggested to provide efficient mechanisms and algorithms for assigning the client's requests to available Cloud nodes. These approaches aim to enhance the overall performance of the Cloud and provide the user more satisfying and efficient services. In this paper, we investigate the different algorithms proposed to resolve the issue of load balancing and task scheduling in Cloud Computing. We discuss and compare these algorithms to provide an overview of the latest approaches in the field.", "title": "" }, { "docid": "96e9c66453ba91d1bc44bb0242f038ce", "text": "Body temperature is one of the key parameters for health monitoring of premature infants at the neonatal intensive care unit (NICU). In this paper, we propose and demonstrate a design of non-invasive neonatal temperature monitoring with wearable sensors. A negative temperature coefficient (NTC) resistor is applied as the temperature sensor due to its accuracy and small size. Conductive textile wires are used to make the sensor integration compatible for a wearable non-invasive monitoring platform, such as a neonatal smart jacket. Location of the sensor, materials and appearance are designed to optimize the functionality, patient comfort and the possibilities for aesthetic features. A prototype belt is built of soft bamboo fabrics with the NTC sensor integrated to demonstrate the temperature monitoring. Experimental results from testing on neonates at the NICU of Máxima Medical Center (MMC), Veldhoven, the Netherlands, show accurate temperature monitoring by the prototype belt compared with the standard patient monitor.", "title": "" }, { "docid": "2eebc7477084b471f9e9872ba8751359", "text": "Despite significant progress in the development of human action detection datasets and algorithms, no current dataset is representative of real-world aerial view scenarios. We present Okutama-Action, a new video dataset for aerial view concurrent human action detection. It consists of 43 minute-long fully-annotated sequences with 12 action classes. Okutama-Action features many challenges missing in current datasets, including dynamic transition of actions, significant changes in scale and aspect ratio, abrupt camera movement, as well as multi-labeled actors.
As a result, our dataset is more challenging than existing ones, and will help push the field forward to enable real-world applications.", "title": "" } ]
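One passage in the list above describes an XML diff that first matches unchanged subtrees between the old and new document versions by comparing signatures. As a toy illustration of that matching step only (the hashing scheme, function names and toy documents are invented here, and a real diff would memoize signatures rather than recompute them, besides handling attributes and ordering properly), one could write:

```python
import hashlib
import xml.etree.ElementTree as ET

def signature(node):
    """Bottom-up content hash of a subtree: tag, trimmed text, child signatures.
    Attribute handling and whitespace normalization are deliberately simplified."""
    h = hashlib.sha1()
    h.update(node.tag.encode())
    h.update((node.text or "").strip().encode())
    for child in node:
        h.update(signature(child).encode())
    return h.hexdigest()

def index_subtrees(root):
    """Map signature -> nodes, so identical subtrees in the old and new
    versions can be matched without pairwise comparison."""
    index = {}
    stack = [root]
    while stack:
        node = stack.pop()
        index.setdefault(signature(node), []).append(node)
        stack.extend(list(node))
    return index

old = ET.fromstring("<doc><sec id='a'><p>unchanged</p></sec><p>old text</p></doc>")
new = ET.fromstring("<doc><p>new text</p><sec id='a'><p>unchanged</p></sec></doc>")
shared = set(index_subtrees(old)) & set(index_subtrees(new))  # signatures of matched subtrees
```

Matched signatures give the exact subtree correspondences from which insertions, deletions and moves can then be derived.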
scidocsrr
8e6e49e6cb0f4d85f4018da85bfadc80
Bagging, Boosting and the Random Subspace Method for Linear Classifiers
[ { "docid": "00ea9078f610b14ed0ed00ed6d0455a7", "text": "Boosting is a general method for improving the performance of learning algorithms. A recently proposed boosting algorithm, Ada Boost, has been applied with great success to several benchmark machine learning problems using mainly decision trees as base classifiers. In this article we investigate whether Ada Boost also works as well with neural networks, and we discuss the advantages and drawbacks of different versions of the Ada Boost algorithm. In particular, we compare training methods based on sampling the training set and weighting the cost function. The results suggest that random resampling of the training data is not the main explanation of the success of the improvements brought by Ada Boost. This is in contrast to bagging, which directly aims at reducing variance and for which random resampling is essential to obtain the reduction in generalization error. Our system achieves about 1.4 error on a data set of on-line handwritten digits from more than 200 writers. A boosted multilayer network achieved 1.5 error on the UCI letters and 8.1 error on the UCI satellite data set, which is significantly better than boosted decision trees.", "title": "" } ]
[ { "docid": "3606b1c9bc5003c6119a5cc675ad63f4", "text": "Hypothyroidism is a clinical disorder commonly encountered by the primary care physician. Untreated hypothyroidism can contribute to hypertension, dyslipidemia, infertility, cognitive impairment, and neuromuscular dysfunction. Data derived from the National Health and Nutrition Examination Survey suggest that about one in 300 persons in the United States has hypothyroidism. The prevalence increases with age, and is higher in females than in males. Hypothyroidism may occur as a result of primary gland failure or insufficient thyroid gland stimulation by the hypothalamus or pituitary gland. Autoimmune thyroid disease is the most common etiology of hypothyroidism in the United States. Clinical symptoms of hypothyroidism are nonspecific and may be subtle, especially in older persons. The best laboratory assessment of thyroid function is a serum thyroid-stimulating hormone test. There is no evidence that screening asymptomatic adults improves outcomes. In the majority of patients, alleviation of symptoms can be accomplished through oral administration of synthetic levothyroxine, and most patients will require lifelong therapy. Combination triiodothyronine/thyroxine therapy has no advantages over thyroxine monotherapy and is not recommended. Among patients with subclinical hypothyroidism, those at greater risk of progressing to clinical disease, and who may be considered for therapy, include patients with thyroid-stimulating hormone levels greater than 10 mIU per L and those who have elevated thyroid peroxidase antibody titers.", "title": "" }, { "docid": "6c175d7a90ed74ab3b115977c82b0ffa", "text": "We present statistical analyses of the large-scale structure of 3 types of semantic networks: word associations, WordNet, and Roget's Thesaurus. We show that they have a small-world structure, characterized by sparse connectivity, short average path lengths between words, and strong local clustering. In addition, the distributions of the number of connections follow power laws that indicate a scale-free pattern of connectivity, with most nodes having relatively few connections joined together through a small number of hubs with many connections. These regularities have also been found in certain other complex natural networks, such as the World Wide Web, but they are not consistent with many conventional models of semantic organization, based on inheritance hierarchies, arbitrarily structured networks, or high-dimensional vector spaces. We propose that these structures reflect the mechanisms by which semantic networks grow. We describe a simple model for semantic growth, in which each new word or concept is connected to an existing network by differentiating the connectivity pattern of an existing node. This model generates appropriate small-world statistics and power-law connectivity distributions, and it also suggests one possible mechanistic basis for the effects of learning history variables (age of acquisition, usage frequency) on behavioral performance in semantic processing tasks.", "title": "" }, { "docid": "8933d92ec139e80ffb8f0ebaa909d76c", "text": "Reading an article and answering questions about its content is a fundamental task for natural language understanding. While most successful neural approaches to this problem rely on recurrent neural networks (RNNs), training RNNs over long documents can be prohibitively slow. 
We present a novel framework for question answering that can efficiently scale to longer documents while maintaining or even improving performance. Our approach combines a coarse, inexpensive model for selecting one or more relevant sentences and a more expensive RNN that produces the answer from those sentences. A central challenge is the lack of intermediate supervision for the coarse model, which we address using reinforcement learning. Experiments demonstrate state-of-the-art performance on a challenging subset of the WIKIREADING dataset (Hewlett et al., 2016) and on a newly-gathered dataset, while reducing the number of sequential RNN steps by 88% against a standard sequence to sequence model.", "title": "" }, { "docid": "86826e10d531b8d487fada7a5c151a41", "text": "Feature selection is an important preprocessing step in data mining. Mutual information-based feature selection is a popular and effective kind of approach. In general, most existing mutual information-based techniques are greedy methods, which are proven to be efficient but suboptimal. In this paper, mutual information-based feature selection is transformed into a global optimization problem, which provides a new idea for solving feature selection problems. Firstly, a single-objective feature selection algorithm combining relevance and redundancy is presented, which has good global searching ability and high computational efficiency. Furthermore, to improve the performance of feature selection, we propose a multi-objective feature selection algorithm. The method can meet different requirements and achieve a tradeoff among multiple conflicting objectives. On this basis, a hybrid feature selection framework is adopted for obtaining a final solution. We compare the performance of our algorithm with related methods on both synthetic and real datasets. Simulation results show the effectiveness and practicality of the proposed method.", "title": "" }, { "docid": "2582b0fffad677d3f0ecf11b92d9702d", "text": "This study explores teenage girls' narrations of the relationship between self-presentation and peer comparison on social media in the context of beauty. Social media provide new platforms that manifest media and peer influences on teenage girls' understanding of beauty towards an idealized notion. Through 24 in-depth interviews, this study examines secondary school girls' self-presentation and peer comparison behaviors on social network sites where the girls posted self-portrait photographs or “selfies” and collected peer feedback in the forms of “likes,” “followers,” and comments. Results of thematic analysis reveal a gap between teenage girls' self-beliefs and perceived peer standards of beauty. Feelings of low self-esteem and insecurity underpinned their efforts in edited self-presentation and quest for peer recognition. Peers played multiple roles that included imaginary audiences, judges, vicarious learning sources, and comparison targets in shaping teenage girls' perceptions and presentation of beauty. Findings from this study reveal the struggles that teenage girls face today and provide insights for future investigations and interventions pertinent to teenage girls’ presentation and evaluation of self on", "title": "" }, { "docid": "13afc7b4786ee13c6b0bfb1292f50153", "text": "Heavy metals are discharged into water from various industries. They can be toxic or carcinogenic in nature and can cause severe problems for humans and aquatic ecosystems. Thus, the removal of heavy metals from wastewater is a serious problem.
The adsorption process is widely used for the removal of heavy metals from wastewater because of its low cost, availability and eco-friendly nature. Both commercial adsorbents and bioadsorbents are used for the removal of heavy metals from wastewater, with high removal capacity. This review article aims to compile scattered information on the different adsorbents that are used for heavy metal removal and to provide information on the commercially available and natural bioadsorbents used for removal of chromium, cadmium and copper, in particular.", "title": "" }, { "docid": "b86fed0ebcf017adedbe9f3d14d6903d", "text": "The general employee scheduling problem extends the standard shift scheduling problem by discarding key limitations such as employee homogeneity and the absence of connections across time period blocks. The resulting increased generality yields a scheduling model that applies to real world problems confronted in a wide variety of areas. The price of the increased generality is a marked increase in size and complexity over related models reported in the literature. The integer programming formulation for the general employee scheduling problem, arising in typical real world settings, contains from one million to over four million zero-one variables. By contrast, studies of special cases reported over the past decade have focused on problems involving between 100 and 500 variables. We characterize the relationship between the general employee scheduling problem and related problems, reporting computational results for a procedure that solves these more complex problems within 98-99% optimality and runs on a microcomputer. We view our approach as an integration of management science and artificial intelligence techniques. The benefits of such an integration are suggested by the fact that other zero-one scheduling implementations reported in the literature, including the one awarded the Lancaster Prize in 1984, have obtained comparable approximations of optimality only for problems from two to three orders of magnitude smaller, and then only by the use of large mainframe computers.", "title": "" }, { "docid": "df0e13e1322a95046a91fb7c867d968a", "text": "Taking into consideration both external (i.e. technology acceptance factors, website service quality) as well as internal factors (i.e. specific holdup cost), this research explores how the customers’ satisfaction and loyalty, when shopping and purchasing on the internet, can be associated with each other and how they are affected by the above dynamics. This research adopts the Structural Equation Model (SEM) as the main analytical tool. It investigates those who have had shopping experiences in major shopping websites of Taiwan.
The research results point out the following: First, customer satisfaction will positively influence customer loyalty directly; second, technology acceptance factors will positively influence customer satisfaction and loyalty directly; third, website service quality can positively influence customer satisfaction and loyalty directly; and fourth, specific holdup cost can positively influence customer loyalty directly, but cannot positively influence customer satisfaction directly. This paper draws on the research results for implications of managerial practice, and then suggests some empirical tactics in order to help enhance management performance for the website shopping industry.", "title": "" }, { "docid": "fb836666c993b27b99f6c789dd0aae05", "text": "Software transactions have received significant attention as a way to simplify shared-memory concurrent programming, but insufficient focus has been given to the precise meaning of software transactions or their interaction with other language features. This work begins to rectify that situation by presenting a family of formal languages that model a wide variety of behaviors for software transactions. These languages abstract away implementation details of transactional memory, providing high-level definitions suitable for programming languages. We use small-step semantics in order to represent explicitly the interleaved execution of threads that is necessary to investigate pertinent issues.\n We demonstrate the value of our core approach to modeling transactions by investigating two issues in depth. First, we consider parallel nesting, in which parallelism and transactions can nest arbitrarily. Second, we present multiple models for weak isolation, in which nontransactional code can violate the isolation of a transaction. For both, type-and-effect systems let us soundly and statically restrict what computation can occur inside or outside a transaction. We prove some key language-equivalence theorems to confirm that under sufficient static restrictions, in particular that each mutable memory location is used outside transactions or inside transactions (but not both), no program can determine whether the language implementation uses weak isolation or strong isolation.", "title": "" }, { "docid": "e5f6d7ed8d2dbf0bc2cde28e9c9e129b", "text": "Change detection is the process of finding the difference between two images taken at two different times. With the help of remote sensing, we try to find the difference between images of the same scene taken at different times. Here we use the mean ratio and log ratio to find the difference between the images. The log ratio is used to find the background image, while the foreground is detected by the mean ratio. A reformulated fuzzy local-information C-means clustering algorithm is proposed for classifying changed and unchanged regions in the fused difference image. It incorporates the information about spatial context in a novel fuzzy way for the purpose of enhancing the changed information and of reducing the effect of speckle noise. Experiments on real SAR images show that the image fusion strategy integrates the advantages of the log-ratio operator and the mean-ratio operator and gains a better performance.
The change detection results obtained by the improved fuzzy clustering algorithm exhibited lower error than previously existing methods.", "title": "" }, { "docid": "92699fa23a516812c7fcb74ba38f42c6", "text": "Deep reinforcement learning (DRL) is poised to revolutionize the field of artificial intelligence (AI) and represents a step toward building autonomous systems with a higher-level understanding of the visual world. Currently, deep learning is enabling reinforcement learning (RL) to scale to problems that were previously intractable, such as learning to play video games directly from pixels. DRL algorithms are also applied to robotics, allowing control policies for robots to be learned directly from camera inputs in the real world. In this survey, we begin with an introduction to the general field of RL, then progress to the main streams of value-based and policy-based methods. Our survey will cover central algorithms in deep RL, including the deep Q-network (DQN), trust region policy optimization (TRPO), and asynchronous advantage actor critic. In parallel, we highlight the unique advantages of deep neural networks, focusing on visual understanding via RL. To conclude, we describe several current areas of research within the field.", "title": "" }, { "docid": "a94278bafc093c37bcba719a4b6a03fa", "text": "Community detection and analysis is an important methodology for understanding the organization of various real-world networks and has applications in problems as diverse as consensus formation in social communities or the identification of functional modules in biochemical networks. Currently used algorithms that identify the community structures in large-scale real-world networks require a priori information such as the number and sizes of communities or are computationally expensive. In this paper we investigate a simple label propagation algorithm that uses the network structure alone as its guide and requires neither optimization of a predefined objective function nor prior information about the communities. In our algorithm every node is initialized with a unique label and at every step each node adopts the label that most of its neighbors currently have. In this iterative process densely connected groups of nodes form a consensus on a unique label to form communities. We validate the algorithm by applying it to networks whose community structures are known. We also demonstrate that the algorithm takes an almost linear time and hence it is computationally less expensive than what was possible so far.", "title": "" }, { "docid": "d469d31d26d8bc07b9d8dfa8ce277e47", "text": "BACKGROUND/PURPOSE\nMorbidity in children treated with appendicitis results either from late diagnosis or negative appendectomy. A prospective analysis of efficacy of Pediatric Appendicitis Score for early diagnosis of appendicitis in children was conducted.\n\n\nMETHODS\nIn the last 5 years, 1,170 children aged 4 to 15 years with abdominal pain suggestive of acute appendicitis were evaluated prospectively. Group 1 (734) were patients with appendicitis and group 2 (436) nonappendicitis. Multiple linear logistic regression analysis of all clinical and investigative parameters was performed for a model comprising 8 variables to form a diagnostic score.\n\n\nRESULTS\nLogistic regression analysis yielded a model comprising 8 variables, all statistically significant, P <.001.
These variables in order of their diagnostic index were (1) cough/percussion/hopping tenderness in the right lower quadrant of the abdomen (0.96), (2) anorexia (0.88), (3) pyrexia (0.87), (4) nausea/emesis (0.86), (5) tenderness over the right iliac fossa (0.84), (6) leukocytosis (0.81), (7) polymorphonuclear neutrophilia (0.80) and (8) migration of pain (0.80). Each of these variables was assigned a score of 1, except for physical signs (1 and 5), which were scored 2 to obtain a total of 10. The Pediatric Appendicitis Score had a sensitivity of 1, specificity of 0.92, positive predictive value of 0.96, and negative predictive value of 0.99.\n\n\nCONCLUSION\nThe Pediatric Appendicitis Score is a simple, relatively accurate diagnostic tool for assessing an acute abdomen and diagnosing appendicitis in children.", "title": "" }, { "docid": "e1adb8ebfd548c2aca5110e2a9e8d667", "text": "This paper introduces an active object detection and localization framework that combines a robust untextured object detection and 3D pose estimation algorithm with a novel next-best-view selection strategy. We address the detection and localization problems by proposing an edge-based registration algorithm that refines the object position by minimizing a cost directly extracted from a 3D image tensor that encodes the minimum distance to an edge point in a joint direction/location space. We face the next-best-view problem by exploiting a sequential decision process that, for each step, selects the next camera position which maximizes the mutual information between the state and the next observations. We solve the intrinsic intractability of this solution by generating observations that represent scene realizations, i.e. combination samples of object hypotheses provided by the object detector, while modeling the state by means of a set of constantly resampled particles. Experiments performed on different real world, challenging datasets confirm the effectiveness of the proposed methods.", "title": "" }, { "docid": "2038dbe6e16892c8d37a4dac47d4f681", "text": "Sentences with different structures may convey the same meaning. Identification of sentences with paraphrases plays an important role in text-related research and applications. This work focuses on the statistical measures and semantic analysis of Malayalam sentences to detect the paraphrases. The statistical similarity measures between sentences, based on symbolic characteristics and structural information, could measure the similarity between sentences without any prior knowledge but only on the statistical information of sentences. The semantic representation of Universal Networking Language (UNL) represents only the inherent meaning in a sentence without any syntactic details. Thus, comparing the UNL graphs of two sentences can give an insight into how semantically similar the two sentences are. The combination of the statistical similarity and semantic similarity scores yields the overall similarity score. This is the first attempt towards paraphrase detection for Malayalam sentences.", "title": "" }, { "docid": "259e95c8d756f31408d30bbd7660eea3", "text": "The capacity to identify cheaters is essential for maintaining balanced social relationships, yet humans have been shown to be generally poor deception detectors. In fact, a plethora of empirical findings holds that individuals are only slightly better than chance when discerning lies from truths.
Here, we report 5 experiments showing that judges' ability to detect deception greatly increases after periods of unconscious processing. Specifically, judges who were kept from consciously deliberating outperformed judges who were encouraged to do so or who made a decision immediately; moreover, unconscious thinkers' detection accuracy was significantly above chance level. The reported experiments further show that this improvement comes about because unconscious thinking processes allow for integrating the particularly rich information basis necessary for accurate lie detection. These findings suggest that the human mind is not unfit to distinguish between truth and deception but that this ability resides in previously overlooked processes.", "title": "" }, { "docid": "49a87829a12168de2be2ee32a23ddeb7", "text": "Crowdsourcing emerged with the development of Web 2.0 technologies as a distributed online practice that harnesses the collective aptitudes and skills of the crowd in order to reach specific goals. The success of crowdsourcing systems is influenced by the users’ levels of participation and interactions on the platform. Therefore, there is a need for the incorporation of appropriate incentive mechanisms that would lead to sustained user engagement and quality contributions. Accordingly, the aim of the particular paper is threefold: first, to provide an overview of user motives and incentives, second, to present the corresponding incentive mechanisms used to trigger these motives, along with some indicative examples of successful crowdsourcing platforms that incorporate these incentive mechanisms, and third, to provide recommendations on their careful design in order to cater to the context and goal of the platform.", "title": "" }, { "docid": "0b3555b8c1932a2364a7264cbf2f7c25", "text": "This paper introduces a novel weighted unsupervised learning method for object detection using an RGB-D camera. This technique is feasible for detecting the moving objects in the noisy environments that are captured by an RGB-D camera. The main contribution of this paper is a real-time algorithm for detecting each object as a separate cluster using weighted clustering. In a preprocessing step, the algorithm calculates the pose (3D position X, Y, Z) and RGB color of each data point and then it calculates each data point’s normal vector using the point’s neighbors. After preprocessing, our algorithm calculates k-weights for each data point; each weight indicates cluster membership, resulting in clustered objects of the scene. Keywords—Weighted Unsupervised Learning, Object Detection, RGB-D camera, Kinect", "title": "" }, { "docid": "abda48a065aecbe34f86ce3490520402", "text": "Wireless Sensor Network (WSN) consists of small low-cost, low-power multifunctional nodes interconnected to efficiently aggregate and transmit data to a sink. Cluster-based approaches use some nodes as Cluster Heads (CHs) and organize WSNs efficiently for aggregation of data and energy saving. A CH conveys information gathered by cluster nodes and aggregates/compresses data before transmitting it to a sink. However, this additional responsibility of the node results in a higher energy drain leading to uneven network degradation. Low Energy Adaptive Clustering Hierarchy (LEACH) offsets this by probabilistically rotating the cluster head role among nodes with energy above a set threshold. CH selection in WSN is NP-Hard as optimal data aggregation with efficient energy savings cannot be solved in polynomial time.
In this work, a modified firefly heuristic, the synchronous firefly algorithm, is proposed to improve the network performance. Extensive simulation shows the proposed technique to perform well compared to LEACH and energy-efficient hierarchical clustering. Simulations show the effectiveness of the proposed method in decreasing the packet loss ratio by an average of 9.63% and improving the energy efficiency of the network when compared to LEACH and EEHC.", "title": "" }, { "docid": "09168164e47fd781e4abeca45fb76c35", "text": "AUTOSAR is a standard for the development of software for embedded devices, primarily created for the automotive domain. It specifies a software architecture with more than 80 software modules that provide services to one or more software components. With the trend towards integrating safety-relevant systems into embedded devices, conformance with standards such as ISO 26262 [ISO11] or ISO/IEC 61508 [IEC10] becomes increasingly important. This article presents an approach to providing freedom from interference between software components by using the MPU available on many modern microcontrollers. Each software component gets its own dedicated memory area, a so-called memory partition. This concept is well known in other industries like the aerospace industry, where the IMA architecture is now well established. The memory partitioning mechanism is implemented by a microkernel, which integrates seamlessly into the architecture specified by AUTOSAR. The development has been performed as an SEooC as described in ISO 26262, which is a new development approach. We describe the procedure for developing an SEooC. AUTOSAR: AUTomotive Open System ARchitecture, see [ASR12]. MPU: Memory Protection Unit. IMA: Integrated Modular Avionics, see [RTCA11]. SEooC: Safety Element out of Context, see [ISO11].", "title": "" } ]
scidocsrr
cbe333e5804af8a9778780bff57dc255
Health Media: From Multimedia Signals to Personal Health Insights
[ { "docid": "e95253b765129a0940e4af899d9e5d72", "text": "Smart health devices monitor certain health parameters, are connected to an Internet service, and target primarily a lay consumer seeking a healthy lifestyle rather than the medical expert or the chronically ill person. These devices offer tremendous opportunities for wellbeing and self-management of health. This department reviews smart health devices from a pervasive computing perspective, discussing various devices and their functionality, limitations, and potential.", "title": "" } ]
[ { "docid": "b2058a09b3e83bb864cb238e066c8afb", "text": "The ability to reason with natural language is a fundamental prerequisite for many NLP tasks such as information extraction, machine translation and question answering. To quantify this ability, systems are commonly tested whether they can recognize textual entailment, i.e., whether one sentence can be inferred from another one. However, in most NLP applications only single source sentences instead of sentence pairs are available. Hence, we propose a new task that measures how well a model can generate an entailed sentence from a source sentence. We take entailment-pairs of the Stanford Natural Language Inference corpus and train an LSTM with attention. On a manually annotated test set we found that 82% of generated sentences are correct, an improvement of 10.3% over an LSTM baseline. A qualitative analysis shows that this model is not only capable of shortening input sentences, but also inferring new statements via paraphrasing and phrase entailment. We then apply this model recursively to input-output pairs, thereby generating natural language inference chains that can be used to automatically construct an entailment graph from source sentences. Finally, by swapping source and target sentences we can also train a model that given an input sentence invents additional information to generate a new sentence.", "title": "" }, { "docid": "3c4f19544e9cc51d307c6cc9aea63597", "text": "Math anxiety is a negative affective reaction to situations involving math. Previous work demonstrates that math anxiety can negatively impact math problem solving by creating performance-related worries that disrupt the working memory needed for the task at hand. By leveraging knowledge about the mechanism underlying the math anxiety-performance relationship, we tested the effectiveness of a short expressive writing intervention that has been shown to reduce intrusive thoughts and improve working memory availability. Students (N = 80) varying in math anxiety were asked to sit quietly (control group) prior to completing difficulty-matched math and word problems or to write about their thoughts and feelings regarding the exam they were about to take (expressive writing group). For the control group, high math-anxious individuals (HMAs) performed significantly worse on the math problems than low math-anxious students (LMAs). In the expressive writing group, however, this difference in math performance across HMAs and LMAs was significantly reduced. Among HMAs, the use of words related to anxiety, cause, and insight in their writing was positively related to math performance. Expressive writing boosts the performance of anxious students in math-testing situations.", "title": "" }, { "docid": "36342d65aaa9dff0339f8c1c8cb23f30", "text": "Recent approaches to Reinforcement Learning (RL) with function approximation include Neural Fitted Q Iteration and the use of Gaussian Processes. They belong to the class of fitted value iteration algorithms, which use a set of support points to fit the value-function in a batch iterative process. These techniques make efficient use of a reduced number of samples by reusing them as needed, and are appropriate for applications where the cost of experiencing a new sample is higher than storing and reusing it, but this is at the expense of increasing the computational effort, since these algorithms are not incremental. 
On the other hand, non-parametric models for function approximation, like Gaussian Processes, are preferred over parametric ones, due to their greater flexibility. A further advantage of using Gaussian Processes for function approximation is that they make it possible to quantify the uncertainty of the estimation at each point. In this paper, we propose a new approach for RL in continuous domains based on Probability Density Estimations. Our method combines the best features of the previous methods: it is non-parametric and provides an estimation of the variance of the approximated function at any point of the domain. In addition, our method is simple, incremental, and computationally efficient. All these features make this approach more appealing than Gaussian Processes and fitted value iteration algorithms in general.", "title": "" }, { "docid": "29e500aa57f82d63596ae13639d46cbf", "text": "In this paper we present an intrusion detection module capable of detecting malicious network traffic in a SCADA (Supervisory Control and Data Acquisition) system. Malicious data in a SCADA system disrupt its correct functioning and tamper with its normal operation. OCSVM (One-Class Support Vector Machine) is an intrusion detection mechanism that does not need any labeled data for training or any information about the kind of anomaly it is expected to detect. This feature makes it ideal for processing SCADA environment data and for automating SCADA performance monitoring. The OCSVM module developed is trained on network traces offline and detects anomalies in the system in real time. The module is part of an IDS (Intrusion Detection System) developed under the CockpitCI project and communicates with the other parts of the system by the exchange of IDMEF (Intrusion Detection Message Exchange Format) messages that carry information about the source of the incident, the time and a classification of the alarm.", "title": "" }, { "docid": "cbdfd886416664809046ff2e674f4ae1", "text": "Domain adaptation addresses the problem where data instances of a source domain have different distributions from that of a target domain, which occurs frequently in many real life scenarios. This work focuses on unsupervised domain adaptation, where labeled data are only available in the source domain. We propose to interpolate subspaces through dictionary learning to link the source and target domains. These subspaces are able to capture the intrinsic domain shift and form a shared feature representation for cross domain recognition. Further, we introduce a quantitative measure to characterize the shift between two domains, which enables us to select the optimal domain to adapt to, given multiple source domains. We present experiments on face recognition across pose, illumination and blur variations, cross dataset object recognition, and report improved performance over the state of the art.", "title": "" }, { "docid": "cee3c61474bf14158d4abf0c794a9c2a", "text": "This course will focus on describing techniques for handling datasets larger than main memory in scientific visualization and computer graphics. Recently, several external memory techniques have been developed for a wide variety of graphics and visualization problems, including surface simplification, volume rendering, isosurface generation, ray tracing, surface reconstruction, and so on. This work has had significant impact given that in recent years there has been a rapid increase in the raw size of datasets.
Several technological trends are contributing to this, such as the development of high-resolution 3D scanners, and the need to visualize ASCI-size (Accelerated Strategic Computing Initiative) datasets. Another important push for this kind of technology is the growing speed gap between main memory and caches; such a gap penalizes algorithms which do not optimize for coherence of access. Because of these reasons, much research in computer graphics focuses on developing out-of-core (and often cache-friendly) techniques. This course reviews fundamental issues, current problems, and unresolved solutions, and presents an in-depth study of external memory algorithms developed in recent years. Its goal is to provide students and graphics researchers and professionals with an effective knowledge of current techniques, as well as the foundation to develop novel techniques on their own.", "title": "" }, { "docid": "947d4c60427377bcb466fe1393c5474c", "text": "This paper presents a single BCD technology platform with high performance power devices at a wide range of operating voltages. The platform offers 6 V to 70 V LDMOS devices. All devices offer best-in-class specific on-resistance of 20 to 40% lower than that of the state-of-the-art IC-based LDMOS devices and robustness better than the square SOA (safe-operating-area). Fully isolated LDMOS devices, in which independent bias is possible for circuit flexibility, demonstrate superior specific on-resistance (e.g. 11.9 mΩ-mm2 for breakdown voltage of 39 V). Moreover, the unusual sudden current enhancement that appears in the ID-VD saturation region of most of the high voltage LDMOS devices is significantly suppressed.", "title": "" }, { "docid": "413df06d6ba695aa5baa13ea0913c6e6", "text": "Time stamping is a technique used to prove the existence of certain digital data prior to a specific point in time. With the recent development of electronic commerce, time stamping is now widely recognized as an important technique used to ensure the integrity of digital data for a long time period. Various time stamping schemes and services have been proposed. When one uses a certain time stamping service, he should confirm in advance that its security level sufficiently meets his security requirements. However, time stamping schemes are generally so complicated that it is not easy to evaluate their security levels accurately. It is important for users to have a good grasp of current studies of time stamping schemes and to make use of such studies to select an appropriate time stamping service. Une and Matsumoto [2000], [2001a], [2001b] and [2002] have proposed a method of classifying time stamping schemes and evaluating their security systematically. Their papers have clarified the objectives, functions and entities involved in time stamping schemes and have discussed the conditions sufficient to detect the alteration of a time stamp in each scheme. This paper explains existing problems regarding the security evaluation of time stamping schemes and the results of Une and Matsumoto [2000], [2001a], [2001b] and [2002].
It also applies their results to some existing time stamping schemes and indicates possible directions of further research into time stamping schemes.", "title": "" }, { "docid": "269cff08201fd7815e3ea2c9a786d38b", "text": "In this paper, we are interested in developing compositional models to explicitly represent pose, parts and attributes and to tackle the tasks of attribute recognition, pose estimation and part localization jointly. This is different from the recent trend of using CNN-based approaches for training and testing on these tasks separately with a large amount of data. Conventional attribute models typically use a large number of region-based attribute classifiers on parts of a pre-trained pose estimator without explicitly detecting the object or its parts, or considering the correlations between attributes. In contrast, our approach jointly represents both the object parts and their semantic attributes within a unified compositional hierarchy. We apply our attributed grammar model to the task of human parsing by simultaneously performing part localization and attribute recognition. We show that our model improves performance on the pose-estimation task and also outperforms other existing methods on the attribute prediction task.", "title": "" }, { "docid": "0ff8c4799b62c70ef6b7d70640f1a931", "text": "Using on-chip interconnection networks in place of ad-hoc global wiring structures the top level wires on a chip and facilitates modular design. With this approach, system modules (processors, memories, peripherals, etc...) communicate by sending packets to one another over the network. The structured network wiring gives well-controlled electrical parameters that eliminate timing iterations and enable the use of high-performance circuits to reduce latency and increase bandwidth. The area overhead required to implement an on-chip network is modest; we estimate 6.6%. This paper introduces the concept of on-chip networks, sketches a simple network, and discusses some challenges in the architecture and design of these networks.", "title": "" }, { "docid": "cbfdea54abb1e4c1234ca44ca6913220", "text": "Seeds of chickpea (Cicer arietinum L.) were exposed in batches to static magnetic fields of strength from 0 to 250 mT in steps of 50 mT for 1-4 h in steps of 1 h for all fields. Results showed that magnetic field application enhanced seed performance in terms of laboratory germination, speed of germination, seedling length and seedling dry weight significantly compared to the unexposed control. However, the response varied with field strength and duration of exposure without any particular trend. Among the various combinations of field strength and duration, 50 mT for 2 h, 100 mT for 1 h and 150 mT for 2 h exposures gave the best results. Exposure of seeds to these three magnetic fields improved seed coat membrane integrity as it reduced the electrical conductivity of seed leachate. In soil, seeds exposed to these three treatments produced significantly increased seedling dry weights of 1-month-old plants. The root characteristics of the plants showed dramatic increase in root length, root surface area and root volume.
The improved functional root parameters suggest that magnetically treated chickpea seeds may perform better under rainfed (un-irrigated) conditions where there is a restrictive soil moisture regime.", "title": "" }, { "docid": "8fca64bb24d9adc445fec504ee8efa5a", "text": "In this paper, the permeation properties of three types of liquids into HTV silicone rubber with different Alumina Tri-hydrate (ATH) contents have been investigated by weight gain experiments. The influence of differing exposure conditions on the diffusion into silicone rubber, in particular the effect of solution type, solution concentration, and test temperature were explored. Experimental results indicated that the liquid permeation into silicone rubber followed anomalous diffusion rather than the Fickian diffusion model. Moreover, higher temperature would accelerate the permeation process, and silicone rubber with higher ATH content absorbed more liquids than that with lower ATH content. Furthermore, the material properties of silicone rubber before and after liquid permeation were examined using Fourier infrared spectroscopy (FTIR), thermal gravimetric analysis (TGA) and scanning electron microscopy (SEM), respectively. The permeation mechanisms and process were discussed in depth by combining the weight gain experiment results and the material properties analyses.", "title": "" }, { "docid": "2e510f3f8055b4936aadf502766e3e0d", "text": "Process mining techniques have proven to be a valuable tool for analyzing the execution of business processes. They rely on logs that identify events at an activity level, i.e., most process mining techniques assume that the information system explicitly supports the notion of activities/tasks. This is often not the case and only low-level events are being supported and logged. For example, users may provide different pieces of data which together constitute a single activity. The technique introduced in this paper uses clustering algorithms to derive activity logs from lower-level data modification logs, as produced by virtually every information system. This approach was implemented in the context of the ProM framework and its goal is to widen the scope of processes that can be analyzed using existing process mining techniques.", "title": "" }, { "docid": "ac2009434ea592577cdcdbfb51e3213c", "text": "Pair-wise ranking methods have been widely used in recommender systems to deal with implicit feedback. They attempt to discriminate between a handful of observed items and the large set of unobserved items. In these approaches, however, user preferences and item characteristics cannot be estimated reliably due to overfitting given highly sparse data. To alleviate this problem, in this paper, we propose a novel hierarchical Bayesian framework which incorporates “bag-of-words” type meta-data on items into pair-wise ranking models for one-class collaborative filtering. The main idea of our method lies in extending the pair-wise ranking with a probabilistic topic modeling. Instead of regularizing item factors through a zero-mean Gaussian prior, our method introduces item-specific topic proportions as priors for item factors. As a by-product, interpretable latent factors for users and items may help explain recommendations in some applications.
We conduct an experimental study on a real and publicly available dataset, and the results show that our algorithm is effective in providing accurate recommendations and interpreting user factors and item factors.", "title": "" }, { "docid": "edb7adc3e665aa2126be1849431c9d7f", "text": "This study evaluated the exploitation of unprocessed agricultural discards in the form of fresh vegetable leaves as a diet for the sea urchin Paracentrotus lividus through the assessment of their effects on gonad yield and quality. A stock of wild-caught P. lividus was fed on discarded leaves from three different species (Beta vulgaris, Brassica oleracea, and Lactuca sativa) and the macroalga Ulva lactuca for 3 months under controlled conditions. At the beginning and end of the experiment, total and gonad weight were measured, while gonad and diet total carbon (C%), nitrogen (N%), δ13C, δ15N, carbohydrates, lipids, and proteins were analyzed. The results showed that agricultural discards provided for the maintenance of gonad index and nutritional value (carbohydrate, lipid, and protein content) of initial specimens. L. sativa also improved gonadic color. The results of this study suggest that fresh vegetable discards may be successfully used in the preparation of more balanced diets for sea urchin aquaculture. The use of agricultural discards in prepared diets offers a number of advantages, including an abundant resource, the recycling of discards into new organic matter, and reduced pressure on marine organisms (i.e., macroalgae) in the production of food for cultured organisms.", "title": "" }, { "docid": "03fa5f5f6b6f307fc968a2b543e331a1", "text": "In recent years, several noteworthy large, cross-domain, and openly available knowledge graphs (KGs) have been created. These include DBpedia, Freebase, OpenCyc, Wikidata, and YAGO. Although extensively in use, these KGs have not been subject to an in-depth comparison so far. In this survey, we provide data quality criteria according to which KGs can be analyzed, and we analyze and compare the above-mentioned KGs. Furthermore, we propose a framework for finding the most suitable KG for a given setting.", "title": "" }, { "docid": "6347b642cec08bf062f6e5594f805bd3", "text": "Using a multimethod approach, the authors conducted 4 studies to test life span hypotheses about goal orientations across adulthood. Confirming expectations, in Studies 1 and 2 younger adults reported a primary growth orientation in their goals, whereas older adults reported a stronger orientation toward maintenance and loss prevention. Orientation toward prevention of loss correlated negatively with well-being in younger adults. In older adults, orientation toward maintenance was positively associated with well-being. Studies 3 and 4 extend findings of a self-reported shift in goal orientation to the level of behavioral choice involving cognitive and physical fitness goals. Studies 3 and 4 also examine the role of expected resource demands. The shift in goal orientation is discussed as an adaptive mechanism to manage changing opportunities and constraints across adulthood.", "title": "" }, { "docid": "bb11b0de8915b6f4811cc76dffd6d8b2", "text": "In this work we introduce SnooperTrack, an algorithm for the automatic detection and tracking of text objects — such as store names, traffic signs, license plates, and advertisements — in videos of outdoor scenes. The purpose is to improve the performance of the text detection process in still images by taking advantage of the temporal coherence in videos.
We first propose an efficient tracking algorithm using a particle filtering framework with original region descriptors. The second contribution is our strategy to merge tracked regions and new detections. We also propose an improved version of our previously published text detection algorithm in still images. Tests indicate that SnooperTrack is fast and robust, enables false-positive suppression, and achieves strong performance in complex videos of outdoor scenes.", "title": "" }, { "docid": "ef5cfd6c5eaf48805e39a9eb454aa7b9", "text": "Neural networks are artificial learning systems. For more than two decades, they have helped detect hostile behaviors in a computer system. This review describes those systems and their limits. It defines neural networks and describes their characteristics. It also itemizes neural networks which are used in intrusion detection systems. The state of the art on IDS made from neural networks is reviewed. In this paper, we also make a taxonomy and a comparison of neural network intrusion detection systems. We end this review with a set of remarks and future work that can be done in order to improve the systems that have been presented. This work is the result of a meticulous scan of the literature.", "title": "" }, { "docid": "59e2564e565ead0bc36f9f691f4f70f3", "text": "INTRODUCTION In recent years “big data” has become something of a buzzword in business, computer science, information studies, information systems, statistics, and many other fields. As technology continues to advance, we constantly generate an ever-increasing amount of data. This growth does not differentiate between individuals and businesses, private or public sectors, institutions of learning and commercial entities. It is nigh universal and therefore warrants further study.", "title": "" } ]
scidocsrr
c205d05981a16dc9ba2c9e74a009d8db
Neural Cryptanalysis of Classical Ciphers
[ { "docid": "ff10bbde3ed18eea73375540135f99f4", "text": "Recurrent neural networks (RNNs) represent the state of the art in translation, image captioning, and speech recognition. They are also capable of learning algorithmic tasks such as long addition, copying, and sorting from a set of training examples. We demonstrate that RNNs can learn decryption algorithms – the mappings from plaintext to ciphertext – for three polyalphabetic ciphers (Vigenere, Autokey, and Enigma). Most notably, we demonstrate that an RNN with a 3000-unit Long Short-Term Memory (LSTM) cell can learn the decryption function of the Enigma machine. We argue that our model learns efficient internal representations of these ciphers 1) by exploring activations of individual memory neurons and 2) by comparing memory usage across the three ciphers. To be clear, our work is not aimed at ’cracking’ the Enigma cipher. However, we do show that our model can perform elementary cryptanalysis by running known-plaintext attacks on the Vigenere and Autokey ciphers. Our results indicate that RNNs can learn algorithmic representations of black box polyalphabetic ciphers and that these representations are useful for cryptanalysis.", "title": "" }, { "docid": "f8f1e4f03c6416e9d9500472f5e00dbe", "text": "Template attack is the most common and powerful profiled side channel attack. It relies on a realistic assumption regarding the noise of the device under attack: the probability density function of the data is a multivariate Gaussian distribution. To relax this assumption, a recent line of research has investigated new profiling approaches mainly by applying machine learning techniques. The obtained results are commensurate, and in some particular cases better, compared to template attack. In this work, we propose to continue this recent line of research by applying more sophisticated profiling techniques based on deep learning. Our experimental results confirm the overwhelming advantages of the resulting new attacks when targeting both unprotected and protected cryptographic implementations.", "title": "" } ]
[ { "docid": "2679d251d413adf208cb8b764ce55468", "text": "We compare variations of string comparators based on the Jaro-Winkler comparator and edit distance comparator. We apply the comparators to Census data to see which are better classifiers for matches and nonmatches, first by comparing their classification abilities using a ROC curve based analysis, then by considering a direct comparison between two candidate comparators in record linkage results.", "title": "" }, { "docid": "e0ec22fcdc92abe141aeb3fa67e9e55a", "text": "A mobile wireless infrastructure-less network is a collection of wireless mobile nodes dynamically forming a temporary network without the use of any preexisting network infrastructure or centralized administration. However, the battery life of these nodes is very limited, if their battery power is depleted fully, then this result in network partition, so these nodes becomes a critical spot in the network. These critical nodes can deplete their battery power earlier because of excessive load and processing for data forwarding. These unbalanced loads turn to increase the chances of nodes failure, network partition and reduce the route lifetime and route reliability of the MANETs. Due to this, energy consumption issue becomes a vital research topic in wireless infrastructure -less networks. The energy efficient routing is a most important design criterion for MANETs. This paper focuses of the routing approaches are based on the minimization of energy consum ption of individual nodes and many other ways. This paper surveys and classifies numerous energy-efficient routing mechanisms proposed for wireless infrastructure-less networks. Also presents detailed comparative study of lager number of energy efficient/power aware routing protocol in MANETs. Aim of this paper to helps the new researchers and application developers to explore an innovative idea for designing more efficient routing protocols. Keywords— Ad hoc Network Routing, Load Distribution, Energy Eff icient, Power Aware, Protocol Stack", "title": "" }, { "docid": "1ee1adcfd73e9685eab4e2abd28183c7", "text": "We describe an algorithm for generating spherical mosaics from a collection of images acquired from a common optical center. The algorithm takes as input an arbitrary number of partially overlapping images, an adjacency map relating the images, initial estimates of the rotations relating each image to a specified base image, and approximate internal calibration information for the camera. The algorithm's output is a rotation relating each image to the base image, and revised estimates of the camera's internal parameters. Our algorithm is novel in the following respects. First, it requires no user input. (Our image capture instrumentation provides both an adjacency map for the mosaic, and an initial rotation estimate for each image.) Second, it optimizes an objective function based on a global correlation of overlapping image regions. Third, our representation of rotations significantly increases the accuracy of the optimization. Finally, our representation and use of adjacency information guarantees globally consistent rotation estimates. The algorithm has proved effective on a collection of nearly four thousand images acquired from more than eighty distinct optical centers. 
The experimental results demonstrate that the described global optimization strategy is superior to non-global aggregation of pair-wise correlation terms, and that it successfully generates high-quality mosaics despite significant error in initial rotation estimates.", "title": "" }, { "docid": "1e31afb6d28b0489e67bb63d4dd60204", "text": "An educational use of Pepper, a personal robot that was developed by SoftBank Robotics Corp. and Aldebaran Robotics SAS, is described. Applying the two concepts of care-receiving robot (CRR) and total physical response (TPR) to the design of an educational application using Pepper, we offer a scenario in which children learn together with Pepper in their home environments from a human teacher who gives a lesson from a remote classroom. This paper is a case report that explains the developmental process of the application that contains three educational programs that children can select in interacting with Pepper. Feedback and knowledge obtained from test trials are also described.", "title": "" }, { "docid": "a112a01246256e38b563f616baf02cef", "text": "This is the second of two papers describing a procedure for the three-dimensional nonlinear time-history analysis of steel framed buildings. An overview of the procedure and the theory for the panel zone element and the plastic hinge beam element are presented in Part I. In this paper, the theory for an efficient new element for modeling beams and columns in steel frames called the elastofiber element is presented, along with four illustrative examples. The elastofiber beam element is divided into three segments: two end nonlinear segments and an interior elastic segment. The cross-sections of the end segments are subdivided into fibers. Associated with each fiber is a nonlinear hysteretic stress-strain law for axial stress and strain. This accounts for coupling of nonlinear material behavior between bending about the major and minor axes of the cross-section and axial deformation. Examples presented include large deflection of an elastic cantilever, cyclic loading of a cantilever beam, pushover analysis of a 20-story steel moment-frame building to collapse, and strong ground motion analysis of a 2-story unsymmetric steel moment-frame building.", "title": "" }, { "docid": "429c6591223007b40ef7bffc5d9ac4db", "text": "A compact dual-polarized double E-shaped patch antenna with high isolation for pico base station applications is presented in this communication. The proposed antenna employs a stacked configuration composed of two layers of substrate. Two modified E-shaped patches are printed orthogonally on both sides of the upper substrate. Two probes are used to excite the E-shaped patches, and each probe is connected to one patch separately. A circular patch is printed on the lower substrate to broaden the impedance bandwidth. Both simulated and measured results show that the proposed antenna has a port isolation higher than 30 dB over the frequency band of 2.5 GHz - 2.7 GHz, while the return loss is less than -15 dB within the band.
Moreover, stable radiation pattern with a peak gain of 6.8 dBi - 7.4 dBi is obtained within the band.", "title": "" }, { "docid": "7adf46bb0a4ba677e58aee9968d06293", "text": "BACKGROUND\nWork-family conflict is a type of interrole conflict that occurs as a result of incompatible role pressures from the work and family domains. Work role characteristics that are associated with work demands refer to pressures arising from excessive workload and time pressures. Literature suggests that work demands such as number of hours worked, workload, shift work are positively associated with work-family conflict, which, in turn is related to poor mental health and negative organizational attitudes. The role of social support has been an issue of debate in the literature. This study examined social support both as a moderator and a main effect in the relationship among work demands, work-to-family conflict, and satisfaction with job and life.\n\n\nOBJECTIVES\nThis study examined the extent to which work demands (i.e., work overload, irregular work schedules, long hours of work, and overtime work) were related to work-to-family conflict as well as life and job satisfaction of nurses in Turkey. The role of supervisory support in the relationship among work demands, work-to-family conflict, and satisfaction with job and life was also investigated.\n\n\nDESIGN AND METHODS\nThe sample was comprised of 243 participants: 106 academic nurses (43.6%) and 137 clinical nurses (56.4%). All of the respondents were female. The research instrument was a questionnaire comprising nine parts. The variables were measured under four categories: work demands, work support (i.e., supervisory support), work-to-family conflict and its outcomes (i.e., life and job satisfaction).\n\n\nRESULTS\nThe structural equation modeling results showed that work overload and irregular work schedules were the significant predictors of work-to-family conflict and that work-to-family conflict was associated with lower job and life satisfaction. Moderated multiple regression analyses showed that social support from the supervisor did not moderate the relationships among work demands, work-to-family conflict, and satisfaction with job and life. Exploratory analyses suggested that social support could be best conceptualized as the main effect directly influencing work-to-family conflict and job satisfaction.\n\n\nCONCLUSION\nNurses' psychological well-being and organizational attitudes could be enhanced by rearranging work conditions to reduce excessive workload and irregular work schedule. Also, leadership development programs should be implemented to increase the instrumental and emotional support of the supervisors.", "title": "" }, { "docid": "97f748ee5667ee8c2230e07881574c22", "text": "The most widely used signal in clinical practice is the ECG. ECG conveys information regarding the electrical function of the heart, by altering the shape of its constituent waves, namely the P, QRS, and T waves. Thus, the required tasks of ECG processing are the reliable recognition of these waves, and the accurate measurement of clinically important parameters measured from the temporal distribution of the ECG constituent waves. In this paper, we shall review some current trends on ECG pattern recognition. 
In particular, we shall review non-linear transformations of the ECG, the use of principal component analysis (linear and non-linear), ways to map the transformed data into n-dimensional spaces, and the use of neural networks (NN) based techniques for ECG pattern recognition and classification. The problems we shall deal with are the QRS/PVC recognition and classification, the recognition of ischemic beats and episodes, and the detection of atrial fibrillation. Finally, a generalised approach to the classification problems in n-dimensional spaces will be presented using among others NN, radial basis function networks (RBFN) and non-linear principal component analysis (NLPCA) techniques. The performance measures of the sensitivity and specificity of these algorithms will also be presented using as training and testing data sets from the MIT-BIH and the European ST-T databases.", "title": "" }, { "docid": "f9468884fd24ff36b81fc2016a519634", "text": "We study a new variant of Arikan's successive cancellation decoder (SCD) for polar codes. We first propose a new decoding algorithm on a new decoder graph, where the various stages of the graph are permuted. We then observe that, even though the usage of the permuted graph doesn't affect the encoder, it can significantly affect the decoding performance of a given polar code. The new permuted successive cancellation decoder (PSCD) typically exhibits a performance degradation, since the polar code is optimized for the standard SCD. We then present a new polar code construction rule matched to the PSCD and show their performance in simulations. For all rates we observe that the polar code matched to a given PSCD performs the same as the original polar code with the standard SCD. We also see that a PSCD with a reversal permutation can lead to a natural decoding order, avoiding the standard bit-reversal decoding order in SCD without any loss in performance.", "title": "" }, { "docid": "101af3fab1f8abb4e2b75a067031048a", "text": "Although research on trust in an organizational context has advanced considerably in recent years, the literature has yet to produce a set of generalizable propositions that inform our understanding of the organization and coordination of work. We propose that conceptualizing trust as an organizing principle is a powerful way of integrating the diverse trust literature and distilling generalizable implications for how trust affects organizing. We develop the notion of trust as an organizing principle by specifying structuring and mobilizing as two sets of causal pathways through which trust influences several important properties of organizations. We further describe specific mechanisms within structuring and mobilizing that influence interaction patterns and organizational processes. The principal aim of the framework is to advance the literature by connecting the psychological and sociological micro-foundations of trust with the macro-bases of organizing. The paper concludes by demonstrating how the framework can be applied to yield novel insights into traditional views of organizations and to stimulate original and innovative avenues of organizational research that consider both the benefits and downsides of trust. (Trust; Organizing Principle; Structuring; Mobilizing) Introduction In the introduction to this special issue we observed that empirical research on trust was not keeping pace with theoretical developments in the field. 
We viewed this as a significant limitation and surmised that a special issue devoted to empirical research on trust would serve as a valuable vehicle for advancing the literature. In addition to the lack of empirical research, we would also make the observation that theories and evidence accumulating on trust in organizations is not well integrated and that the literature as a whole lacks coherence. At a general level, extant research provides “accumulating evidence that trust has a number of important benefits for organizations and their members” (Kramer 1999, p. 569). More specifically, Dirks and Ferrin’s (2001) review of the literature points to two distinct means through which trust generates these benefits. The dominant approach emphasizes the direct effects that trust has on important organizational phenomena such as: communication, conflict management, negotiation processes, satisfaction, and performance (both individual and unit). A second, less well studied, perspective points to the enabling effects of trust, whereby trust creates or enhances the conditions, such as positive interpretations of another’s behavior, that are conducive to obtaining organizational outcomes like cooperation and higher performance. The identification of these two perspectives provides a useful way of organizing the literature and generating insight into the mechanisms through which trust influences organizational outcomes. However, we are still left with a set of findings that have yet to be integrated on a theoretical level in a way that yields a set of generalizable propositions about the effects of trust on organizing. We believe this is due to the fact that research has, for the most part, embedded trust into existing theories. As a result, trust has been studied in a variety of different ways to address a wide range of organizational questions. This has yielded a diverse and eclectic body of knowledge about the relationship between trust and various organizational outcomes. At the same time, this approach has resulted in a somewhat fragmented view of the role of trust in an organizational context as a whole. In the remainder of this paper we begin to address the challenge of integrating the fragmented trust literature. While it is not feasible to develop a comprehensive framework that synthesizes the vast and diverse trust literature in a single paper, we draw together several key strands that relate to the organizational context. In particular, our paper aims to advance the literature by connecting the psychological and sociological microfoundations of trust with the macro-bases of organizing. BILL MCEVILY, VINCENZO PERRONE, AND AKBAR ZAHEER Trust as an Organizing Principle 92 ORGANIZATION SCIENCE/Vol. 14, No. 1, January–February 2003 Specifically, we propose that reconceptualizing trust as an organizing principle is a fruitful way of viewing the role of trust and comprehending how research on trust advances our understanding of the organization and coordination of economic activity. While it is our goal to generate a framework that coalesces our thinking about the processes through which trust, as an organizing principle, affects organizational life, we are not Pollyannish: trust indubitably has a down side, which has been little researched. We begin by elaborating on the notion of an organizing principle and then move on to conceptualize trust from this perspective. Next, we describe a set of generalizable causal pathways through which trust affects organizing. 
We then use that framework to identify some exemplars of possible research questions and to point to possible downsides of trust. Organizing Principles As Ouchi (1980) discusses, a fundamental purpose of organizations is to attain goals that require coordinated efforts. Interdependence and uncertainty make goal attainment more difficult and create the need for organizational solutions. The subdivision of work implies that actors must exchange information and rely on others to accomplish organizational goals without having complete control over, or being able to fully monitor, others’ behaviors. Coordinating actions is further complicated by the fact that actors cannot assume that their interests and goals are perfectly aligned. Consequently, relying on others is difficult when there is uncertainty about their intentions, motives, and competencies. Managing interdependence among individuals, units, and activities in the face of behavioral uncertainty constitutes a key organizational challenge. Organizing principles represent a way of solving the problem of interdependence and uncertainty. An organizing principle is the logic by which work is coordinated and information is gathered, disseminated, and processed within and between organizations (Zander and Kogut 1995). An organizing principle represents a heuristic for how actors interpret and represent information and how they select appropriate behaviors and routines for coordinating actions. Examples of organizing principles include: market, hierarchy, and clan (Ouchi 1980). Other have referred to these organizing principles as authority, price, and norms (Adler 2001, Bradach and Eccles 1989, Powell 1990). Each of these principles operates on the basis of distinct mechanisms that orient, enable, and constrain economic behavior. For instance, authority as an organizing principle solves the problem of coordinating action in the face of interdependence and uncertainty by reallocating decision-making rights (Simon 1957, Coleman 1990). Price-based organizing principles revolve around the idea of making coordination advantageous for each party involved by aligning incentives (Hayek 1948, Alchian and Demsetz 1972). Compliance to internalized norms and the resulting self-control of the clan form is another organizing principle that has been identified as a means of achieving coordinated action (Ouchi 1980). We propose that trust is also an organizing principle and that conceptualizing trust in this way provides a powerful means of integrating the disparate research on trust and distilling generalizable implications for how trust affects organizing. We view trust as most closely related to the clan organizing principle. By definition clans rely on trust (Ouchi 1980). However, trust can and does occur in organizational contexts outside of clans. For instance, there are a variety of organizational arrangements where cooperation in mixed-motive situations depends on trust, such as in repeated strategic alliances (Gulati 1995), buyer-supplier relationships (Dyer and Chu this issue), and temporary groups in organizations (Meyerson et al. 1996). More generally, we believe that trust frequently operates in conjunction with other organizing principles. For instance, Dirks (2000) found that while authority is important for behaviors that can be observed or controlled, trust is important when there exists performance ambiguity or behaviors that cannot be observed or controlled. 
Because most organizations have a combination of behaviors that can and cannot be observed or controlled, authority and trust co-occur. More generally, we believe that mixed or plural forms are the norm, consistent with Bradach and Eccles (1989). In some situations, however, trust may be the primary organizing principle, such as when monitoring and formal controls are difficult and costly to use. In these cases, trust represents an efficient choice. In other situations, trust may be relied upon due to social, rather than efficiency, considerations. For instance, achieving a sense of personal belonging within a collectivity (Podolny and Barron 1997) and the desire to develop and maintain rewarding social attachments (Granovetter 1985) may serve as the impetus for relying on trust as an organizing principle. Trust as an Organizing Principle At a general level trust is the willingness to accept vulnerability based on positive expectations about another’s intentions or behaviors (Mayer et al. 1995, Rousseau et al. 1998). Because trust represents a positive assumption BILL MCEVILY, VINCENZO PERRONE, AND AKBAR ZAHEER Trust as an Organizing Principle ORGANIZATION SCIENCE/Vol. 14, No. 1, January–February 2003 93 about the motives and intentions of another party, it allows people to economize on information processing and safeguarding behaviors. By representing an expectation that others will act in a way that serves, or at least is not inimical to, one’s interests (Gambetta 1988), trust as a heuristic is a frame of reference that al", "title": "" }, { "docid": "13897df01d4c03191dd015a04c3a5394", "text": "Medical or Health related search queries constitute a significant portion of the total number of queries searched everyday on the web. For health queries, the authenticity or authoritativeness of search results is of utmost importance besides relevance. So far, research in automatic detection of authoritative sources on the web has mainly focused on a) link structure based approaches and b) supervised approaches for predicting trustworthiness. However, the aforementioned approaches have some inherent limitations. For example, several content farm and low quality sites artificially boost their link-based authority rankings by forming a syndicate of highly interlinked domains and content which is algorithmically hard to detect. Moreover, the number of positively labeled training samples available for learning trustworthiness is also limited when compared to the size of the web. In this paper, we propose a novel unsupervised approach to detect and promote authoritative domains in health segment using click-through data. We argue that standard IR metrics such as NDCG are relevance-centric and hence are not suitable for evaluating authority. We propose a new authority-centric evaluation metric based on side-by-side judgment of results. Using real world search query sets, we evaluate our approach both quantitatively and qualitatively and show that it succeeds in significantly improving the authoritativeness of results when compared to a standard web ranking baseline. ∗Corresponding Author", "title": "" }, { "docid": "07570935aad8a481ea5e9d422c4f80ca", "text": "Continuous modification of the protein composition at synapses is a driving force for the plastic changes of synaptic strength, and provides the fundamental molecular mechanism of synaptic plasticity and information storage in the brain. 
Studying synaptic protein turnover is not only important for understanding learning and memory, but also has direct implication for understanding pathological conditions like aging, neurodegenerative diseases, and psychiatric disorders. Proteins involved in synaptic transmission and synaptic plasticity are typically concentrated at synapses of neurons and thus appear as puncta (clusters) in immunofluorescence microscopy images. Quantitative measurement of the changes in puncta density, intensity, and sizes of specific proteins provide valuable information on their function in synaptic transmission, circuit development, synaptic plasticity, and synaptopathy. Unfortunately, puncta quantification is very labor intensive and time consuming. In this article, we describe a software tool designed for the rapid semi-automatic detection and quantification of synaptic protein puncta from 2D immunofluorescence images generated by confocal laser scanning microscopy. The software, dubbed as SynPAnal (for Synaptic Puncta Analysis), streamlines data quantification for puncta density and average intensity, thereby increases data analysis throughput compared to a manual method. SynPAnal is stand-alone software written using the JAVA programming language, and thus is portable and platform-free.", "title": "" }, { "docid": "b4f82364c5c4900058f50325ccc9e4c4", "text": "OBJECTIVE\nThis study reports the psychometric properties of the 24-item version of the Diabetes Knowledge Questionnaire (DKQ).\n\n\nRESEARCH DESIGN AND METHODS\nThe original 60-item DKQ was administered to 502 adult Mexican-Americans with type 2 diabetes who are part of the Starr County Diabetes Education Study. The sample was composed of 252 participants and 250 support partners. The subjects were randomly assigned to the educational and social support intervention (n = 250) or to the wait-listed control group (n = 252). A shortened 24-item version of the DKQ was derived from the original instrument after data collection was completed. Reliability was assessed by means of Cronbach's coefficient alpha. To determine validity, differentiation between the experimental and control groups was conducted at baseline and after the educational portion of the intervention.\n\n\nRESULTS\nThe 24-item version of the DKQ (DKQ-24) attained a reliability coefficient of 0.78, indicating internal consistency, and showed sensitivity to the intervention, suggesting construct validation.\n\n\nCONCLUSIONS\nThe DKQ-24 is a reliable and valid measure of diabetes-related knowledge that is relatively easy to administer to either English or Spanish speakers.", "title": "" }, { "docid": "8b2b8eb2d16b28dac8ec8d4572b8db0e", "text": "Combining meaning, memory, and development, the perennially popular topic of intuition can be approached in a new way. Fuzzy-trace theory integrates these topics by distinguishing between meaning-based gist representations, which support fuzzy (yet advanced) intuition, and superficial verbatim representations of information, which support precise analysis. Here, I review the counterintuitive findings that led to the development of the theory and its most recent extensions to the neuroscience of risky decision making. 
These findings include memory interference (worse verbatim memory is associated with better reasoning); nonnumerical framing (framing effects increase when numbers are deleted from decision problems); developmental decreases in gray matter and increases in brain connectivity; developmental reversals in memory, judgment, and decision making (heuristics and biases based on gist increase from childhood to adulthood, challenging conceptions of rationality); and selective attention effects that provide critical tests comparing fuzzy-trace theory, expected utility theory, and its variants (e.g., prospect theory). Surprising implications for judgment and decision making in real life are also discussed, notably, that adaptive decision making relies mainly on gist-based intuition in law, medicine, and public health.", "title": "" }, { "docid": "fb58d6fe77092be4bce5dd0926c563de", "text": "We present the Mind the Gap Model (MGM), an approach for interpretable feature extraction and selection. By placing interpretability criteria directly into the model, we allow for the model to both optimize parameters related to interpretability and to directly report a global set of distinguishable dimensions to assist with further data exploration and hypothesis generation. MGM extracts distinguishing features on real-world datasets of animal features, recipes ingredients, and disease co-occurrence. It also maintains or improves performance when compared to related approaches. We perform a user study with domain experts to show the MGM’s ability to help with dataset exploration.", "title": "" }, { "docid": "6c221c4085c6868640c236b4dd72f777", "text": "Resilience has been most frequently defined as positive adaptation despite adversity. Over the past 40 years, resilience research has gone through several stages. From an initial focus on the invulnerable or invincible child, psychologists began to recognize that much of what seems to promote resilience originates outside of the individual. This led to a search for resilience factors at the individual, family, community - and, most recently, cultural - levels. In addition to the effects that community and culture have on resilience in individuals, there is growing interest in resilience as a feature of entire communities and cultural groups. Contemporary researchers have found that resilience factors vary in different risk contexts and this has contributed to the notion that resilience is a process. In order to characterize the resilience process in a particular context, it is necessary to identify and measure the risk involved and, in this regard, perceived discrimination and historical trauma are part of the context in many Aboriginal communities. Researchers also seek to understand how particular protective factors interact with risk factors and with other protective factors to support relative resistance. For this purpose they have developed resilience models of three main types: \"compensatory,\" \"protective,\" and \"challenge\" models. 
Two additional concepts are resilient reintegration, in which a confrontation with adversity leads individuals to a new level of growth, and the notion endorsed by some Aboriginal educators that resilience is an innate quality that needs only to be properly awakened.The review suggests five areas for future research with an emphasis on youth: 1) studies to improve understanding of what makes some Aboriginal youth respond positively to risk and adversity and others not; 2) case studies providing empirical confirmation of the theory of resilient reintegration among Aboriginal youth; 3) more comparative studies on the role of culture as a resource for resilience; 4) studies to improve understanding of how Aboriginal youth, especially urban youth, who do not live in self-governed communities with strong cultural continuity can be helped to become, or remain, resilient; and 5) greater involvement of Aboriginal researchers who can bring a nonlinear world view to resilience research.", "title": "" }, { "docid": "4c4bfcadd71890ccce9e58d88091f6b3", "text": "With the dramatic growth of the game industry over the past decade, its rapid inclusion in many sectors of today’s society, and the increased complexity of games, game development has reached a point where it is no longer humanly possible to use only manual techniques to create games. Large parts of games need to be designed, built, and tested automatically. In recent years, researchers have delved into artificial intelligence techniques to support, assist, and even drive game development. Such techniques include procedural content generation, automated narration, player modelling and adaptation, and automated game design. This research is still very young, but already the games industry is taking small steps to integrate some of these techniques in their approach to design. The goal of this seminar was to bring together researchers and industry representatives who work at the forefront of artificial intelligence (AI) and computational intelligence (CI) in games, to (1) explore and extend the possibilities of AI-driven game design, (2) to identify the most viable applications of AI-driven game design in the game industry, and (3) to investigate new approaches to AI-driven game design. To this end, the seminar included a wide range of researchers and developers, including specialists in AI/CI for abstract games, commercial video games, and serious games. Thus, it fostered a better understanding of and unified vision on AI-driven game design, using input from both scientists as well as AI specialists from industry. Seminar November 19–24, 2017 – http://www.dagstuhl.de/17471 1998 ACM Subject Classification I.2.1 Artificial Intelligence Games", "title": "" }, { "docid": "da61b8bd6c1951b109399629f47dad16", "text": "In this paper, we introduce an approach for distributed nonlinear control of multiple hovercraft-type underactuated vehicles with bounded and unidirectional inputs. First, a bounded nonlinear controller is given for stabilization and tracking of a single vehicle, using a cascade backstepping method. Then, this controller is combined with a distributed gradient-based control for multi-vehicle formation stabilization using formation potential functions previously constructed. The vehicles are used in the Caltech Multi-Vehicle Wireless Testbed (MVWT). 
We provide simulation and experimental results for stabilization and tracking of a single vehicle, and a simulation of stabilization of a six-vehicle formation, demonstrating that in all cases the control bounds and the control objective are satisfied.", "title": "" }, { "docid": "48b88774957a6d30ae9d0a97b9643647", "text": "The defect detection on manufactures is extremely important in the optimization of industrial processes; particularly, the visual inspection plays a fundamental role. The visual inspection is often carried out by a human expert. However, new technology features have made this inspection unreliable. For this reason, many researchers have been engaged to develop automatic analysis processes of manufactures and automatic optical inspections in the industrial production of printed circuit boards. Among the defects that could arise in this industrial process, those of the solder joints are very important, because they can lead to an incorrect functioning of the board; moreover, the amount of the solder paste can give some information on the quality of the industrial process. In this paper, a neural network-based automatic optical inspection system for the diagnosis of solder joint defects on printed circuit boards assembled in surface mounting technology is presented. The diagnosis is handled as a pattern recognition problem with a neural network approach. Five types of solder joints have been classified in respect to the amount of solder paste in order to perform the diagnosis with a high recognition rate and a detailed classification able to give information on the quality of the manufacturing process. The images of the boards under test are acquired and then preprocessed to extract the region of interest for the diagnosis. Three types of feature vectors are evaluated from each region of interest, which are the images of the solder joints under test, by exploiting the properties of the wavelet transform and the geometrical characteristics of the preprocessed images. The performances of three different classifiers which are a multilayer perceptron, a linear vector quantization, and a K-nearest neighbor classifier are compared. The n-fold cross-validation has been exploited to select the best architecture for the neural classifiers, while a number of experiments have been devoted to estimating the best value of K in the K-NN. The results have proved that the MLP network fed with the GW-features has the best recognition rate. This approach allows to carry out the diagnosis burden on image processing, feature extraction, and classification algorithms, reducing the cost and the complexity of the acquisition system. In fact, the experimental results suggest that the reason for the high recognition rate in the solder joint classification is due to the proper preprocessing steps followed as well as to the information contents of the features", "title": "" }, { "docid": "80a4de6098a4821e52ccc760db2aae18", "text": "This article presents P-Sense, a participatory sensing application for air pollution monitoring and control. The paper describes in detail the system architecture and individual components of a successfully implemented application. In addition, the paper points out several other research-oriented problems that need to be addressed before these applications can be effectively implemented in practice, in a large-scale deployment. Security, privacy, data visualization and validation, and incentives are part of our work-in-progress activities", "title": "" } ]
scidocsrr
5e7c2be0d66e726a1d4bd7d249df0187
Psychopathic Personality: Bridging the Gap Between Scientific Evidence and Public Policy.
[ { "docid": "32b5458ced294a01654f3747273db08d", "text": "Prior studies of childhood aggression have demonstrated that, as a group, boys are more aggressive than girls. We hypothesized that this finding reflects a lack of research on forms of aggression that are relevant to young females rather than an actual gender difference in levels of overall aggressiveness. In the present study, a form of aggression hypothesized to be typical of girls, relational aggression, was assessed with a peer nomination instrument for a sample of 491 third-through sixth-grade children. Overt aggression (i.e., physical and verbal aggression as assessed in past research) and social-psychological adjustment were also assessed. Results provide evidence for the validity and distinctiveness of relational aggression. Further, they indicated that, as predicted, girls were significantly more relationally aggressive than were boys. Results also indicated that relationally aggressive children may be at risk for serious adjustment difficulties (e.g., they were significantly more rejected and reported significantly higher levels of loneliness, depression, and isolation relative to their nonrelationally aggressive peers).", "title": "" } ]
[ { "docid": "d364aaa161cc92e28697988012c35c2a", "text": "Many people believe that information that is stored in long-term memory is permanent, citing examples of \"retrieval techniques\" that are alleged to uncover previously forgotten information. Such techniques include hypnosis, psychoanalytic procedures, methods for eliciting spontaneous and other conscious recoveries, and—perhaps most important—the electrical stimulation of the brain reported by Wilder Penfield and his associates. In this article we first evaluate • the evidence and conclude that, contrary to apparent popular belief, the evidence in no way confirms the view that all memories are permanent and thus potentially recoverable. We then describe some failures that resulted from attempts to elicit retrieval of previously stored information and conjecture what circumstances might cause information stored in memory to be irrevocably destroyed. Few would deny the existence of a phenomenon called \"forgetting,\" which is evident in the common observation that information becomes less available as the interval increases between the time of the information's initial acquisition and the time of its attempted retrieval. Despite the prevalence of the phenomenon, the factors that underlie forgetting have proved to be rather elusive, and the literature abounds with hypothesized mechanisms to account for the observed data. In this article we shall focus our attention on what is perhaps the fundamental issue concerning forgetting; Does forgetting consist of an actual loss of stored information, or does it result from a loss of access to information, which, once stored, remains forever? It should be noted at the outset that this question may be impossible to resolve in an absolute sense. Consider the following thought experiment. A person (call him Geoffrey) observes some event, say a traffic accident. During the period of observation, a movie camera strapped to Geoffrey's head records the event as Geoffrey experiences it. Some time later, Geoffrey attempts to recall and Vol. 35, No. S, 409-420 describe the event with the aid of some retrieval technique (e.g., hypnosis or brain stimulation), which is alleged to allow recovery of any information stored in his brain. While Geoffrey describes the event, a second person (Elizabeth) watches the movie that has been made of the event. Suppose, now, that Elizabeth is unable to decide whether Geoffrey is describing his memory or the movie—in other words, memory and movie are indistinguishable. Such a finding would constitute rather impressive support for the position held by many people that the mind registers an accurate representation of reality and that this information is stored permanently. But suppose, on the other hand, that Geoffrey's report—even with the aid of the miraculous retrieval technique—is incomplete, sketchy, and inaccurate, and furthermore, suppose that the accuracy of his report deteriorates over time. Such a finding, though consistent with the view that forgetting consists of information loss, would still be inconclusive, because it could be argued that the retrieval technique—no matter what it was— was simply not good enough to disgorge the information, which remained buried somewhere in the recesses of Geoffrey's brain. Thus, the question of information loss versus This article was written while E. Loftus was a fellow at the Center for Advanced Study in the Behavioral Sciences, Stanford, California, and G. 
Loftus was a visiting scholar in the Department of Psychology at Stanford University. James Fries generously picked apart an earlier version of this article. Paul Baltes translated the writings of Johann Nicolas Tetens (177?). The following financial sources are gratefully acknowledged: (a) National Science Foundation (NSF) Grant BNS 76-2337 to G. Loftus; (b) 'NSF Grant ENS 7726856 to E. Loftus; and (c) NSF Grant BNS 76-22943 and an Andrew Mellon Foundation grant to the Center for Advanced Study in the Behavioral Sciences. Requests for reprints should be sent to Elizabeth Loftus, Department of Psychology, University of Washington, Seattle, Washington 98195. AMERICAN PSYCHOLOGIST • MAY 1980 * 409 Copyright 1980 by the American Psychological Association, Inc. 0003-066X/80/3505-0409$00.75 retrieval failure may be unanswerable in principle. Nonetheless it often becomes necessary to choose sides. In the scientific arena, for example, a theorist constructing a model of memory may— depending on the details of the model'—be forced to adopt one position or the other. In fact, several leading theorists have suggested that although loss from short-term memory does occur, once material is registered in long-term memory, the information is never lost from the system, although it may normally be inaccessible (Shiffrin & Atkinson, 1969; Tulving, 1974). The idea is not new, however. Two hundred years earlier, the German philosopher Johann Nicolas Tetens (1777) wrote: \"Each idea does not only leave a trace or a consequent of that trace somewhere in the body, but each of them can be stimulated—-even if it is not possible to demonstrate this in a given situation\" (p, 7S1). He was explicit about his belief that certain ideas may seem to be forgotten, but that actually they are only enveloped by other ideas and, in truth, are \"always with us\" (p, 733). Apart from theoretical interest, the position one takes on the permanence of memory traces has important practical consequences. It therefore makes sense to air the issue from time to time, which is what we shall do here, The purpose of this paper is threefold. We shall first report some data bearing on people's beliefs about the question of information loss versus retrieval failure. To anticipate our findings, our survey revealed that a substantial number of the individuals queried take the position that stored information is permanent'—-or in other words, that all forgetting results from retrieval failure. In support of their answers, people typically cited data from some variant of the thought experiment described above, that is, they described currently available retrieval techniques that are alleged to uncover previously forgotten information. Such techniques include hypnosis, psychoanalytic procedures (e.g., free association), and— most important—the electrical stimulation of the brain reported by Wilder Penfield and his associates (Penfield, 1969; Penfield & Perot, 1963; Penfield & Roberts, 1959). The results of our survey lead to the second purpose of this paper, which is to evaluate this evidence. Finally, we shall describe some interesting failures that have resulted from attempts to elicit retrieval of previously stored information. These failures lend support to the contrary view that some memories are apparently modifiable, and that consequently they are probably unrecoverable. Beliefs About Memory In an informal survey, 169 individuals from various parts of the U.S. were asked to give their views about how memory works. 
Of these, 75 had formal graduate training in psychology, while the remaining 94 did not. The nonpsychologists had varied occupations. For example, lawyers, secretaries, taxicab drivers, physicians, philosophers, fire investigators, and even an 11-year-old child participated. They were given this question: Which of these statements best reflects your view on how human memory works? 1. Everything we learn is permanently stored in the mind, although sometimes particular details are not accessible. With hypnosis, or other special techniques, these inaccessible details could eventually be recovered. 2. Some details that we learn may be permanently lost from memory. Such details would never be» able to be recovered by hypnosis, or any other special technique, because these details are simply no longer there. Please elaborate briefly or give any reasons you may have for your view. We found that 84% of the psychologists chose Position 1, that is, they indicated a belief that all information in long-term memory is there, even though much of it cannot be retrieved; 14% chose Position 2, and 2% gave some other answer. A somewhat smaller percentage, 69%, of the nonpsychologists indicated a belief in Position 1; 23% chose Position 2, while 8% did not make a clear choice. What reasons did people give for their belief? The most common reason for choosing Position 1 was based on personal experience and involved the occasional recovery of an idea that the person had not thought about for quite some time. For example, one person wrote: \"I've experienced and heard too many descriptions of spontaneous recoveries of ostensibly quite trivial memories, which seem to have been triggered by just the right set of a person's experiences.\" A second reason for a belief in Position 1, commonly given by persons trained in psychology, was knowledge of the work of Wilder Penfield. One psychologist wrote: \"Even though Statement 1 is untestable, I think that evidence, weak though it is, such as Penfield's work, strongly suggests it may be correct.\" Occasionally respondents offered a comment about 410 • MAY 1980 • AMERICAN PSYCHOLOGIST hypnosis, and more rarely about psychoanalysis and repression, sodium pentothal, or even reincarnation, to support their belief in the permanence of memory. Admittedly, the survey was informally conducted, the respondents were not selected randomly, and the question itself may have pressured people to take sides when their true belief may have been a position in between. Nevertheless, the results suggest a widespread belief in the permanence of memories and give us some idea of the reasons people offer in support of this belief.", "title": "" }, { "docid": "702df543119d648be859233bfa2b5d03", "text": "We review more than 200 applications of neural networks in image processing and discuss the present and possible future role of neural networks, especially feed-forward neural networks, Kohonen feature maps and Hop1eld neural networks. The various applications are categorised into a novel two-dimensional taxonomy for image processing algorithms. One dimension speci1es the type of task performed by the algorithm: preprocessing, data reduction=feature extraction, segmentation, object recognition, image understanding and optimisation. 
The other dimension captures the abstraction level of the input data processed by the algorithm: pixel-level, local feature-level, structure-level, object-level, object-set-level and scene characterisation. Each of the six types of tasks poses specific constraints to a neural-based approach. These specific conditions are discussed in detail. A synthesis is made of unresolved problems related to the application of pattern recognition techniques in image processing and specifically to the application of neural networks. Finally, we present an outlook into the future application of neural networks and relate them to novel developments. © 2002 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "ca807d3bed994a8e7492898e6bfe6dd2", "text": "This paper proposes a state-of-charge (SOC) and remaining charge estimation algorithm for each cell in series-connected lithium-ion batteries. SOC and remaining charge information are indicators for diagnosing cell-to-cell variation; thus, the proposed algorithm can be applied to SOC- or charge-based balancing in a cell balancing controller. Compared to voltage-based balancing, SOC and remaining charge information improve the performance of the balancing circuit but increase computational complexity, which is a stumbling block in implementation. In this work, a simple current sensor-less SOC estimation algorithm with an estimated current equalizer is used to achieve the aforementioned objective. To check the characteristics and validate the feasibility of the proposed method, a constant current discharging/charging profile is applied to a series-connected battery pack (twelve 2.6Ah Li-ion batteries). The experimental results show its applicability to a SOC- and remaining charge-based balancing controller with high estimation accuracy.", "title": "" }, { "docid": "1bf43801d05551f376464d08893b211c", "text": "A large amount of digital text information is generated every day. Effectively searching, managing and exploring the text data has become a main task. In this paper, we first present an introduction to text mining and the probabilistic topic model Latent Dirichlet allocation. Then two experiments are proposed: Wikipedia articles and users' tweets topic modelling. The former one builds up a document topic model, aiming at a topic perspective solution for searching, exploring and recommending articles. The latter one sets up a user topic model, providing a full research and analysis over Twitter users' interest. The experiment process, including data collecting, data pre-processing and model training, is fully documented and commented. Furthermore, the conclusion and application of this paper could be a useful computation tool for social and business research.", "title": "" }, { "docid": "e85e8b54351247d5f20bf1756a133a08", "text": "In a high speed ADC, the comparator influences the overall performance of the ADC directly. This paper describes a very high speed and high resolution preamplifier comparator. The comparator uses a self-biased differential amp to increase the output current sinking and sourcing capability. The threshold and width of the new comparator can be reduced to the millivolt (mV) range, and the resolution and the dynamic characteristics are good. Based on the UMC 0.18um CMOS process model, simulated results show the comparator can work under a 25dB gain, 55MHz speed and 210. 
10μW power.", "title": "" }, { "docid": "7e38ba11e394acd7d5f62d6a11253075", "text": "The body-schema concept is revisited in the context of embodied cognition, further developing the theory formulated by Marc Jeannerod that the motor system is part of a simulation network related to action, whose function is not only to shape the motor system for preparing an action (either overt or covert) but also to provide the self with information on the feasibility and the meaning of potential actions. The proposed computational formulation is based on a dynamical system approach, which is linked to an extension of the equilibrium-point hypothesis, called Passive Motor Paradigm: this dynamical system generates goal-oriented, spatio-temporal, sensorimotor patterns, integrating a direct and inverse internal model in a multi-referential framework. The purpose of such computational model is to operate at the same time as a general synergy formation machinery for planning whole-body actions in humanoid robots and/or for predicting coordinated sensory-motor patterns in human movements. In order to illustrate the computational approach, the integration of simultaneous, even partially conflicting tasks will be analyzed in some detail with regard to postural-focal dynamics, which can be defined as the fusion of a focal task, namely reaching a target with the whole-body, and a postural task, namely maintaining overall stability.", "title": "" }, { "docid": "b5cc41f689a1792b544ac66a82152993", "text": "Nowadays, Pneumatic Artificial Muscle (PAM) has become one of the most widely-used fluid-power actuators which yields remarkable muscle-like properties such as high force to weight ratio, soft and flexible structure, minimal compressed-air consumption and low cost. To obtain optimum design and usage, it is necessary to understand mechanical behaviors of the PAM. In this study, the proposed models are experimentally derived to describe mechanical behaviors of the PAMs. The experimental results show a non-linear relationship between contraction as well as air pressure within the PAMs and a pulling force of the PAMs. Three different sizes of PAMs available in industry are studied for empirical modeling and simulation. The case studies are presented to verify close agreement on the simulated results to the experimental results when the PAMs perform under various loads. © 2009 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "174fb8b7cb0f45bed49a50ce5ad19c88", "text": "De-noising and extraction of the weak signature are crucial to fault prognostics in which case features are often very weak and masked by noise. The wavelet transform has been widely used in signal de-noising due to its extraordinary time-frequency representation capability. In this paper, the performance of wavelet decomposition-based de-noising and wavelet filter-based de-noising methods are compared based on signals from mechanical defects. The comparison result reveals that wavelet filter is more suitable and reliable to detect a weak signature of mechanical impulse-like defect signals, whereas the wavelet decomposition de-noising method can achieve satisfactory results on smooth signal detection. In order to select optimal parameters for the wavelet filter, a two-step optimization process is proposed. 
Minimal Shannon entropy is used to optimize the Morlet wavelet shape factor. A periodicity detection method based on singular value decomposition (SVD) is used to choose the appropriate scale for the wavelet transform. The signal de-noising results from both simulated signals and experimental data are presented and both support the proposed method. © 2005 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "63f20dd528d54066ed0f189e4c435fe7", "text": "In many specific laboratories the students use only PLC simulator software, because the hardware equipment is expensive. This paper presents a solution that allows students to study both the hardware and software parts in the laboratory work. The hardware part of the solution consists of an old plotter, an adapter board, a PLC and an HMI. The software part of this solution is represented by the projects of the students, in which they developed applications for programming the PLC and the HMI. This equipment can be built very easily and can be used in university labs by students, so that they design and test their applications, from low to high complexity [1], [2].", "title": "" }, { "docid": "363a465d626fec38555563722ae92bb1", "text": "A novel reverse-conducting insulated-gate bipolar transistor (RC-IGBT) featuring an oxide trench placed between the n-collector and the p-collector and a floating p-region (p-float) sandwiched between the n-drift and n-collector is proposed. First, the new structure introduces a high-resistance collector short resistor at low current density, which leads to the suppression of the snapback effect. Second, the collector short resistance can be adjusted by varying the p-float length without increasing the collector cell length. Third, the p-float layer also acts as the base of the n-collector/p-float/n-drift transistor which can be activated and offers a low-resistance current path at high current densities, which contributes to the low on-state voltage of the integrated freewheeling diode and the fast turnoff. As simulations show, the proposed RC-IGBT shows snapback-free output characteristics and faster turnoff compared with the conventional RC-IGBT.", "title": "" }, { "docid": "3dfb419706ae85d232753a085dc145f7", "text": "This chapter describes the different steps of designing, building, simulating, and testing an intelligent flight control module for an increasingly popular unmanned aerial vehicle (UAV), known as a quadrotor. It presents an in-depth view of the modeling of the kinematics, dynamics, and control of such an interesting UAV. A quadrotor offers a challenging control problem due to its highly unstable nature. An effective control methodology is therefore needed for such a unique airborne vehicle. The chapter starts with a brief overview on the quadrotor's background and its applications, in light of its advantages. Comparisons with other UAVs are made to emphasize the versatile capabilities of this special design. For a better understanding of the vehicle's behavior, the quadrotor's kinematics and dynamics are then detailed. This yields the equations of motion, which are used later as a guideline for developing the proposed intelligent flight control scheme. In this chapter, fuzzy logic is adopted for building the flight controller of the quadrotor. 
It has been witnessed that fuzzy logic control offers several advantages over certain types of conventional control methods, specifically in dealing with highly nonlinear systems and modeling uncertainties. Two types of fuzzy inference engines are employed in the design of the flight controller, each of which is explained and evaluated. For testing the designed intelligent flight controller, a simulation environment was first developed. The simulations were made as realistic as possible by incorporating environmental disturbances such as wind gust and the ever-present sensor noise. The proposed controller was then tested on a real test-bed built specifically for this project. Both the simulator and the real quadrotor were later used for conducting different attitude stabilization experiments to evaluate the performance of the proposed control strategy. The controller's performance was also benchmarked against conventional control techniques such as input-output linearization, backstepping and sliding mode control strategies. Conclusions were then drawn based on the conducted experiments and their results.", "title": "" }, { "docid": "50906e5d648b7598c307b09975daf2d8", "text": "Digitization forces industries to adapt to changing market conditions and consumer behavior. Exponential advances in technology, increased consumer power and sharpened competition imply that companies are facing the menace of commoditization. To sustainably succeed in the market, obsolete business models have to be adapted and new business models can be developed. Differentiation and unique selling propositions through innovation as well as holistic stakeholder engagement help companies to master the transformation. To enable companies and start-ups facing the implications of digital change, a tool was created and designed specifically for this demand: the Business Model Builder. This paper investigates the process of transforming the Business Model Builder into a software-supported digitized version. The digital twin allows companies to simulate the iterative adjustment of business models to constantly changing market conditions as well as customer needs on an ongoing basis. The user can modify individual variables, understand interdependencies and see the impact on the result of the business case, i.e. earnings before interest and taxes (EBIT) or economic value added (EVA). The simulation of a business models accordingly provides the opportunity to generate a dynamic view of the business model where any changes of input variables are considered in the result, the business case. Thus, functionality, feasibility and profitability of a business model can be reviewed, tested and validated in the digital simulation tool.", "title": "" }, { "docid": "48eacd86c14439454525e5a570db083d", "text": "RATIONALE, AIMS AND OBJECTIVES\nTotal quality in coagulation testing is a necessary requisite to achieve clinically reliable results. Evidence was provided that poor standardization in the extra-analytical phases of the testing process has the greatest influence on test results, though little information is available so far on prevalence and type of pre-analytical variability in coagulation testing.\n\n\nMETHODS\nThe present study was designed to describe all pre-analytical problems on inpatients routine and stat samples recorded in our coagulation laboratory over a 2-year period and clustered according to their source (hospital departments).\n\n\nRESULTS\nOverall, pre-analytic problems were identified in 5.5% of the specimens. 
Although the highest frequency was observed for paediatric departments, in no case was the comparison of the prevalence among the different hospital departments statistically significant. The more frequent problems could be referred to samples not received in the laboratory following a doctor's order (49.3%), haemolysis (19.5%), clotting (14.2%) and inappropriate volume (13.7%). Specimens not received prevailed in the intensive care unit, surgical and clinical departments, whereas clotted and haemolysed specimens were those most frequently recorded from paediatric and emergency departments, respectively. The present investigation demonstrates a high prevalence of pre-analytical problems affecting samples for coagulation testing.\n\n\nCONCLUSIONS\nFull implementation of a total quality system, encompassing a systematic error tracking system, is a valuable tool to achieve meaningful information on the local pre-analytic processes most susceptible to errors, enabling considerations on specific responsibilities and providing the ideal basis for an efficient feedback within the hospital departments.", "title": "" }, { "docid": "3f6cbad208a819fc8fc6a46208197d59", "text": "The use of visemes as atomic speech units in visual speech analysis and synthesis systems is well-established. Viseme labels are determined using a many-to-one phoneme-to-viseme mapping. However, due to the visual coarticulation effects, an accurate mapping from phonemes to visemes should define a many-to-many mapping scheme. In this research it was found that neither the use of standardized nor speaker-dependent many-to-one viseme labels could satisfy the quality requirements of concatenative visual speech synthesis. Therefore, a novel technique to define a many-to-many phoneme-to-viseme mapping scheme is introduced, which makes use of both treebased and k-means clustering approaches. We show that these many-to-many viseme labels more accurately describe the visual speech information as compared to both phoneme-based and many-toone viseme-based speech labels. In addition, we found that the use of these many-to-many visemes improves the precision of the segment selection phase in concatenative visual speech synthesis using limited speech databases. Furthermore, the resulting synthetic visual speech was both objectively and subjectively found to be of higher quality when the many-to-many visemes are used to describe the speech database as well as the synthesis targets.", "title": "" }, { "docid": "1afdefb31d7b780bb78b59ca8b0d3d8a", "text": "Convolutional Neural Network (CNN) is a very powerful approach to extract discriminative local descriptors for effective image search. Recent work adopts fine-tuned strategies to further improve the discriminative power of the descriptors. Taking a different approach, in this paper, we propose a novel framework to achieve competitive retrieval performance. Firstly, we propose various masking schemes, namely SIFT-mask, SUM-mask, and MAX-mask, to select a representative subset of local convolutional features and remove a large number of redundant features. We demonstrate that this can effectively address the burstiness issue and improve retrieval accuracy. Secondly, we propose to employ recent embedding and aggregating methods to further enhance feature discriminability. 
Extensive experiments demonstrate that our proposed framework achieves state-of-the-art retrieval accuracy.", "title": "" }, { "docid": "07348109c7838032850c039f9a463943", "text": "Ceramics are widely used biomaterials in prosthetic dentistry due to their attractive clinical properties. They are aesthetically pleasing with their color, shade and luster, and they are chemically stable. The main constituents of dental ceramic are Si-based inorganic materials, such as feldspar, quartz, and silica. Traditional feldspar-based ceramics are also referred to as “Porcelain”. The crucial difference between a regular ceramic and a dental ceramic is the proportion of feldspar, quartz, and silica contained in the ceramic. A dental ceramic is a multiphase system, i.e. it contains a dispersed crystalline phase surrounded by a continuous amorphous phase (a glassy phase). Modern dental ceramics contain a higher proportion of the crystalline phase that significantly improves the biomechanical properties of ceramics. Examples of these high crystalline ceramics include lithium disilicate and zirconia.", "title": "" }, { "docid": "affa48f455d5949564302b4c23324458", "text": "MicroRNAs (miRNAs) have within the past decade emerged as key regulators of metabolic homoeostasis. Major tissues in intermediary metabolism important during development of the metabolic syndrome, such as β-cells, liver, skeletal and heart muscle as well as adipose tissue, have all been shown to be affected by miRNAs. In the pancreatic β-cell, a number of miRNAs are important in maintaining the balance between differentiation and proliferation (miR-200 and miR-29 families) and insulin exocytosis in the differentiated state is controlled by miR-7, miR-375 and miR-335. MiR-33a and MiR-33b play crucial roles in cholesterol and lipid metabolism, whereas miR-103 and miR-107 regulates hepatic insulin sensitivity. In muscle tissue, a defined number of miRNAs (miR-1, miR-133, miR-206) control myofibre type switch and induce myogenic differentiation programmes. Similarly, in adipose tissue, a defined number of miRNAs control white to brown adipocyte conversion or differentiation (miR-365, miR-133, miR-455). The discovery of circulating miRNAs in exosomes emphasizes their importance as both endocrine signalling molecules and potentially disease markers. Their dysregulation in metabolic diseases, such as obesity, type 2 diabetes and atherosclerosis stresses their potential as therapeutic targets. This review emphasizes current ideas and controversies within miRNA research in metabolism.", "title": "" }, { "docid": "2795c78d2e81a064173f49887c9b1bb1", "text": "This paper reports a continuously tunable lumped bandpass filter implemented in a third-order coupled resonator configuration. The filter is fabricated on a Borosilicate glass substrate using a surface micromachining technology that offers hightunable passive components. Continuous electrostatic tuning is achieved using three tunable capacitor banks, each consisting of one continuously tunable capacitor and three switched capacitors with pull-in voltage of less than 40 V. The center frequency of the filter is tuned from 1 GHz down to 600 MHz while maintaining a 3-dB bandwidth of 13%-14% and insertion loss of less than 4 dB. The maximum group delay is less than 10 ns across the entire tuning range. The temperature stability of the center frequency from -50°C to 50°C is better than 2%. 
The measured tuning speed of the filter is better than 80 μs, and the IIP3 is better than 20 dBm, which are in good agreement with simulations. The filter occupies a small size of less than 1.5 cm × 1.1 cm. The implemented filter shows the highest performance amongst the fully integrated microelectromechanical systems filters operating in the sub-gigahertz range.", "title": "" }, { "docid": "fd7c514e8681a5292bcbf2bbf6e75664", "text": "In modern days, a large number of automobile accidents are caused by driver fatigue. To address this problem, we propose a vision-based real-time driver fatigue detection system based on eye-tracking, which is an active safety system. Eye tracking is one of the key technologies for future driver assistance systems since human eyes contain much information about the driver's condition such as gaze, attention level, and fatigue level. The face and eyes of the driver are first localized and then marked in every frame obtained from the video source. The eyes are tracked in real time using a correlation function with an automatically generated online template. Additionally, the driver’s distraction and conversations with passengers during driving can lead to serious consequences. A real-time vision-based model for monitoring the driver’s unsafe states, including the fatigue state, is proposed. A time-based eye glance to mitigate driver distraction is also proposed. Keywords— Driver fatigue, Eye-Tracking, Template matching,", "title": "" } ]
scidocsrr
0e9bebb749f36ccfc7349c86c70ce298
Performance Modeling and Evaluation of Distributed Deep Learning Frameworks on GPUs
[ { "docid": "92008a84a80924ec8c0ad1538da2e893", "text": "Large-scale deep learning requires huge computational resources to train a multi-layer neural network. Recent systems propose using 100s to 1000s of machines to train networks with tens of layers and billions of connections. While the computation involved can be done more efficiently on GPUs than on more traditional CPU cores, training such networks on a single GPU is too slow and training on distributed GPUs can be inefficient, due to data movement overheads, GPU stalls, and limited GPU memory. This paper describes a new parameter server, called GeePS, that supports scalable deep learning across GPUs distributed among multiple machines, overcoming these obstacles. We show that GeePS enables a state-of-the-art single-node GPU implementation to scale well, such as to 13 times the number of training images processed per second on 16 machines (relative to the original optimized single-node code). Moreover, GeePS achieves a higher training throughput with just four GPU machines than that a state-of-the-art CPU-only system achieves with 108 machines.", "title": "" } ]
[ { "docid": "1498977b6e68df3eeca6e25c550a5edd", "text": "The Raven's Progressive Matrices (RPM) test is a commonly used test of intelligence. The literature suggests a variety of problem-solving methods for addressing RPM problems. For a graduate-level artificial intelligence class in Fall 2014, we asked students to develop intelligent agents that could address 123 RPM-inspired problems, essentially crowdsourcing RPM problem solving. The students in the class submitted 224 agents that used a wide variety of problem-solving methods. In this paper, we first report on the aggregate results of those 224 agents on the 123 problems, then focus specifically on four of the most creative, novel, and effective agents in the class. We find that the four agents, using four very different problem-solving methods, were all able to achieve significant success. This suggests the RPM test may be amenable to a wider range of problem-solving methods than previously reported. It also suggests that human computation might be an effective strategy for collecting a wide variety of methods for creative tasks.", "title": "" }, { "docid": "fb4d8685bd880f44b489d7d13f5f36ed", "text": "With the advancement in digitalization vast amount of Image data is uploaded and used via Internet in today’s world. With this revolution in uses of multimedia data, key problem in the area of Image processing, Computer vision and big data analytics is how to analyze, effectively process and extract useful information from such data. Traditional tactics to process such a data are extremely time and resource intensive. Studies recommend that parallel and distributed computing techniques have much more potential to process such data in efficient manner. To process such a complex task in efficient manner advancement in GPU based processing is also a candidate solution. This paper we introduce Hadoop-Mapreduce (Distributed system) and CUDA (Parallel system) based image processing. In our experiment using satellite images of different dimension we had compared performance or execution speed of canny edge detection algorithm. Performance is compared for CPU and GPU based Time Complexity.", "title": "" }, { "docid": "09e9b51bdd42ec5fae7d332ce7543053", "text": "This article investigates the cognitive strategies that people use to search computer displays. Several different visual layouts are examined: unlabeled layouts that contain multiple groups of items but no group headings, labeled layouts in which items are grouped and each group has a useful heading, and a target-only layout that contains just one item. A number of plausible strategies were proposed for each layout. Each strategy was programmed into the EPIC cognitive architecture, producing models that simulate the human visual-perceptual, oculomotor, and cognitive processing required for the task. The models generate search time predictions. For unlabeled layouts, the mean layout search times are predicted by a purely random search strategy, and the more detailed positional search times are predicted by a noisy systematic strategy. The labeled layout search times are predicted by a hierarchical strategy in which first the group labels are systematically searched, and then the contents of the target group. The target-only layout search times are predicted by a strategy in which the eyes move directly to the sudden appearance of the target. 
The models demonstrate that human visual search performance can be explained largely in terms of the cognitive strategy HUMAN–COMPUTER INTERACTION, 2004, Volume 19, pp. 183–223 Copyright © 2004, Lawrence Erlbaum Associates, Inc. Anthony Hornof is a computer scientist with interests in human–computer interaction, cognitive modeling, visual search, and eye tracking; he is an Assistant Professor in the Department of Computer and Information Science at the University of Oregon. that is used to coordinate the relevant perceptual and motor processes, a clear and useful visual hierarchy triggers a fundamentally different visual search strategy and effectively gives the user greater control over the visual navigation, and cognitive strategies will be an important component of a predictive visual search tool. The models provide insights pertaining to the visual-perceptual and oculomotor processes involved in visual search and contribute to the science base needed for predictive interface analysis. 184 HORNOF", "title": "" }, { "docid": "acd93c6b041a975dcf52c7bafaf05b16", "text": "Patients with carcinoma of the tongue including the base of the tongue who underwent total glossectomy in a period of just over ten years since January 1979 have been reviewed. Total glossectomy may be indicated as salvage surgery or as a primary procedure. The larynx may be preserved or may have to be sacrificed depending upon the site of the lesion. When the larynx is preserved the use of laryngeal suspension facilitates early rehabilitation and preserves the quality of life to a large extent. Cricopharyngeal myotomy seems unnecessary.", "title": "" }, { "docid": "8bcc223389b7cc2ce2ef4e872a029489", "text": "Issues concerning agriculture, countryside and farmers have been always hindering China’s development. The only solution to these three problems is agricultural modernization. However, China's agriculture is far from modernized. The introduction of cloud computing and internet of things into agricultural modernization will probably solve the problem. Based on major features of cloud computing and key techniques of internet of things, cloud computing, visualization and SOA technologies can build massive data involved in agricultural production. Internet of things and RFID technologies can help build plant factory and realize automatic control production of agriculture. Cloud computing is closely related to internet of things. A perfect combination of them can promote fast development of agricultural modernization, realize smart agriculture and effectively solve the issues concerning agriculture, countryside and farmers.", "title": "" }, { "docid": "9364e07801fc01e50d0598b61ab642aa", "text": "Online learning represents a family of machine learning methods, where a learner attempts to tackle some predictive (or any type of decision-making) task by learning from a sequence of data instances one by one at each time. The goal of online learning is to maximize the accuracy/correctness for the sequence of predictions/decisions made by the online learner given the knowledge of correct answers to previous prediction/learning tasks and possibly additional information. This is in contrast to traditional batch or offline machine learning methods that are often designed to learn a model from the entire training data set at once. Online learning has become a promising technique for learning from continuous streams of data in many real-world applications. 
This survey aims to provide a comprehensive survey of the online machine learning literature through a systematic review of basic ideas and key principles and a proper categorization of different algorithms and techniques. Generally speaking, according to the types of learning tasks and the forms of feedback information, the existing online learning works can be classified into three major categories: (i) online supervised learning where full feedback information is always available, (ii) online learning with limited feedback, and (iii) online unsupervised learning where no feedback is available. Due to space limitation, the survey will be mainly focused on the first category, but also briefly cover some basics of the other two categories. Finally, we also discuss some open issues and attempt to shed light on potential future research directions in this field.", "title": "" }, { "docid": "e442b7944062f6201e779aa1e7d6c247", "text": "We present pigeo, a Python geolocation prediction tool that predicts a location for a given text input or Twitter user. We discuss the design, implementation and application of pigeo, and empirically evaluate it. pigeo is able to geolocate informal text and is a very useful tool for users who require a free and easy-to-use, yet accurate geolocation service based on pre-trained models. Additionally, users can train their own models easily using pigeo’s API.", "title": "" }, { "docid": "82ca6a400bf287dc287df9fa751ddac2", "text": "Research on ontology is becoming increasingly widespread in the computer science community, and its importance is being recognized in a multiplicity of research fields and application areas, including knowledge engineering, database design and integration, information retrieval and extraction. We shall use the generic term “information systems”, in its broadest sense, to collectively refer to these application perspectives. We argue in this paper that so-called ontologies present their own methodological and architectural peculiarities: on the methodological side, their main peculiarity is the adoption of a highly interdisciplinary approach, while on the architectural side the most interesting aspect is the centrality of the role they can play in an information system, leading to the perspective of ontology-driven information systems.", "title": "" }, { "docid": "45b1cb6c9393128c9a9dcf9dbeb50778", "text": "Bitcoin, a distributed, cryptographic, digital currency, gained a lot of media attention for being an anonymous e-cash system. But as all transactions in the network are stored publicly in the blockchain, allowing anyone to inspect and analyze them, the system does not provide real anonymity but pseudonymity. There have already been studies showing the possibility to deanonymize bitcoin users based on the transaction graph and publicly available data. Furthermore, users could be tracked by bitcoin exchanges or shops, where they have to provide personal information that can then be linked to their bitcoin addresses. Special bitcoin mixing services claim to obfuscate the origin of transactions and thereby increase the anonymity of its users. In this paper we evaluate three of these services – Bitcoin Fog, BitLaundry, and the Send Shared functionality of Blockchain.info – by analyzing the transaction graph. 
While Bitcoin Fog and Blockchain.info successfully mix our transaction, we are able to find a direct relation between the input and output transactions in the graph of BitLaundry.", "title": "" }, { "docid": "30d0453033d3951f5b5faf3213eacb89", "text": "Semantic mapping is the incremental process of “mapping” relevant information of the world (i.e., spatial information, temporal events, agents and actions) to a formal description supported by a reasoning engine. Current research focuses on learning the semantic of environments based on their spatial location, geometry and appearance. Many methods to tackle this problem have been proposed, but the lack of a uniform representation, as well as standard benchmarking suites, prevents their direct comparison. In this paper, we propose a standardization in the representation of semantic maps, by defining an easily extensible formalism to be used on top of metric maps of the environments. Based on this, we describe the procedure to build a dataset (based on real sensor data) for benchmarking semantic mapping techniques, also hypothesizing some possible evaluation metrics. Nevertheless, by providing a tool for the construction of a semantic map ground truth, we aim at the contribution of the scientific community in acquiring data for populating the dataset.", "title": "" }, { "docid": "b7959c06c8057418762e12ef2c0ce2ce", "text": "According to Bayesian theories in psychology and neuroscience, minds and brains are (near) optimal in solving a wide range of tasks. We challenge this view and argue that more traditional, non-Bayesian approaches are more promising. We make 3 main arguments. First, we show that the empirical evidence for Bayesian theories in psychology is weak. This weakness relates to the many arbitrary ways that priors, likelihoods, and utility functions can be altered in order to account for the data that are obtained, making the models unfalsifiable. It further relates to the fact that Bayesian theories are rarely better at predicting data compared with alternative (and simpler) non-Bayesian theories. Second, we show that the empirical evidence for Bayesian theories in neuroscience is weaker still. There are impressive mathematical analyses showing how populations of neurons could compute in a Bayesian manner but little or no evidence that they do. Third, we challenge the general scientific approach that characterizes Bayesian theorizing in cognitive science. A common premise is that theories in psychology should largely be constrained by a rational analysis of what the mind ought to do. We question this claim and argue that many of the important constraints come from biological, evolutionary, and processing (algorithmic) considerations that have no adaptive relevance to the problem per se. In our view, these factors have contributed to the development of many Bayesian \"just so\" stories in psychology and neuroscience; that is, mathematical analyses of cognition that can be used to explain almost any behavior as optimal.", "title": "" }, { "docid": "f012c0d9fe795a738b3cd82cef94ef19", "text": "Fraud detection is an industry where incremental gains in predictive accuracy can have large benefits for banks and customers. Banks adapt models to the novel ways in which “fraudsters” commit credit card fraud. They collect data and engineer new features in order to increase predictive power. 
This research compares the algorithmic impact on the predictive power across three supervised classification models: logistic regression, gradient boosted trees, and deep learning. This research also explores the benefits of creating features using domain expertise and feature engineering using an autoencoder—an unsupervised feature engineering method. These two methods of feature engineering combined with the direct mapping of the original variables create six different feature sets. Across these feature sets this research compares the aforementioned models. This research concludes that creating features using domain expertise offers a notable improvement in predictive power. Additionally, the autoencoder offers a way to reduce the dimensionality of the data and slightly boost predictive power.", "title": "" }, { "docid": "a1cd5424dea527e365f038fce60fd821", "text": "Producing literature reviews of complex evidence for policymaking questions is a challenging methodological area. There are several established and emerging approaches to such reviews, but unanswered questions remain, especially around how to begin to make sense of large data sets drawn from heterogeneous sources. Drawing on Kuhn's notion of scientific paradigms, we developed a new method-meta-narrative review-for sorting and interpreting the 1024 sources identified in our exploratory searches. We took as our initial unit of analysis the unfolding 'storyline' of a research tradition over time. We mapped these storylines by using both electronic and manual tracking to trace the influence of seminal theoretical and empirical work on subsequent research within a tradition. We then drew variously on the different storylines to build up a rich picture of our field of study. We identified 13 key meta-narratives from literatures as disparate as rural sociology, clinical epidemiology, marketing and organisational studies. Researchers in different traditions had conceptualised, explained and investigated diffusion of innovations differently and had used different criteria for judging the quality of empirical work. Moreover, they told very different over-arching stories of the progress of their research. Within each tradition, accounts of research depicted human characters emplotted in a story of (in the early stages) pioneering endeavour and (later) systematic puzzle-solving, variously embellished with scientific dramas, surprises and 'twists in the plot'. By first separating out, and then drawing together, these different meta-narratives, we produced a synthesis that embraced the many complexities and ambiguities of 'diffusion of innovations' in an organisational setting. We were able to make sense of seemingly contradictory data by systematically exposing and exploring tensions between research paradigms as set out in their over-arching storylines. In some traditions, scientific revolutions were identifiable in which breakaway researchers had abandoned the prevailing paradigm and introduced a new set of concepts, theories and empirical methods. We concluded that meta-narrative review adds value to the synthesis of heterogeneous bodies of literature, in which different groups of scientists have conceptualised and investigated the 'same' problem in different ways and produced seemingly contradictory findings. Its contribution to the mixed economy of methods for the systematic review of complex evidence should be explored further.", "title": "" }, { "docid": "007791833b15bd3367c11bb17b7abf82", "text": "When speakers talk, they gesture. 
The goal of this review is to investigate the contribution that these gestures make to how we communicate and think. Gesture can play a role in communication and thought at many timespans. We explore, in turn, gesture's contribution to how language is produced and understood in the moment; its contribution to how we learn language and other cognitive skills; and its contribution to how language is created over generations, over childhood, and on the spot. We find that the gestures speakers produce when they talk are integral to communication and can be harnessed in a number of ways. (a) Gesture reflects speakers' thoughts, often their unspoken thoughts, and thus can serve as a window onto cognition. Encouraging speakers to gesture can thus provide another route for teachers, clinicians, interviewers, etc., to better understand their communication partners. (b) Gesture can change speakers' thoughts. Encouraging gesture thus has the potential to change how students, patients, witnesses, etc., think about a problem and, as a result, alter the course of learning, therapy, or an interchange. (c) Gesture provides building blocks that can be used to construct a language. By watching how children and adults who do not already have a language put those blocks together, we can observe the process of language creation. Our hands are with us at all times and thus provide researchers and learners with an ever-present tool for understanding how we talk and think.", "title": "" }, { "docid": "f442354c5a99ece9571168648285f763", "text": "A general closed-form subharmonic stability condition is derived for the buck converter with ripple-based constant on-time control and a feedback filter. The turn-on delay is included in the analysis. Three types of filters are considered: low-pass filter (LPF), phase-boost filter (PBF), and inductor current feedback (ICF) which changes the feedback loop frequency response like a filter. With the LPF, the stability region is reduced. With the PBF or ICF, the stability region is enlarged. Stability conditions are determined both for the case of a single output capacitor and for the case of two parallel-connected output capacitors having widely different time constants. The past research results related to the feedback filters become special cases. All theoretical predictions are verified by experiments.", "title": "" }, { "docid": "3b5b3802d4863a6569071b346b65600d", "text": "In vector space model (VSM), text representation is the task of transforming the content of a textual document into a vector in the term space so that the document could be recognized and classified by a computer or a classifier. Different terms (i.e. words, phrases, or any other indexing units used to identify the contents of a text) have different importance in a text. The term weighting methods assign appropriate weights to the terms to improve the performance of text categorization. In this study, we investigate several widely-used unsupervised (traditional) and supervised term weighting methods on benchmark data collections in combination with SVM and kNN algorithms. In consideration of the distribution of relevant documents in the collection, we propose a new simple supervised term weighting method, i.e. tf.rf, to improve the terms' discriminating power for text categorization task. From the controlled experimental results, these supervised term weighting methods have mixed performance. 
Specifically, our proposed supervised term weighting method, tf.rf, has a consistently better performance than other term weighting methods while other supervised term weighting methods based on information theory or statistical metric perform the worst in all experiments. On the other hand, the popularly used tf.idf method has not shown a uniformly good performance in terms of different data sets.", "title": "" }, { "docid": "794bba509b6c609e4f9204d96bf5fe9c", "text": "Power law distributions are an increasingly common model for computer science applications; for example, they have been used to describe file size distributions and inand out-degree distributions for the Web and Internet graphs. Recently, the similar lognormal distribution has also been suggested as an appropriate alternative model for file size distributions. In this paper, we briefly survey some of the history of these distributions, focusing on work in other fields. We find that several recently proposed models have antecedents in work from decades ago. We also find that lognormal and power law distributions connect quite naturally, and hence it is not surprising that lognormal distributions arise as a possible alternative to power law distributions.", "title": "" }, { "docid": "f74a0c176352b8378d9f27fdf93763c9", "text": "The future of user interfaces will be dominated by hand gestures. In this paper, we explore an intuitive hand gesture based interaction for smartphones having a limited computational capability. To this end, we present an efficient algorithm for gesture recognition with First Person View (FPV), which focuses on recognizing a four swipe model (Left, Right, Up and Down) for smartphones through single monocular camera vision. This can be used with frugal AR/VR devices such as Google Cardboard1 andWearality2 in building AR/VR based automation systems for large scale deployments, by providing a touch-less interface and real-time performance. We take into account multiple cues including palm color, hand contour segmentation, and motion tracking, which effectively deals with FPV constraints put forward by a wearable. We also provide comparisons of swipe detection with the existing methods under the same limitations. We demonstrate that our method outperforms both in terms of gesture recognition accuracy and computational time.", "title": "" }, { "docid": "6cfdad2bb361713616dd2971026758a7", "text": "We consider the problem of controlling a system with unknown, stochastic dynamics to achieve a complex, time-sensitive task. An example of this problem is controlling a noisy aerial vehicle with partially known dynamics to visit a pre-specified set of regions in any order while avoiding hazardous areas. In particular, we are interested in tasks which can be described by signal temporal logic (STL) specifications. STL is a rich logic that can be used to describe tasks involving bounds on physical parameters, continuous time bounds, and logical relationships over time and states. STL is equipped with a continuous measure called the robustness degree that measures how strongly a given sample path exhibits an STL property [4, 3]. This measure enables the use of continuous optimization problems to solve learning [7, 6] or formal synthesis problems [9] involving STL.", "title": "" }, { "docid": "3b49747ef98ebcfa515fb10a22f08017", "text": "This paper reports a qualitative study of thriving older people and illustrates the findings with design fiction. Design research has been criticized as \"solutionist\" i.e. 
solving problems that don't exist or providing \"quick fixes\" for complex social, political and environmental problems. We respond to this critique by presenting a \"solutionist\" board game used to generate design concepts. Players are given data cards and technology dice, they move around the board by pitching concepts that would support positive aging. We argue that framing concept design as a solutionist game explicitly foregrounds play, irony and the limitations of technological intervention. Three of the game concepts are presented as design fictions in the form of advertisements for products and services that do not exist. The paper argues that design fiction can help create a space for design beyond solutionism.", "title": "" } ]
scidocsrr
e09d38267455f5fcd48c41bd948716c1
Topic oriented community detection through social objects and link analysis in social networks
[ { "docid": "bb2504b2275a20010c0d5f9050173d40", "text": "Clustering nodes in a graph is a useful general technique in data mining of large network data sets. In this context, Newman and Girvan [9] recently proposed an objective function for graph clustering called the Q function which allows automatic selection of the number of clusters. Empirically, higher values of the Q function have been shown to correlate well with good graph clusterings. In this paper we show how optimizing the Q function can be reformulated as a spectral relaxation problem and propose two new spectral clustering algorithms that seek to maximize Q. Experimental results indicate that the new algorithms are efficient and effective at finding both good clusterings and the appropriate number of clusters across a variety of real-world graph data sets. In addition, the spectral algorithms are much faster for large sparse graphs, scaling roughly linearly with the number of nodes n in the graph, compared to O(n) for previous clustering algorithms using the Q function.", "title": "" } ]
[ { "docid": "210395d4f0c4db496546da0be3d2524d", "text": "Crimes are a social irritation and cost our society deeply in several ways. Any research that can help in solving crimes quickly will pay for itself. About 10% of the criminals commit about 50% of the crimes [9]. The system is trained by feeding previous years record of crimes taken from legitimate online portal of India listing various crimes such as murder, kidnapping and abduction, dacoits, robbery, burglary, rape and other such crimes. As per data of Indian statistics, which gives data of various crime of past 14 years (2001–2014) a regression model is created and the crime rate for the following years in various states can be predicted [8]. We have used supervised, semi-supervised and unsupervised learning technique [4] on the crime records for knowledge discovery and to help in increasing the predictive accuracy of the crime. This work will be helpful to the local police stations in crime suppression.", "title": "" }, { "docid": "7360c92ef44058694135338acad6838c", "text": "Modern chip multiprocessor (CMP) systems employ multiple memory controllers to control access to main memory. The scheduling algorithm employed by these memory controllers has a significant effect on system throughput, so choosing an efficient scheduling algorithm is important. The scheduling algorithm also needs to be scalable — as the number of cores increases, the number of memory controllers shared by the cores should also increase to provide sufficient bandwidth to feed the cores. Unfortunately, previous memory scheduling algorithms are inefficient with respect to system throughput and/or are designed for a single memory controller and do not scale well to multiple memory controllers, requiring significant finegrained coordination among controllers. This paper proposes ATLAS (Adaptive per-Thread Least-Attained-Service memory scheduling), a fundamentally new memory scheduling technique that improves system throughput without requiring significant coordination among memory controllers. The key idea is to periodically order threads based on the service they have attained from the memory controllers so far, and prioritize those threads that have attained the least service over others in each period. The idea of favoring threads with least-attained-service is borrowed from the queueing theory literature, where, in the context of a single-server queue it is known that least-attained-service optimally schedules jobs, assuming a Pareto (or any decreasing hazard rate) workload distribution. After verifying that our workloads have this characteristic, we show that our implementation of least-attained-service thread prioritization reduces the time the cores spend stalling and significantly improves system throughput. Furthermore, since the periods over which we accumulate the attained service are long, the controllers coordinate very infrequently to form the ordering of threads, thereby making ATLAS scalable to many controllers. We evaluate ATLAS on a wide variety of multiprogrammed SPEC 2006 workloads and systems with 4–32 cores and 1–16 memory controllers, and compare its performance to five previously proposed scheduling algorithms. Averaged over 32 workloads on a 24-core system with 4 controllers, ATLAS improves instruction throughput by 10.8%, and system throughput by 8.4%, compared to PAR-BS, the best previous CMP memory scheduling algorithm. 
ATLAS's performance benefit increases as the number of cores increases.", "title": "" }, { "docid": "a094fe8de029646a408bbb685824581c", "text": "Will reading habit influence your life? Many say yes. Reading computational intelligence principles techniques and applications is a good habit; you can develop this habit to be such interesting way. Yeah, reading habit will not only make you have any favourite activity. It will be one of guidance of your life. When reading has become a habit, you will not make it as disturbing activities or as boring activity. You can gain many benefits and importances of reading.", "title": "" }, { "docid": "0b0273a1e2aeb98eb4115113c8957fd2", "text": "This paper deals with the approach of integrating a bidirectional boost-converter into the drivetrain of a (hybrid) electric vehicle in order to exploit the full potential of the electric drives and the battery. Currently, the automotive norms and standards are defined based on the characteristics of the voltage source. The current technologies of batteries for automotive applications have voltage which depends on the load and the state-of charge. The aim of this paper is to provide better system performance by stabilizing the voltage without the need of redesigning any of the current components in the system. To show the added-value of the proposed electrical topology, loss estimation is developed and proved based on actual components measurements and design. The component and its modelling is then implemented in a global system simulation environment of the electric architecture to show how it contributes enhancing the performance of the system.", "title": "" }, { "docid": "d1c33990b7642ea51a8a568fa348d286", "text": "Connectionist temporal classification CTC has recently shown improved performance and efficiency in automatic speech recognition. One popular decoding implementation is to use a CTC model to predict the phone posteriors at each frame and then perform Viterbi beam search on a modified WFST network. This is still within the traditional frame synchronous decoding framework. In this paper, the peaky posterior property of CTC is carefully investigated and it is found that ignoring blank frames will not introduce additional search errors. Based on this phenomenon, a novel phone synchronous decoding framework is proposed by removing tremendous search redundancy due to blank frames, which results in significant search speed up. The framework naturally leads to an extremely compact phone-level acoustic space representation: CTC lattice. With CTC lattice, efficient and effective modular speech recognition approaches, second pass rescoring for large vocabulary continuous speech recognition LVCSR, and phone-based keyword spotting KWS, are also proposed in this paper. Experiments showed that phone synchronous decoding can achieve 3-4 times search speed up without performance degradation compared to frame synchronous decoding. Modular LVCSR with CTC lattice can achieve further WER improvement. KWS with CTC lattice not only achieved significant equal error rate improvement, but also greatly reduced the KWS model size and increased the search speed.", "title": "" }, { "docid": "8f6da9a81b4efe5e76356c6c30ddd6a6", "text": "Recently, independent component analysis (ICA) has been widely used in the analysis of brain imaging data. An important problem with most ICA algorithms is, however, that they are stochastic; that is, their results may be somewhat different in different runs of the algorithm. 
Thus, the outputs of a single run of an ICA algorithm should be interpreted with some reserve, and further analysis of the algorithmic reliability of the components is needed. Moreover, as with any statistical method, the results are affected by the random sampling of the data, and some analysis of the statistical significance or reliability should be done as well. Here we present a method for assessing both the algorithmic and statistical reliability of estimated independent components. The method is based on running the ICA algorithm many times with slightly different conditions and visualizing the clustering structure of the obtained components in the signal space. In experiments with magnetoencephalographic (MEG) and functional magnetic resonance imaging (fMRI) data, the method was able to show that expected components are reliable; furthermore, it pointed out components whose interpretation was not obvious but whose reliability should incite the experimenter to investigate the underlying technical or physical phenomena. The method is implemented in a software package called Icasso.", "title": "" }, { "docid": "8f930fc4f06f8b17e2826f0975af1fa1", "text": "Smart parking is a typical IoT application that can benefit from advances in sensor, actuator and RFID technologies to provide many services to its users and parking owners of a smart city. This paper considers a smart parking infrastructure where sensors are laid down on the parking spots to detect car presence and RFID readers are embedded into parking gates to identify cars and help in the billing of the smart parking. Both types of devices are endowed with wired and wireless communication capabilities for reporting to a gateway where the situation recognition is performed. The sensor devices are tasked to play one of the three roles: (1) slave sensor nodes located on the parking spot to detect car presence/absence; (2) master nodes located at one of the edges of a parking lot to detect presence and collect the sensor readings from the slave nodes; and (3) repeater sensor nodes, also called \"anchor\" nodes, located strategically at specific locations in the parking lot to increase the coverage and connectivity of the wireless sensor network. While slave and master nodes are placed based on geographic constraints, the optimal placement of the relay/anchor sensor nodes in smart parking is an important parameter upon which the cost and efficiency of the parking system depends. We formulate the optimal placement of sensors in smart parking as an integer linear programming multi-objective problem optimizing the sensor network engineering efficiency in terms of coverage and lifetime maximization, as well as its economic gain in terms of the number of sensors deployed for a specific coverage and lifetime. We propose an exact solution to the node placement problem using single-step and two-step solutions implemented in the Mosel language based on the Xpress-MPsuite of libraries. Experimental results reveal the relative efficiency of the single-step compared to the two-step model on different performance parameters. These results are consolidated by simulation results, which reveal that our solution outperforms a random placement in terms of both energy consumption, delay and throughput achieved by a smart parking network.", "title": "" }, { "docid": "c7162cc2e65c52d9575fe95e2c4f62f4", "text": "The enactive approach to cognition is typically proposed as a viable alternative to traditional cognitive science. 
Enactive cognition displaces the explanatory focus from the internal representations of the agent to the direct sensorimotor interaction with its environment. In this paper, we investigate enactive learning by means of artificial agent simulations. We compare the performances of the enactive agent to an agent operating on classical reinforcement learning in foraging tasks within maze environments. The characteristics of the agents are analysed in terms of the accessibility of the environmental states, goals, and exploration/exploitation tradeoffs. We confirm that the enactive agent can successfully interact with its environment and learn to avoid unfavourable interactions using intrinsically defined goals. The performance of the enactive agent is shown to be limited by the number of affordable actions.", "title": "" }, { "docid": "89c1ab96b509a80ff35103fa35d0a60c", "text": "The mobile ad-hoc network (MANET) is a new wireless technology with features like dynamic topology and the self-configuring ability of nodes. The self-configuring ability of nodes has made MANETs popular in critical situations such as military use and emergency recovery. However, the open medium and broad distribution of nodes make MANETs vulnerable to different attacks. So, to protect a MANET from various attacks, it is important to develop an efficient and secure system for it. Intrusion means any set of actions that attempt to compromise the integrity, confidentiality, or availability of a resource. Intrusion prevention is the primary defense because it is the first step to make the systems secure from attacks by using passwords, biometrics, etc. Even if intrusion prevention methods are used, the system may still be subject to some vulnerabilities. So we need a second wall of defense, known as Intrusion Detection Systems (IDSs), to detect intrusions and produce responses whenever necessary. In this article we present a survey of various intrusion detection schemes available for ad hoc networks. We also describe some of the basic attacks present in ad hoc networks and discuss their available solutions.", "title": "" }, { "docid": "124a50c2e797ffe549e1591d5720acda", "text": "Temporal information has useful features for recognizing facial expressions. However, to manually design useful features requires a lot of effort. In this paper, to reduce this effort, a deep learning technique, which is regarded as a tool to automatically extract useful features from raw data, is adopted. Our deep network is based on two different models. The first deep network extracts temporal appearance features from image sequences, while the other deep network extracts temporal geometry features from temporal facial landmark points. These two models are combined using a new integration method in order to boost the performance of the facial expression recognition. Through several experiments, we show that the two models cooperate with each other. As a result, we achieve superior performance to other state-of-the-art methods in the CK+ and Oulu-CASIA databases. Furthermore, we show that our new integration method gives more accurate results than traditional methods, such as a weighted summation and a feature concatenation method.", "title": "" }, { "docid": "71c31f41d116a51786a4e8ded2c5fb87", "text": "Targeting CTLA-4 represents a new type of immunotherapeutic approach, namely immune checkpoint inhibition. 
Blockade of CTLA-4 by ipilimumab was the first strategy to achieve a significant clinical benefit for late-stage melanoma patients in two phase 3 trials. These results fueled the notion of immunotherapy being the breakthrough strategy for oncology in 2013. Subsequently, many trials have been set up to test various immune checkpoint modulators in malignancies, not only in melanoma. In this review, recent new ideas about the mechanism of action of CTLA-4 blockade, its current and future therapeutic use, and the intensive search for biomarkers for response will be discussed. Immune checkpoint blockade, targeting CTLA-4 and/or PD-1/PD-L1, is currently the most promising systemic therapeutic approach to achieve long-lasting responses or even cure in many types of cancer, not just in patients with melanoma.", "title": "" }, { "docid": "0bc53a10750de315d5a37275dd7ae4a7", "text": "The term stigma refers to problems of knowledge (ignorance), attitudes (prejudice) and behaviour (discrimination). Most research in this area has been based on attitude surveys, media representations of mental illness and violence, has only focused upon schizophrenia, has excluded direct participation by service users, and has included few intervention studies. However, there is evidence that interventions to improve public knowledge about mental illness can be effective. The main challenge in future is to identify which interventions will produce behaviour change to reduce discrimination against people with mental illness.", "title": "" }, { "docid": "2f632cc12346cb0d6aa9ce8e765acd14", "text": "Earlier, the personality of a person was predicted by spending a lot of time with the person, which, as we know, is a very difficult task. Addressing this problem, the present study proposes a method for the behavioral prediction of a person through automated handwriting analysis. Handwriting analysis is a method to predict the personality of a person. This is done by image processing in MATLAB. In order to predict the personality, we take a writing sample and extract different features from it, i.e. the slant of letters and words, pen pressure, spacing between letters, spacing between words, size of letters, and baseline. A segmentation method is used to extract these handwriting features, which are given to an SVM that indicates the behavior of the writer. This gives optimum accuracy with the use of a radial kernel function.", "title": "" }, { "docid": "67ca9035e792e2c6164b87330937bb36", "text": "In conventional full-duplex radio communication systems, the transmitter (Tx) is active at the same time as the receiver (Rx). The isolation between the Tx and the Rx is ensured by duplex filters. However, an increasing number of long-term evolution (LTE) bands crave multiband operation. Therefore, a new front-end architecture, addressing the increasing number of LTE bands, as well as multiple standards, is presented. In such an architecture, the Tx and Rx chains are separated throughout the front-end. Addition of bands is solved by making the antennas and filters tunable. Banks of duplex filters are replaced by tunable filters and antennas, providing a duplexer function over the air between the Tx and the Rx. A hardware system has been designed and fabricated to demonstrate the performance of this front-end architecture. Measurements demonstrate how the architecture addresses inter-modulation and Rx desensitization due to the Tx signal. 
The filters and antennas demonstrate tunability across multiple bands. System validation is detailed for LTE band I. Frequency response, as well as linearity measurements of the complete Tx and Rx front-end chains, show that the system requirements are fulfilled.", "title": "" }, { "docid": "c51acd24cb864b050432a055fef2de9a", "text": "Electric motor and power electronics-based inverter are the major components in industrial and automotive electric drives. In this paper, we present a model-based fault diagnostics system developed using a machine learning technology for detecting and locating multiple classes of faults in an electric drive. Power electronics inverter can be considered to be the weakest link in such a system from hardware failure point of view; hence, this work is focused on detecting faults and finding which switches in the inverter cause the faults. A simulation model has been developed based on the theoretical foundations of electric drives to simulate the normal condition, all single-switch and post-short-circuit faults. A machine learning algorithm has been developed to automatically select a set of representative operating points in the (torque, speed) domain, which in turn is sent to the simulated electric drive model to generate signals for the training of a diagnostic neural network, fault diagnostic neural network (FDNN). We validated the capability of the FDNN on data generated by an experimental bench setup. Our research demonstrates that with a robust machine learning approach, a diagnostic system can be trained based on a simulated electric drive model, which can lead to a correct classification of faults over a wide operating domain.", "title": "" }, { "docid": "dfd88750bc1d42e8cc798d2097426910", "text": "Melanoma is one of the most lethal forms of skin cancer. It occurs on the skin surface and develops from cells known as melanocytes. The same cells are also responsible for benign lesions commonly known as moles, which are visually similar to melanoma in its early stage. If melanoma is treated correctly, it is very often curable. Currently, much research is concentrated on the automated recognition of melanomas. In this paper, we propose an automated melanoma recognition system, which is based on deep learning method combined with so called hand-crafted RSurf features and Local Binary Patterns. The experimental evaluation on a large publicly available dataset demonstrates high classification accuracy, sensitivity, and specificity of our proposed approach when it is compared with other classifiers on the same dataset.", "title": "" }, { "docid": "c2ac1c1f08e7e4ccba14ea203acba661", "text": "This paper describes an approach to determine a layout for the order picking area in warehouses, such that the average travel distance for the order pickers is minimized. We give analytical formulas by which the average length of an order picking route can be calculated for two different routing policies. The optimal layout can be determined by using such formula as an objective function in a non-linear programming model. The optimal number of aisles in an order picking area appears to depend strongly on the required storage space and the pick list size.", "title": "" }, { "docid": "a506f3f6c401f83eaba830abb20c8fff", "text": "The mechanisms governing the recruitment of functional glutamate receptors at nascent excitatory postsynapses following initial axon-dendrite contact remain unclear. 
We examined here the ability of neurexin/neuroligin adhesions to mobilize AMPA-type glutamate receptors (AMPARs) at postsynapses through a diffusion/trap process involving the scaffold molecule PSD-95. Using single nanoparticle tracking in primary rat and mouse hippocampal neurons overexpressing or lacking neuroligin-1 (Nlg1), a striking inverse correlation was found between AMPAR diffusion and Nlg1 expression level. The use of Nlg1 mutants and inhibitory RNAs against PSD-95 demonstrated that this effect depended on intact Nlg1/PSD-95 interactions. Furthermore, functional AMPARs were recruited within 1 h at nascent Nlg1/PSD-95 clusters assembled by neurexin-1β multimers, a process requiring AMPAR membrane diffusion. Triggering novel neurexin/neuroligin adhesions also caused a depletion of PSD-95 from native synapses and a drop in AMPAR miniature EPSCs, indicating a competitive mechanism. Finally, both AMPAR level at synapses and AMPAR-dependent synaptic transmission were diminished in hippocampal slices from newborn Nlg1 knock-out mice, confirming an important role of Nlg1 in driving AMPARs to nascent synapses. Together, these data reveal a mechanism by which membrane-diffusing AMPARs can be rapidly trapped at PSD-95 scaffolds assembled at nascent neurexin/neuroligin adhesions, in competition with existing synapses.", "title": "" }, { "docid": "4e41e762756c32edfb73ce144bf7ba49", "text": "In this paper, we outline a model of semantics that integrates aspects of discourse-sensitive logics with the compositional mechanisms available from lexically-driven semantic interpretation. Specifically, we concentrate on developing a composition logic required to properly model complex types within the Generative Lexicon (henceforth GL), for which we employ SDRT principles. As we are presently interested in the composition of information to construct logical forms, we will build on one standard way of arriving at such representations, the lambda calculus, in which functional types are exploited. We outline a new type calculus that captures one of the fundamental ideas of GL: providing a set of techniques governing type shifting possibilities for various lexical items so as to allow for the combination of lexical items in cases where there is an apparent type mismatch. These techniques themselves should follow from the structure of the lexicon and its underlying logic.", "title": "" }, { "docid": "78582e3594deb53149422cc41387e330", "text": "Approximate entropy (ApEn) is a recently developed statistic quantifying regularity and complexity, which appears to have potential application to a wide variety of relatively short (greater than 100 points) and noisy time-series data. The development of ApEn was motivated by data length constraints commonly encountered, e.g., in heart rate, EEG, and endocrine hormone secretion data sets. We describe ApEn implementation and interpretation, indicating its utility to distinguish correlated stochastic processes, and composite deterministic/ stochastic models. We discuss the key technical idea that motivates ApEn, that one need not fully reconstruct an attractor to discriminate in a statistically valid manner-marginal probability distributions often suffice for this purpose. 
Finally, we discuss why algorithms to compute, e.g., correlation dimension and the Kolmogorov-Sinai (KS) entropy, often work well for true dynamical systems, yet sometimes operationally confound for general models, with the aid of visual representations of reconstructed dynamics for two contrasting processes. (c) 1995 American Institute of Physics.", "title": "" } ]
scidocsrr
d902b33a1bb72273c6bbe7750eeac7dd
How to Measure Motivation : A Guide for the Experimental Social Psychologist
[ { "docid": "3efb43150881649d020a0c721dc39ae5", "text": "Six studies explore the role of goal shielding in self-regulation by examining how the activation of focal goals to which the individual is committed inhibits the accessibility of alternative goals. Consistent evidence was found for such goal shielding, and a number of its moderators were identified: Individuals' level of commitment to the focal goal, their degree of anxiety and depression, their need for cognitive closure, and differences in their goal-related tenacity. Moreover, inhibition of alternative goals was found to be more pronounced when they serve the same overarching purpose as the focal goal, but lessened when the alternative goals facilitate focal goal attainment. Finally, goal shielding was shown to have beneficial consequences for goal pursuit and attainment.", "title": "" } ]
[ { "docid": "6cbdb95791cc214a1b977e92e69904bb", "text": "We study reinforcement learning of chat-bots with recurrent neural network architectures when the rewards are noisy and expensive to obtain. For instance, a chat-bot used in automated customer service support can be scored by quality assurance agents, but this process can be expensive, time consuming and noisy. Previous reinforcement learning work for natural language processing uses onpolicy updates and/or is designed for on-line learning settings. We demonstrate empirically that such strategies are not appropriate for this setting and develop an off-policy batch policy gradient method (BPG). We demonstrate the efficacy of our method via a series of synthetic experiments and an Amazon Mechanical Turk experiment on a restaurant recommendations dataset.", "title": "" }, { "docid": "553b72da13c28e56822ccc900ff114fa", "text": "This paper presents some of the unique verification, validation, and certification challenges that must be addressed during the development of adaptive system software for use in safety-critical aerospace applications. The paper first discusses the challenges imposed by the current regulatory guidelines for aviation software. Next, a number of individual technologies being researched by NASA and others are discussed that focus on various aspects of the software challenges. These technologies include the formal methods of model checking, compositional verification, static analysis, program synthesis, and runtime analysis. Then the paper presents some validation challenges for adaptive control, including proving convergence over long durations, guaranteeing controller stability, using new tools to compute statistical error bounds, identifying problems in fault-tolerant software, and testing in the presence of adaptation. These specific challenges are presented in the context of a software validation effort in testing the Integrated Flight Control System (IFCS) neural control software at the Dryden Flight Research Center. Lastly, the challenges to develop technologies to help prevent aircraft system failures, detect and identify failures that do occur, and provide enhanced guidance and control capability to prevent and recover from vehicle loss of control are briefly cited in connection with ongoing work at the NASA Langley Research Center.", "title": "" }, { "docid": "23ef781d3230124360f24cc6e38fb15f", "text": "Exploration of ANNs for the economic purposes is described and empirically examined with the foreign exchange market data. For the experiments, panel data of the exchange rates (USD/EUR, JPN/USD, USD/ GBP) are examined and optimized to be used for time-series predictions with neural networks. In this stage the input selection, in which the processing steps to prepare the raw data to a suitable input for the models are investigated. The best neural network is found with the best forecasting abilities, based on a certain performance measure. A visual graphs on the experiments data set is presented after processing steps, to illustrate that particular results. The out-of-sample results are compared with training ones. & 2015 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "5d5c2016ec936969d3d3a07e0b48f51e", "text": "In an Information technology world, the ability to effectively process massive datasets has become integral to a broad range of scientific and other academic disciplines. We are living in an era of data deluge and as a result, the term “Big Data” is appearing in many contexts. 
It ranges from meteorology, genomics, complex physics simulations, biological and environmental research, finance and business to healthcare. Big Data refers to data streams of higher velocity and higher variety. The infrastructure required to support the acquisition of Big Data must deliver low, predictable latency in both capturing data and in executing short, simple queries. It must also be able to handle very high transaction volumes, often in a distributed environment, and support flexible, dynamic data structures. Data processing is considerably more challenging than simply locating, identifying, understanding, and citing data. For effective large-scale analysis all of this has to happen in a completely automated manner. This requires differences in data structure and semantics to be expressed in forms that are computer understandable, and then “robotically” resolvable. There is a strong body of work in data integration, mapping and transformations. However, considerable additional work is required to achieve automated error-free difference resolution. This paper proposes a framework, based on recent research, for data mining using Big Data.", "title": "" }, { "docid": "5525b8ddce9a8a6430da93f48e93dea5", "text": "One major goal of vision is to infer physical models of objects, surfaces, and their layout from sensors. In this paper, we aim to interpret indoor scenes from one RGBD image. Our representation encodes the layout of walls, which must conform to a Manhattan structure but is otherwise flexible, and the layout and extent of objects, modeled with CAD-like 3D shapes. We represent both the visible and occluded portions of the scene, producing a complete 3D parse. Such a scene interpretation is useful for robotics and visual reasoning, but difficult to produce due to the well-known challenge of segmentation, the high degree of occlusion, and the diversity of objects in indoor scenes. We take a data-driven approach, generating sets of potential object regions, matching to regions in training images, and transferring and aligning associated 3D models while encouraging fit to observations and overall consistency. We demonstrate encouraging results on the NYU v2 dataset and highlight a variety of interesting directions for future work.", "title": "" }, { "docid": "40c93dacc8318bc440d23fedd2acbd47", "text": "An electrical-balance duplexer uses series connected step-down transformers to enhance linearity and power handling capability by reducing the voltage swing across nonlinear components. Wideband, dual-notch Tx-to-Rx isolation is demonstrated experimentally with a planar inverted-F antenna. The 0.18μm CMOS prototype achieves >50dB isolation for 220MHz aggregated bandwidth or >40dB dual-notch isolation for 160MHz bandwidth, +49dBm Tx-path IIP3 and -48dBc ACLR1 for +27dBm at the antenna.", "title": "" }, { "docid": "0b0723466d6fc726154befea8a1d7398", "text": "● Volume of pages makes efficient WWW navigation difficult ● Aim: To analyse users' navigation history to generate tools that increase navigational efficiency – ie. Predictive server prefetching ● Provides a mathematical foundation to several concepts", "title": "" }, { "docid": "e4f4fe27fff75bd7ed079f3094deaedb", "text": "This paper considers the scenario that multiple data owners wish to apply a machine learning method over the combined dataset of all owners to obtain the best possible learning output but do not want to share the local datasets owing to privacy concerns. 
We design systems for the scenario that the stochastic gradient descent (SGD) algorithm is used as the machine learning method because SGD (or its variants) is at the heart of recent deep learning techniques over neural networks. Our systems differ from existing systems in the following features: (1) any activation function can be used, meaning that no privacy-preserving-friendly approximation is required; (2) gradients computed by SGD are not shared but the weight parameters are shared instead; and (3) robustness against colluding parties even in the extreme case that only one honest party exists. We prove that our systems, while privacy-preserving, achieve the same learning accuracy as SGD and hence retain the merit of deep learning with respect to accuracy. Finally, we conduct several experiments using benchmark datasets, and show that our systems outperform previous systems in terms of learning accuracy. keywords: privacy preservation, stochastic gradient descent, distributed trainers, neural networks.", "title": "" }, { "docid": "9ac7dbae53fe06937780a53dd3432f80", "text": "Artefact evaluation is regarded as being crucial for Design Science Research (DSR) in order to rigorously prove an artefact’s relevance for practice. The availability of guidelines for structuring DSR processes notwithstanding, the current body of knowledge provides only rudimentary means for a design researcher to select and justify appropriate artefact evaluation strategies in a given situation. This paper proposes patterns that could be used to articulate and justify artefact evaluation strategies within DSR projects. These patterns have been synthesised from prior DSR literature concerned with evaluation strategies. They distinguish both ex ante as well as ex post evaluations and reflect current DSR approaches and evaluation criteria.", "title": "" }, { "docid": "c2a3344c607cf06c24ed8d2664243284", "text": "It is common for cloud users to require clusters of inter-connected virtual machines (VMs) in a geo-distributed IaaS cloud, to run their services. Compared to isolated VMs, key challenges on dynamic virtual cluster (VC) provisioning (computation + communication resources) are two-fold: (1) optimal placement of VCs and inter-VM traffic routing involve NP-hard problems, which are non-trivial to solve offline, not to mention if an online efficient algorithm is sought; (2) an efficient pricing mechanism is missing, which charges a market-driven price for each VC as a whole upon request, while maximizing system efficiency or provider revenue over the entire span. This paper proposes efficient online auction mechanisms to address the above challenges. We first design SWMOA, a novel online algorithm for dynamic VC provisioning and pricing, achieving truthfulness, individual rationality, computation efficiency, and $(1+2\log \mu)$-competitiveness in social welfare, where $\mu$ is related to the problem size. 
Next, applying a randomized reduction technique, we convert the social welfare maximizing auction into a revenue maximizing online auction, PRMOA, achieving $O(\log \mu)$-competitiveness in provider revenue, as well as truthfulness, individual rationality and computation efficiency. We investigate auction design in different cases of resource cost functions in the system. We validate the efficacy of the mechanisms through solid theoretical analysis and trace-driven simulations.", "title": "" }, { "docid": "dd53308cc19f85e2a7ab2e379e196b6c", "text": "Due to the increasingly aging population, there is a rising demand for assistive living technologies for the elderly to ensure their health and well-being. The elderly are mostly chronic patients who require frequent check-ups of multiple vital signs, some of which (e.g., blood pressure and blood glucose) vary greatly according to the daily activities that the elderly are involved in. Therefore, the development of novel wearable intelligent systems to effectively monitor the vital signs continuously over a 24 hour period is in some cases crucial for understanding the progression of chronic symptoms in the elderly. In this paper, recent development of Wearable Intelligent Systems for e-Health (WISEs) is reviewed, including breakthrough technologies and technical challenges that remain to be solved. A novel application of wearable technologies for transient cardiovascular monitoring during water drinking is also reported. In particular, our latest results found that heart rate increased by 9 bpm (P < 0.001) and pulse transit time was reduced by 5 ms (P < 0.001), indicating a possible rise in blood pressure, during swallowing. In addition to monitoring physiological conditions during daily activities, it is anticipated that WISEs will have a number of other potentially viable applications, including the real-time risk prediction of sudden cardiovascular events and deaths. Category: Smart and intelligent computing", "title": "" }, { "docid": "1c77e4e01e20b33aca309adabb37868d", "text": "From the automated text processing point of view, natural language is very redundant in the sense that many different words share a common or similar meaning. For a computer this can be hard to understand without some background knowledge. Latent Semantic Indexing (LSI) is a technique that helps in extracting some of this background knowledge from a corpus of text documents. This can also be viewed as extraction of hidden semantic concepts from text documents. On the other hand visualization can be very helpful in data analysis, for instance, for finding main topics that appear in larger sets of documents. Extraction of main concepts from documents using techniques such as LSI can make the results of visualizations more useful. For example, given a set of descriptions of European Research projects (6FP) one can find main areas that these projects cover including semantic web, e-learning, security, etc. In this paper we describe a method for visualization of a document corpus based on LSI, the system implementing it and give results of using the system on several datasets.", "title": "" }, { "docid": "98f052bd353437e70b4ccc15d933d961", "text": "Current cloud providers use fixed-price based mechanisms to allocate Virtual Machine (VM) instances to their users. 
The fixed-price based mechanisms do not provide an efficient allocation of resources and do not maximize the revenue of the cloud providers. A better alternative would be to use combinatorial auction-based resource allocation mechanisms. In this PhD dissertation we will design, study and implement combinatorial auction-based mechanisms for efficient provisioning and allocation of VM instances in cloud computing environments. We present our preliminary results consisting of three combinatorial auction-based mechanisms for VM provisioning and allocation. We also present an efficient bidding algorithm that can be used by the cloud users to decide on how to bid for their requested bundles of VM instances.", "title": "" }, { "docid": "ef142067a29f8662e36d68ee37c07bce", "text": "The lack of assessment tools to analyze serious games and insufficient knowledge on their impact on players is a recurring critique in the field of game and media studies, education science and psychology. Although initial empirical studies on serious games usage deliver discussable results, numerous questions remain unacknowledged. In particular, questions regarding the quality of their formal conceptual design in relation to their purpose mostly stay uncharted. In the majority of cases the designers' good intentions justify incoherence and insufficiencies in their design. In addition, serious games are mainly assessed in terms of the quality of their content, not in terms of their intention-based design. This paper argues that analyzing a game's formal conceptual design, its elements, and their relation to each other based on the game's purpose is a constructive first step in assessing serious games. By outlining the background of the Serious Game Design Assessment Framework and exemplifying its use, a constructive structure to examine purpose-based games is introduced. To demonstrate how to assess the formal conceptual design of serious games we applied the SGDA Framework to the online games \"Sweatshop\" (2011) and \"ICED\" (2008).", "title": "" }, { "docid": "566144a980fe85005f7434f7762bfeb9", "text": "This article describes the rationale, development, and validation of the Scale for Suicide Ideation (SSI), a 19-item clinical research instrument designed to quantify and assess suicidal intention. The scale was found to have high internal consistency and moderately high correlations with clinical ratings of suicidal risk and self-administered measures of self-harm. Furthermore, it was sensitive to changes in levels of depression and hopelessness over time. Its construct validity was supported by two studies by different investigators testing the relationship between hopelessness, depression, and suicidal ideation and by a study demonstrating a significant relationship between high level of suicidal ideation and \"dichotomous\" attitudes about life and related concepts on a semantic differential test. Factor analysis yielded three meaningful factors: active suicidal desire, specific plans for suicide, and passive suicidal desire.", "title": "" }, { "docid": "176c9231f27d22658be5107a74ab2f32", "text": "The emerging ambient persuasive technology looks very promising for many areas of personal and ubiquitous computing. Persuasive applications aim at changing human attitudes or behavior through the power of software designs. 
This theory-creating article suggests the concept of a behavior change support system (BCSS), whether web-based, mobile, ubiquitous, or more traditional information system to be treated as the core of research into persuasion, influence, nudge, and coercion. This article provides a foundation for studying BCSSs, in which the key constructs are the O/C matrix and the PSD model. It will (1) introduce the archetypes of behavior change via BCSSs, (2) describe the design process for building persuasive BCSSs, and (3) exemplify research into BCSSs through the domain of health interventions. Recognizing the themes put forward in this article will help leverage the full potential of computing for producing behavioral changes.", "title": "" }, { "docid": "11ae42bedc18dedd0c29004000a4ec00", "text": "A hand injury can have great impact on a person's daily life. However, the current manual evaluations of hand functions are imprecise and inconvenient. In this research, a data glove embedded with 6-axis inertial sensors is proposed. With the proposed angle calculating algorithm, accurate bending angles are measured to estimate the real-time movements of hands. This proposed system can provide physicians with an efficient tool to evaluate the recovery of patients and improve the quality of hand rehabilitation.", "title": "" }, { "docid": "39be1d73b84872b0ae1d61bbd0fc96f8", "text": "Annotating data is a common bottleneck in building text classifiers. This is particularly problematic in social media domains, where data drift requires frequent retraining to maintain high accuracy. In this paper, we propose and evaluate a text classification method for Twitter data whose only required human input is a single keyword per class. The algorithm proceeds by identifying exemplar Twitter accounts that are representative of each class by analyzing Twitter Lists (human-curated collections of related Twitter accounts). A classifier is then fit to the exemplar accounts and used to predict labels of new tweets and users. We develop domain adaptation methods to address the noise and selection bias inherent to this approach, which we find to be critical to classification accuracy. Across a diverse set of tasks (topic, gender, and political affiliation classification), we find that the resulting classifier is competitive with a fully supervised baseline, achieving superior accuracy on four of six datasets despite using no manually labeled data.", "title": "" }, { "docid": "f1d4323cbabd294723a2fd68321ad640", "text": "Mycosis fungoides (MF), a low-grade lymphoproliferative disorder, is the most common type of cutaneous T-cell lymphoma. Typically, neoplastic T cells localize to the skin and produce patches, plaques, tumours or erythroderma. Diagnosis of MF can be difficult due to highly variable presentations and the sometimes nonspecific nature of histological findings. Molecular biology has improved the diagnostic accuracy. Nevertheless, clinical experience is of substantial importance as MF can resemble a wide variety of skin diseases. We performed a literature review and found that MF can mimic >50 different clinical entities. We present a structured framework of clinical variations of classical, unusual and distinct forms of MF. 
Distinct subforms such as ichthyotic MF, adnexotropic (including syringotropic and folliculotropic) MF, MF with follicular mucinosis, granulomatous MF with granulomatous slack skin and papuloerythroderma of Ofuji are delineated in more detail.", "title": "" }, { "docid": "5c74348ce0028786990b4ca39b1e858d", "text": "The terminology Internet of Things (IoT) refers to a future where every day physical objects are connected by the Internet in one form or the other, but outside the traditional desktop realm. The successful emergence of the IoT vision, however, will require computing to extend past traditional scenarios involving portables and smart-phones to the connection of everyday physical objects and the integration of intelligence with the environment. Subsequently, this will lead to the development of new computing features and challenges. The main purpose of this paper, therefore, is to investigate the features, challenges, and weaknesses that will come about, as the IoT becomes reality with the connection of more and more physical objects. Specifically, the study seeks to assess emergent challenges due to denial of service attacks, eavesdropping, node capture in the IoT infrastructure, and physical security of the sensors. We conducted a literature review about IoT, their features, challenges, and vulnerabilities. The methodology paradigm used was qualitative in nature with an exploratory research design, while data was collected using the desk research method. We found that, in the distributed form of architecture in IoT, attackers could hijack unsecured network devices converting them into bots to attack third parties. Moreover, attackers could target communication channels and extract data from the information flow. Finally, the perceptual layer in distributed IoT architecture is also found to be vulnerable to node capture attacks, including physical capture, brute force attack, DDoS attacks, and node privacy leaks.", "title": "" } ]
scidocsrr
dfbe3ab81b76c649f8e79edf81f8c8df
Some Faces are More Equal than Others: Hierarchical Organization for Accurate and Efficient Large-Scale Identity-Based Face Retrieval
[ { "docid": "535c8a15005505fce4b1dfc09d060981", "text": "The need for appropriate ways to measure the distance or similarity between data is ubiquitous in machine learning, pattern recognition and data mining, but handcrafting such good metrics for specific problems is generally difficult. This has led to the emergence of metric learning, which aims at automatically learning a metric from data and has attracted a lot of interest in machine learning and related fields for the past ten years. This survey paper proposes a systematic review of the metric learning literature, highlighting the pros and cons of each approach. We pay particular attention to Mahalanobis distance metric learning, a well-studied and successful framework, but additionally present a wide range of methods that have recently emerged as powerful alternatives, including nonlinear metric learning, similarity learning and local metric learning. Recent trends and extensions, such as semi-supervised metric learning, metric learning for histogram data and the derivation of generalization guarantees, are also covered. Finally, this survey addresses metric learning for structured data, in particular edit distance learning, and attempts to give an overview of the remaining challenges in metric learning for the years to come.", "title": "" } ]
[ { "docid": "ac910612672c2c46fb2abd039d65e1df", "text": "In the last few years, there has been a wave of articles related to behavioral addictions; some of them have a focus on online pornography addiction. However, despite all efforts, we are still unable to profile when engaging in this behavior becomes pathological. Common problems include: sample bias, the search for diagnostic instrumentals, opposing approximations to the matter, and the fact that this entity may be encompassed inside a greater pathology (i.e., sex addiction) that may present itself with very diverse symptomatology. Behavioral addictions form a largely unexplored field of study, and usually exhibit a problematic consumption model: loss of control, impairment, and risky use. Hypersexual disorder fits this model and may be composed of several sexual behaviors, like problematic use of online pornography (POPU). Online pornography use is on the rise, with a potential for addiction considering the \"triple A\" influence (accessibility, affordability, anonymity). This problematic use might have adverse effects in sexual development and sexual functioning, especially among the young population. We aim to gather existing knowledge on problematic online pornography use as a pathological entity. Here we try to summarize what we know about this entity and outline some areas worthy of further research.", "title": "" }, { "docid": "54f3c26ab9d9d6afdc9e1bf9e96f02f9", "text": "Game designers use human playtesting to gather feedback about game design elements when iteratively improving a game. Playtesting, however, is expensive: human testers must be recruited, playtest results must be aggregated and interpreted, and changes to game designs must be extrapolated from these results. Can automated methods reduce this expense? We show how active learning techniques can formalize and automate a subset of playtesting goals. Specifically, we focus on the low-level parameter tuning required to balance a game once the mechanics have been chosen. Through a case study on a shoot-‘em-up game we demonstrate the efficacy of active learning to reduce the amount of playtesting needed to choose the optimal set of game parameters for two classes of (formal) design objectives. This work opens the potential for additional methods to reduce the human burden of performing playtesting for a variety of relevant design concerns.", "title": "" }, { "docid": "b868a1bf3a3a45fbba8ea27527ca47fd", "text": "Social media and microblog tools are increasingly used by individuals to express their feelings and opinions in the form of short text messages. Detecting emotions in text has a wide range of applications including identifying anxiety or depression of individuals and measuring well-being or public mood of a community. In this paper, we propose a new approach for automatically classifying text messages of individuals to infer their emotional states. To model emotional states, we utilize the well-established Circumplex model that characterizes affective experience along two dimensions: valence and arousal. We select Twitter messages as input data set, as they provide a very large, diverse and freely available ensemble of emotions. Using hash-tags as labels, our methodology trains supervised classifiers to detect multiple classes of emotion on potentially huge data sets with no manual effort. We investigate the utility of several features for emotion detection, including unigrams, emoticons, negations and punctuations. 
To tackle the problem of sparse and high dimensional feature vectors of messages, we utilize a lexicon of emotions. We have compared the accuracy of several machine learning algorithms, including SVM, KNN, Decision Tree, and Naive Bayes for classifying Twitter messages. Our technique has an accuracy of over 90%, while demonstrating robustness across learning algorithms.", "title": "" }, { "docid": "518b96236ffa2ce0413a0e01d280937a", "text": "In this paper, we propose a low-rank representation with symmetric constraint (LRRSC) method for robust subspace clustering. Given a collection of data points approximately drawn from multiple subspaces, the proposed technique can simultaneously recover the dimension and members of each subspace. LRRSC extends the original low-rank representation algorithm by integrating a symmetric constraint into the low-rankness property of high-dimensional data representation. The symmetric low-rank representation, which preserves the subspace structures of high-dimensional data, guarantees weight consistency for each pair of data points so that highly correlated data points of subspaces are represented together. Moreover, it can be efficiently calculated by solving a convex optimization problem. We provide a rigorous proof for minimizing the nuclear-norm regularized least square problem with a symmetric constraint. The affinity matrix for spectral clustering can be obtained by further exploiting the angular information of the principal directions of the symmetric low-rank representation. This is a critical step towards evaluating the memberships between data points. Experimental results on benchmark databases demonstrate the effectiveness and robustness of LRRSC compared with several state-of-the-art subspace clustering algorithms.", "title": "" }, { "docid": "2074ab39d5cec1f9e645ff2ad457f3e3", "text": "[Context and motivation] The current breakthrough of natural language processing (NLP) techniques can provide the requirements engineering (RE) community with powerful tools that can help addressing specific tasks of natural language (NL) requirements analysis, such as traceability, ambiguity detection and requirements classification, to name a few. [Question/problem] However, modern NLP techniques are mainly statistical, and need large NL requirements datasets, to support appropriate training, test and validation of the techniques. The RE community has experimented with NLP since long time, but datasets were often proprietary, or limited to few software projects for which requirements were publicly available. Hence, replication of the experiments and generalization have always been an issue. [Principal idea/results] Our near future commitment is to provide a publicly available NL requirements dataset. [Contribution] To this end, we are collecting requirements documents from the Web, and we are representing them in a common XML format. In this paper, we present the current version of the dataset, together with our agenda concerning formatting, extension, and annotation of the dataset.", "title": "" }, { "docid": "8fac46b10cc8a439f9aa4eedfd2f413d", "text": "How does a lack of sleep affect our brains? In contrast to the benefits of sleep, frameworks exploring the impact of sleep loss are relatively lacking. Importantly, the effects of sleep deprivation (SD) do not simply reflect the absence of sleep and the benefits attributed to it; rather, they reflect the consequences of several additional factors, including extended wakefulness. 
With a focus on neuroimaging studies, we review the consequences of SD on attention and working memory, positive and negative emotion, and hippocampal learning. We explore how this evidence informs our mechanistic understanding of the known changes in cognition and emotion associated with SD, and the insights it provides regarding clinical conditions associated with sleep disruption.", "title": "" }, { "docid": "094dbd57522cb7b9b134b14852bea78b", "text": "When encountering qualitative research for the first time, one is confronted with both the number of methods and the difficulty of collecting, analysing and presenting large amounts of data. In quantitative research, it is possible to make a clear distinction between gathering and analysing data. However, this distinction is not clear-cut in qualitative research. The objective of this paper is to provide insight for the novice researcher and the experienced researcher coming to grounded theory for the first time. For those who already have experience in the use of the method the paper provides further much needed discussion arising out of the method's adoption in the IS field. In this paper the authors present a practical application and illustrate how grounded theory method was applied to interpretive case study research. The paper discusses grounded theory method and provides guidance for the use of the method in interpretive studies.", "title": "" }, { "docid": "89dc7cad01e784f047774ab665fb53d4", "text": "This paper studies a top-k hierarchical classification problem. In top-k classification, one is allowed to make k predictions and no penalty is incurred if at least one of the k predictions is correct. In hierarchical classification, classes form a structured hierarchy, and misclassification costs depend on the relation between the correct class and the incorrect class in the hierarchy. Despite the fact that both top-k classification and hierarchical classification have gained increasing interest, the two problems have always been studied separately. In this paper, we define a top-k hierarchical loss function using a real world application. We provide the Bayes-optimal solution that minimizes the expected top-k hierarchical misclassification cost. Via numerical experiments, we show that our solution outperforms two baseline methods that address only one of the two issues.", "title": "" }, { "docid": "8401deada9010f05e3c9907a421d6760", "text": "Heuristics evaluation is one of the common techniques being used for usability evaluation. The potential of HE has been explored in games design and development and later playability heuristics evaluation (PHE) is generated. PHE has been used in evaluating games. Issues in games usability cover forms of game usability, game interface, game mechanics, game narrative and game play. These general heuristics have the potential to be further explored in the specific domain of educational games. Combination of general heuristics of games (tailored based on specific domain) and education heuristics seems to be an excellent focus in order to evaluate the usability issues in educational games especially educational games produced in Malaysia.", "title": "" }, { "docid": "3a5ac4dc112c079955104bda98f80b58", "text": "This review examines vestibular compensation and vestibular rehabilitation from a unified translational research perspective. 
Laboratory studies illustrate neurobiological principles of vestibular compensation at the molecular, cellular and systems levels in animal models that inform vestibular rehabilitation practice. However, basic research has been hampered by an emphasis on 'naturalistic' recovery, with time after insult and drug interventions as primary dependent variables. The vestibular rehabilitation literature, on the other hand, provides information on how the degree of compensation can be shaped by specific activity regimens. The milestones of the early spontaneous static compensation mark the re-establishment of static gaze stability, which provides a common coordinate frame for the brain to interpret residual vestibular information in the context of visual, somatosensory and visceral signals that convey gravitoinertial information. Stabilization of the head orientation and the eye orientation (suppression of spontaneous nystagmus) appear to be necessary but not sufficient conditions for successful rehabilitation, and define a baseline for initiating retraining. The lessons from vestibular rehabilitation in animal models offer the possibility of shaping the recovery trajectory to identify molecular and genetic factors that can improve vestibular compensation.", "title": "" }, { "docid": "cf52d720512c316dc25f8167d5571162", "text": "BACKGROUND\nHidradenitis suppurativa (HS) is a chronic relapsing skin disease. Recent studies have shown promising results of anti-tumor necrosis factor-alpha treatment.\n\n\nOBJECTIVE\nTo compare the efficacy and safety of infliximab and adalimumab in the treatment of HS.\n\n\nMETHODS\nA retrospective study was performed to compare 2 cohorts of 10 adult patients suffering from severe, recalcitrant HS. In 2005, 10 patients were treated with infliximab intravenous (i.v.) (3 infusions of 5 mg/kg at weeks 0, 2, and 6). In 2009, 10 other patients were treated in the same hospital with adalimumab subcutaneous (s.c.) 40 mg every other week. Both cohorts were followed up for 1 year using identical evaluation methods [Sartorius score, quality of life index, reduction of erythrocyte sedimentation rate (ESR) and C-reactive protein (CRP), patient and doctor global assessment, and duration of efficacy].\n\n\nRESULTS\nNineteen patients completed the study. In both groups, the severity of the HS diminished. Infliximab performed better in all aspects. The average Sartorius score was reduced to 54% of baseline for the infliximab group and 66% of baseline for the adalimumab group.\n\n\nCONCLUSIONS\nAdalimumab s.c. 40 mg every other week is less effective than infliximab i.v. 5 mg/kg at weeks 0, 2, and 6.", "title": "" }, { "docid": "5af009ec32eeda769e309b0979f5fbd3", "text": "A modified pole-and-knife (MPK) method of harvesting oil palms was designed and fabricated. The method was tested along with two existing methods, namely the bamboo pole-and-knife (BPK) and the single rope-and-cutlass (SRC) methods. Test results showed that the MPK method was superior to the other methods in reducing the time spent in searching for and collecting scattered loose fruits (and hence the harvesting time), increasing the recovery of scattered loose fruits, eliminating the waist problem of the fruit collectors and increasing the ease of transportation and use of the harvesting pole.", "title": "" }, { "docid": "047480185afbea439eee2ee803b9d1f9", "text": "The ability to perceive and analyze terrain is a key problem in mobile robot navigation. 
Terrain perception problems arise in planetary robotics, agriculture, mining, and, of course, self-driving cars. Here, we introduce the PTA (probabilistic terrain analysis) algorithm for terrain classification with a fastmoving robot platform. The PTA algorithm uses probabilistic techniques to integrate range measurements over time, and relies on efficient statistical tests for distinguishing drivable from nondrivable terrain. By using probabilistic techniques, PTA is able to accommodate severe errors in sensing, and identify obstacles with nearly 100% accuracy at speeds of up to 35mph. The PTA algorithm was an essential component in the DARPA Grand Challenge, where it enabled our robot Stanley to traverse the entire course in record time.", "title": "" }, { "docid": "406d839d15c18ac9c462c5f5af6b10b7", "text": "The Multiple Meanings of Open Government Data: Understanding Different Stakeholders and Their Perspectives Felipe Gonzalez-Zapata & Richard Heeks Centre for Development Informatics, University of Manchester, Manchester, M13 9PL, UK Corresponding author: Prof. Richard Heeks, Centre for Development Informatics, IDPM, SEED, University of Manchester, Manchester, M13 9PL, UK, +44-161-275-2870 richard.heeks@manchester.ac.uk", "title": "" }, { "docid": "e059d7e04c3dba8ed570ad1d72a647b5", "text": "An electronic throttle is a low-power dc servo drive which positions the throttle plate. Its application in modern automotive engines leads to improvements in vehicle drivability, fuel economy, and emissions. Transmission friction and the return spring limp-home nonlinearity significantly affect the electronic throttle performance. The influence of these effects is analyzed by means of computer simulations, experiments, and analytical calculations. A dynamic friction model is developed in order to adequately capture the experimentally observed characteristics of the presliding-displacement and breakaway effects. The linear part of electronic throttle process model is also analyzed and experimentally identified. A nonlinear control strategy is proposed, consisting of a proportional-integral-derivative (PID) controller and a feedback compensator for friction and limp-home effects. The PID controller parameters are analytically optimized according to the damping optimum criterion. The proposed control strategy is verified by computer simulations and experiments.", "title": "" }, { "docid": "6a196d894d94b194627f6e3c127c83fb", "text": "The advantages provided to memory by the distribution of multiple practice or study opportunities are among the most powerful effects in memory research. In this paper, we critically review the class of theories that presume contextual or encoding variability as the sole basis for the advantages of distributed practice, and recommend an alternative approach based on the idea that some study events remind learners of other study events. Encoding variability theory encounters serious challenges in two important phenomena that we review here: superadditivity and nonmonotonicity. The bottleneck in such theories lies in the assumption that mnemonic benefits arise from the increasing independence, rather than interdependence, of study opportunities. 
The reminding model accounts for many basic results in the literature on distributed practice, readily handles data that are problematic for encoding variability theories, including superadditivity and nonmonotonicity, and provides a unified theoretical framework for understanding the effects of repetition and the effects of associative relationships on memory.", "title": "" }, { "docid": "b2de2955568a37301828708e15b5ed15", "text": "ISPRS and CNES announced the HRS (High Resolution Stereo) Scientific Assessment Program during the ISPRS Commission I Symposium in Denver in November 2002. 9 test areas throughout the world have been selected for this program. One of the test sites is located in Bavaria, Germany, for which the PI comes from DLR. For a second region, which is situated in Catalonia – Barcelona and surroundings – DLR has the role of a Co-Investigator. The goal is to derive a DEM from the along-track stereo data of the SPOT HRS sensor and to assess the accuracy by comparison with ground control points and DEM data of superior quality. For the derivation of the DEM, the stereo processing software, developed at DLR for the MOMS-2P three line stereo camera is used. As a first step, the interior and exterior orientation of the camera, delivered as ancillary data (DORIS and ULS) are extracted. According to CNES these data should lead to an absolute orientation accuracy of about 30 m. No bundle block adjustment with ground control is used in the first step of the photogrammetric evaluation. A dense image matching, using very dense positions as kernel centers provides the parallaxes. The quality of the matching is controlled by forward and backward matching of the two stereo partners using the local least squares matching method. Forward intersection leads to points in object space which are then interpolated to a DEM of the region in a regular grid. Additionally, orthoimages are generated from the images of the two looking directions. The orthoimage and DEM accuracy is determined by using the ground control points and the available DEM data of superior accuracy (DEM derived from laser data and/or classical airborne photogrammetry). DEM filtering methods are applied and a comparison to SRTM-DEMs is performed. It is shown that a fusion of the DEMs derived from optical and radar data leads to higher accuracies. In the second step ground control points are used for bundle adjustment to improve the exterior orientation and the absolute accuracy of the SPOT-DEM.", "title": "" }, { "docid": "4ca5fec568185d3699c711cc86104854", "text": "Attackers often create systems that automatically rewrite and reorder their malware to avoid detection. Typical machine learning approaches, which learn a classifier based on a handcrafted feature vector, are not sufficiently robust to such reorderings. We propose a different approach, which, similar to natural language modeling, learns the language of malware spoken through the executed instructions and extracts robust, time domain features. Echo state networks (ESNs) and recurrent neural networks (RNNs) are used for the projection stage that extracts the features. These models are trained in an unsupervised fashion. A standard classifier uses these features to detect malicious files. We explore a few variants of ESNs and RNNs for the projection stage, including Max-Pooling and Half-Frame models which we propose. 
The best performing hybrid model uses an ESN for the recurrent model, Max-Pooling for non-linear sampling, and logistic regression for the final classification. Compared to the standard trigram of events model, it improves the true positive rate by 98.3% at a false positive rate of 0.1%.", "title": "" }, { "docid": "abda48a065aecbe34f86ce3490520402", "text": "Wireless Sensor Network (WSN) consists of small low-cost, low-power multifunctional nodes interconnected to efficiently aggregate and transmit data to sink. Cluster-based approaches use some nodes as Cluster Heads (CHs) and organize WSNs efficiently for aggregation of data and energy saving. A CH conveys information gathered by cluster nodes and aggregates/compresses data before transmitting it to a sink. However, this additional responsibility of the node results in a higher energy drain leading to uneven network degradation. Low Energy Adaptive Clustering Hierarchy (LEACH) offsets this by probabilistically rotating cluster heads role among nodes with energy above a set threshold. CH selection in WSN is NP-Hard as optimal data aggregation with efficient energy savings cannot be solved in polynomial time. In this work, a modified firefly heuristic, synchronous firefly algorithm, is proposed to improve the network performance. Extensive simulation shows the proposed technique to perform well compared to LEACH and energy-efficient hierarchical clustering. Simulations show the effectiveness of the proposed method in decreasing the packet loss ratio by an average of 9.63% and improving the energy efficiency of the network when compared to LEACH and EEHC.", "title": "" } ]
scidocsrr
463cfc839609d32f61e48ffd239310f4
Centering Theory in Spanish: Coding Manual
[ { "docid": "c1e39be2fa21a4f47d163c1407490dc8", "text": "Most existing anaphora resolution algorithms are designed to account only for anaphors with NP-antecedents. This paper describes an algorithm for the resolution of discourse deictic anaphors, which constitute a large percentage of anaphors in spoken dialogues. The success of the resolution is dependent on the classification of all pronouns and demonstratives into individual, discourse deictic and vague anaphora. Finally, the empirical results of the application of the algorithm to a corpus of spoken dialogues are presented.", "title": "" } ]
[ { "docid": "b35922663b4728c409528675be15d586", "text": "High-resolution screen printing of pristine graphene is introduced for the rapid fabrication of conductive lines on flexible substrates. Well-defined silicon stencils and viscosity-controlled inks facilitate the preparation of high-quality graphene patterns as narrow as 40 μm. This strategy provides an efficient method to produce highly flexible graphene electrodes for printed electronics.", "title": "" }, { "docid": "6d5b7b5e1738993991a1344a1f584b68", "text": "Smart route planning gathers increasing interest as cities become crowded and jammed. We present a system for individual trip planning that incorporates future traffic hazards in routing. Future traffic conditions are computed by a Spatio-Temporal Random Field based on a stream of sensor readings. In addition, our approach estimates traffic flow in areas with low sensor coverage using a Gaussian Process Regression. The conditioning of spatial regression on intermediate predictions of a discrete probabilistic graphical model allows to incorporate historical data, streamed online data and a rich dependency structure at the same time. We demonstrate the system and test model assumptions with a real-world use-case from Dublin city, Ireland.", "title": "" }, { "docid": "14fb6228827657ba6f8d35d169ad3c63", "text": "In a recent paper, the authors proposed a new class of low-complexity iterative thresholding algorithms for reconstructing sparse signals from a small set of linear measurements. The new algorithms are broadly referred to as AMP, for approximate message passing. This is the first of two conference papers describing the derivation of these algorithms, connection with the related literature, extensions of the original framework, and new empirical evidence. In particular, the present paper outlines the derivation of AMP from standard sum-product belief propagation, and its extension in several directions. We also discuss relations with formal calculations based on statistical mechanics methods.", "title": "" }, { "docid": "c8768e560af11068890cc097f1255474", "text": "Abstract This paper describes the functionality of MEAD, a comprehensive, public domain, open source, multidocument multilingual summarization environment that has been thus far downloaded by more than 500 organizations. MEAD has been used in a variety of summarization applications ranging from summarization for mobile devices to Web page summarization within a search engine and to novelty detection.", "title": "" }, { "docid": "052ae69b1fe396f66cb4788372dc3c79", "text": "Model transformation by example is a novel approach in model-driven software engineering to derive model transformation rules from an initial prototypical set of interrelated source and target models, which describe critical cases of the model transformation problem in a purely declarative way. In the current paper, we automate this approach using inductive logic programming (Muggleton and Raedt in J Logic Program 19-20:629–679, 1994) which aims at the inductive construction of first-order clausal theories from examples and background knowledge.", "title": "" }, { "docid": "b3962fd4000fced796f3764d009c929e", "text": "Low-field extremity magnetic resonance imaging (lfMRI) is currently commercially available and has been used clinically to evaluate rheumatoid arthritis (RA). However, one disadvantage of this new modality is that the field of view (FOV) is too small to assess hand and wrist joints simultaneously. 
Thus, we have developed a new lfMRI system, compacTscan, with a FOV that is large enough to simultaneously assess the entire wrist to proximal interphalangeal joint area. In this work, we examined its clinical value compared to conventional 1.5 tesla (T) MRI. The comparison involved evaluating three RA patients by both 0.3 T compacTscan and 1.5 T MRI on the same day. Bone erosion, bone edema, and synovitis were estimated by our new compact MRI scoring system (cMRIS) and the kappa coefficient was calculated on a joint-by-joint basis. We evaluated a total of 69 regions. Bone erosion was detected in 49 regions by compacTscan and in 48 regions by 1.5 T MRI, while the total erosion score was 77 for compacTscan and 76.5 for 1.5 T MRI. These findings point to excellent agreement between the two techniques (kappa = 0.833). Bone edema was detected in 14 regions by compacTscan and in 19 by 1.5 T MRI, and the total edema score was 36.25 by compacTscan and 47.5 by 1.5 T MRI. Pseudo-negative findings were noted in 5 regions. However, there was still good agreement between the techniques (kappa = 0.640). Total number of evaluated joints was 33. Synovitis was detected in 13 joints by compacTscan and 14 joints by 1.5 T MRI, while the total synovitis score was 30 by compacTscan and 32 by 1.5 T MRI. Thus, although 1 pseudo-positive and 2 pseudo-negative findings resulted from the joint evaluations, there was again excellent agreement between the techniques (kappa = 0.827). Overall, the data obtained by our compacTscan system showed high agreement with those obtained by conventional 1.5 T MRI with regard to diagnosis and the scoring of bone erosion, edema, and synovitis. We conclude that compacTscan is useful for diagnosis and estimation of disease activity in patients with RA.", "title": "" }, { "docid": "50570741405703e6b47d285237b6eeed", "text": "The knowledge base is a machine-readable set of knowledge. More and more multi-domain and large-scale knowledge bases have emerged in recent years, and they play an essential role in many information systems and semantic annotation tasks. However we do not have a perfect knowledge base yet and maybe we will never have a perfect one, because all the knowledge bases have limited coverage while new knowledge continues to emerge. Therefore populating and enriching the existing knowledge base become important tasks. Traditional knowledge base population task usually leverages the information embedded in the unstructured free text. Recently researchers found that massive structured tables on the Web are high-quality relational data and easier to be utilized than the unstructured text. Our goal of this paper is to enrich the knowledge base using Wikipedia tables. Here, knowledge means binary relations between entities and we focus on the relations in some specific domains. There are two basic types of information can be used in this task: the existing relation instances and the connection between types and relations. We firstly propose two basic probabilistic models based on two types of information respectively. Then we propose a light-weight aggregated model to combine the advantages of basic models. The experimental results show that our method is an effective approach to enriching the knowledge base with both high precision and recall.", "title": "" }, { "docid": "5bc1c336b8e495e44649365f11af4ab8", "text": "Convolutional neural networks (CNN) are limited by the lack of capability to handle geometric information due to the fixed grid kernel structure. 
The availability of depth data enables progress in RGB-D semantic segmentation with CNNs. State-of-the-art methods either use depth as additional images or process spatial information in 3D volumes or point clouds. These methods suffer from high computation and memory cost. To address these issues, we present Depth-aware CNN by introducing two intuitive, flexible and effective operations: depth-aware convolution and depth-aware average pooling. By leveraging depth similarity between pixels in the process of information propagation, geometry is seamlessly incorporated into CNN. Without introducing any additional parameters, both operators can be easily integrated into existing CNNs. Extensive experiments and ablation studies on challenging RGB-D semantic segmentation benchmarks validate the effectiveness and flexibility of our approach.", "title": "" }, { "docid": "c9e11acaa2fbee77d079ecafbb9ae93a", "text": "Alcohol consumption is highly prevalent in university students. Early detection in future health professionals is important: their consumption might not only influence their own health but may determine how they deal with the implementation of preventive strategies in the future. The aim of this paper is to detect the prevalence of risky alcohol consumption in first- and last-degree year students and to compare their drinking patterns.Risky drinking in pharmacy students (n=434) was assessed and measured with the AUDIT questionnaire (Alcohol Use Disorders Identification Test). A comparative analysis between college students from the first and fifth years of the degree in pharmacy, and that of a group of professors was carried to see differences in their alcohol intake patterns.Risky drinking was detected in 31.3% of students. The highest prevalence of risky drinkers, and the total score of the AUDIT test was found in students in their first academic year. Students in the first academic level taking morning classes had a two-fold risk of risky drinking (OR=1.9 (IC 95%1.1-3.1)) compared with students in the fifth level. The frequency of alcohol consumption increases with the academic level, whereas the number of alcohol beverages per drinking occasion falls.Risky drinking is high during the first year of university. As alcohol consumption might decrease with age, it is important to design preventive strategies that will strengthen this tendency.", "title": "" }, { "docid": "b753eb752d4f87dbff82d77e8417f389", "text": "Our research team has spent the last few years studying the cognitive processes involved in simultaneous interpreting. The results of this research have shown that professional interpreters develop specific ways of using their working memory, due to their work in simultaneous interpreting; this allows them to perform the processes of linguistic input, lexical and semantic access, reformulation and production of the segment translated both simultaneously and under temporal pressure (Bajo, Padilla & Padilla, 1998). This research led to our interest in the processes involved in the tasks of mediation in general. We understand that linguistic and cultural mediation involves not only translation but also the different forms of interpreting: consecutive and simultaneous. Our general objective in this project is to outline a cognitive theory of translation and interpreting and find empirical support for it. 
From the field of translation and interpreting there have been some attempts to create global and partial theories of the processes of mediation (Gerver, 1976; Moser-Mercer, 1997; Gile, 1997), but most of these attempts lack empirical support. On the other hand, from the field of psycholinguistics there have been some attempts to make an empirical study of the tasks of translation (De Groot, 1993; Sánchez-Casas Davis and GarcíaAlbea, 1992) and interpreting (McDonald and Carpenter, 1981), but these have always been partial, concentrating on very specific aspects of translation and interpreting. The specific objectives of this project are:", "title": "" }, { "docid": "fa0f02cde08a3cee4b691788815cb757", "text": "Control strategies for these contaminants will require a better understanding of how they move around the globe.", "title": "" }, { "docid": "39803815c3edfaa2327327efaef80804", "text": "Spatial pyramid matching (SPM) based pooling has been the dominant choice for state-of-art image classification systems. In contrast, we propose a novel object-centric spatial pooling (OCP) approach, following the intuition that knowing the location of the object of interest can be useful for image classification. OCP consists of two steps: (1) inferring the location of the objects, and (2) using the location information to pool foreground and background features separately to form the image-level representation. Step (1) is particularly challenging in a typical classification setting where precise object location annotations are not available during training. To address this challenge, we propose a framework that learns object detectors using only image-level class labels, or so-called weak labels. We validate our approach on the challenging PASCAL07 dataset. Our learned detectors are comparable in accuracy with stateof-the-art weakly supervised detection methods. More importantly, the resulting OCP approach significantly outperforms SPM-based pooling in image classification.", "title": "" }, { "docid": "0f1a36a4551dc9c6b4ae127c34ff7330", "text": "Internet of Things (IoT) is reshaping our daily lives by bridging the gaps between physical and digital world. To enable ubiquitous sensing, seamless connection and real-time processing for IoT applications, fog computing is considered as a key component in a heterogeneous IoT architecture, which deploys storage and computing resources to network edges. However, the fog-based IoT architecture can lead to various security and privacy risks, such as compromised fog nodes that may impede developments of IoT by attacking the data collection and gathering period. In this paper, we propose a novel privacy-preserving and reliable scheme for the fog-based IoT to address the data privacy and reliability challenges of the selective data aggregation service. Specifically, homomorphic proxy re-encryption and proxy re-authenticator techniques are respectively utilized to deal with the data privacy and reliability issues of the service, which supports data aggregation over selective data types for any type-driven applications. We define a new threat model to formalize the non-collusive and collusive attacks of compromised fog nodes, and it is demonstrated that the proposed scheme can prevent both non-collusive and collusive attacks in our model. 
In addition, performance evaluations show the efficiency of the scheme in terms of computational costs and communication overheads.", "title": "" }, { "docid": "1a9e75efcc710b3bc8c5d450d29eea7c", "text": "This paper presents the tuning of the structure and parameters of a neural network using an improved genetic algorithm (GA). It is also shown that the improved GA performs better than the standard GA based on some benchmark test functions. A neural network with switches introduced to its links is proposed. By doing this, the proposed neural network can learn both the input-output relationships of an application and the network structure using the improved GA. The number of hidden nodes is chosen manually by increasing it from a small number until the learning performance in terms of fitness value is good enough. Application examples on sunspot forecasting and associative memory are given to show the merits of the improved GA and the proposed neural network.", "title": "" }, { "docid": "912c92dd4755cfb280f948bd4264ded7", "text": "A decision is a commitment to a proposition or plan of action based on information and values associated with the possible outcomes. The process operates in a flexible timeframe that is free from the immediacy of evidence acquisition and the real time demands of action itself. Thus, it involves deliberation, planning, and strategizing. This Perspective focuses on perceptual decision making in nonhuman primates and the discovery of neural mechanisms that support accuracy, speed, and confidence in a decision. We suggest that these mechanisms expose principles of cognitive function in general, and we speculate about the challenges and directions before the field.", "title": "" }, { "docid": "a671c6eff981b5e3a0466e53f22c4521", "text": "This paper investigates recently proposed approaches for defending against adversarial examples and evaluating adversarial robustness. We motivate adversarial risk as an objective for achieving models robust to worst-case inputs. We then frame commonly used attacks and evaluation metrics as defining a tractable surrogate objective to the true adversarial risk. This suggests that models may optimize this surrogate rather than the true adversarial risk. We formalize this notion as obscurity to an adversary, and develop tools and heuristics for identifying obscured models and designing transparent models. We demonstrate that this is a significant problem in practice by repurposing gradient-free optimization techniques into adversarial attacks, which we use to decrease the accuracy of several recently proposed defenses to near zero. Our hope is that our formulations and results will help researchers to develop more powerful defenses.", "title": "" }, { "docid": "f8d50c7fe96fdf8fbe06332ab7e1a2a6", "text": "There is a strong need for advanced control methods in battery management systems, especially in the plug-in hybrid and electric vehicles sector, due to cost and safety issues of new high-power battery packs and high-energy cell design. Limitations in computational speed and available memory require the use of very simple battery models and basic control algorithms, which in turn result in suboptimal utilization of the battery. This work investigates the possible use of optimal control strategies for charging. We focus on the minimum time charging problem, where different constraints on internal battery states are considered. 
Based on features of the open-loop optimal charging solution, we propose a simple one-step predictive controller, which is shown to recover the time-optimal solution, while being feasible for real-time computations. We present simulation results suggesting a decrease in charging time by 50% compared to the conventional constant-current / constant-voltage method for lithium-ion batteries.", "title": "" }, { "docid": "e05270c1d2abeda1cee99f1097c1c5d5", "text": "E-transactions have become promising and very much convenient due to worldwide and usage of the internet. The consumer reviews are increasing rapidly in number on various products. These large numbers of reviews are beneficial to manufacturers and consumers alike. It is a big task for a potential consumer to read all reviews to make a good decision of purchasing. It is beneficial to mine available consumer reviews for popular products from various product review sites of consumer. The first step is performing sentiment analysis to decide the polarity of a review. On the basis of polarity, we can then classify the review. Comparison is made among the different WEKA classifiers in the form of charts and graphs.", "title": "" }, { "docid": "dbf683e908ea9e5962d0830e6b8d24fd", "text": "This paper studies physical layer security in a wireless ad hoc network with numerous legitimate transmitter–receiver pairs and eavesdroppers. A hybrid full-duplex (FD)/half-duplex receiver deployment strategy is proposed to secure legitimate transmissions, by letting a fraction of legitimate receivers work in the FD mode sending jamming signals to confuse eavesdroppers upon their information receptions, and letting the other receivers work in the half-duplex mode just receiving their desired signals. The objective of this paper is to choose properly the fraction of FD receivers for achieving the optimal network security performance. Both accurate expressions and tractable approximations for the connection outage probability and the secrecy outage probability of an arbitrary legitimate link are derived, based on which the area secure link number, network-wide secrecy throughput, and network-wide secrecy energy efficiency are optimized, respectively. Various insights into the optimal fraction are further developed, and its closed-form expressions are also derived under perfect self-interference cancellation or in a dense network. It is concluded that the fraction of FD receivers triggers a non-trivial tradeoff between reliability and secrecy, and the proposed strategy can significantly enhance the network security performance.", "title": "" } ]
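As an illustration of the one-step predictive charging idea described in the battery-management passage above, the sketch below applies a greedy rule (at each step, use the largest current that keeps the terminal voltage within its limit) to a toy equivalent-circuit cell; on such a model the rule reduces to constant-current charging followed by a constant-voltage tail. The linear open-circuit-voltage model, the parameter values, and the termination rule are illustrative assumptions only, not the battery model or controller of the cited work.

```python
import numpy as np

# Toy equivalent-circuit cell: V_term = OCV(soc) + I * R  (illustrative parameters).
CAP_AH, R, V_MAX, I_MAX, I_CUT, DT = 2.0, 0.05, 4.2, 4.0, 0.1, 1.0  # Ah, ohm, V, A, A, s

def ocv(soc):
    # Crude linear open-circuit voltage, an assumption for illustration only.
    return 3.4 + 0.8 * soc

def one_step_charge(soc=0.0):
    t = 0.0
    while True:
        # Greedy one-step rule: largest current that keeps V_term <= V_MAX this step.
        i = min(I_MAX, (V_MAX - ocv(soc)) / R)
        if i < I_CUT or soc >= 1.0:
            return t, soc
        soc = min(1.0, soc + i * DT / (CAP_AH * 3600.0))
        t += DT

t_end, soc_end = one_step_charge()
print(f"charged to soc={soc_end:.3f} in {t_end / 60:.1f} min")
```

On this toy model the current limit binds early (the constant-current phase) and the voltage limit binds late (the constant-voltage tail), which is the behaviour the passage contrasts with time-optimal charging under additional internal-state constraints.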
scidocsrr
08d3c2023248fc0fa4a853f3fd55733b
Measurement Issues in Galvanic Intrabody Communication: Influence of Experimental Setup
[ { "docid": "ff0b13d3841913de36104e37cc893b26", "text": "Modeling of intrabody communication (IBC) entails the understanding of the interaction between electromagnetic fields and living tissues. At the same time, an accurate model can provide practical hints toward the deployment of an efficient and secure communication channel for body sensor networks. In the literature, two main IBC coupling techniques have been proposed: galvanic and capacitive coupling. Nevertheless, models that are able to emulate both coupling approaches have not been reported so far. In this paper, a simple model based on a distributed parameter structure with the flexibility to adapt to both galvanic and capacitive coupling has been proposed. In addition, experimental results for both coupling methods were acquired by means of two harmonized measurement setups. The model simulations have been subsequently compared with the experimental data, not only to show their validity but also to revise the practical frequency operation range for both techniques. Finally, the model, along with the experimental results, has also allowed us to provide some practical rules to optimally tackle IBC design.", "title": "" }, { "docid": "df1124c8b5b3295f09da347d19f152f6", "text": "The signal transmission mechanism on the surface of the human body is studied for the application to body channel communication (BCC). From Maxwell's equations, the complete equation of electrical field on the human body is developed to obtain a general BCC model. The mechanism of BCC consists of three parts according to the operating frequencies and channel distances: the quasi-static near-field coupling part, the reactive induction-field radiation part, and the surface wave far-field propagation part. The general BCC model by means of the near-field and far-field approximation is developed to be valid in the frequency range from 100 kHz to 100 MHz and distance up to 1.3 m based on the measurements of the body channel characteristics. Finally, path loss characteristics of BCC are formulated for the design of BCC systems and many potential applications.", "title": "" }, { "docid": "704611db1aea020103b093a2156cd94d", "text": "With the growing number of wearable devices and applications, there is an increasing need for a flexible body channel communication (BCC) system that supports both scalable data rate and low power operation. In this paper, a highly flexible frequency-selective digital transmission (FSDT) transmitter that supports both data scalability and low power operation with the aid of two novel implementation methods is presented. In an FSDT system, data rate is limited by the number of Walsh spreading codes available for use in the optimal body channel band of 40-80 MHz. The first method overcomes this limitation by applying multi-level baseband coding scheme to a carrierless FSDT system to enhance the bandwidth efficiency and to support a data rate of 60 Mb/s within a 40-MHz bandwidth. The proposed multi-level coded FSDT system achieves six times higher data rate as compared to other BCC systems. The second novel implementation method lies in the use of harmonic frequencies of a Walsh encoded FSDT system that allows the BCC system to operate in the optimal channel bandwidth between 40-80 MHz with half the clock frequency. Halving the clock frequency results in a power consumption reduction of 32%. The transmitter was fabricated in a 65-nm CMOS process. It occupies a core area of 0.24 × 0.3 mm 2. 
When operating under a 60-Mb/s data-rate mode, the transmitter consumes 1.85 mW and it consumes only 1.26 mW when operating under a 5-Mb/s data-rate mode.", "title": "" } ]
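As a side note on the frequency-selective digital transmission scheme sketched in the body-channel transmitter passage above, the snippet below shows one conventional way to generate Walsh spreading codes from a Hadamard matrix and to spread and despread data symbols with them. The code length, symbol alphabet, and despreading step are illustrative assumptions and do not reproduce the multi-level coding or circuit details of the cited transmitter.

```python
import numpy as np

def walsh_codes(n):
    # Build a 2^n x 2^n Hadamard matrix; its rows are Walsh spreading codes (+1/-1).
    h = np.array([[1]])
    for _ in range(n):
        h = np.block([[h, h], [h, -h]])
    return h

def spread(symbols, code):
    # Spread each data symbol over one Walsh code (simple direct-sequence spreading).
    return np.concatenate([s * code for s in symbols])

codes = walsh_codes(4)              # 16 codes of length 16
tx = spread([1, -1, -1, 1], codes[5])

# Orthogonality lets a receiver recover the symbols by correlating with the same code.
rx = tx.reshape(-1, len(codes[5])) @ codes[5] / len(codes[5])
print(rx)  # -> [ 1. -1. -1.  1.]
```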
[ { "docid": "0da9197d2f6839d01560b46cbb1fbc8d", "text": "Estimating the traversability of rough terrain is a critical task for an outdoor mobile robot. While classifying structured environment can be learned from large number of training data, it is an extremely difficult task to learn and estimate traversability of unstructured rough terrain. Moreover, in many cases information from a single sensor may not be sufficient for estimating traversability reliably in the absence of artificial landmarks such as lane markings or curbs. Our approach estimates traversability of the terrain and build a 2D probabilistic grid map online using 3D-LIDAR and camera. The combination of LIDAR and camera is favoured in many robotic application because they provide complementary information. Our approach assumes the data captured by these two sensors are independent and build separate traversability maps, each with information captured from one sensor. Traversability estimation with vision sensor autonomously collects training data and update classifier without human intervention as the vehicle traverse the terrain. Traversability estimation with 3D-LIDAR measures the slopes of the ground to predict the traversability. Two independently built probabilistic maps are fused using Bayes' rule to improve the detection performance. This is in contrast with other methods in which each sensor performs different tasks. We have implemented the algorithm on a UGV(Unmanned Ground Vehicle) and tested our approach on a rough terrain to evaluate the detection performance.", "title": "" }, { "docid": "cb95831a960ae9ec2d1ea4279cfa6ac2", "text": "In vivo fluorescence imaging suffers from suboptimal signal-to-noise ratio and shallow detection depth, which is caused by the strong tissue autofluorescence under constant external excitation and the scattering and absorption of short-wavelength light in tissues. Here we address these limitations by using a novel type of optical nanoprobes, photostimulable LiGa5O8:Cr(3+) near-infrared (NIR) persistent luminescence nanoparticles, which, with very-long-lasting NIR persistent luminescence and unique photo-stimulated persistent luminescence (PSPL) capability, allow optical imaging to be performed in an excitation-free and hence, autofluorescence-free manner. LiGa5O8:Cr(3+) nanoparticles pre-charged by ultraviolet light can be repeatedly (>20 times) stimulated in vivo, even in deep tissues, by short-illumination (~15 seconds) with a white light-emitting-diode flashlight, giving rise to multiple NIR PSPL that expands the tracking window from several hours to more than 10 days. Our studies reveal promising potential of these nanoprobes in cell tracking and tumor targeting, exhibiting exceptional sensitivity and penetration that far exceed those afforded by conventional fluorescence imaging.", "title": "" }, { "docid": "faa8bb95a4b05bed78dbdfaec1cd147c", "text": "This paper describes the SimBow system submitted at SemEval2017-Task3, for the question-question similarity subtask B. The proposed approach is a supervised combination of different unsupervised textual similarities. These textual similarities rely on the introduction of a relation matrix in the classical cosine similarity between bag-of-words, so as to get a softcosine that takes into account relations between words. According to the type of relation matrix embedded in the soft-cosine, semantic or lexical relations can be considered. 
Our system ranked first among the official submissions of subtask B.", "title": "" }, { "docid": "e2c2d56a92aa66453804c552ad0892b9", "text": "By analyzing the relationship of S-parameters between two-port differential and four-port single-ended networks, a method is found for measuring the S-parameters of a differential amplifier on wafer by using a normal two-port vector network analyzer. With this method, there is no need to purchase a four-port vector network analyzer. Furthermore, the method is also suitable for measuring the S-parameters of any multi-port circuit by using a two-port measurement setup.", "title": "" }, { "docid": "75ed78f9a59ec978432f16fd4407df60", "text": "The transition from user requirements to UML diagrams is a difficult task for the designer, especially when he handles large texts expressing these needs. Modeling the class diagram must be performed frequently, even during the development of a simple application. This paper proposes an approach to facilitate class diagram extraction from textual requirements using NLP techniques and domain ontology. Keywords: Class Diagram, Natural Language Processing, GATE, Domain ontology, requirements.", "title": "" }, { "docid": "147b8f02031ba9bc8788600dc48301c9", "text": "This paper gives an overview of different research activities on electronically steerable antennas at Ka-band within the framework of the SANTANA project. In addition, it gives an outlook on future objectives, namely the perspective of testing SANTANA technologies with the projected German research satellite “Heinrich Hertz”.", "title": "" }, { "docid": "af983aa7ac103dd41dfd914af452758f", "text": "The fast-growing usage of instant messaging applications on Android mobile devices has brought about a proportional increase in the number of cyber-attack vectors that could be perpetrated on them. Android mobile phones store a significant amount of information in the various memory partitions when Instant Messaging (IM) applications (WhatsApp, Skype, and Facebook) are executed on them. As a result of the enormous number of crimes committed using instant messaging applications, and the amount of electronic traces of evidence that can be retrieved from the suspect’s device with which an investigation could convict or refute a person in the court of law, mobile phones have become a vulnerable ground for digital evidence mining. This paper aims at using forensic tools to extract and analyse leftover digital evidence artefacts from IM applications on Android phones using Android Studio as the virtual machine. The digital forensic investigation methodology by Bill Nelson was applied during this research. Some of the key results obtained showed how digital forensic evidence such as call logs, contact numbers, sent/retrieved messages, and images can be mined from simulated Android phones when running these applications. These artefacts can be used in the court of law as evidence during cybercrime investigation.", "title": "" }, { "docid": "1ebb333d5a72c649cd7d7986f5bf6975", "text": "\"Of what a strange nature is knowledge! It clings to the mind, when it has once seized on it, like a lichen on the rock.\" Abstract We describe a theoretical system intended to facilitate the use of knowledge in an understanding system. The notion of script is introduced to account for knowledge about mundane situations. A program, SAM, is capable of using scripts to understand. The notion of plans is introduced to account for general knowledge about novel situations. I.
Preface In an attempt to provide theory where there have been mostly unrelated systems, Minsky (1974) recently described the present work as fitting into the notion of \"frames.\" Minsky attempted to relate this work, in what is essentially language processing, to areas of vision research that conform to the same notion. Minsky's frames paper has created quite a stir in AI and some immediate spinoff research along the lines of developing frames manipulators (e.g. Bobrow, 1975; Winograd, 1975). We find that we agree with much of what Minsky said about frames and with his characterization of our own work. The frames idea is so general, however, that it does not lend itself to applications without further specialization. This paper is an attempt to develop further the lines of thought set out in Schank (1975a) and Abelson (1973; 1975a). The ideas presented here can be viewed as a specialization of the frame idea. We shall refer to our central constructs as \"scripts.\" II. The Problem Researchers in natural language understanding have felt for some time that the eventual limit on the solution of our problem will be our ability to characterize world knowledge. Various researchers have approached world knowledge in various ways. Winograd (1972) dealt with the problem by severely restricting the world. This approach had the positive effect of producing a working system and the negative effect of producing one that was only minimally extendable. Charniak (1972) approached the problem from the other end entirely and has made some interesting first steps, but because his work is not grounded in any representational system or any working computational system the restriction of world knowledge need not critically concern him. Our feeling is that an effective characterization of knowledge can result in a real understanding system in the not too distant future. We expect that programs based on the theory we out…", "title": "" }, { "docid": "a8da8a2d902c38c6656ea5db841a4eb1", "text": "The uses of the World Wide Web on the Internet for commerce and information access continue to expand. The e-commerce business has proven to be a promising channel of choice for consumers as it is gradually transforming into a mainstream business activity. However, lack of trust has been identified as a major obstacle to the adoption of online shopping. Empirical study of online trust is constrained by the shortage of high-quality measures of general trust in e-commerce contexts. Based on theoretical and empirical studies in the marketing and information systems literature, nine factors were found to have sound theoretical sense and support from the literature. A survey method was used for data collection in this study. A total of 172 usable questionnaires were collected from respondents. This study presents a new set of instruments for use in studying online trust of an individual. The items in the instrument were analyzed using a factor analysis. The results demonstrated the reliability and validity of the instrument. This study identified seven factors that have a significant impact on online trust. The seven dominant factors are reputation, third-party assurance, customer service, propensity to trust, website quality, system assurance and brand. As consumers consider that doing business with online vendors involves risk and uncertainty, online business organizations need to overcome these barriers.
Further, implication of the finding also provides e-commerce practitioners with guideline for effectively engender online customer trust.", "title": "" }, { "docid": "5289fc231c716e2ce9e051fb0652ce94", "text": "Noninvasive body contouring has become one of the fastest-growing areas of esthetic medicine. Many patients appear to prefer nonsurgical less-invasive procedures owing to the benefits of fewer side effects and shorter recovery times. Increasingly, 635-nm low-level laser therapy (LLLT) has been used in the treatment of a variety of medical conditions and has been shown to improve wound healing, reduce edema, and relieve acute pain. Within the past decade, LLLT has also emerged as a new modality for noninvasive body contouring. Research has shown that LLLT is effective in reducing overall body circumference measurements of specifically treated regions, including the hips, waist, thighs, and upper arms, with recent studies demonstrating the long-term effectiveness of results. The treatment is painless, and there appears to be no adverse events associated with LLLT. The mechanism of action of LLLT in body contouring is believed to stem from photoactivation of cytochrome c oxidase within hypertrophic adipocytes, which, in turn, affects intracellular secondary cascades, resulting in the formation of transitory pores within the adipocytes' membrane. The secondary cascades involved may include, but are not limited to, activation of cytosolic lipase and nitric oxide. Newly formed pores release intracellular lipids, which are further metabolized. Future studies need to fully outline the cellular and systemic effects of LLLT as well as determine optimal treatment protocols.", "title": "" }, { "docid": "bf257fae514c28dc3b4c31ff656a00e9", "text": "The objective of the present study is to evaluate the acute effects of low-level laser therapy (LLLT) on functional capacity, perceived exertion, and blood lactate in hospitalized patients with heart failure (HF). Patients diagnosed with systolic HF (left ventricular ejection fraction <45 %) were randomized and allocated prospectively into two groups: placebo LLLT group (n = 10)—subjects who were submitted to placebo laser and active LLLT group (n = 10)—subjects who were submitted to active laser. The 6-min walk test (6MWT) was performed, and blood lactate was determined at rest (before LLLT application and 6MWT), immediately after the exercise test (time 0) and recovery (3, 6, and 30 min). A multi-diode LLLT cluster probe (DMC, São Carlos, Brazil) was used. Both groups increased 6MWT distance after active or placebo LLLT application compared to baseline values (p = 0.03 and p = 0.01, respectively); however, no difference was observed during intergroup comparison. The active LLLT group showed a significant reduction in the perceived exertion Borg (PEB) scale compared to the placebo LLLT group (p = 0.006). In addition, the group that received active LLLT showed no statistically significant difference for the blood lactate level through the times analyzed. The placebo LLLT group demonstrated a significant increase in blood lactate between the rest and recovery phase (p < 0.05). 
Acute effects of LLLT irradiation on skeletal musculature were not able to improve the functional capacity of hospitalized patients with HF, although it may favorably modulate blood lactate metabolism and reduce perceived muscle fatigue.", "title": "" }, { "docid": "1dd8fdb5f047e58f60c228e076aa8b66", "text": "Recurrent Neural Network Language Models (RNN-LMs) have recently shown exceptional performance across a variety of applications. In this paper, we modify the architecture to perform Language Understanding, and advance the state-of-the-art for the widely used ATIS dataset. The core of our approach is to take words as input as in a standard RNN-LM, and then to predict slot labels rather than words on the output side. We present several variations that differ in the amount of word context that is used on the input side, and in the use of non-lexical features. Remarkably, our simplest model produces state-of-the-art results, and we advance state-of-the-art through the use of bagof-words, word embedding, named-entity, syntactic, and wordclass features. Analysis indicates that the superior performance is attributable to the task-specific word representations learned by the RNN.", "title": "" }, { "docid": "ba1b3fb5f147b5af173e5f643a2794e0", "text": "The objective of this study is to examine how personal factors such as lifestyle, personality, and economic situations affect the consumer behavior of Malaysian university students. A quantitative approach was adopted and a self-administered questionnaire was distributed to collect data from university students. Findings illustrate that ‘personality’ influences the consumer behavior among Malaysian university student. This study also noted that the economic situation had a negative relationship with consumer behavior. Findings of this study improve our understanding of consumer behavior of Malaysian University Students. The findings of this study provide valuable insights in identifying and taking steps to improve on the services, ambience, and needs of the student segment of the Malaysian market.", "title": "" }, { "docid": "71fe8a71c2855499834b2f6a60b2a759", "text": "The pomegranate, Punica granatum L., is an ancient, mystical, unique fruit borne on a small, long-living tree cultivated throughout the Mediterranean region, as far north as the Himalayas, in Southeast Asia, and in California and Arizona in the United States. In addition to its ancient historical uses, pomegranate is used in several systems of medicine for a variety of ailments. The synergistic action of the pomegranate constituents appears to be superior to that of single constituents. In the past decade, numerous studies on the antioxidant, anticarcinogenic, and anti-inflammatory properties of pomegranate constituents have been published, focusing on treatment and prevention of cancer, cardiovascular disease, diabetes, dental conditions, erectile dysfunction, bacterial infections and antibiotic resistance, and ultraviolet radiation-induced skin damage. Other potential applications include infant brain ischemia, male infertility, Alzheimer's disease, arthritis, and obesity.", "title": "" }, { "docid": "5718c733a80805c5dbb4333c2d298980", "text": "{Portions reprinted, with permission from Keim et al. 
#2001 IEEE Abstract Simple presentation graphics are intuitive and easy-to-use, but show only highly aggregated data presenting only a very small number of data values (as in the case of bar charts) and may have a high degree of overlap occluding a significant portion of the data values (as in the case of the x-y plots). In this article, the authors therefore propose a generalization of traditional bar charts and x-y plots, which allows the visualization of large amounts of data. The basic idea is to use the pixels within the bars to present detailed information of the data records. The so-called pixel bar charts retain the intuitiveness of traditional bar charts while allowing very large data sets to be visualized in an effective way. It is shown that, for an effective pixel placement, a complex optimization problem has to be solved. The authors then present an algorithm which efficiently solves the problem. The application to a number of real-world ecommerce data sets shows the wide applicability and usefulness of this new idea, and a comparison to other well-known visualization techniques (parallel coordinates and spiral techniques) shows a number of clear advantages. Information Visualization (2002) 1, 20 – 34. DOI: 10.1057/palgrave/ivs/9500003", "title": "" }, { "docid": "b290b3b9db5e620e8a049ad9cd68346b", "text": "THE USE OF OBSERVATIONAL RESEARCH METHODS in the field of palliative care is vital to building the evidence base, identifying best practices, and understanding disparities in access to and delivery of palliative care services. As discussed in the introduction to this series, research in palliative care encompasses numerous areas in which the gold standard research design, the randomized controlled trial (RCT), is not appropriate, adequate, or even possible.1,2 The difficulties in conducting RCTs in palliative care include patient and family recruitment, gate-keeping by physicians, crossover contamination, high attrition rates, small sample sizes, and limited survival times. Furthermore, a number of important issues including variation in access to palliative care and disparities in the use and provision of palliative care simply cannot be answered without observational research methods. As research in palliative care broadens to encompass study designs other than the RCT, the collective understanding of the use, strengths, and limitations of observational research methods is critical. The goals of this first paper are to introduce the major types of observational study designs, discuss the issues of precision and validity, and provide practical insights into how to critically evaluate this literature in our field.", "title": "" }, { "docid": "a8fe62e387610682f90018ca1a56ba04", "text": "Aarskog-Scott syndrome (AAS), also known as faciogenital dysplasia (FGD, OMIM # 305400), is an X-linked disorder of recessive inheritance, characterized by short stature and facial, skeletal, and urogenital abnormalities. AAS is caused by mutations in the FGD1 gene (Xp11.22), with over 56 different mutations identified to date. We present the clinical and molecular analysis of four unrelated families of Mexican origin with an AAS phenotype, in whom FGD1 sequencing was performed. This analysis identified two stop mutations not previously reported in the literature: p.Gln664* and p.Glu380*. Phenotypically, every male patient met the clinical criteria of the syndrome, whereas discrepancies were found between phenotypes in female patients. 
Our results identify two novel mutations in FGD1, broadening the spectrum of reported mutations, and provide further delineation of the phenotypic variability previously described in AAS.", "title": "" }, { "docid": "f9aa9bdad364b7c4b6a4b67120686d9a", "text": "In this paper, we describe an SDN-based plastic architecture for 5G networks, designed to fulfill functional and performance requirements of new generation services and devices. The 5G logical architecture is presented in detail, and key procedures for dynamic control plane instantiation, device attachment, and service request and mobility management are specified. A key feature of the proposed architecture is flexibility, needed to efficiently support a heterogeneous set of services, including Machine Type Communication, Vehicle to X and Internet of Things traffic. These applications impose challenging targets in terms of end-to-end latency, dependability, reliability and scalability. Additionally, backward compatibility with legacy systems is guaranteed by the proposed solution, and Control Plane and Data Plane are fully decoupled. The three levels of unified signaling unify Access, Non-access and Management strata, and a clean-slate forwarding layer, designed according to the software defined networking principle, replaces tunneling protocols for carrier grade mobility.", "title": "" }, { "docid": "b93022efa40379ca7cc410d8b10ba48e", "text": "The shared nature of the network in today's multi-tenant datacenters implies that network performance for tenants can vary significantly. This applies to both production datacenters and cloud environments. Network performance variability hurts application performance, which makes tenant costs unpredictable and causes provider revenue loss. Motivated by these factors, this paper makes the case for extending the tenant-provider interface to explicitly account for the network. We argue this can be achieved by providing tenants with a virtual network connecting their compute instances. To this effect, the key contribution of this paper is the design of virtual network abstractions that capture the trade-off between the performance guarantees offered to tenants, their costs and the provider revenue. To illustrate the feasibility of virtual networks, we develop Oktopus, a system that implements the proposed abstractions. Using realistic, large-scale simulations and an Oktopus deployment on a 25-node two-tier testbed, we demonstrate that the use of virtual networks yields significantly better and more predictable tenant performance. Further, using a simple pricing model, we find that our abstractions can reduce tenant costs by up to 74% while maintaining provider revenue neutrality.", "title": "" }, { "docid": "8f54f2c6e9736a63ea4a99f89090e0a2", "text": "This article demonstrates how documents prepared in hypertext or word processor format can be saved in portable document format (PDF). These files are self-contained documents that have the same appearance on screen and in print, regardless of what kind of computer or printer is used, and regardless of what software package was originally used for their creation.
PDF files are compressed documents, invariably smaller than the original files, hence allowing rapid dissemination and download.", "title": "" } ]
scidocsrr
4e25e3351ec840be9252a4cfb9808083
The Intentional Unintentional Agent: Learning to Solve Many Continuous Control Tasks Simultaneously
[ { "docid": "9bcba1b3d4e63c026d1bd16bfd2c8d7b", "text": "Developmental robotics is an emerging field located at the intersection of robotics, cognitive science and developmental sciences. This paper elucidates the main reasons and key motivations behind the convergence of fields with seemingly disparate interests, and shows why developmental robotics might prove to be beneficial for all fields involved. The methodology advocated is synthetic and two-pronged: on the one hand, it employs robots to instantiate models originating from developmental sciences; on the other hand, it aims to develop better robotic systems by exploiting insights gained from studies on ontogenetic development. This paper gives a survey of the relevant research issues and points to some future research directions. 1. Introduction Developmental robotics is an emergent area of research at the intersection of robotics and developmental sciences—in particular developmental psychology and developmental neuroscience. It constitutes an interdisciplinary and two-pronged approach to robotics, which on one side employs robots to instantiate and investigate models originating from developmental sciences, and on the other side seeks to design better robotic systems by applying insights gained from studies on ontogenetic development.", "title": "" }, { "docid": "37e82a54df827ddcfdb71fef7c12a47b", "text": "We tackle a task where an agent learns to navigate in a 2D maze-like environment called XWORLD. In each session, the agent perceives a sequence of raw-pixel frames, a natural language command issued by a teacher, and a set of rewards. The agent learns the teacher’s language from scratch in a grounded and compositional manner, such that after training it is able to correctly execute zero-shot commands: 1) the combination of words in the command never appeared before, and/or 2) the command contains new object concepts that are learned from another task but never learned from navigation. Our deep framework for the agent is trained end to end: it learns simultaneously the visual representations of the environment, the syntax and semantics of the language, and the action module that outputs actions. The zero-shot learning capability of our framework results from its compositionality and modularity with parameter tying. We visualize the intermediate outputs of the framework, demonstrating that the agent truly understands how to solve the problem. We believe that our results provide some preliminary insights on how to train an agent with similar abilities in a 3D environment.", "title": "" }, { "docid": "21abc097d58698c5eae1cddab9bf884e", "text": "Advances in deep reinforcement learning have allowed autonomous agents to perform well on Atari games, often outperforming humans, using only raw pixels to make their decisions. However, most of these games take place in 2D environments that are fully observable to the agent. In this paper, we present the first architecture to tackle 3D environments in first-person shooter games, that involve partially observable states. Typically, deep reinforcement learning methods only utilize visual input for training. We present a method to augment these models to exploit game feature information such as the presence of enemies or items, during the training phase. Our model is trained to simultaneously learn these features along with minimizing a Q-learning objective, which is shown to dramatically improve the training speed and performance of our agent. 
Our architecture is also modularized to allow different models to be independently trained for different phases of the game. We show that the proposed architecture substantially outperforms built-in AI agents of the game as well as average humans in deathmatch scenarios.", "title": "" }, { "docid": "955ae6e1dffbe580217b812f943b4339", "text": "Successful applications of reinforcement learning in real-world problems often require dealing with partially observable states. It is in general very challenging to construct and infer hidden states as they often depend on the agent’s entire interaction history and may require substantial domain knowledge. In this work, we investigate a deep-learning approach to learning the representation of states in partially observable tasks, with minimal prior knowledge of the domain. In particular, we study reinforcement learning with deep neural networks, including RNN and LSTM, which are equipped with the desired property of being able to capture long-term dependency on history, and thus providing an effective way of learning the representation of hidden states. We further develop a hybrid approach that combines the strength of both supervised learning (for representing hidden states) and reinforcement learning (for optimizing control) with joint training. Extensive experiments based on a KDD Cup 1998 direct mailing campaign problem demonstrate the effectiveness and advantages of the proposed approach, which performs the best across the board.", "title": "" }, { "docid": "9ec7b122117acf691f3bee6105deeb81", "text": "We describe a new physics engine tailored to model-based control. Multi-joint dynamics are represented in generalized coordinates and computed via recursive algorithms. Contact responses are computed via efficient new algorithms we have developed, based on the modern velocity-stepping approach which avoids the difficulties with spring-dampers. Models are specified using either a high-level C++ API or an intuitive XML file format. A built-in compiler transforms the user model into an optimized data structure used for runtime computation. The engine can compute both forward and inverse dynamics. The latter are well-defined even in the presence of contacts and equality constraints. The model can include tendon wrapping as well as actuator activation states (e.g. pneumatic cylinders or muscles). To facilitate optimal control applications and in particular sampling and finite differencing, the dynamics can be evaluated for different states and controls in parallel. Around 400,000 dynamics evaluations per second are possible on a 12-core machine, for a 3D humanoid with 18 dofs and 6 active contacts. We have already used the engine in a number of control applications. It will soon be made publicly available.", "title": "" }, { "docid": "033ee0637607fec8ae1b5834efe355dc", "text": "We propose a new task-specification language for Markov decision processes that is designed to be an improvement over reward functions by being environment independent. The language is a variant of Linear Temporal Logic (LTL) that is extended to probabilistic specifications in a way that permits approximations to be learned in finite time.
We provide several small environments that demonstrate the advantages of our geometric LTL (GLTL) language and illustrate how it can be used to specify standard reinforcement-learning tasks straightforwardly.", "title": "" }, { "docid": "7084e2455ea696eec4a0f93b3140d71b", "text": "Reinforcement learning is a simple, and yet, comprehensive theory of learning that simultaneously models the adaptive behavior of artificial agents, such as robots and autonomous software programs, as well as attempts to explain the emergent behavior of biological systems. It also gives rise to computational ideas that provide a powerful tool to solve problems involving sequential prediction and decision making. Temporal difference learning is the most widely used method to solve reinforcement learning problems, with a rich history dating back more than three decades. For these and many other reasons, developing a complete theory of reinforcement learning, one that is both rigorous and useful, has been an ongoing research investigation for several decades. (This article is currently not under review for the journal Foundations and Trends in ML, but will be submitted for formal peer review at some point in the future, once the draft reaches a stable “equilibrium” state.) In this paper, we set forth a new vision of reinforcement learning developed by us over the past few years, one that yields mathematically rigorous solutions to longstanding important questions that have remained unresolved: (i) how to design reliable, convergent, and robust reinforcement learning algorithms; (ii) how to guarantee that reinforcement learning satisfies pre-specified “safety” guarantees, and remains in a stable region of the parameter space; (iii) how to design “off-policy” temporal difference learning algorithms in a reliable and stable manner; and finally (iv) how to integrate the study of reinforcement learning into the rich theory of stochastic optimization. In this paper, we provide detailed answers to all these questions using the powerful framework of proximal operators. The most important idea that emerges is the use of primal-dual spaces connected through the use of a Legendre transform. This allows temporal difference updates to occur in dual spaces, allowing a variety of important technical advantages. The Legendre transform, as we show, elegantly generalizes past algorithms for solving reinforcement learning problems, such as natural gradient methods, which we show relate closely to the previously unconnected framework of mirror descent methods. Equally importantly, proximal operator theory enables the systematic development of operator splitting methods that show how to safely and reliably decompose complex products of gradients that occur in recent variants of gradient-based temporal difference learning. This key technical innovation makes it possible to finally design “true” stochastic gradient methods for reinforcement learning. Finally, Legendre transforms enable a variety of other benefits, including modeling sparsity and domain geometry. Our work builds extensively on recent work on the convergence of saddle-point algorithms, and on the theory of monotone operators in Hilbert spaces, both in optimization and for variational inequalities. The latter framework, the subject of another ongoing investigation by our group, holds the promise of an even more elegant framework for reinforcement learning.
Its explication is currently the topic of a further monograph that will appear in due course. Dedicated to Andrew Barto and Richard Sutton for inspiring a generation of researchers to the study of reinforcement learning. Algorithm 1, TD (1984): (1) $\delta_t = r_t + \gamma \phi_t'^{\top}\theta_t - \phi_t^{\top}\theta_t$; (2) $\theta_{t+1} = \theta_t + \beta_t \delta_t \phi_t$. Algorithm 2, GTD2-MP (2014): (1) $w_{t+1/2} = w_t + \beta_t(\delta_t - \phi_t^{\top} w_t)\phi_t$, $\theta_{t+1/2} = \mathrm{prox}_{\alpha_t h}\big(\theta_t + \alpha_t(\phi_t - \gamma\phi_t')(\phi_t^{\top} w_t)\big)$; (2) $\delta_{t+1/2} = r_t + \gamma \phi_t'^{\top}\theta_{t+1/2} - \phi_t^{\top}\theta_{t+1/2}$; (3) $w_{t+1} = w_t + \beta_t(\delta_{t+1/2} - \phi_t^{\top} w_{t+1/2})\phi_t$, $\theta_{t+1} = \mathrm{prox}_{\alpha_t h}\big(\theta_t + \alpha_t(\phi_t - \gamma\phi_t')(\phi_t^{\top} w_{t+1/2})\big)$.", "title": "" } ]
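To make the TD and GTD2-MP update rules listed in the passage above concrete, here is a minimal NumPy sketch of both iterations with linear function approximation. The random features, the step sizes, and the choice of an l1 penalty for the proximal step (soft-thresholding) are illustrative assumptions, not details taken from the cited monograph.

```python
import numpy as np

def soft_threshold(x, lam):
    # Proximal operator of lam * ||x||_1, an illustrative choice of the regularizer h.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def td_step(theta, phi, phi_next, r, gamma, beta):
    # Plain TD(0) with linear function approximation.
    delta = r + gamma * phi_next @ theta - phi @ theta
    return theta + beta * delta * phi

def gtd2_mp_step(theta, w, phi, phi_next, r, gamma, alpha, beta, lam):
    # One mirror-prox style GTD2 update: a half-step, then a full step at the half-step iterates.
    delta = r + gamma * phi_next @ theta - phi @ theta
    w_half = w + beta * (delta - phi @ w) * phi
    theta_half = soft_threshold(theta + alpha * (phi - gamma * phi_next) * (phi @ w), alpha * lam)
    delta_half = r + gamma * phi_next @ theta_half - phi @ theta_half
    w_new = w + beta * (delta_half - phi @ w_half) * phi
    theta_new = soft_threshold(theta + alpha * (phi - gamma * phi_next) * (phi @ w_half), alpha * lam)
    return theta_new, w_new

# Tiny usage example on random features (purely illustrative).
rng = np.random.default_rng(0)
d = 8
theta, w = np.zeros(d), np.zeros(d)
for _ in range(100):
    phi, phi_next = rng.normal(size=d), rng.normal(size=d)
    r = rng.normal()
    theta, w = gtd2_mp_step(theta, w, phi, phi_next, r,
                            gamma=0.9, alpha=0.05, beta=0.05, lam=0.01)
```

The extragradient (mirror-prox) structure is visible in the half-step followed by a full step evaluated at the half-step iterates, which is what distinguishes GTD2-MP from the plain TD update.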
[ { "docid": "d422afa99137d5e09bd47edeb770e872", "text": "OBJECTIVE\nFood Insecurity (FI) occurs in 21% of families with children and adolescents in the United States, but the potential developmental and behavioral implications of this prevalent social determinant of health have not been comprehensively elucidated. This systematic review aims to examine the association between FI and childhood developmental and behavioral outcomes in western industrialized countries.\n\n\nMETHOD\nThis review provides a critical summary of 23 peer reviewed articles from developed countries on the associations between FI and adverse childhood developmental behavioral outcomes including early cognitive development, academic performance, inattention, externalizing behaviors, and depression in 4 groups-infants and toddlers, preschoolers, school age, and adolescents. Various approaches to measuring food insecurity are delineated. Potential confounding and mediating variables of this association are compared across studies. Alternate explanatory mechanisms of observed effects and need for further research are discussed.\n\n\nRESULTS\nThis review demonstrates that household FI, even at marginal levels, is associated with children's behavioral, academic, and emotional problems from infancy to adolescence across western industrialized countries - even after controlling for confounders.\n\n\nCONCLUSIONS\nWhile the American Academy of Pediatrics already recommends routine screening for food insecurity during health maintenance visits, the evidence summarized here should encourage developmental behavioral health providers to screen for food insecurity in their practices and intervene when possible. Conversely, children whose families are identified as food insecure in primary care settings warrant enhanced developmental behavioral assessment and possible intervention.", "title": "" }, { "docid": "31e3fddcaeb7e4984ba140cb30ff49bf", "text": "We show that a maximum-weight triangle in an undirected graph with n vertices and real weights assigned to vertices can be found in time O(nω + n2+o(1)), where ω is the exponent of the fastest matrix multiplication algorithm. By the currently best bound on ω, the running time of our algorithm is O(n2.376). Our algorithm substantially improves the previous time-bounds for this problem, and its asymptotic time complexity matches that of the fastest known algorithm for finding any triangle (not necessarily a maximum-weight one) in a graph. We can extend our algorithm to improve the upper bounds on finding a maximum-weight triangle in a sparse graph and on finding a maximum-weight subgraph isomorphic to a fixed graph. We can find a maximum-weight triangle in a vertex-weighted graph with m edges in asymptotic time required by the fastest algorithm for finding any triangle in a graph with m edges, i.e., in time O(m1.41). Our algorithms for a maximum-weight fixed subgraph (in particular any clique of constant size) are asymptotically as fast as the fastest known algorithms for a fixed subgraph.", "title": "" }, { "docid": "2e964b14ff4e45e3f1c339d7247a50d0", "text": "We report a method to additively build threedimensional (3-D) microelectromechanical systems (MEMS) and electrical circuitry by ink-jet printing nanoparticle metal colloids. Fabricating metallic structures from nanoparticles avoids the extreme processing conditions required for standard lithographic fabrication and molten-metal-droplet deposition. 
Nanoparticles typically measure 1 to 100 nm in diameter and can be sintered at plastic-compatible temperatures as low as 300 C to form material nearly indistinguishable from the bulk material. Multiple ink-jet print heads mounted to a computer-controlled 3-axis gantry deposit the 10% by weight metal colloid ink layer-by-layer onto a heated substrate to make two-dimensional (2-D) and 3-D structures. We report a high-Q resonant inductive coil, linear and rotary electrostatic-drive motors, and in-plane and vertical electrothermal actuators. The devices, printed in minutes with a 100 m feature size, were made out of silver and gold material with high conductivity,and feature as many as 400 layers, insulators, 10 : 1 vertical aspect ratios, and etch-released mechanical structure. These results suggest a route to a desktop or large-area MEMS fabrication system characterized by many layers, low cost, and data-driven fabrication for rapid turn-around time, and represent the first use of ink-jet printing to build active MEMS. [657]", "title": "" }, { "docid": "31da7acfb9d98421bbf7e70a508ba5df", "text": "Habronema muscae (Spirurida: Habronematidae) occurs in the stomach of equids, is transmitted by adult muscid dipterans and causes gastric habronemiasis. Scanning electron microscopy (SEM) was used to study the morphological aspects of adult worms of this nematode in detail. The worms possess two trilobed lateral lips. The buccal cavity was cylindrical, with thick walls and without teeth. Around the mouth, four submedian cephalic papillae and two amphids were seen. A pair of lateral cervical papillae was present. There was a single lateral ala and in the female the vulva was situated in the middle of the body. In the male, there were wide caudal alae, and the spicules were unequal and dissimilar. At the posterior end of the male, four pairs of stalked precloacal papillae, unpaired post-cloacal papillae and a cluster of small papillae were present. In one case, the anterior end showed abnormal features.", "title": "" }, { "docid": "f9fd7fc57dfdfbfa6f21dc074c9e9daf", "text": "Recently, Lin and Tsai proposed an image secret sharing scheme with steganography and authentication to prevent participants from the incidental or intentional provision of a false stego-image (an image containing the hidden secret image). However, dishonest participants can easily manipulate the stego-image for successful authentication but cannot recover the secret image, i.e., compromise the steganography. In this paper, we present a scheme to improve authentication ability that prevents dishonest participants from cheating. The proposed scheme also defines the arrangement of embedded bits to improve the quality of stego-image. Furthermore, by means of the Galois Field GF(2), we improve the scheme to a lossless version without additional pixels. 2006 Elsevier Inc. 
All rights reserved.", "title": "" }, { "docid": "c6ef33607a015c4187ac77b18d903a8a", "text": "OBJECTIVE\nA systematic review was conducted to identify effective intervention strategies for communication in individuals with Down syndrome.\n\n\nMETHODS\nWe updated and extended previous reviews by examining: (1) participant characteristics; (2) study characteristics; (3) characteristics of effective interventions (e.g., strategies and intensity); (4) whether interventions are tailored to the Down syndrome behavior phenotype; and (5) the effectiveness (i.e., percentage nonoverlapping data and Cohen's d) of interventions.\n\n\nRESULTS\nThirty-seven studies met inclusion criteria. The majority of studies used behavior analytic strategies and produced moderate gains in communication targets. Few interventions were tailored to the needs of the Down syndrome behavior phenotype.\n\n\nCONCLUSION\nThe results suggest that behavior analytic strategies are a promising approach, and future research should focus on replicating the effects of these interventions with greater methodological rigor.", "title": "" }, { "docid": "d64c30da6f8d94ca4effd83075b15901", "text": "The task of natural question generation is to generate a corresponding question given the input passage (fact) and answer. It is useful for enlarging the training set of QA systems. Previous work has adopted sequence-to-sequence models that take a passage with an additional bit to indicate answer position as input. However, they do not explicitly model the information between answer and other context within the passage. We propose a model that matches the answer with the passage before generating the question. Experiments show that our model outperforms the existing state of the art using rich features.", "title": "" }, { "docid": "71b6f02598ac24efbc4625ca060f1bae", "text": "Estimates of the worldwide incidence and mortality from 27 cancers in 2008 have been prepared for 182 countries as part of the GLOBOCAN series published by the International Agency for Research on Cancer. In this article, we present the results for 20 world regions, summarizing the global patterns for the eight most common cancers. Overall, an estimated 12.7 million new cancer cases and 7.6 million cancer deaths occur in 2008, with 56% of new cancer cases and 63% of the cancer deaths occurring in the less developed regions of the world. The most commonly diagnosed cancers worldwide are lung (1.61 million, 12.7% of the total), breast (1.38 million, 10.9%) and colorectal cancers (1.23 million, 9.7%). The most common causes of cancer death are lung cancer (1.38 million, 18.2% of the total), stomach cancer (738,000 deaths, 9.7%) and liver cancer (696,000 deaths, 9.2%). Cancer is neither rare anywhere in the world, nor mainly confined to high-resource countries. 
Striking differences in the patterns of cancer from region to region are observed.", "title": "" }, { "docid": "c04db0f2e638d0f5aab528776895fdc3", "text": "OBJECTIVE\nThis study is a detailed examination of the association between parental alcohol abuse (mother only, father only, or both parents) and multiple forms of childhood abuse, neglect, and other household dysfunction, known as adverse childhood experiences (ACEs).\n\n\nMETHOD\nA questionnaire about ACEs including child abuse, neglect, household dysfunction, and exposure to parental alcohol abuse was completed by 8629 adult HMO members to retrospectively assess the relationship of growing up with parental alcohol abuse to 10 ACEs and multiple ACEs (ACE score).\n\n\nRESULTS\nCompared to persons who grew up with no parental alcohol abuse, the adjusted odds ratio for each category of ACE was approximately 2 to 13 times higher if either the mother, father, or both parents abused alcohol (p < 0.05). For example, the likelihood of having a battered mother was increased 13-fold for men who grew up with both parents who abused alcohol (OR, 12.7; 95% CI: 8.4-19.1). For almost every ACE, those who grew up with both an alcohol-abusing mother and father had the highest likelihood of ACEs. The mean number of ACEs for persons with no parental alcohol abuse, father only, mother only, or both parents was 1.4, 2.6, 3.2, and 3.8, respectively (p < .001).\n\n\nCONCLUSION\nAlthough the retrospective reporting of these experiences cannot establish a causal association with certainty, exposure to parental alcohol abuse is highly associated with experiencing adverse childhood experiences. Improved coordination of adult and pediatric health care along with related social and substance abuse services may lead to earlier recognition, treatment, and prevention of both adult alcohol abuse and adverse childhood experiences, reducing the negative sequelae of ACEs in adolescents and adults.", "title": "" }, { "docid": "7df3fe3ffffaac2fb6137fdc440eb9f4", "text": "The amount of information in medical publications continues to increase at a tremendous rate. Systematic reviews help to process this growing body of information. They are fundamental tools for evidence-based medicine. In this paper, we show that automatic text classification can be useful in building systematic reviews for medical topics to speed up the reviewing process. We propose a per-question classification method that uses an ensemble of classifiers that exploit the particular protocol of a systematic review. We also show that when integrating the classifier in the human workflow of building a review the per-question method is superior to the global method. We test several evaluation measures on a real dataset.", "title": "" }, { "docid": "ec3b78f594042c2ed9be2e7b987f8d3d", "text": "In mammals, species with more frontally oriented orbits have broader binocular visual fields and relatively larger visual regions in the brain. Here, we test whether a similar pattern of correlated evolution is present in birds. Using both conventional statistics and modern comparative methods, we tested whether the relative size of the Wulst and optic tectum (TeO) were significantly correlated with orbit orientation, binocular visual field width and eye size in birds using a large, multi-species data set. In addition, we tested whether relative Wulst and TeO volumes were correlated with axial length of the eye. 
The relative size of the Wulst was significantly correlated with orbit orientation and the width of the binocular field such that species with more frontal orbits and broader binocular fields have relatively large Wulst volumes. Relative TeO volume, however, was not significant correlated with either variable. In addition, both relative Wulst and TeO volume were weakly correlated with relative axial length of the eye, but these were not corroborated by independent contrasts. Overall, our results indicate that relative Wulst volume reflects orbit orientation and possibly binocular visual field, but not eye size.", "title": "" }, { "docid": "cf131167592f02790a1b4e38ed3b5375", "text": "Monocular 3D facial shape reconstruction from a single 2D facial image has been an active research area due to its wide applications. Inspired by the success of deep neural networks (DNN), we propose a DNN-based approach for End-to-End 3D FAce Reconstruction (UH-E2FAR) from a single 2D image. Different from recent works that reconstruct and refine the 3D face in an iterative manner using both an RGB image and an initial 3D facial shape rendering, our DNN model is end-to-end, and thus the complicated 3D rendering process can be avoided. Moreover, we integrate in the DNN architecture two components, namely a multi-task loss function and a fusion convolutional neural network (CNN) to improve facial expression reconstruction. With the multi-task loss function, 3D face reconstruction is divided into neutral 3D facial shape reconstruction and expressive 3D facial shape reconstruction. The neutral 3D facial shape is class-specific. Therefore, higher layer features are useful. In comparison, the expressive 3D facial shape favors lower or intermediate layer features. With the fusion-CNN, features from different intermediate layers are fused and transformed for predicting the 3D expressive facial shape. Through extensive experiments, we demonstrate the superiority of our end-to-end framework in improving the accuracy of 3D face reconstruction.", "title": "" }, { "docid": "5d63b20254e8732807a0c029cd86014f", "text": "Various perceptual domains have underlying compositional semantics that are rarely captured in current models. We suspect this is because directly learning the compositional structure has evaded these models. Yet, the compositional structure of a given domain can be grounded in a separate domain thereby simplifying its learning. To that end, we propose a new approach to modeling bimodal percepts that explicitly relates distinct projections across each modality and then jointly learns a bimodal sparse representation. The resulting model enables compositionality across these distinct projections and hence can generalize to unobserved percepts spanned by this compositional basis. For example, our model can be trained on red triangles and blue squares; yet, implicitly will also have learned red squares and blue triangles. The structure of the projections and hence the compositional basis is learned automatically for a given language model. To test our model, we have acquired a new bimodal dataset comprising images and spoken utterances of colored shapes in a tabletop setup. Our experiments demonstrate the benefits of explicitly leveraging compositionality in both quantitative and human evaluation studies.", "title": "" }, { "docid": "dec78cff9fa87a3b51fc32681ba39a08", "text": "Alkaline saponification is often used to remove interfering chlorophylls and lipids during carotenoids analysis. 
However, saponification also hydrolyses esterified carotenoids and is known to induce artifacts. To avoid carotenoid artifact formation during saponification, Larsen and Christensen (2005) developed a gentler and simpler analytical clean-up procedure involving the use of a strong basic resin (Ambersep 900 OH). They hypothesised a saponification mechanism based on their Liquid Chromatography-Photodiode Array (LC-PDA) data. In the present study, we show with LC-PDA-accurate mass-Mass Spectrometry that the main chlorophyll removal mechanism is not based on saponification, apolar adsorption or anion exchange, but most probably an adsorption mechanism caused by H-bonds and dipole-dipole interactions. We showed experimentally that esterified carotenoids and glycerolipids were not removed, indicating a much more selective mechanism than initially hypothesised. This opens new research opportunities towards a much wider scope of applications (e.g. the refinement of oils rich in phytochemical content).", "title": "" }, { "docid": "1297f85b22be207611dc7d944f6a378a", "text": "Several factors make empirical research in software engineering particularly challenging as it requires studying not only technology but its stakeholders’ activities while drawing concepts and theories from social science. Researchers, in general, agree that selecting a research design in empirical software engineering research is challenging, because the implications of using individual research methods are not well recorded. The main objective of this article is to make researchers aware and support them in their research design, by providing a foundation of knowledge about empirical software engineering research decisions, in order to ensure that researchers make well-founded and informed decisions about their research designs. This article provides a decision-making structure containing a number of decision points, each one of them representing a specific aspect on empirical software engineering research. The article provides an introduction to each decision point and its constituents, as well as to the relationships between the different parts in the decision-making structure. The intention is the structure should act as a starting point for the research design before going into the details of the research design chosen. The article provides an in-depth discussion of decision points in relation to the research design when conducting empirical research.", "title": "" }, { "docid": "e89acdeb493d156390851a2a57231baf", "text": "Several approaches have recently been proposed for learning decentralized deep multiagent policies that coordinate via a differentiable communication channel. While these policies are effective for many tasks, interpretation of their induced communication strategies has remained a challenge. Here we propose to interpret agents’ messages by translating them. Unlike in typical machine translation problems, we have no parallel data to learn from. Instead we develop a translation model based on the insight that agent messages and natural language strings mean the same thing if they induce the same belief about the world in a listener. 
We present theoretical guarantees and empirical evidence that our approach preserves both the semantics and pragmatics of messages by ensuring that players communicating through a translation layer do not suffer a substantial loss in reward relative to players with a common language.1", "title": "" }, { "docid": "7f6f39e46010238dca3da94f78a21add", "text": "Labeling text data is quite time-consuming but essential for automatic text classification. Especially, manually creating multiple labels for each document may become impractical when a very large amount of data is needed for training multi-label text classifiers. To minimize the human-labeling efforts, we propose a novel multi-label active learning approach which can reduce the required labeled data without sacrificing the classification accuracy. Traditional active learning algorithms can only handle single-label problems, that is, each data is restricted to have one label. Our approach takes into account the multi-label information, and select the unlabeled data which can lead to the largest reduction of the expected model loss. Specifically, the model loss is approximated by the size of version space, and the reduction rate of the size of version space is optimized with Support Vector Machines (SVM). An effective label prediction method is designed to predict possible labels for each unlabeled data point, and the expected loss for multi-label data is approximated by summing up losses on all labels according to the most confident result of label prediction. Experiments on several real-world data sets (all are publicly available) demonstrate that our approach can obtain promising classification result with much fewer labeled data than state-of-the-art methods.", "title": "" }, { "docid": "e2e99eca77da211cac64ab69931ed1f4", "text": "Cross-site scripting (XSS) and SQL injection errors are two prominent examples of taint-based vulnerabilities that have been responsible for a large number of security breaches in recent years. This paper presents QED, a goal-directed model-checking system that automatically generates attacks exploiting taint-based vulnerabilities in large Java web applications. This is the first time where model checking has been used successfully on real-life Java programs to create attack sequences that consist of multiple HTTP requests. QED accepts any Java web application that is written to the standard servlet specification. The analyst specifies the vulnerability of interest in a specification that looks like a Java code fragment, along with a range of values for form parameters. QED then generates a goal-directed analysis from the specification to perform session-aware tests, optimizes to eliminate inputs that are not of interest, and feeds the remainder to a model checker. The checker will systematically explore the remaining state space and report example attacks if the vulnerability specification is matched. QED provides better results than traditional analyses because it does not generate any false positive warnings. It proves the existence of errors by providing an example attack and a program trace showing how the code is compromised. Past experience suggests this is important because it makes it easy for the application maintainer to recognize the errors and to make the necessary fixes. In addition, for a class of applications, QED can guarantee that it has found all the potential bugs in the program. We have run QED over 3 Java web applications totaling 130,000 lines of code. 
We found 10 SQL injections and 13 cross-site scripting errors.", "title": "" }, { "docid": "1c11c14bcc1e83a3fba3ef5e4c52d69b", "text": "Ontologies have become the de-facto modeling tool of choice, employed in many applications and prominently in the semantic web. Nevertheless, ontology construction remains a daunting task. Ontological bootstrapping, which aims at automatically generating concepts and their relations in a given domain, is a promising technique for ontology construction. Bootstrapping an ontology based on a set of predefined textual sources, such as web services, must address the problem of multiple, largely unrelated concepts. In this paper, we propose an ontology bootstrapping process for web services. We exploit the advantage that web services usually consist of both WSDL and free text descriptors. The WSDL descriptor is evaluated using two methods, namely Term Frequency/Inverse Document Frequency (TF/IDF) and web context generation. Our proposed ontology bootstrapping process integrates the results of both methods and applies a third method to validate the concepts using the service free text descriptor, thereby offering a more accurate definition of ontologies. We extensively validated our bootstrapping method using a large repository of real-world web services and verified the results against existing ontologies. The experimental results indicate high precision. Furthermore, the recall versus precision comparison of the results when each method is separately implemented presents the advantage of our integrated bootstrapping approach.", "title": "" } ]
scidocsrr
641cb2cdc570ee6410bc86e68ecb1800
PGX.D: a fast distributed graph processing engine
[ { "docid": "e92ab865f33c7548c21ba99785912d03", "text": "Given a query graph q and a data graph g, the subgraph isomorphism search finds all occurrences of q in g and is considered one of the most fundamental query types for many real applications. While this problem belongs to NP-hard, many algorithms have been proposed to solve it in a reasonable time for real datasets. However, a recent study has shown, through an extensive benchmark with various real datasets, that all existing algorithms have serious problems in their matching order selection. Furthermore, all algorithms blindly permutate all possible mappings for query vertices, often leading to useless computations. In this paper, we present an efficient and robust subgraph search solution, called TurboISO, which is turbo-charged with two novel concepts, candidate region exploration and the combine and permute strategy (in short, Comb/Perm). The candidate region exploration identifies on-the-fly candidate subgraphs (i.e, candidate regions), which contain embeddings, and computes a robust matching order for each candidate region explored. The Comb/Perm strategy exploits the novel concept of the neighborhood equivalence class (NEC). Each query vertex in the same NEC has identically matching data vertices. During subgraph isomorphism search, Comb/Perm generates only combinations for each NEC instead of permutating all possible enumerations. Thus, if a chosen combination is determined to not contribute to a complete solution, all possible permutations for that combination will be safely pruned. Extensive experiments with many real datasets show that TurboISO consistently and significantly outperforms all competitors by up to several orders of magnitude.", "title": "" }, { "docid": "88862d86e43d491ec4368410a61c13fb", "text": "With the proliferation of large, irregular, and sparse relational datasets, new storage and analysis platforms have arisen to fill gaps in performance and capability left by conventional approaches built on traditional database technologies and query languages. Many of these platforms apply graph structures and analysis techniques to enable users to ingest, update, query, and compute on the topological structure of the network represented as sets of edges relating sets of vertices. To store and process Facebook-scale datasets, software and algorithms must be able to support data sources with billions of edges, update rates of millions of updates per second, and complex analysis kernels. These platforms must provide intuitive interfaces that enable graph experts and novice programmers to write implementations of common graph algorithms. In this paper, we conduct a qualitative study and a performance comparison of 12 open source graph databases using four fundamental graph algorithms on networks containing up to 256 million edges.", "title": "" } ]
[ { "docid": "5f30867cb3071efa8fb0d34447b8a8f6", "text": "Money laundering is a global problem that affects all countries to various degrees. Although, many countries take benefits from money laundering, by accepting the money from laundering but keeping the crime abroad, at the long run, “money laundering attracts crime”. Criminals come to know a country, create networks and eventually also locate their criminal activities there. Most financial institutions have been implementing antimoney laundering solutions (AML) to fight investment fraud. The key pillar of a strong Anti-Money Laundering system for any financial institution depends mainly on a well-designed and effective monitoring system. The main purpose of the Anti-Money Laundering transactions monitoring system is to identify potential suspicious behaviors embedded in legitimate transactions. This paper presents a monitor framework that uses various techniques to enhance the monitoring capabilities. This framework is depending on rule base monitoring, behavior detection monitoring, cluster monitoring and link analysis based monitoring. The monitor detection processes are based on a money laundering deterministic finite automaton that has been obtained from their corresponding regular expressions. Index Terms – Anti Money Laundering system, Money laundering monitoring and detecting, Cycle detection monitoring, Suspected Link monitoring.", "title": "" }, { "docid": "9a7e491e4d4490f630b55a94703a6f00", "text": "Learning generic and robust feature representations with data from multiple domains for the same problem is of great value, especially for the problems that have multiple datasets but none of them are large enough to provide abundant data variations. In this work, we present a pipeline for learning deep feature representations from multiple domains with Convolutional Neural Networks (CNNs). When training a CNN with data from all the domains, some neurons learn representations shared across several domains, while some others are effective only for a specific one. Based on this important observation, we propose a Domain Guided Dropout algorithm to improve the feature learning procedure. Experiments show the effectiveness of our pipeline and the proposed algorithm. Our methods on the person re-identification problem outperform stateof-the-art methods on multiple datasets by large margins.", "title": "" }, { "docid": "2e8251644f82f3a965cf6360416eaaaa", "text": "The past decade has witnessed a rapid proliferation of video cameras in all walks of life and has resulted in a tremendous explosion of video content. Several applications such as content-based video annotation and retrieval, highlight extraction and video summarization require recognition of the activities occurring in the video. The analysis of human activities in videos is an area with increasingly important consequences from security and surveillance to entertainment and personal archiving. Several challenges at various levels of processing-robustness against errors in low-level processing, view and rate-invariant representations at midlevel processing and semantic representation of human activities at higher level processing-make this problem hard to solve. In this review paper, we present a comprehensive survey of efforts in the past couple of decades to address the problems of representation, recognition, and learning of human activities from video and related applications. 
We discuss the problem at two major levels of complexity: 1) \"actions\" and 2) \"activities.\" \"Actions\" are characterized by simple motion patterns typically executed by a single human. \"Activities\" are more complex and involve coordinated actions among a small number of humans. We will discuss several approaches and classify them according to their ability to handle varying degrees of complexity as interpreted above. We begin with a discussion of approaches to model the simplest of action classes known as atomic or primitive actions that do not require sophisticated dynamical modeling. Then, methods to model actions with more complex dynamics are discussed. The discussion then leads naturally to methods for higher level representation of complex activities.", "title": "" }, { "docid": "c3aaa53892e636f34d6923831a3b66bc", "text": "OBJECTIVES\nTo evaluate whether 7-mm-long implants could be an alternative to longer implants placed in vertically augmented posterior mandibles.\n\n\nMATERIALS AND METHODS\nSixty patients with posterior mandibular edentulism with 7-8 mm bone height above the mandibular canal were randomized to either vertical augmentation with anorganic bovine bone blocks and delayed 5-month placement of ≥10 mm implants or to receive 7-mm-long implants. Four months after implant placement, provisional prostheses were delivered, replaced after 4 months, by definitive prostheses. The outcome measures were prosthesis and implant failures, any complications and peri-implant marginal bone levels. All patients were followed to 1 year after loading.\n\n\nRESULTS\nOne patient dropped out from the short implant group. In two augmented mandibles, there was not sufficient bone to place 10-mm-long implants possibly because the blocks had broken apart during insertion. One prosthesis could not be placed when planned in the 7 mm group vs. three prostheses in the augmented group, because of early failure of one implant in each patient. Four complications (wound dehiscence) occurred during graft healing in the augmented group vs. none in the 7 mm group. No complications occurred after implant placement. These differences were not statistically significant. One year after loading, patients of both groups lost an average of 1 mm of peri-implant bone. There no statistically significant differences in bone loss between groups.\n\n\nCONCLUSIONS\nWhen residual bone height over the mandibular canal is between 7 and 8 mm, 7 mm short implants might be a preferable choice than vertical augmentation, reducing the chair time, expenses and morbidity. These 1-year preliminary results need to be confirmed by follow-up of at least 5 years.", "title": "" }, { "docid": "b8fa649e8b5a60a05aad257a0a364b51", "text": "This work intends to build a Game Mechanics Ontology based on the mechanics category presented in BoardGameGeek.com vis à vis the formal concepts from the MDA framework. The 51 concepts presented in BoardGameGeek (BGG) as game mechanics are analyzed and arranged in a systemic way in order to build a domain sub-ontology in which the root concept is the mechanics as defined in MDA. The relations between the terms were built from its available descriptions as well as from the authors’ previous experiences. Our purpose is to show that a set of terms commonly accepted by players can lead us to better understand how players perceive the games components that are closer to the designer. The ontology proposed in this paper is not exhaustive. 
The intent of this work is to supply a tool to game designers, scholars, and others that see game artifacts as study objects or are interested in creating games. However, although it can be used as a starting point for games construction or study, the proposed Game Mechanics Ontology should be seen as the seed of a domain ontology encompassing game mechanics in general.", "title": "" }, { "docid": "7fd1ac60f18827dbe10bc2c10f715ae9", "text": "Sentiment analysis in Twitter is a field that has recently attracted research interest. Twitter is one of the most popular microblog platforms on which users can publish their thoughts and opinions. Sentiment analysis in Twitter tackles the problem of analyzing the tweets in terms of the opinion they express. This survey provides an overview of the topic by investigating and briefly describing the algorithms that have been proposed for sentiment analysis in Twitter. The presented studies are categorized according to the approach they follow. In addition, we discuss fields related to sentiment analysis in Twitter including Twitter opinion retrieval, tracking sentiments over time, irony detection, emotion detection, and tweet sentiment quantification, tasks that have recently attracted increasing attention. Resources that have been used in the Twitter sentiment analysis literature are also briefly presented. The main contributions of this survey include the presentation of the proposed approaches for sentiment analysis in Twitter, their categorization according to the technique they use, and the discussion of recent research trends of the topic and its related fields.", "title": "" }, { "docid": "658fbe3164e93515d4222e634b413751", "text": "A prediction market is a place where individuals can wager on the outcomes of future events. Those who forecast the outcome correctly win money, and if they forecast incorrectly, they lose money. People value money, so they are incentivized to forecast such outcomes as accurately as they can. Thus, the price of a prediction market can serve as an excellent indicator of how likely an event is to occur [1, 2]. Augur is a decentralized platform for prediction markets. Our goal here is to provide a blueprint of a decentralized prediction market using Bitcoin’s input/output-style transactions. Many theoretical details of this project, such as its game-theoretic underpinning, are touched on lightly or not at all. This work builds on (and is intended to be read as a companion to) the theoretical foundation established in [3].", "title": "" }, { "docid": "2f7e5807415398cb95f8f1ab36a0438f", "text": "We present a Convolutional Neural Network (CNN) regression based framework for 2-D/3-D medical image registration, which directly estimates the transformation parameters from image features extracted from the DRR and the X-ray images using learned hierarchical regressors. Our framework consists of learning and application stages. In the learning stage, CNN regressors are trained using supervised machine learning to reveal the correlation between the transformation parameters and the image features. In the application stage, CNN regressors are applied on extracted image features in a hierarchical manner to estimate the transformation parameters. 
Our experiment results demonstrate that the proposed method can achieve real-time 2-D/3-D registration with very high (i.e., sub-milliliter) accuracy.", "title": "" }, { "docid": "083cb6546aecdc12c2a1e36a9b8d9b67", "text": "Machine translation systems achieve near human-level performance on some languages, yet their effectiveness strongly relies on the availability of large amounts of parallel sentences, which hinders their applicability to the majority of language pairs. This work investigates how to learn to translate when having access to only large monolingual corpora in each language. We propose two model variants, a neural and a phrase-based model. Both versions leverage a careful initialization of the parameters, the denoising effect of language models and automatic generation of parallel data by iterative back-translation. These models are significantly better than methods from the literature, while being simpler and having fewer hyper-parameters. On the widely used WMT’14 English-French and WMT’16 German-English benchmarks, our models respectively obtain 28.1 and 25.2 BLEU points without using a single parallel sentence, outperforming the state of the art by more than 11 BLEU points. On low-resource languages like English-Urdu and English-Romanian, our methods achieve even better results than semisupervised and supervised approaches leveraging the paucity of available bitexts. Our code for NMT and PBSMT is publicly available.1", "title": "" }, { "docid": "12f5447d9e83890c3e953e03a2e92c8f", "text": "BACKGROUND\nLong-term continuous systolic blood pressure (SBP) and heart rate (HR) monitors are of tremendous value to medical (cardiovascular, circulatory and cerebrovascular management), wellness (emotional and stress tracking) and fitness (performance monitoring) applications, but face several major impediments, such as poor wearability, lack of widely accepted robust SBP models and insufficient proofing of the generalization ability of calibrated models.\n\n\nMETHODS\nThis paper proposes a wearable cuff-less electrocardiography (ECG) and photoplethysmogram (PPG)-based SBP and HR monitoring system and many efforts are made focusing on above challenges. Firstly, both ECG/PPG sensors are integrated into a single-arm band to provide a super wearability. A highly convenient but challenging single-lead configuration is proposed for weak single-arm-ECG acquisition, instead of placing the electrodes on the chest, or two wrists. Secondly, to identify heartbeats and estimate HR from the motion artifacts-sensitive weak arm-ECG, a machine learning-enabled framework is applied. Then ECG-PPG heartbeat pairs are determined for pulse transit time (PTT) measurement. Thirdly, a PTT&HR-SBP model is applied for SBP estimation, which is also compared with many PTT-SBP models to demonstrate the necessity to introduce HR information in model establishment. Fourthly, the fitted SBP models are further evaluated on the unseen data to illustrate the generalization ability. A customized hardware prototype was established and a dataset collected from ten volunteers was acquired to evaluate the proof-of-concept system.\n\n\nRESULTS\nThe semi-customized prototype successfully acquired from the left upper arm the PPG signal, and the weak ECG signal, the amplitude of which is only around 10% of that of the chest-ECG. The HR estimation has a mean absolute error (MAE) and a root mean square error (RMSE) of only 0.21 and 1.20 beats per min, respectively. 
Through the comparative analysis, the PTT&HR-SBP models significantly outperform the PTT-SBP models. The testing performance is 1.63 ± 4.44, 3.68, 4.71 mmHg in terms of mean error ± standard deviation, MAE and RMSE, respectively, indicating a good generalization ability on the unseen fresh data.\n\n\nCONCLUSIONS\nThe proposed proof-of-concept system is highly wearable, and its robustness is thoroughly evaluated on different modeling strategies and also the unseen data, which are expected to contribute to long-term pervasive hypertension, heart health and fitness management.", "title": "" }, { "docid": "7e74cc21787c1e21fd64a38f1376c6a9", "text": "The Bidirectional Reflectance Distribution Function (BRDF) describes the appearance of a material by its interaction with light at a surface point. A variety of analytical models have been proposed to represent BRDFs. However, analysis of these models has been scarce due to the lack of high-resolution measured data. In this work we evaluate several well-known analytical models in terms of their ability to fit measured BRDFs. We use an existing high-resolution data set of a hundred isotropic materials and compute the best approximation for each analytical model. Furthermore, we have built a new setup for efficient acquisition of anisotropic BRDFs, which allows us to acquire anisotropic materials at high resolution. We have measured four samples of anisotropic materials (brushed aluminum, velvet, and two satins). Based on the numerical errors, function plots, and rendered images we provide insights into the performance of the various models. We conclude that for most isotropic materials physically-based analytic reflectance models can represent their appearance quite well. We illustrate the important difference between the two common ways of defining the specular lobe: around the mirror direction and with respect to the half-vector. Our evaluation shows that the latter gives a more accurate shape for the reflection lobe. Our analysis of anisotropic materials indicates current parametric reflectance models cannot represent their appearances faithfully in many cases. We show that using a sampled microfacet distribution computed from measurements improves the fit and qualitatively reproduces the measurements.", "title": "" }, { "docid": "6bd7a3d4b330972328257d958ec2730e", "text": "Structured sparse coding and the related structured dictionary learning problems are novel research areas in machine learning. In this paper we present a new application of structured dictionary learning for collaborative filtering based recommender systems. Our extensive numerical experiments demonstrate that the presented method outperforms its state-of-the-art competitors and has several advantages over approaches that do not put structured constraints on the dictionary elements.", "title": "" }, { "docid": "67b5bd59689c325365ac765a17886169", "text": "L-Systems have traditionally been used as a popular method for the modelling of spacefilling curves, biological systems and morphogenesis. In this paper, we adapt string rewriting grammars based on L-Systems into a system for music composition. Representation of pitch, duration and timbre are encoded as grammar symbols, upon which a series of re-writing rules are applied. Parametric extensions to the grammar allow the specification of continuous data for the purposes of modulation and control. Such continuous data is also under control of the grammar. 
Using non-deterministic grammars with context sensitivity allows the simulation of Nth-order Markov models with a more economical representation than transition matrices and greater flexibility than previous composition models based on finite state automata or Petri nets. Using symbols in the grammar to represent relationships between notes, (rather than absolute notes) in combination with a hierarchical grammar representation, permits the emergence of complex music compositions from a relatively simple grammars.", "title": "" }, { "docid": "ee37a743edd1b87d600dcf2d0050ca18", "text": "Recommender systems play a crucial role in mitigating the problem of information overload by suggesting users' personalized items or services. The vast majority of traditional recommender systems consider the recommendation procedure as a static process and make recommendations following a fixed strategy. In this paper, we propose a novel recommender system with the capability of continuously improving its strategies during the interactions with users. We model the sequential interactions between users and a recommender system as a Markov Decision Process (MDP) and leverage Reinforcement Learning (RL) to automatically learn the optimal strategies via recommending trial-and-error items and receiving reinforcements of these items from users' feedback. Users' feedback can be positive and negative and both types of feedback have great potentials to boost recommendations. However, the number of negative feedback is much larger than that of positive one; thus incorporating them simultaneously is challenging since positive feedback could be buried by negative one. In this paper, we develop a novel approach to incorporate them into the proposed deep recommender system (DEERS) framework. The experimental results based on real-world e-commerce data demonstrate the effectiveness of the proposed framework. Further experiments have been conducted to understand the importance of both positive and negative feedback in recommendations.", "title": "" }, { "docid": "4737fe7f718f79c74595de40f8778da2", "text": "In this paper we describe a method of procedurally generating maps using Markov chains. This method learns statistical patterns from human-authored maps, which are assumed to be of high quality. Our method then uses those learned patterns to generate new maps. We present a collection of strategies both for training the Markov chains, and for generating maps from such Markov chains. We then validate our approach using the game Super Mario Bros., by evaluating the quality of the produced maps based on different configurations for training and generation.", "title": "" }, { "docid": "e8f46d6e58c070965f83ca244e15c3d6", "text": "OBJECTIVES\nUrinalysis is one of the most commonly performed tests in the clinical laboratory. However, manual microscopic sediment examination is labor-intensive, time-consuming, and lacks standardization in high-volume laboratories. In this study, the concordance of analyses between manual microscopic examination and two different automatic urine sediment analyzers has been evaluated.\n\n\nDESIGN AND METHODS\n209 urine samples were analyzed by the Iris iQ200 ELITE (İris Diagnostics, USA), Dirui FUS-200 (DIRUI Industrial Co., China) automatic urine sediment analyzers and by manual microscopic examination. 
The degree of concordance (Kappa coefficient) and the rates within the same grading were evaluated.\n\n\nRESULTS\nFor erythrocytes, leukocytes, epithelial cells, bacteria, crystals and yeasts, the degree of concordance between the two instruments was better than the degree of concordance between the manual microscopic method and the individual devices. There was no concordance between all methods for casts.\n\n\nCONCLUSION\nThe results from the automated analyzers for erythrocytes, leukocytes and epithelial cells were similar to the result of microscopic examination. However, in order to avoid any error or uncertainty, some images (particularly: dysmorphic cells, bacteria, yeasts, casts and crystals) have to be analyzed by manual microscopic examination by trained staff. Therefore, the software programs which are used in automatic urine sediment analysers need further development to recognize urinary shaped elements more accurately. Automated systems are important in terms of time saving and standardization.", "title": "" }, { "docid": "bcbba4f99e33ac0daea893e280068304", "text": "Arterial plasma glucose values throughout a 24-h period average approximately 90 mg/dl, with a maximal concentration usually not exceeding 165 mg/dl such as after meal ingestion1 and remaining above 55 mg/dl such as after exercise2 or a moderate fast (60 h).3 This relative stability contrasts with the situation for other substrates such as glycerol, lactate, free fatty acids, and ketone bodies whose fluctuations are much wider (Table 2.1).4 This narrow range defining normoglycemia is maintained through an intricate regulatory and counterregulatory neuro-hormonal system: A decrement in plasma glucose as little as 20 mg/dl (from 90 to 70 mg/dl) will suppress the release of insulin and will decrease glucose uptake in certain areas in the brain (e.g., hypothalamus where glucose sensors are located); this will activate the sympathetic nervous system and trigger the release of counterregulatory hormones (glucagon, catecholamines, cortisol, and growth hormone).5 All these changes will increase glucose release into plasma and decrease its removal so as to restore normoglycemia. On the other hand, a 10 mg/dl increment in plasma glucose will stimulate insulin release and suppress glucagon secretion to prevent further increments and restore normoglycemia. Glucose in plasma either comes from dietary sources or is either the result of the breakdown of glycogen in liver (glycogenolysis) or the formation of glucose in liver and kidney from other carbons compounds (precursors) such as lactate, pyruvate, amino acids, and glycerol (gluconeogenesis). In humans, glucose removed from plasma may have different fates in different tissues and under different conditions (e.g., postabsorptive vs. postprandial), but the pathways for its disposal are relatively limited. It (1) may be immediately stored as glycogen or (2) may undergo glycolysis, which can be non-oxidative producing pyruvate (which can be reduced to lactate or transaminated to form alanine) or oxidative through conversion to acetyl CoA which is further oxidized through the tricarboxylic acid cycle to form carbon dioxide and water. Non-oxidative glycolysis carbons undergo gluconeogenesis and the newly formed glucose is either stored as glycogen or released back into plasma (Fig. 
2.1).", "title": "" }, { "docid": "e17a1429f4ca9de808caaa842ee5a441", "text": "Large scale visual understanding is challenging, as it requires a model to handle the widely-spread and imbalanced distribution of 〈subject, relation, object〉 triples. In real-world scenarios with large numbers of objects and relations, some are seen very commonly while others are barely seen. We develop a new relationship detection model that embeds objects and relations into two vector spaces where both discriminative capability and semantic affinity are preserved. We learn a visual and a semantic module that map features from the two modalities into a shared space, where matched pairs of features have to discriminate against those unmatched, but also maintain close distances to semantically similar ones. Benefiting from that, our model can achieve superior performance even when the visual entity categories scale up to more than 80, 000, with extremely skewed class distribution. We demonstrate the efficacy of our model on a large and imbalanced benchmark based of Visual Genome that comprises 53, 000+ objects and 29, 000+ relations, a scale at which no previous work has been evaluated at. We show superiority of our model over competitive baselines on the original Visual Genome dataset with 80, 000+ categories. We also show state-of-the-art performance on the VRD dataset and the scene graph dataset which is a subset of Visual Genome with 200 categories.", "title": "" }, { "docid": "486e3f5614f69f60d8703d8641c73416", "text": "The Great East Japan Earthquake and Tsunami drastically changed Japanese society, and the requirements for ICT was completely redefined. After the disaster, it was impossible for disaster victims to utilize their communication devices, such as cellular phones, tablet computers, or laptop computers, to notify their families and friends of their safety and confirm the safety of their loved ones since the communication infrastructures were physically damaged or lacked the energy necessary to operate. Due to this drastic event, we have come to realize the importance of device-to-device communications. With the recent increase in popularity of D2D communications, many research works are focusing their attention on a centralized network operated by network operators and neglect the importance of decentralized infrastructureless multihop communication, which is essential for disaster relief applications. In this article, we propose the concept of multihop D2D communication network systems that are applicable to many different wireless technologies, and clarify requirements along with introducing open issues in such systems. The first generation prototype of relay by smartphone can deliver messages using only users' mobile devices, allowing us to send out emergency messages from disconnected areas as well as information sharing among people gathered in evacuation centers. The success of field experiments demonstrates steady advancement toward realizing user-driven networking powered by communication devices independent of operator networks.", "title": "" }, { "docid": "754163e498679e1d3c1449424c03a71f", "text": "J. K. Strosnider P. Nandi S. Kumaran S. Ghosh A. Arsanjani The current approach to the design, maintenance, and governance of service-oriented architecture (SOA) solutions has focused primarily on flow-driven assembly and orchestration of reusable service components. 
The practical application of this approach in creating industry solutions has been limited, because flow-driven assembly and orchestration models are too rigid and static to accommodate complex, real-world business processes. Furthermore, the approach assumes a rich, easily configured library of reusable service components when in fact the development, maintenance, and governance of these libraries is difficult. An alternative approach pioneered by the IBM Research Division, model-driven business transformation (MDBT), uses a model-driven software synthesis technology to automatically generate production-quality business service components from high-level business process models. In this paper, we present the business entity life cycle analysis (BELA) technique for MDBT-based SOA solution realization and its integration into serviceoriented modeling and architecture (SOMA), the end-to-end method from IBM for SOA application and solution development. BELA shifts the process-modeling paradigm from one that is centered on activities to one that is centered on entities. BELA teams process subject-matter experts with IT and data architects to identify and specify business entities and decompose business processes. Supporting synthesis tools then automatically generate the interacting business entity service components and their associated data stores and service interface definitions. We use a large-scale project as an example demonstrating the benefits of this innovation, which include an estimated 40 percent project cost reduction and an estimated 20 percent reduction in cycle time when compared with conventional SOA approaches.", "title": "" } ]
scidocsrr
1b7342cc547f410c6e149ec7a5d69b16
Towards Personality-driven Persuasive Health Games and Gamified Systems
[ { "docid": "372ab07026a861acd50e7dd7c605881d", "text": "This paper reviews peer-reviewed empirical studies on gamification. We create a framework for examining the effects of gamification by drawing from the definitions of gamification and the discussion on motivational affordances. The literature review covers results, independent variables (examined motivational affordances), dependent variables (examined psychological/behavioral outcomes from gamification), the contexts of gamification, and types of studies performed on the gamified systems. The paper examines the state of current research on the topic and points out gaps in existing literature. The review indicates that gamification provides positive effects, however, the effects are greatly dependent on the context in which the gamification is being implemented, as well as on the users using it. The findings of the review provide insight for further studies as well as for the design of gamified systems.", "title": "" }, { "docid": "8777063bfba463c05e46704f0ad2c672", "text": "Amazon's Mechanical Turk is an online labor market where requesters post jobs and workers choose which jobs to do for pay. The central purpose of this article is to demonstrate how to use this Web site for conducting behavioral research and to lower the barrier to entry for researchers who could benefit from this platform. We describe general techniques that apply to a variety of types of research and experiments across disciplines. We begin by discussing some of the advantages of doing experiments on Mechanical Turk, such as easy access to a large, stable, and diverse subject pool, the low cost of doing experiments, and faster iteration between developing theory and executing experiments. While other methods of conducting behavioral research may be comparable to or even better than Mechanical Turk on one or more of the axes outlined above, we will show that when taken as a whole Mechanical Turk can be a useful tool for many researchers. We will discuss how the behavior of workers compares with that of experts and laboratory subjects. Then we will illustrate the mechanics of putting a task on Mechanical Turk, including recruiting subjects, executing the task, and reviewing the work that was submitted. We also provide solutions to common problems that a researcher might face when executing their research on this platform, including techniques for conducting synchronous experiments, methods for ensuring high-quality work, how to keep data private, and how to maintain code security.", "title": "" } ]
[ { "docid": "71da47c6837022a80dccabb0a1f5c00e", "text": "The treatment of obesity and cardiovascular diseases is one of the most difficult and important challenges nowadays. Weight loss is frequently offered as a therapy and is aimed at improving some of the components of the metabolic syndrome. Among various diets, ketogenic diets, which are very low in carbohydrates and usually high in fats and/or proteins, have gained in popularity. Results regarding the impact of such diets on cardiovascular risk factors are controversial, both in animals and humans, but some improvements notably in obesity and type 2 diabetes have been described. Unfortunately, these effects seem to be limited in time. Moreover, these diets are not totally safe and can be associated with some adverse events. Notably, in rodents, development of nonalcoholic fatty liver disease (NAFLD) and insulin resistance have been described. The aim of this review is to discuss the role of ketogenic diets on different cardiovascular risk factors in both animals and humans based on available evidence.", "title": "" }, { "docid": "16a5313b414be4ae740677597291d580", "text": "We contribute a large scale database for 3D object recognition, named ObjectNet3D, that consists of 100 categories, 90,127 images, 201,888 objects in these images and 44,147 3D shapes. Objects in the 2D images in our database are aligned with the 3D shapes, and the alignment provides both accurate 3D pose annotation and the closest 3D shape annotation for each 2D object. Consequently, our database is useful for recognizing the 3D pose and 3D shape of objects from 2D images. We also provide baseline experiments on four tasks: region proposal generation, 2D object detection, joint 2D detection and 3D object pose estimation, and image-based 3D shape retrieval, which can serve as baselines for future research using our database. Our database is available online at http://cvgl.stanford.edu/projects/objectnet3d.", "title": "" }, { "docid": "81387b0f93b68e8bd6a56a4fd81477e9", "text": "We analyze microblog posts generated during two recent, concurrent emergency events in North America via Twitter, a popular microblogging service. We focus on communications broadcast by people who were \"on the ground\" during the Oklahoma Grassfires of April 2009 and the Red River Floods that occurred in March and April 2009, and identify information that may contribute to enhancing situational awareness (SA). This work aims to inform next steps for extracting useful, relevant information during emergencies using information extraction (IE) techniques.", "title": "" }, { "docid": "47b9d5585a0ca7d10cb0fd9da673dd0f", "text": "A novel deep architecture, the tensor deep stacking network (T-DSN), is presented. The T-DSN consists of multiple, stacked blocks, where each block contains a bilinear mapping from two hidden layers to the output layer, using a weight tensor to incorporate higher order statistics of the hidden binary (([0,1])) features. A learning algorithm for the T-DSN's weight matrices and tensors is developed and described in which the main parameter estimation burden is shifted to a convex subproblem with a closed-form solution. Using an efficient and scalable parallel implementation for CPU clusters, we train sets of T-DSNs in three popular tasks in increasing order of the data size: handwritten digit recognition using MNIST (60k), isolated state/phone classification and continuous phone recognition using TIMIT (1.1 m), and isolated phone classification using WSJ0 (5.2 m). 
Experimental results in all three tasks demonstrate the effectiveness of the T-DSN and the associated learning methods in a consistent manner. In particular, a sufficient depth of the T-DSN, a symmetry in the two hidden layers structure in each T-DSN block, our model parameter learning algorithm, and a softmax layer on top of T-DSN are shown to have all contributed to the low error rates observed in the experiments for all three tasks.", "title": "" }, { "docid": "1a26a00f0915e2eac01edf8cad0152c9", "text": "This paper describes the application of Rao-Blackwellised Gibbs sampling (RBGS) to speech recognition using switching linear dynamical systems (SLDSs) as the acoustic model. The SLDS is a hybrid of standard hidden Markov models (HMMs) and linear dynamical systems. It is an extension of the stochastic segment model (SSM) where segments are assumed independent. SLDSs explicitly take into account the strong co-articulation present in speech using a Gauss-Markov process in a low dimensional, latent, state space. Unfortunately , inference in SLDS is intractable unless the discrete state sequence is known. RBGS is one approach that may be applied for both improved training and decoding for this form of intractable model. The theory of SLDS and RBGS is described, along with an efficient proposal distribution. The performance of the SLDS and SSM using RBGS for training and inference is evaluated on the ARPA Resource Management task.", "title": "" }, { "docid": "da4ec6dcf7f47b8ec0261195db7af5ca", "text": "Smart factories are on the verge of becoming the new industrial paradigm, wherein optimization permeates all aspects of production, from concept generation to sales. To fully pursue this paradigm, flexibility in the production means as well as in their timely organization is of paramount importance. AI is planning a major role in this transition, but the scenarios encountered in practice might be challenging for current tools. Task planning is one example where AI enables more efficient and flexible operation through an online automated adaptation and rescheduling of the activities to cope with new operational constraints and demands. In this paper we present SMarTplan, a task planner specifically conceived to deal with real-world scenarios in the emerging smart factory paradigm. Including both special-purpose and general-purpose algorithms, SMarTplan is based on current automated reasoning technology and it is designed to tackle complex application domains. In particular, we show its effectiveness on a logistic scenario, by comparing its specialized version with the general purpose one, and extending the comparison to other state-of-the-art task planners.", "title": "" }, { "docid": "1ca4fbc998c41cec99abe68c5ebe944e", "text": "Wheeled mobile robots are increasingly being utilized in unknown and dangerous situations such as planetary surface exploration. Based on force analysis of the differential joints and force analysis between the wheels and the ground, this paper established the quasi-static mathematical model of the 6-wheel mobile system of planetary exploration rover with rocker-bogie structure. Considering the constraint conditions, with the method of finding the wheels’friction force solution space feasible region, obstacle-climbing capability of the mobile mechanism was analyzed. 
Given the same obstacle heights and contact angles of wheel-ground, the single side forward obstacle-climbing of the wheels was simulated respectively, and the results show that the rear wheel has the best obstacle-climbing capability, the middle wheel is the worst, and the front wheel is moderate.", "title": "" }, { "docid": "9b254da42083948029120552ede69652", "text": "Smart contracts are computer programs that can be consistently executed by a network of mutually distrusting nodes, without the arbitration of a trusted authority. Because of their resilience to tampering , smart contracts are appealing in many scenarios, especially in those which require transfers of money to respect certain agreed rules (like in financial services and in games). Over the last few years many platforms for smart contracts have been proposed, and some of them have been actually implemented and used. We study how the notion of smart contract is interpreted in some of these platforms. Focussing on the two most widespread ones, Bitcoin and Ethereum, we quantify the usage of smart contracts in relation to their application domain. We also analyse the most common programming patterns in Ethereum, where the source code of smart contracts is available.", "title": "" }, { "docid": "a41444799f295e5fc325626fd663d77d", "text": "Lexicon-based approaches to Twitter sentiment analysis are gaining much popularity due to their simplicity, domain independence, and relatively good performance. These approaches rely on sentiment lexicons, where a collection of words are marked with fixed sentiment polarities. However, words’ sentiment orientation (positive, neural, negative) and/or sentiment strengths could change depending on context and targeted entities. In this paper we present SentiCircle; a novel lexicon-based approach that takes into account the contextual and conceptual semantics of words when calculating their sentiment orientation and strength in Twitter. We evaluate our approach on three Twitter datasets using three different sentiment lexicons. Results show that our approach significantly outperforms two lexicon baselines. Results are competitive but inconclusive when comparing to state-of-art SentiStrength, and vary from one dataset to another. SentiCircle outperforms SentiStrength in accuracy on average, but falls marginally behind in F-measure.", "title": "" }, { "docid": "b4e56855d6f41c5829b441a7d2765276", "text": "College student attendance management of class plays an important position in the work of management of college student, this can help to urge student to class on time, improve learning efficiency, increase learning grade, and thus entirely improve the education level of the school. Therefore, colleges need an information system platform of check attendance management of class strongly to enhance check attendance management of class using the information technology which gathers the basic information of student automatically. According to current reality and specific needs of check attendance and management system of college students and the exist device of the system. 
Combined with the study of college attendance system, this paper gave the node design of check attendance system of class which based on RFID on the basic of characteristics of embedded ARM and RFID technology.", "title": "" }, { "docid": "88ffb30f1506bedaf7c1a3f43aca439e", "text": "The multiprotein mTORC1 protein kinase complex is the central component of a pathway that promotes growth in response to insulin, energy levels, and amino acids and is deregulated in common cancers. We find that the Rag proteins--a family of four related small guanosine triphosphatases (GTPases)--interact with mTORC1 in an amino acid-sensitive manner and are necessary for the activation of the mTORC1 pathway by amino acids. A Rag mutant that is constitutively bound to guanosine triphosphate interacted strongly with mTORC1, and its expression within cells made the mTORC1 pathway resistant to amino acid deprivation. Conversely, expression of a guanosine diphosphate-bound Rag mutant prevented stimulation of mTORC1 by amino acids. The Rag proteins do not directly stimulate the kinase activity of mTORC1, but, like amino acids, promote the intracellular localization of mTOR to a compartment that also contains its activator Rheb.", "title": "" }, { "docid": "ade88f8a9aa8a47dd2dc5153b3584695", "text": "A software environment is described which provides facilities at a variety of levels for “animating” algorithms: exposing properties of programs by displaying multiple dynamic views of the program and associated data structures. The system is operational on a network of graphics-based, personal workstations and has been used successfully in several applications for teaching and research in computer science and mathematics. In this paper, we outline the conceptual framework that we have developed for animating algorithms, describe the system that we have implemented, and give several examples drawn from the host of algorithms that we have animated.", "title": "" }, { "docid": "426a7c1572e9d68f4ed2429f143387d5", "text": "Face tracking is an active area of computer vision research and an important building block for many applications. However, opposed to face detection, there is no common benchmark data set to evaluate a tracker’s performance, making it hard to compare results between different approaches. In this challenge we propose a data set, annotation guidelines and a well defined evaluation protocol in order to facilitate the evaluation of face tracking systems in the future.", "title": "" }, { "docid": "5e9dce428a2bcb6f7bc0074d9fe5162c", "text": "This paper describes a real-time motion planning algorithm, based on the rapidly-exploring random tree (RRT) approach, applicable to autonomous vehicles operating in an urban environment. Extensions to the standard RRT are predominantly motivated by: 1) the need to generate dynamically feasible plans in real-time; 2) safety requirements; 3) the constraints dictated by the uncertain operating (urban) environment. The primary novelty is in the use of closed-loop prediction in the framework of RRT. 
The proposed algorithm was at the core of the planning and control software for Team MIT's entry for the 2007 DARPA Urban Challenge, where the vehicle demonstrated the ability to complete a 60 mile simulated military supply mission, while safely interacting with other autonomous and human driven vehicles.", "title": "" }, { "docid": "1a02d963590683c724a814f341f94f92", "text": "The concept of the quality attribute scenario was introduced in 2003 to support the development of software architectures. This concept is useful because it provides an operational means to represent the quality requirements of a system. It also provides a more concrete basis with which to teach software architecture. Teaching this concept however has some unexpected issues. In this paper, I present my experiences of teaching quality attribute scenarios and outline Bus Tracker, a case study I have developed to support my teaching.", "title": "" }, { "docid": "5935224c53222d0234adffddae23eb04", "text": "The multipath-rich wireless environment associated with typical wireless usage scenarios is characterized by a fading channel response that is time-varying, location-sensitive, and uniquely shared by a given transmitter-receiver pair. The complexity associated with a richly scattering environment implies that the short-term fading process is inherently hard to predict and best modeled stochastically, with rapid decorrelation properties in space, time, and frequency. In this paper, we demonstrate how the channel state between a wireless transmitter and receiver can be used as the basis for building practical secret key generation protocols between two entities. We begin by presenting a scheme based on level crossings of the fading process, which is well-suited for the Rayleigh and Rician fading models associated with a richly scattering environment. Our level crossing algorithm is simple, and incorporates a self-authenticating mechanism to prevent adversarial manipulation of message exchanges during the protocol. Since the level crossing algorithm is best suited for fading processes that exhibit symmetry in their underlying distribution, we present a second and more powerful approach that is suited for more general channel state distributions. This second approach is motivated by observations from quantizing jointly Gaussian processes, but exploits empirical measurements to set quantization boundaries and a heuristic log likelihood ratio estimate to achieve an improved secret key generation rate. We validate both proposed protocols through experimentations using a customized 802.11a platform, and show for the typical WiFi channel that reliable secret key establishment can be accomplished at rates on the order of 10 b/s.", "title": "" }, { "docid": "6646b66370ed02eb84661c8505eb7563", "text": "Re-identification is generally carried out by encoding the appearance of a subject in terms of outfit, suggesting scenarios where people do not change their attire. In this paper we overcome this restriction, by proposing a framework based on a deep convolutional neural network, SOMAnet, that additionally models other discriminative aspects, namely, structural attributes of the human figure (e.g. height, obesity, gender). Our method is unique in many respects. First, SOMAnet is based on the Inception architecture, departing from the usual siamese framework. This spares expensive data preparation (pairing images across cameras) and allows the understanding of what the network learned. 
Second, and most notably, the training data consists of a synthetic 100K instance dataset, SOMAset, created by photorealistic human body generation software. Synthetic data represents a good compromise between realistic imagery, usually not required in re-identification since surveillance cameras capture low-resolution silhouettes, and complete control of the samples, which is useful in order to customize the data w.r.t. the surveillance scenario at-hand, e.g. ethnicity. SOMAnet, trained on SOMAset and fine-tuned on recent re-identification benchmarks, outperforms all competitors, matching subjects even with different apparel. The combination of synthetic data with Inception architectures opens up new research avenues in re-identification.", "title": "" }, { "docid": "fc1009e9515d83166e97e4e01ae9ca69", "text": "In this paper, we present two large video multi-modal datasets for RGB and RGB-D gesture recognition: the ChaLearn LAP RGB-D Isolated Gesture Dataset (IsoGD) and the Continuous Gesture Dataset (ConGD). Both datasets are derived from the ChaLearn Gesture Dataset (CGD) that has a total of more than 50000 gestures for the \"one-shot-learning\" competition. To increase the potential of the old dataset, we designed new well curated datasets composed of 249 gesture labels, and including 47933 gestures manually labeled the begin and end frames in sequences. Using these datasets we will open two competitions on the CodaLab platform so that researchers can test and compare their methods for \"user independent\" gesture recognition. The first challenge is designed for gesture spotting and recognition in continuous sequences of gestures while the second one is designed for gesture classification from segmented data. The baseline method based on the bag of visual words model is also presented.", "title": "" }, { "docid": "906659aa61bbdb5e904a1749552c4741", "text": "The Rete–Match algorithm is a matching algorithm used to develop production systems. Although this algorithm is the fastest known algorithm, for many patterns and many objects matching, it still suffers from considerable amount of time needed due to the recursive nature of the problem. In this paper, a parallel version of the Rete–Match algorithm for distributed memory architecture is presented. Also, a theoretical analysis to its correctness and performance is discussed. q 1998 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "0ea07af19fc199f6a9909bd7df0576a1", "text": "Detection of overlapping communities in complex networks has motivated recent research in the relevant fields. Aiming this problem, we propose a Markov dynamics based algorithm, called UEOC, which means, “unfold and extract overlapping communities”. In UEOC, when identifying each natural community that overlaps, a Markov random walk method combined with a constraint strategy, which is based on the corresponding annealed network (degree conserving random network), is performed to unfold the community. Then, a cutoff criterion with the aid of a local community function, called conductance, which can be thought of as the ratio between the number of edges inside the community and those leaving it, is presented to extract this emerged community from the entire network. The UEOC algorithm depends on only one parameter whose value can be easily set, and it requires no prior knowledge on the hidden community structures. 
The proposed UEOC has been evaluated both on synthetic benchmarks and on some real-world networks, and was compared with a set of competing algorithms. Experimental results have shown that UEOC is highly effective and efficient for discovering overlapping communities.", "title": "" } ]
scidocsrr
71901f57a6acfafe99eb5e4efad3f2f5
Vision-Based Autonomous Navigation System Using ANN and FSM Control
[ { "docid": "c4feca5e27cfecdd2913e18cc7b7a21a", "text": "one component of intelligent transportation systems, IV systems use sensing and intelligent algorithms to understand the vehicle’s immediate environment, either assisting the driver or fully controlling the vehicle. Following the success of information-oriented systems, IV systems will likely be the “next wave” for ITS, functioning at the control layer to enable the driver–vehicle “subsystem” to operate more effectively. This column provides a broad overview of applications and selected activities in this field. IV application areas", "title": "" } ]
[ { "docid": "6f34ef57fcf0a2429e7dc2a3e56a99fd", "text": "Service-Oriented Architecture (SOA) provides a flexible framework for service composition. Using standard-based protocols (such as SOAP and WSDL), composite services can be constructed by integrating atomic services developed independently. Algorithms are needed to select service components with various QoS levels according to some application-dependent performance requirements. We design a broker-based architecture to facilitate the selection of QoS-based services. The objective of service selection is to maximize an application-specific utility function under the end-to-end QoS constraints. The problem is modeled in two ways: the combinatorial model and the graph model. The combinatorial model defines the problem as a multidimension multichoice 0-1 knapsack problem (MMKP). The graph model defines the problem as a multiconstraint optimal path (MCOP) problem. Efficient heuristic algorithms for service processes of different composition structures are presented in this article and their performances are studied by simulations. We also compare the pros and cons between the two models.", "title": "" }, { "docid": "caae0254ea28dad0abf2f65fcadc7971", "text": "Deregulation within the financial service industries and the widespread acceptance of new technologies is increasing competition in the finance marketplace. Central to the business strategy of every financial service company is the ability to retain existing customers and reach new prospective customers. Data mining is adopted to play an important role in these efforts. In this paper, we present a data mining approach for analyzing retailing bank customer attrition. We discuss the challenging issues such as highly skewed data, time series data unrolling, leaker field detection etc, and the procedure of a data mining project for the attrition analysis for retailing bank customers. We use lift as a proper measure for attrition analysis and compare the lift of data mining models of decision tree, boosted naïve Bayesian network, selective Bayesian network, neural network and the ensemble of classifiers of the above methods. Some interesting findings are reported. Our research work demonstrates the effectiveness and efficiency of data mining in attrition analysis for retailing bank.", "title": "" }, { "docid": "6fd3f4ab064535d38c01f03c0135826f", "text": "BACKGROUND\nThere is evidence of under-detection and poor management of pain in patients with dementia, in both long-term and acute care. Accurate assessment of pain in people with dementia is challenging and pain assessment tools have received considerable attention over the years, with an increasing number of tools made available. Systematic reviews on the evidence of their validity and utility mostly compare different sets of tools. This review of systematic reviews analyses and summarises evidence concerning the psychometric properties and clinical utility of pain assessment tools in adults with dementia or cognitive impairment.\n\n\nMETHODS\nWe searched for systematic reviews of pain assessment tools providing evidence of reliability, validity and clinical utility. Two reviewers independently assessed each review and extracted data from them, with a third reviewer mediating when consensus was not reached. Analysis of the data was carried out collaboratively. 
The reviews were synthesised using a narrative synthesis approach.\n\n\nRESULTS\nWe retrieved 441 potentially eligible reviews, 23 met the criteria for inclusion and 8 provided data for extraction. Each review evaluated between 8 and 13 tools, in aggregate providing evidence on a total of 28 tools. The quality of the reviews varied and the reporting often lacked sufficient methodological detail for quality assessment. The 28 tools appear to have been studied in a variety of settings and with varied types of patients. The reviews identified several methodological limitations across the original studies. The lack of a 'gold standard' significantly hinders the evaluation of tools' validity. Most importantly, the samples were small providing limited evidence for use of any of the tools across settings or populations.\n\n\nCONCLUSIONS\nThere are a considerable number of pain assessment tools available for use with the elderly cognitive impaired population. However there is limited evidence about their reliability, validity and clinical utility. On the basis of this review no one tool can be recommended given the existing evidence.", "title": "" }, { "docid": "1de5bb16d9304cbfc7c2854ea02f4e5c", "text": "Language acquisition is one of the most fundamental human traits, and it is obviously the brain that undergoes the developmental changes. During the years of language acquisition, the brain not only stores linguistic information but also adapts to the grammatical regularities of language. Recent advances in functional neuroimaging have substantially contributed to systems-level analyses of brain development. In this Viewpoint, I review the current understanding of how the \"final state\" of language acquisition is represented in the mature brain and summarize new findings on cortical plasticity for second language acquisition, focusing particularly on the function of the grammar center.", "title": "" }, { "docid": "25793a93fec7a1ccea0869252a8a0141", "text": "Condition monitoring of induction motors is a fast emerging technology for online detection of incipient faults. It avoids unexpected failure of a critical system. Approximately 30-40% of faults of induction motors are stator faults. This work presents a comprehensive review of various stator faults, their causes, detection parameters/techniques, and latest trends in the condition monitoring technology. It is aimed at providing a broad perspective on the status of stator fault monitoring to researchers and application engineers using induction motors. A list of 183 research publications on the subject is appended for quick reference.", "title": "" }, { "docid": "857132b27d87727454ec3019e52039ba", "text": "In this paper we will introduce an ensemble of codes called irregular repeat-accumulate (IRA) codes. IRA codes are a generalization of the repeat-accumluate codes introduced in [1], and as such have a natural linear-time encoding algorithm. We shall prove that on the binary erasure channel, IRA codes can be decoded reliably in linear time, using iterative sum-product decoding, at rates arbitrarily close to channel capacity. A similar result appears to be true on the AWGN channel, although we have no proof of this. 
We illustrate our results with numerical and experimental examples.", "title": "" }, { "docid": "643599f9b0dcfd270f9f3c55567ed985", "text": "OBJECTIVES\nTo describe a new first-trimester sonographic landmark, the retronasal triangle, which may be useful in the early screening for cleft palate.\n\n\nMETHODS\nThe retronasal triangle, i.e. the three echogenic lines formed by the two frontal processes of the maxilla and the palate visualized in the coronal view of the fetal face posterior to the nose, was evaluated prospectively in 100 consecutive normal fetuses at the time of routine first-trimester sonographic screening at 11 + 0 to 13 + 6 weeks' gestation. In a separate study of five fetuses confirmed postnatally as having a cleft palate, ultrasound images, including multiplanar three-dimensional views, were analyzed retrospectively to review the retronasal triangle.\n\n\nRESULTS\nNone of the fetuses evaluated prospectively was affected by cleft lip and palate. During their first-trimester scan, the retronasal triangle could not be identified in only two fetuses. Reasons for suboptimal visualization of this area included early gestational age at scanning (11 weeks) and persistent posterior position of the fetal face. Of the five cases with postnatal diagnosis of cleft palate, an abnormal configuration of the retronasal triangle was documented in all cases on analysis of digitally stored three-dimensional volumes.\n\n\nCONCLUSIONS\nThis study demonstrates the feasibility of incorporating evaluation of the retronasal triangle into the routine evaluation of the fetal anatomy at 11 + 0 to 13 + 6 weeks' gestation. Because fetuses with cleft palate have an abnormal configuration of the retronasal triangle, focused examination of the midface, looking for this area at the time of the nuchal translucency scan, may facilitate the early detection of cleft palate in the first trimester.", "title": "" }, { "docid": "d83d672642531e1744afe77ed8379b90", "text": "Customer churn prediction in Telecom Industry is a core research topic in recent years. A huge amount of data is generated in Telecom Industry every minute. On the other hand, there is lots of development in data mining techniques. Customer churn has emerged as one of the major issues in Telecom Industry. Telecom research indicates that it is more expensive to gain a new customer than to retain an existing one. In order to retain existing customers, Telecom providers need to know the reasons of churn, which can be realized through the knowledge extracted from Telecom data. This paper surveys the commonly used data mining techniques to identify customer churn patterns. The recent literature in the area of predictive data mining techniques in customer churn behavior is reviewed and a discussion on the future research directions is offered.", "title": "" }, { "docid": "2a7dce77aaff56b810f4a80c32dc80ea", "text": "Automatically segmenting and classifying clinical free text into sections is an important first step to automatic information retrieval, information extraction and data mining tasks, as it helps to ground the significance of the text within. In this work we describe our approach to automatic section segmentation of clinical records such as hospital discharge summaries and radiology reports, along with section classification into pre-defined section categories. We apply machine learning to the problems of section segmentation and section classification, comparing a joint (one-step) and a pipeline (two-step) approach. 
We demonstrate that our systems perform well when tested on three data sets, two for hospital discharge summaries and one for radiology reports. We then show the usefulness of section information by incorporating it in the task of extracting comorbidities from discharge summaries.", "title": "" }, { "docid": "277cec4e1df1bfe15376cba3cd23fa85", "text": "In this paper, we report the development, evaluation, and application of ultra-small low-power wireless sensor nodes for advancing animal husbandry, as well as for innovation of medical technologies. A radio frequency identification (RFID) chip with hybrid interface and neglectable power consumption was introduced to enable switching of ON/OFF and measurement mode after implantation. A wireless power transmission system with a maximum efficiency of 70% and an access distance of up to 5 cm was developed to allow the sensor node to survive for a duration of several weeks from a few minutes' remote charge. The results of field tests using laboratory mice and a cow indicated the high accuracy of the collected biological data and bio-compatibility of the package. As a result of extensive application of the above technologies, a fully solid wireless pH sensor and a surgical navigation system using artificial magnetic field and a 3D MEMS magnetic sensor are introduced in this paper, and the preliminary experimental results are presented and discussed.", "title": "" }, { "docid": "127692e52e1dfb3d71be11e67b1013e6", "text": "Internet social networks may be an abundant source of opportunities giving space to the “parallel world” which can and, in many ways, does surpass the realty. People share data about almost every aspect of their lives, starting with giving opinions and comments on global problems and events, friends tagging at locations up to the point of multimedia personalized content. Therefore, decentralized mini-campaigns about educational, cultural, political and sports novelties could be conducted. In this paper we have applied clustering algorithm to social network profiles with the aim of obtaining separate groups of people with different opinions about political views and parties. For network case, where some centroids are interconnected, we have implemented edge constraints into classical k-means algorithm. This approach enables fast and effective information analysis about the present state of affairs, but also discovers new tendencies in observed political sphere. All profile data, friendships, fanpage likes and statuses with interactions are collected by already developed software for neurolinguistics social network analysis “Symbols”.", "title": "" }, { "docid": "a40e91ecac0f70e04cc1241797786e77", "text": "In much of his writings on poverty, famines, and malnutrition, Amartya Sen argues that Democracy is the best way to avoid famines partly because of its ability to use a free press, and that the Indian experience since independence confirms this. His argument is partly empirical, but also relies on some a priori assumptions about human motivation. In his “Democracy as a Universal Value” he claims: Famines are easy to prevent if there is a serious effort to do so, and a democratic government, facing elections and criticisms from opposition parties and independent newspapers, cannot help but make such an effort. 
Not surprisingly, while India continued to have famines under British rule right up to independence ...they disappeared suddenly with the establishment of a multiparty democracy and a free press.", "title": "" }, { "docid": "da04a904a236c9b4c3c335eb7c65246e", "text": "BACKGROUND\nIdentifying the emotional state is helpful in applications involving patients with autism and other intellectual disabilities; computer-based training, human computer interaction etc. Electrocardiogram (ECG) signals, being an activity of the autonomous nervous system (ANS), reflect the underlying true emotional state of a person. However, the performance of various methods developed so far lacks accuracy, and more robust methods need to be developed to identify the emotional pattern associated with ECG signals.\n\n\nMETHODS\nEmotional ECG data was obtained from sixty participants by inducing the six basic emotional states (happiness, sadness, fear, disgust, surprise and neutral) using audio-visual stimuli. The non-linear feature 'Hurst' was computed using Rescaled Range Statistics (RRS) and Finite Variance Scaling (FVS) methods. New Hurst features were proposed by combining the existing RRS and FVS methods with Higher Order Statistics (HOS). The features were then classified using four classifiers - Bayesian Classifier, Regression Tree, K- nearest neighbor and Fuzzy K-nearest neighbor. Seventy percent of the features were used for training and thirty percent for testing the algorithm.\n\n\nRESULTS\nAnalysis of Variance (ANOVA) conveyed that Hurst and the proposed features were statistically significant (p < 0.001). Hurst computed using RRS and FVS methods showed similar classification accuracy. The features obtained by combining FVS and HOS performed better with a maximum accuracy of 92.87% and 76.45% for classifying the six emotional states using random and subject independent validation respectively.\n\n\nCONCLUSIONS\nThe results indicate that the combination of non-linear analysis and HOS tend to capture the finer emotional changes that can be seen in healthy ECG data. This work can be further fine tuned to develop a real time system.", "title": "" }, { "docid": "10da9f0fd1be99878e280d261ea81ba3", "text": "The fuzzy vault scheme is a cryptographic primitive being considered for storing fingerprint minutiae protected. A well-known problem of the fuzzy vault scheme is its vulnerability against correlation attack -based cross-matching thereby conflicting with the unlinkability requirement and irreversibility requirement of effective biometric information protection. Yet, it has been demonstrated that in principle a minutiae-based fuzzy vault can be secured against the correlation attack by passing the to-beprotected minutiae through a quantization scheme. Unfortunately, single fingerprints seem not to be capable of providing an acceptable security level against offline attacks. To overcome the aforementioned security issues, this paper shows how an implementation for multiple fingerprints can be derived on base of the implementation for single finger thereby making use of a Guruswami-Sudan algorithm-based decoder for verification. 
The implementation, of which public C++ source code can be downloaded, is evaluated for single and various multi-finger settings using the MCYTFingerprint-100 database and provides security enhancing features such as the possibility of combination with password and a slow-down mechanism.", "title": "" }, { "docid": "c695f74a41412606e31c771ec9d2b6d3", "text": "Osteochondrosis dissecans (OCD) is a form of osteochondrosis limited to the articular epiphysis. The most commonly affected areas include, in decreasing order of frequency, the femoral condyles, talar dome and capitellum of the humerus. OCD rarely occurs in the shoulder joint, where it involves either the humeral head or the glenoid. The purpose of this report is to present a case with glenoid cavity osteochondritis dissecans and clinical and radiological outcome after arthroscopic debridement. The patient underwent arthroscopy to remove the loose body and to microfracture the cavity. The patient was followed-up for 4 years and she is pain-free with full range of motion and a stable shoulder joint.", "title": "" }, { "docid": "12350d889ee7e66eeda886e1e3b03ff5", "text": "With the rapid development of cloud storage, more and more data owners store their data on the remote cloud, that can reduce data owners’ overhead because the cloud server maintaining the data for them, e.g., storing, updating and deletion. However, that leads to data deletion becomes a security challenge because the cloud server may not delete the data honestly for financial incentives. Recently, plenty of research works have been done on secure data deletion. However, most of the existing methods can be summarized with the same protocol essentially, which called “one-bit-return” protocol: the storage server deletes the data and returns a one-bit result. The data owner has to believe the returned result because he cannot verify it. In this paper, we propose a novel blockchain-based data deletion scheme, which can make the deletion operation more transparent. In our scheme, the data owner can verify the deletion result no matter how malevolently the cloud server behaves. Besides, with the application of blockchain, the proposed scheme can achieve public verification without any trusted third party.", "title": "" }, { "docid": "6d813684a21e3ccc7fb2e09c866be1f1", "text": "Cross-site scripting (XSS) is a code injection attack that allows an attacker to execute malicious script in another user’s browser. Once the attacker gains control over the Website vulnerable to XSS attack, it can perform actions like cookie-stealing, malware-spreading, session-hijacking and malicious redirection. Malicious JavaScripts are the most conventional ways of performing XSS attacks. Although several approaches have been proposed, XSS is still a live problem since it is very easy to implement, but di cult to detect. In this paper, we propose an e↵ective approach for XSS attack detection. Our method focuses on balancing the load between client and the server. Our method performs an initial checking in the client side for vulnerability using divergence measure. If the suspicion level exceeds beyond a threshold value, then the request is discarded. Otherwise, it is forwarded to the proxy for further processing. In our approach we introduce an attribute clustering method supported by rank aggregation technique to detect confounded JavaScripts. 
The approach is validated using real life data.", "title": "" }, { "docid": "c68633905f8bbb759c71388819e9bfa9", "text": "An additional mechanical mechanism for a passive parallelogram-based exoskeleton arm-support is presented. It consists of several levers and joints and an attached extension coil spring. The additional mechanism has two favourable features. On the one hand it exhibits an almost iso-elastic behaviour whereby the lifting force of the mechanism is constant for a wide working range. Secondly, the value of the supporting force can be varied by a simple linear movement of a supporting joint. Furthermore a standard tension spring can be used to gain the desired behavior. The additional mechanism is a 4-link mechanism affixed to one end of the spring within the parallelogram arm-support. It has several geometrical parameters which influence the overall behaviour. A standard optimisation routine with constraints on the parameters is used to find an optimal set of geometrical parameters. Based on the optimized geometrical parameters a prototype was constructed and tested. It is a lightweight wearable system, with a weight of 1.9 kg. Detailed experiments reveal a difference between measured and calculated forces. These variations can be explained by a 60 % higher pre load force of the tension spring and a geometrical offset in the construction.", "title": "" }, { "docid": "a964f8aeb9d48c739716445adc58e98c", "text": "A passive aeration composting study was undertaken to investigate the effects of aeration pipe orientation (PO) and perforation size (PS) on some physico-chemical properties of chicken litter (chicken manure + sawdust) during composting. The experimental set up was a two-factor completely randomised block design with two pipe orientations: horizontal (Ho) and vertical (Ve), and three perforation sizes: 15, 25 and 35 mm diameter. The properties monitored during composting were pile temperature, moisture content (MC), pH, electrical conductivity (EC), total carbon (C(T)), total nitrogen (N(T)) and total phosphorus (P(T)). Moisture level in the piles was periodically replenished to 60% for efficient microbial activities. The results of the study showed that optimum composting conditions (thermophilic temperatures and sanitation requirements) were attained in all the piles. During composting, both PO and PS significantly affected pile temperature, moisture level, pH, C(T) loss and P(T) gain. EC was only affected by PO while N(T) was affected by PS. Neither PO nor PS had a significant effect on the C:N ratio. A vertical pipe was effective for uniform air distribution, hence, uniform composting rate within the composting pile. The final values showed that PO of Ve and PS of 35 mm diameter resulted in the least loss in N(T). The PO of Ho was as effective as Ve in the conservation of C(T) and P(T). Similarly, the three PSs were equally effective in the conservation of C(T) and P(T). In conclusion, the combined effects of PO and PS showed that treatments Ve35 and Ve15 were the most effective in minimizing N(T) loss.", "title": "" }, { "docid": "f2ad701c00cf7cff75ddb8eba073a408", "text": "One of the high efficiency motors that were introduced to the industry in recent times is Line Start Permanent Magnet Synchronous Motor (LS-PMSM). Fault detection of LS-PMSM is one of interesting issues. This article presents a new technique for broken rotor bar detection based on the values of Mean and RMS features obtained from captured start-up current in the time domain. 
The extracted features were analyzed using the analysis of variance method to predict the motor condition. The starting load condition and its interaction with broken rotor bar detection were also investigated. The statistical evaluation of means for each feature at different conditions was performed using Tukey's method as a post-hoc procedure. The results showed that the applied features were able to detect the broken rotor bar fault in LS-PMSMs.", "title": "" } ]
scidocsrr
fb6e29a915d2343b5b0810ff1c8b2bb1
Gaussian Process Regression for Fingerprinting based Localization
[ { "docid": "aa3da820fe9e98cb4f817f6a196c18e7", "text": "Location awareness is an important capability for mobile computing. Yet inexpensive, pervasive positioning—a requirement for wide-scale adoption of location-aware computing—has been elusive. We demonstrate a radio beacon-based approach to location, called Place Lab, that can overcome the lack of ubiquity and high-cost found in existing location sensing approaches. Using Place Lab, commodity laptops, PDAs and cell phones estimate their position by listening for the cell IDs of fixed radio beacons, such as wireless access points, and referencing the beacons’ positions in a cached database. We present experimental results showing that 802.11 and GSM beacons are sufficiently pervasive in the greater Seattle area to achieve 20-40 meter median accuracy with nearly 100% coverage measured by availability in people’s daily", "title": "" } ]
[ { "docid": "9eca9a069f8d1e7bf7c0f0b74e3129f0", "text": "With increasing use of GPS devices more and more location-based information is accessible. Thus not only more movements of people are tracked but also POI (point of interest) information becomes available in increasing geo-spatial density. To enable analysis of movement behavior, we present an approach to enrich trajectory data with semantic POI information and show how additional insights can be gained. Using a density-based clustering technique we extract 1.215 frequent destinations of ~150.000 user movements from a large e-mobility database. We query available context information from Foursquare, a popular location-based social network, to enrich the destinations with semantic background. As GPS measurements can be noisy, often more then one possible destination is found and movement patterns vary over time. Therefore we present highly interactive visualizations that enable an analyst to cope with the inherent geospatial and behavioral uncertainties.", "title": "" }, { "docid": "fbb6c8566fbe79bf8f78af0dc2dedc7b", "text": "Automatic essay evaluation (AEE) systems are designed to assist a teacher in the task of classroom assessment in order to alleviate the demands of manual subject evaluation. However, although numerous AEE systems are available, most of these systems do not use elaborate domain knowledge for evaluation, which limits their ability to give informative feedback to students and also their ability to constructively grade a student based on a particular domain of study. This paper is aimed at improving on the achievements of previous studies by providing a subject-focussed evaluation system that considers the domain knowledge while scoring and provides informative feedback to its user. The study employs a combination of techniques such as system design and modelling using Unified Modelling Language (UML), information extraction, ontology development, data management, and semantic matching in order to develop a prototype subject-focussed AEE system. The developed system was evaluated to determine its level of performance and usability. The result of the usability evaluation showed that the system has an overall mean rating of 4.17 out of maximum of 5, which indicates ‘good usability’. In terms of performance, the assessment done by the system was also found to have sufficiently high correlation with those done by domain experts, in addition to providing appropriate feedback to the user.", "title": "" }, { "docid": "1f333e1dbeec98d3733dd78dfd669933", "text": "Background and objectives: Food poisoning has been always a major concern in health system of every community and cream-filled products are one of the most widespread food poisoning causes in humans. In present study, we examined the preservative effect of the cinnamon oil in cream-filled cakes. Methods: Antimicrobial activity of Cinnamomum verum J. Presl (Cinnamon) bark essential oil was examined against five food-borne pathogens (Staphylococcus aureus, Escherichia coli, Candida albicans, Bacillus cereus and Salmonella typhimurium) to investigate its potential for use as a natural preservative in cream-filled baked goods. Chemical constituents of the oil were determined by gas chromatography/mass spectrometry. For evaluation of preservative sufficiency of the oil, pathogens were added to cream-filled cakes manually and 1 μL/mL of the essential oil was added to all samples except the blank. 
Results: Chemical constituents of the oil were determined by gas chromatography/mass spectrometry and twenty five components were identified where cinnamaldehyde (79.73%), linalool (4.08%), cinnamaldehyde para-methoxy (2.66%), eugenol (2.37%) and trans-caryophyllene (2.05%) were the major constituents. Cinnamon essential oil showed strong antimicrobial activity against selected pathogens in vitro and the minimum inhibitory concentration values against all tested microorganisms were determined as 0.5 μL/disc except for S. aureus for which, the oil was not effective in tested concentrations. After baking, no observable microorganism was observed in all susceptible microorganisms count in 72h stored samples. Conclusion: It was concluded that by analysing the sensory quality of the preserved food, cinnamon oil may be considered as a natural preservative in food industry, especially for cream-filled cakes and", "title": "" }, { "docid": "accda4f9cb11d92639cf2737c5e8fe78", "text": "Automatic segmentation in MR brain images is important for quantitative analysis in large-scale studies with images acquired at all ages. This paper presents a method for the automatic segmentation of MR brain images into a number of tissue classes using a convolutional neural network. To ensure that the method obtains accurate segmentation details as well as spatial consistency, the network uses multiple patch sizes and multiple convolution kernel sizes to acquire multi-scale information about each voxel. The method is not dependent on explicit features, but learns to recognise the information that is important for the classification based on training data. The method requires a single anatomical MR image only. The segmentation method is applied to five different data sets: coronal T2-weighted images of preterm infants acquired at 30 weeks postmenstrual age (PMA) and 40 weeks PMA, axial T2-weighted images of preterm infants acquired at 40 weeks PMA, axial T1-weighted images of ageing adults acquired at an average age of 70 years, and T1-weighted images of young adults acquired at an average age of 23 years. The method obtained the following average Dice coefficients over all segmented tissue classes for each data set, respectively: 0.87, 0.82, 0.84, 0.86, and 0.91. The results demonstrate that the method obtains accurate segmentations in all five sets, and hence demonstrates its robustness to differences in age and acquisition protocol.", "title": "" }, { "docid": "ab4fce4bd35bd8dd749bf0357c4b14b6", "text": "In this paper, we describe and analyze the performance of two iris-encoding techniques. The first technique is based on Principle Component Analysis (PCA) encoding method while the second technique is a combination of Principal Component Analysis with Independent Component Analysis (ICA) following it. Both techniques are applied globally. PCA and ICA are two well known methods used to process a variety of data. Though PCA has been used as a preprocessing step that reduces dimensions for obtaining ICA components for iris, it has never been analyzed in depth as an individual encoding method. In practice PCA and ICA are known as methods that extract global and fine features, respectively. It is shown here that when PCA and ICA methods are used to encode iris images, one of the critical steps required to achieve a good performance is compensation for rotation effect. We further study the effect of varying the image resolution level on the performance of the two encoding methods. 
The major motivation for this study is the cases in practice where images of the same or different irises taken at different distances have to be compared. The performance of encoding techniques is analyzed using the CASIA dataset. The original images are non-ideal and thus require a sequence of preprocessing steps prior to application of encoding methods. We plot a series of Receiver Operating Characteristics (ROCs) to demonstrate various effects on the performance of the iris-based recognition system implementing PCA and ICA encoding techniques.", "title": "" }, { "docid": "48c157638090b3168b6fd3cb50780184", "text": "Adverse reactions to drugs are among the most common causes of death in industrialized nations. Expensive clinical trials are not sufficient to uncover all of the adverse reactions a drug may cause, necessitating systems for post-marketing surveillance, or pharmacovigilance. These systems have typically relied on voluntary reporting by health care professionals. However, self-reported patient data has become an increasingly important resource, with efforts such as MedWatch from the FDA allowing reports directly from the consumer. In this paper, we propose mining the relationships between drugs and adverse reactions as reported by the patients themselves in user comments to health-related websites. We evaluate our system on a manually annotated set of user comments, with promising performance. We also report encouraging correlations between the frequency of adverse drug reactions found by our system in unlabeled data and the frequency of documented adverse drug reactions. We conclude that user comments pose a significant natural language processing challenge, but do contain useful extractable information which merits further exploration.", "title": "" }, { "docid": "d6fe99533c66075ffb85faf7c70475f0", "text": "Outlier detection has received significant attention in many applications, such as detecting credit card fraud or network intrusions. Most existing research focuses on numerical datasets, and cannot directly apply to categorical sets where there is little sense in calculating distances among data points. Furthermore, a number of outlier detection methods require quadratic time with respect to the dataset size and usually multiple dataset scans. These characteristics are undesirable for large datasets, potentially scattered over multiple distributed sites. In this paper, we introduce Attribute Value Frequency (A VF), a fast and scalable outlier detection strategy for categorical data. A VF scales linearly with the number of data points and attributes, and relies on a single data scan. AVF is compared with a list of representative outlier detection approaches that have not been contrasted against each other. Our proposed solution is experimentally shown to be significantly faster, and as effective in discovering outliers.", "title": "" }, { "docid": "341e0b7d04b333376674dac3c0888f50", "text": "Software archives contain historical information about the development process of a software system. Using data mining techniques rules can be extracted from these archives. In this paper we discuss how standard visualization techniques can be applied to interactively explore these rules. To this end we extended the standard visualization techniques for association rules and sequence rules to also show the hierarchical order of items. 
Clusters and outliers in the resulting visualizations provide interesting insights into the relation between the temporal development of a system and its static structure. As an example we look at the large software archive of the MOZILLA open source project. Finally we discuss what kind of regularities and anomalies we found and how these can then be leveraged to support software engineers.", "title": "" }, { "docid": "2ea626f0e1c4dfa3d5a23c80d8fbf70c", "text": "Although research studies in education show that use of technology can help student learning, its use is generally affected by certain barriers. In this paper, we first identify the general barriers typically faced by K-12 schools, both in the United States as well as other countries, when integrating technology into the curriculum for instructional purposes, namely: (a) resources, (b) institution, (c) subject culture, (d) attitudes and beliefs, (e) knowledge and skills, and (f) assessment. We then describe the strategies to overcome such barriers: (a) having a shared vision and technology integration plan, (b) overcoming the scarcity of resources, (c) changing attitudes and beliefs, (d) conducting professional development, and (e) reconsidering assessments. Finally, we identify several current knowledge gaps pertaining to the barriers and strategies of technology integration, and offer pertinent recommendations for future research.", "title": "" }, { "docid": "455a6fe5862e3271ac00057d1b569b11", "text": "Personalization technologies and recommender systems help online consumers avoid information overload by making suggestions regarding which information is most relevant to them. Most online shopping sites and many other applications now use recommender systems. Two new recommendation techniques leverage multicriteria ratings and improve recommendation accuracy as compared with single-rating recommendation approaches. Taking full advantage of multicriteria ratings in personalization applications requires new recommendation techniques. In this article, we propose several new techniques for extending recommendation technologies to incorporate and leverage multicriteria rating information.", "title": "" }, { "docid": "160e3c3fc9e3a13c4ee961e453532fd1", "text": "An encephalitis outbreak was investigated in Faridpur District, Bangladesh, in April-May 2004 to determine the cause of the outbreak and risk factors for disease. Biologic specimens were tested for Nipah virus. Surfaces were evaluated for Nipah virus contamination by using reverse transcription-PCR (RT-PCR). Thirty-six cases of Nipah virus illness were identified; 75% of case-patients died. Multiple peaks of illness occurred, and 33 case-patients had close contact with another Nipah virus patient before their illness. Results from a case-control study showed that contact with 1 patient carried the highest risk for infection (odds ratio 6.7, 95% confidence interval 2.9-16.8, p < 0.001). RT-PCR testing of environmental samples confirmed Nipah virus contamination of hospital surfaces. This investigation provides evidence for person-to-person transmission of Nipah virus. 
Capacity for person-to-person transmission increases the potential for wider spread of this highly lethal pathogen and highlights the need for infection control strategies for resource-poor settings.", "title": "" }, { "docid": "3a0275d7834a6fb1359bb7d3bef14e97", "text": "With the Internet of Things (IoT) becoming a major component of our daily life, understanding how to improve quality of service (QoS) in IoT networks is becoming a challenging problem. Currently most interaction between the IoT devices and the supporting back-end servers is done through large scale cloud data centers. However, with the exponential growth of IoT devices and the amount of data they produce, communication between \"things\" and cloud will be costly, inefficient, and in some cases infeasible. Fog computing serves as solution for this as it provides computation, storage, and networking resource for IoT, closer to things and users. One of the promising advantages of fog is reducing service delay for end user applications, whereas cloud provides extensive computation and storage capacity with a higher latency. Thus it is necessary to understand the interplay between fog computing and cloud, and to evaluate the effect of fog computing on the IoT service delay and QoS. In this paper we will introduce a general framework for IoT-fog-cloud applications, and propose a delay-minimizing policy for fog-capable devices that aims to reduce the service delay for IoT applications. We then develop an analytical model to evaluate our policy and show how the proposed framework helps to reduce IoT service delay.", "title": "" }, { "docid": "31cf550d44266e967716560faeb30f2b", "text": "The explosion in workload complexity and the recent slow-down in Moore’s law scaling call for new approaches towards efficient computing. Researchers are now beginning to use recent advances in machine learning in software optimizations, augmenting or replacing traditional heuristics and data structures. However, the space of machine learning for computer hardware architecture is only lightly explored. In this paper, we demonstrate the potential of deep learning to address the von Neumann bottleneck of memory performance. We focus on the critical problem of learning memory access patterns, with the goal of constructing accurate and efficient memory prefetchers. We relate contemporary prefetching strategies to n-gram models in natural language processing, and show how recurrent neural networks can serve as a drop-in replacement. On a suite of challenging benchmark datasets, we find that neural networks consistently demonstrate superior performance in terms of precision and recall. This work represents the first step towards practical neural-network based prefetching, and opens a wide range of exciting directions for machine learning in computer architecture research.", "title": "" }, { "docid": "8c3fb435c46c8ff3c509a2bfeb6625d7", "text": "The objective of this study was to quantify the electrode-tissue interface impedance of electrodes used for deep brain stimulation (DBS). We measured the impedance of DBS electrodes using electrochemical impedance spectroscopy in vitro in a carbonate- and phosphate-buffered saline solution and in vivo following acute implantation in the brain. The components of the impedance, including the series resistance (R(s)), the Faradaic resistance (R(f)) and the double layer capacitance (C(dl)), were estimated using an equivalent electrical circuit. 
Both R(f) and C(dl) decreased as the sinusoidal frequency was increased, but the ratio of the capacitive charge transfer to the Faradaic charge transfer was relatively insensitive to the change of frequency. R(f) decreased and C(dl) increased as the current density was increased, and above a critical current density the interface impedance became nonlinear. Thus, the magnitude of the interface impedance was strongly dependent on the intensity (pulse amplitude and duration) of stimulation. The temporal dependence and spatial non-uniformity of R(f) and C(dl) suggested that a distributed network, with each element of the network having dynamics tailored to a specific stimulus waveform, is required to describe adequately the impedance of the DBS electrode-tissue interface. Voltage transients to biphasic square current pulses were measured and suggested that the electrode-tissue interface did not operate in a linear range at clinically relevant current amplitudes, and that the assumption of the DBS electrode being ideally polarizable was not valid under clinical stimulating conditions.", "title": "" }, { "docid": "89a1e91c2ab1393f28a6381ba94de12d", "text": "In this paper, a simulation environment encompassing realistic propagation conditions and system parameters is employed in order to analyze the performance of future multigigabit indoor communication systems at tetrahertz frequencies. The influence of high-gain antennas on transmission aspects is investigated. Transmitter position for optimal signal coverage is also analyzed. Furthermore, signal coverage maps and achievable data rates are calculated for generic indoor scenarios with and without furniture for a variety of possible propagation conditions.", "title": "" }, { "docid": "0251f38f48c470e2e04fb14fc7ba34b2", "text": "The fast development of Internet of Things (IoT) and cyber-physical systems (CPS) has triggered a large demand of smart devices which are loaded with sensors collecting information from their surroundings, processing it and relaying it to remote locations for further analysis. The wide deployment of IoT devices and the pressure of time to market of device development have raised security and privacy concerns. In order to help better understand the security vulnerabilities of existing IoT devices and promote the development of low-cost IoT security methods, in this paper, we use both commercial and industrial IoT devices as examples from which the security of hardware, software, and networks are analyzed and backdoors are identified. A detailed security analysis procedure will be elaborated on a home automation system and a smart meter proving that security vulnerabilities are a common problem for most devices. Security solutions and mitigation methods will also be discussed to help IoT manufacturers secure their products.", "title": "" }, { "docid": "cb702c48a242c463dfe1ac1f208acaa2", "text": "In 2011, Lake Erie experienced the largest harmful algal bloom in its recorded history, with a peak intensity over three times greater than any previously observed bloom. Here we show that long-term trends in agricultural practices are consistent with increasing phosphorus loading to the western basin of the lake, and that these trends, coupled with meteorological conditions in spring 2011, produced record-breaking nutrient loads. 
An extended period of weak lake circulation then led to abnormally long residence times that incubated the bloom, and warm and quiescent conditions after bloom onset allowed algae to remain near the top of the water column and prevented flushing of nutrients from the system. We further find that all of these factors are consistent with expected future conditions. If a scientifically guided management plan to mitigate these impacts is not implemented, we can therefore expect this bloom to be a harbinger of future blooms in Lake Erie.", "title": "" }, { "docid": "a506f3f6c401f83eaba830abb20c8fff", "text": "The mechanisms governing the recruitment of functional glutamate receptors at nascent excitatory postsynapses following initial axon-dendrite contact remain unclear. We examined here the ability of neurexin/neuroligin adhesions to mobilize AMPA-type glutamate receptors (AMPARs) at postsynapses through a diffusion/trap process involving the scaffold molecule PSD-95. Using single nanoparticle tracking in primary rat and mouse hippocampal neurons overexpressing or lacking neuroligin-1 (Nlg1), a striking inverse correlation was found between AMPAR diffusion and Nlg1 expression level. The use of Nlg1 mutants and inhibitory RNAs against PSD-95 demonstrated that this effect depended on intact Nlg1/PSD-95 interactions. Furthermore, functional AMPARs were recruited within 1 h at nascent Nlg1/PSD-95 clusters assembled by neurexin-1β multimers, a process requiring AMPAR membrane diffusion. Triggering novel neurexin/neuroligin adhesions also caused a depletion of PSD-95 from native synapses and a drop in AMPAR miniature EPSCs, indicating a competitive mechanism. Finally, both AMPAR level at synapses and AMPAR-dependent synaptic transmission were diminished in hippocampal slices from newborn Nlg1 knock-out mice, confirming an important role of Nlg1 in driving AMPARs to nascent synapses. Together, these data reveal a mechanism by which membrane-diffusing AMPARs can be rapidly trapped at PSD-95 scaffolds assembled at nascent neurexin/neuroligin adhesions, in competition with existing synapses.", "title": "" }, { "docid": "149073f577d0e1fb380ae395ff1ca0c5", "text": "A complete kinematic model of the 5 DOF-Mitsubishi RV-M1 manipulator is presented in this paper. The forward kinematic model is based on the Modified Denavit-Hartenberg notation, and the inverse one is derived in closed form by fixing the orientation of the tool. A graphical interface is developed using MATHEMATICA software to illustrate the forward and inverse kinematics, allowing student or researcher to have hands-on of virtual graphical model that fully describe both the robot's geometry and the robot's motion in its workspace before to tackle any real task.", "title": "" }, { "docid": "4d6e9bc0a8c55e65d070d1776e781173", "text": "As electronic device feature sizes scale-down, the power consumed due to onchip communications as compared to computations will increase dramatically; likewise, the available bandwidth per computational operation will continue to decrease. Integrated photonics can offer savings in power and potential increase in bandwidth for onchip networks. Classical diffraction-limited photonics currently utilized in photonic integrated circuits (PIC) is characterized by bulky and inefficient devices compared to their electronic counterparts due to weak light matter interactions (LMI). Performance critical for the PIC is electro-optic modulators (EOM), whose performances depend inherently on enhancing LMIs. 
Current EOMs based on diffraction-limited optical modes often deploy ring resonators and are consequently bulky, photon-lifetime modulation limited, and power inefficient due to large electrical...", "title": "" } ]
scidocsrr
43c305b4a8dc2035524c87b78485678c
Document dissimilarity within and across languages: A benchmarking study
[ { "docid": "5bc9b4a952465bed83b5e84d6ab2bba8", "text": "We present a new algorithm for duplicate document detection thatuses collection statistics. We compare our approach with thestate-of-the-art approach using multiple collections. Thesecollections include a 30 MB 18,577 web document collectiondeveloped by Excite@Home and three NIST collections. The first NISTcollection consists of 100 MB 18,232 LA-Times documents, which isroughly similar in the number of documents to theExcite&at;Home collection. The other two collections are both 2GB and are the 247,491-web document collection and the TREC disks 4and 5---528,023 document collection. We show that our approachcalled I-Match, scales in terms of the number of documents andworks well for documents of all sizes. We compared our solution tothe state of the art and found that in addition to improvedaccuracy of detection, our approach executed in roughly one-fifththe time.", "title": "" } ]
[ { "docid": "41b7b8638fa1d3042873ca70f9c338f1", "text": "The LC50 (78, 85 ppm) and LC90 (88, 135 ppm) of Anagalis arvensis and Calendula micrantha respectively against Biomphalaria alexandrina were higher than those of the non-target snails, Physa acuta, Planorbis planorbis, Helisoma duryi and Melanoides tuberculata. In contrast, the LC50 of Niclosamide (0.11 ppm) and Copper sulphate (CuSO4) (0.42 ppm) against B. alexandrina were lower than those of the non-target snails. The mortalities percentage among non-target snails ranged between 0.0 & 20% when sublethal concentrations of CuSO4 against B. alexandrina mixed with those of C. micrantha and between 0.0 & 40% when mixed with A. arvensis. Mortalities ranged between 0.0 & 50% when Niclosamide was mixed with each of A. arvensis and C. micrantha. A. arvensis induced 100% mortality on Oreochromis niloticus after 48 hrs exposure and after 24 hrs for Gambusia affinis. C. micrantha was non-toxic to the fish. The survival rate of O. niloticus and G. affinis after 48 hrs exposure to 0.11 ppm of Niclosamide were 83.3% & 100% respectively. These rates were 91.7% & 93.3% respectively when each of the two fish species was exposed to 0.42 ppm of CuSO4. Mixture of sub-lethal concentrations of A. arvensis against B. alexandrina and those of Niclosamide or CuSO4 at ratios 10:40 & 25:25 induced 66.6% mortalities on O. niloticus and 83.3% at 40:10. These mixtures caused 100% mortalities on G. affinis at all ratios. A. arvensis CuSO4 mixtures at 10:40 induced 83.3% & 40% mortalities on O. niloticus and G. affinis respectively and 100% mortalities on both fish species at ratios 25:25 & 40:10. A mixture of sub-lethal concentrations of C. micrantha against B. alexandrina and of Niclosamide or CuSO4 caused mortalities of O. niloticus between 0.0 & 33.3% and between 5% & 35% of G. affinis. The residue of Cu in O. niloticus were 4.69, 19.06 & 25.37 mg/1kgm fish after 24, 48 & 72 hrs exposure to LC0 of CuSO4 against B. alexandrina respectively.", "title": "" }, { "docid": "673fea40e5cb12b54cc296b1a2c98ddb", "text": "Matrix completion is a rank minimization problem to recover a low-rank data matrix from a small subset of its entries. Since the matrix rank is nonconvex and discrete, many existing approaches approximate the matrix rank as the nuclear norm. However, the truncated nuclear norm is known to be a better approximation to the matrix rank than the nuclear norm, exploiting a priori target rank information about the problem in rank minimization. In this paper, we propose a computationally efficient truncated nuclear norm minimization algorithm for matrix completion, which we call TNNM-ALM. We reformulate the original optimization problem by introducing slack variables and considering noise in the observation. The central contribution of this paper is to solve it efficiently via the augmented Lagrange multiplier (ALM) method, where the optimization variables are updated by closed-form solutions. We apply the proposed TNNM-ALM algorithm to ghost-free high dynamic range imaging by exploiting the low-rank structure of irradiance maps from low dynamic range images. 
Experimental results on both synthetic and real visual data show that the proposed algorithm achieves significantly lower reconstruction errors and greater robustness against noise than the conventional approaches, while providing a substantial improvement in speed, making it applicable to a wide range of imaging applications.", "title": "" }, { "docid": "7aa9a5f9bde62b5aafb30cbd9c79f9e9", "text": "Traffic congestion is a serious issue. In existing systems, signal timings are fixed and independent of traffic density, and long red-light delays lead to congestion. In this paper, an IoT-based traffic control system is implemented in which signal timings are updated based on vehicle counts. The system includes a Wi-Fi transceiver module that transmits the vehicle count at the current signal to the next traffic signal; based on the traffic density at the previous signal, the timings of the next signal are adjusted. The system is built on Raspberry Pi and Arduino: image processing of the traffic video is done in MATLAB with Simulink support, and the vehicle counting is performed on the Raspberry Pi.", "title": "" }, { "docid": "7d38b4b2d07c24fdfb2306116017cd5e", "text": "Science, Technology, Engineering, Art, and Mathematics (STEAM) is the integration of art into Science, Technology, Engineering, and Mathematics (STEM). Connecting art to science makes learning more effective and innovative. This study aims to determine the increase in students' concept mastery after the application of STEAM education to learning on the theme of Water and Us. The research method is a one-group pretest-posttest design with class VII junior high school students (n = 37). The instruments were a 20-item multiple-choice test of concept mastery and an observation sheet of learning implementation. The results show an increase in concept mastery on the theme of Water and Us that is categorized as medium (⟨g⟩ = 0.46) after the application of the STEAM approach. It is concluded that applying the STEAM approach in learning can improve concept mastery.", "title": "" }, { "docid": "d29eba4f796cb642d64e73b76767e59d", "text": "In this paper, a novel segmentation and recognition approach to automatically extract street lighting poles from mobile LiDAR data is proposed. First, points on or around the ground are extracted and removed through a piecewise elevation histogram segmentation method. Then, a new graph-cut-based segmentation method is introduced to extract the street lighting poles from each cluster obtained through a Euclidean distance clustering algorithm. In addition to the spatial information, the street lighting pole's shape and the point's intensity information are also considered to formulate the energy function. Finally, a Gaussian-mixture-model-based method is introduced to recognize the street lighting poles from the candidate clusters. The proposed approach is tested on several point clouds collected by different mobile LiDAR systems. Experimental results show that the proposed method is robust to noise and achieves an overall performance of 90% in terms of true positive rate.", "title": "" }, { "docid": "1c03c9e9fb2697cbff3ee3063593d33c", "text": "Hand pose estimation from a monocular RGB image is an important but challenging task. A main factor affecting its performance is the lack of a sufficiently large training dataset with accurate hand-keypoint annotations. 
In this work, we circumvent this problem by proposing an effective method for generating realistic hand poses, and show that state-of-the-art algorithms for hand pose estimation can be greatly improved by utilizing the generated hand poses as training data. Specifically, we first adopt an augmented reality (AR) simulator to synthesize hand poses with accurate hand-keypoint labels. Although the synthetic hand poses come with precise joint labels, eliminating the need of manual annotations, they look unnatural and are not the ideal training data. To produce more realistic hand poses, we propose to blend a synthetic hand pose with a real background, such as arms and sleeves. To this end, we develop tonality-alignment generative adversarial networks (TAGANs), which align the tonality and color distributions between synthetic hand poses and real backgrounds, and can generate high quality hand poses. We evaluate TAGAN on three benchmarks, including the RHP, STB, and CMUPS hand pose datasets. With the aid of the synthesized poses, our method performs favorably against the state-ofthe-arts in both 2D and 3D hand pose estimations.", "title": "" }, { "docid": "eec15a5d14082d625824452bd070ec38", "text": "Food waste is a major environmental issue. Expired products are thrown away, implying that too much food is ordered compared to what is sold and that a more accurate prediction model is required within grocery stores. In this study the two prediction models Long Short-Term Memory (LSTM) and Autoregressive Integrated Moving Average (ARIMA) were compared on their prediction accuracy in two scenarios, given sales data for different products, to observe if LSTM is a model that can compete against the ARIMA model in the field of sales forecasting in retail. In the first scenario the models predict sales for one day ahead using given data, while they in the second scenario predict each day for a week ahead. Using the evaluation measures RMSE and MAE together with a t-test the results show that the difference between the LSTM and ARIMA model is not of statistical significance in the scenario of predicting one day ahead. However when predicting seven days ahead, the results show that there is a statistical significance in the difference indicating that the LSTM model has higher accuracy. This study therefore concludes that the LSTM model is promising in the field of sales forecasting in retail and able to compete against the ARIMA model.", "title": "" }, { "docid": "d3b6fcc353382c947cfb0b4a73eda0ef", "text": "Robust object tracking is a challenging task in computer vision. To better solve the partial occlusion issue, part-based methods are widely used in visual object trackers. However, due to the complicated online training and updating process, most of these part-based trackers cannot run in real-time. Correlation filters have been used in tracking tasks recently because of the high efficiency. However, the conventional correlation filter based trackers cannot deal with occlusion. Furthermore, most correlation filter based trackers fix the scale and rotation of the target which makes the trackers unreliable in long-term tracking tasks. In this paper, we propose a novel tracking method which track objects based on parts with multiple correlation filters. Our method can run in real-time. Additionally, the Bayesian inference framework and a structural constraint mask are adopted to enable our tracker to be robust to various appearance changes. 
Extensive experiments have been done to prove the effectiveness of our method.", "title": "" }, { "docid": "274373d46b748d92e6913496507353b1", "text": "This paper introduces a blind watermarking based on a convolutional neural network (CNN). We propose an iterative learning framework to secure robustness of watermarking. One loop of learning process consists of the following three stages: Watermark embedding, attack simulation, and weight update. We have learned a network that can detect a 1-bit message from a image sub-block. Experimental results show that this learned network is an extension of the frequency domain that is widely used in existing watermarking scheme. The proposed scheme achieved robustness against geometric and signal processing attacks with a learning time of one day.", "title": "" }, { "docid": "bca27f6e44d64824a0be41d5f2beea4d", "text": "In Infrastructure-as-a-Service (IaaS) clouds, intrusion detection systems (IDSes) increase their importance. To securely detect attacks against virtual machines (VMs), IDS offloading with VM introspection (VMI) has been proposed. In semi-trusted clouds, however, it is difficult to securely offload IDSes because there may exist insiders such as malicious system administrators. First, secure VM execution cannot coexist with IDS offloading although it has to be enabled to prevent information leakage to insiders. Second, offloaded IDSes can be easily disabled by insiders. To solve these problems, this paper proposes IDS remote offloading with remote VMI. Since IDSes can run at trusted remote hosts outside semi-trusted clouds, they cannot be disabled by insiders in clouds. Remote VMI enables IDSes at remote hosts to introspect VMs via the trusted hypervisor inside semi-trusted clouds. Secure VM execution can be bypassed by performing VMI in the hypervisor. Remote VMI preserves the integrity and confidentiality of introspected data between the hypervisor and remote hosts. The integrity of the hypervisor can be guaranteed by various existing techniques. We have developed RemoteTrans for remotely offloading legacy IDSes and confirmed that RemoteTrans could achieve surprisingly efficient execution of legacy IDSes at remote hosts.", "title": "" }, { "docid": "b31f5af2510461479d653be1ddadaa22", "text": "Integrating smart temperature sensors into digital platforms facilitates information to be processed and transmitted, and open up new applications. Furthermore, temperature sensors are crucial components in computing platforms to manage power-efficiency trade-offs reliably under a thermal budget. This paper presents a holistic perspective about smart temperature sensor design from system- to device-level including manufacturing concerns. Through smart sensor design evolutions, we identify some scaling paths and circuit techniques to surmount analog/mixed-signal design challenges in 32-nm and beyond. We close with opportunities to design smarter temperature sensors.", "title": "" }, { "docid": "0b19bd9604fae55455799c39595c8016", "text": "Our study concerns an important current problem, that of diffusion of information in social networks. This problem has received significant attention from the Internet research community in the recent times, driven by many potential applications such as viral marketing and sales promotions. In this paper, we focus on the target set selection problem, which involves discovering a small subset of influential players in a given social network, to perform a certain task of information diffusion. 
The target set selection problem manifests in two forms: 1) top-k nodes problem and 2) λ -coverage problem. In the top-k nodes problem, we are required to find a set of k key nodes that would maximize the number of nodes being influenced in the network. The λ-coverage problem is concerned with finding a set of key nodes having minimal size that can influence a given percentage λ of the nodes in the entire network. We propose a new way of solving these problems using the concept of Shapley value which is a well known solution concept in cooperative game theory. Our approach leads to algorithms which we call the ShaPley value-based Influential Nodes (SPINs) algorithms for solving the top-k nodes problem and the λ -coverage problem. We compare the performance of the proposed SPIN algorithms with well known algorithms in the literature. Through extensive experimentation on four synthetically generated random graphs and six real-world data sets (Celegans, Jazz, NIPS coauthorship data set, Netscience data set, High-Energy Physics data set, and Political Books data set), we show that the proposed SPIN approach is more powerful and computationally efficient.", "title": "" }, { "docid": "78a2bf1c2edec7ec9eb48f8b07dc9e04", "text": "The performance of the most commonly used metal antennas close to the human body is one of the limiting factors of the performance of bio-sensors and wireless body area networks (WBAN). Due to the high dielectric and conductivity contrast with respect to most parts of the human body (blood, skin, …), the range of most of the wireless sensors operating in RF and microwave frequencies is limited to 1–2 cm when attached to the body. In this paper, we introduce the very novel idea of liquid antennas, that is based on engineering the properties of liquids. This approach allows for the improvement of the range by a factor of 5–10 in a very easy-to-realize way, just modifying the salinity of the aqueous solution of the antenna. A similar methodology can be extended to the development of liquid RF electronics for implantable devices and wearable real-time bio-signal monitoring, since it can potentially lead to very flexible antenna and electronic configurations.", "title": "" }, { "docid": "eec819447de1d6482f9ff4a04fb73782", "text": "Estimating the travel time of any path (denoted by a sequence of connected road segments) in a city is of great importance to traffic monitoring, route planning, ridesharing, taxi/Uber dispatching, etc. However, it is a very challenging problem, affected by diverse complex factors, including spatial correlations, temporal dependencies, external conditions (e.g. weather, traffic lights). Prior work usually focuses on estimating the travel times of individual road segments or sub-paths and then summing up these times, which leads to an inaccurate estimation because such approaches do not consider road intersections/traffic lights, and local errors may accumulate. To address these issues, we propose an end-to-end Deep learning framework for Travel Time Estimation (called DeepTTE) that estimates the travel time of the whole path directly. More specifically, we present a geo-convolution operation by integrating the geographic information into the classical convolution, capable of capturing spatial correlations. By stacking recurrent unit on the geo-convoluton layer, our DeepTTE can capture the temporal dependencies as well. 
A multi-task learning component is given on the top of DeepTTE, that learns to estimate the travel time of both the entire path and each local path simultaneously during the training phase. Extensive experiments on two trajectory datasets show our DeepTTE significantly outperforms the state-of-the-art methods.", "title": "" }, { "docid": "35830166ddf17086a61ab07ec41be6b0", "text": "As the need for Human Computer Interaction (HCI) designers increases so does the need for courses that best prepare students for their future work life. Multidisciplinary teamwork is what very frequently meets the graduates in their new work situations. Preparing students for such multidisciplinary work through education is not easy to achieve. In this paper, we investigate ways to engage computer science students, majoring in design, use, and interaction (with technology), in design practices through an advanced graduate course in interaction design. Here, we take a closer look at how prior embodied and explicit knowledge of HCI that all of the students have, combined with understanding of design practice through the course, shape them as human-computer interaction designers. We evaluate the results of the effort in terms of increase in creativity, novelty of ideas, body language when engaged in design activities, and in terms of perceptions of how well this course prepared the students for the work practice outside of the university. Keywords—HCI education; interaction design; studio; design education; multidisciplinary teamwork.", "title": "" }, { "docid": "230d3cdc0bd444bfe5c910f32bd1a109", "text": "Programming is taught as foundation module at the beginning of undergraduate studies and/or during foundation year. Learning introductory programming languages such as Pascal, Basic / C (procedural) and C++ / Java (object oriented) requires learners to understand the underlying programming paradigm, syntax, logic and the structure. Learning to program is considered hard for novice learners and it is important to understand what makes learning program so difficult and how students learn.\n The prevailing focus on multimedia learning objects provides promising approach to create better knowledge transfer. This project aims to investigate: (a) students' perception in learning to program and the difficulties. (b) effectiveness of multimedia learning objects in learning introductory programming language in a face-to-face learning environment.", "title": "" }, { "docid": "06518637c2b44779da3479854fdbb84d", "text": "OBJECTIVE\nThe relative short-term efficacy and long-term benefits of pharmacologic versus psychotherapeutic interventions have not been studied for posttraumatic stress disorder (PTSD). This study compared the efficacy of a selective serotonin reup-take inhibitor (SSRI), fluoxetine, with a psychotherapeutic treatment, eye movement desensitization and reprocessing (EMDR), and pill placebo and measured maintenance of treatment gains at 6-month follow-up.\n\n\nMETHOD\nEighty-eight PTSD subjects diagnosed according to DSM-IV criteria were randomly assigned to EMDR, fluoxetine, or pill placebo. They received 8 weeks of treatment and were assessed by blind raters posttreatment and at 6-month follow-up. The primary outcome measure was the Clinician-Administered PTSD Scale, DSM-IV version, and the secondary outcome measure was the Beck Depression Inventory-II. 
The study ran from July 2000 through July 2003.\n\n\nRESULTS\nThe psychotherapy intervention was more successful than pharmacotherapy in achieving sustained reductions in PTSD and depression symptoms, but this benefit accrued primarily for adult-onset trauma survivors. At 6-month follow-up, 75.0% of adult-onset versus 33.3% of child-onset trauma subjects receiving EMDR achieved asymptomatic end-state functioning compared with none in the fluoxetine group. For most childhood-onset trauma patients, neither treatment produced complete symptom remission.\n\n\nCONCLUSIONS\nThis study supports the efficacy of brief EMDR treatment to produce substantial and sustained reduction of PTSD and depression in most victims of adult-onset trauma. It suggests a role for SSRIs as a reliable first-line intervention to achieve moderate symptom relief for adult victims of childhood-onset trauma. Future research should assess the impact of lengthier intervention, combination treatments, and treatment sequencing on the resolution of PTSD in adults with childhood-onset trauma.", "title": "" }, { "docid": "0a2d2a018348f1740a086977cf19ceb4", "text": "This paper describes the design of a UART (universal asynchronous receiver transmitter) based on VHDL. The UART is considered a low-speed, low-cost means of data exchange between a computer and its peripherals [1]. To overcome the problem of low-speed data transmission, a 16-bit UART is proposed in this paper. It works at a baud rate of 9600 bps. This results in an increased UART speed. The whole design is simulated with Xilinx ISE 8.2i software and the results are completely consistent with the UART protocol. Keywords— Baud rate generator, HDL, ISE8.2i, Receiver, Serial communication, Transmitter, Xilinx.", "title": "" }, { "docid": "b84f84961c655ea98920513bf3074241", "text": "This study took place in Sakarya Anatolian High School, Profession High School and Vocational High School for Industry (SAPHPHVHfI), where a flexible and non-routine organising style was being attempted. The management style was first introduced to a study group, which then spontaneously emerged as a natural team. The main purpose of the study is to evaluate five teams within the school, where team-based management has been practised, in accordance with Belbin (1981)'s team roles theory [9]. The study group consists of 28 people. The data were obtained from observations, interviews and the answers given to the Belbin Team Roles Self Perception Inventory (BTRSPI). Some of the findings of the study are: (1) there was no parallelism between the team and functional roles of the members of the five teams; (2) the team roles were distributed in a fairly balanced way, but most of the roles were played by members who were less inclined to play them; (3) there were very few members who played the plant role within the teams, and almost no one was inclined to play the leader role.", "title": "" }, { "docid": "19fbd4a685e7fc8c299447644f496d5f", "text": "E-learning web services are being created at an increasing rate. Therefore, their discovery is an important challenge. The choice of e-learning web services generally depends on pedagogic, financial and technological constraints. The Learning Quality ontology extends existing ontologies such as OWL-S to provide a semantically rich description of these constraints. 
However, due to the diversity of web services customers, other parameters must be considered during the discovery process, such as their preferences. For this purpose, the user profile takes into account to increase the degree of relevance of discovery results. We also present a modeling scenario to illustrate how our ontology can be used.", "title": "" } ]
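One of the passages in this record, the Shapley-value-based influential nodes work (SPIN), ranks nodes by their approximate Shapley value in an influence game. The sketch below illustrates the general recipe with a deliberately simplified one-hop coverage game and a small Monte Carlo sampling budget; both are assumptions for illustration, not the paper's exact diffusion model or estimator.

```python
import random

def covered(graph, seeds):
    """Nodes reached by a seed set under a simple one-hop coverage model."""
    reached = set(seeds)
    for s in seeds:
        reached.update(graph.get(s, ()))
    return reached

def shapley_influence(graph, samples=200, seed=0):
    """Monte Carlo Shapley values for the game v(S) = |covered(S)|:
    average each node's marginal gain over random permutations."""
    rng = random.Random(seed)
    nodes = list(graph)
    value = {v: 0.0 for v in nodes}
    for _ in range(samples):
        order = nodes[:]
        rng.shuffle(order)
        prefix, size_before = [], 0
        for v in order:
            prefix.append(v)
            size_after = len(covered(graph, prefix))
            value[v] += size_after - size_before
            size_before = size_after
    return {v: total / samples for v, total in value.items()}

def top_k(graph, k, **kwargs):
    """Rank nodes by approximate Shapley value and return the k best."""
    sv = shapley_influence(graph, **kwargs)
    return sorted(sv, key=sv.get, reverse=True)[:k]

if __name__ == "__main__":
    g = {1: [2, 3, 4], 2: [1], 3: [1], 4: [1, 5], 5: [4], 6: []}
    print(top_k(g, 2))   # the hub node 1 should rank first
```

The same ranking can be thresholded to answer the λ-coverage question mentioned in the passage: keep adding ranked nodes until the covered fraction of the network reaches λ.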
scidocsrr
15a2ffc1ca94feb12059ba5d4285a66c
Learning Decision Trees Using the Area Under the ROC Curve
[ { "docid": "e9017607252973b36f9d4c3c659fe858", "text": "In this paper, we address the problem of retrospectively pruning decision trees induced from data, according to a topdown approach. This problem has received considerable attention in the areas of pattern recognition and machine learning, and many distinct methods have been proposed in literature. We make a comparative study of six well-known pruning methods with the aim of understanding their theoretical foundations, their computational complexity, and the strengths and weaknesses of their formulation. Comments on the characteristics of each method are empirically supported. In particular, a wide experimentation performed on several data sets leads us to opposite conclusions on the predictive accuracy of simplified trees from some drawn in the literature. We attribute this divergence to differences in experimental designs. Finally, we prove and make use of a property of the reduced error pruning method to obtain an objective evaluation of the tendency to overprune/underprune observed in each method. Index Terms —Decision trees, top-down induction of decision trees, simplification of decision trees, pruning and grafting operators, optimal pruning, comparative studies. —————————— ✦ ——————————", "title": "" }, { "docid": "a9bc9d9098fe852d13c3355ab6f81edb", "text": "The area under the ROC curve, or the equivalent Gini index, is a widely used measure of performance of supervised classification rules. It has the attractive property that it side-steps the need to specify the costs of the different kinds of misclassification. However, the simple form is only applicable to the case of two classes. We extend the definition to the case of more than two classes by averaging pairwise comparisons. This measure reduces to the standard form in the two class case. We compare its properties with the standard measure of proportion correct and an alternative definition of proportion correct based on pairwise comparison of classes for a simple artificial case and illustrate its application on eight data sets. On the data sets we examined, the measures produced similar, but not identical results, reflecting the different aspects of performance that they were measuring. Like the area under the ROC curve, the measure we propose is useful in those many situations where it is impossible to give costs for the different kinds of misclassification.", "title": "" } ]
[ { "docid": "9a48e31b5911e68b11c846d543f897be", "text": "Today’s smartphone users face a security dilemma: many apps they install operate on privacy-sensitive data, although they might originate from developers whose trustworthiness is hard to judge. Researchers have addressed the problem with more and more sophisticated static and dynamic analysis tools as an aid to assess how apps use private user data. Those tools, however, rely on the manual configuration of lists of sources of sensitive data as well as sinks which might leak data to untrusted observers. Such lists are hard to come by. We thus propose SUSI, a novel machine-learning guided approach for identifying sources and sinks directly from the code of any Android API. Given a training set of hand-annotated sources and sinks, SUSI identifies other sources and sinks in the entire API. To provide more fine-grained information, SUSI further categorizes the sources (e.g., unique identifier, location information, etc.) and sinks (e.g., network, file, etc.). For Android 4.2, SUSI identifies hundreds of sources and sinks with over 92% accuracy, many of which are missed by current information-flow tracking tools. An evaluation of about 11,000 malware samples confirms that many of these sources and sinks are indeed used. We furthermore show that SUSI can reliably classify sources and sinks even in new, previously unseen Android versions and components like Google Glass or", "title": "" }, { "docid": "b15dcda2b395d02a2df18f6d8bfa3b19", "text": "We present a method for human pose tracking that learns explicitly about the dynamic effects of human motion on joint appearance. In contrast to previous techniques which employ generic tools such as dense optical flow or spatiotemporal smoothness constraints to pass pose inference cues between frames, our system instead learns to predict joint displacements from the previous frame to the current frame based on the possibly changing appearance of relevant pixels surrounding the corresponding joints in the previous frame. This explicit learning of pose deformations is formulated by incorporating concepts from human pose estimation into an optical flow-like framework. With this approach, state-of-the-art performance is achieved on standard benchmarks for various pose tracking tasks including 3D body pose tracking in RGB video, 3D hand pose tracking in depth sequences, and 3D hand gesture tracking in RGB video.", "title": "" }, { "docid": "8025825afec9258d9a0a3da1f609f4ef", "text": "The task of measuring sentence similarity is defined as determining how similar the meanings of two sentences are. Computing sentence similarity is not a trivial task, due to the variability of natural language expressions. Measuring semantic similarity of sentences is closely related to semantic similarity between words. It makes a relationship between a word and the sentence through their meanings. The intention is to enhance the concepts of semantics over the syntactic measures that are able to categorize the pair of sentences effectively. Semantic similarity plays a vital role in Natural language processing, Informational Retrieval, Text Mining, Q & A systems, text-related research and application area. Traditional similarity measures are based on the syntactic features and other path based measures. In this project, we evaluated and tested three different semantic similarity approaches like cosine similarity, path based approach (wu – palmer and shortest path based), and feature based approach. 
Our proposed approaches exploit preprocessing of the pair of sentences, which identifies the bag of words, and then apply similarity measures such as cosine similarity and path-based similarity. The main contributions of our approach are a comparison of existing similarity measures and a feature-based measure built on WordNet. In the feature-based approach, we perform tagging and lemmatization and generate the similarity score based on the nouns and verbs. We evaluate our project output by comparing the existing measures under different thresholds and by comparing the three approaches. Finally, we conclude that the feature-based measure generates better semantic scores.", "title": "" }, { "docid": "cf1720877ddc4400bdce2a149b5ec8b4", "text": "How do we find patterns in author-keyword associations, evolving over time? Or in data cubes (tensors), with product-branch-customer sales information? And more generally, how do we summarize high-order data cubes (tensors)? How do we incrementally update these patterns over time? Matrix decompositions, like principal component analysis (PCA) and variants, are invaluable tools for mining, dimensionality reduction, feature selection, and rule identification in numerous settings like streaming data, text, graphs, social networks, and many more. However, they have only two orders (i.e., matrices, like author and keyword in the previous example).\n We propose to envision such higher-order data as tensors, and tap the vast literature on the topic. However, these methods do not necessarily scale up, let alone operate on semi-infinite streams. Thus, we introduce a general framework, incremental tensor analysis (ITA), which efficiently computes a compact summary for high-order and high-dimensional data, and also reveals the hidden correlations. Three variants of ITA are presented: (1) dynamic tensor analysis (DTA); (2) streaming tensor analysis (STA); and (3) window-based tensor analysis (WTA). In particular, we explore several fundamental design trade-offs such as space efficiency, computational cost, approximation accuracy, time dependency, and model complexity.\n We implement all our methods and apply them in several real settings, such as network anomaly detection, multiway latent semantic indexing on citation networks, and correlation study on sensor measurements. Our empirical studies show that the proposed methods are fast and accurate and that they find interesting patterns and outliers on the real datasets.", "title": "" }, { "docid": "fe7668dd82775cf02116faacd1dd945f", "text": "In recent years, the advent of unmanned aerial vehicles (UAVs) for civilian remote sensing purposes has generated a lot of interest because of the various new applications they can offer. One of them is the automatic detection and counting of cars. In this paper, we propose a novel car detection method. It starts with a feature extraction process based on the scale-invariant feature transform (SIFT), which identifies a set of keypoints in the image and describes them appropriately. Subsequently, the process discriminates between keypoints assigned to cars and those associated with all remaining objects by means of a support vector machine (SVM) classifier. Experiments have been conducted on a real UAV scene. They show that the proposed method achieves promising detection performance.", "title": "" }, { "docid": "5a063c2373aa849b59e20e6115a4df54", "text": "A GUI skeleton is the starting point for implementing a UI design image. 
To obtain a GUI skeleton from a UI design image, developers have to visually understand UI elements and their spatial layout in the image, and then translate this understanding into proper GUI components and their compositions. Automating this visual understanding and translation would be beneficial for bootstrapping mobile GUI implementation, but it is a challenging task due to the diversity of UI designs and the complexity of the GUI skeletons to generate. Existing tools are rigid as they depend on heuristically designed visual understanding and GUI generation rules. In this paper, we present a neural machine translator that combines recent advances in computer vision and machine translation for translating a UI design image into a GUI skeleton. Our translator learns to extract visual features in UI images, encode these features' spatial layouts, and generate GUI skeletons in a unified neural network framework, without requiring manual rule development. For training our translator, we develop an automated GUI exploration method to automatically collect large-scale UI data from real-world applications. We carry out extensive experiments to evaluate the accuracy, generality and usefulness of our approach.", "title": "" }, { "docid": "3d862e488798629d633f78260a569468", "text": "Training workshops and professional meetings are important tools for capacity building and professional development. These social events provide professionals and educators with a platform where they can discuss and exchange constructive ideas, and receive feedback. In particular, competition-based training workshops, where participants compete in solving similar and common challenging problems, are effective tools for stimulating students' learning and aspirations. This paper reports the results of a two-day training workshop where memory and disk forensics were taught using a competition-based security educational tool. The workshop included training sessions for professionals, educators, and students to learn the features of Tracer FIRE, a competition-based digital forensics and assessment tool developed by Sandia National Laboratories. The results indicate that competition-based training can be very effective in stimulating students' motivation to learn. However, extra caution should be taken when delivering these types of training workshops. Keywords—cyber security, digital forensics, participatory training workshop, competition-based learning.", "title": "" }, { "docid": "95d1a35068e7de3293f8029e8b8694f9", "text": "Botnets are one of the major threats on the Internet, used for committing cybercrimes such as DDoS attacks, stealing sensitive information, and spreading spam. Detecting modern botnets that continuously evolve to evade detection is a challenging issue. In this paper, we propose a machine-learning-based botnet detection system that is shown to be effective in identifying P2P botnets. Our approach extracts a convolutional version of effective flow-based features, and trains a classification model using a feed-forward artificial neural network. The experimental results show that the detection accuracy using the convolutional features is better than that using the traditional features. The system achieves 94.7% detection accuracy and a 2.2% false positive rate on the known P2P botnet datasets. Furthermore, our system provides additional confidence testing to enhance the performance of botnet detection. 
It further classifies the network traffic of insufficient confidence in the neural network. The experiment shows that this stage can increase the detection accuracy up to 98.6% and decrease the false positive rate up to 0.5%.", "title": "" }, { "docid": "802b1bf3a263d9641dc7dc689f7eab10", "text": "Type I membrane oscillators such as the Connor model (Connor et al. 1977) and the Morris-Lecar model (Morris and Lecar 1981) admit very low frequency oscillations near the critical applied current. Hansel et al. (1995) have numerically shown that synchrony is difficult to achieve with these models and that the phase resetting curve is strictly positive. We use singular perturbation methods and averaging to show that this is a general property of Type I membrane models. We show in a limited sense that so called Type II resetting occurs with models that obtain rhythmicity via a Hopf bifurcation. We also show the differences between synapses that act rapidly and those that act slowly and derive a canonical form for the phase interactions.", "title": "" }, { "docid": "2ceedf1be1770938c94892c80ae956e4", "text": "Although there is interest in the educational potential of online multiplayer games and virtual worlds, there is still little evidence to explain specifically what and how people learn from these environments. This paper addresses this issue by exploring the experiences of couples that play World of Warcraft together. Learning outcomes were identified (involving the management of ludic, social and material resources) along with learning processes, which followed Wenger’s model of participation in Communities of Practice. Comparing this with existing literature suggests that productive comparisons can be drawn with the experiences of distance education students and the social pressures that affect their participation. Introduction Although there is great interest in the potential that computer games have in educational settings (eg, McFarlane, Sparrowhawk & Heald, 2002), and their relevance to learning more generally (eg, Gee, 2003), there has been relatively little in the way of detailed accounts of what is actually learnt when people play (Squire, 2002), and still less that relates such learning to formal education. In this paper, we describe a study that explores how people learn when they play the massively multiplayer online role-playing game (MMORPG), World of Warcraft. Detailed, qualitative research was undertaken with couples to explore their play, adopting a social perspective on learning. The paper concludes with a discussion that relates this to formal curricula and considers the implications for distance learning. British Journal of Educational Technology Vol 40 No 3 2009 444–457 doi:10.1111/j.1467-8535.2009.00948.x © 2009 The Authors. Journal compilation © 2009 Becta. Published by Blackwell Publishing, 9600 Garsington Road, Oxford OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA. Background Researchers have long been interested in games and learning. There is, for example, a tradition of work within psychology exploring what makes games motivating, and relating this to learning (eg, Malone & Lepper, 1987). Games have been recently featured in mainstream educational policy (eg, DfES, 2005), and it has been suggested (eg, Gee, 2003) that they provide a model that should inform educational practice more generally. However, research exploring how games can be used in formal education suggests that the potential value of games to support learning is not so easy to realise. 
McFarlane et al (2002, p. 16), for example, argued that ‘the greatest obstacle to integrating games into the curriculum is the mismatch between the skills and knowledge developed in games, and those recognised explicitly within the school system’. Mitchell and Savill-Smith (2004) noted that although games have been used to support various kinds of learning (eg, recall of content, computer literacy, strategic skills), such uses were often problematic, being complicated by the need to integrate games into existing educational contexts. Furthermore, games specifically designed to be educational were ‘typically disliked’ (p. 44) as well as being expensive to produce. Until recently, research on the use of games in education tended to focus on ‘stand alone’ or single player games. Such games can, to some extent, be assessed in terms of their content coverage or instructional design processes, and evaluated for their ‘fit’ with a given curriculum (eg, Kirriemuir, 2002). Gaming, however, is generally a social activity, and this is even more apparent when we move from a consideration of single player games to a focus on multiplayer, online games. Viewing games from a social perspective opens the possibility of understanding learning as a social achievement, not just a process of content acquisition or skills development (Squire, 2002). In this study, we focus on a particular genre of online, multiplayer game: an MMORPG. MMORPGs incorporate structural elements drawn from table-top role-playing games (Dungeons & Dragons being the classic example). Play takes place in an expansive and persistent graphically rendered world. Players form teams and guilds, undertake group missions, meet in banks and auction houses, chat, congregate in virtual cities and engage in different modes of play, which involve various forms of collaboration and competition. As Squire noted (2002), socially situated accounts of actual learning in games (as opposed to what they might, potentially, help people to learn) have been lacking, partly because the topic is so complex. How, indeed, should the ‘game’ be understood—is it limited to the rules, or the player’s interactions with these rules? Does it include other players, and all possible interactions, and extend to out-of-game related activities and associated materials such as fan forums? Such questions have methodological implications, and hint at the ambiguities that educators working with virtual worlds might face (Carr, Oliver & Burn, 2008). Learning in virtual worlds 445 © 2009 The Authors. Journal compilation © 2009 Becta. Work in this area is beginning to emerge, particularly in relation to the learning and mentoring that takes place within player ‘guilds’ and online clans (see Galarneau, 2005; Steinkuehler, 2005). However, it is interesting to note that the research emerging from a digital game studies perspective, including much of the work cited thus far, is rarely utilised by educators researching the pedagogic potentials of virtual worlds such as Second Life. This study is informed by and attempts to speak to both of these communities. Methodology The purpose of this study was to explore how people learn in such virtual worlds in general. It was decided that focusing on a MMORPG such as World of Warcraft would be practical and offer a rich opportunity to study learning. MMORPGs are games; they have rules and goals, and particular forms of progression. 
Expertise in a virtual world such as Second Life is more dispersed, because the range of activities is that much greater (encompassing building, playing, scripting, creating machinima or socialising, for instance). Each of these activities would involve particular forms of expertise. The ‘curriculum’ proposed by World of Warcraft is more specified. It was important to approach learning practices in this game without divorcing such phenomena from the real-world contexts in which play takes place. In order to study players’ accounts of learning and the links between their play and other aspects of their social lives, we sought participants who would interact with each other both in the context of the game and outside of it. To this end, we recruited couples that play together in the virtual environment of World of Warcraft, while sharing real space. This decision was taken to manage the potential complexity of studying social settings: couples were the simplest stable social formation that we could identify who would interact both in the context of the game and outside of this too. Interviews were conducted with five couples. These were theoretically sampled, to maximise diversity in players’ accounts (as with any theoretically sampled study, this means that no claims can be made about prevalence or typicality). Players were recruited through online guilds and real-world social networks. The first two sets of participants were sampled for convenience (two heterosexual couples); the rest were invited to participate in order to broaden this sample (one couple was chosen because they shared a single account, one where a partner had chosen to stop playing and one mother–son pairing). All participants were adults, and conventional ethical procedures to ensure informed consent were followed, as specified in the British Educational Research Association guidelines. The couples were interviewed in the game world at a location of their choosing. The interviews, which were semi-structured, were chat-logged and each lasted 60–90 minutes. The resulting transcripts were split into self-contained units (typically a single statement, or a question and answer, or a short exchange) and each was categorised 446 British Journal of Educational Technology Vol 40 No 3 2009 © 2009 The Authors. Journal compilation © 2009 Becta. thematically. The initial categories were then jointly reviewed in order to consolidate and refine them, cross-checking them with the source transcripts to ensure their relevance and coherence. At this stage, the categories included references to topics such as who started first, self-assessments of competence, forms of help, guilds, affect, domestic space and assets, ‘alts’ (multiple characters) and so on. These were then reviewed to develop a single category that might provide an overview or explanation of the process. It should be noted that although this approach was informed by ‘grounded theory’ processes as described in Glaser and Strauss (1967), it does not share their positivistic stance on the status of the model that has been developed. Instead, it accords more closely with the position taken by Charmaz (2000), who recognises the central role of the researcher in shaping the data collected and making sense of it. What is produced therefore is seen as a socially constructed model, based on personal narratives, rather than an objective account of an independent reality. 
Reviewing the categories that emerged in this case led to ‘management of resources’ being selected as a general marker of learning. As players moved towards greater competence, they identified and leveraged an increasingly complex array of in-game resources, while also negotiating real-world resources and demands. To consider this framework in greater detail, ‘management of resources’ was subdivided into three categories: ludic (concerning the skills, knowledge and practices of game play), social and material (concerning physical resources such as the embodied setting for play) (see Carr & Oliver, 2008). Using this explanation of learning, the transcripts were re-reviewed in order to ", "title": "" }, { "docid": "39673b789ee8d8c898c93b7627b31f0a", "text": "In this position paper, we initiate a systematic treatment of reaching consensus in a permissionless network. We prove several simple but hopefully insightful lower bounds that demonstrate exactly why reaching consensus in a permission-less setting is fundamentally more difficult than the classical, permissioned setting. We then present a simplified proof of Nakamoto's blockchain which we recommend for pedagogical purposes. Finally, we survey recent results including how to avoid well-known painpoints in permissionless consensus, and how to apply core ideas behind blockchains to solve consensus in the classical, permissioned setting and meanwhile achieve new properties that are not attained by classical approaches.", "title": "" }, { "docid": "590ad5ce089e824d5e9ec43c54fa3098", "text": "The abstraction of a shared memory is of growing importance in distributed computing systems. Traditional memory consistency ensures that all processes agree on a common order of all operations on memory. Unfortunately, providing these guarantees entails access latencies that prevent scaling to large systems. This paper weakens such guarantees by definingcausal memory, an abstraction that ensures that processes in a system agree on the relative ordering of operations that arecausally related. Because causal memory isweakly consistent, it admits more executions, and hence more concurrency, than either atomic or sequentially consistent memories. This paper provides a formal definition of causal memory and gives an implementation for message-passing systems. In addition, it describes a practical class of programs that, if developed for a strongly consistent memory, run correctly with causal memory.", "title": "" }, { "docid": "01800567648367a34aa80a3161a21871", "text": "Single-image haze-removal is challenging due to limited information contained in one single image. Previous solutions largely rely on handcrafted priors to compensate for this deficiency. Recent convolutional neural network (CNN) models have been used to learn haze-related priors but they ultimately work as advanced image filters. In this paper we propose a novel semantic approach towards single image haze removal. Unlike existing methods, we infer color priors based on extracted semantic features. We argue that semantic context can be exploited to give informative cues for (a) learning color prior on clean image and (b) estimating ambient illumination. This design allowed our model to recover clean images from challenging cases with strong ambiguity, e.g. saturated illumination color and sky regions in image. 
In experiments, we validate our approach upon synthetic and real hazy images, where our method showed superior performance over state-of-the-art approaches, suggesting semantic information facilitates the haze removal task.", "title": "" }, { "docid": "3afa5356d956e2a525836b873442aa6b", "text": "The problem of secure data processing by means of a neural network (NN) is addressed. Secure processing refers to the possibility that the NN owner does not get any knowledge about the processed data since they are provided to him in encrypted format. At the same time, the NN itself is protected, given that its owner may not be willing to disclose the knowledge embedded within it. The considered level of protection ensures that the data provided to the network and the network weights and activation functions are kept secret. Particular attention is given to prevent any disclosure of information that could bring a malevolent user to get access to the NN secrets by properly inputting fake data to any point of the proposed protocol. With respect to previous works in this field, the interaction between the user and the NN owner is kept to a minimum with no resort to multiparty computation protocols.", "title": "" }, { "docid": "8fb37cad9ad964598ed718f0c32eaff1", "text": "A planar W-band monopulse antenna array is designed based on the substrate integrated waveguide (SIW) technology. The sum-difference comparator, 16-way divider and 32 × 32 slot array antenna are all integrated on a single dielectric substrate in the compact layout through the low-cost PCB process. Such a substrate integrated monopulse array is able to operate over 93 ~ 96 GHz with narrow-beam and high-gain. The maximal gain is measured to be 25.8 dBi, while the maximal null-depth is measured to be - 43.7 dB. This SIW monopulse antenna not only has advantages of low-cost, light, easy-fabrication, etc., but also has good performance validated by measurements. It presents an excellent candidate for W-band directional-finding systems.", "title": "" }, { "docid": "7f27e9b29e6ed2800ef850e6022d29ba", "text": "In 2004, the US Center for Disease Control (CDC) published a paper showing that there is no link between the age at which a child is vaccinated with MMR and the vaccinated children's risk of a subsequent diagnosis of autism. One of the authors, William Thompson, has now revealed that statistically significant information was deliberately omitted from the paper. Thompson first told Dr S Hooker, a researcher on autism, about the manipulation of the data. Hooker analysed the raw data from the CDC study afresh. He confirmed that the risk of autism among African American children vaccinated before the age of 2 years was 340% that of those vaccinated later.", "title": "" }, { "docid": "7e325afeaaf3cc548bca023e35fbd203", "text": "The short length of the estrous cycle of rats makes them ideal for investigation of changes occurring during the reproductive cycle. The estrous cycle lasts four days and is characterized as: proestrus, estrus, metestrus and diestrus, which may be determined according to the cell types observed in the vaginal smear. Since the collection of vaginal secretion and the use of stained material generally takes some time, the aim of the present work was to provide researchers with some helpful considerations about the determination of the rat estrous cycle phases in a fast and practical way. 
Vaginal secretion of thirty female rats was collected every morning during a month and unstained native material was observed using the microscope without the aid of the condenser lens. Using the 10 x objective lens, it was easier to analyze the proportion among the three cellular types, which are present in the vaginal smear. Using the 40 x objective lens, it is easier to recognize each one of these cellular types. The collection of vaginal lavage from the animals, the observation of the material, in the microscope, and the determination of the estrous cycle phase of all the thirty female rats took 15-20 minutes.", "title": "" }, { "docid": "5df22a15a1bd768782214647b1b87ebe", "text": "Hierarchical temporal memory (HTM) provides a theoretical framework that models several key computational principles of the neocortex. In this paper, we analyze an important component of HTM, the HTM spatial pooler (SP). The SP models how neurons learn feedforward connections and form efficient representations of the input. It converts arbitrary binary input patterns into sparse distributed representations (SDRs) using a combination of competitive Hebbian learning rules and homeostatic excitability control. We describe a number of key properties of the SP, including fast adaptation to changing input statistics, improved noise robustness through learning, efficient use of cells, and robustness to cell death. In order to quantify these properties we develop a set of metrics that can be directly computed from the SP outputs. We show how the properties are met using these metrics and targeted artificial simulations. We then demonstrate the value of the SP in a complete end-to-end real-world HTM system. We discuss the relationship with neuroscience and previous studies of sparse coding. The HTM spatial pooler represents a neurally inspired algorithm for learning sparse representations from noisy data streams in an online fashion.", "title": "" }, { "docid": "acdd0043b764fe8bb9904ea6ca71e5cf", "text": "We investigate the task of 2D articulated human pose estimation in unconstrained still images. This is extremely challenging because of variation in pose, anatomy, clothing, and imaging conditions. Current methods use simple models of body part appearance and plausible configurations due to limitations of available training data and constraints on computational expense. We show that such models severely limit accuracy. Building on the successful pictorial structure model (PSM) we propose richer models of both appearance and pose, using state-of-the-art discriminative classifiers without introducing unacceptable computational expense. We introduce a new annotated database of challenging consumer images, an order of magnitude larger than currently available datasets, and demonstrate over 50% relative improvement in pose estimation accuracy over a stateof-the-art method.", "title": "" } ]
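The HTM spatial pooler passage in this record describes overlap scoring, competitive (k-winners) sparsification, Hebbian permanence updates and homeostatic boosting. The toy class below sketches those four ingredients in NumPy; the parameter values, boosting rule and update constants are illustrative assumptions and not the reference implementation described in the passage.

```python
import numpy as np

class TinySpatialPooler:
    """A toy HTM-style spatial pooler: overlap scoring, k-winners-take-all
    sparsification, Hebbian permanence updates and simple boosting."""

    def __init__(self, n_inputs, n_columns, sparsity=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.perm = rng.uniform(0.0, 1.0, size=(n_columns, n_inputs))
        self.threshold = 0.5                 # permanence above this => connected
        self.k = max(1, int(sparsity * n_columns))
        self.duty = np.zeros(n_columns)
        self.boost = np.ones(n_columns)

    def compute(self, x, learn=True):
        connected = (self.perm >= self.threshold).astype(float)
        overlap = connected @ x.astype(float)
        winners = np.argsort(self.boost * overlap)[-self.k:]
        if learn:
            # Hebbian update: strengthen synapses to active inputs, weaken others
            self.perm[winners] += np.where(x > 0, 0.03, -0.015)
            np.clip(self.perm, 0.0, 1.0, out=self.perm)
            # homeostatic boosting: raise excitability of rarely active columns
            active = np.zeros(len(self.duty))
            active[winners] = 1.0
            self.duty = 0.99 * self.duty + 0.01 * active
            self.boost = np.exp(-(self.duty - self.duty.mean()))
        sdr = np.zeros(self.perm.shape[0], dtype=int)
        sdr[winners] = 1
        return sdr

if __name__ == "__main__":
    sp = TinySpatialPooler(n_inputs=64, n_columns=256)
    rng = np.random.default_rng(1)
    x = (rng.random(64) < 0.2).astype(int)
    for _ in range(20):
        sdr = sp.compute(x)
    print(sdr.sum(), "active columns out of", len(sdr))
```

The output is a fixed-sparsity binary vector (an SDR), which is the representation the passage argues is robust to noise and cell death.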
scidocsrr
ab640c04dd25df53ae412ac5ce28c102
Neural Stance Detectors for Fake News Challenge
[ { "docid": "0201a5f0da2430ec392284938d4c8833", "text": "Natural language sentence matching is a fundamental technology for a variety of tasks. Previous approaches either match sentences from a single direction or only apply single granular (wordby-word or sentence-by-sentence) matching. In this work, we propose a bilateral multi-perspective matching (BiMPM) model. Given two sentences P and Q, our model first encodes them with a BiLSTM encoder. Next, we match the two encoded sentences in two directions P against Q and Q against P . In each matching direction, each time step of one sentence is matched against all timesteps of the other sentence from multiple perspectives. Then, another BiLSTM layer is utilized to aggregate the matching results into a fixed-length matching vector. Finally, based on the matching vector, a decision is made through a fully connected layer. We evaluate our model on three tasks: paraphrase identification, natural language inference and answer sentence selection. Experimental results on standard benchmark datasets show that our model achieves the state-of-the-art performance on all tasks.", "title": "" }, { "docid": "a0e4080652269445c6e36b76d5c8cd09", "text": "Enabling a computer to understand a document so that it can answer comprehension questions is a central, yet unsolved goal of NLP. A key factor impeding its solution by machine learned systems is the limited availability of human-annotated data. Hermann et al. (2015) seek to solve this problem by creating over a million training examples by pairing CNN and Daily Mail news articles with their summarized bullet points, and show that a neural network can then be trained to give good performance on this task. In this paper, we conduct a thorough examination of this new reading comprehension task. Our primary aim is to understand what depth of language understanding is required to do well on this task. We approach this from one side by doing a careful hand-analysis of a small subset of the problems and from the other by showing that simple, carefully designed systems can obtain accuracies of 72.4% and 75.8% on these two datasets, exceeding current state-of-the-art results by over 5% and approaching what we believe is the ceiling for performance on this task.1", "title": "" } ]
[ { "docid": "c4f0e371ea3950e601f76f8d34b736e3", "text": "Discretization is an essential preprocessing technique used in many knowledge discovery and data mining tasks. Its main goal is to transform a set of continuous attributes into discrete ones, by associating categorical values to intervals and thus transforming quantitative data into qualitative data. In this manner, symbolic data mining algorithms can be applied over continuous data and the representation of information is simplified, making it more concise and specific. The literature provides numerous proposals of discretization and some attempts to categorize them into a taxonomy can be found. However, in previous papers, there is a lack of consensus in the definition of the properties and no formal categorization has been established yet, which may be confusing for practitioners. Furthermore, only a small set of discretizers have been widely considered, while many other methods have gone unnoticed. With the intention of alleviating these problems, this paper provides a survey of discretization methods proposed in the literature from a theoretical and empirical perspective. From the theoretical perspective, we develop a taxonomy based on the main properties pointed out in previous research, unifying the notation and including all the known methods up to date. Empirically, we conduct an experimental study in supervised classification involving the most representative and newest discretizers, different types of classifiers, and a large number of data sets. The results of their performances measured in terms of accuracy, number of intervals, and inconsistency have been verified by means of nonparametric statistical tests. Additionally, a set of discretizers are highlighted as the best performing ones.", "title": "" }, { "docid": "c5122000c9d8736cecb4d24e6f56aab8", "text": "New credit cards containing Europay, MasterCard and Visa (EMV) chips for enhanced security used in-store purchases rather than online purchases have been adopted considerably. EMV supposedly protects the payment cards in such a way that the computer chip in a card referred to as chip-and-pin cards generate a unique one time code each time the card is used. The one time code is designed such that if it is copied or stolen from the merchant system or from the system terminal cannot be used to create a counterfeit copy of that card or counterfeit chip of the transaction. However, in spite of this design, EMV technology is not entirely foolproof from failure. In this paper we discuss the issues, failures and fraudulent cases associated with EMV Chip-And-Card technology.", "title": "" }, { "docid": "0d8c38444954a0003117e7334195cb00", "text": "Although mature technologies exist for acquiring images, geometry, and normals of small objects, they remain cumbersome and time-consuming for non-experts to employ on a large scale. In an archaeological setting, a practical acquisition system for routine use on every artifact and fragment would open new possibilities for archiving, analysis, and dissemination. We present an inexpensive system for acquiring all three types of information, and associated metadata, for small objects such as fragments of wall paintings. The acquisition system requires minimal supervision, so that a single, non-expert user can scan at least 10 fragments per hour. To achieve this performance, we introduce new algorithms to robustly and automatically align range scans, register 2-D scans to 3-D geometry, and compute normals from 2-D scans. 
As an illustrative application, we present a novel 3-D matching algorithm that efficiently searches for matching fragments using the scanned geometry.", "title": "" }, { "docid": "bccb8e4cf7639dbcd3896e356aceec8d", "text": "Over 50 million people worldwide suffer from epilepsy. Traditional diagnosis of epilepsy relies on tedious visual screening by highly trained clinicians from lengthy EEG recording that contains the presence of seizure (ictal) activities. Nowadays, there are many automatic systems that can recognize seizure-related EEG signals to help the diagnosis. However, it is very costly and inconvenient to obtain long-term EEG data with seizure activities, especially in areas short of medical resources. We demonstrate in this paper that we can use the interictal scalp EEG data, which is much easier to collect than the ictal data, to automatically diagnose whether a person is epileptic. In our automated EEG recognition system, we extract three classes of features from the EEG data and build Probabilistic Neural Networks (PNNs) fed with these features. We optimize the feature extraction parameters and combine these PNNs through a voting mechanism. As a result, our system achieves an impressive 94.07% accuracy.", "title": "" }, { "docid": "129c1b9a723b062a52b821988d124486", "text": "Modern applications employ text files widely for providing data storage in a readable format for applications ranging from database systems to mobile phones. Traditional text processing tools are built around a byte-at-a-time sequential processing model that introduces significant branch and cache miss penalties. Recent work has explored an alternative, transposed representation of text, Parabix (Parallel Bit Streams), to accelerate scanning and parsing using SIMD facilities. This paper advocates and develops Parabix as a general framework and toolkit, describing the software toolchain and run-time support that allows applications to exploit modern SIMD instructions for high performance text processing. The goal is to generalize the techniques to ensure that they apply across a wide variety of applications and architectures. The toolchain enables the application developer to write constructs assuming unbounded character streams and Parabix's code translator generates code based on machine specifics (e.g., SIMD register widths). The general argument in support of Parabix technology is made by a detailed performance and energy study of XML parsing across a range of processor architectures. Parabix exploits intra-core SIMD hardware and demonstrates 2×-7× speedup and 4× improvement in energy efficiency when compared with two widely used conventional software parsers, Expat and Apache-Xerces. SIMD implementations across three generations of x86 processors are studied including the new SandyBridge. The 256-bit AVX technology in Intel SandyBridge is compared with the well established 128-bit SSE technology to analyze the benefits and challenges of 3-operand instruction formats and wider SIMD hardware. Finally, the XML program is partitioned into pipeline stages to demonstrate that thread-level parallelism enables the application to exploit SIMD units scattered across the different cores, achieving improved performance (2× on 4 cores) while maintaining single-threaded energy levels.", "title": "" }, { "docid": "7fcd8eee5f2dccffd3431114e2b0ed5a", "text": "Crowdsourcing is becoming more and more important for commercial purposes. 
With the growth of crowdsourcing platforms like Amazon Mechanical Turk or Microworkers, a huge work force and a large knowledge base can be easily accessed and utilized. But due to the anonymity of the workers, they are encouraged to cheat the employers in order to maximize their income. Thus, this paper we analyze two widely used crowd-based approaches to validate the submitted work. Both approaches are evaluated with regard to their detection quality, their costs and their applicability to different types of typical crowdsourcing tasks.", "title": "" }, { "docid": "ba4121003eb56d3ab6aebe128c219ab7", "text": "Mediation is said to occur when a causal effect of some variable X on an outcome Y is explained by some intervening variable M. The authors recommend that with small to moderate samples, bootstrap methods (B. Efron & R. Tibshirani, 1993) be used to assess mediation. Bootstrap tests are powerful because they detect that the sampling distribution of the mediated effect is skewed away from 0. They argue that R. M. Baron and D. A. Kenny's (1986) recommendation of first testing the X --> Y association for statistical significance should not be a requirement when there is a priori belief that the effect size is small or suppression is a possibility. Empirical examples and computer setups for bootstrap analyses are provided.", "title": "" }, { "docid": "43e630794f1bce27688d2cedbb19f17d", "text": "The systematic maintenance of mining machinery and equipment is the crucial factor for the proper functioning of a mine without production process interruption. For high-quality maintenance of the technical systems in mining, it is necessary to conduct a thorough analysis of machinery and accompanying elements in order to determine the critical elements in the system which are prone to failures. The risk assessment of the failures of system parts leads to obtaining precise indicators of failures which are also excellent guidelines for maintenance services. This paper presents a model of the risk assessment of technical systems failure based on the fuzzy sets theory, fuzzy logic and min–max composition. The risk indicators, severity, occurrence and detectability are analyzed. The risk indicators are given as linguistic variables. The model presented was applied for assessing the risk level of belt conveyor elements failure which works in severe conditions in a coal mine. Moreover, this paper shows the advantages of this model when compared to a standard procedure of RPN calculating – in the FMEA method of risk", "title": "" }, { "docid": "7533347e8c5daf17eb09e64db0fa4394", "text": "Android has become the most popular smartphone operating system. This rapidly increasing adoption of Android has resulted in significant increase in the number of malwares when compared with previous years. There exist lots of antimalware programs which are designed to effectively protect the users’ sensitive data in mobile systems from such attacks. In this paper, our contribution is twofold. Firstly, we have analyzed the Android malwares and their penetration techniques used for attacking the systems and antivirus programs that act against malwares to protect Android systems. We categorize many of the most recent antimalware techniques on the basis of their detection methods. We aim to provide an easy and concise view of the malware detection and protection mechanisms and deduce their benefits and limitations. 
Secondly, we have forecast Android market trends for the year up to 2018 and provide a unique hybrid security solution and take into account both the static and dynamic analysis an android application. Keywords—Android; Permissions; Signature", "title": "" }, { "docid": "4ac26e974e2d3861659323ae2aa7323c", "text": "Episacral lipoma is a small, tender subcutaneous nodule primarily occurring over the posterior iliac crest. Episacral lipoma is a significant and treatable cause of acute and chronic low back pain. Episacral lipoma occurs as a result of tears in the thoracodorsal fascia and subsequent herniation of a portion of the underlying dorsal fat pad through the tear. This clinical entity is common, and recognition is simple. The presence of a painful nodule with disappearance of pain after injection with anaesthetic, is diagnostic. Medication and physical therapy may not be effective. Local injection of the nodule with a solution of anaesthetic and steroid is effective in treating the episacral lipoma. Here we describe 2 patients with painful nodules over the posterior iliac crest. One patient complained of severe lower back pain radiating to the left lower extremity and this patient subsequently underwent disc operation. The other patient had been treated for greater trochanteric pain syndrome. In both patients, symptoms appeared to be relieved by local injection of anaesthetic and steroid. Episacral lipoma should be considered during diagnostic workup and in differential diagnosis of acute and chronic low back pain.", "title": "" }, { "docid": "4244af4f70e49c3e08e3943a88c79645", "text": "From a dynamic system point of view, bat locomotion stands out among other forms of flight. During a large part of bat wingbeat cycle the moving body is not in a static equilibrium. This is in sharp contrast to what we observe in other simpler forms of flight such as insects, which stay at their static equilibrium. Encouraged by biological examinations that have revealed bats exhibit periodic and stable limit cycles, this work demonstrates that one effective approach to stabilize articulated flying robots with bat morphology is locating feasible limit cycles for these robots; then, designing controllers that retain the closed-loop system trajectories within a bounded neighborhood of the designed periodic orbits. This control design paradigm has been evaluated in practice on a recently developed bio-inspired robot called Bat Bot (B2).", "title": "" }, { "docid": "79833f074b2e06d5c56898ca3f008c00", "text": "Regular expressions have served as the dominant workhorse of practical information extraction for several years. However, there has been little work on reducing the manual effort involved in building high-quality, complex regular expressions for information extraction tasks. In this paper, we propose ReLIE, a novel transformation-based algorithm for learning such complex regular expressions. We evaluate the performance of our algorithm on multiple datasets and compare it against the CRF algorithm. We show that ReLIE, in addition to being an order of magnitude faster, outperforms CRF under conditions of limited training data and cross-domain data. Finally, we show how the accuracy of CRF can be improved by using features extracted by ReLIE.", "title": "" }, { "docid": "13ae9c0f1c802de86b80906558b27713", "text": "Anaerobic saccharolytic bacteria thriving at high pH values were studied in a cellulose-degrading enrichment culture originating from the alkaline lake, Verkhneye Beloye (Central Asia). 
In situ hybridization of the enrichment culture with 16S rRNA-targeted probes revealed that abundant, long, thin, rod-shaped cells were related to Cytophaga. Bacteria of this type were isolated with cellobiose and five isolates were characterized. Isolates were thin, flexible, gliding rods. They formed a spherical cyst-like structure at one cell end during the late growth phase. The pH range for growth was 7.5–10.2, with an optimum around pH 8.5. Cultures produced a pinkish pigment tentatively identified as a carotenoid. Isolates did not degrade cellulose, indicating that they utilized soluble products formed by so far uncultured hydrolytic cellulose degraders. Besides cellobiose, the isolates utilized other carbohydrates, including xylose, maltose, xylan, starch, and pectin. The main organic fermentation products were propionate, acetate, and succinate. Oxygen, which was not used as electron acceptor, impaired growth. A representative isolate, strain Z-7010, with Marinilabilia salmonicolor as the closest relative, is described as a new genus and species, Alkaliflexus imshenetskii. This is the first cultivated alkaliphilic anaerobic member of the Cytophaga/Flavobacterium/Bacteroides phylum.", "title": "" }, { "docid": "804322502b82ad321a0f97d6f83858ee", "text": "Cheating is a real problem in the Internet of Things. The fundamental question that needs to be answered is how we can trust the validity of the data being generated in the first place. The problem, however, isn't inherent in whether or not to embrace the idea of an open platform and open-source software, but to establish a methodology to verify the trustworthiness and control any access. This paper focuses on building an access control model and system based on trust computing. This is a new field of access control techniques which includes Access Control, Trust Computing, Internet of Things, network attacks, and cheating technologies. Nevertheless, the target access control systems can be very complex to manage. This paper presents an overview of the existing work on trust computing, access control models and systems in IoT. It not only summarizes the latest research progress, but also provides an understanding of the limitations and open issues of the existing work. It is expected to provide useful guidelines for future research. Keywords: Access Control, Trust Management, Internet of Things. Today, our world is characterized by increasing connectivity. Things in this world are increasingly being connected. Smart phones have started an era of global proliferation and rapid consumerization of smart devices. It is predicted that the next disruptive transformation will be the concept of ‘Internet of Things’ [2]. From networked computers to smart devices, and to connected people, we are now moving towards connected ‘things’. Items of daily use are being turned into smart devices as various sensors are embedded in consumer and enterprise equipment, industrial and household appliances and personal devices. Pervasive connectivity mechanisms build bridges between our clothing and vehicles. Interaction among these things/devices can happen with little or no human intervention, thereby conjuring an enormous network, namely the Internet of Things (IoT). One of the primary goals behind IoT is to sense and send data over remote locations to enable detection of significant events, and take relevant actions sooner rather than later [25]. This technological trend is being pursued actively in all areas including the medical and health care fields.
IoT provides opportunities to dramatically improve many medical applications, such as glucose level sensing, remote health monitoring (e.g. electrocardiogram, blood pressure, body temperature, and oxygen saturation monitoring, etc), rehabilitation systems, medication management, and ambient assisted living systems. The connectivity offered by IoT extends from human-to-machine to machine-to-machine communications. The interconnected devices collect all kinds of data about patients. Intelligent and ubiquitous services can then be built upon the useful information extracted from the data. During the data aggregation, fusion, and analysis processes, user privacy and information security become major concerns for IoT services and applications. Security breaches will seriously compromise user acceptance and consumption on IoT applications in the medical and health care areas. The large scale of integration of heterogeneous devices in IoT poses a great challenge for the provision of standard security services. Many IoT devices are vulnerable to attacks since no high-level intelligence can be enabled on these passive devices [10], and security vulnerabilities in products uncovered by researchers have spread from cars [13] to garage doors [9] and to skateboards [35]. Technological utopianism surrounding IoT was very real until the emergence of the Volkswagen emissions scandal [4]. The German conglomerate admitted installing software in its diesel cars that recognizes and identifies patterns when vehicles are being tested for nitrogen oxide emissions and cuts them so that they fall within the limits prescribed by US regulators (004 g/km). Once the test is over, the car returns to its normal state: emitting nitrogen oxides (nitric oxide and nitrogen dioxide) at up to 35 times the US legal limit. The focus of IoT is not the thing itself, but the data generated by the devices and the value therein. What Volkswagen has brought to light goes far beyond protecting data and privacy, preventing intrusion, and keeping the integrity of the data. It casts doubts on the credibility of the IoT industry and its ability to secure data, reach agreement on standards, or indeed guarantee that consumer privacy rights are upheld. All in all, IoT holds tremendous potential to improve our health, make our environment safer, boost productivity and efficiency, and conserve both water and energy. IoT needs to improve its trustworthiness, however, before it can be used to solve challenging economic and environmental problems tied to our social lives. The fundamental question that needs to be answered is how we can trust the validity of the data being generated in the first place. If a node of IoT cheats, how does a system identify the cheating node and prevent a malicious attack from misbehaving nodes? This paper focuses on an access control mechanism that will only grant network access permission to trustworthy nodes. Embedding trust management into access control will improve the system's ability to discover untrustworthy participating nodes and prevent discriminatory attacks. There has been substantial research in this domain, most of which has been related to attacks like self-promotion and ballot stuffing where a node falsely promotes its importance and boosts the reputation of a malicious node (by providing good recommendations) to engage in a collusion-style attack.
The traditional trust computation model is inefficient in differentiating a participant object in IoT, which is designed to win trust by cheating. In particular, the trust computation model will fail when a malicious node intelligently adjusts its behavior to hide its defect and obtain a higher trust value for its own gain. 1 Access Control Model and System IoT comprises the following three Access Control types: – Role-based access control (RBAC) – Credential-based access control (CBAC) — in order to access some resources and data, users require certain certificate information that falls into the following two types: 1. Attribute-Based access control (ABAC): If a user has some special attributes, it is possible to access a particular resource or piece of data. 2. Capability-Based access control (Cap-BAC): A capability is a communicable, unforgeable rights markup, which corresponds to a value that uniquely specifies certain access rights to objects owned by subjects. – Trust-based access control (TBAC) In addition, there are also combinations of the aforementioned three methods. In order to improve the security of the system, some of the access control methods include encryption and key management mechanisms.", "title": "" }, { "docid": "3d81867b694a7fa56383583d9ee2637f", "text": "Elasticity is undoubtedly one of the most striking characteristics of cloud computing. Especially in the area of high performance computing (HPC), elasticity can be used to execute irregular and CPU-intensive applications. However, the on-the-fly increase/decrease in resources is more widespread in Web systems, which have their own IaaS-level load balancer. Considering the HPC area, current approaches usually focus on batch jobs or assumptions such as previous knowledge of application phases, source code rewriting or the stop-reconfigure-and-go approach for elasticity. In this context, this article presents AutoElastic, a PaaS-level elasticity model for HPC in the cloud. Its differential approach consists of providing elasticity for high performance applications without user intervention or source code modification. The scientific contributions of AutoElastic are twofold: (i) an Aging-based approach to resource allocation and deallocation actions to avoid unnecessary virtual machine (VM) reconfigurations (thrashing) and (ii) asynchronism in creating and terminating VMs in such a way that the application does not need to wait for completing these procedures. The prototype evaluation using OpenNebula middleware showed performance gains of up to 26 percent in the execution time of an application with the AutoElastic manager. Moreover, we obtained low intrusiveness for AutoElastic when reconfigurations do not occur.", "title": "" }, { "docid": "f3f70e5ba87399e9d44bda293a231399", "text": "During natural disasters or crises, users on social media tend to easily believe contents of postings related to the events, and retweet the postings with hoping them to be reached to many other users. Unfortunately, there are malicious users who understand the tendency and post misinformation such as spam and fake messages with expecting wider propagation. To resolve the problem, in this paper we conduct a case study of 2013 Moore Tornado and Hurricane Sandy.
Concretely, we (i) understand behaviors of these malicious users, (ii) analyze properties of spam, fake and legitimate messages, (iii) propose flat and hierarchical classification approaches, and (iv) detect both fake and spam messages with even distinguishing between them. Our experimental results show that our proposed approaches identify spam and fake messages with 96.43% accuracy and 0.961 F-measure.", "title": "" }, { "docid": "477769b83e70f1d46062518b1d692664", "text": "Deep Neural Networks (DNNs) have been demonstrated to perform exceptionally well on most recognition tasks such as image classification and segmentation. However, they have also been shown to be vulnerable to adversarial examples. This phenomenon has recently attracted a lot of attention but it has not been extensively studied on multiple, large-scale datasets and complex tasks such as semantic segmentation which often require more specialised networks with additional components such as CRFs, dilated convolutions, skip-connections and multiscale processing. In this paper, we present what to our knowledge is the first rigorous evaluation of adversarial attacks on modern semantic segmentation models, using two large-scale datasets. We analyse the effect of different network architectures, model capacity and multiscale processing, and show that many observations made on the task of classification do not always transfer to this more complex task. Furthermore, we show how mean-field inference in deep structured models and multiscale processing naturally implement recently proposed adversarial defenses. Our observations will aid future efforts in understanding and defending against adversarial examples. Moreover, in the shorter term, we show which segmentation models should currently be preferred in safety-critical applications due to their inherent robustness.", "title": "" }, { "docid": "1107a5b766a3d471ae00c9e02d8592da", "text": "In this paper, a wideband dual polarized self-complementary connected array antenna with low radar cross section (RCS) under normal and oblique incidence is presented. First, an analytical model of the multilayer structure is proposed in order to obtain a fast and reliable predimensioning tool providing an optimized design of the infinite array. The accuracy of this model is demonstrated thanks to comparative simulations with a full wave analysis software. RCS reduction compared to a perfectly conducting flat plate of at least 10 dB has been obtained over an ultrawide bandwidth of nearly 7:1 at normal incidence and 5:1 (3.8 to 19 GHz) at 60° in both polarizations. These performances are confirmed by finite element tearing and interconnecting computations of finite arrays of different sizes. Finally, the realization of a $28 \\times 28$ cell prototype and measurement results are detailed.", "title": "" }, { "docid": "1e4a86dcc05ff3d593a4bf7b88f8b23a", "text": "Fog/edge computing has been proposed to be integrated with Internet of Things (IoT) to enable computing services devices deployed at network edge, aiming to improve the user’s experience and resilience of the services in case of failures. With the advantage of distributed architecture and close to end-users, fog/edge computing can provide faster response and greater quality of service for IoT applications. Thus, fog/edge computing-based IoT becomes future infrastructure on IoT development. 
To develop fog/edge computing-based IoT infrastructure, the architecture, enabling techniques, and issues related to IoT should be investigated first, and then the integration of fog/edge computing and IoT should be explored. To this end, this paper conducts a comprehensive overview of IoT with respect to system architecture, enabling technologies, security and privacy issues, and present the integration of fog/edge computing and IoT, and applications. Particularly, this paper first explores the relationship between cyber-physical systems and IoT, both of which play important roles in realizing an intelligent cyber-physical world. Then, existing architectures, enabling technologies, and security and privacy issues in IoT are presented to enhance the understanding of the state of the art IoT development. To investigate the fog/edge computing-based IoT, this paper also investigate the relationship between IoT and fog/edge computing, and discuss issues in fog/edge computing-based IoT. Finally, several applications, including the smart grid, smart transportation, and smart cities, are presented to demonstrate how fog/edge computing-based IoT to be implemented in real-world applications.", "title": "" }, { "docid": "889747dbf541583475cbce74c42dc616", "text": "This paper presents an analysis of FastSLAM - a Rao-Blackwellised particle filter formulation of simultaneous localisation and mapping. It shows that the algorithm degenerates with time, regardless of the number of particles used or the density of landmarks within the environment, and would always produce optimistic estimates of uncertainty in the long-term. In essence, FastSLAM behaves like a non-optimal local search algorithm; in the short-term it may produce consistent uncertainty estimates but, in the long-term, it is unable to adequately explore the state-space to be a reasonable Bayesian estimator. However, the number of particles and landmarks does affect the accuracy of the estimated mean and, given sufficient particles, FastSLAM can produce good non-stochastic estimates in practice. FastSLAM also has several practical advantages, particularly with regard to data association, and would probably work well in combination with other versions of stochastic SLAM, such as EKF-based SLAM", "title": "" } ]
scidocsrr
3d29e2996f9e625152fa1ec7e456a8e4
A literature survey on Facial Expression Recognition using Global Features
[ { "docid": "d537214f407128585d6a4e6bab55a45b", "text": "It is well known that how to extract dynamical features is a key issue for video based face analysis. In this paper, we present a novel approach of facial action units (AU) and expression recognition based on coded dynamical features. In order to capture the dynamical characteristics of facial events, we design the dynamical haar-like features to represent the temporal variations of facial events. Inspired by the binary pattern coding, we further encode the dynamic haar-like features into binary pattern features, which are useful to construct weak classifiers for boosting learning. Finally the Adaboost is performed to learn a set of discriminating coded dynamic features for facial active units and expression recognition. Experiments on the CMU expression database and our own facial AU database show its encouraging performance.", "title": "" } ]
[ { "docid": "add26519d60ec2a972ad550cd79129d6", "text": "The hybrid runtime (HRT) model offers a plausible path towards high performance and efficiency. By integrating the OS kernel, parallel runtime, and application, an HRT allows the runtime developer to leverage the full privileged feature set of the hardware and specialize OS services to the runtime's needs. However, conforming to the HRT model currently requires a complete port of the runtime and application to the kernel level, for example to our Nautilus kernel framework, and this requires knowledge of kernel internals. In response, we developed Multiverse, a system that bridges the gap between a built-from-scratch HRT and a legacy runtime system. Multiverse allows existing, unmodified applications and runtimes to be brought into the HRT model without any porting effort whatsoever. Developers simply recompile their package with our compiler toolchain, and Multiverse automatically splits the execution of the application between the domains of a legacy OS and an HRT environment. To the user, the package appears to run as usual on Linux, but the bulk of it now runs as a kernel. The developer can then incrementally extend the runtime and application to take advantage of the HRT model. We describe the design and implementation of Multiverse, and illustrate its capabilities using the Racket runtime system.", "title": "" }, { "docid": "441633276271b94dc1bd3e5e28a1014d", "text": "While a large number of consumers in the US and Europe frequently shop on the Internet, research on what drives consumers to shop online has typically been fragmented. This paper therefore proposes a framework to increase researchers’ understanding of consumers’ attitudes toward online shopping and their intention to shop on the Internet. The framework uses the constructs of the Technology Acceptance Model (TAM) as a basis, extended by exogenous factors and applies it to the online shopping context. The review shows that attitudes toward online shopping and intention to shop online are not only affected by ease of use, usefulness, and enjoyment, but also by exogenous factors like consumer traits, situational factors, product characteristics, previous online shopping experiences, and trust in online shopping.", "title": "" }, { "docid": "e6a60fab31af5985520cc64b93b5deb0", "text": "BACKGROUND\nGenital warts may mimic a variety of conditions, thus complicating their diagnosis and treatment. The recognition of early flat lesions presents a diagnostic challenge.\n\n\nOBJECTIVE\nWe sought to describe the dermatoscopic features of genital warts, unveiling the possibility of their diagnosis by dermatoscopy.\n\n\nMETHODS\nDermatoscopic patterns of 61 genital warts from 48 consecutively enrolled male patients were identified with their frequencies being used as main outcome measures.\n\n\nRESULTS\nThe lesions were examined dermatoscopically and further classified according to their dermatoscopic pattern. The most frequent finding was an unspecific pattern, which was found in 15/61 (24.6%) lesions; a fingerlike pattern was observed in 7 (11.5%), a mosaic pattern in 6 (9.8%), and a knoblike pattern in 3 (4.9%) cases. In almost half of the lesions, pattern combinations were seen, of which a fingerlike/knoblike pattern was the most common, observed in 11/61 (18.0%) cases. Among the vascular features, glomerular, hairpin/dotted, and glomerular/dotted vessels were the most frequent finding seen in 22 (36.0%), 15 (24.6%), and 10 (16.4%) of the 61 cases, respectively. 
In 10 (16.4%) lesions no vessels were detected. Hairpin vessels were more often seen in fingerlike (χ(2) = 39.31, P = .000) and glomerular/dotted vessels in knoblike/mosaic (χ(2) = 9.97, P = .008) pattern zones; vessels were frequently missing in unspecified (χ(2) = 8.54, P = .014) areas.\n\n\nLIMITATIONS\nOnly male patients were examined.\n\n\nCONCLUSIONS\nThere is a correlation between dermatoscopic patterns and vascular features reflecting the life stages of genital warts; dermatoscopy may be useful in the diagnosis of early-stage lesions.", "title": "" }, { "docid": "f193816262da8f4edb523e172a83f953", "text": "The European FF POIROT project (IST-2001-38248) aims at developing applications for tackling financial fraud, using formal ontological repositories as well as multilingual terminological resources. In this article, we want to focus on the development cycle towards an application recognizing several types of e-mail fraud, such as phishing, Nigerian advance fee fraud and lottery scam. The development cycle covers four tracks of development - language engineering, terminology engineering, knowledge engineering and system engineering. These development tracks are preceded by a problem determination phase and followed by a deployment phase. Each development track is supported by a methodology. All methodologies and phases in the development cycle will be discussed in detail", "title": "" }, { "docid": "f5f70dca677752bcaa39db59988c088e", "text": "To examine how inclusive our schools are after 25 years of educational reform, students with disabilities and their parents were asked to identify current barriers and provide suggestions for removing those barriers. Based on a series of focus group meetings, 15 students with mobility limitations (9-15 years) and 12 parents identified four categories of barriers at their schools: (a) the physical environment (e.g., narrow doorways, ramps); (b) intentional attitudinal barriers (e.g., isolation, bullying); (c) unintentional attitudinal barriers (e.g., lack of knowledge, understanding, or awareness); and (d) physical limitations (e.g., difficulty with manual dexterity). Recommendations for promoting accessibility and full participation are provided and discussed in relation to inclusive education efforts. Exceptional Children", "title": "" }, { "docid": "eaeccd0d398e0985e293d680d2265528", "text": "Deep networks have been successfully applied to visual tracking by learning a generic representation offline from numerous training images. However the offline training is time-consuming and the learned generic representation may be less discriminative for tracking specific objects. In this paper we present that, even without learning, simple convolutional networks can be powerful enough to develop a robust representation for visual tracking. In the first frame, we randomly extract a set of normalized patches from the target region as filters, which define a set of feature maps in the subsequent frames. These maps measure similarities between each filter and the useful local intensity patterns across the target, thereby encoding its local structural information. Furthermore, all the maps form together a global representation, which maintains the relative geometric positions of the local intensity patterns, and hence the inner geometric layout of the target is also well preserved. A simple and effective online strategy is adopted to update the representation, allowing it to robustly adapt to target appearance variations. 
Our convolution networks have surprisingly lightweight structure, yet perform favorably against several state-of-the-art methods on a large benchmark dataset with 50 challenging videos.", "title": "" }, { "docid": "2a45f4ed21d9534a937129532cb32020", "text": "BACKGROUND\nCore stability training has grown in popularity over 25 years, initially for back pain prevention or therapy. Subsequently, it developed as a mode of exercise training for health, fitness and sport. The scientific basis for traditional core stability exercise has recently been questioned and challenged, especially in relation to dynamic athletic performance. Reviews have called for clarity on what constitutes anatomy and function of the core, especially in healthy and uninjured people. Clinical research suggests that traditional core stability training is inappropriate for development of fitness for heath and sports performance. However, commonly used methods of measuring core stability in research do not reflect functional nature of core stability in uninjured, healthy and athletic populations. Recent reviews have proposed a more dynamic, whole body approach to training core stabilization, and research has begun to measure and report efficacy of these modes training. The purpose of this study was to assess extent to which these developments have informed people currently working and participating in sport.\n\n\nMETHODS\nAn online survey questionnaire was developed around common themes on core stability training as defined in the current scientific literature and circulated to a sample population of people working and participating in sport. Survey results were assessed against key elements of the current scientific debate.\n\n\nRESULTS\nPerceptions on anatomy and function of the core were gathered from a representative cohort of athletes, coaches, sports science and sports medicine practitioners (n = 241), along with their views on effectiveness of various current and traditional exercise training modes. Most popular method of testing and measuring core function was subjective assessment through observation (43%), while a quarter (22%) believed there was no effective method of measurement. Perceptions of people in sport reflect the scientific debate, and practitioners have adopted a more functional approach to core stability training. There was strong support for loaded, compound exercises performed upright, compared to moderate support for traditional core stability exercises. Half of the participants (50%) in the survey, however, still support a traditional isolation core stability training.\n\n\nCONCLUSION\nPerceptions in applied practice on core stability training for dynamic athletic performance are aligned to a large extent to the scientific literature.", "title": "" }, { "docid": "2e864dcde57ea1716847f47977af0140", "text": "I focus on the role of case studies in developing causal explanations. I distinguish between the theoretical purposes of case studies and the case selection strategies or research designs used to advance those objectives. I construct a typology of case studies based on their purposes: idiographic (inductive and theory-guided), hypothesis-generating, hypothesis-testing, and plausibility probe case studies. I then examine different case study research designs, including comparable cases, most and least likely cases, deviant cases, and process tracing, with attention to their different purposes and logics of inference. 
I address the issue of selection bias and the “single logic” debate, and I emphasize the utility of multi-method research.", "title": "" }, { "docid": "885b3a5b386e642dc567c9b7944112d5", "text": "Derived from the field of art curation, digital provenance is an unforgeable record of a digital object’s chain of successive custody and sequence of operations performed on it. Digital provenance forms an immutable directed acyclic graph (DAG) structure. Recent works in digital provenance have focused on provenance generation, storage and management frameworks in different fields. In this paper, we address two important aspects of digital provenance that have not been investigated thoroughly in existing works: 1) capturing the DAG structure of provenance and 2) supporting dynamic information sharing. We propose a scheme that uses signature-based mutual agreements between successive users to clearly delineate the transition of responsibility of the document as it is passed along the chain of users. In addition to preserving the properties of confidentiality, immutability and availability for a digital provenance chain, it supports the representation of DAG structures of provenance. Our scheme supports dynamic information sharing scenarios where the sequence of users who have custody of the document is not predetermined. Security analysis and empirical results indicate that our scheme improves the security of the existing Onion and PKLC provenance schemes with comparable performance. Keywords—Provenance, cryptography, signatures, integrity, confidentiality, availability", "title": "" }, { "docid": "9bbc3e426c7602afaa857db85e754229", "text": "Knowledge bases of real-world facts about entities and their relationships are useful resources for a variety of natural language processing tasks. However, because knowledge bases are typically incomplete, it is useful to be able to perform link prediction, i.e., predict whether a relationship not in the knowledge base is likely to be true. This paper combines insights from several previous link prediction models into a new embedding model STransE that represents each entity as a lowdimensional vector, and each relation by two matrices and a translation vector. STransE is a simple combination of the SE and TransE models, but it obtains better link prediction performance on two benchmark datasets than previous embedding models. Thus, STransE can serve as a new baseline for the more complex models in the link prediction task.", "title": "" }, { "docid": "c3ca913fa81b2e79a2fff6d7a5e2fea7", "text": "We present Query-Regression Network (QRN), a variant of Recurrent Neural Network (RNN) that is suitable for end-to-end machine comprehension. While previous work [18, 22] largely relied on external memory and global softmax attention mechanism, QRN is a single recurrent unit with internal memory and local sigmoid attention. Unlike most RNN-based models, QRN is able to effectively handle long-term dependencies and is highly parallelizable. In our experiments we show that QRN obtains the state-of-the-art result in end-to-end bAbI QA tasks [21].", "title": "" }, { "docid": "3c27b3e11ba9924e9c102fc9ba7907b6", "text": "The Visagraph IITM Eye Movement Recording System is an instrument that assesses reading eye movement efficiency and related parameters objectively. It also incorporates automated data analysis. 
In the standard protocol, the patient reads selections only at the level of their current school grade, or at the level that has been determined by a standardized reading test. In either case, deficient reading eye movements may be the consequence of a language-based reading disability, an oculomotor-based reading inefficiency, or both. We propose an addition to the standard protocol: the patient’s eye movements are recorded a second time with text that is significantly below the grade level of the initial reading. The goal is to determine which factor is primarily contributing to the patient’s reading problem, oculomotor or language. This concept is discussed in the context of two representative cases.", "title": "" }, { "docid": "272d83db41293889d9ca790717983193", "text": "The ability to measure the level of customer satisfaction with online shopping is essential in gauging the success and failure of e-commerce. To do so, Internet businesses must be able to determine and understand the values of their existing and potential customers. Hence, it is important for IS researchers to develop and validate a diverse array of metrics to comprehensively capture the attitudes and feelings of online customers. What factors make online shopping appealing to customers? What customer values take priority over others? This study’s purpose is to answer these questions, examining the role of several technology, shopping, and product factors on online customer satisfaction. This is done using a conjoint analysis of consumer preferences based on data collected from 188 young consumers. Results indicate that the three most important attributes to consumers for online satisfaction are privacy (technology factor), merchandising (product factor), and convenience (shopping factor). These are followed by trust, delivery, usability, product customization, product quality, and security. Implications of these findings are discussed and suggestions for future research are provided.", "title": "" }, { "docid": "6de2b5fa5c8d3db9f9d599b6ebb56782", "text": "Extreme sensitivity of soil organic carbon (SOC) to climate and land use change warrants further research in different terrestrial ecosystems. The aim of this study was to investigate the link between aggregate and SOC dynamics in a chronosequence of three different land uses of a south Chilean Andisol: a second growth Nothofagus obliqua forest (SGFOR), a grassland (GRASS) and a Pinus radiataplantation (PINUS). Total carbon content of the 0–10 cm soil layer was higher for GRASS (6.7 kg C m −2) than for PINUS (4.3 kg C m−2), while TC content of SGFOR (5.8 kg C m−2) was not significantly different from either one. High extractable oxalate and pyrophosphate Al concentrations (varying from 20.3–24.4 g kg −1, and 3.9– 11.1 g kg−1, respectively) were found in all sites. In this study, SOC and aggregate dynamics were studied using size and density fractionation experiments of the SOC, δ13C and total carbon analysis of the different SOC fractions, and C mineralization experiments. The results showed that electrostatic sorption between and among amorphous Al components and clay minerals is mainly responsible for the formation of metal-humus-clay complexes and the stabilization of soil aggregates. The process of ligand exchange between SOC and Al would be of minor importance resulting in the absence of aggregate hierarchy in this soil type. 
Whole soil C mineralization rate constants were highest for SGFOR and PINUS, followed by GRASS (respectively 0.495, 0.266 and 0.196 g CO2-C m−2 d−1 for the top soil layer). In contrast, incubation experiments of isolated macro organic matter fractions gave opposite results, showing that the recalcitrance of the SOC decreased in another order: PINUS>SGFOR>GRASS. We deduced that electrostatic sorption processes and physical protection of SOC in soil aggregates were the main processes determining SOC stabilization. As a result, high aggregate carbon concentrations, varying from 148 till 48 g kg−1, were encountered for all land use sites. Al availability and electrostatic charges are dependent on pH, resulting in an important influence of soil pH on aggregate stability. Recalcitrance of the SOC did not appear to largely affect SOC stabilization. Statistical correlations between extractable amorphous Al contents, aggregate stability and C mineralization rate constants were encountered, supporting this hypothesis. Land use changes affected SOC dynamics and aggregate stability by modifying soil pH (and thus electrostatic charges and available Al content), root SOC input and management practices (such as ploughing and accompanying drying of the soil).", "title": "" }, { "docid": "f26df52af74f9c2f51ff0e56daeb4c38", "text": "Browsing is part of the information seeking process, used when information needs are ill-defined or unspecific. Browsing and searching are often interleaved during information seeking to accommodate changing awareness of information needs. Digital Libraries often support full-text search, but are not so helpful in supporting browsing. Described here is a novel browsing system created for the Greenstone software used by the New Zealand Digital Library that supports users in a more natural approach to the information seeking process.", "title": "" }, { "docid": "ad14a9f120aedc84abc99f1715e6769b", "text": "We introduce an interactive tool which enables a user to quickly assemble an architectural model directly over a 3D point cloud acquired from large-scale scanning of an urban scene. The user loosely defines and manipulates simple building blocks, which we call SmartBoxes, over the point samples. These boxes quickly snap to their proper locations to conform to common architectural structures. The key idea is that the building blocks are smart in the sense that their locations and sizes are automatically adjusted on-the-fly to fit well to the point data, while at the same time respecting contextual relations with nearby similar blocks. SmartBoxes are assembled through a discrete optimization to balance between two snapping forces defined respectively by a data-fitting term and a contextual term, which together assist the user in reconstructing the architectural model from a sparse and noisy point cloud. We show that a combination of the user's interactive guidance and high-level knowledge about the semantics of the underlying model, together with the snapping forces, allows the reconstruction of structures which are partially or even completely missing from the input.", "title": "" }, { "docid": "3ca7c89e12c81ac90d5d12d6f9a2b7f2", "text": "Texture classification is one of the problems which has been paid much attention on by computer scientists since late 90s. If texture classification is done correctly and accurately, it can be used in many cases such as Pattern recognition, object tracking, and shape recognition.
So far, there have been so many methods offered to solve this problem. Near all these methods have tried to extract and define features to separate different labels of textures really well. This article has offered an approach which has an overall process on the images of textures based on Local binary pattern and Gray Level Co-occurrence matrix and then by edge detection, and finally, extracting the statistical features from the images would classify them. Although, this approach is a general one and is could be used in different applications, the method has been tested on the stone texture and the results have been compared with some of the previous approaches to prove the quality of proposed approach. Keywords-Texture Classification, Gray level Co occurrence, Local Binary Pattern, Statistical Features", "title": "" }, { "docid": "eb7990a677cd3f96a439af6620331400", "text": "Solving the visual symbol grounding problem has long been a goal of artificial intelligence. The field appears to be advancing closer to this goal with recent breakthroughs in deep learning for natural language grounding in static images. In this paper, we propose to translate videos directly to sentences using a unified deep neural network with both convolutional and recurrent structure. Described video datasets are scarce, and most existing methods have been applied to toy domains with a small vocabulary of possible words. By transferring knowledge from 1.2M+ images with category labels and 100,000+ images with captions, our method is able to create sentence descriptions of open-domain videos with large vocabularies. We compare our approach with recent work using language generation metrics, subject, verb, and object prediction accuracy, and a human evaluation.", "title": "" }, { "docid": "e20d26ce3dea369ae6817139ff243355", "text": "This article explores the roots of white support for capital punishment in the United States. Our analysis addresses individual-level and contextual factors, paying particular attention to how racial attitudes and racial composition influence white support for capital punishment. Our findings suggest that white support hinges on a range of attitudes wider than prior research has indicated, including social and governmental trust and individualist and authoritarian values. Extending individual-level analyses, we also find that white responses to capital punishment are sensitive to local context. Perhaps most important, our results clarify the impact of race in two ways. First, racial prejudice emerges here as a comparatively strong predictor of white support for the death penalty. Second, black residential proximity functions to polarize white opinion along lines of racial attitude. As the black percentage of county residents rises, so too does the impact of racial prejudice on white support for capital punishment.", "title": "" }, { "docid": "3196c06c66b49c052d07ced0de683d02", "text": "Programming by Examples (PBE) involves synthesizing intended programs in an underlying domain-specific language from examplebased specifications. PBE systems are already revolutionizing the application domain of data wrangling and are set to significantly impact several other domains including code refactoring. There are three key components in a PBE system. (i) A search algorithm that can efficiently search for programs that are consistent with the examples provided by the user. 
We leverage a divide-and-conquer-based deductive search paradigm that inductively reduces the problem of synthesizing a program expression of a certain kind that satisfies a given specification into sub-problems that refer to sub-expressions or sub-specifications. (ii) Program ranking techniques to pick an intended program from among the many that satisfy the examples provided by the user. We leverage features of the program structure as well as of the outputs generated by the program on test inputs. (iii) User interaction models to facilitate usability and debuggability. We leverage active-learning techniques based on clustering inputs and synthesizing multiple programs. Each of these PBE components leverages both symbolic reasoning and heuristics. We make the case for synthesizing these heuristics from training data using appropriate machine learning methods. This can not only lead to better heuristics, but can also enable easier development, maintenance, and even personalization of a PBE system.", "title": "" } ]
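The PBE entry above names its search component only at a high level. As a purely illustrative sketch (the toy DSL, primitive names, and ranking-by-size heuristic below are assumptions for illustration, not the system described in the passage), an example-consistent search over a tiny string-transformation language could look like this:

```python
from itertools import product

# Toy DSL: each primitive maps a string to a string.
PRIMITIVES = {
    "identity":   lambda s: s,
    "lowercase":  lambda s: s.lower(),
    "uppercase":  lambda s: s.upper(),
    "strip":      lambda s: s.strip(),
    "first_word": lambda s: s.split()[0] if s.split() else "",
}

def run(program, value):
    """Apply a sequence of primitive names left to right."""
    for name in program:
        value = PRIMITIVES[name](value)
    return value

def synthesize(examples, max_depth=2):
    """Return all programs (up to max_depth primitives) consistent with the
    input/output examples, shortest programs first (a crude ranking proxy)."""
    consistent = []
    for depth in range(1, max_depth + 1):
        for program in product(PRIMITIVES, repeat=depth):
            if all(run(program, i) == o for i, o in examples):
                consistent.append(program)
    return consistent

if __name__ == "__main__":
    examples = [("  Hello World ", "HELLO"), ("foo bar", "FOO")]
    for p in synthesize(examples):
        print(" -> ".join(p))
```

A real PBE engine would replace this brute-force enumeration with deductive decomposition of the specification and a learned ranker, but the consistency check against every example is the same idea.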
scidocsrr
551c90304020aee22c4aff6a9ae6cf02
Interpretable Representation Learning for Healthcare via Capturing Disease Progression through Time
[ { "docid": "e7659e2c20e85f99996e4394fdc37a5c", "text": "Gaining knowledge and actionable insights from complex, high-dimensional and heterogeneous biomedical data remains a key challenge in transforming health care. Various types of data have been emerging in modern biomedical research, including electronic health records, imaging, -omics, sensor data and text, which are complex, heterogeneous, poorly annotated and generally unstructured. Traditional data mining and statistical learning approaches typically need to first perform feature engineering to obtain effective and more robust features from those data, and then build prediction or clustering models on top of them. There are lots of challenges on both steps in a scenario of complicated data and lacking of sufficient domain knowledge. The latest advances in deep learning technologies provide new effective paradigms to obtain end-to-end learning models from complex data. In this article, we review the recent literature on applying deep learning technologies to advance the health care domain. Based on the analyzed work, we suggest that deep learning approaches could be the vehicle for translating big biomedical data into improved human health. However, we also note limitations and needs for improved methods development and applications, especially in terms of ease-of-understanding for domain experts and citizen scientists. We discuss such challenges and suggest developing holistic and meaningful interpretable architectures to bridge deep learning models and human interpretability.", "title": "" } ]
[ { "docid": "76049ed267e9327412d709014e8e9ed4", "text": "A wireless massive MIMO system entails a large number (tens or hundreds) of base station antennas serving a much smaller number of users, with large gains in spectralefficiency and energy-efficiency compared with conventional MIMO technology. Until recently it was believed that in multicellular massive MIMO system, even in the asymptotic regime, as the number of service antennas tends to infinity, the performance is limited by directed inter-cellular interference. This interference results from unavoidable re-use of reverse-link training sequences (pilot contamination) by users in different cells. We devise a new concept that leads to the effective elimination of inter-cell interference in massive MIMO systems. This is achieved by outer multi-cellular precoding, which we call LargeScale Fading Precoding (LSFP). The main idea of LSFP is that each base station linearly combines messages aimed to users from different cells that re-use the same training sequence. Crucially, the combining coefficients depend only on the slowfading coefficients between the users and the base stations. Each base station independently transmits its LSFP-combined symbols using conventional linear precoding that is based on estimated fast-fading coefficients. Further, we derive estimates for downlink and uplink SINRs and capacity lower bounds for the case of massive MIMO systems with LSFP and a finite number of base station antennas.", "title": "" }, { "docid": "4ab644ac13d8753aa6e747c4070e95e9", "text": "This paper presents a framework for modeling the phase noise in complementary metal–oxide–semiconductor (CMOS) ring oscillators. The analysis considers both linear and nonlinear operations, and it includes both device noise and digital switching noise coupled through the power supply and substrate. In this paper, we show that fast rail-to-rail switching is required in order to achieve low phase noise. Further, flicker noise from the bias circuit can potentially dominate the phase noise at low offset frequencies. We define the effective factor for ring oscillators with large and nonlinear voltage swings and predict its increase for CMOS processes with smaller feature sizes. Our phase-noise analysis is validated via simulation and measurement results for ring oscillators fabricated in a number of CMOS processes.", "title": "" }, { "docid": "b5d7c6a4d9551bf9b47b4e3754fb5911", "text": "Discovering significant types of relations from the web is challenging because of its open nature. Unsupervised algorithms are developed to extract relations from a corpus without knowing the relations in advance, but most of them rely on tagging arguments of predefined types. Recently, a new algorithm was proposed to jointly extract relations and their argument semantic classes, taking a set of relation instances extracted by an open IE algorithm as input. However, it cannot handle polysemy of relation phrases and fails to group many similar (“synonymous”) relation instances because of the sparseness of features. In this paper, we present a novel unsupervised algorithm that provides a more general treatment of the polysemy and synonymy problems. The algorithm incorporates various knowledge sources which we will show to be very effective for unsupervised extraction. Moreover, it explicitly disambiguates polysemous relation phrases and groups synonymous ones. 
While maintaining approximately the same precision, the algorithm achieves significant improvement on recall compared to the previous method. It is also very efficient. Experiments on a realworld dataset show that it can handle 14.7 million relation instances and extract a very large set of relations from the web.", "title": "" }, { "docid": "27745116e5c05802bda2bc6dc548cce6", "text": "Recently, many researchers have attempted to classify Facial Attributes (FAs) by representing characteristics of FAs such as attractiveness, age, smiling and so on. In this context, recent studies have demonstrated that visual FAs are a strong background for many applications such as face verification, face search and so on. However, Facial Attribute Classification (FAC) in a wide range of attributes based on the regression representation -predicting of FAs as real-valued labelsis still a significant challenge in computer vision and psychology. In this paper, a regression model formulation is proposed for FAC in a wide range of FAs (e.g. 73 FAs). The proposed method accommodates real-valued scores to the probability of what percentage of the given FAs is present in the input image. To this end, two simultaneous dictionary learning methods are proposed to learn the regression and identity feature dictionaries simultaneously. Accordingly, a multi-level feature extraction is proposed for FAC. Then, four regression classification methods are proposed using a regression model formulated based on dictionary learning, SRC and CRC. Convincing results are", "title": "" }, { "docid": "35bc2da7f6a3e18f831b4560fba7f94d", "text": "findings All countries—developing and developed alike—find it difficult to stay competitive without inflows of foreign direct investment (FDI). FDI brings to host countries not only capital, productive facilities, and technology transfers, but also employment, new job skills and management expertise. These ingredients are particularly important in the case of Russia today, where the pressure for firms to compete with each other remains low. With blunted incentives to become efficient, due to interregional barriers to trade, weak exercise of creditor rights and administrative barriers to new entrants—including foreign invested firms—Russian enterprises are still in the early stages of restructuring. This paper argues that the policy regime governing FDI in the Russian Federation is still characterized by the old paradigm of FDI, established before the Second World War and seen all over the world during the 1950s and 1960s. In this paradigm there are essentially only two motivations for foreign direct investment: access to inputs for production, and access to markets for outputs. These kinds of FDI are useful, but often based either on exports that exploit cheap labor or natural resources, or else aimed at protected local markets and not necessarily at world standards for price and quality. The fact is that Russia is getting relatively small amounts of these types of FDI, and almost none of the newer, more efficient kind—characterized by state-of-the-art technology and world-class competitive production linked to dynamic global (or regional) markets. 
The paper notes that Russia should phase out the three core pillars of the current FDI policy regime-(i) all existing high tariffs and non-tariff protection for the domestic market; (ii) tax preferences for foreign investors (including those offered in Special Economic Zones), which bring few benefits (in terms of increased FDI) but engender costs (in terms of foregone fiscal revenue); and (iii) the substantial number of existing restrictions on FDI (make them applicable only to a limited number of sectors and activities). This set of reforms would allow Russia to switch to a modern approach towards FDI. The paper suggests the following specific policy recommendations: (i) amend the newly enacted FDI law so as to give \" national treatment \" for both right of establishment and for post-establishment operations; abolish conditions that are inconsistent with the agreement on trade-related investment measures (TRIMs) of the WTO (such as local content restrictions); and make investor-State dispute resolution mechanisms more efficient, including giving foreign investors the opportunity to …", "title": "" }, { "docid": "00547f45936c7cea4b7de95ec1e0fbcd", "text": "With the emergence of the Internet of Things (IoT) and Big Data era, many applications are expected to assimilate a large amount of data collected from environment to extract useful information. However, how heterogeneous computing devices of IoT ecosystems can execute the data processing procedures has not been clearly explored. In this paper, we propose a framework which characterizes energy and performance requirements of the data processing applications across heterogeneous devices, from a server in the cloud and a resource-constrained gateway at edge. We focus on diverse machine learning algorithms which are key procedures for handling the large amount of IoT data. We build analytic models which automatically identify the relationship between requirements and data in a statistical way. The proposed framework also considers network communication cost and increasing processing demand. We evaluate the proposed framework on two heterogenous devices, a Raspberry Pi and a commercial Intel server. We show that the identified models can accurately estimate performance and energy requirements with less than error of 4.8% for both platforms. Based on the models, we also evaluate whether the resource-constrained gateway can process the data more efficiently than the server in the cloud. The results present that the less-powerful device can achieve better energy and performance efficiency for more than 50% of machine learning algorithms.", "title": "" }, { "docid": "620642c5437dc26cac546080c4465707", "text": "One of the most distinctive linguistic characteristics of modern academic writing is its reliance on nominalized structures. These include nouns that have been morphologically derived from verbs (e.g., development, progression) as well as verbs that have been ‘converted’ to nouns (e.g., increase, use). Almost any sentence taken from an academic research article will illustrate the use of such structures. For example, consider the opening sentences from three education research articles; derived nominalizations are underlined and converted nouns given in italics: 1", "title": "" }, { "docid": "e85e66b6ad6324a07ca299bf4f3cd447", "text": "To date, the majority of ad hoc routing protocol research has been done using simulation only. One of the most motivating reasons to use simulation is the difficulty of creating a real implementation. 
In a simulator, the code is contained within a single logical component, which is clearly defined and accessible. On the other hand, creating an implementation requires use of a system with many components, including many that have little or no documentation. The implementation developer must understand not only the routing protocol, but all the system components and their complex interactions. Further, since ad hoc routing protocols are significantly different from traditional routing protocols, a new set of features must be introduced to support the routing protocol. In this paper we describe the event triggers required for AODV operation, the design possibilities and the decisions for our ad hoc on-demand distance vector (AODV) routing protocol implementation, AODV-UCSB. This paper is meant to aid researchers in developing their own on-demand ad hoc routing protocols and assist users in determining the implementation design that best fits their needs.", "title": "" }, { "docid": "65849cfb115918dd264445e91698e868", "text": "Handwritten character recognition is always a frontier area of research in the field of pattern recognition. There is a large demand for OCR on hand written documents in Image processing. Even though, sufficient studies have performed in foreign scripts like Arabic, Chinese and Japanese, only a very few work can be traced for handwritten character recognition mainly for the south Indian scripts. OCR system development for Indian script has many application areas like preserving manuscripts and ancient literatures written in different Indian scripts and making digital libraries for the documents. Feature extraction and classification are essential steps of character recognition process affecting the overall accuracy of the recognition system. This paper presents a brief overview of digital image processing techniques such as Feature Extraction, Image Restoration and Image Enhancement. A brief history of OCR and various approaches to character recognition is also discussed in this paper.", "title": "" }, { "docid": "0b1a8b80b4414fa34d6cbb5ad1342ad7", "text": "OBJECTIVE\nThe aim of the study was to evaluate the efficacy of topical 2% lidocaine gel in reducing pain and discomfort associated with nasogastric tube insertion (NGTI) and compare lidocaine to ordinary lubricant gel in the ease in carrying out the procedure.\n\n\nMETHODS\nThis prospective, randomized, double-blind, placebo-controlled, convenience sample trial was conducted in the emergency department of our tertiary care university-affiliated hospital. Five milliliters of 2% lidocaine gel or placebo lubricant gel were administered nasally to alert hemodynamically stable adult patients 5 minutes before undergoing a required NGTI. The main outcome measures were overall pain, nasal pain, discomfort (eg, choking, gagging, nausea, vomiting), and difficulty in performing the procedure. Standard comparative statistical analyses were used.\n\n\nRESULTS\nThe study cohort included 62 patients (65% males). Thirty-one patients were randomized to either lidocaine or placebo groups. Patients who received lidocaine reported significantly less intense overall pain associated with NGTI compared to those who received placebo (37 ± 28 mm vs 51 ± 26 mm on 100-mm visual analog scale; P < .05). The patients receiving lidocaine also had significantly reduced nasal pain (33 ± 29 mm vs 48 ± 27 mm; P < .05) and significantly reduced sensation of gagging (25 ± 30 mm vs 39 ± 24 mm; P < .05). 
However, conducting the procedure was significantly more difficult in the lidocaine group (2.1 ± 0.9 vs 1.4 ± 0.7 on 5-point Likert scale; P < .05).\n\n\nCONCLUSION\nLidocaine gel administered nasally 5 minutes before NGTI significantly reduces pain and gagging sensations associated with the procedure but is associated with more difficult tube insertion compared to the use of lubricant gel.", "title": "" }, { "docid": "696320f53bb91db9a59a803ec5356727", "text": "Ransomware is a type of malware that encrypts data or locks a device to extort a ransom. Recently, a variety of high-profile ransomware attacks have been reported, and many ransomware defense systems have been proposed. However, none specializes in resisting untargeted attacks such as those by remote desktop protocol (RDP) attack ransomware. To resolve this problem, this paper proposes a way to combat RDP ransomware attacks by trapping and tracing. It discovers and ensnares the attacker through a network deception environment and uses an auxiliary tracing technology to find the attacker, finally achieving the goal of deterring the ransomware attacker and countering the RDP attack ransomware. Based on cyber deception, an auxiliary ransomware traceable system called RansomTracer is introduced in this paper. RansomTracer collects clues about the attacker by deploying monitors in the deception environment. Then, it automatically extracts and analyzes the traceable clues. Experiments and evaluations show that RansomTracer ensnares the adversary in the deception environment and improves the efficiency of clue analysis significantly. In addition, it is able to recognize the clues that identify the attacker and the screening rate reaches 98.34%.", "title": "" }, { "docid": "7c6708511e8a19c7a984ccc4b5c5926e", "text": "INTRODUCTION\nOtoplasty or correction of prominent ears, is one of most commonly performed surgeries in plastic surgery both in children and adults. Until nowadays, there have been more than 150 techniques described, but all with certain percentage of recurrence which varies from just a few up to 24.4%.\n\n\nOBJECTIVE\nThe authors present an otoplasty technique, a combination of Mustardé's original procedure with other techniques, which they have been using successfully in their everyday surgical practice for the last 9 years. The technique is based on posterior antihelical and conchal approach.\n\n\nMETHODS\nThe study included 102 patients (60 males and 42 females) operated on between 1999 and 2008. The age varied between 6 and 49 years. Each procedure was tailored to the aberrant anatomy which was analysed after examination. Indications and the operative procedure are described in step-by-step detail accompanied by drawings and photos taken during the surgery.\n\n\nRESULTS\nAll patients had bilateral ear deformity. In all cases was performed a posterior antihelical approach. The conchal reduction was done only when necessary and also through the same incision. The follow-up was from 1 to 5 years. There were no recurrent cases. A few minor complications were presented. Postoperative care, complications and advantages compared to other techniques are discussed extensively.\n\n\nCONCLUSION\nAll patients showed a high satisfaction rate with the final result and there was no necessity for further surgeries. 
The technique described in this paper is easy to reproduce even for young surgeons.", "title": "" }, { "docid": "c9ea36d15ec23b678c23ad1ae8d976a9", "text": "Privacy-preserving distributed machine learning has become more important than ever due to the high demand of large-scale data processing. This paper focuses on a class of machine learning problems that can be formulated as regularized empirical risk minimization, and develops a privacy-preserving learning approach to such problems. We use Alternating Direction Method of Multipliers (ADMM) to decentralize the learning algorithm, and apply Gaussian mechanisms to provide differential privacy guarantee. However, simply combining ADMM and local randomization mechanisms would result in a nonconvergent algorithm with poor performance even under moderate privacy guarantees. Besides, this intuitive approach requires a strong assumption that the objective functions of the learning problems should be differentiable and strongly convex. To address these concerns, we propose an improved ADMMbased Differentially Private distributed learning algorithm, DPADMM, where an approximate augmented Lagrangian function and Gaussian mechanisms with time-varying variance are utilized. We also apply the moments accountant method to bound the total privacy loss. Our theoretical analysis shows that DPADMM can be applied to a general class of convex learning problems, provides differential privacy guarantee, and achieves a convergence rate of O(1/ √ t), where t is the number of iterations. Our evaluations demonstrate that our approach can achieve good convergence and accuracy with moderate privacy guarantee.", "title": "" }, { "docid": "2819e5fd171e76a6ed90b5f576259f39", "text": "Moving obstacle avoidance is a fundamental requirement for any robot operating in real environments, where pedestrians, bicycles and cars are present. In this work, we design and validate a new approach that takes explicitly into account obstacle velocities, to achieve safe visual navigation in outdoor scenarios. A wheeled vehicle, equipped with an actuated pinhole camera and with a lidar, must follow a path represented by key images, without colliding with the obstacles. To estimate the obstacle velocities, we design a Kalman-based observer. Then, we adapt the tentacles designed in [1], to take into account the predicted obstacle positions. Finally, we validate our approach in a series of simulated and real experiments, showing that when the obstacle velocities are considered, the robot behaviour is safer, smoother, and faster than when it is not.", "title": "" }, { "docid": "9f25bc7a2dadb2b8c0d54ac6e70e92e5", "text": "Our research suggests that ML technologies will indeed grow more pervasive, but within job categories, what we define as the “suitability for machine learning” (SML) of work tasks varies greatly. We further propose that our SML rubric, illustrating the variability in task-level SML, can serve as an indicator for the potential reorganization of a job or an occupation because the set of tasks that form a job can be separated and re-bundled to redefine the job. Evaluating worker activities using our rubric, in fact, has the benefit of focusing on what ML can do instead of grouping all forms of automation together.", "title": "" }, { "docid": "75a1832a5fdd9c48f565eb17e8477b4b", "text": "We introduce a new interactive system: a game that is fun and can be used to create valuable output. 
When people play the game they help determine the contents of images by providing meaningful labels for them. If the game is played as much as popular online games, we estimate that most images on the Web can be labeled in a few months. Having proper labels associated with each image on the Web would allow for more accurate image search, improve the accessibility of sites (by providing descriptions of images to visually impaired individuals), and help users block inappropriate images. Our system makes a significant contribution because of its valuable output and because of the way it addresses the image-labeling problem. Rather than using computer vision techniques, which don't work well enough, we encourage people to do the work by taking advantage of their desire to be entertained.", "title": "" }, { "docid": "9b9cff2b6d1313844b88bad5a2724c52", "text": "A robot is usually an electro-mechanical machine that is guided by computer and electronic programming. Many robots have been built for manufacturing purpose and can be found in factories around the world. Designing of the latest inverted ROBOT which can be controlling using an APP for android mobile. We are developing the remote buttons in the android app by which we can control the robot motion with them. And in which we use Bluetooth communication to interface controller and android. Controller can be interfaced to the Bluetooth module though UART protocol. According to commands received from android the robot motion can be controlled. The consistent output of a robotic system along with quality and repeatability are unmatched. Pick and Place robots can be reprogrammable and tooling can be interchanged to provide for multiple applications.", "title": "" }, { "docid": "bd9f584e7dbc715327b791e20cd20aa9", "text": "We discuss learning a profile of user interests for recommending information sources such as Web pages or news articles. We describe the types of information available to determine whether to recommend a particular page to a particular user. This information includes the content of the page, the ratings of the user on other pages and the contents of these pages, the ratings given to that page by other users and the ratings of these other users on other pages and demographic information about users. We describe how each type of information may be used individually and then discuss an approach to combining recommendations from multiple sources. We illustrate each approach and the combined approach in the context of recommending restaurants.", "title": "" }, { "docid": "07179377e99a40beffcb50ac039ca503", "text": "RF-powered computers are small devices that compute and communicate using only the power that they harvest from RF signals. While existing technologies have harvested power from ambient RF sources (e.g., TV broadcasts), they require a dedicated gateway (like an RFID reader) for Internet connectivity. We present Wi-Fi Backscatter, a novel communication system that bridges RF-powered devices with the Internet. Specifically, we show that it is possible to reuse existing Wi-Fi infrastructure to provide Internet connectivity to RF-powered devices. To show Wi-Fi Backscatter's feasibility, we build a hardware prototype and demonstrate the first communication link between an RF-powered device and commodity Wi-Fi devices. We use off-the-shelf Wi-Fi devices including Intel Wi-Fi cards, Linksys Routers, and our organization's Wi-Fi infrastructure, and achieve communication rates of up to 1 kbps and ranges of up to 2.1 meters. 
We believe that this new capability can pave the way for the rapid deployment and adoption of RF-powered devices and achieve ubiquitous connectivity via nearby mobile devices that are Wi-Fi enabled.", "title": "" } ]
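The Wi-Fi Backscatter entry above reports kbps-scale links created by toggling a tag's antenna impedance so that a nearby commodity receiver sees small shifts in its received signal strength. A small numpy sketch of the receiver-side idea (the threshold, bit rate, and synthetic RSSI trace below are invented for illustration and are not taken from the paper) is:

```python
import numpy as np

def decode_backscatter(rssi, samples_per_bit, threshold_db=1.0):
    """Recover on/off keyed bits from an RSSI trace.

    The backscatter device toggles its antenna impedance once per bit interval,
    which nudges the receiver's RSSI up or down; comparing each interval's mean
    against the overall mean recovers the bit.
    """
    n_bits = len(rssi) // samples_per_bit
    trimmed = np.asarray(rssi[: n_bits * samples_per_bit], dtype=float)
    interval_means = trimmed.reshape(n_bits, samples_per_bit).mean(axis=1)
    baseline = interval_means.mean()
    return [1 if m - baseline > threshold_db / 2 else 0 for m in interval_means]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    bits = [1, 0, 1, 1, 0, 0, 1, 0]
    spb = 50                                            # RSSI samples per bit interval
    clean = np.repeat([-40.0 + 1.5 * b for b in bits], spb)
    trace = clean + rng.normal(0.0, 0.3, clean.size)    # noisy RSSI in dBm
    print(decode_backscatter(trace, spb))               # expect the original bits back
```

A real deployment has to handle ambient Wi-Fi traffic and packet timing rather than a clean RSSI stream, but the per-interval averaging and thresholding captures the core decoding step.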
scidocsrr
f8965c62a7b6fbba3e11d13a94a648c5
Establishing moderators and biosignatures of antidepressant response in clinical care (EMBARC): Rationale and design.
[ { "docid": "469d83dd9996ca27217907362f44304c", "text": "Although cells in many brain regions respond to reward, the cortical-basal ganglia circuit is at the heart of the reward system. The key structures in this network are the anterior cingulate cortex, the orbital prefrontal cortex, the ventral striatum, the ventral pallidum, and the midbrain dopamine neurons. In addition, other structures, including the dorsal prefrontal cortex, amygdala, hippocampus, thalamus, and lateral habenular nucleus, and specific brainstem structures such as the pedunculopontine nucleus, and the raphe nucleus, are key components in regulating the reward circuit. Connectivity between these areas forms a complex neural network that mediates different aspects of reward processing. Advances in neuroimaging techniques allow better spatial and temporal resolution. These studies now demonstrate that human functional and structural imaging results map increasingly close to primate anatomy.", "title": "" } ]
[ { "docid": "39e550b269a66f31d467269c6389cde0", "text": "The artificial intelligence community has seen a recent resurgence in the area of neural network study. Inspired by the workings of the brain and nervous system, neural networks have solved some persistent problems in vision and speech processing. However, the new systems may offer an alternative approach to decision-making via high level pattern recognition. This paper will describe the distinguishing features of neurally inspired systems, and present popular systems in a discrete-time, algorithmic framework. Examples of applications to decision problems will appear, and guidelines for their use in operations research will be established.", "title": "" }, { "docid": "7f1eb105b7a435993767e4a4b40f7ed9", "text": "In the last two decades, organizations have recognized, indeed fixated upon, the impOrtance of quality and quality management One manifestation of this is the emergence of the total quality management (TQM) movement, which has been proclaimed as the latest and optimal way of managing organizations. Likewise, in the domain of human resource management, the concept of quality of work life (QWL) has also received much attention of late from theoreticians, researchers, and practitioners. However, little has been done to build a bridge between these two increasingly important concepts, QWL and TQM. The purpose of this research is to empirically examine the relationship between quality of work life (the internalized attitudes employees' have about their jobs) and an indicatorofTQM, customer service attitudes, CSA (the externalized signals employees' send to customers about their jobs). In addition, this study examines how job involvement and organizational commitment mediate the relationship between QWL and CSA. OWL and <:sA HlU.3 doc JJ a9t94 page 3 INTRODUCTION Quality and quality management have become increasingly important topics for both practitioners and researchers (Anderson, Rungtusanatham, & Schroeder, 1994). Among the many quality related activities that have arisen, the principle of total quality mana~ement (TQM) has been advanced as the optimal approach for managing people and processes. Indeed, it is considered by some to be the key to ensuring the long-term viability of organizations (Feigenbaum, 1982). Ofcourse, niany companies have invested heavily in total quality efforts in the form of capital expenditures on plant and equipment, and through various human resource management programs designed to spread the quality gospel. However, many still argue that there is insufficient theoretical development and empirical eviden~e for the determinants and consequences of quality management initiatives (Dean & Bowen, 1994). Mter reviewing the relevant research literatures, we find that three problems persist in the research on TQM. First, a definition of quality has not been agreed upon. Even more problematic is the fact that many of the definitions that do exist are continuously evolving. Not smprisingly, these variable definitions often lead to inconsistent and even conflicting conclusions, Second, very few studies have systematically examined these factors that influence: the quality of goods and services, the implementation of quality activities, or the performance of organizations subsequent to undertaking quality initiatives (Spencer, 1994). Certainly this has been true for quality-related human resource management interventions. 
Last, TQM has suffered from an "implementation problem" (Reger, Gustafson, Demarie, & Mullane, 1994, p. 565) which has prevented it from transitioning from the theoretical to the applied. In the domain of human resource management, quality of working life (QWL) has also received a fair amount of attention of late from theorists, researchers, and practitioners. The underlying, and most important, principles of QWL capture an employee's satisfaction with and feelings about their work, work environment, and organization. Most who study QWL, and TQM for that matter, tend to focus on the importance of employee systems and organizational performance, whereas researchers in the field of HRM usually emphasize individual attitudes and individual performance (Walden, 1994). Furthermore, as Walden (1994) alludes to, there are significantly different managerial prescriptions and applied levels for routine human resource management processes, such as selection, performance appraisal, and compensation, than there are for TQM-driven processes, like teamwork, participative management, and shared decision-making (Deming, 1986, 1993; Juran, 1989; M. Walton, 1986; Dean & Bowen, 1994). To reiterate, these variations are attributable to the difference between a micro focus on employees as opposed to a more macro focus on employee systems. These specific differences are but a few of the instances where the views of TQM and the views of traditional HRM are not aligned (Cardy & Dobbins, 1993). In summary, although TQM is a ubiquitous organizational phenomenon, it has been given little research attention, especially in the form of empirical studies. Therefore, the goal of this study is to provide an empirical assessment of how one, internalized, indicator of HRM effectiveness, QWL, is associated with one, externalized, indicator of TQM, customer service attitudes, CSA. In doing so, it bridges the gap between "employee-focused" HRM outcomes and "customer-focused" TQM consequences. In addition, it examines the mediating effects of organizational commitment and job involvement on this relationship. QUALITY OF WORK LIFE AND CUSTOMER SERVICE ATTITUDES In this section, we introduce and review the main principles of customer service attitudes, CSA, and discuss its measurement. Thereafter, our extended conceptualization and measurement of QWL will be presented. Finally, two variables hypothesized to function as mediators of the relationship between CSA and QWL, organizational commitment and job involvement, will be explored. Customer Service Attitudes (CSA) Despite all the ruminations about it in the business and trade press, TQM still remains an ambiguous notion, one that often gives rise to as many different definitions as there are observers. Some focus on the presence of organizational systems. Others, the importance of leadership. Many stress the need to reduce variation in organizational processes (Deming, 1986). A number emphasize reducing costs through quality improvement (P.B. Crosby, 1979). Still others focus on quality planning, control, and improvement (Juran, 1989). Regardless of these differences, however, the most important, generally agreed upon principle is to be "customer focused" (Feigenbaum, 1982).
The cornerstone for this principle is the belief that customer satisfaction and customer judgments about the organization and its products are the most important determinants of long-term organizational viability (Oliva, Oliver & MacMillan, 1992). Not surprisingly, this belief is a prominent tenet in both the manufacturing and service sectors alike. Conventional wisdom holds that quality can best be evaluated from the customers' perspective. Certainly, customers can easily articulate how well a product or service meets their expectations. Therefore, managers and researchers must take into account subjective and cognitive factors that influence customers' judgments when trying to identify influential customer cues, rather than just relying on organizational presumptions. Recently, for example, Hannon & Sano (1994) described how customer-driven HR strategies and practices are pervasive in Japan. An example they cited was the practice of making the top graduates from the best schools work in low level, customer service jobs for their first 1-2 years so that they might better understand customers and their needs. To be sure, defining quality in terms of whether a product or service meets the expectations of customers is all-encompassing. As a result of the breadth of this issue, and the limited research on this topic, many important questions about the service relationship, particularly those pertaining to exchanges between employees and customers, linger. Some include, "What are the key dimensions of service quality?" and "What are the actions service employees might direct their efforts to in order to foster good relationships with customers?" Arguably, the most readily obvious manifestations of quality for any customer are the service attitudes of employees. In fact, during the employee-customer interaction, conventional wisdom holds that employees' customer service attitudes influence customer satisfaction, customer evaluations, and decisions to buy. According to Rosander (1980), there are five dimensions of service quality: quality of employee performance, facility, data, decision, and outcome. Undoubtedly, the performance of the employee influences customer satisfaction. This phenomenon has been referred to as interactive quality (Lehtinen & Lehtinen, 1982). Parasuraman, Zeithaml, & Berry (1985) go so far as to suggest that service quality is ultimately a function of the relationship between the employee and the customer, not the product or the price. Sasser, Olsen, & Wyckoff (1987) echo the assertion that personnel performance is a critical factor in the satisfaction of customers. If all of them are right, the relationship between satisfaction with quality of work life and customer service attitudes cannot be understated. Measuring Customer Service Attitudes The challenge of measuring service quality has increasingly captured the attention of researchers (Teas, 1994; Cronin & Taylor, 1992). While the substance and determinants of quality may remain undefined, its importance to organizations is unquestionable. Nevertheless, numerous problems inherent in the measurement of customer service attitudes still exist (Reeves & Bednar, 1994). Perhaps the complexities involved in measuring this construct have deterred many researchers from attempting to define and model service quality. Maybe this is also the reason why many of the efforts to define and measure service quality have emanated primarily from manufacturing, rather than service, settings.
When it has been measured, quality has sometimes been defined as a "zero defect" policy, a perspective the Japanese have embraced. Alternatively, P.B. Crosby (1979) quantifies quality as "conformance to requirements." Garvin (1983; 1988), on the other hand, measures quality in terms of counting the incidence of "internal failures" and "external failures." Other definitions include "value" (Abbot, 1955; Feigenbaum, 1982), "concordance to specification" (Gilmo", "title": "" }, { "docid": "e8792ced13f1be61d031e2b150cc5cf6", "text": "Scientific literature cites a wide range of values for caffeine content in food products. The authors suggest the following standard values for the United States: coffee (5 oz) 85 mg for ground roasted coffee, 60 mg for instant and 3 mg for decaffeinated; tea (5 oz): 30 mg for leaf/bag and 20 mg for instant; colas: 18 mg/6 oz serving; cocoa/hot chocolate: 4 mg/5 oz; chocolate milk: 4 mg/6 oz; chocolate candy: 1.5-6.0 mg/oz. Some products from the United Kingdom and Denmark have higher caffeine content. Caffeine consumption survey data are limited. Based on product usage and available consumption data, the authors suggest a mean daily caffeine intake for US consumers of 4 mg/kg. Among children younger than 18 years of age who are consumers of caffeine-containing foods, the mean daily caffeine intake is about 1 mg/kg. Both adults and children in Denmark and UK have higher levels of caffeine intake.", "title": "" }, { "docid": "9d1dc15130b9810f6232b4a3c77e8038", "text": "This paper argues that we should seek the golden middle way between dynamically and statically typed languages.", "title": "" }, { "docid": "8a21ff7f3e4d73233208d5faa70eb7ce", "text": "Achieving robustness and energy efficiency in nanoscale CMOS process technologies is made challenging due to the presence of process, temperature, and voltage variations. Traditional fault-tolerance techniques such as N-modular redundancy (NMR) employ deterministic error detection and correction, e.g., majority voter, and tend to be power hungry. This paper proposes soft NMR that nontrivially extends NMR by consciously exploiting error statistics caused by nanoscale artifacts in order to design robust and energy-efficient systems. In contrast to conventional NMR, soft NMR employs Bayesian detection techniques in the voter. Soft voter algorithms are obtained through optimization of appropriate application aware cost functions. Analysis indicates that, on average, soft NMR outperforms conventional NMR. Furthermore, unlike NMR, in many cases, soft NMR is able to generate a correct output even when all N replicas are in error. This increase in robustness is then traded-off through voltage scaling to achieve energy efficiency. The design of a discrete cosine transform (DCT) image coder is employed to demonstrate the benefits of the proposed technique. Simulations in a commercial 45 nm, 1.2 V, CMOS process show that soft NMR provides up to 10× improvement in robustness, and 35 percent power savings over conventional NMR.", "title": "" }, { "docid": "373c89beb40ce164999892be2ccb8f46", "text": "Recent advances in mobile technologies (esp., smart phones and tablets with built-in cameras, GPS and Internet access) made augmented reality (AR) applications available for the broad public. While many researchers have examined the affordances and constraints of AR for teaching and learning, quantitative evidence for its effectiveness is still scarce.
To contribute to filling this research gap, we designed and conducted a pretest-posttest crossover field experiment with 101 participants at a mathematics exhibition to measure the effect of AR on acquiring and retaining mathematical knowledge in an informal learning environment. We hypothesized that visitors acquire more knowledge from augmented exhibits than from exhibits without AR. The theoretical rationale for our hypothesis is that AR allows for the efficient and effective implementation of a subset of the design principles defined in the cognitive theory of multimedia. The empirical results we obtained show that museum visitors performed better on knowledge acquisition and retention tests related to augmented exhibits than to non-augmented exhibits and that they perceived AR as a valuable and desirable add-on for museum exhibitions.", "title": "" }, { "docid": "591e4719cadd8b9e6dfda932856fffce", "text": "Over the last two decades, the multiple classifier system (MCS), or classifier ensemble, has shown great potential to improve the accuracy and reliability of remote sensing image classification. Although there is a large body of literature covering MCS approaches, there is a lack of a comprehensive review that presents an overall architecture of the basic principles and trends behind the design of remote sensing classifier ensembles. Therefore, in order to give a reference point for MCS approaches, this paper attempts to explicitly review the remote sensing implementations of MCS and proposes some modified approaches. The effectiveness of existing and improved algorithms is analyzed and evaluated on multi-source remotely sensed images, including a high spatial resolution image (QuickBird), a hyperspectral image (OMISII) and a multi-spectral image (Landsat ETM+). Experimental results demonstrate that MCS can effectively improve the accuracy and stability of remote sensing image classification, and diversity measures play an active role in the combination of multiple classifiers. Furthermore, this survey provides a roadmap to guide future research and algorithm enhancement, and to facilitate knowledge accumulation on MCS in the remote sensing community.", "title": "" }, { "docid": "fb97b11eba38f84f38b473a09119162a", "text": "We show how to encrypt a relational database in such a way that it can efficiently support a large class of SQL queries. Our construction is based solely on structured encryption and does not make use of any property-preserving encryption (PPE) schemes such as deterministic and order-preserving encryption. As such, our approach leaks considerably less than PPE-based solutions which have recently been shown to reveal a lot of information in certain settings (Naveed et al., CCS '15). Our construction achieves asymptotically optimal query complexity under very natural conditions on the database and queries.", "title": "" }, { "docid": "5a583fe6fae9f0624bcde5043c56c566", "text": "In this paper, a microstrip dipole antenna on a flexible organic substrate is proposed. The antenna arms are tilted to make different variations of the dipole with a more compact size and almost the same performance. The antennas are fed using a coplanar stripline (CPS) geometry (Simons, 2001). The antennas are then conformed over cylindrical surfaces and their performances are compared to their flat counterparts.
Good performance is achieved for both the flat and conformal antennas.", "title": "" }, { "docid": "09e164aa239be608e8c2ba250d168ebc", "text": "The alarming growth rate of malicious apps has become a serious issue that sets back the prosperous mobile ecosystem. A recent report indicates that a new malicious app for Android is introduced every 10 s. To combat this serious malware campaign, we need a scalable malware detection approach that can effectively and efficiently identify malware apps. Numerous malware detection tools have been developed, including system-level and network-level approaches. However, scaling the detection for a large bundle of apps remains a challenging task. In this paper, we introduce Significant Permission IDentification (SigPID), a malware detection system based on permission usage analysis to cope with the rapid increase in the number of Android malware. Instead of extracting and analyzing all Android permissions, we develop three levels of pruning by mining the permission data to identify the most significant permissions that can be effective in distinguishing between benign and malicious apps. SigPID then utilizes machine-learning-based classification methods to classify different families of malware and benign apps. Our evaluation finds that only 22 permissions are significant. We then compare the performance of our approach, using only 22 permissions, against a baseline approach that analyzes all permissions. The results indicate that when a support vector machine is used as the classifier, we can achieve over 90% of precision, recall, accuracy, and F-measure, which are about the same as those produced by the baseline approach while incurring the analysis times that are 4–32 times less than those of using all permissions. Compared against other state-of-the-art approaches, SigPID is more effective by detecting 93.62% of malware in the dataset and 91.4% unknown/new malware samples.", "title": "" }, { "docid": "c7857bde224ef6252602798c349beb44", "text": "Context Several studies show that people with low health literacy skills have poorer health-related knowledge and comprehension. Contribution This updated systematic review of 96 studies found that low health literacy is associated with poorer ability to understand and follow medical advice, poorer health outcomes, and differential use of some health care services. Caution No studies examined the relationship between oral literacy (speaking and listening skills) and outcomes. Implication Although it is challenging, we need to find feasible ways to improve patients' health literacy skills and reduce the negative effects of low health literacy on outcomes. The Editors The term health literacy refers to a set of skills that people need to function effectively in the health care environment (1). These skills include the ability to read and understand text and to locate and interpret information in documents (print literacy); use quantitative information for tasks, such as interpreting food labels, measuring blood glucose levels, and adhering to medication regimens (numeracy); and speak and listen effectively (oral literacy) (2, 3). Approximately 80 million U.S. adults are thought to have limited health literacy, which puts them at risk for poorer health outcomes. Rates of limited health literacy are higher among elderly, minority, and poor persons and those with less than a high school education (4). 
Numerous policy and advocacy organizations have expressed concern about barriers caused by low health literacy, notably the Institute of Medicine's report Health Literacy: A Prescription to End Confusion in 2004 (5) and the U.S. Department of Health and Human Services' report National Action Plan to Improve Health Literacy in 2010 (6). To understand the relationship between health literacy level and use of health care services, health outcomes, costs, and disparities in health outcomes, we conducted a systematic evidence review for the Agency for Healthcare Research and Quality (AHRQ) (published in 2004), which was limited to the relationship between print literacy and health outcomes (7). We found a consistent association between low health literacy (measured by reading skills) and more limited health-related knowledge and comprehension. The relationship between health literacy level and other outcomes was less clear, primarily because of a lack of studies and relatively unsophisticated methods in the available studies. In this review, we update and expand the earlier review (7). Since 2004, researchers have conducted new and more sophisticated studies. Thus, in synthesizing the literature, we can now consider the relationship between outcomes and health literacy (print literacy alone or combined with numeracy) and between outcomes and the numeracy component of health literacy alone. Methods We developed and followed a protocol that used standard AHRQ Evidence-based Practice Center methods. The full report describes study methods in detail and presents evidence tables for each included study (1). Literature Search We searched MEDLINE, CINAHL, the Cochrane Library, PsycINFO, and ERIC databases. For health literacy, our search dates were from 2003 to May 2010. For numeracy, they were from 1966 to May 2010; we began at an earlier date because numeracy was not addressed in our 2004 review. For this review, we updated our searches beyond what was included in the full report from May 2010 through 22 February 2011 to be current with the most recent literature. No Medical Subject Heading terms specifically identify health literacyrelated articles, so we conducted keyword searches, including health literacy, literacy, numeracy, and terms or phrases used to identify related measurement instruments. We also hand-searched reference lists of pertinent review articles and editorials. Appendix Table 1 shows the full search strategy. Appendix Table 1. Search Strategy Study Selection We included English-language studies on persons of all ages whose health literacy or that of their caregivers (including numeracy or oral health literacy) had been measured directly and had not been self-reported. Studies had to compare participants in relation to an outcome, including health care access and service use, health outcomes, and costs of care. For numeracy studies, outcomes also included knowledge, because our earlier review had established the relationship between only health literacy and knowledge. We did not examine outcomes concerning attitudes, social norms, or patientprovider relationships. Data Abstraction and Quality Assessment After determining article inclusion, 1 reviewer entered study data into evidence tables; a second, senior reviewer checked the information for accuracy and completeness. 
Two reviewers independently rated the quality of studies as good, fair, or poor by using criteria designed to detect potential risk of bias in an observational study (including selection bias, measurement bias, and control for potential confounding) and precision of measurement. Data Synthesis and Strength of Evidence We assessed the overall strength of the evidence for each outcome separately for studies measuring health literacy and those measuring numeracy on the basis of information only from good- and fair-quality studies. Using AHRQ guidance (8), we graded the strength of evidence as high, moderate, low, or insufficient on the basis of the potential risk of bias of included studies, consistency of effect across studies, directness of the evidence, and precision of the estimate (Table 1). We determined the grade on the basis of the literature from the update searches. We then considered whether the findings from the 2004 review would alter our conclusions. We graded the body of evidence for an outcome as low if the evidence was limited to 1 study that controlled for potential confounding variables or to several small studies in which all, or only some, controlled for potential confounding variables or as insufficient if findings across studies were inconsistent or were limited to 1 unadjusted study. Because of heterogeneity across studies in their approaches to measuring health literacy, numeracy, and outcomes, we summarized the evidence through consensus discussions and did not conduct any meta-analyses. Table 1. Strength of Evidence Grades and Definitions Role of the Funding Source AHRQ reviewed a draft report and provided copyright release for this manuscript. The funding source did not participate in conducting literature searches, determining study eligibility, evaluating individual studies, grading evidence, or interpreting results. Results First, we present the results from our literature search and a summary of characteristics across studies, followed by findings specific to health literacy then numeracy. We generally highlight evidence of moderate or high strength and mention only outcomes with low or insufficient evidence. Where relevant, we comment on the evidence provided through the 2004 review. Tables 2 and 3 summarize our findings and strength-of-evidence grade for each included health literacy and numeracy outcome, respectively. Table 2. Health Literacy Outcome Results: Strength of Evidence and Summary of Findings, 2004 and 2011 Table 3. Numeracy Outcome Results: Strength of Evidence and Summary of Findings, 2011 Characteristics of Reviewed Studies We identified 3823 citations and evaluated 1012 full-text articles (Appendix Figure). Ultimately, we included 96 studies rated as good or fair quality. These studies were reported in 111 articles because some investigators reported study results in multiple publications (98 articles on health literacy, 22 on numeracy, and 9 on both). We found no studies that examined outcomes by the oral (verbal) component of health literacy. Of the 111 articles, 100 were rated as fair quality. All studies were observational, primarily cross-sectional designs (91 of 111 articles). The Supplement (health literacy) and Appendix Table 2 (numeracy) present summary information for each included article. Supplement. Overview of Health Literacy Studies Appendix Figure. Summary of evidence search and selection. KQ = key question. Appendix Table 2. Overview of Numeracy Studies Studies varied in their measurement of health literacy and numeracy. 
Commonly used instruments to measure health literacy are the Rapid Estimate of Adult Literacy in Medicine (REALM) (9), the Test of Functional Health Literacy in Adults (TOFHLA) (10), and short TOFHLA (S-TOFHLA). Instruments frequently used to measure numeracy are the SchwartzWoloshin Numeracy Test (11) and the Wide Range Achievement Test (WRAT) math subtest (12). Studies also differed in how investigators distinguished between levels or thresholds of health literacyeither as a continuous measure or as categorical groups. Some studies identified 3 groups, often called inadequate, marginal, and adequate, whereas others combined 2 of the 3 groups. Because evidence was sparse for evaluating differences between marginal and adequate health literacy, our results focus on the differences between the lowest and highest groups. Studies in this update generally included multivariate analyses rather than simpler unadjusted analyses. They varied considerably, however, in regard to which potential confounding variables are controlled (Supplement and Appendix Table 2). All results reported here are from adjusted analyses that controlled for potential confounding variables, unless otherwise noted. Relationship Between Health Literacy and Outcomes Use of Health Care Services and Access to Care Emergency Care and Hospitalizations. Nine studies examining the risk for emergency care use (1321) and 6 examining the risk for hospitalizations (1419) provided moderate evidence showing increased use of both services among people with lower health literacy, including elderly persons, clinic and inner-city hospital patients, patients with asthma, and patients with congestive heart failure.", "title": "" }, { "docid": "f6fa1c4ce34f627d9d7d1ca702272e26", "text": "One of the most difficult aspects in rhinoplasty is resolving and preventing functional compromise of the nasal valve area reliably. The nasal valves are crucial for the individual breathing competence of the nose. Structural and functional elements contribute to this complex system: the nasolabial angle, the configuration and stability of the alae, the function of the internal nasal valve, the anterior septum symmetrically separating the bilateral airways and giving structural and functional support to the alar cartilage complex and to their junction with the upper lateral cartilages, the scroll area. Subsequently, the open angle between septum and sidewalls is important for sufficient airflow as well as the position and function of the head of the turbinates. The clinical examination of these elements is described. Surgical techniques are more or less well known and demonstrated with patient examples and drawings: anterior septoplasty, reconstruction of tip and dorsum support by septal extension grafts and septal replacement, tip suspension and lateral crural sliding technique, spreader grafts and suture techniques, splay grafts, alar batten grafts, lateral crural extension grafts, and lateral alar suspension. The numerous literature is reviewed.", "title": "" }, { "docid": "f3a044835e9cbd0c13218ab0f9c06dd1", "text": "Among the various human factors impinging upon making a decision in an uncertain environment, risk and trust are surely crucial ones. Several models for trust have been proposed in the literature but few explicitly take risk into account. This paper analyses the relationship between the two concepts by first looking at how a decision is made to enter into a transaction based on the risk information. 
We then draw a model of the invested fraction of the capital function of a decision surface. We finally define a model of trust composed of a reliability trust as the probability of transaction success and a decision trust derived from the decision surface.", "title": "" }, { "docid": "03e7070b1eb755d792564077f65ea012", "text": "The widespread use of online social networks (OSNs) to disseminate information and exchange opinions, by the general public, news media, and political actors alike, has enabled new avenues of research in computational political science. In this paper, we study the problem of quantifying and inferring the political leaning of Twitter users. We formulate political leaning inference as a convex optimization problem that incorporates two ideas: (a) users are consistent in their actions of tweeting and retweeting about political issues, and (b) similar users tend to be retweeted by similar audience. We then apply our inference technique to 119 million election-related tweets collected in seven months during the 2012 U.S. presidential election campaign. On a set of frequently retweeted sources, our technique achieves 94 percent accuracy and high rank correlation as compared with manually created labels. By studying the political leaning of 1,000 frequently retweeted sources, 232,000 ordinary users who retweeted them, and the hashtags used by these sources, our quantitative study sheds light on the political demographics of the Twitter population, and the temporal dynamics of political polarization as events unfold.", "title": "" }, { "docid": "b294a3541182e3195254e83b092f537d", "text": "This paper describes a new project intended to provide a firmer theoretical and empirical foundation for such tasks as enterprise modeling, enterprise integration, and process re-engineering. The project includes ( 1 ) collecting examples of how different organizations perform sim'lar processes, and ( 2 ) representing these examples in an on-line \"process handbook\" which includes the relative advantages of the alternatives. The handbook is intended to help (a) redesign existing Organizational processes, ( b ) invent new organizational processes that take advantage of information technology, and perhaps (e ) automatically generate sofivare to support organizational processes. A key element of the work is a novel approach to representing processes at various levels of abstraction. This approach uses ideas from computer science about inheritance and from coordinalion theory about managing dependencies. Its primary advantage is that it allows users to explicitly represent the similarities (and differences) among related processes and to easily find or generate sensible alternatives for how a given process could be", "title": "" }, { "docid": "157f5ef02675b789df0f893311a5db72", "text": "We present a novel spectral shading model for human skin. Our model accounts for both subsurface and surface scattering, and uses only four parameters to simulate the interaction of light with human skin. The four parameters control the amount of oil, melanin and hemoglobin in the skin, which makes it possible to match specific skin types. Using these parameters we generate custom wavelength dependent diffusion profiles for a two-layer skin model that account for subsurface scattering within the skin. These diffusion profiles are computed using convolved diffusion multipoles, enabling an accurate and rapid simulation of the subsurface scattering of light within skin. 
We combine the subsurface scattering simulation with a Torrance-Sparrow BRDF model to simulate the interaction of light with an oily layer at the surface of the skin. Our results demonstrate that this four parameter model makes it possible to simulate the range of natural appearance of human skin including African, Asian, and Caucasian skin types.", "title": "" }, { "docid": "57502ae793808fded7d446a3bb82ca74", "text": "Over the last decade, the “digitization” of the electron enterprise has grown at exponential rates. Utility, industrial, commercial, and even residential consumers are transforming all aspects of their lives into the digital domain. Moving forward, it is expected that every piece of equipment, every receptacle, every switch, and even every light bulb will possess some type of setting, monitoring and/or control. In order to be able to manage the large number of devices and to enable the various devices to communicate with one another, a new communication model was needed. That model has been developed and standardized as IEC61850 – Communication Networks and Systems in Substations. This paper looks at the needs of next generation communication systems and provides an overview of the IEC61850 protocol and how it meets these needs. I. Communication System Needs Communication has always played a critical role in the real-time operation of the power system. In the beginning, the telephone was used to communicate line loadings back to the control center as well as to dispatch operators to perform switching operations at substations. Telephoneswitching based remote control units were available as early as the 1930’s and were able to provide status and control for a few points. As digital communications became a viable option in the 1960’s, data acquisition systems (DAS) were installed to automatically collect measurement data from the substations. Since bandwidth was limited, DAS communication protocols were optimized to operate over low-bandwidth communication channels. The “cost” of this optimization was the time it took to configure, map, and document the location of the various data bits received by the protocol. As we move into the digital age, literally thousands of analog and digital data points are available in a single Intelligent Electronic Device (IED) and communication bandwidth is no longer a limiting factor. Substation to master communication data paths operating at 64,000 bits per second are becoming commonplace with an obvious migration path to much high rates. With this migration in technology, the “cost” component of a data acquisition system has now become the configuration and documentation component. Consequently, a key component of a communication system is the ability to describe themselves from both a data and services (communication functions that an IED performs) perspective. Other “key” requirements include: • High-speed IED to IED communication", "title": "" }, { "docid": "950d7d10b09f5d13e09692b2a4576c00", "text": "Prebiotics, as currently conceived of, are all carbohydrates of relatively short chain length. To be effective they must reach the cecum. Present evidence concerning the 2 most studied prebiotics, fructooligosaccharides and inulin, is consistent with their resisting digestion by gastric acid and pancreatic enzymes in vivo. 
However, the wide variety of new candidate prebiotics becoming available for human use requires that a manageable set of in vitro tests be agreed on so that their nondigestibility and fermentability can be established without recourse to human studies in every case. In the large intestine, prebiotics, in addition to their selective effects on bifidobacteria and lactobacilli, influence many aspects of bowel function through fermentation. Short-chain fatty acids are a major product of prebiotic breakdown, but as yet, no characteristic pattern of fermentation acids has been identified. Through stimulation of bacterial growth and fermentation, prebiotics affect bowel habit and are mildly laxative. Perhaps more importantly, some are a potent source of hydrogen in the gut. Mild flatulence is frequently observed by subjects being fed prebiotics; in a significant number of subjects it is severe enough to be unacceptable and to discourage consumption. Prebiotics are like other carbohydrates that reach the cecum, such as nonstarch polysaccharides, sugar alcohols, and resistant starch, in being substrates for fermentation. They are, however, distinctive in their selective effect on the microflora and their propensity to produce flatulence.", "title": "" }, { "docid": "e541be7c81576fdef564fd7eba5d67dd", "text": "As the cost of massively broadband® semiconductors continue to be driven down at millimeter wave (mm-wave) frequencies, there is great potential to use LMDS spectrum (in the 28-38 GHz bands) and the 60 GHz band for cellular/mobile and peer-to-peer wireless networks. This work presents urban cellular and peer-to-peer RF wideband channel measurements using a broadband sliding correlator channel sounder and steerable antennas at carrier frequencies of 38 GHz and 60 GHz, and presents measurements showing the propagation time delay spread and path loss as a function of separation distance and antenna pointing angles for many types of real-world environments. The data presented here show that at 38 GHz, unobstructed Line of Site (LOS) channels obey free space propagation path loss while non-LOS (NLOS) channels have large multipath delay spreads and can exploit many different pointing angles to provide propagation links. At 60 GHz, there is notably more path loss, smaller delay spreads, and fewer unique antenna angles for creating a link. For both 38 GHz and 60 GHz, we demonstrate empirical relationships between the RMS delay spread and antenna pointing angles, and observe that excess path loss (above free space) has an inverse relationship with transmitter-to-receiver separation distance.", "title": "" }, { "docid": "5392e45840929b05b549a64a250774e5", "text": "Faces in natural images are often occluded by a variety of objects. We propose a fully automated, probabilistic and occlusion-aware 3D morphable face model adaptation framework following an analysis-by-synthesis setup. The key idea is to segment the image into regions explained by separate models. Our framework includes a 3D morphable face model, a prototype-based beard model and a simple model for occlusions and background regions. The segmentation and all the model parameters have to be inferred from the single target image. Face model adaptation and segmentation are solved jointly using an expectation–maximization-like procedure. During the E-step, we update the segmentation and in the M-step the face model parameters are updated. For face model adaptation we apply a stochastic sampling strategy based on the Metropolis–Hastings algorithm. 
For segmentation, we apply loopy belief propagation for inference in a Markov random field. Illumination estimation is critical for occlusion handling. Our combined segmentation and model adaptation needs a proper initialization of the illumination parameters. We propose a RANSAC-based robust illumination estimation technique. By applying this method to a large face image database we obtain a first empirical distribution of real-world illumination conditions. The obtained empirical distribution is made publicly available and can be used as prior in probabilistic frameworks, for regularization or to synthesize data for deep learning methods.", "title": "" } ]
scidocsrr
842674cf8a39f07f2abf32dd670a7ec9
Anomalous lattice vibrations of single- and few-layer MoS2.
[ { "docid": "9068ae05b4064a98977f6a19bae6ccf0", "text": "We present Raman spectroscopy measurements on single- and few-layer graphene flakes. By using a scanning confocal approach, we collect spectral data with spatial resolution, which allows us to directly compare Raman images with scanning force micrographs. Single-layer graphene can be distinguished from double- and few-layer by the width of the D' line: the single peak for single-layer graphene splits into different peaks for the double-layer. These findings are explained using the double-resonant Raman model based on ab initio calculations of the electronic structure and of the phonon dispersion. We investigate the D line intensity and find no defects within the flake. A finite D line response originating from the edges can be attributed either to defects or to the breakdown of translational symmetry.", "title": "" } ]
[ { "docid": "23f9be150ae62c583d34b53b509818a4", "text": "Online social networks (OSNs) have experienced tremendous growth in recent years and become a de facto portal for hundreds of millions of Internet users. These OSNs offer attractive means for digital social interactions and information sharing, but also raise a number of security and privacy issues. While OSNs allow users to restrict access to shared data, they currently do not provide any mechanism to enforce privacy concerns over data associated with multiple users. To this end, we propose an approach to enable the protection of shared data associated with multiple users in OSNs. We formulate an access control model to capture the essence of multiparty authorization requirements, along with a multiparty policy specification scheme and a policy enforcement mechanism. Besides, we present a logical representation of our access control model that allows us to leverage the features of existing logic solvers to perform various analysis tasks on our model. We also discuss a proof-of-concept prototype of our approach as part of an application in Facebook and provide usability study and system evaluation of our method.", "title": "" }, { "docid": "e409a2a23fb0dbeb0aa57c89a10d61b1", "text": "Text is still the most prevalent Internet media type. Examples of this include popular social networking applications such as Twitter, Craigslist, Facebook, etc. Other web applications such as e-mail, blog, chat rooms, etc. are also mostly text based. A question we address in this paper that deals with text based Internet forensics is the following: given a short text document, can we identify if the author is a man or a woman? This question is motivated by recent events where people faked their gender on the Internet. Note that this is different from the authorship attribution problem. In this paper we investigate author gender identification for short length, multi-genre, content-free text, such as the ones found in many Internet applications. Fundamental questions we ask are: do men and women inherently use different classes of language styles? If this is true, what are good linguistic features that indicate gender? Based on research in human psychology, we propose 545 psycho-linguistic and gender-preferential cues along with stylometric features to build the feature space for this identification problem. Note that identifying the correct set of features that indicate gender is an open research problem. Three machine learning algorithms (support vector machine, Bayesian logistic regression and AdaBoost decision tree) are then designed for gender identification based on the proposed features. Extensive experiments on large text corpora (Reuters Corpus Volume 1 newsgroup data and Enron e-mail data) indicate an accuracy up to 85.1% in identifying the gender. Experiments also indicate that function words, word-based features and structural features are significant gender discriminators. a 2011 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "6fe77035a5101f60968a189d648e2feb", "text": "In the past few years, Reddit -- a community-driven platform for submitting, commenting and rating links and text posts -- has grown exponentially, from a small community of users into one of the largest online communities on the Web. 
To the best of our knowledge, this work represents the most comprehensive longitudinal study of Reddit's evolution to date, studying both (i) how user submissions have evolved over time and (ii) how the community's allocation of attention and its perception of submissions have changed over 5 years based on an analysis of almost 60 million submissions. Our work reveals an ever-increasing diversification of topics accompanied by a simultaneous concentration towards a few selected domains both in terms of posted submissions as well as perception and attention. By and large, our investigations suggest that Reddit has transformed itself from a dedicated gateway to the Web to an increasingly self-referential community that focuses on and reinforces its own user-generated image- and textual content over external sources.", "title": "" }, { "docid": "1c0e441afd88f00b690900c42b40841a", "text": "Convergence problems occur abundantly in all branches of mathematics or in the mathematical treatment of the sciences. Sequence transformations are principal tools to overcome convergence problems of the kind. They accomplish this by converting a slowly converging or diverging input sequence {sn} ∞ n=0 into another sequence {s ′ n }∞ n=0 with hopefully better numerical properties. Padé approximants, which convert the partial sums of a power series to a doubly indexed sequence of rational functions, are the best known sequence transformations, but the emphasis of the review will be on alternative sequence transformations which for some problems provide better results than Padé approximants.", "title": "" }, { "docid": "76e75c4549cbaf89796355b299bedfdc", "text": "Event cameras offer many advantages over standard frame-based cameras, such as low latency, high temporal resolution, and a high dynamic range. They respond to pixellevel brightness changes and, therefore, provide a sparse output. However, in textured scenes with rapid motion, millions of events are generated per second. Therefore, stateof-the-art event-based algorithms either require massive parallel computation (e.g., a GPU) or depart from the event-based processing paradigm. Inspired by frame-based pre-processing techniques that reduce an image to a set of features, which are typically the input to higher-level algorithms, we propose a method to reduce an event stream to a corner event stream. Our goal is twofold: extract relevant tracking information (corners do not suffer from the aperture problem) and decrease the event rate for later processing stages. Our event-based corner detector is very efficient due to its design principle, which consists of working on the Surface of Active Events (a map with the timestamp of the latest event at each pixel) using only comparison operations. Our method asynchronously processes event by event with very low latency. Our implementation is capable of processing millions of events per second on a single core (less than a micro-second per event) and reduces the event rate by a factor of 10 to 20.", "title": "" }, { "docid": "095c796491edf050dc372799ae82b3d3", "text": "Networks evolve continuously over time with the addition, deletion, and changing of links and nodes. Although many networks contain this type of temporal information, the majority of research in network representation learning has focused on static snapshots of the graph and has largely ignored the temporal dynamics of the network. In this work, we describe a general framework for incorporating temporal information into network embedding methods. 
The framework gives rise to methods for learning time-respecting embeddings from continuous-time dynamic networks. Overall, the experiments demonstrate the effectiveness of the proposed framework and dynamic network embedding approach as it achieves an average gain of 11.9% across all methods and graphs. The results indicate that modeling temporal dependencies in graphs is important for learning appropriate and meaningful network representations.", "title": "" }, { "docid": "b139bad3a500fad18c203316fb6fbb55", "text": "The current environment of web applications demands performance and scalability. Several previous approaches have implemented threading, events, or both, but increasing traffic requires new solutions for improved concurrent service. Node.js is a new web framework that achieves both through server-side JavaScript and eventdriven I/O. Tests will be performed against two comparable frameworks that compare service request times over a number of cores. The results will demonstrate the performance of JavaScript as a server-side language and the efficiency of the non-blocking asynchronous model.", "title": "" }, { "docid": "a027c9dd3b4522cdf09a2238bfa4c37e", "text": "Distributed word representations, or word vectors, have recently been applied to many tasks in natural language processing, leading to state-of-the-art performance. A key ingredient to the successful application of these representations is to train them on very large corpora, and use these pre-trained models in downstream tasks. In this paper, we describe how we trained such high quality word representations for 157 languages. We used two sources of data to train these models: the free online encyclopedia Wikipedia and data from the common crawl project. We also introduce three new word analogy datasets to evaluate these word vectors, for French, Hindi and Polish. Finally, we evaluate our pre-trained word vectors on 10 languages for which evaluation datasets exists, showing very strong performance compared to previous models.", "title": "" }, { "docid": "c78a4446be38b8fff2a949cba30a8b65", "text": "This paper will derive the Black-Scholes pricing model of a European option by calculating the expected value of the option. We will assume that the stock price is log-normally distributed and that the universe is riskneutral. Then, using Ito’s Lemma, we will justify the use of the risk-neutral rate in these initial calculations. Finally, we will prove put-call parity in order to price European put options, and extend the concepts of the Black-Scholes formula to value an option with pricing barriers.", "title": "" }, { "docid": "d76e46eec2aa0abcbbd47b8270673efa", "text": "OBJECTIVE\nTo explore the clinical efficacy and the mechanism of acupoint autohemotherapy in the treatment of allergic rhinitis.\n\n\nMETHODS\nForty-five cases were randomized into an autohemotherapy group (24 cases) and a western medication group (21 cases). In the autohemotherapy group, the acupoint autohemotherapy was applied to the bilateral Dingchuan (EX-B 1), Fengmen (BL 12), Feishu (BL 13), Quchi (LI 11), Zusanli (ST 36) and the others. In the western medication group, loratadine tablets were prescribed. The patients were treated continuously for 3 months in both groups. The clinical symptom score was taken for the assessment of clinical efficacy. 
The enzyme-linked immunoadsordent assay (ELISA) was adopted to determine the contents of serum interferon-gamma (IFN-gamma) and interleukin-12 (IL-12).\n\n\nRESULTS\nThe total effective rate was 83.3% (20/24) in the autohemotherapy group, which was obviously superior to 66.7% (14/21) in the western medication group (P < 0.05). After treatment, the clinical symptom scores of patients in the two groups were all reduced. The improvements in the scores of sneezing and clear nasal discharge in the autohemotherapy group were much more significant than those in the western medication group (both P < 0.05). After treatment, the serum IL-12 content of patients in the two groups was all increased to different extents as compared with that before treatment (both P < 0.05). In the autohemotherapy group, the serum IFN-gamma was increased after treatment (P < 0.05). In the western medication group, the serum IFN-gamma was not increased obviously after treatment (P > 0.05). The increase of the above index contents in the autohemotherapy group were more apparent than those in the western medication group (both P < 0.05).\n\n\nCONCLUSION\nThe acupoint autohemotherapy relieves significantly the clinical symptoms of allergic rhinitis and the therapeutic effect is better than that with oral administration of loratadine tablets, which is probably relevant with the increase of serum IL-12 content and the promotion of IFN-gamma synthesis.", "title": "" }, { "docid": "0739c95aca9678b3c001c4d2eb92ec57", "text": "The Image segmentation is referred to as one of the most important processes of image processing. Image segmentation is the technique of dividing or partitioning an image into parts, called segments. It is mostly useful for applications like image compression or object recognition, because for these types of applications, it is inefficient to process the whole image. So, image segmentation is used to segment the parts from image for further processing. There exist several image segmentation techniques, which partition the image into several parts based on certain image features like pixel intensity value, color, texture, etc. These all techniques are categorized based on the segmentation method used. In this paper the various image segmentation techniques are reviewed, discussed and finally a comparison of their advantages and disadvantages is listed.", "title": "" }, { "docid": "92e50fc2351b4a05d573590f3ed05e81", "text": "OBJECTIVE\nWe examined the effects of sensory-enhanced hatha yoga on symptoms of combat stress in deployed military personnel, compared their anxiety and sensory processing with that of stateside civilians, and identified any correlations between the State-Trait Anxiety Inventory scales and the Adolescent/Adult Sensory Profile quadrants.\n\n\nMETHOD\nSeventy military personnel who were deployed to Iraq participated in a randomized controlled trial. Thirty-five received 3 wk (≥9 sessions) of sensory-enhanced hatha yoga, and 35 did not receive any form of yoga.\n\n\nRESULTS\nSensory-enhanced hatha yoga was effective in reducing state and trait anxiety, despite normal pretest scores. Treatment participants showed significantly greater improvement than control participants on 16 of 18 mental health and quality-of-life factors. We found positive correlations between all test measures except sensory seeking. 
Sensory seeking was negatively correlated with all measures except low registration, which was insignificant.\n\n\nCONCLUSION\nThe results support using sensory-enhanced hatha yoga for proactive combat stress management.", "title": "" }, { "docid": "0c842ef34f1924e899e408309f306640", "text": "A single-tube 5' nuclease multiplex PCR assay was developed on the ABI 7700 Sequence Detection System (TaqMan) for the detection of Neisseria meningitidis, Haemophilus influenzae, and Streptococcus pneumoniae from clinical samples of cerebrospinal fluid (CSF), plasma, serum, and whole blood. Capsular transport (ctrA), capsulation (bexA), and pneumolysin (ply) gene targets specific for N. meningitidis, H. influenzae, and S. pneumoniae, respectively, were selected. Using sequence-specific fluorescent-dye-labeled probes and continuous real-time monitoring, accumulation of amplified product was measured. Sensitivity was assessed using clinical samples (CSF, serum, plasma, and whole blood) from culture-confirmed cases for the three organisms. The respective sensitivities (as percentages) for N. meningitidis, H. influenzae, and S. pneumoniae were 88.4, 100, and 91.8. The primer sets were 100% specific for the selected culture isolates. The ctrA primers amplified meningococcal serogroups A, B, C, 29E, W135, X, Y, and Z; the ply primers amplified pneumococcal serotypes 1, 2, 3, 4, 5, 6, 7, 8, 9, 10A, 11A, 12, 14, 15B, 17F, 18C, 19, 20, 22, 23, 24, 31, and 33; and the bexA primers amplified H. influenzae types b and c. Coamplification of two target genes without a loss of sensitivity was demonstrated. The multiplex assay was then used to test a large number (n = 4,113) of culture-negative samples for the three pathogens. Cases of meningococcal, H. influenzae, and pneumococcal disease that had not previously been confirmed by culture were identified with this assay. The ctrA primer set used in the multiplex PCR was found to be more sensitive (P < 0.0001) than the ctrA primers that had been used for meningococcal PCR testing at that time.", "title": "" }, { "docid": "b8702cb8d18ae53664f3dfff95152764", "text": "Word Sense Disambiguation is a longstanding task in Natural Language Processing, lying at the core of human language understanding. However, the evaluation of automatic systems has been problematic, mainly due to the lack of a reliable evaluation framework. In this paper we develop a unified evaluation framework and analyze the performance of various Word Sense Disambiguation systems in a fair setup. The results show that supervised systems clearly outperform knowledge-based models. Among the supervised systems, a linear classifier trained on conventional local features still proves to be a hard baseline to beat. Nonetheless, recent approaches exploiting neural networks on unlabeled corpora achieve promising results, surpassing this hard baseline in most test sets.", "title": "" }, { "docid": "ae956d5e1182986505ff8b4de8b23777", "text": "Device classification is important for many applications such as industrial quality controls, through-wall imaging, and network security. A novel approach to detection is proposed using a random noise radar (RNR), coupled with Radio Frequency “Distinct Native Attribute (RF-DNA)” fingerprinting processing algorithms to non-destructively interrogate microwave devices. 
RF-DNA has previously demonstrated “serial number” discrimination of passive Radio Frequency (RF) emissions such as Orthogonal Frequency Division Multiplexed (OFDM) signals, Worldwide Interoperability for Microwave Access (WiMAX) signals and others with classification accuracies above 80% using a Multiple Discriminant Analysis/Maximum Likelihood (MDAML) classifier. This approach proposes to couple the classification successes of the RF-DNA fingerprint processing with a non-destructive active interrogation waveform. An Ultra Wideband (UWB) noise waveform is uniquely suitable as an active interrogation method since it will not cause damage to sensitive microwave components and multiple RNRs can operate simultaneously in close proximity, allowing for significant parallelization of detection systems.", "title": "" }, { "docid": "6b1c17b9c4462aebbe7f908f4c88381b", "text": "This study examined neural activity associated with establishing causal relationships across sentences during on-line comprehension. ERPs were measured while participants read and judged the relatedness of three-sentence scenarios in which the final sentence was highly causally related, intermediately related, and causally unrelated to its context. Lexico-semantic co-occurrence was matched across the three conditions using a Latent Semantic Analysis. Critical words in causally unrelated scenarios evoked a larger N400 than words in both highly causally related and intermediately related scenarios, regardless of whether they appeared before or at the sentence-final position. At midline sites, the N400 to intermediately related sentence-final words was attenuated to the same degree as to highly causally related words, but otherwise the N400 to intermediately related words fell in between that evoked by highly causally related and intermediately related words. No modulation of the late positivity/P600 component was observed across conditions. These results indicate that both simple and complex causal inferences can influence the earliest stages of semantically processing an incoming word. Further, they suggest that causal coherence, at the situation level, can influence incremental word-by-word discourse comprehension, even when semantic relationships between individual words are matched.", "title": "" }, { "docid": "7b13637b634b11b3061f7ebe0c64b3a6", "text": "Analytical calculation methods for all the major components of the synchronous inductance of tooth-coil permanent-magnet synchronous machines are reevaluated in this paper. The inductance estimation is different in the tooth-coil machine compared with the one in the traditional rotating field winding machine. The accuracy of the analytical torque calculation highly depends on the estimated synchronous inductance. Despite powerful finite element method (FEM) tools, an accurate and fast analytical method is required at an early design stage to find an initial machine design structure with the desired performance. The results of the analytical inductance calculation are verified and assessed in terms of accuracy with the FEM simulation results and with the prototype measurement results.", "title": "" }, { "docid": "84ba070a14da00c37de479e62e78f126", "text": "The EEG (Electroencephalogram) signal indicates the electrical activity of the brain. They are highly random in nature and may contain useful information about the brain state. However, it is very difficult to get useful information from these signals directly in the time domain just by observing them. 
They are basically non-linear and nonstationary in nature. Hence, important features can be extracted for the diagnosis of different diseases using advanced signal processing techniques. In this paper the effect of different events on the EEG signal, and different signal processing methods used to extract the hidden information from the signal, are discussed in detail. Linear, frequency domain, time-frequency and non-linear techniques like correlation dimension (CD), largest Lyapunov exponent (LLE), Hurst exponent (H), different entropies, fractal dimension (FD), Higher Order Spectra (HOS), phase space plots and recurrence plots are discussed in detail using a typical normal EEG signal.", "title": "" }, { "docid": "b7da2182bbdf69c46ffba20b272fab02", "text": "Social Media is playing a key role in today's society. Many of the events that are taking place in diverse human activities could be explained by the study of these data. Big Data is a relatively new paradigm in Computer Science that is gaining increasing interest from the scientific community. Big Data Predictive Analytics is a Big Data discipline that is mostly used to analyze what is in the huge amounts of data and then perform predictions based on such analysis using advanced mathematics and computing techniques. The study of Social Media Data involves disciplines like Natural Language Processing; by integrating this area into academic studies, useful findings have been achieved. Social Network Rating Systems are online platforms that allow users to know about goods and services; the way in which users review and rate their experience is a field of evolving research. This paper presents a deep investigation into the state of the art of these areas to discover and analyze the current status of the research that has been developed so far by academics of diverse background.", "title": "" }, { "docid": "b467763514576e3f37755fe0e18394c7", "text": "The study of lactic acid (HLa) and muscular contraction has a long history, beginning perhaps as early as 1807 when Berzelius found HLa in muscular fluid and thought that ‘‘the amount of free lactic acid in a muscle [was] proportional to the extent to which the muscle had previously been exercised’’ (cited in ref. 1). Several subsequent studies in the 19th century established the view that HLa was a byproduct of metabolism under conditions of O2 limitation. For example, in 1891, Araki (cited in ref. 2) reported elevated HLa levels in the blood and urine of a variety of animals subjected to hypoxia. In the early part of the 20th century, Fletcher and Hopkins (3) found an accumulation of HLa in anoxia as well as after prolonged stimulation to fatigue in amphibian muscle in vitro. Subsequently, based on the work of Fletcher and Hopkins (3) as well as his own studies, Hill (and colleagues; ref. 4) postulated that HLa increased during muscular exercise because of a lack of O2 for the energy requirements of the contracting muscles. These studies laid the groundwork for the anaerobic threshold concept, which was introduced and detailed by Wasserman and colleagues in the 1960s and early 1970s (5–7). The basic anaerobic threshold paradigm is that elevated HLa production and concentration during muscular contractions or exercise are the result of cellular hypoxia. Table 1 summarizes the essential components of the anaerobic threshold concept. 
However, several studies during the past ~30 years have presented evidence questioning the idea that O2 limitation is a prerequisite for HLa production and accumulation in muscle and blood. Jöbsis and Stainsby (8) stimulated the canine gastrocnemius in situ at a rate known to elicit peak twitch oxygen uptake (V̇O2) and high net HLa output. They (8) reasoned that if the HLa output was caused by O2-limited oxidative phosphorylation, then there should be an accompanying reduction of members of the respiratory chain, including the NADH/NAD+ pair. Instead, muscle surface fluorometry indicated NADH/NAD+ oxidation in comparison to the resting condition. Later, Connett and colleagues (9–11), by using myoglobin cryomicrospectroscopy in small volumes of dog gracilis muscle, were unable to find loci with a PO2 less than the critical PO2 for maximal mitochondrial oxidative phosphorylation (0.1–0.5 mmHg) during muscle contractions resulting in HLa output and an increase in muscle HLa concentration. More recently, Richardson and colleagues (12) used proton magnetic resonance spectroscopy to determine myoglobin saturation (and thereby an estimate of intramuscular PO2) during progressive exercise in humans. They found that HLa efflux was unrelated to muscle cytoplasmic PO2 during normoxia. Although there are legitimate criticisms of these studies, they and many others of a related nature have led to alternative explanations for HLa production that do not involve O2 limitation. In the present issue of PNAS, two papers (13, 14) illustrate the dichotomous relationship between lactic acid and oxygen. First, Kemper and colleagues (13) add further evidence against O2 as the key regulator of HLa production. They (13) used a unique model, the rattlesnake tailshaker muscle complex, to study intracellular glycolysis during ischemia in comparison to HLa efflux during free flow conditions; in both protocols, the muscle complex was active and producing rattling. In their first experiment, rattling was induced for 29 s during ischemia resulting from blood pressure cuff inflation between the cloaca and tailshaker muscle complex. In a second experiment, measures were taken during 108 s of rattling with normal, spontaneous blood flow. In both experiments, 31P magnetic resonance spectroscopy permitted measurement of changes in muscle levels of PCr, ATP, Pi, and pH before, during, and after rattling. Based on previous methods established in their laboratory, Kemper and colleagues (13) estimated glycolytic flux during the ischemic and aerobic rattling protocols. The result was that total glycolytic flux was the same under both conditions! Kemper and colleagues (13) conclude that HLa generation does not necessarily reflect O2 limitation. To be fair, there are potential limitations to the excellent paper by Kemper and colleagues (13). First, and most importantly, they studied muscle metabolism in the transition from rest to rattling (29 s during ischemia and 108 s during free flow). Some investigators argue that oxidative phosphorylation is limited by O2 delivery to the exercising muscles during this nonsteady-state transition even with spontaneous blood flow (for review, see ref. 15). This remains a matter of debate, and the role of O2 in the transition from rest to contractions may depend on the intensity of contractions (16, 17). 
Of course, it is possible that the role of O2 in the transition to rattling may be tempered by the high volume density of mitochondria and the high blood supply to this unique muscle complex (13, 18). Second, there could be significant early lactate production within the first seconds of the transition (19). Third, it would have been helpful to have measurements of intramuscular lactate and glycogen concentra-", "title": "" } ]
scidocsrr
480c294d9d88c3aace8b12a9c0a1d89b
A typology of crowdfunding sponsors: Birds of a feather flock together?
[ { "docid": "e267fe4d2d7aa74ded8988fcdbfb3474", "text": "Consumers have recently begun to play a new role in some markets: that of providing capital and investment support to the offering. This phenomenon, called crowdfunding, is a collective effort by people who network and pool their money together, usually via the Internet, in order to invest in and support efforts initiated by other people or organizations. Successful service businesses that organize crowdfunding and act as intermediaries are emerging, attesting to the viability of this means of attracting investment. Employing a “Grounded Theory” approach, this paper performs an in-depth qualitative analysis of three cases involving crowdfunding initiatives: SellaBand in the music business, Trampoline in financial services, and Kapipal in non-profit services. These cases were selected to represent a diverse set of crowdfunding operations that vary in terms of risk/return for the investorconsumer and the type of consumer involvement. The analysis offers important insights about investor behaviour in crowdfunding service models, the potential determinants of such behaviour, and variations in behaviour and determinants across different service models. The findings have implications for service managers interested in launching and/or managing crowdfunding initiatives, and for service theory in terms of extending the consumer’s role from co-production and co-creation to investment.", "title": "" }, { "docid": "8654b5134dadc076a6298526e60f66fb", "text": "Ideas competitions appear to be a promising tool for crowdsourcing and open innovation processes, especially for business-to-business software companies. active participation of potential lead users is the key to success. Yet a look at existing ideas competitions in the software field leads to the conclusion that many information technology (It)–based ideas competitions fail to meet requirements upon which active participation is established. the paper describes how activation-enabling functionalities can be systematically designed and implemented in an It-based ideas competition for enterprise resource planning software. We proceeded to evaluate the outcomes of these design measures and found that participation can be supported using a two-step model. the components of the model support incentives and motives of users. Incentives and motives of the users then support the process of activation and consequently participation throughout the ideas competition. this contributes to the successful implementation and maintenance of the ideas competition, thereby providing support for the development of promising innovative ideas. the paper concludes with a discussion of further activation-supporting components yet to be implemented and points to rich possibilities for future research in these areas.", "title": "" } ]
[ { "docid": "41a0b9797c556368f84e2a05b80645f3", "text": "This paper describes and evaluates log-linear parsing models for Combinatory Categorial Grammar (CCG). A parallel implementation of the L-BFGS optimisation algorithm is described, which runs on a Beowulf cluster allowing the complete Penn Treebank to be used for estimation. We also develop a new efficient parsing algorithm for CCG which maximises expected recall of dependencies. We compare models which use all CCG derivations, including nonstandard derivations, with normal-form models. The performances of the two models are comparable and the results are competitive with existing wide-coverage CCG parsers.", "title": "" }, { "docid": "d2f2137602149b5062f60e7325d3610f", "text": "Recently a revision of the cell theory has been proposed, which has several implications both for physiology and pathology. This revision is founded on adapting the old Julius von Sach’s proposal (1892) of the Energide as the fundamental universal unit of eukaryotic life. This view maintains that, in most instances, the living unit is the symbiotic assemblage of the cell periphery complex organized around the plasma membrane, some peripheral semi-autonomous cytosol organelles (as mitochondria and plastids, which may be or not be present), and of the Energide (formed by the nucleus, microtubules, and other satellite structures). A fundamental aspect is the proposal that the Energide plays a pivotal and organizing role of the entire symbiotic assemblage (see Appendix 1). The present paper discusses how the Energide paradigm implies a revision of the concept of the internal milieu. As a matter of fact, the Energide interacts with the cytoplasm that, in turn, interacts with the interstitial fluid, and hence with the medium that has been, classically, known as the internal milieu. Some implications of this aspect have been also presented with the help of a computational model in a mathematical Appendix 2 to the paper. Finally, relevances of the Energide concept for the information handling in the central nervous system are discussed especially in relation to the inter-Energide exchange of information.", "title": "" }, { "docid": "d80580490ac7d968ff08c2a9ee159028", "text": "Statistical relational AI (StarAI) aims at reasoning and learning in noisy domains described in terms of objects and relationships by combining probability with first-order logic. With huge advances in deep learning in the current years, combining deep networks with first-order logic has been the focus of several recent studies. Many of the existing attempts, however, only focus on relations and ignore object properties. The attempts that do consider object properties are limited in terms of modelling power or scalability. In this paper, we develop relational neural networks (RelNNs) by adding hidden layers to relational logistic regression (the relational counterpart of logistic regression). We learn latent properties for objects both directly and through general rules. Back-propagation is used for training these models. A modular, layer-wise architecture facilitates utilizing the techniques developed within deep learning community to our architecture. 
Initial experiments on eight tasks over three real-world datasets show that RelNNs are promising models for relational learning.", "title": "" }, { "docid": "c9431b5a214dba08ca50706a27b2af7c", "text": "For artificial general intelligence (AGI) it would be efficient if multiple users trained the same giant neural network, permitting parameter reuse, without catastrophic forgetting. PathNet is a first step in this direction. It is a neural network algorithm that uses agents embedded in the neural network whose task is to discover which parts of the network to re-use for new tasks. Agents are pathways (views) through the network which determine the subset of parameters that are used and updated by the forwards and backwards passes of the backpropogation algorithm. During learning, a tournament selection genetic algorithm is used to select pathways through the neural network for replication and mutation. Pathway fitness is the performance of that pathway measured according to a cost function. We demonstrate successful transfer learning; fixing the parameters along a path learned on task A and re-evolving a new population of paths for task B, allows task B to be learned faster than it could be learned from scratch or after fine-tuning. Paths evolved on task B re-use parts of the optimal path evolved on task A. Positive transfer was demonstrated for binary MNIST, CIFAR, and SVHN supervised learning classification tasks, and a set of Atari and Labyrinth reinforcement learning tasks, suggesting PathNets have general applicability for neural network training. Finally, PathNet also significantly improves the robustness to hyperparameter choices of a parallel asynchronous reinforcement learning algorithm (A3C).", "title": "" }, { "docid": "f83481aef8fc3f61a6ecbe3548c9bde2", "text": "Establishing unique identities for both humans and end systems has been an active research problem in the security community, giving rise to innovative machine learning-based authentication techniques. Although such techniques offer an automated method to establish identity, they have not been vetted against sophisticated attacks that target their core machine learning technique. This paper demonstrates that mimicking the unique signatures generated by host fingerprinting and biometric authentication systems is possible. We expose the ineffectiveness of underlying machine learning classification models by constructing a blind attack based around the query synthesis framework and utilizing Explainable–AI (XAI) techniques. We launch an attack in under 130 queries on a state-of-the-art face authentication system, and under 100 queries on a host authentication system. We examine how these attacks can be defended against and explore their limitations. XAI provides an effective means for adversaries to infer decision boundaries and provides a new way forward in constructing attacks against systems using machine learning models for authentication.", "title": "" }, { "docid": "e99343a0ab1eb9007df4610ae35dec97", "text": "Who did what to whom is a major focus in natural language understanding, which is right the aim of semantic role labeling (SRL). Although SRL is naturally essential to text comprehension tasks, it is surprisingly ignored in previous work. This paper thus makes the first attempt to let SRL enhance text comprehension and inference through specifying verbal arguments and their corresponding semantic roles. 
In terms of deep learning models, our embeddings are enhanced by semantic role labels for more fine-grained semantics. We show that the salient labels can be conveniently added to existing models and significantly improve deep learning models in challenging text comprehension tasks. Extensive experiments on benchmark machine reading comprehension and inference datasets verify that the proposed semantic learning helps our system reach new state-of-the-art.", "title": "" }, { "docid": "8856fa1c0650970da31fae67cd8dcd86", "text": "In this paper, a new topology for rectangular waveguide bandpass and low-pass filters is presented. A simple, accurate, and robust design technique for these novel meandered waveguide filters is provided. The proposed filters employ a concatenation of ±90° E-plane mitered bends (±90° EMBs) with different heights and lengths, whose dimensions are consecutively and independently calculated. Each ±90° EMB satisfies a local target reflection coefficient along the device so that they can be calculated separately. The novel structures allow drastically reducing the total length of the filters and embedding bends if desired, or even providing routing capabilities. Furthermore, the new meandered topology allows the introduction of transmission zeros above the passband of the low-pass filter, which can be controlled by the free parameters of the ±90° EMBs. A bandpass and a low-pass filter with meandered topology have been designed following the proposed novel technique. Measurements of the manufactured prototypes are also included to validate the novel topology and design technique, achieving excellent agreement with the simulation results.", "title": "" }, { "docid": "f327ed315be7d47b9f63dd9498999ae4", "text": "In this paper we propose a deep architecture for detecting people attributes (e.g. gender, race, clothing …) in surveillance contexts. Our proposal explicitly deals with poor resolution and occlusion issues that often occur in surveillance footage by enhancing the images by means of Deep Convolutional Generative Adversarial Networks (DCGAN). Experiments show that by combining both our Generative Reconstruction and Deep Attribute Classification Network we can effectively extract attributes even when resolution is poor and in presence of strong occlusions up to 80% of the whole person figure.", "title": "" }, { "docid": "19a1a5d69037f0072f67c785031b0881", "text": "In recent years, advances in the design of convolutional neural networks have resulted in significant improvements on the image classification and object detection problems. One of the advances is networks built by stacking complex cells, seen in such networks as InceptionNet and NasNet. These cells are either constructed by hand, generated by generative networks or discovered by search. Unlike conventional networks (where layers consist of a convolution block, sampling and non-linear unit), the new cells feature more complex designs consisting of several filters and other operators connected in series and parallel. Recently, several cells have been proposed or generated that are supersets of previously proposed custom or generated cells. Influenced by this, we introduce a network construction method based on EnvelopeNets. An EnvelopeNet is a deep convolutional neural network of stacked EnvelopeCells. EnvelopeCells are supersets (or envelopes) of previously proposed handcrafted and generated cells. We propose a method to construct improved network architectures by restructuring EnvelopeNets. 
The algorithm restructures an EnvelopeNet by rearranging blocks in the network. It identifies blocks to be restructured using metrics derived from the featuremaps collected during a partial training run of the EnvelopeNet. The method requires less computation resources to generate an architecture than an optimized architecture search over the entire search space of blocks. The restructured networks have higher accuracy on the image classification problem on a representative dataset than both the generating EnvelopeNet and an equivalent arbitrary network.", "title": "" }, { "docid": "c8f3b235811dd64b9b1d35d596ff22f5", "text": "Open domain response generation has achieved remarkable progress in recent years, but sometimes yields short and uninformative responses. We propose a new paradigm, prototype-then-edit for response generation, that first retrieves a prototype response from a pre-defined index and then edits the prototype response according to the differences between the prototype context and current context. Our motivation is that the retrieved prototype provides a good start-point for generation because it is grammatical and informative, and the post-editing process further improves the relevance and coherence of the prototype. In practice, we design a context-aware editing model that is built upon an encoder-decoder framework augmented with an editing vector. We first generate an edit vector by considering lexical differences between a prototype context and current context. After that, the edit vector and the prototype response representation are fed to a decoder to generate a new response. Experiment results on a large scale dataset demonstrate that our new paradigm significantly increases the relevance, diversity and originality of generation results, compared to traditional generative models. Furthermore, our model outperforms retrieval-based methods in terms of relevance and originality.", "title": "" }, { "docid": "7709df997c72026406d257c85dacb271", "text": "This paper addresses the task of document retrieval based on the degree of document relatedness to the meanings of a query by presenting a semantic-enabled language model. Our model relies on the use of semantic linking systems for forming a graph representation of documents and queries, where nodes represent concepts extracted from documents and edges represent semantic relatedness between concepts. Based on this graph, our model adopts a probabilistic reasoning model for calculating the conditional probability of a query concept given values assigned to document concepts. We present an integration framework for interpolating other retrieval systems with the presented model in this paper. Our empirical experiments on a number of TREC collections show that the semantic retrieval has a synergetic impact on the results obtained through state of the art keyword-based approaches, and the consideration of semantic information obtained from entity linking on queries and documents can complement and enhance the performance of other retrieval models.", "title": "" }, { "docid": "2a443df82f61b198ceca472a7a080361", "text": "Despite rapid technological advances in computer hardware and software, insecure behavior by individual computer users continues to be a significant source of direct cost and productivity loss. Why do individuals, many of whom are aware of the possible grave consequences of low-level insecure behaviors such as failure to backup work and disclosing passwords, continue to engage in unsafe computing practices? 
In this article we propose a conceptual model of this behavior as the outcome of a boundedly-rational choice process. We explore this model in a survey of undergraduate students (N = 167) at two large public universities. We asked about the frequency with which they engaged in five commonplace but unsafe computing practices, and probed their decision processes with regard to these practices. Although our respondents saw themselves as knowledgeable, competent users, and were broadly aware that serious consequences were quite likely to result, they reported frequent unsafe computing behaviors. We discuss the implications of these findings both for further research on risky computing practices and for training and enforcement policies that will be needed in the organizations these students will shortly be entering.", "title": "" }, { "docid": "265b352775956004436b438574ee2d91", "text": "In the fashion industry, demand forecasting is particularly complex: companies operate with a large variety of short lifecycle products, deeply influenced by seasonal sales, promotional events, weather conditions, advertising and marketing campaigns, on top of festivities and socio-economic factors. At the same time, shelf-out-of-stock phenomena must be avoided at all costs. Given the strong seasonal nature of the products that characterize the fashion sector, this paper aims to highlight how the Fourier method can represent an easy and more effective forecasting method compared to other widespread heuristics normally used. For this purpose, a comparison between the fast Fourier transform algorithm and another two techniques based on moving average and exponential smoothing was carried out on a set of 4year historical sales data of a €60+ million turnover mediumto large-sized Italian fashion company, which operates in the women’s textiles apparel and clothing sectors. The entire analysis was performed on a common spreadsheet, in order to demonstrate that accurate results exploiting advanced numerical computation techniques can be carried out without necessarily using expensive software.", "title": "" }, { "docid": "903dc946b338c178634fcf9f14e1b1eb", "text": "Detecting system anomalies is an important problem in many fields such as security, fault management, and industrial optimization. Recently, invariant network has shown to be powerful in characterizing complex system behaviours. In the invariant network, a node represents a system component and an edge indicates a stable, significant interaction between two components. Structures and evolutions of the invariance network, in particular the vanishing correlations, can shed important light on locating causal anomalies and performing diagnosis. However, existing approaches to detect causal anomalies with the invariant network often use the percentage of vanishing correlations to rank possible casual components, which have several limitations: (1) fault propagation in the network is ignored, (2) the root casual anomalies may not always be the nodes with a high percentage of vanishing correlations, (3) temporal patterns of vanishing correlations are not exploited for robust detection, and (4) prior knowledge on anomalous nodes are not exploited for (semi-)supervised detection. To address these limitations, in this article we propose a network diffusion based framework to identify significant causal anomalies and rank them. 
Our approach can effectively model fault propagation over the entire invariant network and can perform joint inference on both the structural and the time-evolving broken invariance patterns. As a result, it can locate high-confidence anomalies that are truly responsible for the vanishing correlations and can compensate for unstructured measurement noise in the system. Moreover, when the prior knowledge on the anomalous status of some nodes are available at certain time points, our approach is able to leverage them to further enhance the anomaly inference accuracy. When the prior knowledge is noisy, our approach also automatically learns reliable information and reduces impacts from noises. By performing extensive experiments on synthetic datasets, bank information system datasets, and coal plant cyber-physical system datasets, we demonstrate the effectiveness of our approach.", "title": "" }, { "docid": "d59e64c1865193db3aaecc202f688690", "text": "Event-related desynchronization/synchronization patterns during right/left motor imagery (MI) are effective features for an electroencephalogram-based brain-computer interface (BCI). As MI tasks are subject-specific, selection of subject-specific discriminative frequency components play a vital role in distinguishing these patterns. This paper proposes a new discriminative filter bank (FB) common spatial pattern algorithm to extract subject-specific FB for MI classification. The proposed method enhances the classification accuracy in BCI competition III dataset IVa and competition IV dataset IIb. Compared to the performance offered by the existing FB-based method, the proposed algorithm offers error rate reductions of 17.42% and 8.9% for BCI competition datasets III and IV, respectively.", "title": "" }, { "docid": "70574bc8ad9fece3328ca77f17eec90f", "text": "Five different proposed measures of similarity or semantic distance in WordNet were experimentally compared by examining their performance in a real-word spelling correction system. It was found that Jiang and Conrath’s measure gave the best results overall. That of Hirst and St-Onge seriously over-related, that of Resnik seriously under-related, and those of Lin and of Leacock and Chodorow fell in between.", "title": "" }, { "docid": "7b13637b634b11b3061f7ebe0c64b3a6", "text": "Analytical calculation methods for all the major components of the synchronous inductance of tooth-coil permanent-magnet synchronous machines are reevaluated in this paper. The inductance estimation is different in the tooth-coil machine compared with the one in the traditional rotating field winding machine. The accuracy of the analytical torque calculation highly depends on the estimated synchronous inductance. Despite powerful finite element method (FEM) tools, an accurate and fast analytical method is required at an early design stage to find an initial machine design structure with the desired performance. The results of the analytical inductance calculation are verified and assessed in terms of accuracy with the FEM simulation results and with the prototype measurement results.", "title": "" }, { "docid": "f1681e1c8eef93f15adb5a4d7313c94c", "text": "The paper investigates techniques for extracting data from HTML sites through the use of automatically generated wrappers. To automate the wrapper generation and the data extraction process, the paper develops a novel technique to compare HTML pages and generate a wrapper based on their similarities and differences. 
Experimental results on real-life data-intensive Web sites confirm the feasibility of the approach.", "title": "" }, { "docid": "529ee26c337908488a5912835cc966c3", "text": "Nucleic acids have emerged as powerful biological and nanotechnological tools. In biological and nanotechnological experiments, methods of extracting and purifying nucleic acids from various types of cells and their storage are critical for obtaining reproducible experimental results. In nanotechnological experiments, methods for regulating the conformational polymorphism of nucleic acids and increasing sequence selectivity for base pairing of nucleic acids are important for developing nucleic acid-based nanomaterials. However, dearth of media that foster favourable behaviour of nucleic acids has been a bottleneck for promoting the biology and nanotechnology using the nucleic acids. Ionic liquids (ILs) are solvents that may be potentially used for controlling the properties of the nucleic acids. Here, we review researches regarding the behaviour of nucleic acids in ILs. The efficiency of extraction and purification of nucleic acids from biological samples is increased by IL addition. Moreover, nucleic acids in ILs show long-term stability, which maintains their structures and enhances nuclease resistance. Nucleic acids in ILs can be used directly in polymerase chain reaction and gene expression analysis with high efficiency. Moreover, the stabilities of the nucleic acids for duplex, triplex, and quadruplex (G-quadruplex and i-motif) structures change drastically with IL cation-nucleic acid interactions. Highly sensitive DNA sensors have been developed based on the unique changes in the stability of nucleic acids in ILs. The behaviours of nucleic acids in ILs detailed here should be useful in the design of nucleic acids to use as biological and nanotechnological tools.", "title": "" } ]
scidocsrr
872e08afa64afcdb8a0268c4fe1bc9ac
Byzantine Chain Replication
[ { "docid": "e2b74db574db8001dace37cbecb8c4eb", "text": "Distributed key-value stores are now a standard component of high-performance web services and cloud computing applications. While key-value stores offer significant performance and scalability advantages compared to traditional databases, they achieve these properties through a restricted API that limits object retrieval---an object can only be retrieved by the (primary and only) key under which it was inserted. This paper presents HyperDex, a novel distributed key-value store that provides a unique search primitive that enables queries on secondary attributes. The key insight behind HyperDex is the concept of hyperspace hashing in which objects with multiple attributes are mapped into a multidimensional hyperspace. This mapping leads to efficient implementations not only for retrieval by primary key, but also for partially-specified secondary attribute searches and range queries. A novel chaining protocol enables the system to achieve strong consistency, maintain availability and guarantee fault tolerance. An evaluation of the full system shows that HyperDex is 12-13x faster than Cassandra and MongoDB for finding partially specified objects. Additionally, HyperDex achieves 2-4x higher throughput for get/put operations.", "title": "" } ]
[ { "docid": "434ec0510dc38ea2e7effabe8090d4ce", "text": "Purpose: Big data analytics (BDA) increasingly provide value to firms for robust decision making and solving business problems. The purpose of this paper is to explore information quality dynamics in big data environment linking business value, user satisfaction and firm performance. Design/methodology/approach: Drawing on the appraisal-emotional response-coping framework, the authors propose a theory on information quality dynamics that helps in achieving business value, user satisfaction and firm performance with big data strategy and implementation. Information quality from BDA is conceptualized as the antecedent to the emotional response (e.g. value and satisfaction) and coping (performance). Proposed information quality dynamics are tested using data collected from 302 business analysts across various organizations in France and the USA. Findings: The findings suggest that information quality in BDA reflects four significant dimensions: completeness, currency, format and accuracy. The overall information quality has significant, positive impact on firm performance which is mediated by business value (e.g. transactional, strategic and transformational) and user satisfaction. Research limitations/implications: On the one hand, this paper shows how to operationalize information quality, business value, satisfaction and firm performance in BDA using PLS-SEM. On the other hand, it proposes an REBUS-PLS algorithm to automatically detect three groups of users sharing the same behaviors when determining the information quality perceptions of BDA. Practical implications: The study offers a set of determinants for information quality and business value in BDA projects, in order to support managers in their decision to enhance user satisfaction and firm performance. Originality/value: The paper extends big data literature by offering an appraisal-emotional response-coping framework that is well fitted for information quality modeling on firm performance. The methodological novelty lies in embracing REBUS-PLS to handle unobserved heterogeneity in the sample. Disciplines Business Publication Details Fosso Wamba, S., Akter, S., Trinchera, L. & De Bourmont, M. (2018). Turning information quality into firm performance in the big data economy. Management Decision, Online First 1-28. This journal article is available at Research Online: https://ro.uow.edu.au/gsbpapers/536", "title": "" }, { "docid": "fb099587aea7f8090a4b8fd8fc2d72df", "text": "This paper provides a review of explanations, visualizations and interactive elements of user interfaces (UI) in music recommendation systems. We call these UI features “recommendation aids”. Explanations are elements of the interface that inform the user why a certain recommendation was made. We highlight six possible goals for explanations, resulting in overall satisfaction towards the system. We found that the most of the existing music recommenders of popular systems provide no explanations, or very limited ones. Since explanations are not independent of other UI elements in recommendation process, we consider how the other elements can be used to achieve the same goals. To this end, we evaluated several existing music recommenders. 
We wanted to discover which of the six goals (transparency, scrutability, effectiveness, persuasiveness, efficiency and trust) the different UI elements promote in the existing music recommenders, and how they could be measured in order to create a simple framework for evaluating recommender UIs. By using this framework designers of recommendation systems could promote users’ trust and overall satisfaction towards a recommender system thereby improving the user experience with the system.", "title": "" }, { "docid": "914daf0fd51e135d6d964ecbe89a5b29", "text": "Large-scale parallel programming environments and algorithms require efficient group-communication on computing systems with failing nodes. Existing reliable broadcast algorithms either cannot guarantee that all nodes are reached or are very expensive in terms of the number of messages and latency. This paper proposes Corrected-Gossip, a method that combines Monte Carlo style gossiping with a deterministic correction phase, to construct a Las Vegas style reliable broadcast that guarantees reaching all the nodes at low cost. We analyze the performance of this method both analytically and by simulations and show how it reduces the latency and network load compared to existing algorithms. Our method improves the latency by 20% and the network load by 53% compared to the fastest known algorithm on 4,096 nodes. We believe that the principle of corrected-gossip opens an avenue for many other reliable group communication operations.", "title": "" }, { "docid": "5ca1c503cba0db452d0e5969e678db97", "text": "Deep neural network models have recently achieved state-of-the-art performance gains in a variety of natural language processing (NLP) tasks (Young, Hazarika, Poria, & Cambria, 2017). However, these gains rely on the availability of large amounts of annotated examples, without which state-of-the-art performance is rarely achievable. This is especially inconvenient for the many NLP fields where annotated examples are scarce, such as medical text. To improve NLP models in this situation, we evaluate five improvements on named entity recognition (NER) tasks when only ten annotated examples are available: (1) layer-wise initialization with pre-trained weights, (2) hyperparameter tuning, (3) combining pre-training data, (4) custom word embeddings, and (5) optimizing out-of-vocabulary (OOV) words. Experimental results show that the F1 score of 69.3% achievable by state-of-the-art models can be improved to 78.87%.", "title": "" }, { "docid": "289005e2f4d666a606f7dfd9c8f7a1f4", "text": "In this paper we present the design of a fin-like dielectric elastomer actuator (DEA) that drives a miniature autonomous underwater vehicle (AUV). The fin-like actuator is modular and independent of the body of the AUV. All electronics required to run the actuator are inside the 100 mm long 3D-printed body, allowing for autonomous mobility of the AUV. The DEA is easy to manufacture, requires no pre-stretch of the elastomers, and is completely sealed for underwater operation. The output thrust force can be tuned by stacking multiple actuation layers and modifying the Young's modulus of the elastomers. The AUV is reconfigurable by a shift of its center of mass, such that both planar and vertical swimming can be demonstrated on a single vehicle. For the DEA we measured thrust force and swimming speed for various actuator designs ran at frequencies from 1 Hz to 5 Hz. For the AUV we demonstrated autonomous planar swimming and closed-loop vertical diving. 
The actuators capable of outputting the highest thrust forces can power the AUV to swim at speeds of up to 0.55 body lengths per second. The speed falls in the upper range of untethered swimming robots powered by soft actuators. Our tunable DEAs also demonstrate the potential to mimic the undulatory motions of fish fins.", "title": "" }, { "docid": "313a902049654e951860b9225dc5f4e8", "text": "Financial portfolio management is the process of constant redistribution of a fund into different financial products. This paper presents a financial-model-free Reinforcement Learning framework to provide a deep machine learning solution to the portfolio management problem. The framework consists of the Ensemble of Identical Independent Evaluators (EIIE) topology, a Portfolio-Vector Memory (PVM), an Online Stochastic Batch Learning (OSBL) scheme, and a fully exploiting and explicit reward function. This framework is realized in three instants in this work with a Convolutional Neural Network (CNN), a basic Recurrent Neural Network (RNN), and a Long Short-Term Memory (LSTM). They are, along with a number of recently reviewed or published portfolio-selection strategies, examined in three back-test experiments with a trading period of 30 minutes in a cryptocurrency market. Cryptocurrencies are electronic and decentralized alternatives to government-issued money, with Bitcoin as the best-known example of a cryptocurrency. All three instances of the framework monopolize the top three positions in all experiments, outdistancing other compared trading algorithms. Although with a high commission rate of 0.25% in the backtests, the framework is able to achieve at least 4-fold returns in 50 days.", "title": "" }, { "docid": "066b4130dbc9c36d244e5da88936dfc4", "text": "Real-time strategy (RTS) games have drawn great attention in the AI research community, for they offer a challenging and rich testbed for both machine learning and AI techniques. Due to their enormous state spaces and possible map configurations, learning good and generalizable representations for machine learning is crucial to build agents that can perform well in complex RTS games. In this paper we present a convolutional neural network approach to learn an evaluation function that focuses on learning general features that are independent of the map configuration or size. We first train and evaluate the network on a winner prediction task on a dataset collected with a small set of maps with a fixed size. Then we evaluate the network's generalizability to three sets of larger maps by using it as an evaluation function in the context of Monte Carlo Tree Search. Our results show that the presented architecture can successfully capture general and map-independent features applicable to more complex RTS situations.", "title": "" }, { "docid": "6fdd045448a1425ec1b9ac5d9bca9fa0", "text": "Fluorescence has been observed directly across the band gap of semiconducting carbon nanotubes. We obtained individual nanotubes, each encased in a cylindrical micelle, by ultrasonically agitating an aqueous dispersion of raw single-walled carbon nanotubes in sodium dodecyl sulfate and then centrifuging to remove tube bundles, ropes, and residual catalyst. Aggregation of nanotubes into bundles otherwise quenches the fluorescence through interactions with metallic tubes and substantially broadens the absorption spectra. 
At pH less than 5, the absorption and emission spectra of individual nanotubes show evidence of band gap-selective protonation of the side walls of the tube. This protonation is readily reversed by treatment with base or ultraviolet light.", "title": "" }, { "docid": "794ad922f93b85e2195b3c85665a8202", "text": "The paper shows how to create a probabilistic graph for WordNet. A node is created for every word and phrase in WordNet. An edge between two nodes is labeled with the probability that a user that is interested in the source concept will also be interested in the destination concept. For example, an edge with weight 0.3 between \"canine\" and \"dog\" indicates that there is a 30% probability that a user who searches for \"canine\" will be interested in results that contain the word \"dog\". We refer to the graph as probabilistic because we enforce the constraint that the sum of the weights of all the edges that go out of a node add up to one. Structural (e.g., the word \"canine\" is a hypernym (i.e., kind of) of the word \"dog\") and textual (e.g., the word \"canine\" appears in the textual definition of the word \"dog\") data from WordNet is used to create a Markov logic network, that is, a set of first order formulas with probabilities. The Markov logic network is then used to compute the weights of the edges in the probabilistic graph. We experimentally validate the quality of the data in the probabilistic graph on two independent benchmarks: Miller and Charles and WordSimilarity-353.", "title": "" }, { "docid": "c4f9b3c863323efd6eca0074c296addf", "text": "Lip reading, the ability to recognize text information from the movement of a speaker’s mouth, is a difficult and challenging task. Recently, the end-to-end model that maps a variable-length sequence of video frames to text performs poorly in real life situation where people unintentionally move the lips instead of speaking. The goal of this work is to improve the performance of lip reading task in real life. The model proposed in this article consists of two networks that are visual to audio feature network and audio feature to text network. Our experiments showed that the model proposed in this article can achieve 92.76% accuracy in lip reading task on the dataset that the unintentional lips movement was added.", "title": "" }, { "docid": "9b08be9d250822850fda92819774248e", "text": "In recent years, recommendation systems have been widely used in various commercial platforms to provide recommendations for users. Collaborative filtering algorithms are one of the main algorithms used in recommendation systems. Such algorithms are simple and efficient; however, the sparsity of the data and the scalability of the method limit the performance of these algorithms, and it is difficult to further improve the quality of the recommendation results. Therefore, a model combining a collaborative filtering recommendation algorithm with deep learning technology is proposed, therein consisting of two parts. First, the model uses a feature representation method based on a quadric polynomial regression model, which obtains the latent features more accurately by improving upon the traditional matrix factorization algorithm. Then, these latent features are regarded as the input data of the deep neural network model, which is the second part of the proposed model and is used to predict the rating scores. 
Finally, by comparing with other recommendation algorithms on three public datasets, it is verified that the recommendation performance can be effectively improved by our model.", "title": "" }, { "docid": "5bf3c1f19c368c1948db91bbd65da84b", "text": "As robot reasoning becomes more complex, debugging becomes increasingly hard based solely on observable behaviour, even for robot designers and technical specialists. Similarly, nonspecialist users find it hard to create useful mental models of robot reasoning solely from observed behaviour. The EPSRC Principles of Robotics mandate that our artefacts should be transparent, but what does this mean in practice, and how does transparency affect both trust and utility? We investigate this relationship in the literature and find it to be complex, particularly in non-industrial environments where transparency may have a wider range of effects on trust and utility depending on the application and purpose of the robot. We outline our programme of research to support our assertion that it is nevertheless possible to create transparent agents that are emotionally engaging despite having a transparent machine nature.", "title": "" }, { "docid": "2089349f4f1dae4d07dfec8481ba748e", "text": "A significant limitation of neural networks is that the representations they learn are usually incomprehensible to humans. We present a novel algorithm, Trepan, for extracting comprehensible, symbolic representations from trained neural networks. Our algorithm uses queries to induce a decision tree that approximates the concept represented by a given network. Our experiments demonstrate that Trepan is able to produce decision trees that maintain a high level of fidelity to their respective networks while being comprehensible and accurate. Unlike previous work in this area, our algorithm is general in its applicability and scales well to large networks and problems with high-dimensional input spaces.", "title": "" }, { "docid": "647ede4f066516a0343acef725e51d01", "text": "This work proposes a dual-polarized planar antenna; two post-wall slotted waveguide arrays with orthogonal 45° linearly-polarized waves interdigitally share the aperture on a single layer substrate. Uniform excitation of the two-dimensional slot array is confirmed by experiment in the 25 GHz band. The isolation between two slot arrays is also investigated in terms of the relative displacement along the radiation waveguide axis in the interdigital structure. The isolation is 33.0 dB when the relative shift of slot position between the two arrays is -0.5λg, while it is only 12.8 dB when there is no shift. The cross-polarization level in the far field is -25.2 dB for a -0.5λg shift, which is almost equal to that of the isolated single polarization array. It is degraded down to -9.6 dB when there is no shift.", "title": "" }, { "docid": "038637eebbf8474bf15dab1c9a81ed6d", "text": "As the surplus market of failure analysis equipment continues to grow, the cost of performing invasive IC analysis continues to diminish. Hardware vendors in high-security applications utilize security by obscurity to implement layers of protection on their devices. High-security applications must assume that the attacker is skillful, well-equipped and well-funded. Modern security ICs are designed to make readout of decrypted data and changes to security configuration of the device impossible. Countermeasures such as meshes and attack sensors thwart many state of the art attacks. 
Because of the perceived difficulty and lack of publicly known attacks, the IC backside has largely been ignored by the security community. However, the backside is currently the weakest link in modern ICs because no devices currently on the market are protected against fully-invasive attacks through the IC backside. Fully-invasive backside attacks circumvent all known countermeasures utilized by modern implementations. In this work, we demonstrate the first two practical fully-invasive attacks against the IC backside. Our first attack is fully-invasive backside microprobing. Using this attack we were able to capture decrypted data directly from the data bus of the target IC's CPU core. We also present a fully invasive backside circuit edit. With this attack we were able to set security and configuration fuses of the device to arbitrary values.", "title": "" }, { "docid": "0cce6366df945f079dbb0b90d79b790e", "text": "Fourier ptychographic microscopy (FPM) is a recently developed imaging modality that uses angularly varying illumination to extend a system's performance beyond the limit defined by its optical components. The FPM technique applies a novel phase-retrieval procedure to achieve resolution enhancement and complex image recovery. In this Letter, we compare FPM data to theoretical prediction and phase-shifting digital holography measurement to show that its acquired phase maps are quantitative and artifact-free. We additionally explore the relationship between the achievable spatial and optical thickness resolution offered by a reconstructed FPM phase image. We conclude by demonstrating enhanced visualization and the collection of otherwise unobservable sample information using FPM's quantitative phase.", "title": "" }, { "docid": "d4869ee3fbc997f865cc16e9e1200d0b", "text": "The potential of mathematical models is widely acknowledged for examining components and interactions of natural systems, estimating the changes and uncertainties on outcomes, and fostering communication between scientists with different backgrounds and between scientists, managers and the community. For favourable reception of models, a systematic accrual of a good knowledge base is crucial for both science and decision-making. As the roles of models grow in importance, there is an increase in the need for appropriate methods with which to test their quality and performance. For biophysical models, the heterogeneity of data and the range of factors influencing usefulness of their outputs often make it difficult for full analysis and assessment. As a result, modelling studies in the domain of natural sciences often lack elements of good modelling practice related to model validation, that is correspondence of models to its intended purpose. Here we review validation issues and methods currently available for assessing the quality of biophysical models. The review covers issues of validation purpose, the robustness of model results, data quality, model prediction and model complexity. The importance of assessing input data quality and interpretation of phenomena is also addressed. Details are then provided on the range of measures commonly used for validation. Requirements for a methodology for assessment during the entire model-cycle are synthesised. Examples are used from a variety of modelling studies which mainly include agronomic modelling, e.g. crop growth and development, climatic modelling, e.g. climate scenarios, and hydrological modelling, e.g. 
soil hydrology, but the principles are essentially applicable to any area. It is shown that conducting detailed validation requires multi-faceted knowledge, and poses substantial scientific and technical challenges. Special emphasis is placed on using combined multiple statistics to expand our horizons in validation whilst also tailoring the validation requirements to the specific objectives of the application.", "title": "" }, { "docid": "22160219ffa40e4e42f1519fe25ecb6a", "text": "We propose a new prior distribution for classical (non-hierarchical) logistic regression models, constructed by first scaling all nonbinary variables to have mean 0 and standard deviation 0.5, and then placing independent Student-t prior distributions on the coefficients. As a default choice, we recommend the Cauchy distribution with center 0 and scale 2.5, which in the simplest setting is a longer-tailed version of the distribution attained by assuming one-half additional success and one-half additional failure in a logistic regression. Cross-validation on a corpus of datasets shows the Cauchy class of prior distributions to outperform existing implementations of Gaussian and Laplace priors. We recommend this prior distribution as a default choice for routine applied use. It has the advantage of always giving answers, even when there is complete separation in logistic regression (a common problem, even when the sample size is large and the number of predictors is small) and also automatically applying more shrinkage to higher-order interactions. This can be useful in routine data analysis as well as in automated procedures such as chained equations for missing-data imputation. We implement a procedure to fit generalized linear models in R with the Student-t prior distribution by incorporating an approximate EM algorithm into the usual iteratively weighted least squares. We illustrate with several applications, including a series of logistic regressions predicting voting preferences, a small bioassay experiment, and an imputation model for a public health data set.", "title": "" }, { "docid": "0824992bb506ac7c8a631664bf608086", "text": "There are many image fusion methods that can be used to produce high-resolution multispectral images from a high-resolution panchromatic image and low-resolution multispectral images. Starting from the physical principle of image formation, this paper presents a comprehensive framework, the general image fusion (GIF) method, which makes it possible to categorize, compare, and evaluate the existing image fusion methods. Using the GIF method, it is shown that the pixel values of the high-resolution multispectral images are determined by the corresponding pixel values of the low-resolution panchromatic image, the approximation of the high-resolution panchromatic image at the low-resolution level. Many of the existing image fusion methods, including, but not limited to, intensity-hue-saturation, Brovey transform, principal component analysis, high-pass filtering, high-pass modulation, the à trous algorithm-based wavelet transform, and multiresolution analysis-based intensity modulation (MRAIM), are evaluated and found to be particular cases of the GIF method. The performance of each image fusion method is theoretically analyzed based on how the corresponding low-resolution panchromatic image is computed and how the modulation coefficients are set. 
An experiment based on IKONOS images shows that there is consistency between the theoretical analysis and the experimental results and that the MRAIM method synthesizes the images closest to those the corresponding multisensors would observe at the high-resolution level.", "title": "" }, { "docid": "5995a2775a6a10cf4f2bd74a2959935d", "text": "Artemisinin-based combination therapy is recommended to treat Plasmodium falciparum worldwide, but observations of longer artemisinin (ART) parasite clearance times (PCTs) in Southeast Asia are widely interpreted as a sign of potential ART resistance. In search of an in vitro correlate of in vivo PCT after ART treatment, a ring-stage survival assay (RSA) of 0–3 h parasites was developed and linked to polymorphisms in the Kelch propeller protein (K13). However, RSA remains a laborious process, involving heparin, Percoll gradient, and sorbitol treatments to obtain rings in the 0–3 h window. Here two alternative RSA protocols are presented and compared to the standard Percoll-based method, one highly stage-specific and one streamlined for laboratory application. For all protocols, P. falciparum cultures were synchronized with 5 % sorbitol treatment twice over two intra-erythrocytic cycles. For a filtration-based RSA, late-stage schizonts were passed through a 1.2 μm filter to isolate merozoites, which were incubated with uninfected erythrocytes for 45 min. The erythrocytes were then washed to remove lysis products and further incubated until 3 h post-filtration. Parasites were pulsed with either 0.1 % dimethyl sulfoxide (DMSO) or 700 nM dihydroartemisinin in 0.1 % DMSO for 6 h, washed twice in drug-free media, and incubated for 66–90 h, when survival was assessed by microscopy. For a sorbitol-only RSA, synchronized young (0–3 h) rings were treated with 5 % sorbitol once more prior to the assay and adjusted to 1 % parasitaemia. The drug pulse, incubation, and survival assessment were as described above. Ring-stage survival of P. falciparum parasites containing either the K13 C580 or C580Y polymorphism (associated with low and high RSA survival, respectively) were assessed by the described filtration and sorbitol-only methods and produced comparable results to the reported Percoll gradient RSA. Advantages of both new methods include: fewer reagents, decreased time investment, and fewer procedural steps, with enhanced stage-specificity conferred by the filtration method. Assessing P. falciparum ART sensitivity in vitro via RSA can be streamlined and accurately evaluated in the laboratory by filtration or sorbitol synchronization methods, thus increasing the accessibility of the assay to research groups.", "title": "" } ]
scidocsrr
64330d665d11d79b3ab1fa880ebde586
Liveness Detection Using Gaze Collinearity
[ { "docid": "2e3f05ee44b276b51c1b449e4a62af94", "text": "We make some simple extensions to the Active Shape Model of Cootes et al. [4], and use it to locate features in frontal views of upright faces. We show on independent test data that with the extensions the Active Shape Model compares favorably with more sophisticated methods. The extensions are (i) fitting more landmarks than are actually needed (ii) selectively using twoinstead of one-dimensional landmark templates (iii) adding noise to the training set (iv) relaxing the shape model where advantageous (v) trimming covariance matrices by setting most entries to zero, and (vi) stacking two Active Shape Models in series.", "title": "" }, { "docid": "fe33ff51ca55bf745bdcdf8ee02e2d36", "text": "A robust face detection technique along with mouth localization, processing every frame in real time (video rate), is presented. Moreover, it is exploited for motion analysis onsite to verify \"liveness\" as well as to achieve lip reading of digits. A methodological novelty is the suggested quantized angle features (\"quangles\") being designed for illumination invariance without the need for preprocessing (e.g., histogram equalization). This is achieved by using both the gradient direction and the double angle direction (the structure tensor angle), and by ignoring the magnitude of the gradient. Boosting techniques are applied in a quantized feature space. A major benefit is reduced processing time (i.e., that the training of effective cascaded classifiers is feasible in very short time, less than 1 h for data sets of order 104). Scale invariance is implemented through the use of an image scale pyramid. We propose \"liveness\" verification barriers as applications for which a significant amount of computation is avoided when estimating motion. Novel strategies to avert advanced spoofing attempts (e.g., replayed videos which include person utterances) are demonstrated. We present favorable results on face detection for the YALE face test set and competitive results for the CMU-MIT frontal face test set as well as on \"liveness\" verification barriers.", "title": "" }, { "docid": "b40129a15767189a7a595db89c066cf8", "text": "To increase reliability of face recognition system, the system must be able to distinguish real face from a copy of face such as a photograph. In this paper, we propose a fast and memory efficient method of live face detection for embedded face recognition system, based on the analysis of the movement of the eyes. We detect eyes in sequential input images and calculate variation of each eye region to determine whether the input face is a real face or not. Experimental results show that the proposed approach is competitive and promising for live face detection. Keywords—Liveness Detection, Eye detection, SQI.", "title": "" } ]
[ { "docid": "0a08814f1f5f5f489f756df9ad5be051", "text": "Jump height is a critical aspect of volleyball players' blocking and attacking performance. Although previous studies demonstrated that creatine monohydrate supplementation (CrMS) improves jumping performance, none have yet evaluated its effect among volleyball players with proficient jumping skills. We examined the effect of 4 wk of CrMS on 1 RM spike jump (SJ) and repeated block jump (BJ) performance among 12 elite males of the Sherbrooke University volleyball team. Using a parallel, randomized, double-blind protocol, participants were supplemented with a placebo or creatine solution for 28 d, at a dose of 20 g/d in days 1-4, 10 g/d on days 5-6, and 5 g/d on days 7-28. Pre- and postsupplementation, subjects performed the 1 RM SJ test, followed by the repeated BJ test (10 series of 10 BJs; 3 s interval between jumps; 2 min recovery between series). Due to injuries (N = 2) and outlier data (N = 2), results are reported for eight subjects. Following supplementation, both groups improved SJ and repeated BJ performance. The change in performance during the 1 RM SJ test and over the first two repeated BJ series was unclear between groups. For series 3-6 and 7-10, respectively, CrMS further improved repeated BJ performance by 2.8% (likely beneficial change) and 1.9% (possibly beneficial change), compared with the placebo. Percent repeated BJ decline in performance across the 10 series did not differ between groups pre- and postsupplementation. In conclusion, CrMS likely improved repeated BJ height capability without influencing the magnitude of muscular fatigue in these elite, university-level volleyball players.", "title": "" }, { "docid": "162ad6b8d48f5d6c76067d25b320a947", "text": "Image Understanding is fundamental to systems that need to extract contents and infer concepts from images. In this paper, we develop an architecture for understanding images, through which a system can recognize the content and the underlying concepts of an image and, reason and answer questions about both using a visual module, a reasoning module, and a commonsense knowledge base. In this architecture, visual data combines with background knowledge and; iterates through visual and reasoning modules to answer questions about an image or to generate a textual description of an image. We first provide motivations of such a Deep Image Understanding architecture and then, we describe the necessary components it should include. We also introduce our own preliminary implementation of this architecture and empirically show how this more generic implementation compares with a recent end-to-end Neural approach on specific applications. We address the knowledge-representation challenge in such an architecture by representing an image using a directed labeled graph (called Scene Description Graph). Our implementation uses generic visual recognition techniques and commonsense reasoning1 to extract such graphs from images. Our experiments show that the extracted graphs capture the syntactic and semantic content of an image with reasonable accuracy.", "title": "" }, { "docid": "9c98685d50238cebb1e23e00201f8c09", "text": "A frequently asked questions (FAQ) retrieval system improves the access to information by allowing users to pose natural language queries over an FAQ collection. 
From an information retrieval perspective, FAQ retrieval is a challenging task, mainly because of the lexical gap that exists between a query and an FAQ pair, both of which are typically very short. In this work, we explore the use of supervised learning to rank to improve the performance of domain-specific FAQ retrieval. While supervised learning-to-rank models have been shown to yield effective retrieval performance, they require costly human-labeled training data in the form of document relevance judgments or question paraphrases. We investigate how this labeling effort can be reduced using a labeling strategy geared toward the manual creation of query paraphrases rather than the more time-consuming relevance judgments. In particular, we investigate two such strategies, and test them by applying supervised ranking models to two domain-specific FAQ retrieval data sets, showcasing typical FAQ retrieval scenarios. Our experiments show that supervised ranking models can yield significant improvements in the precision-at-rank-5 measure compared to unsupervised baselines. Furthermore, we show that a supervised model trained using data labeled via a low-effort paraphrase-focused strategy has the same performance as that of the same model trained using fully labeled data, indicating that the strategy is effective at reducing the labeling effort while retaining the performance gains of the supervised approach. To encourage further research on FAQ retrieval we make our FAQ retrieval data set publicly available.", "title": "" }, { "docid": "3196b8017cfb9a8cbfef0e892c508d05", "text": "The nuclear envelope is a physical barrier that isolates the cellular DNA from the rest of the cell, thereby limiting pathogen invasion. The Human Immunodeficiency Virus (HIV) has a remarkable ability to enter the nucleus of non-dividing target cells such as lymphocytes, macrophages and dendritic cells. While this step is critical for replication of the virus, it remains one of the less understood aspects of HIV infection. Here, we review the viral and host factors that favor or inhibit HIV entry into the nucleus, including the viral capsid, integrase, the central viral DNA flap, and the host proteins CPSF6, TNPO3, Nucleoporins, SUN1, SUN2, Cyclophilin A and MX2. We review recent perspectives on the mechanism of action of these factors, and formulate fundamental questions that remain. Overall, these findings deepen our understanding of HIV nuclear import and strengthen the favorable position of nuclear HIV entry for antiviral targeting.", "title": "" }, { "docid": "faea285dfac31a520e23c0a3ee06cea6", "text": "Since 2006, Alberts and Dorofee have led MSCE with a focus on returning risk management to its original intent—supporting effective management decisions that lead to program success. They began rethinking the traditional approaches to risk management, which led to the development of SEI Mosaic, a suite of methodologies that approach managing risk from a systemic view across the life cycle and supply chain. Using a systemic risk management approach enables program managers to develop and implement strategic, high-leverage mitigation solutions that align with mission and objectives.", "title": "" }, { "docid": "d470122d50dbb118ae9f3068998f8e14", "text": "Tumor heterogeneity presents a challenge for inferring clonal evolution and driver gene identification. Here, we describe a method for analyzing the cancer genome at a single-cell nucleotide level. 
To perform our analyses, we first devised and validated a high-throughput whole-genome single-cell sequencing method using two lymphoblastoid cell line single cells. We then carried out whole-exome single-cell sequencing of 90 cells from a JAK2-negative myeloproliferative neoplasm patient. The sequencing data from 58 cells passed our quality control criteria, and these data indicated that this neoplasm represented a monoclonal evolution. We further identified essential thrombocythemia (ET)-related candidate mutations such as SESN2 and NTRK1, which may be involved in neoplasm progression. This pilot study allowed the initial characterization of the disease-related genetic architecture at the single-cell nucleotide level. Further, we established a single-cell sequencing method that opens the way for detailed analyses of a variety of tumor types, including those with high genetic complexity between patients.", "title": "" }, { "docid": "b09cacfb35cd02f6a5345c206347c6ae", "text": "Facebook, as one of the most popular social networking sites among college students, provides a platform for people to manage others' impressions of them. People tend to present themselves in a favorable way on their Facebook profile. This research examines the impact of using Facebook on people's perceptions of others' lives. It is argued that those with deeper involvement with Facebook will have different perceptions of others than those less involved due to two reasons. First, Facebook users tend to base judgment on examples easily recalled (the availability heuristic). Second, Facebook users tend to attribute the positive content presented on Facebook to others' personality, rather than situational factors (correspondence bias), especially for those they do not know personally. Questionnaires, including items measuring years of using Facebook, time spent on Facebook each week, number of people listed as their Facebook \"friends,\" and perceptions about others' lives, were completed by 425 undergraduate students taking classes across various academic disciplines at a state university in Utah. Surveys were collected during regular class period, except for two online classes where surveys were submitted online. The multivariate analysis indicated that those who have used Facebook longer agreed more that others were happier, and agreed less that life is fair, and those spending more time on Facebook each week agreed more that others were happier and had better lives. Furthermore, those that included more people whom they did not personally know as their Facebook \"friends\" agreed more that others had better lives.", "title": "" }, { "docid": "4a18861ce15cfae3eaa2519d2fdc98c8", "text": "This paper presents a deadlock prevention method used to solve the deadlock problem of flexible manufacturing systems (FMS). Petri nets have been successfully used as one of the most powerful tools for modeling of FMS. Their modeling power and a mathematical arsenal supporting the analysis of the modeled systems stimulate the increasing interest in Petri nets. Among the structural objects of Petri nets, siphons, with their excellent properties, are important in the analysis and control of deadlocks in Petri nets (PNs). A deadlock prevention method addressing the deadlocks caused by unmarked siphons is presented in this work, since Petri nets are an effective way to model, analyze, simulate and control deadlocks in FMS. 
The characterization of special structural elements in Petri nets, the so-called siphons, has been a major approach for the investigation of deadlock-freeness in FMS. Siphons are structures whose behaviour can be well controlled by adding a control place (called a monitor) for each uncontrolled siphon in the net, so that the system reaches a deadlock-free situation. Finally, the proposed method of modeling, simulation and control of FMS using Petri nets, with deadlock analysis of a production line with parallel processing, is demonstrated by a practical example using the Petri Net tool in MATLAB and is shown to be effective, although its computation is performed off-line.", "title": "" }, { "docid": "2a384fe57f79687cba8482cabfb4243b", "text": "The Semantic Web graph is growing at an incredible pace, enabling opportunities to discover new knowledge by interlinking and analyzing previously unconnected data sets. This confronts researchers with a conundrum: Whilst the data is available the programming models that facilitate scalability and the infrastructure to run various algorithms on the graph are missing. Some use MapReduce – a good solution for many problems. However, even some simple iterative graph algorithms do not map nicely to that programming model requiring programmers to shoehorn their problem to the MapReduce model. This paper presents the Signal/Collect programming model for synchronous and asynchronous graph algorithms. We demonstrate that this abstraction can capture the essence of many algorithms on graphs in a concise and elegant way by giving Signal/Collect adaptations of various relevant algorithms. Furthermore, we built and evaluated a prototype Signal/Collect framework that executes algorithms in our programming model. We empirically show that this prototype transparently scales and that guiding computations by scoring as well as asynchronicity can greatly improve the convergence of some example algorithms. We released the framework under the Apache License 2.0 (at http://www.ifi.uzh.ch/ddis/research/sc).", "title": "" }, { "docid": "6241cb482e386435be2e33caf8d94216", "text": "A fog radio access network (F-RAN) is studied, in which $K_T$ edge nodes (ENs) connected to a cloud server via orthogonal fronthaul links, serve $K_R$ users through a wireless Gaussian interference channel. Both the ENs and the users have finite-capacity cache memories, which are filled before the user demands are revealed. While a centralized placement phase is used for the ENs, which model static base stations, a decentralized placement is leveraged for the mobile users. An achievable transmission scheme is presented, which employs a combination of interference alignment, zero-forcing and interference cancellation techniques in the delivery phase, and the \textit{normalized delivery time} (NDT), which captures the worst-case latency, is analyzed.", "title": "" }, { "docid": "d994b23ea551f23215232c0771e7d6b3", "text": "It is said that there's nothing so practical as good theory. It may also be said that there's nothing so theoretically interesting as good practice. This is particularly true of efforts to relate constructivism as a theory of learning to the practice of instruction. Our goal in this paper is to provide a clear link between the theoretical principles of constructivism, the practice of instructional design, and the practice of teaching. 
We will begin with a basic characterization of constructivism identifying what we believe to be the central principles in learning and understanding. We will then identify and elaborate on eight instructional principles for the design of a constructivist learning environment. Finally, we will examine what we consider to be one of the best exemplars of a constructivist learning environment -Problem Based Learning as described by Barrows (1985, 1986, 1992).", "title": "" }, { "docid": "7b999aaaa1374499b910c3f7d0918484", "text": "Research in face recognition has largely been divided between those projects concerned with front-end image processing and those projects concerned with memory for familiar people. These perceptual and cognitive programmes of research have proceeded in parallel, with only limited mutual influence. In this paper we present a model of human face recognition which combines both a perceptual and a cognitive component. The perceptual front-end is based on principal components analysis of images, and the cognitive back-end is based on a simple interactive activation and competition architecture. We demonstrate that this model has a much wider predictive range than either perceptual or cognitive models alone, and we show that this type of combination is necessary in order to analyse some important effects in human face recognition. In sum, the model takes varying images of \"known\" faces and delivers information about these people.", "title": "" }, { "docid": "1503d2a235b2ce75516d18cdea42bbb5", "text": "Phosphatidylinositol-3,4,5-trisphosphate (PtdIns(3,4,5)P3 or PIP3) mediates signalling pathways as a second messenger in response to extracellular signals. Although primordial functions of phospholipids and RNAs have been hypothesized in the ‘RNA world’, physiological RNA–phospholipid interactions and their involvement in essential cellular processes have remained a mystery. We explicate the contribution of lipid-binding long non-coding RNAs (lncRNAs) in cancer cells. Among them, long intergenic non-coding RNA for kinase activation (LINK-A) directly interacts with the AKT pleckstrin homology domain and PIP3 at the single-nucleotide level, facilitating AKT–PIP3 interaction and consequent enzymatic activation. LINK-A-dependent AKT hyperactivation leads to tumorigenesis and resistance to AKT inhibitors. Genomic deletions of the LINK-A PIP3-binding motif dramatically sensitized breast cancer cells to AKT inhibitors. Furthermore, meta-analysis showed the correlation between LINK-A expression and incidence of a single nucleotide polymorphism (rs12095274: A > G), AKT phosphorylation status, and poor outcomes for breast and lung cancer patients. PIP3-binding lncRNA modulates AKT activation with broad clinical implications.", "title": "" }, { "docid": "e51fe12eecec4116a9a3b7f4c2281938", "text": "The use of wireless technologies in automation systems offers attractive benefits, but introduces a number of new technological challenges. The paper discusses these aspects for home and building automation applications. Relevant standards are surveyed. A wireless extension to KNX/EIB based on tunnelling over IEEE 802.15.4 is presented. The design emulates the properties of the KNX/EIB wired medium via wireless communication, allowing a seamless extension. 
Furthermore, it is geared towards zero-configuration and supports the easy integration of protocol security.", "title": "" }, { "docid": "e6b4097ead39f9b5144e2bd8551762ed", "text": "Thanks to advances in medical imaging technologies and numerical methods, patient-specific modelling is more and more used to improve diagnosis and to estimate the outcome of surgical interventions. It requires the extraction of the domain of interest from the medical scans of the patient, as well as the discretisation of this geometry. However, extracting smooth multi-material meshes that conform to the tissue boundaries described in the segmented image is still an active field of research. We propose to solve this issue by combining an implicit surface reconstruction method with a multi-region mesh extraction scheme. The surface reconstruction algorithm is based on multi-level partition of unity implicit surfaces, which we extended to the multi-material case. The mesh generation algorithm consists in a novel multi-domain version of the marching tetrahedra. It generates multi-region meshes as a set of triangular surface patches consistently joining each other at material junctions. This paper presents this original meshing strategy, starting from boundary points extraction from the segmented data to heterogeneous implicit surface definition, multi-region surface triangulation and mesh adaptation. Results indicate that the proposed approach produces smooth and high-quality triangular meshes with a reasonable geometric accuracy. Hence, the proposed method is well suited for subsequent volume mesh generation and finite element simulations.", "title": "" }, { "docid": "99880fca88bef760741f48166a51ca6f", "text": "This paper describes first results using the Unified Medical Language System (UMLS) for distantly supervised relation extraction. UMLS is a large knowledge base which contains information about millions of medical concepts and relations between them. Our approach is evaluated using existing relation extraction data sets that contain relations that are similar to some of those in UMLS.", "title": "" }, { "docid": "6893ce06d616d08cf0a9053dc9ea493d", "text": "Hope is the sum of goal thoughts as tapped by pathways and agency. Pathways reflect the perceived capability to produce goal routes; agency reflects the perception that one can initiate action along these pathways. Using trait and state hope scales, studies explored hope in college student athletes. In Study 1, male and female athletes were higher in trait hope than nonathletes; moreover, hope significantly predicted semester grade averages beyond cumulative grade point average and overall self-worth. In Study 2, with female cross-country athletes, trait hope predicted athletic outcomes; further, weekly state hope tended to predict athletic outcomes beyond dispositional hope, training, and self-esteem, confidence, and mood. In Study 3, with female track athletes, dispositional hope significantly predicted athletic outcomes beyond variance related to athletic abilities and affectivity; moreover, athletes had higher hope than nonathletes.", "title": "" }, { "docid": "103b784d7cc23663584486fa3ca396bb", "text": "A single, stationary topic model such as latent Dirichlet allocation is inappropriate for modeling corpora that span long time periods, as the popularity of topics is likely to change over time. 
A number of models that incorporate time have been proposed, but in general they either exhibit limited forms of temporal variation, or require computationally expensive inference methods. In this paper we propose non-parametric Topics over Time (npTOT), a model for time-varying topics that allows an unbounded number of topics and flexible distribution over the temporal variations in those topics’ popularity. We develop a collapsed Gibbs sampler for the proposed model and compare against existing models on synthetic and real document sets.", "title": "" }, { "docid": "dd3efa1bea58934793c7c6a6064e1330", "text": "This paper gives a broad overview of a complete framework for assessing the predictive uncertainty of scientific computing applications. The framework is complete in the sense that it treats both types of uncertainty (aleatory and epistemic) and incorporates uncertainty due to the form of the model and any numerical approximations used. Aleatory (or random) uncertainties in model inputs are treated using cumulative distribution functions, while epistemic (lack of knowledge) uncertainties are treated as intervals. Approaches for propagating both types of uncertainties through the model to the system response quantities of interest are discussed. Numerical approximation errors (due to discretization, iteration, and round off) are estimated using verification techniques, and the conversion of these errors into epistemic uncertainties is discussed. Model form uncertainties are quantified using model validation procedures, which include a comparison of model predictions to experimental data and then extrapolation of this uncertainty structure to points in the application domain where experimental data do not exist. Finally, methods for conveying the total predictive uncertainty to decision makers are presented.", "title": "" } ]
scidocsrr
43fda67994521863cf18d5b59f1c239d
Re-ranking Person Re-identification with k-Reciprocal Encoding
[ { "docid": "2bc30693be1c5855a9410fb453128054", "text": "Person re-identification is to match pedestrian images from disjoint camera views detected by pedestrian detectors. Challenges are presented in the form of complex variations of lightings, poses, viewpoints, blurring effects, image resolutions, camera settings, occlusions and background clutter across camera views. In addition, misalignment introduced by the pedestrian detector will affect most existing person re-identification methods that use manually cropped pedestrian images and assume perfect detection. In this paper, we propose a novel filter pairing neural network (FPNN) to jointly handle misalignment, photometric and geometric transforms, occlusions and background clutter. All the key components are jointly optimized to maximize the strength of each component when cooperating with others. In contrast to existing works that use handcrafted features, our method automatically learns features optimal for the re-identification task from data. The learned filter pairs encode photometric transforms. Its deep architecture makes it possible to model a mixture of complex photometric and geometric transforms. We build the largest benchmark re-id dataset with 13, 164 images of 1, 360 pedestrians. Unlike existing datasets, which only provide manually cropped pedestrian images, our dataset provides automatically detected bounding boxes for evaluation close to practical applications. Our neural network significantly outperforms state-of-the-art methods on this dataset.", "title": "" } ]
[ { "docid": "141c28bfbeb5e71dc68d20b6220794c3", "text": "The development of topical cosmetic anti-aging products is becoming increasingly sophisticated. This is demonstrated by the benefit agents selected and the scientific approaches used to identify them, treatment protocols that increasingly incorporate multi-product regimens, and the level of rigor in the clinical testing used to demonstrate efficacy. Consistent with these principles, a new cosmetic anti-aging regimen was recently developed. The key product ingredients were identified based on an understanding of the key mechanistic themes associated with aging at the genomic level coupled with appropriate in vitro testing. The products were designed to provide optimum benefits when used in combination in a regimen format. This cosmetic regimen was then tested for efficacy against the appearance of facial wrinkles in a 24-week clinical trial compared with 0.02% tretinoin, a recognized benchmark prescription treatment for facial wrinkling. The cosmetic regimen significantly improved wrinkle appearance after 8 weeks relative to tretinoin and was better tolerated. Wrinkle appearance benefits from the two treatments in cohorts of subjects who continued treatment through 24 weeks were also comparable.", "title": "" }, { "docid": "14ba02b92184c21cbbe2344313e09c23", "text": "Smart meters are at high risk to be an attack target or to be used as an attacking means of malicious users because they are placed at the closest location to users in the smart gridbased infrastructure. At present, Korea is proceeding with 'Smart Grid Advanced Metering Infrastructure (AMI) Construction Project', and has selected Device Language Message Specification/ COmpanion Specification for Energy Metering (DLMS/COSEM) protocol for the smart meter communication. However, the current situation is that the vulnerability analysis technique is still insufficient to be applied to DLMS/COSEM-based smart meters. Therefore, we propose a new fuzzing architecture for analyzing vulnerabilities which is applicable to actual DLMS/COSEM-based smart meter devices. In addition, this paper presents significant case studies for verifying proposed fuzzing architecture through conducting the vulnerability analysis of the experimental results from real DLMS/COSEM-based smart meter devices used in Korea SmartGrid Testbed.", "title": "" }, { "docid": "dc8d9a7da61aab907ee9def56dfbd795", "text": "The ability to detect change-points in a dynamic network or a time series of graphs is an increasingly important task in many applications of the emerging discipline of graph signal processing. This paper formulates change-point detection as a hypothesis testing problem in terms of a generative latent position model, focusing on the special case of the Stochastic Block Model time series. We analyze two classes of scan statistics, based on distinct underlying locality statistics presented in the literature. Our main contribution is the derivation of the limiting properties and power characteristics of the competing scan statistics. Performance is compared theoretically, on synthetic data, and empirically, on the Enron email corpus.", "title": "" }, { "docid": "446af0ad077943a77ac4a38fd84df900", "text": "We investigate the manufacturability of 20-nm double-gate and FinFET devices in integrated circuits by projecting process tolerances. Two important factors affecting the sensitivity of device electrical parameters to physical variations were quantitatively considered. 
The quantum effect was computed using the density gradient method and the sensitivity of threshold voltage to random dopant fluctuation was studied by Monte Carlo simulation. Our results show the 3σ value of VT variation caused by discrete impurity fluctuation can be greater than 100%. Thus, engineering the work function of gate materials and maintaining a nearly intrinsic channel is more desirable. Based on a design with an intrinsic channel and ideal gate work function, we analyzed the sensitivity of device electrical parameters to several important physical fluctuations such as the variations in gate length, body thickness, and gate dielectric thickness. We found that quantum effects have great impact on the performance of devices. As a result, the device electrical behavior is sensitive to small variations of body thickness. The effect dominates over the effects produced by other physical fluctuations. To achieve a relative variation of electrical parameters comparable to present practice in industry, we face a challenge of fin width control (less than 1 nm 3σ value of variation) for the 20-nm FinFET devices. The constraint of the gate length variation is about 10-15%. We estimate a tolerance of 1-2 Å 3σ value of oxide thickness variation and up to 30% front-back oxide thickness mismatch.", "title": "" }, { "docid": "41aa05455471ecd660599f4ec285ff29", "text": "The recent progress of human parsing techniques has been largely driven by the availability of rich data resources. In this work, we demonstrate some critical discrepancies between the current benchmark datasets and the real world human parsing scenarios. For instance, all the human parsing datasets only contain one person per image, while usually multiple persons appear simultaneously in a realistic scene. It is more practically demanded to simultaneously parse multiple persons, which presents a greater challenge to modern human parsing methods. Unfortunately, absence of relevant data resources severely impedes the development of multiple-human parsing methods. To facilitate future human parsing research, we introduce the Multiple-Human Parsing (MHP) dataset, which contains multiple persons in a real world scene per single image. The MHP dataset contains various numbers of persons (from 2 to 16) per image with 18 semantic classes for each parsing annotation. Persons appearing in the MHP images present sufficient variations in pose, occlusion and interaction. To tackle the multiple-human parsing problem, we also propose a novel Multiple-Human Parser (MH-Parser), which considers both the global context and local cues for each person in the parsing process. The model is demonstrated to outperform the naive “detect-and-parse” approach by a large margin, which will serve as a solid baseline and help drive the future research in real world human parsing.", "title": "" }, { "docid": "c215a497d39f4f95a9fc720debb14b05", "text": "Adding frequency reconfigurability to a compact metamaterial-inspired antenna is investigated. The antenna is a printed monopole with an incorporated slot and is fed by a coplanar waveguide (CPW) line. This antenna was originally inspired from the concept of negative-refractive-index metamaterial transmission lines and exhibits a dual-band behavior. By using a varactor diode, the lower band (narrowband) of the antenna, which is due to radiation from the incorporated slot, can be tuned over a broad frequency range, while the higher band (broadband) remains effectively constant.
A detailed equivalent circuit model is developed that predicts the frequency-tuning behavior for the lower band of the antenna. The circuit model shows the involvement of both CPW even and odd modes in the operation of the antenna. Experimental results show that, for a varactor diode capacitance approximately ranging from 0.1-0.7 pF, a tuning range of 1.6-2.23 GHz is achieved. The size of the antenna at the maximum frequency is 0.056 λ0 × 0.047 λ0 and the antenna is placed over a 0.237 λ0 × 0.111 λ0 CPW ground plane (λ0 being the wavelength in vacuum).", "title": "" }, { "docid": "d8d102c3d6ac7d937bb864c69b4d3cd9", "text": "Question Answering (QA) systems are becoming the inspiring model for the future of search engines. Although underlying datasets for QA systems have recently been promoted from unstructured datasets to structured datasets with highly semantic-enriched metadata, question answering systems still involve serious challenges and remain far from the desired expectations. In this paper, we raise the challenges for building a Question Answering (QA) system especially with the focus of employing structured data (i.e. knowledge graph). This paper provides an exhaustive insight into the known challenges so far. Thus, it helps researchers to easily spot open rooms for the future research agenda.", "title": "" }, { "docid": "c3a6a72c9d738656f356d67cd5ce6c47", "text": "Most doors are controlled by persons with the use of keys, security cards, passwords or patterns to open the door. The aim of this paper is to help users improve the door security of sensitive locations by using face detection and recognition. The face is a complex multidimensional structure and needs good computing techniques for detection and recognition. This paper is comprised mainly of three subsystems: namely face detection, face recognition and automatic door access control. Face detection is the process of detecting the region of the face in an image. The face is detected by using the Viola-Jones method and face recognition is implemented by using Principal Component Analysis (PCA). Face recognition based on PCA is generally referred to as the use of Eigenfaces. If a face is recognized, it is known, else it is unknown. The door will open automatically for the known person due to the command of the microcontroller. On the other hand, an alarm will ring for the unknown person. Since PCA reduces the dimensions of face images without losing important features, facial images for many persons can be stored in the database. Although many training images are used, computational efficiency cannot be decreased significantly. Therefore, face recognition using PCA can be more useful for door security systems than other face recognition schemes.", "title": "" }, { "docid": "78ce06926ea3b2012277755f0916fbb7", "text": "We present a review of the historical evolution of software engineering, intertwining it with the history of knowledge engineering because \"those who cannot remember the past are condemned to repeat it.\" This retrospective represents a further step forward to understanding the current state of both types of engineering; history also has positive experiences, some of which we would like to remember and to repeat. The two types of engineering had parallel and divergent evolutions, but followed a similar pattern. We also define a set of milestones that represent a convergence or divergence of the software development methodologies.
These milestones do not appear at the same time in software engineering and knowledge engineering, so lessons learned in one discipline can help in the evolution of the other one.", "title": "" }, { "docid": "d8e60dc8378fe39f698eede2b6687a0f", "text": "Today's complex software systems are neither secure nor reliable. The rudimentary software protection primitives provided by current hardware forces systems to run many distrusting software components (e.g., procedures, libraries, plugins, modules) in the same protection domain, or otherwise suffer degraded performance from address space switches.\n We present CODOMs (COde-centric memory DOMains), a novel architecture that can provide finer-grained isolation between software components with effectively zero run-time overhead, all at a fraction of the complexity of other approaches. An implementation of CODOMs in a cycle-accurate full-system x86 simulator demonstrates that with the right hardware support, finer-grained protection and run-time performance can peacefully coexist.", "title": "" }, { "docid": "dd211105651b376b40205eb16efe1c25", "text": "WBAN based medical-health technologies have great potential for continuous monitoring in ambulatory settings, early detection of abnormal conditions, and supervised rehabilitation. They can provide patients with increased confidence and a better quality of life, and promote healthy behavior and health awareness. Continuous monitoring with early detection likely has the potential to provide patients with an increased level of confidence, which in turn may improve quality of life. In addition, ambulatory monitoring will allow patients to engage in normal activities of daily life, rather than staying at home or close to specialized medical services. Last but not least, inclusion of continuous monitoring data into medical databases will allow integrated analysis of all data to optimize individualized care and provide knowledge discovery through integrated data mining. Indeed, with the current technological trend toward integration of processors and wireless interfaces, we will soon have coin-sized intelligent sensors. They will be applied as skin patches, seamlessly integrated into a personal monitoring system, and worn for extended periods of time.", "title": "" }, { "docid": "8b7cb051224008ba3e1bf91bac5e9d21", "text": "The Internet of things aspires to connect anyone with anything at any point of time at any place. Internet of Thing is generally made up of three-layer architecture. Namely Perception, Network and Application layers. A lot of security principles should be enabled at each layer for proper and efficient working of these applications. This paper represents the overview of Security principles, Security Threats and Security challenges at the application layer and its countermeasures to overcome those challenges. The Application layer plays an important role in all of the Internet of Thing applications. The most widely used application layer protocol is MQTT. The security threats for Application Layer Protocol MQTT is particularly selected and evaluated. Comparison is done between different Application layer protocols and security measures for those protocols. 
Due to the lack of common standards for IoT protocols, a lot of issues are considered while choosing the particular protocol.", "title": "" }, { "docid": "79465d290ab299b9d75e9fa617d30513", "text": "In this paper we describe computational experience in solving unconstrained quadratic zero-one problems using a branch and bound algorithm. The algorithm incorporates dynamic preprocessing techniques for forcing variables and heuristics to obtain good starting points. Computational results and comparisons with previous studies on several hundred test problems with dimensions up to 200 demonstrate the efficiency of our algorithm. In dieser Arbeit beschreiben wir rechnerische Erfahrungen bei der Lösung von unbeschränkten quadratischen Null-Eins-Problemen mit einem “Branch and Bound”-Algorithmus. Der Algorithmus erlaubt dynamische Vorbereitungs-Techniken zur Erzwingung ausgewählter Variablen und Heuristiken zur Wahl von guten Startpunkten. Resultate von Berechnungen und Vergleiche mit früheren Arbeiten mit mehreren hundert Testproblemen mit Dimensionen bis 200 zeigen die Effizienz unseres Algorithmus.", "title": "" }, { "docid": "b27b164a7ff43b8f360167e5f886f18a", "text": "Segmentation and grouping of image elements is required to proceed with image recognition. Due to the fact that the images are two dimensional (2D) representations of the real three dimensional (3D) scenes, the information of the third dimension, like geometrical relations between the objects that are important for reasonable segmentation and grouping, are lost in 2D image representations. Computer stereo vision implies on understanding information stored in 3D-scene. Techniques for stereo computation are observed in this paper. The methods for solving the correspondence problem in stereo image matching are presented. The process of 3D-scene reconstruction from stereo image pairs and extraction of parameters important for image understanding are described. Occluded and surrounding areas in stereo image pairs are stressed out as important for image understanding.", "title": "" }, { "docid": "4cc71db87682a96ddee09e49a861142f", "text": "BACKGROUND\nReadiness is an integral and preliminary step in the successful implementation of telehealth services into existing health systems within rural communities.\n\n\nMETHODS AND MATERIALS\nThis paper details and critiques published international peer-reviewed studies that have focused on assessing telehealth readiness for rural and remote health. Background specific to readiness and change theories is provided, followed by a critique of identified telehealth readiness models, including a commentary on their readiness assessment tools.\n\n\nRESULTS\nFour current readiness models resulted from the search process. The four models varied across settings, such as rural outpatient practices, hospice programs, rural communities, as well as government agencies, national associations, and organizations. All models provided frameworks for readiness tools. Two specifically provided a mechanism by which communities could be categorized by their level of telehealth readiness.\n\n\nDISCUSSION\nCommon themes across models included: an appreciation of practice context, strong leadership, and a perceived need to improve practice. Broad dissemination of these telehealth readiness models and tools is necessary to promote awareness and assessment of readiness. 
This will significantly aid organizations to facilitate the implementation of telehealth.", "title": "" }, { "docid": "44fee78f33e4d5c6d9c8b0126b1d5830", "text": "This paper discusses an industrial case study in which data mining has been applied to solve a quality engineering problem in electronics assembly. During the assembly process, solder balls occur underneath some components of printed circuit boards. The goal is to identify the cause of solder defects in a circuit board using a data mining approach. Statistical process control and design of experiment approaches did not provide conclusive results. The paper discusses features considered in the study, data collected, and the data mining solution approach to identify causes of quality faults in an industrial application.", "title": "" }, { "docid": "9ba6a2042e99c3ace91f0fc017fa3fdd", "text": "This paper proposes a two-element multi-input multi-output (MIMO) open-slot antenna implemented on the display ground plane of a laptop computer for eight-band long-term evolution/wireless wide-area network operations. The metal surroundings of the antennas have been well integrated as a part of the radiation structure. In the single-element open-slot antenna, the nearby hinge slot (which is bounded by two ground planes and two hinges) is relatively large as compared with the open slot itself and acts as a good radiator. In the MIMO antenna consisting of two open-slot elements, a T slot is embedded in the display ground plane and is connected to the hinge slot. The T and hinge slots when connected behave as a radiator; whereas, the T slot itself functions as an isolation element. With the isolation element, simulated isolations between the two elements of the MIMO antenna are raised from 8.3–11.2 to 15–17.1 dB in 698–960 MHz and from 12.1–21 to 15.9–26.7 dB in 1710–2690 MHz. Measured isolations with the isolation element in the desired low- and high-frequency ranges are 17.6–18.8 and 15.2–23.5 dB, respectively. Measured and simulated efficiencies for the two-element MIMO antenna with either element excited are both larger than 50% in the desired operating frequency bands.", "title": "" }, { "docid": "ad3add7522b3a58359d36e624e9e65f7", "text": "In this paper, global and local prosodic features extracted from sentence, word and syllables are proposed for speech emotion or affect recognition. In this work, duration, pitch, and energy values are used to represent the prosodic information, for recognizing the emotions from speech. Global prosodic features represent the gross statistics such as mean, minimum, maximum, standard deviation, and slope of the prosodic contours. Local prosodic features represent the temporal dynamics in the prosody. In this work, global and local prosodic features are analyzed separately and in combination at different levels for the recognition of emotions. In this study, we have also explored the words and syllables at different positions (initial, middle, and final) separately, to analyze their contribution towards the recognition of emotions. In this paper, all the studies are carried out using simulated Telugu emotion speech corpus (IITKGP-SESC). These results are compared with the results of internationally known Berlin emotion speech corpus (Emo-DB). Support vector machines are used to develop the emotion recognition models. The results indicate that, the recognition performance using local prosodic features is better compared to the performance of global prosodic features. 
Words in the final position of the sentences, syllables in the final position of the words exhibit more emotion discriminative information compared to the words and syllables present in the other positions. K.S. Rao ( ) · S.G. Koolagudi · R.R. Vempada School of Information Technology, Indian Institute of Technology Kharagpur, Kharagpur 721302, West Bengal, India e-mail: ksrao@iitkgp.ac.in S.G. Koolagudi e-mail: koolagudi@yahoo.com R.R. Vempada e-mail: ramu.csc@gmail.com", "title": "" }, { "docid": "33ed6ab1eef74e6ba6649ff5a85ded6b", "text": "With the rapid increasing of smart phones and their embedded sensing technologies, mobile crowd sensing (MCS) becomes an emerging sensing paradigm for performing large-scale sensing tasks. One of the key challenges of large-scale mobile crowd sensing systems is how to effectively select the minimum set of participants from the huge user pool to perform the tasks and achieve certain level of coverage. In this paper, we introduce a new MCS architecture which leverages the cached sensing data to fulfill partial sensing tasks in order to reduce the size of selected participant set. We present a newly designed participant selection algorithm with caching and evaluate it via extensive simulations with a real-world mobile dataset.", "title": "" }, { "docid": "f13ffbb31eedcf46df1aaecfbdf61be9", "text": "Finding one's way in a large-scale environment may engage different cognitive processes than following a familiar route. The neural bases of these processes were investigated using functional MRI (fMRI). Subjects found their way in one virtual-reality town and followed a well-learned route in another. In a control condition, subjects followed a visible trail. Within subjects, accurate wayfinding activated the right posterior hippocampus. Between-subjects correlations with performance showed that good navigators (i.e., accurate wayfinders) activated the anterior hippocampus during wayfinding and head of caudate during route following. These results coincide with neurophysiological evidence for distinct response (caudate) and place (hippocampal) representations supporting navigation. We argue that the type of representation used influences both performance and concomitant fMRI activation patterns.", "title": "" } ]
scidocsrr
10debb17e51145a4ff0adf56e6609281
A new sentence similarity measure and sentence based extractive technique for automatic text summarization
[ { "docid": "639bbe7b640c514ab405601c7c3cfa01", "text": "Measuring the semantic similarity between words is an important component in various tasks on the web such as relation extraction, community mining, document clustering, and automatic metadata extraction. Despite the usefulness of semantic similarity measures in these applications, accurately measuring semantic similarity between two words (or entities) remains a challenging task. We propose an empirical method to estimate semantic similarity using page counts and text snippets retrieved from a web search engine for two words. Specifically, we define various word co-occurrence measures using page counts and integrate those with lexical patterns extracted from text snippets. To identify the numerous semantic relations that exist between two given words, we propose a novel pattern extraction algorithm and a pattern clustering algorithm. The optimal combination of page counts-based co-occurrence measures and lexical pattern clusters is learned using support vector machines. The proposed method outperforms various baselines and previously proposed web-based semantic similarity measures on three benchmark data sets showing a high correlation with human ratings. Moreover, the proposed method significantly improves the accuracy in a community mining task.", "title": "" }, { "docid": "91c024a832bfc07bc00b7086bcf77add", "text": "Topic-focused multi-document summarization aims to produce a summary biased to a given topic or user profile. This paper presents a novel extractive approach based on manifold-ranking of sentences to this summarization task. The manifold-ranking process can naturally make full use of both the relationships among all the sentences in the documents and the relationships between the given topic and the sentences. The ranking score is obtained for each sentence in the manifold-ranking process to denote the biased information richness of the sentence. Then the greedy algorithm is employed to impose diversity penalty on each sentence. The summary is produced by choosing the sentences with both high biased information richness and high information novelty. Experiments on DUC2003 and DUC2005 are performed and the ROUGE evaluation results show that the proposed approach can significantly outperform existing approaches of the top performing systems in DUC tasks and baseline approaches.", "title": "" } ]
[ { "docid": "c4183c8b08da8d502d84a650d804cac8", "text": "A three-phase current source gate turn-off (GTO) thyristor rectifier is described with a high power factor, low line current distortion, and a simple main circuit. It adopts pulse-width modulation (PWM) control techniques obtained by analyzing the PWM patterns of three-phase current source rectifiers/inverters, and it uses a method of generating such patterns. In addition, by using an optimum set-up of the circuit constants, the GTO switching frequency is reduced to 500 Hz. This rectifier is suitable for large power conversion, because it can reduce GTO switching loss and its snubber loss.<<ETX>>", "title": "" }, { "docid": "b9aaab241bab9c11ac38d6e9188b7680", "text": "Find loads of the research methods in the social sciences book catalogues in this site as the choice of you visiting this page. You can also join to the website book library that will show you numerous books from any types. Literature, science, politics, and many more catalogues are presented to offer you the best book to find. The book that really makes you feels satisfied. Or that's the book that will save you from your job deadline.", "title": "" }, { "docid": "ea4a1405e1c6444726d1854c7c56a30d", "text": "This paper presents a novel integrated approach for efficient optimization based online trajectory planning of topologically distinctive mobile robot trajectories. Online trajectory optimization deforms an initial coarse path generated by a global planner by minimizing objectives such as path length, transition time or control effort. Kinodynamic motion properties of mobile robots and clearance from obstacles impose additional equality and inequality constraints on the trajectory optimization. Local planners account for efficiency by restricting the search space to locally optimal solutions only. However, the objective function is usually non-convex as the presence of obstacles generates multiple distinctive local optima. The proposed method maintains and simultaneously optimizes a subset of admissible candidate trajectories of distinctive topologies and thus seeking the overall best candidate among the set of alternative local solutions. Time-optimal trajectories for differential-drive and carlike robots are obtained efficiently by adopting the Timed-Elastic-Band approach for the underlying trajectory optimization problem. The investigation of various example scenarios and a comparative analysis with conventional local planners confirm the advantages of integrated exploration, maintenance and optimization of topologically distinctive trajectories. ∗Corresponding author Email address: christoph.roesmann@tu-dortmund.de (Christoph Rösmann) Preprint submitted to Robotics and Autonomous Systems November 12, 2016", "title": "" }, { "docid": "e87a799822f1012f032cb66cd2925604", "text": "Curcumin, the yellow color pigment of turmeric, is produced industrially from turmeric oleoresin. The mother liquor after isolation of curcumin from oleoresin contains approximately 40% oil. The oil was extracted from the mother liquor using hexane at 60 degrees C, and the hexane extract was separated into three fractions using silica gel column chromatography. These fractions were tested for antibacterial activity by pour plate method against Bacillus cereus, Bacillus coagulans, Bacillus subtilis, Staphylococcus aureus, Escherichia coli, and Pseudomonas aeruginosa. Fraction II eluted with 5% ethyl acetate in hexane was found to be most active fraction. 
The turmeric oil, fraction I, and fraction II were analyzed by GC and GC-MS. ar-Turmerone, turmerone, and curlone were found to be the major compounds present in these fractions along with other oxygenated compounds.", "title": "" }, { "docid": "4654a1926d0caa787ade6aaf58e00474", "text": "GitHub is the most widely used social, distributed version control system. It has around 10 million registered users and hosts over 16 million public repositories. Its user base is also very active as GitHub ranks in the top 100 Alexa most popular websites. In this study, we collect GitHub's state in its entirety. Doing so allows us to study new aspects of the ecosystem. Although GitHub is the home to millions of users and repositories, the analysis of users' activity time-series reveals that only around 10% of them can be considered active. The collected dataset allows us to investigate the popularity of programming languages and the existence of patterns in the relations between users, repositories, and programming languages. By applying a k-means clustering method to the users-repositories commits matrix, we find that two clear clusters of programming languages separate from the remaining. One cluster forms for “web programming” languages (JavaScript, Ruby, PHP, CSS), and a second for “system oriented programming” languages (C, C++, Python). Further classification allows us to build a phylogenetic tree of the use of programming languages in GitHub. Additionally, we study the main and the auxiliary programming languages of the top 1000 repositories in more detail. We provide a ranking of these auxiliary programming languages using various metrics, such as percentage of lines of code, and PageRank.", "title": "" }, { "docid": "41481b2f081831d28ead1b685465d535", "text": "Triticum aestivum (Wheat grass juice) has high concentrations of chlorophyll, amino acids, minerals, vitamins, and enzymes. Fresh juice has been shown to possess anti-cancer activity, anti-ulcer activity, anti-inflammatory activity, antioxidant activity, anti-arthritic activity, and blood building activity in Thalassemia. It has been argued that wheat grass helps blood flow, digestion, and general detoxification of the body due to the presence of biologically active compounds and minerals in it and due to its antioxidant potential, which is derived from its high content of bioflavonoids such as apigenin, quercetin, and luteolin. Furthermore, it contains indole compounds, namely choline, which is known for its antioxidant activity and also possesses a chelating property useful for iron overload disorders. The presence of 70% chlorophyll, which is almost chemically identical to haemoglobin (the only difference is that the central element in chlorophyll is magnesium and in haemoglobin it is iron), makes wheat grass more useful in various clinical conditions involving haemoglobin deficiency and other chronic disorders, and it is ultimately considered as green blood.", "title": "" }, { "docid": "071ba3d1cec138011f398cae8589b77b", "text": "The term ‘vulnerability’ is used in many different ways by various scholarly communities. The resulting disagreement about the appropriate definition of vulnerability is a frequent cause for misunderstanding in interdisciplinary research on climate change and a challenge for attempts to develop formal models of vulnerability. Earlier attempts at reconciling the various conceptualizations of vulnerability were, at best, partly successful.
This paper presents a generally applicable conceptual framework of vulnerability that combines a nomenclature of vulnerable situations and a terminology of vulnerability concepts based on the distinction of four fundamental groups of vulnerability factors. This conceptual framework is applied to characterize the vulnerability concepts employed by the main schools of vulnerability research and to review earlier attempts at classifying vulnerability concepts. None of these one-dimensional classification schemes reflects the diversity of vulnerability concepts identified in this review. The wide range of policy responses available to address the risks from global climate change suggests that climate impact, vulnerability, and adaptation assessments will continue to apply a variety of vulnerability concepts. The framework presented here provides the much-needed conceptual clarity and facilitates bridging the various approaches to researching vulnerability to climate change.", "title": "" }, { "docid": "48889a388562e195eff17488f57ca1e0", "text": "To clarify the effects of changing shift schedules from a full-day to a half-day before a night shift, 12 single nurses and 18 married nurses with children who engaged in night shift work in a Japanese hospital were investigated. Subjects worked 2 different shift patterns consisting of a night shift after a half-day shift (HF-N) and a night shift after a day shift (D-N). Physical activity levels were recorded with a physical activity volume meter to measure sleep/wake time more precisely without restricting subjects' activities. The duration of sleep before a night shift of married nurses was significantly shorter than that of single nurses for both shift schedules. Changing shift from the D-N to the HF-N increased the duration of sleep before a night shift for both groups, and made wake-up time earlier for single nurses only. Repeated ANCOVA of the series of physical activities showed significant differences with shift (p < 0.01) and marriage (p < 0.01) for variances, and age (p < 0.05) for a covariance. The paired t-test to compare the effects of changing shift patterns in each subject group and ANCOVA for examining the hourly activity differences between single and married nurses showed that the effects of a change in shift schedules seemed to have less effect on married nurses than single nurses. These differences might be due to the differences in their family/home responsibilities.", "title": "" }, { "docid": "da8be182ac315342bead9df4b87c6bab", "text": "A new computational imaging technique, termed Fourier ptychographic microscopy (FPM), uses a sequence of low-resolution images captured under varied illumination to iteratively converge upon a high-resolution complex sample estimate. Here, we propose a mathematical model of FPM that explicitly connects its operation to conventional ptychography, a common procedure applied to electron and X-ray diffractive imaging. Our mathematical framework demonstrates that under ideal illumination conditions, conventional ptychography and FPM both produce datasets that are mathematically linked by a linear transformation. We hope this finding encourages the future cross-pollination of ideas between two otherwise unconnected experimental imaging procedures. In addition, the coherence state of the illumination source used by each imaging platform is critical to successful operation, yet currently not well understood.
We apply our mathematical framework to demonstrate that partial coherence uniquely alters both conventional ptychography's and FPM's captured data, but up to a certain threshold can still lead to accurate resolution-enhanced imaging through appropriate computational post-processing. We verify this theoretical finding through simulation and experiment.", "title": "" }, { "docid": "073486fe6bcd756af5f5325b27c57912", "text": "This paper describes the case of a unilateral agraphic patient (GG) who makes letter substitutions only when writing letters and words with his dominant left hand. Accuracy is significantly greater when he is writing with his right hand and when he is asked to spell words orally. GG also makes case errors when writing letters, and will sometimes write words in mixed case. However, these allograph errors occur regardless of which hand he is using to write. In terms of cognitive models of peripheral dysgraphia (e.g., Ellis, 1988), it appears that he has an allograph level impairment that affects writing with both hands, and a separate problem in accessing graphic motor patterns that disrupts writing with the left hand only. In previous studies of left-handed patients with unilateral agraphia (Zesiger & Mayer, 1992; Zesiger, Pegna, & Rilliet, 1994), it has been suggested that allographic knowledge used for writing with both hands is stored exclusively in the left hemisphere, but that graphic motor patterns are represented separately in each hemisphere. The pattern of performance demonstrated by GG strongly supports such a conclusion.", "title": "" }, { "docid": "a318e8755d2f2ba3c84543ba853c34fc", "text": "Multi-view learning can provide self-supervision when different views are available of the same data. The distributional hypothesis provides another form of useful self-supervision from adjacent sentences, which are plentiful in large unlabelled corpora. Motivated by the asymmetry in the two hemispheres of the human brain as well as the observation that different learning architectures tend to emphasise different aspects of sentence meaning, we present two multi-view frameworks for learning sentence representations in an unsupervised fashion. One framework uses a generative objective and the other a discriminative one. In both frameworks, the final representation is an ensemble of two views, in which one view encodes the input sentence with a Recurrent Neural Network (RNN), and the other view encodes it with a simple linear model. We show that, after learning, the vectors produced by our multi-view frameworks provide improved representations over their single-view learned counterparts, and the combination of different views gives representational improvement over each view and demonstrates solid transferability on standard downstream tasks.", "title": "" }, { "docid": "eb5c7c9fbe64cbfd4b6c7dd5490c17c1", "text": "Android packing services provide significant benefits in code protection by hiding original executable code, which helps app developers to protect their code against reverse engineering. However, adversaries take advantage of packers to hide their malicious code. A number of unpacking approaches have been proposed to defend against malicious packed apps. Unfortunately, most of the unpacking approaches work only for a limited time or for particular types of packers. The analysis for different packers often requires specific domain knowledge and a significant amount of manual effort.
In this paper, we conducted analyses of known Android packers appeared in recent years and propose to design an automatic detection and classification framework. The framework is capable of identifying packed apps, extracting the execution behavioral pattern of packers, and categorizing packed apps into groups. The variants of packer families share typical behavioral patterns reflecting their activities and packing techniques. The behavioral patterns obtained dynamically can be exploited to detect and classify unknown packers, which shed light on new directions for security researchers.", "title": "" }, { "docid": "71573bc8f5be1025837d5c72393b4fa6", "text": "This paper describes our initial work in developing a real-time audio-visual Chinese speech synthesizer with a 3D expressive avatar. The avatar model is parameterized according to the MPEG-4 facial animation standard [1]. This standard offers a compact set of facial animation parameters (FAPs) and feature points (FPs) to enable realization of 20 Chinese visemes and 7 facial expressions (i.e. 27 target facial configurations). The Xface [2] open source toolkit enables us to define the influence zone for each FP and the deformation function that relates them. Hence we can easily animate a large number of coordinates in the 3D model by specifying values for a small set of FAPs and their FPs. FAP values for 27 target facial configurations were estimated from available corpora. We extended the dominance blending approach to effect animations for coarticulated visemes superposed with expression changes. We selected six sentiment-carrying text messages and synthesized expressive visual speech (for all expressions, in randomized order) with neutral audio speech. A perceptual experiment involving 11 subjects shows that they can identify the facial expression that matches the text message’s sentiment 85% of the time.", "title": "" }, { "docid": "e92fd3ce5f90600f2fca84682c35c4e3", "text": "A software-defined radar is a versatile radar system, where most of the processing, like signal generation, filtering, up-and down conversion etc. is performed by a software. This paper presents a state of the art of software-defined radar technology. It describes the design concept of software-defined radars and the two possible implementations. A global assessment is presented, and the link with the Cognitive Radar is explained.", "title": "" }, { "docid": "ca1c193e5e5af821772a5d123e84b72a", "text": "Over the last few years, the phenomenon of adversarial examples — maliciously constructed inputs that fool trained machine learning models — has captured the attention of the research community, especially when the adversary is restricted to small modifications of a correctly handled input. Less surprisingly, image classifiers also lack human-level performance on randomly corrupted images, such as images with additive Gaussian noise. In this paper we provide both empirical and theoretical evidence that these are two manifestations of the same underlying phenomenon, establishing close connections between the adversarial robustness and corruption robustness research programs. This suggests that improving adversarial robustness should go hand in hand with improving performance in the presence of more general and realistic image corruptions. 
Based on our results we recommend that future adversarial defenses consider evaluating the robustness of their methods to distributional shift with benchmarks such as Imagenet-C.", "title": "" }, { "docid": "c6485365e8ce550ea8c507aa963a00c2", "text": "Consensus molecular subtypes and the evolution of precision medicine in colorectal cancer Rodrigo Dienstmann, Louis Vermeulen, Justin Guinney, Scott Kopetz, Sabine Tejpar and Josep Tabernero Nature Reviews Cancer 17, 79–92 (2017) In this article a source of grant funding for one of the authors was omitted from the Acknowledgements section. The online version of the article has been corrected to include: “The work of R.D. was supported by the Grant for Oncology Innovation under the project ‘Next generation of clinical trials with matched targeted therapies in colorectal cancer’”. C O R R E C T I O N", "title": "" }, { "docid": "1b923168160fcd643692d5473b828ce3", "text": "Interactive Evolutionary Computation (IEC) creates the intriguing possibility that a large variety of useful content can be produced quickly and easily for practical computer graphics and gaming applications. To show that IEC can produce such content, this paper applies IEC to particle system effects, which are the de facto method in computer graphics for generating fire, smoke, explosions, electricity, water, and many other special effects. While particle systems are capable of producing a broad array of effects, they require substantial mathematical and programming knowledge to produce. Therefore, efficient particle system generation tools are required for content developers to produce special effects in a timely manner. This paper details the design, representation, and animation of particle systems via two IEC tools called NEAT Particles and NEAT Projectiles. Both tools evolve artificial neural networks (ANN) with the NeuroEvolution of Augmenting Topologies (NEAT) method to control the behavior of particles. NEAT Particles evolves general-purpose particle effects, whereas NEAT Projectiles specializes in evolving particle weapon effects for video games. The primary advantage of this NEAT-based IEC approach is to decouple the creation of new effects from mathematics and programming, enabling content developers without programming knowledge to produce complex effects. Furthermore, it allows content designers to produce a broader range of effects than typical development tools. Finally, it acts as a concept generator, allowing content creators to interactively and efficiently explore the space of possible effects. Both NEAT Particles and NEAT Projectiles demonstrate how IEC can evolve useful content for graphical media and games, and are together a step toward the larger goal of automated content generation.", "title": "" }, { "docid": "d2ec8831779e7af4e82a10c617a2e9a1", "text": "In the new designs of military aircraft and unmanned aircraft there is a clear trend towards increasing demand of electrical power. This fact is mainly due to the replacement of mechanical, pneumatic and hydraulic equipments by partially or completely electrical systems. Generally, use of electrical power onboard is continuously increasing within the areas of communications, surveillance and general systems, such as: radar, cooling, landing gear or actuators systems. To cope with this growing demand for electric power, new levels of voltage (270 VDC), architectures and power electronics devices are being applied to the onboard electrical power distribution systems. 
The purpose of this paper is to present and describe the technological project HV270DC. In this project, one Electrical Power Distribution System (EPDS), applicable to the more electric aircrafts, has been developed. This system has been integrated by EADS in order to study the benefits and possible problems or risks that affect this kind of power distribution systems, in comparison with conventional distribution systems.", "title": "" }, { "docid": "a05b4878404f9127d576d90d6b241588", "text": "This paper presents an air-filled substrate integrated waveguide (AFSIW) filter post-process tuning technique. The emerging high-performance AFSIW technology is of high interest for the design of microwave and millimeter-wave substrate integrated systems based on low-cost multilayer printed circuit board (PCB) process. However, to comply with stringent specifications, especially for space, aeronautical and safety applications, a filter post-process tuning technique is desired. AFSIW single pole filter post-process tuning using a capacitive post is theoretically analyzed. It is demonstrated that a tuning of more than 3% of the resonant frequency is achieved at 21 GHz using a 0.3 mm radius post with a 40% insertion ratio. For experimental demonstration, a fourth-order AFSIW band pass filter operating in the 20.88 to 21.11 GHz band is designed and fabricated. Due to fabrication tolerances, it is shown that its performances are not in line with expected results. Using capacitive post tuning, characteristics are improved and agree with optimized results. This post-process tuning can be used for other types of substrate integrated devices.", "title": "" }, { "docid": "bf294a4c3af59162b2f401e2cdcb060b", "text": "We present MCTest, a freely available set of stories and associated questions intended for research on the machine comprehension of text. Previous work on machine comprehension (e.g., semantic modeling) has made great strides, but primarily focuses either on limited-domain datasets, or on solving a more restricted goal (e.g., open-domain relation extraction). In contrast, MCTest requires machines to answer multiple-choice reading comprehension questions about fictional stories, directly tackling the high-level goal of open-domain machine comprehension. Reading comprehension can test advanced abilities such as causal reasoning and understanding the world, yet, by being multiple-choice, still provide a clear metric. By being fictional, the answer typically can be found only in the story itself. The stories and questions are also carefully limited to those a young child would understand, reducing the world knowledge that is required for the task. We present the scalable crowd-sourcing methods that allow us to cheaply construct a dataset of 500 stories and 2000 questions. By screening workers (with grammar tests) and stories (with grading), we have ensured that the data is the same quality as another set that we manually edited, but at one tenth the editing cost. By being open-domain, yet carefully restricted, we hope MCTest will serve to encourage research and provide a clear metric for advancement on the machine comprehension of text. 1 Reading Comprehension A major goal for NLP is for machines to be able to understand text as well as people. Several research disciplines are focused on this problem: for example, information extraction, relation extraction, semantic role labeling, and recognizing textual entailment. 
Yet these techniques are necessarily evaluated individually, rather than by how much they advance us towards the end goal. On the other hand, the goal of semantic parsing is the machine comprehension of text (MCT), yet its evaluation requires adherence to a specific knowledge representation, and it is currently unclear what the best representation is, for open-domain text. We believe that it is useful to directly tackle the top-level task of MCT. For this, we need a way to measure progress. One common method for evaluating someone’s understanding of text is by giving them a multiple-choice reading comprehension test. This has the advantage that it is objectively gradable (vs. essays) yet may test a range of abilities such as causal or counterfactual reasoning, inference among relations, or just basic understanding of the world in which the passage is set. Therefore, we propose a multiple-choice reading comprehension task as a way to evaluate progress on MCT. We have built a reading comprehension dataset containing 500 fictional stories, with 4 multiple choice questions per story. It was built using methods which can easily scale to at least 5000 stories, since the stories were created, and the curation was done, using crowd sourcing almost entirely, at a total of $4.00 per story. We plan to periodically update the dataset to ensure that methods are not overfitting to the existing data. The dataset is open-domain, yet restricted to concepts and words that a 7 year old is expected to understand. This task is still beyond the capability of today’s computers and algorithms.", "title": "" } ]
scidocsrr
bef6b341dc12a62d9166bd111e7344e0
HOT BUTTONS AND TIME SINKS: THE EFFECTS OF ELECTRONIC COMMUNICATION DURING NONWORK TIME ON EMOTIONS AND WORK-NONWORK CONFLICT
[ { "docid": "eb4c25caba8c3e6f06d3cabe6c004cd5", "text": "The greater power of bad events over good ones is found in everyday events, major life events (e.g., trauma), close relationship outcomes, social network patterns, interpersonal interactions, and learning processes. Bad emotions, bad parents, and bad feedback have more impact than good ones, and bad information is processed more thoroughly than good. The self is more motivated to avoid bad self-definitions than to pursue good ones. Bad impressions and bad stereotypes are quicker to form and more resistant to disconfirmation than good ones. Various explanations such as diagnosticity and salience help explain some findings, but the greater power of bad events is still found when such variables are controlled. Hardly any exceptions (indicating greater power of good) can be found. Taken together, these findings suggest that bad is stronger than good, as a general principle across a broad range of psychological phenomena.", "title": "" } ]
[ { "docid": "59eaa9f4967abdc1c863f8fb256ae966", "text": "CONTEXT\nThe projected expansion in the next several decades of the elderly population at highest risk for Parkinson disease (PD) makes identification of factors that promote or prevent the disease an important goal.\n\n\nOBJECTIVE\nTo explore the association of coffee and dietary caffeine intake with risk of PD.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nData were analyzed from 30 years of follow-up of 8004 Japanese-American men (aged 45-68 years) enrolled in the prospective longitudinal Honolulu Heart Program between 1965 and 1968.\n\n\nMAIN OUTCOME MEASURE\nIncident PD, by amount of coffee intake (measured at study enrollment and 6-year follow-up) and by total dietary caffeine intake (measured at enrollment).\n\n\nRESULTS\nDuring follow-up, 102 men were identified as having PD. Age-adjusted incidence of PD declined consistently with increased amounts of coffee intake, from 10.4 per 10,000 person-years in men who drank no coffee to 1.9 per 10,000 person-years in men who drank at least 28 oz/d (P<.001 for trend). Similar relationships were observed with total caffeine intake (P<.001 for trend) and caffeine from non-coffee sources (P=.03 for trend). Consumption of increasing amounts of coffee was also associated with lower risk of PD in men who were never, past, and current smokers at baseline (P=.049, P=.22, and P=.02, respectively, for trend). Other nutrients in coffee, including niacin, were unrelated to PD incidence. The relationship between caffeine and PD was unaltered by intake of milk and sugar.\n\n\nCONCLUSIONS\nOur findings indicate that higher coffee and caffeine intake is associated with a significantly lower incidence of PD. This effect appears to be independent of smoking. The data suggest that the mechanism is related to caffeine intake and not to other nutrients contained in coffee. JAMA. 2000;283:2674-2679.", "title": "" }, { "docid": "152ef51d5264a2e681acefcc536da7cf", "text": "BACKGROUND AND PURPOSE\nHeart rate variability (HRV) as a measure of autonomic function might provide prognostic information in ischemic stroke. However, numerous difficulties are associated with HRV parameters assessment and interpretation, especially in short-term ECG recordings. For better understanding of derived HRV data and to avoid methodological bias we simultaneously recorded and analyzed heart rate, blood pressure and respiratory rate.\n\n\nMETHODS\nSeventy-five ischemic stroke patients underwent short-term ECG recordings. Linear and nonlinear parameters of HRV as well as beat-to-beat blood pressure and respiratory rate were assessed and compared in patients with different functional neurological outcomes at 7th and 90th days.\n\n\nRESULTS\nValues of Approximate, Sample and Fuzzy Entropy were significantly lower in patients with poor early neurological outcome. Patients with poor 90-day outcome had higher percentage of high frequency spectrum and normalized high frequency power, lower normalized low frequency power and lower low frequency/high frequency ratio. Low frequency/high frequency ratio correlated negatively with scores in the National Institutes of Health Stroke Scale and modified Rankin Scale (mRS) at the 7th and mRS at the 90th days. Mean RR interval, values of blood pressure as well as blood pressure variability did not differ between groups with good and poor outcomes. 
Respiratory frequency was significantly correlated with the functional neurological outcome at 7th and 90th days.\n\n\nCONCLUSION\nWhile HRV assessed by linear methods seems to have long-term prognostic value, complexity measures of HRV reflect the impact of the neurological state on distinct, temporary properties of heart rate dynamic. Respiratory rate during the first days of the stroke is associated with early and long-term neurological outcome and should be further investigated as a potential risk factor.", "title": "" }, { "docid": "bf9e56e0e125e922de95381fb5520569", "text": "Today, many private households as well as broadcasting or film companies own large collections of digital music plays. These are time series that differ from, e.g., weather reports or stocks market data. The task is normally that of classification, not prediction of the next value or recognizing a shape or motif. New methods for extracting features that allow to classify audio data have been developed. However, the development of appropriate feature extraction methods is a tedious effort, particularly because every new classification task requires tailoring the feature set anew. This paper presents a unifying framework for feature extraction from value series. Operators of this framework can be combined to feature extraction methods automatically, using a genetic programming approach. The construction of features is guided by the performance of the learning classifier which uses the features. Our approach to automatic feature extraction requires a balance between the completeness of the methods on one side and the tractability of searching for appropriate methods on the other side. In this paper, some theoretical considerations illustrate the trade-off. After the feature extraction, a second process learns a classifier from the transformed data. The practical use of the methods is shown by two types of experiments: classification of genres and classification according to user preferences.", "title": "" }, { "docid": "a500afda393ad60ddd1bb39778655172", "text": "The success and the failure of a data warehouse (DW) project are mainly related to the design phase according to most researchers in this domain. When analyzing the decision-making system requirements, many recurring problems appear and requirements modeling difficulties are detected. Also, we encounter the problem associated with the requirements expression by non-IT professionals and non-experts makers on design models. The ambiguity of the term of decision-making requirements leads to a misinterpretation of the requirements resulting from data warehouse design failure and incorrect OLAP analysis. Therefore, many studies have focused on the inclusion of vague data in information systems in general, but few studies have examined this case in data warehouses. This article describes one of the shortcomings of current approaches to data warehouse design which is the study of in the requirements inaccuracy expression and how ontologies can help us to overcome it. We present a survey on this topic showing that few works that take into account the imprecision in the study of this crucial phase in the decision-making process for the presentation of challenges and problems that arise and requires more attention by researchers to improve DW design. According to our knowledge, no rigorous study of vagueness in this area were made. 
Keywords— Data warehouses Design, requirements analysis, imprecision, ontology", "title": "" }, { "docid": "a7b6a491d85ae94285808a21dbc65ce9", "text": "In imbalanced learning, most standard classification algorithms usually fail to properly represent data distribution and provide unfavorable classification performance. More specifically, the decision rule of minority class is usually weaker than majority class, leading to many misclassification of expensive minority class data. Motivated by our previous work ADASYN [1], this paper presents a novel kernel based adaptive synthetic over-sampling approach, named KernelADASYN, for imbalanced data classification problems. The idea is to construct an adaptive over-sampling distribution to generate synthetic minority class data. The adaptive over-sampling distribution is first estimated with kernel density estimation methods and is further weighted by the difficulty level for different minority class data. The classification performance of our proposed adaptive over-sampling approach is evaluated on several real-life benchmarks, specifically on medical and healthcare applications. The experimental results show the competitive classification performance for many real-life imbalanced data classification problems.", "title": "" }, { "docid": "fce58bfa94acf2b26a50f816353e6bf2", "text": "The perspective directions in evaluating network security are simulating possible malefactor’s actions, building the representation of these actions as attack graphs (trees, nets), the subsequent checking of various properties of these graphs, and determining security metrics which can explain possible ways to increase security level. The paper suggests a new approach to security evaluation based on comprehensive simulation of malefactor’s actions, construction of attack graphs and computation of different security metrics. The approach is intended for using both at design and exploitation stages of computer networks. The implemented software system is described, and the examples of experiments for analysis of network security level are considered.", "title": "" }, { "docid": "fe20c0bee35db1db85968b4d2793b83b", "text": "The Smule Ocarina is a wind instrument designed for the iPhone, fully leveraging its wide array of technologies: microphone input (for breath input), multitouch (for fingering), accelerometer, real-time sound synthesis, highperformance graphics, GPS/location, and persistent data connection. In this mobile musical artifact, the interactions of the ancient flute-like instrument are both preserved and transformed via breath-control and multitouch finger-holes, while the onboard global positioning and persistent data connection provide the opportunity to create a new social experience, allowing the users of Ocarina to listen to one another. In this way, Ocarina is also a type of social instrument that enables a different, perhaps even magical, sense of global connectivity.", "title": "" }, { "docid": "d95ae6900ae353fa0ed32167e0c23f16", "text": "As well known, fully convolutional network (FCN) becomes the state of the art for semantic segmentation in deep learning. Currently, new hardware designs for deep learning have focused on improving the speed and parallelism of processing units. This motivates memristive solutions, in which the memory units (i.e., memristors) have computing capabilities. However, designing a memristive deep learning network is challenging, since memristors work very differently from the traditional CMOS hardware. 
This paper proposes a complete solution to implement memristive FCN (MFCN). Voltage selectors are firstly utilized to realize max-pooling layers with the detailed MFCN deconvolution hardware circuit by the massively parallel structure, which is effective since the deconvolution kernel and the input feature are similar in size. Then, deconvolution calculation is realized by converting the image into a column matrix and converting the deconvolution kernel into a sparse matrix. Meanwhile, the convolution realization in MFCN is also studied with the traditional sliding window method rather than the large matrix theory to overcome the shortcoming of low efficiency. Moreover, the conductance values of memristors are predetermined in Tensorflow with ex-situ training method. In other words, we train MFCN in software, then download the trained parameters to the simulink system by writing memristor. The effectiveness of the designed MFCN scheme is verified with improved accuracy over some existing machine learning methods. The proposed scheme is also adapt to LFW dataset with three-classification tasks. However, the MFCN training is time consuming as the computational burden is heavy with thousands of weight parameters with just six layers. In future, it is necessary to sparsify the weight parameters and layers of the MFCN network to speed up computing.", "title": "" }, { "docid": "3df95e4b2b1bb3dc80785b25c289da92", "text": "The problem of efficiently locating previously known patterns in a time series database (i.e., query by content) has received much attention and may now largely be regarded as a solved problem. However, from a knowledge discovery viewpoint, a more interesting problem is the enumeration of previously unknown, frequently occurring patterns. We call such patterns “motifs”, because of their close analogy to their discrete counterparts in computation biology. An efficient motif discovery algorithm for time series would be useful as a tool for summarizing and visualizing massive time series databases. In addition it could be used as a subroutine in various other data mining tasks, including the discovery of association rules, clustering and classification. In this work we carefully motivate, then introduce, a nontrivial definition of time series motifs. We propose an efficient algorithm to discover them, and we demonstrate the utility and efficiency of our approach on several real world datasets.", "title": "" }, { "docid": "a8287a99def9fec3a9a2fda06a95e36e", "text": "The abstraction of a process enables certain primitive forms of communication during process creation and destruction such as wait(). However, the operating system provides more general mechanisms for flexible inter-process communication. In this paper, we have studied and evaluated three commonly-used inter-process communication devices pipes, sockets and shared memory. We have identified the various factors that could affect their performance such as message size, hardware caches and process scheduling, and constructed experiments to reliably measure the latency and transfer rate of each device. We identified the most reliable timer APIs available for our measurements. Our experiments reveal that shared memory provides the lowest latency and highest throughput, followed by kernel pipes and lastly, TCP/IP sockets. However, the latency trends provide interesting insights into the construction of each mechanism. 
We also make certain observations on the pros and cons of each mechanism, highlighting its usefulness for different kinds of applications.", "title": "" }, { "docid": "2e35483beb568ab514601ba21d70c2d3", "text": "Determining the intended sense of words in text – word sense disambiguation (WSD) – is a long-standing problem in natural language processing. In this paper, we present WSD algorithms which use neural network language models to achieve state-of-the-art precision. Each of these methods learns to disambiguate word senses using only a set of word senses, a few example sentences for each sense taken from a licensed lexicon, and a large unlabeled text corpus. We classify based on cosine similarity of vectors derived from the contexts in unlabeled query and labeled example sentences. We demonstrate state-of-the-art results when using the WordNet sense inventory, and significantly better than baseline performance using the New Oxford American Dictionary inventory. The best performance was achieved by combining an LSTM language model with graph label propagation.", "title": "" }, { "docid": "c180a56ae8ab74cd6a77f9f47ee76544", "text": "Existing graph-based ranking methods for keyphrase extraction compute a single importance score for each word via a single random walk. Motivated by the fact that both documents and words can be represented by a mixture of semantic topics, we propose to decompose traditional random walk into multiple random walks specific to various topics. We thus build a Topical PageRank (TPR) on word graph to measure word importance with respect to different topics. After that, given the topic distribution of the document, we further calculate the ranking scores of words and extract the top ranked ones as keyphrases. Experimental results show that TPR outperforms state-of-the-art keyphrase extraction methods on two datasets under various evaluation metrics.", "title": "" }, { "docid": "af0dfe672a8828587e3b27ef473ea98e", "text": "Machine comprehension of text is the overarching goal of a great deal of research in natural language processing. The Machine Comprehension Test (Richardson et al., 2013) was recently proposed to assess methods on an open-domain, extensible, and easy-to-evaluate task consisting of two datasets. In this paper we develop a lexical matching method that takes into account multiple context windows, question types and coreference resolution. We show that the proposed method outperforms the baseline of Richardson et al. (2013), and despite its relative simplicity, is comparable to recent work using machine learning. We hope that our approach will inform future work on this task. Furthermore, we argue that MC500 is harder than MC160 due to the way question answer pairs were created.", "title": "" }, { "docid": "2c25a1333dc94bf98c74b693997e2793", "text": "In recent years, HCI has shown a rising interest in the creative practices associated with massive online communities, including crafters, hackers, DIY, and other expert amateurs. One strategy for researching creativity at this scale is through an analysis of a community's outputs, including its creative works, custom created tools, and emergent practices. In this paper, we offer one such case study, a historical account of World of Warcraft (WoW) machinima (i.e., videos produced inside of video games), which shows how the aesthetic needs and requirements of video making community coevolved with the community-made creativity support tools in use at the time. 
We view this process as inhabiting different layers and practices of appropriation, and through an analysis of them, we trace the ways that support for emerging stylistic conventions become built into creativity support tools over time.", "title": "" }, { "docid": "e0d553cc4ca27ce67116c62c49c53d23", "text": "We estimate a vehicle's speed, its wheelbase length, and tire track length by jointly estimating its acoustic wave pattern with a single passive acoustic sensor that records the vehicle's drive-by noise. The acoustic wave pattern is determined using the vehicle's speed, the Doppler shift factor, the sensor's distance to the vehicle's closest-point-of-approach, and three envelope shape (ES) components, which approximate the shape variations of the received signal's power envelope. We incorporate the parameters of the ES components along with estimates of the vehicle engine RPM, the number of cylinders, and the vehicle's initial bearing, loudness and speed to form a vehicle profile vector. This vector provides a fingerprint that can be used for vehicle identification and classification. We also provide possible reasons why some of the existing methods are unable to provide unbiased vehicle speed estimates using the same framework. The approach is illustrated using vehicle speed estimation and classification results obtained with field data.", "title": "" }, { "docid": "ea2d97e8bde8e21b8291c370ce5815bf", "text": "Can the cell's perception of time be expressed through the length of the shortest telomere? To address this question, we analyze an asymmetric random walk that models telomere length for each division that can decrease by a fixed length a or, if recognized by a polymerase, it increases by a fixed length b ≫ a. Our analysis of the model reveals two phases, first, a determinist drift of the length toward a quasi-equilibrium state, and second, persistence of the length near an attracting state for the majority of divisions. The measure of stability of the latter phase is the expected number of divisions at the attractor (\"lifetime\") prior to crossing a threshold T that model senescence. Using numerical simulations, we further study the distribution of times for the shortest telomere to reach the threshold T. We conclude that the telomerase regulates telomere stability by creating an effective potential barrier that separates statistically the arrival time of the shortest from the next shortest to T. The present model explains how random telomere dynamics underlies the extension of cell survival time.", "title": "" }, { "docid": "46209913057e33c17d38a565e50097a3", "text": "Power-on reset circuits are available as discrete devices as well as on-chip solutions and are indispensable to initialize some critical nodes of analog and digital designs during power-on. In this paper, we present a power-on reset circuit specifically designed for on-chip applications. The mentioned POR circuit should meet certain design requirements necessary to be integrated on-chip, some of them being area-efficiency, power-efficiency, supply rise-time insensitivity and ambient temperature insensitivity. The circuit is implemented within a small area (60mum times 35mum) using the 2.5V tolerant MOSFETs of a 0.28mum CMOS technology. 
It has a maximum quiescent current consumption of 40muA and works over infinite range of supply rise-times and ambient temperature range of -40degC to 150degC", "title": "" }, { "docid": "91446020934f6892a3a4807f5a7b3829", "text": "Collaborative filtering recommends items to a user based on the interests of other users having similar preferences. However, high dimensional, sparse data result in poor performance in collaborative filtering. This paper introduces an approach called multiple metadata-based collaborative filtering (MMCF), which utilizes meta-level information to alleviate this problem, e.g., metadata such as genre, director, and actor in the case of movie recommendation. MMCF builds a k-partite graph of users, movies and multiple metadata, and extracts implicit relationships among the metadata and between users and the metadata. Then the implicit relationships are propagated further by applying random walk process in order to alleviate the problem of sparseness in the original data set. The experimental results show substantial improvement over previous approaches on the real Netflix movie dataset.", "title": "" }, { "docid": "ee9cb495280dc6e252db80c23f2f8c2b", "text": "Due to the dramatical increase in popularity of mobile devices in the last decade, more sensitive user information is stored and accessed on these devices everyday. However, most existing technologies for user authentication only cover the login stage or only work in restricted controlled environments or GUIs in the post login stage. In this work, we present TIPS, a Touch based Identity Protection Service that implicitly and unobtrusively authenticates users in the background by continuously analyzing touch screen gestures in the context of a running application. To the best of our knowledge, this is the first work to incorporate contextual app information to improve user authentication. We evaluate TIPS over data collected from 23 phone owners and deployed it to 13 of them with 100 guest users. TIPS can achieve over 90% accuracy in real-life naturalistic conditions within a small amount of computational overhead and 6% of battery usage.", "title": "" }, { "docid": "d78acb79ccd229af7529dae1408dea6a", "text": "Making recommendations by learning to rank is becoming an increasingly studied area. Approaches that use stochastic gradient descent scale well to large collaborative filtering datasets, and it has been shown how to approximately optimize the mean rank, or more recently the top of the ranked list. In this work we present a family of loss functions, the k-order statistic loss, that includes these previous approaches as special cases, and also derives new ones that we show to be useful. In particular, we present (i) a new variant that more accurately optimizes precision at k, and (ii) a novel procedure of optimizing the mean maximum rank, which we hypothesize is useful to more accurately cover all of the user's tastes. The general approach works by sampling N positive items, ordering them by the score assigned by the model, and then weighting the example as a function of this ordered set. Our approach is studied in two real-world systems, Google Music and YouTube video recommendations, where we obtain improvements for computable metrics, and in the YouTube case, increased user click through and watch duration when deployed live on www.youtube.com.", "title": "" } ]
scidocsrr
49cfcea811b0d8d1823a5281c2317fb0
Untrimmed Video Classification for Activity Detection: submission to ActivityNet Challenge
[ { "docid": "848aae58854681e75fae293e2f8d2fc5", "text": "Over last several decades, computer vision researchers have been devoted to find good feature to solve different tasks, such as object recognition, object detection, object segmentation, activity recognition and so forth. Ideal features transform raw pixel intensity values to a representation in which these computer vision problems are easier to solve. Recently, deep features from covolutional neural network(CNN) have attracted many researchers in computer vision. In the supervised setting, these hierarchies are trained to solve specific problems by minimizing an objective function. More recently, the feature learned from large scale image dataset have been proved to be very effective and generic for many computer vision task. The feature learned from recognition task can be used in the object detection task. This work uncover the principles that lead to these generic feature representations in the transfer learning, which does not need to train the dataset again but transfer the rich feature from CNN learned from ImageNet dataset. We begin by summarize some related prior works, particularly the paper in object recognition, object detection and segmentation. We introduce the deep feature to computer vision task in intelligent transportation system. We apply deep feature in object detection task, especially in vehicle detection task. To make fully use of objectness proposals, we apply proposal generator on road marking detection and recognition task. Third, to fully understand the transportation situation, we introduce the deep feature into scene understanding. We experiment each task for different public datasets, and prove our framework is robust.", "title": "" } ]
[ { "docid": "97c3860dfb00517f744fd9504c4e7f9f", "text": "The plastic film surface treatment load is considered as a nonlinear capacitive load, which is rather difficult for designing of an inverter. The series resonant inverter (SRI) connected to the load via transformer has been found effective for it's driving. In this paper, a surface treatment based on a pulse density modulation (PDM) and pulse frequency modulation (PFM) hybrid control scheme is described. The PDM scheme is used to regulate the output power of the inverter and the PFM scheme is used to compensate for temperature and other environmental influences on the discharge. Experimental results show that the PDM and PFM hybrid control series-resonant inverter (SRI) makes the corona discharge treatment simple and compact, thus leading to higher efficiency.", "title": "" }, { "docid": "78321a0af7f5ab76809c6f7d08f2c15a", "text": "The mass media are ranked with respect to their perceived helpfulness in satisfying clusters of needs arising from social roles and individual dispositions. For example, integration into the sociopolitical order is best served by newspaper; while \"knowing oneself \" is best served by books. Cinema and books are more helpful as means of \"escape\" than is television. Primary relations, holidays and other cultural activities are often more important than the mass media in satisfying needs. Television is the least specialized medium, serving many different personal and political needs. The \"interchangeability\" of the media over a variety of functions orders televisions, radio, newspapers, books, and cinema in a circumplex. We speculate about which attributes of the media explain the social and psychological needs they serve best. The data, drawn from an Israeli survey, are presented as a basis for cross-cultural comparison. Disciplines Communication | Social and Behavioral Sciences This journal article is available at ScholarlyCommons: http://repository.upenn.edu/asc_papers/267 ON THE USE OF THE MASS MEDIA FOR IMPORTANT THINGS * ELIHU KATZ MICHAEL GUREVITCH", "title": "" }, { "docid": "c75388c19397bf1e743970cb32649b17", "text": "In recent years, there has been a substantial amount of work on large-scale data analytics using Hadoop-based platforms running on large clusters of commodity machines. A lessexplored topic is how those data, dominated by application logs, are collected and structured to begin with. In this paper, we present Twitter’s production logging infrastructure and its evolution from application-specific logging to a unified “client events” log format, where messages are captured in common, well-formatted, flexible Thrift messages. Since most analytics tasks consider the user session as the basic unit of analysis, we pre-materialize “session sequences”, which are compact summaries that can answer a large class of common queries quickly. The development of this infrastructure has streamlined log collection and data analysis, thereby improving our ability to rapidly experiment and iterate on various aspects of the service.", "title": "" }, { "docid": "086f5e6dd7889d8dcdaddec5852afbdb", "text": "Fast advances in the wireless technology and the intensive penetration of cell phones have motivated banks to spend large budget on building mobile banking systems, but the adoption rate of mobile banking is still underused than expected. Therefore, research to enrich current knowledge about what affects individuals to use mobile banking is required. 
Consequently, this study employs the Unified Theory of Acceptance and Use of Technology (UTAUT) to investigate what impacts people to adopt mobile banking. Through sampling 441 respondents, this study empirically concluded that individual intention to adopt mobile banking was significantly influenced by social influence, perceived financial cost, performance expectancy, and perceived credibility, in their order of influencing strength. The behavior was considerably affected by individual intention and facilitating conditions. As for moderating effects of gender and age, this study discovered that gender significantly moderated the effects of performance expectancy and perceived financial cost on behavioral intention, and the age considerably moderated the effects of facilitating conditions and perceived self-efficacy on actual adoption behavior.", "title": "" }, { "docid": "8f6d9ed651c783cf88bd6b3ab5b3012c", "text": "To the Editor: Gianotti-Crosti syndrome (GCS) classically presents in children as a self-limited, symmetric erythematous papular eruption affecting the cheeks, extremities, and buttocks. While initial reports implicated hepatitis B virus as the etiologic agent, many other bacterial, viral, and vaccine triggers have since been described. A previously healthy 2-year-old boy presented with a 3-week history of a cutaneous eruption that initially appeared on his legs and subsequently progressed to affect his arms and face. Two weeks after onset of the eruption, he was immunized with intramuscular Vaxigrip influenza vaccination (Sanofi Pasteur), and new lesions appeared at the immunization site on his right upper arm. Physical examination demonstrated an afebrile child with erythematous papules on the cheeks, arms, and legs (Fig 1). He had a localized papular eruption on his right upper arm (Fig 2). There was no lymphadenopathy or hepatosplenomegaly. Laboratory investigations revealed leukocytosis (white cell count, 14,600/mm) with a normal differential, reactive thrombocytosis ( platelet count, 1,032,000/mm), a positive urine culture for cytomegalovirus, and positive IgM serology for Epstein-Barr virus (EBV). Histopathologic examination of a skin biopsy specimen from the right buttock revealed a perivascular and somewhat interstitial lymphocytic infiltrate in the superficial and mid-dermis with intraepidermal exocytosis of lymphocytes, mild spongiosis and papillary dermal edema. He was treated with 2.5% hydrocortisone cream, and the eruption resolved. Twelve months later, he presented with a similar papular eruption localized to the left upper arm at the site of a recent intramuscular influenza vaccination (Vaxigrip). Although an infection represents the most important etiologic agent, a second event involving immunomodulation might lead to further disease accentuation, thus explaining the association of GCS with vaccinations. In our case, there was evidence of both cytomegalovirus (CMV) and EBV infection as well as a recent history of immunization. Localized accentuation of papules at the immunization site was unusual, as previous cases of GCS following immunizations have had a widespread and typically symmetric eruption. It is possible that trauma from the injection or a component of the vaccine elicited a Koebner response, causing local accentuation. There are no previous reports of recurrence of vaccine-associated GCS. One report documented recurrence with two different infectious triggers. As GCS is a mild and selflimiting disease, further vaccinations are not contraindicated. 
Andrei I. Metelitsa, MD, FRCPC, and Loretta Fiorillo, MD, FRCPC", "title": "" }, { "docid": "6e63abd83cc2822f011c831234c6d2e7", "text": "The rapid uptake of mobile devices and the rising popularity of mobile applications and services pose unprecedented demands on mobile and wireless networking infrastructure. Upcoming 5G systems are evolving to support exploding mobile traffic volumes, real-time extraction of fine-grained analytics, and agile management of network resources, so as to maximize user experience. Fulfilling these tasks is challenging, as mobile environments are increasingly complex, heterogeneous, and evolving. One potential solution is to resort to advanced machine learning techniques, in order to help manage the rise in data volumes and algorithm-driven applications. The recent success of deep learning underpins new and powerful tools that tackle problems in this space. In this paper we bridge the gap between deep learning and mobile and wireless networking research, by presenting a comprehensive survey of the crossovers between the two areas. We first briefly introduce essential background and state-of-theart in deep learning techniques with potential applications to networking. We then discuss several techniques and platforms that facilitate the efficient deployment of deep learning onto mobile systems. Subsequently, we provide an encyclopedic review of mobile and wireless networking research based on deep learning, which we categorize by different domains. Drawing from our experience, we discuss how to tailor deep learning to mobile environments. We complete this survey by pinpointing current challenges and open future directions for research.", "title": "" }, { "docid": "de04d3598687b34b877d744956ca4bcd", "text": "We investigate the reputational impact of financial fraud for outside directors based on a sample of firms facing shareholder class action lawsuits. Following a financial fraud lawsuit, outside directors do not face abnormal turnover on the board of the sued firm but experience a significant decline in other board seats held. The decline in other directorships is greater for more severe cases of fraud and when the outside director bears greater responsibility for monitoring fraud. Interlocked firms that share directors with the sued firm exhibit valuation declines at the lawsuit filing. When fraud-affiliated directors depart from boards of interlocked firms, these firms experience a significant increase in valuation.", "title": "" }, { "docid": "b60a4efcdd52d6209069415540016849", "text": "Vulnerabilities need to be detected and removed from software. Although previous studies demonstrated the usefulness of employing prediction techniques in deciding about vulnerabilities of software components, the accuracy and improvement of effectiveness of these prediction techniques is still a grand challenging research question. This paper proposes a hybrid technique based on combining N-gram analysis and feature selection algorithms for predicting vulnerable software components where features are defined as continuous sequences of token in source code files, i.e., Java class file. Machine learning-based feature selection algorithms are then employed to reduce the feature and search space. 
We evaluated the proposed technique based on some Java Android applications, and the results demonstrated that the proposed technique could predict vulnerable classes, i.e., software components, with high precision, accuracy and recall.", "title": "" }, { "docid": "b1b511c0e014861dac12c2254f6f1790", "text": "This paper describes automatic speech recognition (ASR) systems developed jointly by RWTH, UPB and FORTH for the 1ch, 2ch and 6ch track of the 4th CHiME Challenge. In the 2ch and 6ch tracks the final system output is obtained by a Confusion Network Combination (CNC) of multiple systems. The Acoustic Model (AM) is a deep neural network based on Bidirectional Long Short-Term Memory (BLSTM) units. The systems differ by front ends and training sets used for the acoustic training. The model for the 1ch track is trained without any preprocessing. For each front end we trained and evaluated individual acoustic models. We compare the ASR performance of different beamforming approaches: a conventional superdirective beamformer [1] and an MVDR beamformer as in [2], where the steering vector is estimated based on [3]. Furthermore we evaluated a BLSTM supported Generalized Eigenvalue beamformer using NN-GEV [4]. The back end is implemented using RWTH’s open-source toolkits RASR [5], RETURNN [6] and rwthlm [7]. We rescore lattices with a Long Short-Term Memory (LSTM) based language model. The overall best results are obtained by a system combination that includes the lattices from the system of UPB’s submission [8]. Our final submission scored second in each of the three tracks of the 4th CHiME Challenge.", "title": "" }, { "docid": "b73526f1fb0abb4373421994dbd07822", "text": "in our country around 2.78% of peoples are not able to speak (dumb). Their communications with others are only using the motion of their hands and expressions. We proposed a new technique called artificial speaking mouth for dumb people. It will be very helpful to them for conveying their thoughts to others. Some peoples are easily able to get the information from their motions. The remaining is not able to understand their way of conveying the message. In order to overcome the complexity the artificial mouth is introduced for the dumb peoples. This system is based on the motion sensor. According to dumb people, for every motion they have a meaning. That message is kept in a database. Likewise all templates are kept in the database. In the real time the template database is fed into a microcontroller and the motion sensor is fixed in their hand. For every action the motion sensors get accelerated and give the signal to the microcontroller. The microcontroller matches the motion with the database and produces the speech signal. The output of the system is using the speaker. By properly updating the database the dumb will speak like a normal person using the artificial mouth. The system also includes a text to speech conversion (TTS) block that interprets the matched gestures.", "title": "" }, { "docid": "874973c7a28652d5d9859088b965e76c", "text": "Recommender systems are commonly defined as applications that e-commerce sites exploit to suggest products and provide consumers with information to facilitate their decision-making processes.1 They implicitly assume that we can map user needs and constraints, through appropriate recommendation algorithms, and convert them into product selections using knowledge compiled into the intelligent recommender. 
Knowledge is extracted from either domain experts (contentor knowledge-based approaches) or extensive logs of previous purchases (collaborative-based approaches). Furthermore, the interaction process, which turns needs into products, is presented to the user with a rationale that depends on the underlying recommendation technology and algorithms. For example, if the system funnels the behavior of other users in the recommendation, it explicitly shows reviews of the selected products or quotes from a similar user. Recommender systems are now a popular research area2 and are increasingly used by e-commerce sites.1 For travel and tourism,3 the two most successful recommender system technologies (see Figure 1) are Triplehop’s TripMatcher (used by www. ski-europe.com, among others) and VacationCoach’s expert advice platform, MePrint (used by travelocity.com). Both of these recommender systems try to mimic the interactivity observed in traditional counselling sessions with travel agents when users search for advice on a possible holiday destination. From a technical viewpoint, they primarily use a content-based approach, in which the user expresses needs, benefits, and constraints using the offered language (attributes). The system then matches the user preferences with items in a catalog of destinations (described with the same language). VacationCoach exploits user profiling by explicitly asking the user to classify himself or herself in one profile (for example, as a “culture creature,” “beach bum,” or “trail trekker”), which induces implicit needs that the user doesn’t provide. The user can even input precise profile information by completing the appropriate form. TripleHop’s matching engine uses a more sophisticated approach to reduce user input. It guesses importance of attributes that the user does not explicitly mention. It then combines statistics on past user queries with a prediction computed as a weighted average of importance assigned by similar users.4", "title": "" }, { "docid": "d106a47637195845ed3d218dbb766c2c", "text": "The efficiency of three forward-pruning techniques, i.e., futility pruning, null-move pruning, and LMR, is analyzed in shogi, a Japanese chess variant. It is shown that the techniques with the a–b pruning reduce the effective branching factor of shogi endgames to 2.8 without sacrificing much accuracy of the search results. Because the average number of the raw branching factor in shogi is around 80, the pruning techniques reduce the search space more effectively than in chess. 2011 International Federation for Information Processing Published by Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "8f1a5420deb75a2b664ceeaae8fc03f9", "text": "A stretchable and multiple-force-sensitive electronic fabric based on stretchable coaxial sensor electrodes is fabricated for artificial-skin application. This electronic fabric, with only one kind of sensor unit, can simultaneously map and quantify the mechanical stresses induced by normal pressure, lateral strain, and flexion.", "title": "" }, { "docid": "b27038accdabab12d8e0869aba20a083", "text": "Two architectures that generalize convolutional neural networks (CNNs) for the processing of signals supported on graphs are introduced. 
We start with the selection graph neural network (GNN), which replaces linear time invariant filters with linear shift invariant graph filters to generate convolutional features and reinterprets pooling as a possibly nonlinear subsampling stage where nearby nodes pool their information in a set of preselected sample nodes. A key component of the architecture is to remember the position of sampled nodes to permit computation of convolutional features at deeper layers. The second architecture, dubbed aggregation GNN, diffuses the signal through the graph and stores the sequence of diffused components observed by a designated node. This procedure effectively aggregates all components into a stream of information having temporal structure to which the convolution and pooling stages of regular CNNs can be applied. A multinode version of aggregation GNNs is further introduced for operation in large-scale graphs. An important property of selection and aggregation GNNs is that they reduce to conventional CNNs when particularized to time signals reinterpreted as graph signals in a circulant graph. Comparative numerical analyses are performed in a source localization application over synthetic and real-world networks. Performance is also evaluated for an authorship attribution problem and text category classification. Multinode aggregation GNNs are consistently the best-performing GNN architecture.", "title": "" }, { "docid": "b413cd956623afce3d50780ff90b0efe", "text": "Parkinson's disease (PD) is the second most common neurodegenerative disorder. The majority of cases do not arise from purely genetic factors, implicating an important role of environmental factors in disease pathogenesis. Well-established environmental toxins important in PD include pesticides, herbicides, and heavy metals. However, many toxicants linked to PD and used in animal models are rarely encountered. In this context, other factors such as dietary components may represent daily exposures and have gained attention as disease modifiers. Several in vitro, in vivo, and human epidemiological studies have found a variety of dietary factors that modify PD risk. Here, we critically review findings on association between dietary factors, including vitamins, flavonoids, calorie intake, caffeine, alcohol, and metals consumed via food and fatty acids and PD. We have also discussed key data on heterocyclic amines that are produced in high-temperature cooked meat, which is a new emerging field in the assessment of dietary factors in neurological diseases. While more research is clearly needed, significant evidence exists that specific dietary factors can modify PD risk.", "title": "" }, { "docid": "b07b369dc622fad777fd09b23c284e12", "text": "Stroke is the number one cause of severe physical disability in the UK. Recent studies have shown that technologies such as virtual reality and imaging can provide an engaging and motivating tool for physical rehabilitation. In this paper we summarize previous work in our group using virtual reality technology and webcam-based games. We then present early work we are conducting in experimenting with desktop augmented reality (AR) for rehabilitation. AR allows the user to use real objects to interact with computer-generated environments. Markers attached to the real objects enable the system (via a webcam) to track the position and orientation of each object as it is moved. 
The system can then augment the captured image of the real environment with computer-generated graphics to present a variety of game or task-driven scenarios to the user. We discuss the development of rehabilitation prototypes using available AR libraries and express our thoughts on the potential of AR technology.", "title": "" }, { "docid": "ba6709c1413a1c28c99e686e065ce564", "text": "Essential oils are complex mixtures of hydrocarbons and their oxygenated derivatives arising from two different isoprenoid pathways. Essential oils are produced by glandular trichomes and other secretory structures, specialized secretory tissues mainly diffused onto the surface of plant organs, particularly flowers and leaves, thus exerting a pivotal ecological role in plant. In addition, essential oils have been used, since ancient times, in many different traditional healing systems all over the world, because of their biological activities. Many preclinical studies have documented antimicrobial, antioxidant, anti-inflammatory and anticancer activities of essential oils in a number of cell and animal models, also elucidating their mechanism of action and pharmacological targets, though the paucity of in human studies limits the potential of essential oils as effective and safe phytotherapeutic agents. More well-designed clinical trials are needed in order to ascertain the real efficacy and safety of these plant products.", "title": "" }, { "docid": "56a4a9b20391f13e7ced38586af9743b", "text": "The most common type of nasopharyngeal tumor is nasopharyngeal carcinoma. The etiology is multifactorial with race, genetics, environment and Epstein-Barr virus (EBV) all playing a role. While rare in Caucasian populations, it is one of the most frequent nasopharyngeal cancers in Chinese, and has endemic clusters in Alaskan Eskimos, Indians, and Aleuts. Interestingly, as native-born Chinese migrate, the incidence diminishes in successive generations, although still higher than the native population. EBV is nearly always present in NPC, indicating an oncogenic role. There are raised antibodies, higher titers of IgA in patients with bulky (large) tumors, EBERs (EBV encoded early RNAs) in nearly all tumor cells, and episomal clonal expansion (meaning the virus entered the tumor cell before clonal expansion). Consequently, the viral titer can be used to monitor therapy or possibly as a diagnostic tool in the evaluation of patients who present with a metastasis from an unknown primary. The effect of environmental carcinogens, especially those which contain a high levels of volatile nitrosamines are also important in the etiology of NPC. Chinese eat salted fish, specifically Cantonese-style salted fish, and especially during early life. Perhaps early life (weaning period) exposure is important in the ‘‘two-hit’’ hypothesis of cancer development. Smoking, cooking, and working under poor ventilation, the use of nasal oils and balms for nose and throat problems, and the use of herbal medicines have also been implicated but are in need of further verification. Likewise, chemical fumes, dusts, formaldehyde exposure, and radiation have all been implicated in this complicated disorder. Various human leukocyte antigens (HLA) are also important etiologic or prognostic indicators in NPC. 
While histocompatibility profiles of HLA-A2, HLA-B17 and HLA-Bw46 show increased risk for developing NPC, there is variable expression depending on whether they occur alone or jointly, further conferring a variable prognosis (B17 is associated with a poor and A2B13 with a good prognosis, respectively).", "title": "" }, { "docid": "5cfc4911a59193061ab55c2ce5013272", "text": "What can you do with a million images? In this paper, we present a new image completion algorithm powered by a huge database of photographs gathered from the Web. The algorithm patches up holes in images by finding similar image regions in the database that are not only seamless, but also semantically valid. Our chief insight is that while the space of images is effectively infinite, the space of semantically differentiable scenes is actually not that large. For many image completion tasks, we are able to find similar scenes which contain image fragments that will convincingly complete the image. Our algorithm is entirely data driven, requiring no annotations or labeling by the user. Unlike existing image completion methods, our algorithm can generate a diverse set of image completions and we allow users to select among them. We demonstrate the superiority of our algorithm over existing image completion approaches.", "title": "" }, { "docid": "afae66e9ff49274bbb546cd68490e5e4", "text": "Question-Answering Bulletin Boards (QABB), such as Yahoo! Answers and Windows Live QnA, are gaining popularity recently. Communications on QABB connect users, and the overall connections can be regarded as a social network. If the evolution of social networks can be predicted, it is quite useful for encouraging communications among users. This paper describes an improved method for predicting links based on weighted proximity measures of social networks. The method is based on an assumption that proximities between nodes can be estimated better by using both graph proximity measures and the weights of existing links in a social network. In order to show the effectiveness of our method, the data of Yahoo! Chiebukuro (Japanese Yahoo! Answers) are used for our experiments. The results show that our method outperforms previous approaches, especially when target social networks are sufficiently dense.", "title": "" } ]
scidocsrr
cfa5df626c7295941eb72c22ff6b61cf
Fast and robust face recognition via coding residual map learning based adaptive masking
[ { "docid": "da416ce58897f6f86d9cd7b0de422508", "text": "In linear representation based face recognition (FR), it is expected that a discriminative dictionary can be learned from the training samples so that the query sample can be better represented for classification. On the other hand, dimensionality reduction is also an important issue for FR. It can not only reduce significantly the storage space of face images, but also enhance the discrimination of face feature. Existing methods mostly perform dimensionality reduction and dictionary learning separately, which may not fully exploit the discriminative information in the training samples. In this paper, we propose to learn jointly the projection matrix for dimensionality reduction and the discriminative dictionary for face representation. The joint learning makes the learned projection and dictionary better fit with each other so that a more effective face classification can be obtained. The proposed algorithm is evaluated on benchmark face databases in comparison with existing linear representation based methods, and the results show that the joint learning improves the FR rate, particularly when the number of training samples per class is small.", "title": "" }, { "docid": "7fdb4e14a038b11bb0e92917d1e7ce70", "text": "Recently the sparse representation (or coding) based classification (SRC) has been successfully used in face recognition. In SRC, the testing image is represented as a sparse linear combination of the training samples, and the representation fidelity is measured by the l2-norm or l1-norm of coding residual. Such a sparse coding model actually assumes that the coding residual follows Gaussian or Laplacian distribution, which may not be accurate enough to describe the coding errors in practice. In this paper, we propose a new scheme, namely the robust sparse coding (RSC), by modeling the sparse coding as a sparsity-constrained robust regression problem. The RSC seeks for the MLE (maximum likelihood estimation) solution of the sparse coding problem, and it is much more robust to outliers (e.g., occlusions, corruptions, etc.) than SRC. An efficient iteratively reweighted sparse coding algorithm is proposed to solve the RSC model. Extensive experiments on representative face databases demonstrate that the RSC scheme is much more effective than state-of-the-art methods in dealing with face occlusion, corruption, lighting and expression changes, etc.", "title": "" }, { "docid": "7655df3f32e6cf7a5545ae2231f71e7c", "text": "Many problems in information processing involve some form of dimensionality reduction. In this thesis, we introduce Locality Preserving Projections (LPP). These are linear projective maps that arise by solving a variational problem that optimally preserves the neighborhood structure of the data set. LPP should be seen as an alternative to Principal Component Analysis (PCA) – a classical linear technique that projects the data along the directions of maximal variance. When the high dimensional data lies on a low dimensional manifold embedded in the ambient space, the Locality Preserving Projections are obtained by finding the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the manifold. As a result, LPP shares many of the data representation properties of nonlinear techniques such as Laplacian Eigenmaps or Locally Linear Embedding. Yet LPP is linear and more crucially is defined everywhere in ambient space rather than just on the training data points. 
Theoretical analysis shows that PCA, LPP, and Linear Discriminant Analysis (LDA) can be obtained from different graph models. Central to this is a graph structure that is inferred on the data points. LPP finds a projection that respects this graph structure. We have applied our algorithms to several real-world applications, e.g., face analysis and document representation.", "title": "" } ]
[ { "docid": "9cb682049f4a4d1291189b7cfccafb1e", "text": "The sequencing by hybridization (SBH) of determining the order in which nucleotides should occur on a DNA string is still under discussion for enhancements on computational intelligence although the next generation of DNA sequencing has come into existence. In the last decade, many works related to graph theory-based DNA sequencing have been carried out in the literature. This paper proposes a method for SBH by integrating hypergraph with genetic algorithm (HGGA) for designing a novel analytic technique to obtain DNA sequence from its spectrum. The paper represents elements of the spectrum and its relation as hypergraph and applies the unimodular property to ensure the compatibility of relations between l-mers. The hypergraph representation and unimodular property are bound with the genetic algorithm that has been customized with a novel selection and crossover operator reducing the computational complexity with accelerated convergence. Subsequently, upon determining the primary strand, an anti-homomorphism is invoked to find the reverse complement of the sequence. The proposed algorithm is implemented in the GenBank BioServer datasets, and the results are found to prove the efficiency of the algorithm. The HGGA is a non-classical algorithm with significant advantages and computationally attractive complexity reductions ranging to $$O(n^{2} )$$ O ( n 2 ) with improved accuracy that makes it prominent for applications other than DNA sequencing like image processing, task scheduling and big data processing.", "title": "" }, { "docid": "b3962fd4000fced796f3764d009c929e", "text": "Low-field extremity magnetic resonance imaging (lfMRI) is currently commercially available and has been used clinically to evaluate rheumatoid arthritis (RA). However, one disadvantage of this new modality is that the field of view (FOV) is too small to assess hand and wrist joints simultaneously. Thus, we have developed a new lfMRI system, compacTscan, with a FOV that is large enough to simultaneously assess the entire wrist to proximal interphalangeal joint area. In this work, we examined its clinical value compared to conventional 1.5 tesla (T) MRI. The comparison involved evaluating three RA patients by both 0.3 T compacTscan and 1.5 T MRI on the same day. Bone erosion, bone edema, and synovitis were estimated by our new compact MRI scoring system (cMRIS) and the kappa coefficient was calculated on a joint-by-joint basis. We evaluated a total of 69 regions. Bone erosion was detected in 49 regions by compacTscan and in 48 regions by 1.5 T MRI, while the total erosion score was 77 for compacTscan and 76.5 for 1.5 T MRI. These findings point to excellent agreement between the two techniques (kappa = 0.833). Bone edema was detected in 14 regions by compacTscan and in 19 by 1.5 T MRI, and the total edema score was 36.25 by compacTscan and 47.5 by 1.5 T MRI. Pseudo-negative findings were noted in 5 regions. However, there was still good agreement between the techniques (kappa = 0.640). Total number of evaluated joints was 33. Synovitis was detected in 13 joints by compacTscan and 14 joints by 1.5 T MRI, while the total synovitis score was 30 by compacTscan and 32 by 1.5 T MRI. Thus, although 1 pseudo-positive and 2 pseudo-negative findings resulted from the joint evaluations, there was again excellent agreement between the techniques (kappa = 0.827). 
Overall, the data obtained by our compacTscan system showed high agreement with those obtained by conventional 1.5 T MRI with regard to diagnosis and the scoring of bone erosion, edema, and synovitis. We conclude that compacTscan is useful for diagnosis and estimation of disease activity in patients with RA.", "title": "" }, { "docid": "3a52576a2fdaa7f6f9632dc8c4bf0971", "text": "As known, fractional CO2 resurfacing treatments are more effective than non-ablative ones against aging signs, but post-operative redness and swelling prolong the overall downtime requiring up to steroid administration in order to reduce these local systems. In the last years, an increasing interest has been focused on the possible use of probiotics for treating inflammatory and allergic conditions suggesting that they can exert profound beneficial effects on skin homeostasis. In this work, the Authors report their experience on fractional CO2 laser resurfacing and provide the results of a new post-operative topical treatment with an experimental cream containing probiotic-derived active principles potentially able to modulate the inflammatory reaction associated to laser-treatment. The cream containing DermaACB (CERABEST™) was administered post-operatively to 42 consecutive patients who were treated with fractional CO2 laser. All patients adopted the cream twice a day for 2 weeks. Grades were given according to outcome scale. The efficacy of the cream containing DermaACB was evaluated comparing the rate of post-operative signs vanishing with a control group of 20 patients topically treated with an antibiotic cream and a hyaluronic acid based cream. Results registered with the experimental treatment were good in 22 patients, moderate in 17, and poor in 3 cases. Patients using the study cream took an average time of 14.3 days for erythema resolution and 9.3 days for swelling vanishing. The post-operative administration of the cream containing DermaACB induces a quicker reduction of post-operative erythema and swelling when compared to a standard treatment.", "title": "" }, { "docid": "b90b7b44971cf93ba343b5dcdd060875", "text": "This paper discusses a general approach to qualitative modeling based on fuzzy logic. The method of qualitative modeling is divided into two parts: fuzzy modeling and linguistic approximation. It proposes to use a fuzzy clustering method (fuzzy c-means method) to identify the structure of a fuzzy model. To clarify the advantages of the proposed method, it also shows some examples of modeling, among them a model of a dynamical process and a model of a human operator’s control action.", "title": "" }, { "docid": "a8a802b8130d2b6a1b2dae84d53fb7c9", "text": "This paper addresses an open challenge in educational data mining, i.e., the problem of using observed prerequisite relations among courses to learn a directed universal concept graph, and using the induced graph to predict unobserved prerequisite relations among a broader range of courses. This is particularly useful to induce prerequisite relations among courses from different providers (universities, MOOCs, etc.). We propose a new framework for inference within and across two graphs---at the course level and at the induced concept level---which we call Concept Graph Learning (CGL). 
In the training phase, our system projects the course-level links onto the concept space to induce directed concept links; in the testing phase, the concept links are used to predict (unobserved) prerequisite links for test-set courses within the same institution or across institutions. The dual mappings enable our system to perform an interlingua-style transfer learning, e.g. treating the concept graph as the interlingua, and inducing prerequisite links in a transferable manner across different universities. Experiments on our newly collected data sets of courses from MIT, Caltech, Princeton and CMU show promising results, including the viability of CGL for transfer learning.", "title": "" }, { "docid": "6fb1f05713db4e771d9c610fa9c9925d", "text": "Objectives: Straddle injury represents a rare and complex injury to the female genito urinary tract (GUT). Overall prevention would be the ultimate goal, but due to persistent inhomogenity and inconsistency in definitions and guidelines, or suboptimal coding, the optimal study design for a prevention programme is still missing. Thus, medical records data were tested for their potential use for an injury surveillance registry and their impact on future prevention programmes. Design: Retrospective record analysis out of a 3 year period. Setting: All patients were treated exclusively by the first author. Patients: Six girls, median age 7 years, range 3.5 to 12 years with classical straddle injury. Interventions: Medical treatment and recording according to National and International Standards. Main Outcome Measures: All records were analyzed for accuracy in diagnosis and coding, surgical procedure, time and location of incident and examination findings. Results: All registration data sets were complete. A specific code for “straddle injury” in International Classification of Diseases (ICD) did not exist. Coding followed mainly reimbursement issues and specific information about the injury was usually expressed in an individual style. Conclusions: As demonstrated in this pilot, population based medical record data collection can play a substantial part in local injury surveillance registry and prevention initiatives planning.", "title": "" }, { "docid": "7968e0f2960a7dce6017699fd1222e36", "text": "This work investigates the role of contrasting discourse relations signaled by cue phrases, together with phrase positional information, in predicting sentiment at the phrase level. Two domains of online reviews were chosen. The first domain is of nutritional supplement reviews, which are often poorly structured yet also allow certain simplifying assumptions to be made. The second domain is of hotel reviews, which have somewhat different characteristics. A corpus is built from these reviews, and manually tagged for polarity. We propose and evaluate a few new features that are realized through a lightweight method of discourse analysis, and use these features in a hybrid lexicon and machine learning based classifier. Our results show that these features may be used to obtain an improvement in classification accuracy compared to other traditional machine learning approaches.", "title": "" }, { "docid": "bf164afc6315bf29a07e6026a3db4a26", "text": "iBeacons are a new way to interact with hardware. An iBeacon is a Bluetooth Low Energy device that only sends a signal in a specific format. They are like a lighthouse that sends light signals to boats. 
This paper explains what an iBeacon is, how it works and how it can simplify your daily life, what restrictions come with iBeacon and how to mitigate them, as well as how to use Location-based Services to track items. E.g., every time you touch down at an airport and wait for your suitcase at the luggage reclaim, you have no information about when your luggage will arrive at the conveyor belt. With an iBeacon inside your suitcase, it is possible to track the luggage and to receive a push notification about it even before you can see it. This is just one possible way to use them. iBeacon can create a completely new shopping experience or make your home smarter. This paper demonstrates the luggage tracking use case and evaluates its possibilities and restrictions.", "title": "" }, { "docid": "3bff3136e5e2823d0cca2f864fe9e512", "text": "Cloud computing provides a variety of services with the growth of their offerings. Due to efficient services, it faces numerous challenges. It is based on virtualization, which provides users a plethora of computing resources via the internet without managing any infrastructure of Virtual Machine (VM). With network virtualization, the Virtual Machine Manager (VMM) provides isolation among different VMs. But sometimes the levels of abstraction involved in virtualization reduce workload performance, which is also a concern when applying virtualization to the Cloud computing domain. This paper explores how vendors in the cloud environment are using Containers for hosting their applications, and also examines the performance of VM deployments. It also compares VM and Linux Containers with respect to the quality of service, network performance and security evaluation.", "title": "" }, { "docid": "8a4b1c87b85418ce934f16003a481f27", "text": "Current parking space vacancy detection systems use simple trip sensors at the entry and exit points of parking lots. Unfortunately, this type of system fails when a vehicle takes up more than one spot or when a parking lot has different types of parking spaces. Therefore, I propose a camera-based system that would use computer vision algorithms for detecting vacant parking spaces. My algorithm uses a combination of car feature point detection and color histogram classification to detect vacant parking spaces in static overhead images.", "title": "" }, { "docid": "d2f4159b73f6baf188d49c43e6215262", "text": "In this paper, we compare the performance of descriptors computed for local interest regions, as, for example, extracted by the Harris-Affine detector [Mikolajczyk, K and Schmid, C, 2004]. Many different descriptors have been proposed in the literature. It is unclear which descriptors are more appropriate and how their performance depends on the interest region detector. The descriptors should be distinctive and at the same time robust to changes in viewing conditions as well as to errors of the detector. Our evaluation uses as criterion recall with respect to precision and is carried out for different image transformations. We compare shape context [Belongie, S, et al., April 2002], steerable filters [Freeman, W and Adelson, E, Sept. 1991], PCA-SIFT [Ke, Y and Sukthankar, R, 2004], differential invariants [Koenderink, J and van Doorn, A, 1987], spin images [Lazebnik, S, et al., 2003], SIFT [Lowe, D. G., 1999], complex filters [Schaffalitzky, F and Zisserman, A, 2002], moment invariants [Van Gool, L, et al., 1996], and cross-correlation for different types of interest regions. 
We also propose an extension of the SIFT descriptor and show that it outperforms the original method. Furthermore, we observe that the ranking of the descriptors is mostly independent of the interest region detector and that the SIFT-based descriptors perform best. Moments and steerable filters show the best performance among the low dimensional descriptors.", "title": "" }, { "docid": "4033a48235fc21987549bdc0ca1a893c", "text": "A novel algorithm for vehicle safety distance between driving cars for a vehicle safety warning system is presented in this paper. The presented system concept includes obstacle distance detection and safety distance calculation. The system detects the distance between the car and the vehicles in front (obstacles) and uses the vehicle speed and other parameters to calculate the braking safety distance of the moving car. The system compares the obstacle distance and braking safety distance to determine whether the moving vehicle's safety distance is sufficient. This paper focuses on the solution algorithm presentation.", "title": "" }, { "docid": "44dbbc80c05cbbd95bacdf2f0a724db2", "text": "Most of the existing methods for the recognition of faces and expressions consider either the expression-invariant face recognition problem or the identity-independent facial expression recognition problem. In this paper, we propose joint face and facial expression recognition using a dictionary-based component separation algorithm (DCS). In this approach, the given expressive face is viewed as a superposition of a neutral face component with a facial expression component which is sparse with respect to the whole image. This assumption leads to a dictionary-based component separation algorithm which benefits from the idea of sparsity and morphological diversity. This entails building data-driven dictionaries for neutral and expressive components. The DCS algorithm then uses these dictionaries to decompose an expressive test face into its constituent components. The sparse codes we obtain as a result of this decomposition are then used for joint face and expression recognition. Experiments on publicly available expression and face data sets show the effectiveness of our method.", "title": "" }, { "docid": "4d403184b8f482449130bbb0ee1fb2cf", "text": "A 2D finite element analysis for the numerical prediction of the capacity curve of unreinforced masonry (URM) walls is conducted. The studied model is based on the fiber finite element approach. The emphasis of this paper will be on the errors obtained from fiber finite element analysis of URM structures under pushover analysis. The masonry material is modeled by different constitutive stress-strain models in compression and tension. OpenSees software is employed to analyze the URM walls. By comparing numerical predictions with experimental data, it is shown that the fiber model employed in OpenSees cannot properly predict the behavior of URM walls with a balance between accuracy and low computational effort. Additionally, the finite element analysis results show appropriate predictions of some experimental data when the real tensile strength of the masonry material is changed. 
Hence, from the viewpoint of this result, it is concluded that the results obtained from the fiber finite element analyses employed in OpenSees are unreliable, because the exact behavior of the masonry material differs from the adopted masonry material models used in the modeling process.", "title": "" }, { "docid": "3f2081f9c1cf10e9ec27b2541f828320", "text": "As the heart of an aircraft, the aircraft engine's condition directly affects the safety, reliability, and operation of the aircraft. Prognostics and health management for aircraft engines can provide advance warning of failure and estimate the remaining useful life. However, aircraft engine systems are complex, with both intangible and uncertain factors; it is difficult to model the complex degradation process, and no single prognostic approach can effectively solve this critical and complicated problem. Thus, fusion prognostics is conducted to obtain more accurate prognostics results. In this paper, a prognostics and health management-oriented integrated fusion prognostic framework is developed to improve the system state forecasting accuracy. This framework strategically fuses the monitoring sensor data and integrates the strengths of the data-driven prognostics approach and the experience-based approach while reducing their respective limitations. As an application example, this developed fusion prognostics framework is employed to predict the remaining useful life of an aircraft gas turbine engine based on sensor data. The results demonstrate that the proposed fusion prognostics framework is an effective prognostics tool, which can provide a more accurate and robust remaining useful life estimation than any single prognostics method.", "title": "" }, { "docid": "572ae23dd73dfb0a7cbc04d05772528f", "text": "Machine learning models with very low test error have been shown to be consistently vulnerable to small, adversarially chosen perturbations of the input. We hypothesize that this counterintuitive behavior is a result of the high-dimensional geometry of the data manifold, and explore this hypothesis on a simple high-dimensional dataset. For this dataset we show a fundamental bound relating the classification error rate to the average distance to the nearest misclassification, which is independent of the model. We train different neural network architectures on this dataset and show their error sets approach this theoretical bound. As a result of the theory, the vulnerability of machine learning models to small adversarial perturbations is a logical consequence of the amount of test error observed. We hope that our theoretical analysis of this foundational synthetic case will point a way forward to explore how the geometry of complex real-world data sets leads to adversarial examples.", "title": "" }, { "docid": "f9d4b66f395ec6660da8cb22b96c436c", "text": "The purpose of the study was to measure objectively the home use of the reciprocating gait orthosis (RGO) and the electrically augmented (hybrid) RGO. It was hypothesised that RGO use would increase following provision of functional electrical stimulation (FES). Five adult subjects participated in the study with spinal cord lesions ranging from C2 (incomplete) to T6. Selection criteria included active RGO use and suitability for electrical stimulation. Home RGO use was measured for up to 18 months by determining the mean number of steps taken per week. During this time patients were supplied with the hybrid system. 
Three alternatives for the measurement of steps taken were investigated: a commercial digital pedometer, a magnetically actuated counter and a heel contact switch linked to an electronic counter. The latter was found to be the most reliable system and was used for all measurements. Additional information on RGO use was acquired using three patient diaries administered throughout the study and before and after the provision of the hybrid system. Testing of the original hypothesis was complicated by problems in finding a reliable measurement tool and difficulties with data collection. However, the results showed that overall use of the RGO, whether with or without stimulation, is low. Statistical analysis of the step counter results was not realistic. No statistically significant change in RGO use was found between the patient diaries. The study suggests that the addition of electrical stimulation does not increase RGO use. The study highlights the problem of objectively measuring orthotic use in the home.", "title": "" }, { "docid": "ec5ebfbe28daebaaac23fbf031b75ab3", "text": "Theoretical models predict that overconfident investors trade excessively. We test this prediction by partitioning investors on gender. Psychological research demonstrates that, in areas such as finance, men are more overconfident than women. Thus, theory predicts that men will trade more excessively than women. Using account data for over 35,000 households from a large discount brokerage, we analyze the common stock investments of men and women from February 1991 through January 1997. We document that men trade 45 percent more than women. Trading reduces men’s net returns by 2.65 percentage points a year as opposed to 1.72 percentage points for women.", "title": "" }, { "docid": "699c6a7b4f938d6a45d65878f08335e4", "text": "Fuzzing is a popular dynamic program analysis technique used to find vulnerabilities in complex software. Fuzzing involves presenting a target program with crafted malicious input designed to cause crashes, buffer overflows, memory errors, and exceptions. Crafting malicious inputs in an efficient manner is a difficult open problem and often the best approach to generating such inputs is through applying uniform random mutations to pre-existing valid inputs (seed files). We present a learning technique that uses neural networks to learn patterns in the input files from past fuzzing explorations to guide future fuzzing explorations. In particular, the neural models learn a function to predict good (and bad) locations in input files to perform fuzzing mutations based on the past mutations and corresponding code coverage information. We implement several neural models including LSTMs and sequence-to-sequence models that can encode variable length input files. We incorporate our models in the state-of-the-art AFL (American Fuzzy Lop) fuzzer and show significant improvements in terms of code coverage, unique code paths, and crashes for various input formats including ELF, PNG, PDF, and XML.", "title": "" }, { "docid": "7c2cb105e5fad90c90aea0e59aae5082", "text": "Life often presents us with situations in which it is important to assess the “true” qualities of a person or object, but in which some factor(s) might have affected (or might yet affect) our initial perceptions in an undesired way. 
For example, in the Reginald Denny case following the 1992 Los Angeles riots, jurors were asked to determine the guilt or innocence of two African-American defendants who were charged with violently assaulting a Caucasian truck driver. Some of the jurors in this case might have been likely to realize that in their culture many of the popular media portrayals of African-Americans are violent in nature. Yet, these jurors ideally would not want those portrayals to influence their perceptions of the particular defendants in the case. In fact, the justice system is based on the assumption that such portrayals will not influence jury verdicts. In our work on bias correction, we have been struck by the variety of potentially biasing factors that can be identified-including situational influences such as media, social norms, and general culture, and personal influences such as transient mood states, motives (e.g., to manage impressions or agree with liked others), and salient beliefs-and we have been impressed by the apparent ubiquity of correction phenomena (which appear to span many areas of psychological inquiry). Yet, systematic investigations of bias correction are in their early stages. Although various researchers have discussed the notion of effortful cognitive processes overcoming initial (sometimes “automatic”) biases in a variety of settings (e.g., Brewer, 1988; Chaiken, Liberman, & Eagly, 1989; Devine, 1989; Kruglanski & Freund, 1983; Neuberg & Fiske, 1987; Petty & Cacioppo, 1986), little attention has been given, until recently, to the specific processes by which biases are overcome when effort is targeted toward “correction of bias.” That is, when", "title": "" } ]
scidocsrr
1bb30aafa0064f1e7701cab0e6b4d216
A new approach to wafer sawing: stealth laser dicing technology
[ { "docid": "ef706ea7a6dcd5b71602ea4c28eb9bd3", "text": "\"Stealth Dicing (SD) \" was developed to solve such inherent problems of dicing process as debris contaminants and unnecessary thermal damage on work wafer. In SD, laser beam power of transmissible wavelength is absorbed only around focal point in the wafer by utilizing temperature dependence of absorption coefficient of the wafer. And these absorbed power forms modified layer in the wafer, which functions as the origin of separation in followed separation process. Since only the limited interior region of a wafer is processed by laser beam irradiation, damages and debris contaminants can be avoided in SD. Besides characteristics of devices will not be affected. Completely dry process of SD is another big advantage over other dicing methods.", "title": "" }, { "docid": "b7617b5dd2a6f392f282f6a34f5b6751", "text": "In the semiconductor market, the trend of packaging for die stacking technology moves to high density with thinner chips and higher capacity of memory devices. Moreover, the wafer sawing process is becoming more important for thin wafer, because its process speed tends to affect sawn quality and yield. ULK (Ultra low-k) device could require laser grooving application to reduce the stress during wafer sawing. Furthermore under 75um-thick thin low-k wafer is not easy to use the laser grooving application. So, UV laser dicing technology that is very useful tool for Si wafer was selected as full cut application, which has been being used on low-k wafer as laser grooving method.", "title": "" } ]
[ { "docid": "c23a86bc6d8011dab71ac5e1e2051c3b", "text": "The most widely used machine learning frameworks require users to carefully tune their memory usage so that the deep neural network (DNN) fits into the DRAM capacity of a GPU. This restriction hampers a researcher’s flexibility to study different machine learning algorithms, forcing them to either use a less desirable network architecture or parallelize the processing across multiple GPUs. We propose a runtime memory manager that virtualizes the memory usage of DNNs such that both GPU and CPU memory can simultaneously be utilized for training larger DNNs. Our virtualized DNN (vDNN) reduces the average memory usage of AlexNet by 61% and OverFeat by 83%, a significant reduction in memory requirements of DNNs. Similar experiments on VGG-16, one of the deepest and memory hungry DNNs to date, demonstrate the memory-efficiency of our proposal. vDNN enables VGG-16 with batch size 256 (requiring 28 GB of memory) to be trained on a single NVIDIA K40 GPU card containing 12 GB of memory, with 22% performance loss compared to a hypothetical GPU with enough memory to hold the entire DNN.", "title": "" }, { "docid": "a0ca7d86ae79c263644c8cd5ae4c0aed", "text": "Research in texture recognition often concentrates on the problem of material recognition in uncluttered conditions, an assumption rarely met by applications. In this work we conduct a first study of material and describable texture attributes recognition in clutter, using a new dataset derived from the OpenSurface texture repository. Motivated by the challenge posed by this problem, we propose a new texture descriptor, FV-CNN, obtained by Fisher Vector pooling of a Convolutional Neural Network (CNN) filter bank. FV-CNN substantially improves the state-of-the-art in texture, material and scene recognition. Our approach achieves 79.8% accuracy on Flickr material dataset and 81% accuracy on MIT indoor scenes, providing absolute gains of more than 10% over existing approaches. FV-CNN easily transfers across domains without requiring feature adaptation as for methods that build on the fully-connected layers of CNNs. Furthermore, FV-CNN can seamlessly incorporate multi-scale information and describe regions of arbitrary shapes and sizes. Our approach is particularly suited at localizing “stuff” categories and obtains state-of-the-art results on MSRC segmentation dataset, as well as promising results on recognizing materials and surface attributes in clutter on the OpenSurfaces dataset.", "title": "" }, { "docid": "cb1bfa58eb89539663be0f2b4ea8e64d", "text": "Hierarchical clustering is a recursive partitioning of a dataset into clusters at an increasingly finer granularity. Motivated by the fact that most work on hierarchical clustering was based on providing algorithms, rather than optimizing a specific objective, Dasgupta framed similarity-based hierarchical clustering as a combinatorial optimization problem, where a ‘good’ hierarchical clustering is one that minimizes a particular cost function [21]. He showed that this cost function has certain desirable properties: in order to achieve optimal cost, disconnected components (namely, dissimilar elements) must be separated at higher levels of the hierarchy and when the similarity between data elements is identical, all clusterings achieve the same cost. We take an axiomatic approach to defining ‘good’ objective functions for both similarity and dissimilarity-based hierarchical clustering. 
We characterize a set of admissible objective functions having the property that when the input admits a ‘natural’ ground-truth hierarchical clustering, the ground-truth clustering has an optimal value. We show that this set includes the objective function introduced by Dasgupta. Equipped with a suitable objective function, we analyze the performance of practical algorithms, as well as develop better and faster algorithms for hierarchical clustering. We also initiate a beyond worst-case analysis of the complexity of the problem, and design algorithms for this scenario.", "title": "" }, { "docid": "662ec285031306816814378e6e192782", "text": "One task of heterogeneous face recognition is to match a near infrared (NIR) face image to a visible light (VIS) image. In practice, there are often a few pairwise NIR-VIS face images but it is easy to collect lots of VIS face images. Therefore, how to use these unpaired VIS images to improve the NIR-VIS recognition accuracy is an ongoing issue. This paper presents a deep TransfeR NIR-VIS heterogeneous facE recognition neTwork (TRIVET) for NIR-VIS face recognition. First, to utilize large numbers of unpaired VIS face images, we employ the deep convolutional neural network (CNN) with ordinal measures to learn discriminative models. The ordinal activation function (Max-Feature-Map) is used to select discriminative features and make the models robust and lighten. Second, we transfer these models to NIR-VIS domain by fine-tuning with two types of NIR-VIS triplet loss. The triplet loss not only reduces intra-class NIR-VIS variations but also augments the number of positive training sample pairs. It makes fine-tuning deep models on a small dataset possible. The proposed method achieves state-of-the-art recognition performance on the most challenging CASIA NIR-VIS 2.0 Face Database. It achieves a new record on rank-1 accuracy of 95.74% and verification rate of 91.03% at FAR=0.001. It cuts the error rate in comparison with the best accuracy [27] by 69%.", "title": "" }, { "docid": "4bc1a78a3c9749460da218fd9d314e56", "text": "Fast and accurate side-chain conformation prediction is important for homology modeling, ab initio protein structure prediction, and protein design applications. Many methods have been presented, although only a few computer programs are publicly available. The SCWRL program is one such method and is widely used because of its speed, accuracy, and ease of use. A new algorithm for SCWRL is presented that uses results from graph theory to solve the combinatorial problem encountered in the side-chain prediction problem. In this method, side chains are represented as vertices in an undirected graph. Any two residues that have rotamers with nonzero interaction energies are considered to have an edge in the graph. The resulting graph can be partitioned into connected subgraphs with no edges between them. These subgraphs can in turn be broken into biconnected components, which are graphs that cannot be disconnected by removal of a single vertex. The combinatorial problem is reduced to finding the minimum energy of these small biconnected components and combining the results to identify the global minimum energy conformation. This algorithm is able to complete predictions on a set of 180 proteins with 34342 side chains in <7 min of computer time. The total chi(1) and chi(1 + 2) dihedral angle accuracies are 82.6% and 73.7% using a simple energy function based on the backbone-dependent rotamer library and a linear repulsive steric energy. 
The new algorithm will allow for use of SCWRL in more demanding applications such as sequence design and ab initio structure prediction, as well as the addition of a more complex energy function and conformational flexibility, leading to increased accuracy.", "title": "" }, { "docid": "98788b45932c8564d29615f49407d179", "text": "BACKGROUND\nAbnormal forms of grief, currently referred to as complicated grief or prolonged grief disorder, have been discussed extensively in recent years. While the diagnostic criteria are still debated, there is no doubt that prolonged grief is disabling and may require treatment. To date, few interventions have demonstrated efficacy.\n\n\nMETHODS\nWe investigated whether outpatients suffering from prolonged grief disorder (PGD) benefit from a newly developed integrative cognitive behavioural therapy for prolonged grief (PG-CBT). A total of 51 patients were randomized into two groups, stratified by the type of death and their relationship to the deceased; 24 patients composed the treatment group and 27 patients composed the wait list control group (WG). Treatment consisted of 20-25 sessions. Main outcome was change in grief severity; secondary outcomes were reductions in general psychological distress and in comorbidity.\n\n\nRESULTS\nPatients on average had 2.5 comorbid diagnoses in addition to PGD. Between-group effect sizes were large for the improvement of grief symptoms in treatment completers (Cohen's d=1.61) and in the intent-to-treat analysis (d=1.32). Comorbid depressive symptoms also improved in PG-CBT compared to WG. The completion rate was 79% in PG-CBT and 89% in WG.\n\n\nLIMITATIONS\nThe major limitations of this study were a small sample size and that PG-CBT took longer than the waiting time.\n\n\nCONCLUSIONS\nPG-CBT was found to be effective with an acceptable dropout rate. Given the number of bereaved people who suffer from PGD, the results are of high clinical relevance.", "title": "" }, { "docid": "58d8e3bd39fa470d1dfa321aeba53106", "text": "There are over 1.2 million Australians registered as having vision impairment. In most cases, vision impairment severely affects the mobility and orientation of the person, resulting in loss of independence and feelings of isolation. GPS technology and its applications have now become omnipresent and are used daily to improve and facilitate the lives of many. Although a number of products specifically designed for the Blind and Vision Impaired (BVI) and relying on GPS technology have been launched, this domain is still a niche and ongoing R&D is needed to bring all the benefits of GPS in terms of information and mobility to the BVI. The limitations of GPS indoors and in urban canyons have led to the development of new systems and signals that bridge the gap and provide positioning in those environments. Although still in their infancy, there is no doubt indoor positioning technologies will one day become as pervasive as GPS. It is therefore important to design those technologies with the BVI in mind, to make them accessible from scratch. This paper will present an indoor positioning system that has been designed in that way, examining the requirements of the BVI in terms of accuracy, reliability and interface design. The system runs locally on a mid-range smartphone and relies at its core on a Kalman filter that fuses the information of all the sensors available on the phone (Wi-Fi chipset, accelerometers and magnetic field sensor). 
Each part of the system is tested separately as well as the final solution quality.", "title": "" }, { "docid": "9eaf39d4b612c3bd272498eb8a91effc", "text": "The relationship between the different approaches to quality in ISO standards is reviewed, contrasting the manufacturing approach to quality in ISO 9000 (quality is conformance to requirements) with the product orientation of ISO 8402 (quality is the presence of specified features) and the goal orientation of quality in use in ISO 14598-1 (quality is meeting user needs). It is shown how ISO 9241-11 enables quality in use to be measured, and ISO 13407 defines the activities necessary in the development lifecycle for achieving quality in use. APPROACHES TO QUALITY Although the term quality seems self-explanatory in everyday usage, in practice there are many different views of what it means and how it should be achieved as part of a software production process. ISO DEFINITIONS OF QUALITY ISO 9000 is concerned with quality assurance to provide confidence that a product will satisfy given requirements. Interpreted literally, this puts quality in the hands of the person producing the requirements specification a product may be deemed to have quality even if the requirements specification is inappropriate. This is one of the interpretations of quality reviewed by Garvin (1984). He describes it as Manufacturing quality: a product which conforms to specified requirements. A different emphasis is given in ISO 8402 which defines quality as the totality of characteristics of an entity that bear on its ability to satisfy stated and implied needs. This is an example of what Garvin calls Product quality: an inherent characteristic of the product determined by the presence or absence of measurable product attributes. Many organisations would like to be able to identify those attributes which can be designed into a product or evaluated to ensure quality. ISO 9126 (1992) takes this approach, and categorises the attributes of software quality as: functionality, efficiency, usability, reliability, maintainability and portability. To the extent that user needs are well-defined and common to the intended users this implies that quality is an inherent attribute of the product. However, if different groups of users have different needs, then they may require different characteristics for a product to have quality for their purposes. Assessment of quality thus becomes dependent on the perception of the user. USER PERCEIVED QUALITY AND QUALITY IN USE Garvin defines User perceived quality as the combination of product attributes which provide the greatest satisfaction to a specified user. Most approaches to quality do not deal explicitly with userperceived quality. User-perceived quality is regarded as an intrinsically inaccurate judgement of product quality. For instance Garvin, 1984, observes that \"Perceptions of quality can be as subjective as assessments of aesthetics\". However, there is a more fundamental reason for being concerned with user-perceived quality. Products can only have quality in relation to their intended purpose. For instance, the quality attributes required of an office carpet may be very different from those required of a bedroom carpet. For conventional products this is assumed to be selfevident. For general-purpose products it creates a problem. A text editor could be used by programmers for producing code, or by secretaries for producing letters. Some of the quality attributes required will be the same, but others will be different. 
Even for a word processor, the functionality, usability and efficiency attributes required by a trained user may be very different from those required by an occasional user. Reconciling work on usability with traditional approaches to software quality has led to another broader and potentially important view of quality which has been outside the scope of most existing quality systems. This embraces userperceived quality by relating quality to the needs of the user of an interactive product. ISO 14598-1 defines External quality as the extent to which a product satisfies stated and implied needs when used under specified conditions. This moves the focus of quality from the product in isolation to the satisfaction of the needs of particular users in particular situations. The purpose of a product is to help users achieve particular goals, which leads to the definition of Quality in use in ISO DIS 14598-1 as the effectiveness, efficiency and satisfaction with which specified users can achieve specified goals in specified environments. A product meets the requirements of the user if it is effective (accurate and complete), efficient in use of time and resources, and satisfying, regardless of the specific attributes it possesses. Specifying requirements in terms of performance has many benefits. This is recognised in the rules for drafting ISO standards (ISO, 1992) which suggest that to provide design flexibility, standards should specify the performance required of a product rather than the technical attributes needed to achieve the performance. Quality in use is a means of applying this principle to the performance which a product enables a human to achieve. An example is the ISO standard for VDT display screens (ISO 9241-3). The purpose of the standard is to ensure that the screen has the technical attributes required to achieve quality in use. The current version of the standard is specified in terms of the technical attributes of a traditional CRT. It is intended to extend the standard to permit alternative new technology screens to conform if it can be demonstrated that users are as effective, efficient and satisfied with the new screen as with an existing screen which meets the technical specifications. SOFTWARE QUALITY IN USE: ISO 14598-1 The purpose of designing an interactive system is to meet the needs of users: to provide quality in use (see Figure 1, from ISO/IEC 14598-1). The internal software attributes will determine the quality of a software product in use in a particular context. Software quality attributes are the cause, quality in use the effect. Quality in use is (or at least should be) the objective, software product quality is the means of achieving it. system behaviour external quality requirements External quality internal quality requirements Internal quality software attributes Specification Design and development Needs Quality in use Operation", "title": "" }, { "docid": "b76f10452e4a4b0d7408e6350b263022", "text": "In this paper, a Y-Δ hybrid connection for a high-voltage induction motor is described. Low winding harmonic content is achieved by careful consideration of the interaction between the Y- and Δ-connected three-phase winding sets so that the magnetomotive force (MMF) in the air gap is close to sinusoid. Essentially, the two winding sets operate in a six-phase mode. This paper goes on to verify that the fundamental distribution coefficient for the stator MMF is enhanced compared to a standard three-phase winding set. 
The design method for converting a conventional double-layer lap winding in a high-voltage induction motor into a Y-Δ hybrid lap winding is described using standard winding theory as often applied to small- and medium-sized motors. The main parameters addressed when designing the winding are the conductor wire gauge, coil turns, and parallel winding branches in the Y and Δ connections. A winding design scheme for a 1250-kW 6-kV induction motor is put forward and experimentally validated; the results show that the efficiency can be raised effectively without increasing the cost.", "title": "" }, { "docid": "8387c06436e850b4fb00c6b5e0dcf19f", "text": "Since the beginning of the epidemic, human immunodeficiency virus (HIV) has infected around 70 million people worldwide, most of whom reside is sub-Saharan Africa. There have been very promising developments in the treatment of HIV with anti-retroviral drug cocktails. However, drug resistance to anti-HIV drugs is emerging, and many people infected with HIV have adverse reactions or do not have ready access to currently available HIV chemotherapies. Thus, there is a need to discover new anti-HIV agents to supplement our current arsenal of anti-HIV drugs and to provide therapeutic options for populations with limited resources or access to currently efficacious chemotherapies. Plant-derived natural products continue to serve as a reservoir for the discovery of new medicines, including anti-HIV agents. This review presents a survey of plants that have shown anti-HIV activity, both in vitro and in vivo.", "title": "" }, { "docid": "f8d01364ff29ad18480dfe5d164bbebf", "text": "With companies such as Netflix and YouTube accounting for more than 50% of the peak download traffic on North American fixed networks in 2015, video streaming represents a significant source of Internet traffic. Multimedia delivery over the Internet has evolved rapidly over the past few years. The last decade has seen video streaming transitioning from User Datagram Protocol to Transmission Control Protocol-based technologies. Dynamic adaptive streaming over HTTP (DASH) has recently emerged as a standard for Internet video streaming. A range of rate adaptation mechanisms are proposed for DASH systems in order to deliver video quality that matches the throughput of dynamic network conditions for a richer user experience. This survey paper looks at emerging research into the application of client-side, server-side, and in-network rate adaptation techniques to support DASH-based content delivery. We provide context and motivation for the application of these techniques and review significant works in the literature from the past decade. These works are categorized according to the feedback signals used and the end-node that performs or assists with the adaptation. We also provide a review of several notable video traffic measurement and characterization studies and outline open research questions in the field.", "title": "" }, { "docid": "85bc241c03d417099aa155766e6a1421", "text": "Passwords continue to prevail on the web as the primary method for user authentication despite their well-known security and usability drawbacks. Password managers offer some improvement without requiring server-side changes. In this paper, we evaluate the security of dual-possession authentication, an authentication approach offering encrypted storage of passwords and theft-resistance without the use of a master password. 
We further introduce Tapas, a concrete implementation of dual-possession authentication leveraging a desktop computer and a smartphone. Tapas requires no server-side changes to websites, no master password, and protects all the stored passwords in the event either the primary or secondary device (e.g., computer or phone) is stolen. To evaluate the viability of Tapas as an alternative to traditional password managers, we perform a 30 participant user study comparing Tapas to two configurations of Firefox's built-in password manager. We found users significantly preferred Tapas. We then improve Tapas by incorporating feedback from this study, and reevaluate it with an additional 10 participants.", "title": "" }, { "docid": "c7f0856c282d1039e44ba6ef50948d32", "text": "This paper presents the analysis and operation of a three-phase pulsewidth modulation rectifier system formed by the star-connection of three single-phase boost rectifier modules (Y-rectifier) without a mains neutral point connection. The current forming operation of the Y-rectifier is analyzed and it is shown that the phase current has the same high quality and low ripple as the Vienna rectifier. The isolated star point of Y-rectifier results in a mutual coupling of the individual phase module outputs and has to be considered for control of the module dc link voltages. An analytical expression for the coupling coefficients of the Y-rectifier phase modules is derived. Based on this expression, a control concept with reduced calculation effort is designed and it provides symmetric loading of the phase modules and solves the balancing problem of the dc link voltages. The analysis also provides insight that enables the derivation of a control concept for two phase operation, such as in the case of a mains phase failure. The theoretical and simulated results are proved by experimental analysis on a fully digitally controlled, 5.4-kW prototype.", "title": "" }, { "docid": "1053653b3584180dd6f97866c13ce40a", "text": "• • The order of authorship on this paper is random and contributions were equal. We would like to thank Ron Burt, Jim March and Mike Tushman for many helpful suggestions. Olav Sorenson provided particularly extensive comments on this paper. We would like to acknowledge the financial support of the University of Chicago, Graduate School of Business and a grant from the Kauffman Center for Entrepreneurial Leadership. Clarifying the relationship between organizational aging and innovation processes is an important step in understanding the dynamics of high-technology industries, as well as for resolving debates in organizational theory about the effects of aging on organizational functioning. We argue that aging has two seemingly contradictory consequences for organizational innovation. First, we believe that aging is associated with increases in firms' rates of innovation. Simultaneously, however, we argue that the difficulties of keeping pace with incessant external developments causes firms' innovative outputs to become obsolete relative to the most current environmental demands. These seemingly contradictory outcomes are intimately related and reflect inherent trade-offs in organizational learning and innovation processes. Multiple longitudinal analyses of the relationship between firm age and patenting behavior in the semiconductor and biotechnology industries lend support to these arguments. 
Introduction In an increasingly knowledge-based economy, pinpointing the factors that shape the ability of organizations to produce influential ideas and innovations is a central issue for organizational studies. Among all organizational outputs, innovation is fundamental not only because of its direct impact on the viability of firms, but also because of its profound effects on the paths of social and economic change. In this paper, we focus on a ubiquitous organizational process, aging, and examine its multifaceted influence on organizational innovation. In so doing, we address an important unresolved issue in organizational theory, namely the nature of the relationship between aging and organizational behavior (Hannan 1998). Evidence clarifying the relationship between organizational aging and innovation promises to improve our understanding of the organizational dynamics of high-technology markets, and in particular the dynamics of technological leadership. For instance, consider the possibility that aging has uniformly positive consequences for innovative activity: on the foundation of accumulated experience, older firms innovate more frequently, and their innovations have greater significance than those of younger enterprises. In this scenario, technological change paradoxically may be associated with organizational stability, as incumbent organizations come to dominate the technological frontier and their preeminence only increases with their tenure. Now consider the …", "title": "" }, { "docid": "e786d22cd1c30014d1a1dcdc655a56fb", "text": "Chemical fingerprints are used to represent chemical molecules by recording the presence or absence, or by counting the number of occurrences, of particular features or substructures, such as labeled paths in the 2D graph of bonds, of the corresponding molecule. These fingerprint vectors are used to search large databases of small molecules, currently containing millions of entries, using various similarity measures, such as the Tanimoto or Tversky's measures and their variants. Here, we derive simple bounds on these similarity measures and show how these bounds can be used to considerably reduce the subset of molecules that need to be searched. We consider both the case of single-molecule and multiple-molecule queries, as well as queries based on fixed similarity thresholds or aimed at retrieving the top K hits. We study the speedup as a function of query size and distribution, fingerprint length, similarity threshold, and database size |D| and derive analytical formulas that are in excellent agreement with empirical values. The theoretical considerations and experiments show that this approach can provide linear speedups of one or more orders of magnitude in the case of searches with a fixed threshold, and achieve sublinear speedups in the range of O(|D|^0.6) for the top K hits in current large databases. This pruning approach yields subsecond search times across the 5 million compounds in the ChemDB database, without any loss of accuracy.", "title": "" }, { "docid": "dd271275654da4bae73ee41d76fe165c", "text": "BACKGROUND\nThe recovery period for patients who have been in an intensive care unit is often prolonged and suboptimal. Anxiety, depression and post-traumatic stress disorder are common psychological problems. Intensive care staff offer various types of intensive aftercare. 
Intensive care follow-up aftercare services are not standard clinical practice in Norway.\n\n\nOBJECTIVE\nThe overall aim of this study is to investigate how adult patients experience their intensive care stay, their recovery period, and the usefulness of an information pamphlet.\n\n\nMETHOD\nA qualitative, exploratory study with semi-structured interviews of 29 survivors after discharge from intensive care and three months after discharge from the hospital.\n\n\nRESULTS\nTwo main themes emerged: \"Being on an unreal, strange journey\" and \"Balancing between who I was and who I am\". Patients' recollection of their intensive care stay differed greatly. Continuity of care and the nurse's ability to see and value individual differences was highlighted. The information pamphlet helped intensive care survivors understand that what they went through was normal.\n\n\nCONCLUSIONS\nContinuity of care and an individual approach are crucial to meet patients' uniqueness and different coping mechanisms. Intensive care survivors and their families must be included when information material and rehabilitation programs are designed and evaluated.", "title": "" }, { "docid": "ec0733962301d6024da773ad9d0f636d", "text": "This paper focuses on the design, fabrication and characterization of unimorph actuators for a microaerial flapping mechanism. PZT-5H and PZN-PT are investigated as piezoelectric layers in the unimorph actuators. Design issues for microaerial flapping actuators are discussed, and criteria for the optimal dimensions of actuators are determined. For low power consumption actuation, a square wave based electronic driving circuit is proposed. Fabricated piezoelectric unimorphs are characterized by an optical measurement system in quasi-static and dynamic mode. Experimental performance of PZT-5H and PZN-PT based unimorphs is compared with desired design specifications. A 1 d.o.f. flapping mechanism with a PZT-5H unimorph is constructed, and 180◦ stroke motion at 95 Hz is achieved. Thus, it is shown that unimorphs could be promising flapping mechanism actuators.", "title": "" }, { "docid": "7239b0f0a1b894c6383c538450c90e8a", "text": "To address the problem of underexposure, underrepresentation, and underproduction of diverse professionals in the field of computing, we target middle school education using an idea that combines computational thinking with dance and movement choreography. This lightning talk delves into a virtual reality education and entertainment application named Virtual Environment Interactions (VEnvI). Our in vivo study examines how VEnvI can be used to teach fundamental computer science concepts such as sequences, loops, variables, conditionals, functions, and parallel programming. We aim to reach younger students through a fun and intuitive interface for choreographing dance movements with a virtual character. Our study contrasts the highly immersive and embodied virtual reality metaphor of using VEnvI with a non-immersive desktop metaphor. Additionally, we examine the effects of user attachment by comparing the learning results gained with customizable virtual characters in contrast with character presets. 
By analyzing qualitative and quantitative user responses measuring cognition, presence, usability, and satisfaction, we hope to find how virtual reality can enhance interest in the field of computer science among middle school students.", "title": "" }, { "docid": "e4761bfc7c9b41881441928883660156", "text": "This paper presents a digital low-dropout regulator (D-LDO) with a proposed transient-response boost technique, which enables the reduction of transient response time, as well as overshoot/undershoot, when the load current is abruptly drawn. The proposed D-LDO detects the deviation of the output voltage by overshoot/undershoot, and increases its loop gain, for the time that the deviation is beyond a limit. Once the output voltage is settled again, the loop gain is returned. With the D-LDO fabricated on an 110-nm CMOS technology, we measured its settling time and peak of undershoot, which were reduced by 60% and 72%, respectively, compared with and without the transient-response boost mode. Using the digital logic gates, the chip occupies a small area of 0.04 mm2, and it achieves a maximum current efficiency of 99.98%, by consuming the quiescent current of 15 μA at 0.7-V input voltage.", "title": "" } ]
scidocsrr
a2fb0018d07bcf972886b10cc66ce964
Recurrent Neural Networks for Customer Purchase Prediction on Twitter
[ { "docid": "e2c6437d257559211d182b5707aca1a4", "text": "In present times, social forums such as Quora and Yahoo! Answers constitute powerful media through which people discuss on a variety of topics and express their intentions and thoughts. Here they often reveal their potential intent to purchase ‘Purchase Intent’ (PI). A purchase intent is defined as a text expression showing a desire to purchase a product or a service in future. Extracting posts having PI from a user’s social posts gives huge opportunities towards web personalization, targeted marketing and improving community observing systems. In this paper, we explore the novel problem of detecting PIs from social posts and classifying them. We find that using linguistic features along with statistical features of PI expressions achieves a significant improvement in PI classification over ‘bag-ofwords’ based features used in many present day socialmedia classification tasks. Our approach takes into consideration the specifics of social posts like limited contextual information, incorrect grammar, language ambiguities, etc. by extracting features at two different levels of text granularity word and phrase based features and grammatical dependency based features. Apart from these, the patterns observed in PI posts help us to identify some specific features.", "title": "" }, { "docid": "cf2fc7338a0a81e4c56440ec7c3c868e", "text": "We describe a new dependency parser for English tweets, TWEEBOPARSER. The parser builds on several contributions: new syntactic annotations for a corpus of tweets (TWEEBANK), with conventions informed by the domain; adaptations to a statistical parsing algorithm; and a new approach to exploiting out-of-domain Penn Treebank data. Our experiments show that the parser achieves over 80% unlabeled attachment accuracy on our new, high-quality test set and measure the benefit of our contributions. Our dataset and parser can be found at http://www.ark.cs.cmu.edu/TweetNLP.", "title": "" }, { "docid": "64330f538b3d8914cbfe37565ab0d648", "text": "The compositionality of meaning extends beyond the single sentence. Just as words combine to form the meaning of sentences, so do sentences combine to form the meaning of paragraphs, dialogues and general discourse. We introduce both a sentence model and a discourse model corresponding to the two levels of compositionality. The sentence model adopts convolution as the central operation for composing semantic vectors and is based on a novel hierarchical convolutional neural network. The discourse model extends the sentence model and is based on a recurrent neural network that is conditioned in a novel way both on the current sentence and on the current speaker. The discourse model is able to capture both the sequentiality of sentences and the interaction between different speakers. Without feature engineering or pretraining and with simple greedy decoding, the discourse model coupled to the sentence model obtains state of the art performance on a dialogue act classification experiment.", "title": "" } ]
[ { "docid": "6d9735b19ab2cb1251bd294045145367", "text": "Waveguide twists are often necessary to provide polarization rotation between waveguide-based components. At terahertz frequencies, it is desirable to use a twist design that is compact in order to reduce loss; however, these designs are difficult if not impossible to realize using standard machining. This paper presents a micromachined compact waveguide twist for terahertz frequencies. The Rud-Kirilenko twist geometry is ideally suited to the micromachining processes developed at the University of Virginia. Measurements of a WR-1.5 micromachined twist exhibit a return loss near 20 dB and a median insertion loss of 0.5 dB from 600 to 750 GHz.", "title": "" }, { "docid": "b3801b9d9548c49c79eacef4c71e84ad", "text": "Identifying that a given binary program implements a specific cryptographic algorithm and finding out more information about the cryptographic code is an important problem. Proprietary programs and especially malicious software (so called malware) often use cryptography and we want to learn more about the context, e.g., which algorithms and keys are used by the program. This helps an analyst to quickly understand what a given binary program does and eases analysis. In this paper, we present several methods to identify cryptographic primitives (e.g., entire algorithms or only keys) within a given binary program in an automated way. We perform fine-grained dynamic binary analysis and use the collected information as input for several heuristics that characterize specific, unique aspects of cryptographic code. Our evaluation shows that these methods improve the state-of-the-art approaches in this area and that we can successfully extract cryptographic keys from a given malware binary.", "title": "" }, { "docid": "a7d3d2f52a45cdb378863d4e8d96bc27", "text": "This paper presents a three-phase single-stage bidirectional isolated matrix based AC-DC converter for energy storage. The matrix (3 × 1) topology directly converts the three-phase line voltages into high-frequency AC voltage which is subsequently, processed using a high-frequency transformer followed by a controlled rectifier. A modified Space Vector Modulation (SVM) based switching scheme is proposed to achieve high input power quality with high power conversion efficiency. Compared to the conventional two stage converter, the proposed converter provides single-stage conversion resulting in higher power conversion efficiency and higher power density. The operating principles of the proposed converter in both AC-DC and DC-AC mode are explained followed by steady state analysis. Simulation results are presented for 230 V, 50 Hz to 48 V isolated bidirectional converter at 2 kW output power to validate the theoretical claims.", "title": "" }, { "docid": "9847936462257d8f0d03473c9a78f27d", "text": "In this paper, a vision-guided autonomous quadrotor in an air-ground multi-robot system has been proposed. This quadrotor is equipped with a monocular camera, IMUs and a flight computer, which enables autonomous flights. Two complementary pose/motion estimation methods, respectively marker-based and optical-flow-based, are developed by considering different altitudes in a flight. To achieve smooth take-off, stable tracking and safe landing with respect to a moving ground robot and desired trajectories, appropriate controllers are designed. Additionally, data synchronization and time delay compensation are applied to improve the system performance. 
Real-time experiments are conducted in both indoor and outdoor environments.", "title": "" }, { "docid": "eb3fad94acaf1f36783fdb22f3932ec7", "text": "This paper presents a new approach to translate between Building Information Modeling (BIM) and Building Energy Modeling (BEM) that uses Modelica, an object-oriented declarative, equation-based simulation environment. The approach (BIM2BEM) has been developed using a data modeling method to enable seamless model translations of building geometry, materials, and topology. Using data modeling, we created a Model View Definition (MVD) consisting of a process model and a class diagram. The process model demonstrates object-mapping between BIM and Modelica-based BEM (ModelicaBEM) and facilitates the definition of required information during model translations. The class diagram represents the information and object relationships to produce a class package intermediate between the BIM and BEM. The implementation of the intermediate class package enables system interface (Revit2Modelica) development for automatic BIM data translation into ModelicaBEM. In order to demonstrate and validate our approach, simulation result comparisons have been conducted via three test cases using (1) the BIM-based Modelica models generated from Revit2Modelica and (2) BEM models manually created using LBNL Modelica Buildings library. Our implementation shows that BIM2BEM (1) enables BIM models to be translated into ModelicaBEM models, (2) enables system interface development based on the MVD for thermal simulation, and (3) facilitates the reuse of original BIM data into building energy simulation without an import/export process.", "title": "" }, { "docid": "bb19e122737f08997585999575d2a394", "text": "In this paper, shadow detection and compensation are treated as image enhancement tasks. The principal components analysis (PCA) and luminance based multi-scale Retinex (LMSR) algorithm are explored to detect and compensate shadow in high resolution satellite image. PCA provides orthogonally channels, thus allow the color to remain stable despite the modification of luminance. Firstly, the PCA transform is used to obtain the luminance channel, which enables us to detect shadow regions using histogram threshold technique. After detection, the LMSR technique is used to enhance the image only in luminance channel to compensate for shadows. Then the enhanced image is obtained by inverse transform of PCA. The final shadow compensation image is obtained by comparison of the original image, the enhanced image and the shadow detection image. 
Experiment results show the effectiveness of the proposed method.", "title": "" }, { "docid": "46938d041228481cf3363f2c6dfcc524", "text": "This paper investigates conditions under which modi cations to the reward function of a Markov decision process preserve the op timal policy It is shown that besides the positive linear transformation familiar from utility theory one can add a reward for tran sitions between states that is expressible as the di erence in value of an arbitrary poten tial function applied to those states Further more this is shown to be a necessary con dition for invariance in the sense that any other transformation may yield suboptimal policies unless further assumptions are made about the underlying MDP These results shed light on the practice of reward shap ing a method used in reinforcement learn ing whereby additional training rewards are used to guide the learning agent In par ticular some well known bugs in reward shaping procedures are shown to arise from non potential based rewards and methods are given for constructing shaping potentials corresponding to distance based and subgoal based heuristics We show that such po tentials can lead to substantial reductions in learning time", "title": "" }, { "docid": "12a214f172562d92c89183379a0c06a3", "text": "Robots that work with people foster social relationships between people and systems. The home is an interesting place to study the adoption and use of these systems. The home provides challenges from both technical and interaction perspectives. In addition, the home is a seat for many specialized human behaviors and needs, and has a long history of what is collected and used to functionally, aesthetically, and symbolically fit the home. To understand the social impact of robotic technologies, this paper presents an ethnographic study of consumer robots in the home. Six families' experience of floor cleaning after receiving a new vacuum (a Roomba robotic vacuum or the Flair, a handheld upright) was studied. While the Flair had little impact, the Roomba changed people, cleaning activities, and other product use. In addition, people described the Roomba in aesthetic and social terms. The results of this study, while initial, generate implications for how robots should be designed for the home.", "title": "" }, { "docid": "f1977e5f8fbc0df4df0ac6bf1715c254", "text": "Instabilities in MOS-based devices with various substrates ranging from Si, SiGe, IIIV to 2D channel materials, can be explained by defect levels in the dielectrics and non-radiative multi-phonon (NMP) barriers. However, recent results obtained on single defects have demonstrated that they can show a highly complex behaviour since they can transform between various states. As a consequence, detailed physical models are complicated and computationally expensive. As will be shown here, as long as only lifetime predictions for an ensemble of defects is needed, considerable simplifications are possible. We present and validate an oxide defect model that captures the essence of full physical models while reducing the complexity substantially. We apply this model to investigate the improvement in positive bias temperature instabilities due to a reliability anneal. 
Furthermore, we corroborate the simulated defect bands with prior defect-centric studies and perform lifetime projections.", "title": "" }, { "docid": "a6a364819f397a8e28ac0b19480253cc", "text": "News agencies and other news providers or consumers are confronted with the task of extracting events from news articles. This is done i) either to monitor and, hence, to be informed about events of specific kinds over time and/or ii) to react to events immediately. In the past, several promising approaches to extracting events from text have been proposed. Besides purely statistically-based approaches there are methods to represent events in a semantically-structured form, such as graphs containing actions (predicates), participants (entities), etc. However, it turns out to be very difficult to automatically determine whether an event is real or not. In this paper, we give an overview of approaches which proposed solutions for this research problem. We show that there is no gold standard dataset where real events are annotated in text documents in a fine-grained, semantically-enriched way. We present a methodology of creating such a dataset with the help of crowdsourcing and present preliminary results.", "title": "" }, { "docid": "ee141b7fd5c372fb65d355fe75ad47af", "text": "As 100-Gb/s coherent systems based on polarization- division multiplexed quadrature phase shift keying (PDM-QPSK), with aggregate wavelength-division multiplexed (WDM) capacities close to 10 Tb/s, are getting widely deployed, the use of high-spectral-efficiency quadrature amplitude modulation (QAM) to increase both per-channel interface rates and aggregate WDM capacities is the next evolutionary step. In this paper we review high-spectral-efficiency optical modulation formats for use in digital coherent systems. We look at fundamental as well as at technological scaling trends and highlight important trade-offs pertaining to the design and performance of coherent higher-order QAM transponders.", "title": "" }, { "docid": "7cdc858ad5837132c80ac278f3760e24", "text": "Gallium Nitride (GaN) based power devices have the potential to achieve higher efficiency and higher switching frequency than those possible with Silicon (Si) power devices. In literature, GaN based converters are claimed to offer higher power density. However, a detailed comparative analysis on the power density of GaN and Si based low power dc-dc flyback converter is not reported. In this paper, comparison of a 100 W, dc-dc flyback converter based on GaN and Si is presented. Both the converters are designed to ensure an efficiency of 80%. Based on this, the switching frequency for both the converters are determined. The analysis shows that the GaN based converter can be operated at approximately ten times the switching frequency of Si-based converter. This leads to a reduction in the area product of the flyback transformer required in GaN based converter. It is found that the volume of the flyback transformer can be reduced by a factor of six for a GaN based converter as compared to a Si based converter. Further, it is observed that the value of output capacitance used in the GaN based converter reduces by a factor of ten as compared to the Si based converter, implying a reduction in the size of the output capacitors. 
Therefore, a significant improvement in the power density of the GaN based converter as compared to the Si based converter is seen.", "title": "" }, { "docid": "145bbea9b4eb7c484c190aed77e2a8b2", "text": "The Rey–Osterrieth Complex Figure Test (ROCF), which was developed by Rey in 1941 and standardized by Osterrieth in 1944, is a widely used neuropsychological test for the evaluation of visuospatial constructional ability and visual memory. Recently, the ROCF has been a useful tool for measuring executive function that is mediated by the prefrontal lobe. The ROCF consists of three test conditions: Copy, Immediate Recall and Delayed Recall. At the first step, subjects are given the ROCF stimulus card, and then asked to draw the same figure. Subsequently, they are instructed to draw what they remembered. Then, after a delay of 30 min, they are required to draw the same figure once again. The anticipated results vary according to the scoring system used, but commonly include scores related to location, accuracy and organization. Each condition of the ROCF takes 10 min to complete and the overall time of completion is about 30 min.", "title": "" }, { "docid": "a2a7b5c0b4e95e0c7bcb42e29fa8db57", "text": "0747-5632/$ see front matter 2012 Elsevier Ltd. All rights reserved. http://dx.doi.org/10.1016/j.chb.2012.11.017 ⇑ Corresponding author. Address: School of Psychology, Australian Catholic University, 1100 Nudgee Rd., Banyo, QLD 4014, Australia. Tel.: +61 7 3623 7346; fax: +61 7 3623 7277. E-mail address: rachel.grieve@acu.edu.au (R. Grieve). Rachel Grieve ⇑, Michaelle Indian, Kate Witteveen, G. Anne Tolan, Jessica Marrington", "title": "" }, { "docid": "38aa324964214620c55eb4edfecf1bd2", "text": "This paper presents ROC curve, lift chart and calibration plot, three well known graphical techniques that are useful for evaluating the quality of classification models used in data mining and machine learning. Each technique, normally used and studied separately, defines its own measure of classification quality and its visualization. Here, we give a brief survey of the methods and establish a common mathematical framework which adds some new aspects, explanations and interrelations between these techniques. We conclude with an empirical evaluation and a few examples on how to use the presented techniques to boost classification accuracy.", "title": "" }, { "docid": "eeb31177629a38882fa3664ad0ddfb48", "text": "Autonomous cars will likely hit the market soon, but trust into such a technology is one of the big discussion points in the public debate. Drivers who have always been in complete control of their car are expected to willingly hand over control and blindly trust a technology that could kill them. We argue that trust in autonomous driving can be increased by means of a driver interface that visualizes the car’s interpretation of the current situation and its corresponding actions. To verify this, we compared different visualizations in a user study, overlaid to a driving scene: (1) a chauffeur avatar, (2) a world in miniature, and (3) a display of the car’s indicators as the baseline. The world in miniature visualization increased trust the most. The human-like chauffeur avatar can also increase trust, however, we did not find a significant difference between the chauffeur and the baseline. ACM Classification", "title": "" }, { "docid": "14024a813302548d0bd695077185de1c", "text": "In this paper, we propose an innovative touch-less palm print recognition system. 
This project is motivated by the public’s demand for non-invasive and hygienic biometric technology. For various reasons, users are concerned about touching the biometric scanners. Therefore, we propose to use a low-resolution web camera to capture the user’s hand at a distance for recognition. The users do not need to touch any device for their palm print to be acquired. A novel hand tracking and palm print region of interest (ROI) extraction technique are used to track and capture the user’s palm in real-time video stream. The discriminative palm print features are extracted based on a new method that applies local binary pattern (LBP) texture descriptor on the palm print directional gradient responses. Experiments show promising result using the proposed method. Performance can be further improved when a modified probabilistic neural network (PNN) is used for feature matching. Verification can be performed in less than one second in the proposed system. 2008 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "1c2285aef1bcd54fb2203ebb7c992647", "text": "OBJECTIVES\nExtracting data from publication reports is a standard process in systematic review (SR) development. However, the data extraction process still relies too much on manual effort which is slow, costly, and subject to human error. In this study, we developed a text summarization system aimed at enhancing productivity and reducing errors in the traditional data extraction process.\n\n\nMETHODS\nWe developed a computer system that used machine learning and natural language processing approaches to automatically generate summaries of full-text scientific publications. The summaries at the sentence and fragment levels were evaluated in finding common clinical SR data elements such as sample size, group size, and PICO values. We compared the computer-generated summaries with human written summaries (title and abstract) in terms of the presence of necessary information for the data extraction as presented in the Cochrane review's study characteristics tables.\n\n\nRESULTS\nAt the sentence level, the computer-generated summaries covered more information than humans do for systematic reviews (recall 91.2% vs. 83.8%, p<0.001). They also had a better density of relevant sentences (precision 59% vs. 39%, p<0.001). At the fragment level, the ensemble approach combining rule-based, concept mapping, and dictionary-based methods performed better than individual methods alone, achieving an 84.7% F-measure.\n\n\nCONCLUSION\nComputer-generated summaries are potential alternative information sources for data extraction in systematic review development. Machine learning and natural language processing are promising approaches to the development of such an extractive summarization system.", "title": "" }, { "docid": "7f897e5994685f0b158da91cef99c855", "text": "Cloud computing and its pay-as-you-go model continue to provide significant cost benefits and a seamless service delivery model for cloud consumers. The evolution of small-scale and large-scale geo-distributed datacenters operated and managed by individual cloud service providers raises new challenges in terms of effective global resource sharing and management of autonomously-controlled individual datacenter resources. 
Earlier solutions for geo-distributed clouds have focused primarily on achieving global efficiency in resource sharing that results in significant inefficiencies in local resource allocation for individual datacenters leading to unfairness in revenue and profit earned. In this paper, we propose a new contracts-based resource sharing model for federated geo-distributed clouds that allows cloud service providers to establish resource sharing contracts with individual datacenters apriori for defined time intervals during a 24 hour time period. Based on the established contracts, individual cloud service providers employ a cost-aware job scheduling and provisioning algorithm that enables tasks to complete and meet their response time requirements. The proposed techniques are evaluated through extensive experiments using realistic workloads and the results demonstrate the effectiveness, scalability and resource sharing efficiency of the proposed model.", "title": "" } ]
scidocsrr
2b3ef3368782f4c4de17ddecc03f3a18
Habits in everyday life: thought, emotion, and action.
[ { "docid": "a25041f4b95b68d2b8b9356d2f383b69", "text": "The authors review evidence that self-control may consume a limited resource. Exerting self-control may consume self-control strength, reducing the amount of strength available for subsequent self-control efforts. Coping with stress, regulating negative affect, and resisting temptations require self-control, and after such self-control efforts, subsequent attempts at self-control are more likely to fail. Continuous self-control efforts, such as vigilance, also degrade over time. These decrements in self-control are probably not due to negative moods or learned helplessness produced by the initial self-control attempt. These decrements appear to be specific to behaviors that involve self-control; behaviors that do not require self-control neither consume nor require self-control strength. It is concluded that the executive component of the self--in particular, inhibition--relies on a limited, consumable resource.", "title": "" } ]
[ { "docid": "2e6c44dd18f44512528752101f2161be", "text": "This paper presents a LVDS (low voltage differential signal) driver, which works at 2Gbps, with a pre-emphasis circuit compensating the attenuation of limited bandwidth of channel. To make the output common-mode (CM) voltage stable over process, temperature, and supply voltage variations, a closed-loop negative feedback circuit is added in this work. The LVDS driver is designed in 0.13um CMOS technology using both thick (3.3V) and thin (1.2V) gate oxide device, simulated with transmission line model and package parasitic model. The simulated results show that this driver can operate up to 2Gbps with random data patterns.", "title": "" }, { "docid": "832e1a93428911406759f696eb9cb101", "text": "Reinforcement learning provides both qualitative and quantitative frameworks for understanding and modeling adaptive decision-making in the face of rewards and punishments. Here we review the latest dispatches from the forefront of this field, and map out some of the territories where lie monsters.", "title": "" }, { "docid": "8994337878d2ac35464cb4af5e32fccc", "text": "We describe an algorithm for approximate inference in graphical models based on Hölder’s inequality that provides upper and lower bounds on common summation problems such as computing the partition function or probability of evidence in a graphical model. Our algorithm unifies and extends several existing approaches, including variable elimination techniques such as minibucket elimination and variational methods such as tree reweighted belief propagation and conditional entropy decomposition. We show that our method inherits benefits from each approach to provide significantly better bounds on sum-product tasks.", "title": "" }, { "docid": "65500c886a91a58ac95365c1e8539902", "text": "This introductory overview tutorial on social network analysis (SNA) demonstrates through theory and practical case studies applications to research, particularly on social media, digital interaction and behavior records. NodeXL provides an entry point for non-programmers to access the concepts and core methods of SNA and allows anyone who can make a pie chart to now build, analyze and visualize complex networks.", "title": "" }, { "docid": "9c2debf407dce58d77910ccdfc55a633", "text": "In cybersecurity competitions, participants either create new or protect preconfigured information systems and then defend these systems against attack in a real-world setting. Institutions should consider important structural and resource-related issues before establishing such a competition. Critical infrastructures increasingly rely on information systems and on the Internet to provide connectivity between systems. Maintaining and protecting these systems requires an education in information warfare that doesn't merely theorize and describe such concepts. A hands-on, active learning experience lets students apply theoretical concepts in a physical environment. Craig Kaucher and John Saunders found that even for management-oriented graduate courses in information assurance, such an experience enhances the students' understanding of theoretical concepts. Cybersecurity exercises aim to provide this experience in a challenging and competitive environment. 
Many educational institutions use and implement these exercises as part of their computer science curriculum, and some are organizing competitions with commercial partners as capstone exercises, ad hoc hack-a-thons, and scenario-driven, multiday, defense-only competitions. Participants have exhibited much enthusiasm for these exercises, from the DEFCON capture-the-flag exercise to the US Military Academy's Cyber Defense Exercise (CDX). In February 2004, the US National Science Foundation sponsored the Cyber Security Exercise Workshop aimed at harnessing this enthusiasm and interest. The educators, students, and government and industry representatives attending the workshop discussed the feasibility and desirability of establishing regular cybersecurity exercises for postsecondary-level students. This article summarizes the workshop report.", "title": "" }, { "docid": "eb18d3bab3346ede781d11433f1267b4", "text": "INTRODUCTION\nIn the developing countries, diabetes mellitus as a chronic diseases, have replaced infectious diseases as the main causes of morbidity and mortality. International Diabetes Federation (IDF) recently estimates 382 million people have diabetes globally and more than 34.6 million people in the Middle East Region and this number will increase to 67.9 million by 2035. The aim of this study was to analyze Iran's research performance on diabetes in national and international context.\n\n\nMETHODS\nThis Scientometric analysis is based on the Iranian publication data in diabetes research retrieved from the Scopus citation database till the end of 2014. The string used to retrieve the data was developed using \"diabetes\" keyword in title, abstract and keywords, and finally Iran in the affiliation field was our main string.\n\n\nRESULTS\nIran's cumulative publication output in diabetes research consisted of 4425 papers from 1968 to 2014, with an average number of 96.2 papers per year and an annual average growth rate of 25.5%. Iran ranked 25th place with 4425 papers among top 25 countries with a global share of 0.72 %. Average of Iran's publication output was 6.19 citations per paper. The average citation per paper for Iranian publications in diabetes research increased from 1.63 during 1968-1999 to 10.42 for 2014.\n\n\nCONCLUSIONS\nAlthough diabetic population of Iran is increasing, number of diabetes research is not remarkable. International Diabetes Federation suggested increased funding for research in diabetes in Iran for cost-effective diabetes prevention and treatment. In addition to universal and comprehensive services for diabetes care and treatment provided by Iranian health care system, Iranian policy makers should invest more on diabetes research.", "title": "" }, { "docid": "102bec350390b46415ae07128cb4e77f", "text": "We capitalize on large amounts of unlabeled video in order to learn a model of scene dynamics for both video recognition tasks (e.g. action classification) and video generation tasks (e.g. future prediction). We propose a generative adversarial network for video with a spatio-temporal convolutional architecture that untangles the scene’s foreground from the background. Experiments suggest this model can generate tiny videos up to a second at full frame rate better than simple baselines, and we show its utility at predicting plausible futures of static images. 
Moreover, experiments and visualizations show the model internally learns useful features for recognizing actions with minimal supervision, suggesting scene dynamics are a promising signal for representation learning. We believe generative video models can impact many applications in video understanding and simulation.", "title": "" }, { "docid": "8e18fa3850177d016a85249555621723", "text": "Obstacle fusion algorithms usually perform obstacle association and gating in order to improve the obstacle position if it was detected by multiple sensors. However, this strategy is not common in multi sensor occupancy grid fusion. Thus, the quality of the fused grid, in terms of obstacle position accuracy, largely depends on the sensor with the lowest accuracy. In this paper an efficient method to associate obstacles across sensor grids is proposed. Imprecise sensors are discounted locally in cells where a more accurate sensor, that detected the same obstacle, derived free space. Furthermore, fixed discount factors to optimize false negative and false positive rates are used. Because of its generic formulation with the covariance of each sensor grid, the method is scalable to any sensor setup. The quantitative evaluation with a highly precise navigation map shows an increased obstacle position accuracy compared to standard evidential occupancy grid fusion.", "title": "" }, { "docid": "f68f82e0d7f165557433580ad1e3e066", "text": "Four experiments demonstrate effects of prosodic structure on speech production latencies. Experiments 1 to 3 exploit a modified version of the Sternberg et al. (1978, 1980) prepared speech production paradigm to look for evidence of the generation of prosodic structure during the final stages of sentence production. Experiment 1 provides evidence that prepared sentence production latency is a function of the number of phonological words that a sentence comprises when syntactic structure, number of lexical items, and number of syllables are held constant. Experiment 2 demonstrated that production latencies in Experiment 1 were indeed determined by prosodic structure rather than the number of content words that a sentence comprised. The phonological word effect was replicated in Experiment 3 using utterances with a different intonation pattern and phrasal structure. Finally, in Experiment 4, an on-line version of the sentence production task provides evidence for the phonological word as the preferred unit of articulation during the on-line production of continuous speech. Our findings are consistent with the hypothesis that the phonological word is a unit of processing during the phonological encoding of connected speech. q 1997 Academic Press", "title": "" }, { "docid": "82180726cc1aaaada69f3b6cb0e89acc", "text": "The wheelchair is the major means of transport for physically disabled people. However, it cannot overcome architectural barriers such as curbs and stairs. In this paper, the authors proposed a method to avoid falling down of a wheeled inverted pendulum type robotic wheelchair for climbing stairs. The problem of this system is that the feedback gain of the wheels cannot be set high due to modeling errors and gear backlash, which results in the movement of wheels. Therefore, the wheels slide down the stairs or collide with the side of the stairs, and finally the wheelchair falls down. 
To avoid falling down, the authors proposed a slider control strategy based on skyhook model in order to decrease the movement of wheels, and a rotary link control strategy based on the staircase dimensions in order to avoid collision or slide down. The effectiveness of the proposed fall avoidance control strategy was validated by ODE simulations and the prototype wheelchair. Keywords—EPW, fall avoidance control, skyhook, wheeled inverted pendulum.", "title": "" }, { "docid": "f84f7ad81967a6704490243b2b1fbbe4", "text": "A fundamental question in frontal lobe function is how motivational and emotional parameters of behavior apply to executive processes. Recent advances in mood and personality research and the technology and methodology of brain research provide opportunities to address this question empirically. Using event-related-potentials to track error monitoring in real time, the authors demonstrated that variability in the amplitude of the error-related negativity (ERN) is dependent on mood and personality variables. College students who are high on negative affect (NA) and negative emotionality (NEM) displayed larger ERN amplitudes early in the experiment than participants who are low on these dimensions. As the high-NA and -NEM participants disengaged from the task, the amplitude of the ERN decreased. These results reveal that affective distress and associated behavioral patterns are closely related with frontal lobe executive functions.", "title": "" }, { "docid": "b31aaa6805524495f57a2f54d0dd86f1", "text": "CLINICAL HISTORY A 54-year-old white female was seen with a 10-year history of episodes of a burning sensation of the left ear. The episodes are preceded by nausea and a hot feeling for about 15 seconds and then the left ear becomes visibly red for an average of about 1 hour, with a range from about 30 minutes to 2 hours. About once every 2 years, she would have a flurry of episodes occurring over about a 1-month period during which she would average about five episodes with a range of 1 to 6. There was also an 18-year history of migraine without aura occurring about once a year. At the age of 36 years, she developed left-sided pulsatile tinnitus. A cerebral arteriogram revealed a proximal left internal carotid artery occlusion of uncertain etiology after extensive testing. An MRI scan at the age of 45 years was normal. Neurological examination was normal. A carotid ultrasound study demonstrated complete occlusion of the left internal carotid artery and a normal right. Question.—What is the diagnosis?", "title": "" }, { "docid": "1d29d30089ffd9748c925a20f8a1216e", "text": "• Users may freely distribute the URL that is used to identify this publication. • Users may download and/or print one copy of the publication from the University of Birmingham research portal for the purpose of private study or non-commercial research. • User may use extracts from the document in line with the concept of ‘fair dealing’ under the Copyright, Designs and Patents Act 1988 (?) • Users may not further distribute the material nor use it for the purposes of commercial gain.", "title": "" }, { "docid": "af0dfe672a8828587e3b27ef473ea98e", "text": "Machine comprehension of text is the overarching goal of a great deal of research in natural language processing. The Machine Comprehension Test (Richardson et al., 2013) was recently proposed to assess methods on an open-domain, extensible, and easy-to-evaluate task consisting of two datasets. 
In this paper we develop a lexical matching method that takes into account multiple context windows, question types and coreference resolution. We show that the proposed method outperforms the baseline of Richardson et al. (2013), and despite its relative simplicity, is comparable to recent work using machine learning. We hope that our approach will inform future work on this task. Furthermore, we argue that MC500 is harder than MC160 due to the way question answer pairs were created.", "title": "" }, { "docid": "565f815ef0c1dd5107f053ad39dade20", "text": "Intensity inhomogeneity often occurs in real-world images, which presents a considerable challenge in image segmentation. The most widely used image segmentation algorithms are region-based and typically rely on the homogeneity of the image intensities in the regions of interest, which often fail to provide accurate segmentation results due to the intensity inhomogeneity. This paper proposes a novel region-based method for image segmentation, which is able to deal with intensity inhomogeneities in the segmentation. First, based on the model of images with intensity inhomogeneities, we derive a local intensity clustering property of the image intensities, and define a local clustering criterion function for the image intensities in a neighborhood of each point. This local clustering criterion function is then integrated with respect to the neighborhood center to give a global criterion of image segmentation. In a level set formulation, this criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Therefore, by minimizing this energy, our method is able to simultaneously segment the image and estimate the bias field, and the estimated bias field can be used for intensity inhomogeneity correction (or bias correction). Our method has been validated on synthetic images and real images of various modalities, with desirable performance in the presence of intensity inhomogeneities. Experiments show that our method is more robust to initialization, faster and more accurate than the well-known piecewise smooth model. As an application, our method has been used for segmentation and bias correction of magnetic resonance (MR) images with promising results.", "title": "" }, { "docid": "f6e8bda7c3915fa023f1b0f88f101f46", "text": "This paper presents a formulation to the obstacle avoidance problem for semi-autonomous ground vehicles. The planning and tracking problems have been divided into a two-level hierarchical controller. The high level solves a nonlinear model predictive control problem to generate a feasible and obstacle free path. It uses a nonlinear vehicle model and utilizes a coordinate transformation which uses vehicle position along a path as the independent variable. The low level uses a higher fidelity model and solves the MPC problem with a sequential quadratic programming approach to track the planned path. Simulations show the method’s ability to safely avoid multiple obstacles while tracking the lane centerline. 
Experimental tests on a semi-autonomous passenger vehicle driving at high speed on ice show the effectiveness of the approach.", "title": "" }, { "docid": "3f5a6580d3c8d13a8cefaea9fd6f68b2", "text": "Most theorizing on the relationship between corporate social/environmental performance (CSP) and corporate financial performance (CFP) assumes that the current evidence is too fractured or too variable to draw any generalizable conclusions. With this integrative, quantitative study, we intend to show that the mainstream claim that we have little generalizable knowledge about CSP and CFP is built on shaky grounds. Providing a methodologically more rigorous review than previous efforts, we conduct a meta-analysis of 52 studies (which represent the population of prior quantitative inquiry) yielding a total sample size of 33,878 observations. The metaanalytic findings suggest that corporate virtue in the form of social responsibility and, to a lesser extent, environmental responsibility is likely to pay off, although the operationalizations of CSP and CFP also moderate the positive association. For example, CSP appears to be more highly correlated with accounting-based measures of CFP than with market-based indicators, and CSP reputation indices are more highly correlated with CFP than are other indicators of CSP. This meta-analysis establishes a greater degree of certainty with respect to the CSP–CFP relationship than is currently assumed to exist by many business scholars.", "title": "" }, { "docid": "ccc3cf21c4c97f9c56915b4d1e804966", "text": "In this paper we present a prototype of a Microwave Imaging (MI) system for breast cancer detection. Our system is based on low-cost off-the-shelf microwave components, custom-made antennas, and a small form-factor processing system with an embedded Field-Programmable Gate Array (FPGA) for accelerating the execution of the imaging algorithm. We show that our system can compete with a vector network analyzer in terms of accuracy, and it is more than 20x faster than a high-performance server at image reconstruction.", "title": "" }, { "docid": "6f72afeb0a2c904e17dca27f53be249e", "text": "With its three-term functionality offering treatment of both transient and steady-state responses, proportional-integral-derivative (PID) control provides a generic and efficient solution to real-world control problems. The wide application of PID control has stimulated and sustained research and development to \"get the best out of PID\", and \"the search is on to find the next key technology or methodology for PID tuning\". This article presents remedies for problems involving the integral and derivative terms. PID design objectives, methods, and future directions are discussed. Subsequently, a computerized simulation-based approach is presented, together with illustrative design results for first-order, higher order, and nonlinear plants. Finally, we discuss differences between academic research and industrial practice, so as to motivate new research directions in PID control.", "title": "" }, { "docid": "072d187f56635ebc574f2eedb8a91d14", "text": "With the development of location-based social networks, an increasing amount of individual mobility data accumulate over time. The more mobility data are collected, the better we can understand the mobility patterns of users. At the same time, we know a great deal about online social relationships between users, providing new opportunities for mobility prediction. 
This paper introduces a novelty-seeking driven predictive framework for mining location-based social networks that embraces not only a bunch of Markov-based predictors but also a series of location recommendation algorithms. The core of this predictive framework is the cooperation mechanism between these two distinct models, determining the propensity of seeking novel and interesting locations.", "title": "" } ]
scidocsrr
a6490c57d5ff74f170b49165fa9ec1de
Cooperative Co-evolution for large scale optimization through more frequent random grouping
[ { "docid": "d099cf0b4a74ddb018775b524ec92788", "text": "This report proposes 15 large-scale benchmark problems as an extension to the existing CEC’2010 large-scale global optimization benchmark suite. The aim is to better represent a wider range of realworld large-scale optimization problems and provide convenience and flexibility for comparing various evolutionary algorithms specifically designed for large-scale global optimization. Introducing imbalance between the contribution of various subcomponents, subcomponents with nonuniform sizes, and conforming and conflicting overlapping functions are among the major new features proposed in this report.", "title": "" }, { "docid": "07bbe54e3d0c9ef27ef5f9f1f1a2150c", "text": "Evolutionary algorithms (EAs) have been applied with success to many numerical and combinatorial optimization problems in recent years. However, they often lose their effectiveness and advantages when applied to large and complex problems, e.g., those with high dimensions. Although cooperative coevolution has been proposed as a promising framework for tackling high-dimensional optimization problems, only limited studies were reported by decomposing a high-dimensional problem into single variables (dimensions). Such methods of decomposition often failed to solve nonseparable problems, for which tight interactions exist among different decision variables. In this paper, we propose a new cooperative coevolution framework that is capable of optimizing large scale nonseparable problems. A random grouping scheme and adaptive weighting are introduced in problem decomposition and coevolution. Instead of conventional evolutionary algorithms, a novel differential evolution algorithm is adopted. Theoretical analysis is presented in this paper to show why and how the new framework can be effective for optimizing large nonseparable problems. Extensive computational studies are also carried out to evaluate the performance of newly proposed algorithm on a large number of benchmark functions with up to 1000 dimensions. The results show clearly that our framework and algorithm are effective as well as efficient for large scale evolutionary optimisation problems. We are unaware of any other evolutionary algorithms that can optimize 1000-dimension nonseparable problems as effectively and efficiently as we have done.", "title": "" } ]
[ { "docid": "f35dc45e28f2483d5ac66271590b365d", "text": "We present a vector space–based model for selectional preferences that predicts plausibility scores for argument headwords. It does not require any lexical resources (such as WordNet). It can be trained either on one corpus with syntactic annotation, or on a combination of a small semantically annotated primary corpus and a large, syntactically analyzed generalization corpus. Our model is able to predict inverse selectional preferences, that is, plausibility scores for predicates given argument heads. We evaluate our model on one NLP task (pseudo-disambiguation) and one cognitive task (prediction of human plausibility judgments), gauging the influence of different parameters and comparing our model against other model classes. We obtain consistent benefits from using the disambiguation and semantic role information provided by a semantically tagged primary corpus. As for parameters, we identify settings that yield good performance across a range of experimental conditions. However, frequency remains a major influence of prediction quality, and we also identify more robust parameter settings suitable for applications with many infrequent items.", "title": "" }, { "docid": "2793e8eb1410b2379a8a416f0560df0a", "text": "Alzheimer’s disease (AD) transgenic mice have been used as a standard AD model for basic mechanistic studies and drug discovery. These mouse models showed symbolic AD pathologies including β-amyloid (Aβ) plaques, gliosis and memory deficits but failed to fully recapitulate AD pathogenic cascades including robust phospho tau (p-tau) accumulation, clear neurofibrillary tangles (NFTs) and neurodegeneration, solely driven by familial AD (FAD) mutation(s). Recent advances in human stem cell and three-dimensional (3D) culture technologies made it possible to generate novel 3D neural cell culture models that recapitulate AD pathologies including robust Aβ deposition and Aβ-driven NFT-like tau pathology. These new 3D human cell culture models of AD hold a promise for a novel platform that can be used for mechanism studies in human brain-like environment and high-throughput drug screening (HTS). In this review, we will summarize the current progress in recapitulating AD pathogenic cascades in human neural cell culture models using AD patient-derived induced pluripotent stem cells (iPSCs) or genetically modified human stem cell lines. We will also explain how new 3D culture technologies were applied to accelerate Aβ and p-tau pathologies in human neural cell cultures, as compared the standard two-dimensional (2D) culture conditions. Finally, we will discuss a potential impact of the human 3D human neural cell culture models on the AD drug-development process. These revolutionary 3D culture models of AD will contribute to accelerate the discovery of novel AD drugs.", "title": "" }, { "docid": "bcb9886f4ba3651793581e021030cde2", "text": "This study looked at the individual difference correlates of self-rated character strengths and virtues. In all, 280 adults completed a short 24-item measure of strengths, a short personality measure of the Big Five traits and a fluid intelligence test. The Cronbach alphas for the six higher order virtues were satisfactory but factor analysis did not confirm the a priori classification yielding five interpretable factors. These factors correlated significantly with personality and intelligence. 
Intelligence and neuroticism were correlated negatively with all the virtues, while extraversion and conscientiousness were positively correlated with all virtues. Structural equation modeling showed personality and religiousness moderated the effect of intelligence on the virtues. Extraversion and openness were the largest correlates of the virtues. The use of shortened measures in research is discussed.", "title": "" }, { "docid": "d89d80791ac8157d054652e5f1292ebb", "text": "The Great Gatsby Curve, the observation that for OECD countries, greater cross-sectional income inequality is associated with lower mobility, has become a prominent part of scholarly and policy discussions because of its implications for the relationship between inequality of outcomes and inequality of opportunities. We explore this relationship by focusing on evidence and interpretation of an intertemporal Gatsby Curve for the United States. We consider inequality/mobility relationships that are derived from nonlinearities in the transmission process of income from parents to children and the relationship that is derived from the effects of inequality of socioeconomic segregation, which then affects children. Empirical evidence for the mechanisms we identify is strong. We find modest reduced form evidence and structural evidence of an intertemporal Gatsby Curve for the US as mediated by social influences.", "title": "" }, { "docid": "6e02cdb0ade3479e0df03c30d9d69fa3", "text": "Reinforcement learning is considered as a promising direction for driving policy learning. However, training autonomous driving vehicle with reinforcement learning in real environment involves non-affordable trial-and-error. It is more desirable to first train in a virtual environment and then transfer to the real environment. In this paper, we propose a novel realistic translation network to make model trained in virtual environment be workable in real world. The proposed network can convert non-realistic virtual image input into a realistic one with similar scene structure. Given realistic frames as input, driving policy trained by reinforcement learning can nicely adapt to real world driving. Experiments show that our proposed virtual to real (VR) reinforcement learning (RL) works pretty well. To our knowledge, this is the first successful case of driving policy trained by reinforcement learning that can adapt to real world driving data.", "title": "" }, { "docid": "cbe1dc1b56716f57fca0977383e35482", "text": "This project explores a novel experimental setup towards building spoken, multi-modally rich, and human-like multiparty tutoring agent. A setup is developed and a corpus is collected that targets the development of a dialogue system platform to explore verbal and nonverbal tutoring strategies in multiparty spoken interactions with embodied agents. The dialogue task is centered on two participants involved in a dialogue aiming to solve a card-ordering game. With the participants sits a tutor that helps the participants perform the task and organizes and balances their interaction.
Different multimodal signals captured and auto-synchronized by different audio-visual capture technologies were coupled with manual annotations to build a situated model of the interaction based on the participants personalities, their temporally-changing state of attention, their conversational engagement and verbal dominance, and the way these are correlated with the verbal and visual feedback, turn-management, and conversation regulatory actions generated by the tutor. At the end of this chapter we discuss the potential areas of research and developments this work opens and some of the challenges that lie in the road ahead.", "title": "" }, { "docid": "5c112eb4be8321d79b63790e84de278f", "text": "Service-dominant logic continues its evolution, facilitated by an active community of scholars throughout the world. Along its evolutionary path, there has been increased recognition of the need for a crisper and more precise delineation of the foundational premises and specification of the axioms of S-D logic. It also has become apparent that a limitation of the current foundational premises/axioms is the absence of a clearly articulated specification of the mechanisms of (often massive-scale) coordination and cooperation involved in the cocreation of value through markets and, more broadly, in society. This is especially important because markets are even more about cooperation than about the competition that is more frequently discussed. To alleviate this limitation and facilitate a better understanding of cooperation (and coordination), an eleventh foundational premise (fifth axiom) is introduced, focusing on the role of institutions and institutional arrangements in systems of value cocreation: service ecosystems. Literature on institutions across multiple social disciplines, including marketing, is briefly reviewed and offered as further support for this fifth axiom.", "title": "" }, { "docid": "3ee79d711d6f8d1bbaef7e348a1c8dbc", "text": "As a commentary to Juhani Iivari’s insightful essay, I briefly analyze design science research as an embodiment of three closely related cycles of activities. The Relevance Cycle inputs requirements from the contextual environment into the research and introduces the research artifacts into environmental field testing. The Rigor Cycle provides grounding theories and methods along with domain experience and expertise from the foundations knowledge base into the research and adds the new knowledge generated by the research to the growing knowledge base. The central Design Cycle supports a tighter loop of research activity for the construction and evaluation of design artifacts and processes. The recognition of these three cycles in a research project clearly positions and differentiates design science from other research paradigms. The commentary concludes with a claim to the pragmatic nature of design science.", "title": "" }, { "docid": "3d238cc92a56e64f32f08e0833d117b3", "text": "The efficiency of two biomass pretreatment technologies, dilute acid hydrolysis and dissolution in an ionic liquid, are compared in terms of delignification, saccharification efficiency and saccharide yields with switchgrass serving as a model bioenergy crop. When subject to ionic liquid pretreatment (dissolution and precipitation of cellulose by anti-solvent) switchgrass exhibited reduced cellulose crystallinity, increased surface area, and decreased lignin content compared to dilute acid pretreatment. 
Pretreated material was characterized by powder X-ray diffraction, scanning electron microscopy, Fourier transform infrared spectroscopy, Raman spectroscopy and chemistry methods. Ionic liquid pretreatment enabled a significant enhancement in the rate of enzyme hydrolysis of the cellulose component of switchgrass, with a rate increase of 16.7-fold, and a glucan yield of 96.0% obtained in 24h. These results indicate that ionic liquid pretreatment may offer unique advantages when compared to the dilute acid pretreatment process for switchgrass. However, the cost of the ionic liquid process must also be taken into consideration.", "title": "" }, { "docid": "709a6b1a5c49bf0e41a24ed5a6b392c9", "text": "The paper presents a literature review of the main concepts of hotel revenue management (RM) and current state-of-the-art of its theoretical research. The article emphasises on the different directions of hotel RM research and is structured around the elements of the hotel RM system and the stages of RM process. The elements of the hotel RM system discussed in the paper include hotel RM centres (room division, F&B, function rooms, spa & fitness facilities, golf courses, casino and gambling facilities, and other additional services), data and information, the pricing (price discrimination, dynamic pricing, lowest price guarantee) and non-pricing (overbookings, length of stay control, room availability guarantee) RM tools, the RM software, and the RM team. The stages of RM process have been identified as goal setting, collection of data and information, data analysis, forecasting, decision making, implementation and monitoring. Additionally, special attention is paid to ethical considerations in RM practice, the connections between RM and customer relationship management, and the legal aspect of RM. Finally, the article outlines future research perspectives and discloses potential evolution of RM in future.", "title": "" }, { "docid": "464b66e2e643096bd344bea8026f4780", "text": "In this paper we describe an application of our approach to temporal text mining in Competitive Intelligence for the biotechnology and pharmaceutical industry. The main objective is to identify changes and trends of associations among entities of interest that appear in text over time. Text Mining (TM) exploits information contained in textual data in various ways, including the type of analyses that are typically performed in Data Mining [17]. Information Extraction (IE) facilitates the semi-automatic creation of metadata repositories from text. Temporal Text mining combines Information Extraction and Data Mining techniques upon textual repositories and incorporates time and ontologies’ issues. It consists of three main phases; the Information Extraction phase, the ontology driven generalisation of templates and the discovery of associations over time. Treatment of the temporal dimension is essential to our approach since it influences both the annotation part (IE) of the system as well as the mining part.", "title": "" }, { "docid": "7232ba57ae29c9ec395fe2b4501b6fd3", "text": "We propose a novel approach for using unsupervised boosting to create an ensemble of generative models, where models are trained in sequence to correct earlier mistakes. Our meta-algorithmic framework can leverage any existing base learner that permits likelihood evaluation, including recent deep expressive models.
Further, our approach allows the ensemble to include discriminative models trained to distinguish real data from model-generated data. We show theoretical conditions under which incorporating a new model in the ensemble will improve the fit and empirically demonstrate the effectiveness of our black-box boosting algorithms on density estimation, classification, and sample generation on benchmark datasets for a wide range of generative models.", "title": "" }, { "docid": "02e1e622c64b67c1a170ce36a3873082", "text": "As retrieval systems become more complex, learning to rank approaches are being developed to automatically tune their parameters. Using online learning to rank, retrieval systems can learn directly from implicit feedback inferred from user interactions. In such an online setting, algorithms must obtain feedback for effective learning while simultaneously utilizing what has already been learned to produce high quality results. We formulate this challenge as an exploration–exploitation dilemma and propose two methods for addressing it. By adding mechanisms for balancing exploration and exploitation during learning, each method extends a state-of-the-art learning to rank method, one based on listwise learning and the other on pairwise learning. Using a recently developed simulation framework that allows assessment of online performance, we empirically evaluate both methods. Our results show that balancing exploration and exploitation can substantially and significantly improve the online retrieval performance of both listwise and pairwise approaches. In addition, the results demonstrate that such a balance affects the two approaches in different ways, especially when user feedback is noisy, yielding new insights relevant to making online learning to rank effective in practice.", "title": "" }, { "docid": "84750fa3f3176d268ae85830a87f7a24", "text": "Context: The pull-based model, widely used in distributed software development, offers an extremely low barrier to entry for potential contributors (anyone can submit of contributions to any project, through pull-requests). Meanwhile, the project’s core team must act as guardians of code quality, ensuring that pull-requests are carefully inspected before being merged into the main development line. However, with pull-requests becoming increasingly popular, the need for qualified reviewers also increases. GitHub facilitates this, by enabling the crowd-sourcing of pull-request reviews to a larger community of coders than just the project’s core team, as a part of their social coding philosophy. However, having access to more potential reviewers does not necessarily mean that it’s easier to find the right ones (the “needle in a haystack” problem). If left unsupervised, this process may result in communication overhead and delayed pull-request processing. Objective: This study aims to investigate whether and how previous approaches used in bug triaging and code review can be adapted to recommending reviewers for pull-requests, and how to improve the recommendation performance. Method: First, we extend three typical approaches used in bug triaging and code review for the new challenge of assigning reviewers to pull-requests. Second, we analyze social relations between contributors and reviewers, and propose a novel approach by mining each project’s comment networks (CNs). 
Finally, we combine the CNs with traditional approaches, and evaluate the effectiveness of all these methods on 84 GitHub projects through both quantitative and qualitative analysis. Results: We find that CN-based recommendation can achieve, by itself, similar performance to the traditional approaches. However, the mixed approaches can achieve significant improvements compared to using either of them independently. Conclusion: Our study confirms that traditional approaches to bug triaging and code review are feasible for pull-request reviewer recommendations on GitHub. Furthermore, their performance can be improved significantly by combining them with information extracted from prior social interactions between developers on GitHub. These results call for novel tools to support process automation in social coding platforms that combine social (e.g., common interests among developers) and technical factors (e.g., developers' expertise). © 2016 Elsevier B.V. All rights reserved.", "title": "" },
{ "docid": "4eec0ef04e80280c07bc1e9fd41e942a", "text": "One of the challenges with research on student engagement is the large variation in the measurement of this construct, which has made it challenging to compare findings across studies. This chapter contributes to our understanding of the measurement of student engagement in three ways. First, we describe strengths and limitations of different methods for assessing student engagement (i.e., self-report measures, experience sampling techniques, teacher ratings, interviews, and observations). Second, we compare and contrast 11 self-report survey measures of student engagement that have been used in prior research. Across these 11 measures, we describe what is measured (scale name and items), use of measure, samples, and the extent of reliability and validity information available on each measure. Finally, we outline limitations with current approaches to measurement and promising future directions. Researchers, educators, and policymakers are increasingly focused on student engagement as the key to addressing problems of low achievement, high levels of student boredom, alienation, and high dropout rates (Fredricks, Blumenfeld, & Paris, 2004). Students become more disengaged as they progress from elementary to middle school, with some estimates that 25–40% of youth are showing signs of disengagement (i.e., uninvolved, apathetic, not trying very hard, and not paying attention) (Steinberg, Brown, & Dornbush, 1996; Yazzie-Mintz, 2007). The consequences of disengagement for middle and high school youth from disadvantaged backgrounds are especially severe; these youth are less likely to graduate from high school and face limited employment prospects, increasing their risk for poverty, poorer health, and involvement in the criminal justice system (National Research Council and the Institute of Medicine, 2004). Although there is growing interest in student engagement, there has been considerable variation in how this construct has been conceptualized over time (Appleton, Christenson, & Furlong, 2008; Fredricks et al., 2004; Jimerson, Campos, & Grief, 2003). Scholars have used a broad range
of terms including student engagement, school engagement, student engagement in school, academic engagement, engagement in class, and engagement in schoolwork. In addition, there has been variation in the number of subcomponents of engagement included in different conceptualizations. Some scholars have proposed a two-dimensional model of engagement which includes behavior (e.g., participation, effort, and positive conduct) and emotion (e.g., interest, belonging, value, and positive emotions) (Finn, 1989; Marks, 2000; Skinner, Kindermann, & Furrer, 2009b). More recently, others have outlined a three-component model of engagement that includes behavior, emotion, and a cognitive dimension (i.e., self-regulation, investment in learning, and strategy use) (e.g., Archambault, 2009; Fredricks et al., 2004; Jimerson et al., 2003; Wigfield et al., 2008). Finally, Christenson and her colleagues (Appleton, Christenson, Kim, & Reschly, 2006; Reschly & Christenson, 2006) conceptualized engagement as having four dimensions: academic, behavioral, cognitive, and psychological (subsequently referred to as affective) engagement. In this model, aspects of behavior are separated into two different components: academics, which includes time on task, credits earned, and homework completion, and behavior, which includes attendance, class participation, and extracurricular participation. One commonality across the myriad of conceptualizations is that engagement is multidimensional. However, further theoretical and empirical work is needed to determine the extent to which these different dimensions are unique constructs and whether a three- or four-component model more accurately describes the construct of student engagement. Even when scholars have similar conceptualizations of engagement, there has been considerable variability in the content of items used in instruments. This has made it challenging to compare findings from different studies. This chapter expands on our understanding of the measurement of student engagement in three ways. First, the strengths and limitations of different methods for assessing student engagement are described. Second, 11 self-report survey measures of student engagement that have been used in prior research are compared and contrasted on several dimensions (i.e., what is measured, purposes and uses, samples, and psychometric properties). Finally, we discuss limitations with current approaches to measurement. What is Student Engagement? We define student engagement as a meta-construct that includes behavioral, emotional, and cognitive engagement (Fredricks et al., 2004). Although there are large individual bodies of literature on behavioral (i.e., time on task), emotional (i.e., interest and value), and cognitive engagement (i.e., self-regulation and learning strategies), what makes engagement unique is its potential as a multidimensional or "meta"-construct that includes these three dimensions. Behavioral engagement draws on the idea of participation and includes involvement in academic, social, or extracurricular activities and is considered crucial for achieving positive academic outcomes and preventing dropping out (Connell & Wellborn, 1991; Finn, 1989).
Other scholars define behavioral engagement in terms of positive conduct, such as following the rules, adhering to classroom norms, and the absence of disruptive behavior such as skipping school or getting into trouble (Finn, Pannozzo, & Voelkl, 1995; Finn & Rock, 1997). Emotional engagement focuses on the extent of positive (and negative) reactions to teachers, classmates, academics, or school. Others conceptualize emotional engagement as identification with the school, which includes belonging, or a feeling of being important to the school, and valuing, or an appreciation of success in school-related outcomes (Finn, 1989; Voelkl, 1997). Positive emotional engagement is presumed to create student ties to the institution and influence their willingness to do the work (Connell & Wellborn, 1991; Finn, 1989). Finally, cognitive engagement is defined as a student's level of investment in learning. It includes being thoughtful, strategic, and willing to exert the necessary effort for comprehension of complex ideas or mastery of difficult skills (Corno & Mandinach, 1983; Fredricks et al., 2004; Meece, Blumenfeld, & Hoyle, 1988). An important question is how engagement differs from motivation. Although the terms are used interchangeably by some, they are different and the distinctions between them are important. Motivation refers to the underlying reasons for a given behavior and can be conceptualized in terms of the direction, intensity, quality, and persistence of one's energies (Maehr & Meyer, 1997). A proliferation of motivational constructs (e.g., intrinsic motivation, goal theory, and expectancy-value models) has been developed to answer two broad questions: "Can I do this task?" and "Do I want to do this task and why?" (Eccles, Wigfield, & Schiefele, 1998). One commonality across these different motivational constructs is an emphasis on individual differences and underlying psychological processes. In contrast, engagement tends to be thought of in terms of action, or the behavioral, emotional, and cognitive manifestations of motivation (Skinner, Kindermann, Connell, & Wellborn, 2009a). An additional difference is that engagement reflects an individual's interaction with context (Fredricks et al., 2004; Russell, Ainsley, & Frydenberg, 2005). In other words, an individual is engaged in something (i.e., task, activity, and relationship), and their engagement cannot be separated from their environment. This means that engagement is malleable and is responsive to variations in the context that schools can target in interventions (Fredricks et al., 2004; Newmann, Wehlage, & Lamborn, 1992). The self-system model of motivational development (Connell, 1990; Connell & Wellborn, 1991; Deci & Ryan, 1985) provides one theoretical model for studying motivation and engagement. This model is based on the assumption that individuals have three fundamental motivational needs: autonomy, competence, and relatedness. If schools provide children with opportunities to meet these three needs, students will be more engaged.
Students' need for relatedness is more likely to occur in classrooms where teachers and peers create a caring and supportive environment; their need for autonomy is met when they feel like they have a choice and when they are motivated by internal rather than external factors; and their need for competence is met when they experience the classroom as optimal in structure and feel like they can achieve desired ends (Fredricks et al., 2004). In contrast, if students experience schools as uncaring, coercive, and unfair, they will become disengaged or disaffected (Skinner et al., 2009a, 2009b). This model assumes that motivation is a necessary but not sufficient precursor to engagement (Appleton et al., 2008; Connell & Wellborn, 1991). Methods for Studying Engagement", "title": "" },
{ "docid": "678d9eab7d1e711f97bf8ef5aeaebcc4", "text": "This work presents a study of current and future bus systems with respect to their security against various malicious attacks. After a brief description of the most well-known and established vehicular communication systems, we present feasible attacks and potential exposures for these automotive networks. We also provide an approach for secured automotive communication based on modern cryptographic mechanisms that provide secrecy, manipulation prevention and authentication to solve most of the vehicular bus security issues.", "title": "" },
{ "docid": "225b834e820b616e0ccfed7259499fd6", "text": "Introduction: Actinic cheilitis (AC) is a potentially malignant lesion that affects the lips after prolonged exposure to solar ultraviolet (UV) radiation. The present study aimed to assess and describe the proliferative cell activity, using quantification of silver-stained nucleolar organizer region (AgNOR) proteins, and to investigate the potential associations between AgNORs and the clinical aspects of AC lesions. Materials and methods: Cases diagnosed with AC were selected and reviewed from the Center of Histopathological Diagnosis of the Institute of Biological Sciences, Passo Fundo University, Brazil. Clinical data including the clinical presentation of the patients affected with AC were collected. The AgNOR techniques were performed in all recovered cases. The different microscopic areas of interest were printed at a magnification of ×1000, and in each case, 200 epithelial cell nuclei were randomly selected. The mean number of NORs per nucleus was recorded. One-way analysis of variance was used for statistical analysis. Results: A total of 22 cases of AC were diagnosed. The patients were aged between 46 and 75 years (mean age: 55 years). Most of the patients affected were males presenting asymptomatic white plaque lesions in the lower lip. The mean value quantified for AgNORs was 2.4 ± 0.63, ranging between 1.49 and 3.82. No statistically significant difference was observed associating the quantity of AgNORs with the clinical aspects collected from the patients (p > 0.05). Conclusion: The present study reports the lack of association between the proliferative cell activity and the clinical aspects observed in patients affected by AC through the quantification of AgNORs. Clinical significance: Knowing the potential relation between the clinical aspects of AC and the proliferative cell activity quantified by AgNORs could play a significant role toward the early diagnosis of malignant lesions in clinical practice.
Keywords: Actinic cheilitis, Proliferative cell activity, Silver-stained nucleolar organizer regions.", "title": "" },
{ "docid": "8a32bdadcaa2c94f83e95c19e400835b", "text": "Create a short summary of your paper (200 words), double-spaced. Your summary will say something like: In this action research study of my classroom of 7th grade mathematics, I investigated ______. I discovered that ____________. As a result of this research, I plan to ___________. You now begin your paper. Pages should be numbered, with the first page of text following the abstract as page one. (In Microsoft Word: after your abstract, rather than inserting a "page break" insert a "section break" to start on the next page; this will allow you to start the 3rd page being numbered as page 1). You should divide this report of your research into sections. We should be able to identify the following sections and you may use these headings (headings should be bold, centered, and capitalized). Consider the page length to be a minimum.", "title": "" },
{ "docid": "c0484f3055d7e7db8dfea9d4483e1e06", "text": "Metastasis, the spread of cancer cells to distant organs, is the main cause of death for cancer patients. Metastasis is often mediated by lymphatic vessels that invade the primary tumor, and an early sign of metastasis is the presence of cancer cells in the regional lymph node (the first lymph node colonized by metastasizing cancer cells from a primary tumor). Understanding the interplay between tumorigenesis and lymphangiogenesis (the formation of lymphatic vessels associated with tumor growth) will provide us with new insights into mechanisms that modulate metastatic spread. In the long term, these insights will help to define new molecular targets that could be used to block lymphatic vessel-mediated metastasis and increase patient survival. Here, we review the molecular mechanisms of embryonic lymphangiogenesis and those that are recapitulated in tumor lymphangiogenesis, with a view to identifying potential targets for therapies designed to suppress tumor lymphangiogenesis and hence metastasis.", "title": "" },
{ "docid": "88ac730e4e54ecc527bcd188b7cc5bf5", "text": "In this paper we outline the nature of Neuro-linguistic Programming and explore its potential for learning and teaching. The paper draws on current research by Mathison (2003) to illustrate the role of language and internal imagery in teacher-learner interactions, and the way language influences beliefs about learning. Neuro-linguistic Programming (NLP) developed in the USA in the 1970s. It has achieved widespread popularity as a method for communication and personal development. The title, coined by the founders, Bandler and Grinder (1975a), refers to purported systematic, cybernetic links between a person's internal experience (neuro), their language (linguistic) and their patterns of behaviour (programming). In essence NLP is a form of modelling that offers potential for systematic and detailed understanding of people's subjective experience. NLP is eclectic, drawing on models and strategies from a wide range of sources. We outline NLP's approach to teaching and learning, and explore applications through illustrative data from Mathison's study. A particular implication for the training of educators is that of attention to communication skills. Finally, we summarise criticisms of NLP that may represent obstacles to its acceptance by academe.", "title": "" } ]
scidocsrr
3cff79c9c9419de7a4a231917714c1e5
Design of Secure and Lightweight Authentication Protocol for Wearable Devices Environment
[ { "docid": "a85d07ae3f19a0752f724b39df5eca2b", "text": "Despite two decades of intensive research, it remains a challenge to design a practical anonymous two-factor authentication scheme, for the designers are confronted with an impressive list of security requirements (e.g., resistance to smart card loss attack) and desirable attributes (e.g., local password update). Numerous solutions have been proposed, yet most of them are soon found either unable to satisfy some critical security requirements or short of a few important features. To overcome this unsatisfactory situation, researchers often work around it in hopes of a new proposal (but no one has succeeded so far), while paying little attention to the fundamental question: whether or not there are inherent limitations that prevent us from designing an “ideal” scheme that satisfies all the desirable goals? In this work, we aim to provide a definite answer to this question. We first revisit two foremost proposals, i.e. Tsai et al.'s scheme and Li's scheme, revealing some subtleties and challenges in designing such schemes. Then, we systematically explore the inherent conflicts and unavoidable trade-offs among the design criteria. Our results indicate that, under the current widely accepted adversarial model, certain goals are beyond attainment. This also suggests a negative answer to the open problem left by Huang et al. in 2014. To the best of our knowledge, the present study takes the first step towards understanding the underlying evaluation metric for anonymous two-factor authentication, which we believe will facilitate better design of anonymous two-factor protocols that offer acceptable trade-offs among usability, security and privacy.", "title": "" } ]
[ { "docid": "f478bbf48161da50017d3ec9f8e677b4", "text": "Between November 1998 and December 1999, trained medical record abstractors visited the Micronesian jurisdictions of Chuuk, Kosrae, Pohnpei, and Yap (the four states of the Federated States of Micronesia), as well as the Republic of Palau (Belau), the Republic of Kiribati, the Republic of the Marshall Islands (RMI), and the Republic of Nauru to review all available medical records in order to describe the epidemiology of cancer in Micronesia. Annualized age-adjusted, site-specific cancer period prevalence rates for individual jurisdictions were calculated. Site-specific cancer occurrence in Micronesia follows a pattern characteristic of developing nations. At the same time, cancers associated with developed countries are also impacting these populations. Recommended are jurisdiction-specific plans that outline the steps and resources needed to establish or improve local cancer registries; expand cancer awareness and screening activities; and improve diagnostic and treatment capacity.", "title": "" },
{ "docid": "62a51c43d4972d41d3b6cdfa23f07bb9", "text": "To support the development of the Internet of Things (IoT), the IETF has proposed IPv6 standards working under stringent low-power and low-cost constraints. However, the behavior and performance of the proposed standards have not been fully understood, especially the RPL routing protocol lying at the heart of the protocol stack. In this work, we make an in-depth study of a popular implementation of RPL (the routing protocol for low-power and lossy networks) to provide insights and guidelines for the adoption of these standards. Specifically, we use the Contiki operating system and the COOJA simulator to evaluate the behavior of the ContikiRPL implementation. We analyze the performance for different networking settings. Different from previous studies, our work is the first effort spanning the whole life cycle of wireless sensor networks, including both the network construction process and the functioning stage. The metrics evaluated include signaling overhead, latency, energy consumption and so on, which are vital to the overall performance of a wireless sensor network. Furthermore, based on our observations, we provide a few suggestions for RPL-based WSNs. This study can also serve as a basis for future enhancements to the proposed standards.", "title": "" },
{ "docid": "6d97cbe726eca4b883cf7c8c2d939f8b", "text": "In this paper, a new ensemble forecasting model for short-term load forecasting (STLF) is proposed based on the extreme learning machine (ELM). Four important improvements are used to support the ELM for increased forecasting performance. First, a novel wavelet-based ensemble scheme is carried out to generate the individual ELM-based forecasters. Second, a hybrid learning algorithm blending ELM and the Levenberg-Marquardt method is proposed to improve the learning accuracy of neural networks. Third, a feature selection method based on the conditional mutual information is developed to select a compact set of input variables for the forecasting model. Fourth, to realize an accurate ensemble forecast, partial least squares regression is utilized as a combining approach to aggregate the individual forecasts.
Numerical testing shows that the proposed method can obtain better forecasting results in comparison with other standard and state-of-the-art methods.", "title": "" },
{ "docid": "cbb6bac245862ed0265f6d32e182df92", "text": "With the explosion of online communication and publication, texts have become obtainable via forums, chat messages, blogs, book reviews and movie reviews. Usually, these texts are very short and noisy, without sufficient statistical signals and enough information for a good semantic analysis. Traditional natural language processing methods such as Bag-of-Words (BoW) based probabilistic latent semantic models fail to achieve high performance due to the short text environment. Recent research has focused on the correlations between words, i.e., term dependencies, which could be helpful for mining latent semantics hidden in short texts and help people to understand them. Long short-term memory (LSTM) networks can capture term dependencies and are able to remember the information for long periods of time. LSTM has been widely used and has obtained promising results in a variety of problems involving the understanding of latent semantics of texts. At the same time, by analyzing the texts, we find that a number of keywords contribute greatly to the semantics of the texts. In this paper, we establish a keyword vocabulary and propose an LSTM-based model that is sensitive to the words in the vocabulary; hence, the keywords leverage the semantics of the full document. The proposed model is evaluated in a short-text sentiment analysis task on two datasets: IMDB and SemEval-2016, respectively. Experimental results demonstrate that our model outperforms the baseline LSTM by 1%~2% in terms of accuracy and is effective with significant performance enhancement over several non-recurrent neural network latent semantic models (especially in dealing with short texts). We also incorporate the idea into a variant of LSTM named the gated recurrent unit (GRU) model and achieve good performance, which proves that our method is general enough to improve different deep learning models.", "title": "" },
{ "docid": "bf4776d6d01d63d3eb6dbeba693bf3de", "text": "With the development of microprocessors, power electronic converters and electric motor drives, electric power steering (EPS) systems, which use an electric motor, came into use a few years ago. Electric power steering systems have many advantages over traditional hydraulic power steering systems in engine efficiency, space efficiency, and environmental compatibility. This paper deals with the design and optimization of an interior permanent magnet (IPM) motor for a power steering application. The simulated annealing method is used for optimization. After optimization and determination of the motor parameters, an IPM motor and drive, together with the mechanical parts of the EPS system, is simulated and a performance evaluation of the system is carried out.", "title": "" },
{ "docid": "71b0dbd905c2a9f4111dfc097bfa6c67", "text": "In this paper, the authors undertake a study of cyber warfare, reviewing theories, law, policies, actual incidents and the dilemma of anonymity. Starting with the United Kingdom perspective on cyber warfare, the authors then consider the United States' views, including the perspective of its military on the law of war and its general inapplicability to cyber conflict.
Consideration is then given to the work of the United Nations' group of cyber security specialists and diplomats who, as of July 2010, have agreed upon a set of recommendations to the United Nations Secretary General for negotiations on an international computer security treaty. An examination of the use of a nation's cybercrime law to prosecute violations that occur over the Internet indicates the inherent limits caused by the jurisdictional limits of domestic law to address cross-border cybercrime scenarios. Actual incidents from Estonia (2007), Georgia (2008), Republic of Korea (2009), Japan (2010), ongoing attacks on the United States as well as other incidents and reports on ongoing attacks are considered as well. Despite the increasing sophistication of such cyber attacks, it is evident that these attacks were met with a limited use of law and policy to combat them that can only be characterised as a response posture defined by restraint. Recommendations are then examined for overcoming the attribution problem. The paper then considers when cyber attacks rise to the level of an act of war, by reference to the work of scholars such as Schmitt and Wingfield. Further evaluation of the special impact that non-state actors may have and some theories on how to deal with the problem of asymmetric players are considered. Discussion and possible solutions are offered. A conclusion is offered drawing some guidance from the writings of the Chinese philosopher Sun Tzu. Finally, an appendix providing a technical overview of the problem of attribution and the dilemma of anonymity in cyberspace is provided. 1. The United Kingdom Perspective \"If I went and bombed a power station in France, that would be an act of war. If I went on to the net and took out a power station, is that an act of war? One", "title": "" },
{ "docid": "a5d100fd83620d9cc868a33ab6367be2", "text": "Identifying the lineage path of neural cells is critical for understanding the development of the brain. Accurate neural cell detection is a crucial step to obtain reliable delineation of cell lineage. To solve this task, in this paper we present an efficient neural cell detection method based on the SSD (single shot multibox detector) neural network model. Our method adapts the original SSD architecture and removes the unnecessary blocks, leading to a light-weight model. Moreover, we formulate the cell detection as a binary regression problem, which makes our model much simpler. Experimental results demonstrate that, with only a small training set, our method is able to accurately capture the neural cells under severe shape deformation in a fast way.", "title": "" },
{ "docid": "2a8f464e709dcae4e34f73654aefe31f", "text": "LTE 4G cellular networks are gradually being adopted by all major operators in the world and are expected to rule the cellular landscape at least for the current decade. They will also form the starting point for further progress beyond the current generation of mobile cellular networks to chalk a path towards fifth generation mobile networks. The lack of an open cellular ecosystem has limited applied research in this field to the boundaries of vendor and operator R&D groups. Furthermore, several new approaches and technologies are being considered as potential elements making up such a future mobile network, including cloudification of the radio network, radio network programmability and APIs following SDN principles, native support of machine-type communication, and massive MIMO.
Research on these technologies requires realistic and flexible experimentation platforms that offer a wide range of experimentation modes from real-world experimentation to controlled and scalable evaluations while at the same time retaining backward compatibility with current generation systems.\n In this work, we present OpenAirInterface (OAI) as a suitably flexible platform towards open LTE ecosystem and playground [1]. We will demonstrate an example of the use of OAI to deploy a low-cost open LTE network using commodity hardware with standard LTE-compatible devices. We also show the reconfigurability features of the platform.", "title": "" }, { "docid": "6513c4ca4197e9ff7028e527a621df0a", "text": "The development of complex distributed systems demands for the creation of suitable architectural styles (or paradigms) and related run-time infrastructures. An emerging style that is receiving increasing attention is based on the notion of event. In an event-based architecture, distributed software components interact by generating and consuming events. An event is the occurrence of some state change in a component of a software system, made visible to the external world. The occurrence of an event in a component is asynchronously notified to any other component that has declared some interest in it. This paradigm (usually called “publish/subscribe” from the names of the two basic operations that regulate the communication) holds the promise of supporting a flexible and effective interaction among highly reconfigurable, distributed software components. In the past two years, we have developed an object-oriented infrastructure called JEDI (Java Event-based Distributed Infrastructure). JEDI supports the development and operation of event-based systems and has been used to implement a significant example of distributed system, namely, the OPSS workflow management system (WFMS). The paper illustrates JEDI main features and how we have used them to implement OPSS. Moreover, the paper provides an initial evaluation of our experiences in using the event-based architectural style and a classification of some of the event-based infrastructures presented in the literature.", "title": "" }, { "docid": "4243f0bafe669ab862aaad2b184c6a0e", "text": "Generating adversarial examples is an intriguing problem and an important way of understanding the working mechanism of deep neural networks. Most existing approaches generated perturbations in the image space, i.e., each pixel can be modified independently. However, in this paper we pay special attention to the subset of adversarial examples that are physically authentic – those corresponding to actual changes in 3D physical properties (like surface normals, illumination condition, etc.). These adversaries arguably pose a more serious concern, as they demonstrate the possibility of causing neural network failure by small perturbations of real-world 3D objects and scenes. In the contexts of object classification and visual question answering, we augment state-of-the-art deep neural networks that receive 2D input images with a rendering module (either differentiable or not) in front, so that a 3D scene (in the physical space) is rendered into a 2D image (in the image space), and then mapped to a prediction (in the output space). The adversarial perturbations can now go beyond the image space, and have clear meanings in the 3D physical world. 
Through extensive experiments, we found that a vast majority of image-space adversaries cannot be explained by adjusting parameters in the physical space, i.e., they are usually physically inauthentic. But it is still possible to successfully attack beyond the image space on the physical space (such that authenticity is enforced), though this is more difficult than image-space attacks, reflected in lower success rates and heavier perturbations required.", "title": "" }, { "docid": "6737955fd1876a40fc0e662a4cac0711", "text": "Cloud computing is a novel perspective for large scale distributed computing and parallel processing. It provides computing as a utility service on a pay per use basis. The performance and efficiency of cloud computing services always depends upon the performance of the user tasks submitted to the cloud system. Scheduling of the user tasks plays significant role in improving performance of the cloud services. Task scheduling is one of the main types of scheduling performed. This paper presents a detailed study of various task scheduling methods existing for the cloud environment. A brief analysis of various scheduling parameters considered in these methods is also discussed in this paper.", "title": "" }, { "docid": "289942ca889ccea58d5b01dab5c82719", "text": "Concepts of basal ganglia organization have changed markedly over the past decade, due to significant advances in our understanding of the anatomy, physiology and pharmacology of these structures. Independent evidence from each of these fields has reinforced a growing perception that the functional architecture of the basal ganglia is essentially parallel in nature, regardless of the perspective from which these structures are viewed. This represents a significant departure from earlier concepts of basal ganglia organization, which generally emphasized the serial aspects of their connectivity. Current evidence suggests that the basal ganglia are organized into several structurally and functionally distinct 'circuits' that link cortex, basal ganglia and thalamus, with each circuit focused on a different portion of the frontal lobe. In this review, Garrett Alexander and Michael Crutcher, using the basal ganglia 'motor' circuit as the principal example, discuss recent evidence indicating that a parallel functional architecture may also be characteristic of the organization within each individual circuit.", "title": "" }, { "docid": "45009303764570cbfa3532a9d98f5393", "text": "The Wasserstein distance and its variations, e.g., the sliced-Wasserstein (SW) distance, have recently drawn attention from the machine learning community. The SW distance, specifically, was shown to have similar properties to the Wasserstein distance, while being much simpler to compute, and is therefore used in various applications including generative modeling and general supervised/unsupervised learning. In this paper, we first clarify the mathematical connection between the SW distance and the Radon transform. We then utilize the generalized Radon transform to define a new family of distances for probability measures, which we call generalized slicedWasserstein (GSW) distances. We also show that, similar to the SW distance, the GSW distance can be extended to a maximum GSW (max-GSW) distance. We then provide the conditions under which GSW and max-GSW distances are indeed distances. 
Finally, we compare the numerical performance of the proposed distances on several generative modeling tasks, including SW flows and SW auto-encoders.", "title": "" }, { "docid": "0e7da1ef24306eea2e8f1193301458fe", "text": "We consider the problem of object figure-ground segmentation when the object categories are not available during training (i.e. zero-shot). During training, we learn standard segmentation models for a handful of object categories (called “source objects”) using existing semantic segmentation datasets. During testing, we are given images of objects (called “target objects”) that are unseen during training. Our goal is to segment the target objects from the background. Our method learns to transfer the knowledge from the source objects to the target objects. Our experimental results demonstrate the effectiveness of our approach.", "title": "" }, { "docid": "e830098f9c045d376177e6d2644d4a06", "text": "OBJECTIVE\nTo determine whether acetyl-L-carnitine (ALC), a metabolite necessary for energy metabolism and essential fatty acid anabolism, might help attention-deficit/hyperactivity disorder (ADHD). Trials in Down's syndrome, migraine, and Alzheimer's disease showed benefit for attention. A preliminary trial in ADHD using L-carnitine reported significant benefit.\n\n\nMETHOD\nA multi-site 16-week pilot study randomized 112 children (83 boys, 29 girls) age 5-12 with systematically diagnosed ADHD to placebo or ALC in weight-based doses from 500 to 1500 mg b.i.d. The 2001 revisions of the Conners' parent and teacher scales (including DSM-IV ADHD symptoms) were administered at baseline, 8, 12, and 16 weeks. Analyses were ANOVA of change from baseline to 16 weeks with treatment, center, and treatment-by-center interaction as independent variables.\n\n\nRESULTS\nThe primary intent-to-treat analysis, of 9 DSM-IV teacher-rated inattentive symptoms, was not significant. However, secondary analyses were interesting. There was significant (p = 0.02) moderation by subtype: superiority of ALC over placebo in the inattentive type, with an opposite tendency in combined type. There was also a geographic effect (p = 0.047). Side effects were negligible; electrocardiograms, lab work, and physical exam unremarkable.\n\n\nCONCLUSION\nALC appears safe, but with no effect on the overall ADHD population (especially combined type). It deserves further exploration for possible benefit specifically in the inattentive type.", "title": "" }, { "docid": "cae9e77074db114690a6ed1330d9b14c", "text": "BACKGROUND\nOn December 8th, 2015, World Health Organization published a priority list of eight pathogens expected to cause severe outbreaks in the near future. To better understand global research trends and characteristics of publications on these emerging pathogens, we carried out this bibliometric study hoping to contribute to global awareness and preparedness toward this topic.\n\n\nMETHOD\nScopus database was searched for the following pathogens/infectious diseases: Ebola, Marburg, Lassa, Rift valley, Crimean-Congo, Nipah, Middle Eastern Respiratory Syndrome (MERS), and Severe Respiratory Acute Syndrome (SARS). Retrieved articles were analyzed to obtain standard bibliometric indicators.\n\n\nRESULTS\nA total of 8619 journal articles were retrieved. Authors from 154 different countries contributed to publishing these articles. Two peaks of publications, an early one for SARS and a late one for Ebola, were observed. 
Retrieved articles received a total of 221,606 citations with a mean ± standard deviation of 25.7 ± 65.4 citations per article and an h-index of 173. International collaboration was as high as 86.9%. The Centers for Disease Control and Prevention had the highest share (344; 5.0%) followed by the University of Hong Kong with 305 (4.5%). The top leading journal was Journal of Virology with 572 (6.6%) articles while Feldmann, Heinz R. was the most productive researcher with 197 (2.3%) articles. China ranked first on SARS, Turkey ranked first on Crimean-Congo fever, while the United States of America ranked first on the remaining six diseases. Of retrieved articles, 472 (5.5%) were on vaccine - related research with Ebola vaccine being most studied.\n\n\nCONCLUSION\nNumber of publications on studied pathogens showed sudden dramatic rise in the past two decades representing severe global outbreaks. Contribution of a large number of different countries and the relatively high h-index are indicative of how international collaboration can create common health agenda among distant different countries.", "title": "" }, { "docid": "180a840a22191da6e9a99af3d41ab288", "text": "The hippocampal CA3 region is classically viewed as a homogeneous autoassociative network critical for associative memory and pattern completion. However, recent evidence has demonstrated a striking heterogeneity along the transverse, or proximodistal, axis of CA3 in spatial encoding and memory. Here we report the presence of striking proximodistal gradients in intrinsic membrane properties and synaptic connectivity for dorsal CA3. A decreasing gradient of mossy fiber synaptic strength along the proximodistal axis is mirrored by an increasing gradient of direct synaptic excitation from entorhinal cortex. Furthermore, we uncovered a nonuniform pattern of reactivation of fear memory traces, with the most robust reactivation during memory retrieval occurring in mid-CA3 (CA3b), the region showing the strongest net recurrent excitation. Our results suggest that heterogeneity in both intrinsic properties and synaptic connectivity may contribute to the distinct spatial encoding and behavioral role of CA3 subregions along the proximodistal axis.", "title": "" }, { "docid": "6a82dfa1d79016388c38ccba77c56ae5", "text": "Scripts define knowledge about how everyday scenarios (such as going to a restaurant) are expected to unfold. One of the challenges to learning scripts is the hierarchical nature of the knowledge. For example, a suspect arrested might plead innocent or guilty, and a very different track of events is then expected to happen. To capture this type of information, we propose an autoencoder model with a latent space defined by a hierarchy of categorical variables. We utilize a recently proposed vector quantization based approach, which allows continuous embeddings to be associated with each latent variable value. This permits the decoder to softly decide what portions of the latent hierarchy to condition on by attending over the value embeddings for a given setting. 
Our model effectively encodes and generates scripts, outperforming a recent language modeling-based method on several standard tasks, and allowing the autoencoder model to achieve substantially lower perplexity scores compared to the previous language modelingbased method.", "title": "" }, { "docid": "bb799a3aac27f4ac764649e1f58ee9fb", "text": "White grubs (larvae of Coleoptera: Scarabaeidae) are abundant in below-ground systems and can cause considerable damage to a wide variety of crops by feeding on roots. White grub populations may be controlled by natural enemies, but the predator guild of the European species is barely known. Trophic interactions within soil food webs are difficult to study with conventional methods. Therefore, a polymerase chain reaction (PCR)-based approach was developed to investigate, for the first time, a soil insect predator-prey system. Can, however, highly sensitive detection methods identify carrion prey in predators, as has been shown for fresh prey? Fresh Melolontha melolontha (L.) larvae and 1- to 9-day-old carcasses were presented to Poecilus versicolor Sturm larvae. Mitochondrial cytochrome oxidase subunit I fragments of the prey, 175, 327 and 387 bp long, were detectable in 50% of the predators 32 h after feeding. Detectability decreased to 18% when a 585 bp sequence was amplified. Meal size and digestion capacity of individual predators had no influence on prey detection. Although prey consumption was negatively correlated with cadaver age, carrion prey could be detected by PCR as efficiently as fresh prey irrespective of carrion age. This is the first proof that PCR-based techniques are highly efficient and sensitive, both in fresh and carrion prey detection. Thus, if active predation has to be distinguished from scavenging, then additional approaches are needed to interpret the picture of prey choice derived by highly sensitive detection methods.", "title": "" }, { "docid": "97adb3a003347f579706cd01a762bdc9", "text": "The Universal Serial Bus (USB) is an extremely popular interface standard for computer peripheral connections and is widely used in consumer Mass Storage Devices (MSDs). While current consumer USB MSDs provide relatively high transmission speed and are convenient to carry, the use of USB MSDs has been prohibited in many commercial and everyday environments primarily due to security concerns. Security protocols have been previously proposed and a recent approach for the USB MSDs is to utilize multi-factor authentication. This paper proposes significant enhancements to the three-factor control protocol that now makes it secure under many types of attacks including the password guessing attack, the denial-of-service attack, and the replay attack. The proposed solution is presented with a rigorous security analysis and practical computational cost analysis to demonstrate the usefulness of this new security protocol for consumer USB MSDs.", "title": "" } ]
scidocsrr
8c75c8b4274533b14d267aed457d651c
Building Neuromorphic Circuits with Memristive Devices
[ { "docid": "5deaf3ef06be439ad0715355d3592cff", "text": "Hybrid reconfigurable logic circuits were fabricated by integrating memristor-based crossbars onto a foundry-built CMOS (complementary metal-oxide-semiconductor) platform using nanoimprint lithography, as well as materials and processes that were compatible with the CMOS. Titanium dioxide thin-film memristors served as the configuration bits and switches in a data routing network and were connected to gate-level CMOS components that acted as logic elements, in a manner similar to a field programmable gate array. We analyzed the chips using a purpose-built testing system, and demonstrated the ability to configure individual devices, use them to wire up various logic gates and a flip-flop, and then reconfigure devices.", "title": "" } ]
[ { "docid": "0ab14a40df6fe28785262d27a4f5b8ce", "text": "State-of-the-art 3D shape classification and retrieval algorithms, hereinafter referred to as shape analysis, are often based on comparing signatures or descriptors that capture the main geometric and topological properties of 3D objects. None of the existing descriptors, however, achieve best performance on all shape classes. In this article, we explore, for the first time, the usage of covariance matrices of descriptors, instead of the descriptors themselves, in 3D shape analysis. Unlike histogram -based techniques, covariance-based 3D shape analysis enables the fusion and encoding of different types of features and modalities into a compact representation. Covariance matrices, however, are elements of the non-linear manifold of symmetric positive definite (SPD) matrices and thus \\BBL2 metrics are not suitable for their comparison and clustering. In this article, we study geodesic distances on the Riemannian manifold of SPD matrices and use them as metrics for 3D shape matching and recognition. We then: (1) introduce the concepts of bag of covariance (BoC) matrices and spatially-sensitive BoC as a generalization to the Riemannian manifold of SPD matrices of the traditional bag of features framework, and (2) generalize the standard kernel methods for supervised classification of 3D shapes to the space of covariance matrices. We evaluate the performance of the proposed BoC matrices framework and covariance -based kernel methods and demonstrate their superiority compared to their descriptor-based counterparts in various 3D shape matching, retrieval, and classification setups.", "title": "" }, { "docid": "d5ac5e10fc2cc61e625feb28fc9095b5", "text": "Article history: Received 8 July 2016 Received in revised form 15 November 2016 Accepted 29 December 2016 Available online 25 January 2017 As part of the post-2015 United Nations sustainable development agenda, the world has its first urban sustainable development goal (USDG) “to make cities and human settlements inclusive, safe, resilient and sustainable”. This paper provides an overview of the USDG and explores some of the difficulties around using this goal as a tool for improving cities. We argue that challenges emerge around selecting the indicators in the first place and also around the practical use of these indicators once selected. Three main practical problems of indicator use include 1) the poor availability of standardized, open and comparable data 2) the lack of strong data collection institutions at the city scale to support monitoring for the USDG and 3) “localization” the uptake and context specific application of the goal by diverse actors in widely different cities. Adding to the complexity, the USDG conversation is taking place at the same time as the proliferation of a bewildering array of indicator systems at different scales. Prompted by technological change, debates on the “data revolution” and “smart city” also have direct bearing on the USDG. We argue that despite these many complexities and challenges, the USDG framework has the potential to encourage and guide needed reforms in our cities but only if anchored in local institutions and initiatives informed by open, inclusive and contextually sensitive data collection and monitoring. © 2017 Elsevier Ltd. 
All rights reserved.", "title": "" }, { "docid": "f86e3894a6c61c3734e1aabda3500ef0", "text": "We perform sensitivity analyses on a mathematical model of malaria transmission to determine the relative importance of model parameters to disease transmission and prevalence. We compile two sets of baseline parameter values: one for areas of high transmission and one for low transmission. We compute sensitivity indices of the reproductive number (which measures initial disease transmission) and the endemic equilibrium point (which measures disease prevalence) to the parameters at the baseline values. We find that in areas of low transmission, the reproductive number and the equilibrium proportion of infectious humans are most sensitive to the mosquito biting rate. In areas of high transmission, the reproductive number is again most sensitive to the mosquito biting rate, but the equilibrium proportion of infectious humans is most sensitive to the human recovery rate. This suggests strategies that target the mosquito biting rate (such as the use of insecticide-treated bed nets and indoor residual spraying) and those that target the human recovery rate (such as the prompt diagnosis and treatment of infectious individuals) can be successful in controlling malaria.", "title": "" }, { "docid": "15da6453d3580a9f26ecb79f9bc8e270", "text": "In 2005 the Commission for Africa noted that ‘Tackling HIV and AIDS requires a holistic response that recognises the wider cultural and social context’ (p. 197). Cultural factors that range from beliefs and values regarding courtship, sexual networking, contraceptive use, perspectives on sexual orientation, explanatory models for disease and misfortune and norms for gender and marital relations have all been shown to be factors in the various ways that HIV/AIDS has impacted on African societies (UNESCO, 2002). Increasingly the centrality of culture is being recognised as important to HIV/AIDS prevention, treatment, care and support. With culture having both positive and negative influences on health behaviour, international donors and policy makers are beginning to acknowledge the need for cultural approaches to the AIDS crisis (Nguyen et al., 2008). The development of cultural approaches to HIV/AIDS presents two major challenges for South Africa. First, the multi-cultural nature of the country means that there is no single sociocultural context in which the HIV/AIDS epidemic is occurring. South Africa is home to a rich tapestry of racial, ethnic, religious and linguistic groups. As a result of colonial history and more recent migration, indigenous Africans have come to live alongside large populations of people with European, Asian and mixed descent, all of whom could lay claim to distinctive cultural practices and spiritual beliefs. Whilst all South Africans are affected by the spread of HIV, the burden of the disease lies with the majority black African population (see Shisana et al., 2005; UNAIDS, 2007). Therefore, this chapter will focus on some sociocultural aspects of life within the majority black African population of South Africa, most of whom speak languages that are classified within the broad linguistic grouping of Bantu languages. This large family of linguistically related ethnic groups span across southern Africa and comprise the bulk of the African people who reside in South Africa today (Hammond-Tooke, 1974). A second challenge involves the legitimacy of the culture concept. 
Whilst race was used in apartheid as the rationale for discrimination, notions of culture and cultural differences were legitimised by segregating the country into various ‘homelands’. Within the homelands, the majority black South Africans could presumably", "title": "" }, { "docid": "bc6be8b5fd426e7f8d88645a2b21ff6a", "text": "irtually everyone would agree that a primary, yet insufficiently met, goal of schooling is to enable students to think critically. In layperson’s terms, critical thinking consists of seeing both sides of an issue, being open to new evidence that disconfirms your ideas, reasoning dispassionately, demanding that claims be backed by evidence, deducing and inferring conclusions from available facts, solving problems, and so forth. Then too, there are specific types of critical thinking that are characteristic of different subject matter: That’s what we mean when we refer to “thinking like a scientist” or “thinking like a historian.” This proper and commonsensical goal has very often been translated into calls to teach “critical thinking skills” and “higher-order thinking skills”—and into generic calls for teaching students to make better judgments, reason more logically, and so forth. In a recent survey of human resource officials and in testimony delivered just a few months ago before the Senate Finance Committee, business leaders have repeatedly exhorted schools to do a better job of teaching students to think critically. And they are not alone. Organizations and initiatives involved in education reform, such as the National Center on Education and the Economy, the American Diploma Project, and the Aspen Institute, have pointed out the need for students to think and/or reason critically. The College Board recently revamped the SAT to better assess students’ critical thinking. And ACT, Inc. offers a test of critical thinking for college students. These calls are not new. In 1983, A Nation At Risk, a report by the National Commission on Excellence in Education, found that many 17-year-olds did not possess the “‘higher-order’ intellectual skills” this country needed. It claimed that nearly 40 percent could not draw inferences from written material and only onefifth could write a persuasive essay. Following the release of A Nation At Risk, programs designed to teach students to think critically across the curriculum became extremely popular. By 1990, most states had initiatives designed to encourage educators to teach critical thinking, and one of the most widely used programs, Tactics for Thinking, sold 70,000 teacher guides. But, for reasons I’ll explain, the programs were not very effective—and today we still lament students’ lack of critical thinking. After more than 20 years of lamentation, exhortation, and little improvement, maybe it’s time to ask a fundamental question: Can critical thinking actually be taught? Decades of cognitive research point to a disappointing answer: not really. People who have sought to teach critical thinking have assumed that it is a skill, like riding a bicycle, and that, like other skills, once you learn it, you can apply it in any situation. Research from cognitive science shows that thinking is not that sort of skill. The processes of thinking are intertwined with the content of thought (that is, domain knowledge). 
Thus, if you remind a student to "look at an issue from multiple perspectives" often enough, he will learn that he ought to do so, but if he doesn't know much about Critical Thinking", "title": "" },
{ "docid": "bcd81794f9e1fc6f6b92fd36ccaa8dac", "text": "Reliable detection and avoidance of obstacles is a crucial prerequisite for autonomously navigating robots, as both guarantee safety and mobility. To ensure safe mobility, the obstacle detection needs to run online, thereby taking the limited resources of autonomous systems into account. At the same time, robust obstacle detection is highly important. Here, a too conservative approach might restrict the mobility of the robot, while a more reckless one might harm the robot or the environment it is operating in. In this paper, we present a terrain-adaptive approach to obstacle detection that relies on 3D-Lidar data and combines computationally cheap and fast geometric features, like step height and steepness, which are updated with the frequency of the lidar sensor, with semantic terrain information, which is updated at a lower frequency. We provide experiments in which we evaluate our approach on a real robot on an autonomous run over several kilometers containing different terrain types. The experiments demonstrate that our approach is suitable for autonomous systems that have to navigate reliably on different terrain types including concrete, dirt roads and grass.", "title": "" },
{ "docid": "fc5f80f0554d248524f2aa67ad628773", "text": "Personality plays an important role in the way people manage the images they convey in self-presentations and employment interviews, trying to affect the other's first impressions and increase effectiveness. This paper addresses the automatic detection of the Big Five personality traits from short (30-120 seconds) self-presentations, by investigating the effectiveness of 29 simple acoustic and visual non-verbal features. Our results show that Conscientiousness and Emotional Stability/Neuroticism are the best recognizable traits. The lower accuracy levels for Extraversion and Agreeableness are explained through the interaction between situational characteristics and the differential activation of the behavioral dispositions underlying those traits.", "title": "" },
{ "docid": "969ba9848fa6d02f74dabbce2f1fe3ab", "text": "With the rapid growth of social media, massive misinformation is also spreading widely on social media, e.g., Weibo and Twitter, bringing negative effects to human life. Today, automatic misinformation identification has drawn attention from the academic and industrial communities. Whereas an event on social media usually consists of multiple microblogs, current methods are mainly constructed based on global statistical features. However, information on social media is full of noise, which should be alleviated. Moreover, most of the microblogs about an event have little contribution to the identification of misinformation, where useful information can be easily overwhelmed by useless information. Thus, it is important to mine significant microblogs for constructing a reliable misinformation identification method. In this article, we propose an attention-based approach for identification of misinformation (AIM). Based on the attention mechanism, AIM can select microblogs with the largest attention values for misinformation identification. The attention mechanism in AIM contains two parts: content attention and dynamic attention.
Content attention is the calculated-based textual features of each microblog. Dynamic attention is related to the time interval between the posting time of a microblog and the beginning of the event. To evaluate AIM, we conduct a series of experiments on the Weibo and Twitter datasets, and the experimental results show that the proposed AIM model outperforms the state-of-the-art methods.", "title": "" }, { "docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "08ab7142ae035c3594d3f3ae339d3e27", "text": "Sudoku is a very popular puzzle which consists of placing several numbers in a squared grid according to some simple rules. In this paper, we present a Sudoku solving technique named Boolean Sudoku Solver (BSS) using only simple Boolean algebras. Use of Boolean algebra increases the execution speed of the Sudoku solver. Simulation results show that our method returns the solution of the Sudoku in minimum number of iterations and outperforms the existing popular approaches.", "title": "" }, { "docid": "51a859f71bd2ec82188826af18204f02", "text": "This study examines the accuracy of 54 online dating photographs posted by heterosexual daters. We report data on (a1) online daters’ self-reported accuracy, (b) independent judges’ perceptions of accuracy, and (c) inconsistencies in the profile photograph identified by trained coders. While online daters rated their photos as relatively accurate, independent judges rated approximately 1/3 of the photographs as not accurate. Female photographs were judged as less accurate than male photographs, and were more likely to be older, to be retouched or taken by a professional photographer, and to contain inconsistencies, including changes in hair style and skin quality. The findings are discussed in terms of the tensions experienced by online daters to (a) enhance their physical attractiveness and (b) present a photograph that would not be judged deceptive in subsequent face-to-face meetings. The paper extends the theoretical concept of selective self-presentation to online photographs, and discusses issues of self-deception and social desirability bias.", "title": "" }, { "docid": "ac4c1d903e20b90da555b11ef2edd2f5", "text": "Program translation is an important tool to migrate legacy code in one language into an ecosystem built in a different language. In this work, we are the first to employ deep neural networks toward tackling this problem. We observe that program translation is a modular procedure, in which a sub-tree of the source tree is translated into the corresponding target sub-tree at each step. To capture this intuition, we design a tree-to-tree neural network to translate a source tree into a target one. Meanwhile, we develop an attention mechanism for the tree-to-tree model, so that when the decoder expands one non-terminal in the target tree, the attention mechanism locates the corresponding sub-tree in the source tree to guide the expansion of the decoder. We evaluate the program translation capability of our tree-to-tree model against several state-of-the-art approaches. 
Compared against other neural translation models, we observe that our approach is consistently better than the baselines with a margin of up to 15 points. Further, our approach can improve the previous state-of-the-art program translation approaches by a margin of 20 points on the translation of real-world projects.", "title": "" }, { "docid": "194156892cbdb0161e9aae6a01f78703", "text": "Model repositories play a central role in the model driven development of complex software-intensive systems by offering means to persist and manipulate models obtained from heterogeneous languages and tools. Complex models can be assembled by interconnecting model fragments by hard links, i.e., regular references, where the target end points to external resources using storage-specific identifiers. This approach, in certain application scenarios, may prove to be a too rigid and error prone way of interlinking models. As a flexible alternative, we propose to combine derived features with advanced incremental model queries as means for soft interlinking of model elements residing in different model resources. These soft links can be calculated on-demand with graceful handling for temporarily unresolved references. In the background, the links are maintained efficiently and flexibly by using incremental model query evaluation. The approach is applicable to modeling environments or even property graphs for representing query results as first-class relations, which also allows the chaining of soft links that is useful for modular applications. The approach is evaluated using the Eclipse Modeling Framework (EMF) and EMF-IncQuery in two complex industrial case studies. The first case study is motivated by a knowledge management project from the financial domain, involving a complex interlinked structure of concept and business process models. The second case study is set in the avionics domain with strict traceability requirements enforced by certification standards (DO-178b). It consists of multiple domain models describing the allocation scenario of software functions to hardware components.", "title": "" }, { "docid": "16c56a9ca685cb1100d175268b6e8ba6", "text": "In this paper, we study the stochastic gradient descent method in analyzing nonconvex statistical optimization problems from a diffusion approximation point of view. Using the theory of large deviation of random dynamical system, we prove in the small stepsize regime and the presence of omnidirectional noise the following: starting from a local minimizer (resp. saddle point) the SGD iteration escapes in a number of iteration that is exponentially (resp. linearly) dependent on the inverse stepsize. We take the deep neural network as an example to study this phenomenon. Based on a new analysis of the mixing rate of multidimensional Ornstein-Uhlenbeck processes, our theory substantiate a very recent empirical results by Keskar et al. (2016), suggesting that large batch sizes in training deep learning for synchronous optimization leads to poor generalization error.", "title": "" }, { "docid": "727a97b993098aa1386e5bfb11a99d4b", "text": "Inevitably, reading is one of the requirements to be undergone. To improve the performance and quality, someone needs to have something new every day. It will suggest you to have more inspirations, then. However, the needs of inspirations will make you searching for some sources. Even from the other people experience, internet, and many books. 
Books and internet are the recommended media to help you improving your quality and performance.", "title": "" }, { "docid": "e0fc6fc1425bb5786847c3769c1ec943", "text": "Developing manufacturing simulation models usually requires experts with knowledge of multiple areas including manufacturing, modeling, and simulation software. The expertise requirements increase for virtual factory models that include representations of manufacturing at multiple resolution levels. This paper reports on an initial effort to automatically generate virtual factory models using manufacturing configuration data in standard formats as the primary input. The execution of the virtual factory generates time series data in standard formats mimicking a real factory. Steps are described for auto-generation of model components in a software environment primarily oriented for model development via a graphic user interface. Advantages and limitations of the approach and the software environment used are discussed. The paper concludes with a discussion of challenges in verification and validation of the virtual factory prototype model with its multiple hierarchical models and future directions.", "title": "" }, { "docid": "0a5e2cc403ba9a4397d04c084b25f43e", "text": "Ebola virus disease (EVD) distinguishes its feature as high infectivity and mortality. Thus, it is urgent for governments to draw up emergency plans against Ebola. However, it is hard to predict the possible epidemic situations in practice. Luckily, in recent years, computational experiments based on artificial society appeared, providing a new approach to study the propagation of EVD and analyze the corresponding interventions. Therefore, the rationality of artificial society is the key to the accuracy and reliability of experiment results. Individuals' behaviors along with travel mode directly affect the propagation among individuals. Firstly, artificial Beijing is reconstructed based on geodemographics and machine learning is involved to optimize individuals' behaviors. Meanwhile, Ebola course model and propagation model are built, according to the parameters in West Africa. Subsequently, propagation mechanism of EVD is analyzed, epidemic scenario is predicted, and corresponding interventions are presented. Finally, by simulating the emergency responses of Chinese government, the conclusion is finally drawn that Ebola is impossible to outbreak in large scale in the city of Beijing.", "title": "" }, { "docid": "58c4c9bd2033645ece7db895d368cda6", "text": "Nanorobotics is the technology of creating machines or robots of the size of few hundred nanometres and below consisting of components of nanoscale or molecular size. There is an all around development in nanotechnology towards realization of nanorobots in the last two decades. In the present work, the compilation of advancement in nanotechnology in context to nanorobots is done. The challenges and issues in movement of a nanorobot and innovations present in nature to overcome the difficulties in moving at nano-size regimes are discussed. The efficiency aspect in context to artificial nanorobot is also presented.", "title": "" }, { "docid": "bb01b5e24d7472ab52079dcb8a65358d", "text": "There are plenty of classification methods that perform well when training and testing data are drawn from the same distribution. However, in real applications, this condition may be violated, which causes degradation of classification accuracy. Domain adaptation is an effective approach to address this problem. 
In this paper, we propose a general domain adaptation framework from the perspective of prediction reweighting, from which a novel approach is derived. Different from the major domain adaptation methods, our idea is to reweight predictions of the training classifier on testing data according to their signed distance to the domain separator, which is a classifier that distinguishes training data (from source domain) and testing data (from target domain). We then propagate the labels of target instances with larger weights to ones with smaller weights by introducing a manifold regularization method. It can be proved that our reweighting scheme effectively brings the source and target domains closer to each other in an appropriate sense, such that classification in target domain becomes easier. The proposed method can be implemented efficiently by a simple two-stage algorithm, and the target classifier has a closed-form solution. The effectiveness of our approach is verified by the experiments on artificial datasets and two standard benchmarks, a visual object recognition task and a cross-domain sentiment analysis of text. Experimental results demonstrate that our method is competitive with the state-of-the-art domain adaptation algorithms.", "title": "" } ]
scidocsrr
513882e9992781626e656c002f99dbdf
Rectangular Dielectric Resonator Antenna Array for 28 GHz Applications
[ { "docid": "364eb800261105453f36b005ba1faf68", "text": "This article presents empirically-based large-scale propagation path loss models for fifth-generation cellular network planning in the millimeter-wave spectrum, based on real-world measurements at 28 GHz and 38 GHz in New York City and Austin, Texas, respectively. We consider industry-standard path loss models used for today's microwave bands, and modify them to fit the propagation data measured in these millimeter-wave bands for cellular planning. Network simulations with the proposed models using a commercial planning tool show that roughly three times more base stations are required to accommodate 5G networks (cell radii up to 200 m) compared to existing 3G and 4G systems (cell radii of 500 m to 1 km) when performing path loss simulations based on arbitrary pointing angles of directional antennas. However, when directional antennas are pointed in the single best directions at the base station and mobile, coverage range is substantially improved with little increase in interference, thereby reducing the required number of 5G base stations. Capacity gains for random pointing angles are shown to be 20 times greater than today's fourth-generation Long Term Evolution networks, and can be further improved when using directional antennas pointed in the strongest transmit and receive directions with the help of beam combining techniques.", "title": "" } ]
[ { "docid": "5f5828952aa0a0a95e348a0c0b2296fb", "text": "Indoor positioning has grasped great attention in recent years. A number of efforts have been exerted to achieve high positioning accuracy. However, there exists no technology that proves its efficacy in various situations. In this paper, we propose a novel positioning method based on fusing trilateration and dead reckoning. We employ Kalman filtering as a position fusion algorithm. Moreover, we adopt an Android device with Bluetooth Low Energy modules as the communication platform to avoid excessive energy consumption and to improve the stability of the received signal strength. To further improve the positioning accuracy, we take the environmental context information into account while generating the position fixes. Extensive experiments in a testbed are conducted to examine the performance of three approaches: trilateration, dead reckoning and the fusion method. Additionally, the influence of the knowledge of the environmental context is also examined. Finally, our proposed fusion method outperforms both trilateration and dead reckoning in terms of accuracy: experimental results show that the Kalman-based fusion, for our settings, achieves a positioning accuracy of less than one meter.", "title": "" }, { "docid": "f90efcef80233888fb8c218d1e5365a6", "text": "BACKGROUND\nMany low- and middle-income countries are undergoing a nutrition transition associated with rapid social and economic transitions. We explore the coexistence of over and under- nutrition at the neighborhood and household level, in an urban poor setting in Nairobi, Kenya.\n\n\nMETHODS\nData were collected in 2010 on a cohort of children aged under five years born between 2006 and 2010. Anthropometric measurements of the children and their mothers were taken. Additionally, dietary intake, physical activity, and anthropometric measurements were collected from a stratified random sample of adults aged 18 years and older through a separate cross-sectional study conducted between 2008 and 2009 in the same setting. Proportions of stunting, underweight, wasting and overweight/obesity were dettermined in children, while proportions of underweight and overweight/obesity were determined in adults.\n\n\nRESULTS\nOf the 3335 children included in the analyses with a total of 6750 visits, 46% (51% boys, 40% girls) were stunted, 11% (13% boys, 9% girls) were underweight, 2.5% (3% boys, 2% girls) were wasted, while 9% of boys and girls were overweight/obese respectively. Among their mothers, 7.5% were underweight while 32% were overweight/obese. A large proportion (43% and 37%%) of overweight and obese mothers respectively had stunted children. Among the 5190 adults included in the analyses, 9% (6% female, 11% male) were underweight, and 22% (35% female, 13% male) were overweight/obese.\n\n\nCONCLUSION\nThe findings confirm an existing double burden of malnutrition in this setting, characterized by a high prevalence of undernutrition particularly stunting early in life, with high levels of overweight/obesity in adulthood, particularly among women. In the context of a rapid increase in urban population, particularly in urban poor settings, this calls for urgent action. Multisectoral action may work best given the complex nature of prevailing circumstances in urban poor settings. 
Further research is needed to understand the pathways to this coexistence, and to test feasibility and effectiveness of context-specific interventions to curb associated health risks.", "title": "" }, { "docid": "e04cccfd59c056678e39fc4aed0eaa2b", "text": "BACKGROUND\nBreast cancer is by far the most frequent cancer of women. However the preventive measures for such problem are probably less than expected. The objectives of this study are to assess breast cancer knowledge and attitudes and factors associated with the practice of breast self examination (BSE) among female teachers of Saudi Arabia.\n\n\nPATIENTS AND METHODS\nWe conducted a cross-sectional survey of teachers working in female schools in Buraidah, Saudi Arabia using a self-administered questionnaire to investigate participants' knowledge about the risk factors of breast cancer, their attitudes and screening behaviors. A sample of 376 female teachers was randomly selected. Participants lived in urban areas, and had an average age of 34.7 ±5.4 years.\n\n\nRESULTS\nMore than half of the women showed a limited knowledge level. Among participants, the most frequently reported risk factors were non-breast feeding and the use of female sex hormones. The printed media was the most common source of knowledge. Logistic regression analysis revealed that high income was the most significant predictor of better knowledge level. Knowing a non-relative case with breast cancer and having a high knowledge level were identified as the significant predictors for practicing BSE.\n\n\nCONCLUSIONS\nThe study points to the insufficient knowledge of female teachers about breast cancer and identified the negative influence of low knowledge on the practice of BSE. Accordingly, relevant educational programs to improve the knowledge level of women regarding breast cancer are needed.", "title": "" }, { "docid": "0872a229806a1055ec6e42d7a36ef626", "text": "Attribute selection (AS) refers to the problem of selecting those input attributes or features that are most predictive of a given outcome; a problem encountered in many areas such as machine learning, pattern recognition and signal processing. Unlike other dimensionality reduction methods, attribute selectors preserve the original meaning of the attributes after reduction. This has found application in tasks that involve datasets containing huge numbers of attributes (in the order of tens of thousands) which, for some learning algorithms, might be impossible to process further. Recent examples include text processing and web content classification. AS techniques have also been applied to small and medium-sized datasets in order to locate the most informative attributes for later use. One of the many successful applications of rough set theory has been to this area. The rough set ideology of using only the supplied data and no other information has many benefits in AS, where most other methods require supplementary knowledge. However, the main limitation of rough set-based attribute selection in the literature is the restrictive requirement that all data is discrete. In classical rough set theory, it is not possible to consider real-valued or noisy data. This paper investigates a novel approach based on fuzzy-rough sets, fuzzy rough feature selection (FRFS), that addresses these problems and retains dataset semantics. FRFS is applied to two challenging domains where a feature reducing step is important; namely, web content classification and complex systems monitoring. 
The utility of this approach is demonstrated and is compared empirically with several dimensionality reducers. In the experimental studies, FRFS is shown to equal or improve classification accuracy when compared to the results from unreduced data. Classifiers that use a lower dimensional set of attributes which are retained by fuzzy-rough reduction outperform those that employ more attributes returned by the existing crisp rough reduction method. In addition, it is shown that FRFS is more powerful than the other AS techniques in the comparative study", "title": "" }, { "docid": "8492ba0660b06ca35ab3f4e96f3a33c3", "text": "Young men who have sex with men (YMSM) are increasingly using mobile smartphone applications (“apps”), such as Grindr, to meet sex partners. A probability sample of 195 Grindr-using YMSM in Southern California were administered an anonymous online survey to assess patterns of and motivations for Grindr use in order to inform development and tailoring of smartphone-based HIV prevention for YMSM. The number one reason for using Grindr (29 %) was to meet “hook ups.” Among those participants who used both Grindr and online dating sites, a statistically significantly greater percentage used online dating sites for “hook ups” (42 %) compared to Grindr (30 %). Seventy percent of YMSM expressed a willingness to participate in a smartphone app-based HIV prevention program. Development and testing of smartphone apps for HIV prevention delivery has the potential to engage YMSM in HIV prevention programming, which can be tailored based on use patterns and motivations for use. Los hombres que mantienen relaciones sexuales con hombres (YMSM por las siglas en inglés de Young Men Who Have Sex with Men) están utilizando más y más aplicaciones para teléfonos inteligentes (smartphones), como Grindr, para encontrar parejas sexuales. En el Sur de California, se administró de forma anónima un sondeo en internet a una muestra de probabilidad de 195 YMSM usuarios de Grindr, para evaluar los patrones y motivaciones del uso de Grindr, con el fin de utilizar esta información para el desarrollo y personalización de prevención del VIH entre YMSM con base en teléfonos inteligentes. La principal razón para utilizar Grindr (29 %) es para buscar encuentros sexuales casuales (hook-ups). Entre los participantes que utilizan tanto Grindr como otro sitios de citas online, un mayor porcentaje estadísticamente significativo utilizó los sitios de citas online para encuentros casuales sexuales (42 %) comparado con Grindr (30 %). Un setenta porciento de los YMSM expresó su disposición para participar en programas de prevención del VIH con base en teléfonos inteligentes. El desarrollo y evaluación de aplicaciones para teléfonos inteligentes para el suministro de prevención del VIH tiene el potencial de involucrar a los YMSM en la programación de la prevención del VIH, que puede ser adaptada según los patrones y motivaciones de uso.", "title": "" }, { "docid": "32b292c3ea5c95411a5e67d664d6ce30", "text": "Many difficult combinatorial optimization problems have been modeled as static problems. However, in practice, many problems are dynamic and changing, while some decisions have to be made before all the design data are known. For example, in the Dynamic Vehicle Routing Problem (DVRP), new customer orders appear over time, and new routes must be reconfigured while executing the current solution. Montemanni et al. 
[1] considered a DVRP as an extension to the standard vehicle routing problem (VRP) by decomposing a DVRP as a sequence of static VRPs, and then solving them with an ant colony system (ACS) algorithm. This paper presents a genetic algorithm (GA) methodology for providing solutions for the DVRP model employed in [1]. The effectiveness of the proposed GA is evaluated using a set of benchmarks found in the literature. Compared with a tabu search approach implemented herein and the aforementioned ACS, the proposed GA methodology performs better in minimizing travel costs.", "title": "" }, { "docid": "8a8c1099dfe0cf45746f11da7d6923d8", "text": "The future of procedural content generation (PCG) lies beyond the dominant motivations of “replayability” and creating large environments for players to explore. This paper explores both the past and potential future for PCG, identifying five major lenses through which we can view PCG and its role in a game: data vs. process intensiveness, the interactive extent of the content, who has control over the generator, how many players interact with it, and the aesthetic purpose for PCG being used in the game. Using these lenses, the paper proposes several new research directions for PCG that require both deep technical research and innovative game design.", "title": "" }, { "docid": "3eeaf56aaf9dda0f2b16c1c46f6c1c75", "text": "In satellite earth station antenna systems there is an increasing demand for complex single aperture, multi-function and multi-frequency band capable feed systems. In this work, a multi band feed system (6/12 GHz) is described which employs quadrature junctions (QJ) and supports transmit and receive functionality in the C and Ku bands respectively. This feed system is designed for a 16.4 m diameter shaped cassegrain antenna. It is a single aperture, 4 port system with transmit capability in circular polarization (CP) mode over the 6.625-6.69 GHz band and receive in the linear polarization (LP) mode over the 12.1-12.3 GHz band", "title": "" }, { "docid": "c744354fcc6115a83c916dcc71b381f4", "text": "The spread of false rumours during emergencies can jeopardise the well-being of citizens as they are monitoring the stream of news from social media to stay abreast of the latest updates. In this paper, we describe the methodology we have developed within the PHEME project for the collection and sampling of conversational threads, as well as the tool we have developed to facilitate the annotation of these threads so as to identify rumourous ones. We describe the annotation task conducted on threads collected during the 2014 Ferguson unrest and we present and analyse our findings. Our results show that we can collect effectively social media rumours and identify multiple rumours associated with a range of stories that would have been hard to identify by relying on existing techniques that need manual input of rumour-specific keywords.", "title": "" }, { "docid": "e227e21d9b0523fdff82ca898fea0403", "text": "As computer games become more complex and consumers demand more sophisticated computer controlled agents, developers are required to place a greater emphasis on the artificial intelligence aspects of their games. One source of sophisticated AI techniques is the artificial intelligence research community. This paper discusses recent efforts by our group at the University of Michigan Artificial Intelligence Lab to apply state of the art artificial intelligence techniques to computer games. 
Our experience developing intelligent air combat agents for DARPA training exercises, described in John Laird's lecture at the 1998 Computer Game Developer's Conference, suggested that many principles and techniques from the research community are applicable to games. A more recent project, called the Soar/Games project, has followed up on this by developing agents for computer games, including Quake II and Descent 3. The result of these two research efforts is a partially implemented design of an artificial intelligence engine for games based on well established AI systems and techniques.", "title": "" }, { "docid": "debe25489a0176c48c07d1f2d5b8513e", "text": "In order to formulate a high-level understanding of driver behavior from massive naturalistic driving data, an effective approach is needed to automatically process or segregate data into low-level maneuvers. Besides traditional computer vision processing, this study addresses the lane-change detection problem by using vehicle dynamic signals (steering angle and vehicle speed) extracted from the CAN-bus, which is collected with 58 drivers around Dallas, TX area. After reviewing the literature, this study proposes a machine learning-based segmentation and classification algorithm, which is stratified into three stages. The first stage is preprocessing and prefiltering, which is intended to reduce noise and remove clear left and right turning events. Second, a spectral time-frequency analysis segmentation approach is employed to generalize all potential time-variant lane-change and lane-keeping candidates. The final stage compares two possible classification methods—1) dynamic time warping feature with k -nearest neighbor classifier and 2) hidden state sequence prediction with a combined hidden Markov model. The overall optimal classification accuracy can be obtained at 80.36% for lane-change-left and 83.22% for lane-change-right. The effectiveness and issues of failures are also discussed. With the availability of future large-scale naturalistic driving data, such as SHRP2, this proposed effective lane-change detection approach can further contribute to characterize both automatic route recognition as well as distracted driving state analysis.", "title": "" }, { "docid": "b1b1af81e84e1f79a0193773a22199d4", "text": "Layered multicast is an efficient technique to deliver video to heterogeneous receivers over wired and wireless networks. In this paper, we consider such a multicast system in which the server adapts the bandwidth and forward-error correction code (FEC) of each layer so as to maximize the overall video quality, given the heterogeneous client characteristics in terms of their end-to-end bandwidth, packet drop rate over the wired network, and bit-error rate in the wireless hop. In terms of FECs, we also study the value of a gateway which “transcodes” packet-level FECs to byte-level FECs before forwarding packets from the wired network to the wireless clients. We present an analysis of the system, propose an efficient algorithm on FEC allocation for the base layer, and formulate a dynamic program with a fast and accurate approximation for the joint bandwidth and FEC allocation of the enhancement layers. 
Our results show that a transcoding gateway performs only slightly better than the nontranscoding one in terms of end-to-end loss rate, and our allocation is effective in terms of FEC parity and bandwidth served to each user.", "title": "" }, { "docid": "3dfb419706ae85d232753a085dc145f7", "text": "This chapter describes the different steps of designing, building, simulating, and testing an intelligent flight control module for an increasingly popular unmanned aerial vehicle (UAV), known as a quadrotor. It presents an in-depth view of the modeling of the kinematics, dynamics, and control of such an interesting UAV. A quadrotor offers a challenging control problem due to its highly unstable nature. An effective control methodology is therefore needed for such a unique airborne vehicle. The chapter starts with a brief overview on the quadrotor's background and its applications, in light of its advantages. Comparisons with other UAVs are made to emphasize the versatile capabilities of this special design. For a better understanding of the vehicle's behavior, the quadrotor's kinematics and dynamics are then detailed. This yields the equations of motion, which are used later as a guideline for developing the proposed intelligent flight control scheme. In this chapter, fuzzy logic is adopted for building the flight controller of the quadrotor. It has been witnessed that fuzzy logic control offers several advantages over certain types of conventional control methods, specifically in dealing with highly nonlinear systems and modeling uncertainties. Two types of fuzzy inference engines are employed in the design of the flight controller, each of which is explained and evaluated. For testing the designed intelligent flight controller, a simulation environment was first developed. The simulations were made as realistic as possible by incorporating environmental disturbances such as wind gust and the ever-present sensor noise. The proposed controller was then tested on a real test-bed built specifically for this project. Both the simulator and the real quadrotor were later used for conducting different attitude stabilization experiments to evaluate the performance of the proposed control strategy. The controller's performance was also benchmarked against conventional control techniques such as input-output linearization, backstepping and sliding mode control strategies. Conclusions were then drawn based on the conducted experiments and their results.", "title": "" }, { "docid": "479fbdcd776904e9ba20fd95b4acb267", "text": "Tall building developments have been rapidly increasing worldwide. This paper reviews the evolution of tall building’s structural systems and the technological driving force behind tall building developments. For the primary structural systems, a new classification – interior structures and exterior structures – is presented. While most representative structural systems for tall buildings are discussed, the emphasis in this review paper is on current trends such as outrigger systems and diagrid structures. Auxiliary damping systems controlling building motion are also discussed. Further, contemporary “out-of-the-box” architectural design trends, such as aerodynamic and twisted forms, which directly or indirectly affect the structural performance of tall buildings, are reviewed. 
Finally, the future of structural developments in tall buildings is envisioned briefly.", "title": "" }, { "docid": "bf5874dc1fc1c968d7c41eb573d8d04a", "text": "As creativity is increasingly recognised as a vital component of entrepreneurship, researchers and educators struggle to reform enterprise pedagogy. To help in this effort, we use a personality test and open-ended interviews to explore creativity between two groups of entrepreneurship masters’ students: one at a business school and one at an engineering school. The findings indicate that both groups had high creative potential, but that engineering students channelled this into practical and incremental efforts whereas the business students were more speculative and had a clearer market focus. The findings are drawn on to make some suggestions for entrepreneurship education.", "title": "" }, { "docid": "5ce00014f84277aca0a4b7dfefc01cbb", "text": "The design of a planar dual-band wide-scan phased array is presented. The array uses novel dual-band comb-slot-loaded patch elements supporting two separate bands with a frequency ratio of 1.4:1. The antenna maintains consistent radiation patterns and incorporates a feeding configuration providing good bandwidths in both bands. The design has been experimentally validated with an X-band planar 9 × 9 array. The array supports wide-angle scanning up to a maximum of 60 ° and 50 ° at the low and high frequency bands respectively.", "title": "" }, { "docid": "e9582d921b783a378e91c7b5ddaf9d16", "text": "Pneumatic soft actuators produce flexion and meet the new needs of collaborative robotics, which is rapidly emerging in the industry landscape 4.0. The soft actuators are not only aimed at industrial progress, but their application ranges in the field of medicine and rehabilitation. Safety and reliability are the main requirements for coexistence and human-robot interaction; such requirements, together with the versatility and lightness, are the precious advantages that is offered by this new category of actuators. The objective is to develop an actuator with high compliance, low cost, high versatility and easy to produce, aimed at the realization of the fingers of a robotic hand that can faithfully reproduce the motion of a real hand. The proposed actuator is equipped with an intrinsic compliance thanks to the hyper-elastic silicone rubber used for its realization; the bending is allowed by the high compliance of the silicone and by a square-meshed gauze which contains the expansion and guides the movement through appropriate cuts in correspondence of the joints. A numerical model of the actuator is developed and an optimal configuration of the five fingers of the hand is achieved; finally, the index finger is built, on which the experimental validation tests are carried out.", "title": "" }, { "docid": "81f9a52b6834095cd7be70b39af0e7f0", "text": "In this paper we present BatchDB, an in-memory database engine designed for hybrid OLTP and OLAP workloads. BatchDB achieves good performance, provides a high level of data freshness, and minimizes load interaction between the transactional and analytical engines, thus enabling real time analysis over fresh data under tight SLAs for both OLTP and OLAP workloads.\n BatchDB relies on primary-secondary replication with dedicated replicas, each optimized for a particular workload type (OLTP, OLAP), and a light-weight propagation of transactional updates. 
The evaluation shows that for standard TPC-C and TPC-H benchmarks, BatchDB can achieve competitive performance to specialized engines for the corresponding transactional and analytical workloads, while providing a level of performance isolation and predictable runtime for hybrid workload mixes (OLTP+OLAP) otherwise unmet by existing solutions.", "title": "" }, { "docid": "3eec1e9abcb677a4bc8f054fa8827f4f", "text": "We present a neural semantic parser that translates natural language questions into executable SQL queries with two key ideas. First, we develop an encoder-decoder model, where the decoder uses a simple type system of SQL to constraint the output prediction, and propose a value-based loss when copying from input tokens. Second, we explore using the execution semantics of SQL to repair decoded programs that result in runtime error or return empty result. We propose two modelagnostics repair approaches, an ensemble model and a local program repair, and demonstrate their effectiveness over the original model. We evaluate our model on the WikiSQL dataset and show that our model achieves close to state-of-the-art results with lesser model complexity.", "title": "" }, { "docid": "241f5a88f53c929cc11ce0edce191704", "text": "Enabled by mobile and wearable technology, personal health data delivers immense and increasing value for healthcare, benefiting both care providers and medical research. The secure and convenient sharing of personal health data is crucial to the improvement of the interaction and collaboration of the healthcare industry. Faced with the potential privacy issues and vulnerabilities existing in current personal health data storage and sharing systems, as well as the concept of self-sovereign data ownership, we propose an innovative user-centric health data sharing solution by utilizing a decentralized and permissioned blockchain to protect privacy using channel formation scheme and enhance the identity management using the membership service supported by the blockchain. A mobile application is deployed to collect health data from personal wearable devices, manual input, and medical devices, and synchronize data to the cloud for data sharing with healthcare providers and health insurance companies. To preserve the integrity of health data, within each record, a proof of integrity and validation is permanently retrievable from cloud database and is anchored to the blockchain network. Moreover, for scalable and performance considerations, we adopt a tree-based data processing and batching method to handle large data sets of personal health data collected and uploaded by the mobile platform.", "title": "" } ]
scidocsrr
d2e63cfca2fea6b2e02ea3e37e6d077a
BLACKLISTED SPEAKER IDENTIFICATION USING TRIPLET NEURAL NETWORKS
[ { "docid": "c9ecb6ac5417b5fea04e5371e4250361", "text": "Deep learning has proven itself as a successful set of models for learning useful semantic representations of data. These, however, are mostly implicitly learned as part of a classification task. In this paper we propose the triplet network model, which aims to learn useful representations by distance comparisons. A similar model was defined by Wang et al. (2014), tailor made for learning a ranking for image information retrieval. Here we demonstrate using various datasets that our model learns a better representation than that of its immediate competitor, the Siamese network. We also discuss future possible usage as a framework for unsupervised learning.", "title": "" } ]
[ { "docid": "3d5fb6eff6d0d63c17ef69c8130d7a77", "text": "A new measure of event-related brain dynamics, the event-related spectral perturbation (ERSP), is introduced to study event-related dynamics of the EEG spectrum induced by, but not phase-locked to, the onset of the auditory stimuli. The ERSP reveals aspects of event-related brain dynamics not contained in the ERP average of the same response epochs. Twenty-eight subjects participated in daily auditory evoked response experiments during a 4 day study of the effects of 24 h free-field exposure to intermittent trains of 89 dB low frequency tones. During evoked response testing, the same tones were presented through headphones in random order at 5 sec intervals. No significant changes in behavioral thresholds occurred during or after free-field exposure. ERSPs induced by target pips presented in some inter-tone intervals were larger than, but shared common features with, ERSPs induced by the tones, most prominently a ridge of augmented EEG amplitude from 11 to 18 Hz, peaking 1-1.5 sec after stimulus onset. Following 3-11 h of free-field exposure, this feature was significantly smaller in tone-induced ERSPs; target-induced ERSPs were not similarly affected. These results, therefore, document systematic effects of exposure to intermittent tones on EEG brain dynamics even in the absence of changes in auditory thresholds.", "title": "" }, { "docid": "bea412d20a95c853fe06e7640acb9158", "text": "We propose a novel approach to synthesizing images that are effective for training object detectors. Starting from a small set of real images, our algorithm estimates the rendering parameters required to synthesize similar images given a coarse 3D model of the target object. These parameters can then be reused to generate an unlimited number of training images of the object of interest in arbitrary 3D poses, which can then be used to increase classification performances. A key insight of our approach is that the synthetically generated images should be similar to real images, not in terms of image quality, but rather in terms of features used during the detector training. We show in the context of drone, plane, and car detection that using such synthetically generated images yields significantly better performances than simply perturbing real images or even synthesizing images in such way that they look very realistic, as is often done when only limited amounts of training data are available. 2015 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "779d5380c72827043111d00510e32bfd", "text": "OBJECTIVE\nThe purpose of this review is 2-fold. The first is to provide a review for physiatrists already providing care for women with musculoskeletal pelvic floor pain and a resource for physiatrists who are interested in expanding their practice to include this patient population. The second is to describe how musculoskeletal dysfunctions involving the pelvic floor can be approached by the physiatrist using the same principles used to evaluate and treat others dysfunctions in the musculoskeletal system. This discussion clarifies that evaluation and treatment of pelvic floor pain of musculoskeletal origin is within the scope of practice for physiatrists. The authors review the anatomy of the pelvic floor, including the bony pelvis and joints, muscle and fascia, and the peripheral and autonomic nervous systems. Pertinent history and physical examination findings are described. 
The review concludes with a discussion of differential diagnosis and treatment of musculoskeletal pelvic floor pain in women. Improved recognition of pelvic floor dysfunction by healthcare providers will reduce impairment and disability for women with pelvic floor pain. A physiatrist is in the unique position to treat the musculoskeletal causes of this condition because it requires an expert grasp of anatomy, function, and the linked relationship between the spine and pelvis. Further research regarding musculoskeletal causes and treatment of pelvic floor pain will help validate these concepts and improve awareness and care for women limited by this condition.", "title": "" }, { "docid": "337b03633afacc96b443880ad996f013", "text": "Mobile security becomes a hot topic recently, especially in mobile payment and privacy data fields. Traditional solution can't keep a good balance between convenience and security. Against this background, a dual OS security solution named Trusted Execution Environment (TEE) is proposed and implemented by many institutions and companies. However, it raised TEE fragmentation and control problem. Addressing this issue, a mobile security infrastructure named Trusted Execution Environment Integration (TEEI) is presented to integrate multiple different TEEs. By using Trusted Virtual Machine (TVM) tech-nology, TEEI allows multiple TEEs running on one secure world on one mobile device at the same time and isolates them safely. Furthermore, a Virtual Network protocol is proposed to enable communication and cooperation among TEEs which includes TEE on TVM and TEE on SE. At last, a SOA-like Internal Trusted Service (ITS) framework is given to facilitate the development and maintenance of TEEs.", "title": "" }, { "docid": "452f71b953ddffad88cec65a4c7fbece", "text": "The password based authorization scheme for all available security systems can effortlessly be hacked by the hacker or a malicious user. One might not be able to guarantee that the person who is using the password is authentic or not. Only biometric systems are one which make offered automated authentication. There are very exceptional chances of losing the biometric identity, only if the accident of an individual may persists. Footprint based biometric system has been evaluated so far. In this paper a number of approaches of footprint recognition have been deliberated. General Terms Biometric pattern recognition, Image processing.", "title": "" }, { "docid": "8183fe0c103e2ddcab5b35549ed8629f", "text": "The performance of Douglas-Rachford splitting and the alternating direction method of multipliers (ADMM) (i.e. Douglas-Rachford splitting on the dual problem) are sensitive to conditioning of the problem data. For a restricted class of problems that enjoy a linear rate of convergence, we show in this paper how to precondition the optimization data to optimize a bound on that rate. We also generalize the preconditioning methods to problems that do not satisfy all assumptions needed to guarantee a linear convergence. The efficiency of the proposed preconditioning is confirmed in a numerical example, where improvements of more than one order of magnitude are observed compared to when no preconditioning is used.", "title": "" }, { "docid": "f4aa06f7782a22eeb5f30d0ad27eaff9", "text": "Friction effects are particularly critical for industrial robots, since they can induce large positioning errors, stick-slip motions, and limit cycles. 
This paper offers a reasoned overview of the main friction compensation techniques that have been developed in the last years, regrouping them according to the adopted kind of control strategy. Some experimental results are reported, to show how the control performances can be affected not only by the chosen method, but also by the characteristics of the available robotic architecture and of the executed task.", "title": "" }, { "docid": "6f9ae554513bba3c685f86909e31645f", "text": "Triboelectric energy harvesting has been applied to various fields, from large-scale power generation to small electronics. Triboelectric energy is generated when certain materials come into frictional contact, e.g., static electricity from rubbing a shoe on a carpet. In particular, textile-based triboelectric energy-harvesting technologies are one of the most promising approaches because they are not only flexible, light, and comfortable but also wearable. Most previous textile-based triboelectric generators (TEGs) generate energy by vertically pressing and rubbing something. However, we propose a corrugated textile-based triboelectric generator (CT-TEG) that can generate energy by stretching. Moreover, the CT-TEG is sewn into a corrugated structure that contains an effective air gap without additional spacers. The resulting CT-TEG can generate considerable energy from various deformations, not only by pressing and rubbing but also by stretching. The maximum output performances of the CT-TEG can reach up to 28.13 V and 2.71 μA with stretching and releasing motions. Additionally, we demonstrate the generation of sufficient energy from various activities of a human body to power about 54 LEDs. These results demonstrate the potential application of CT-TEGs for self-powered systems.", "title": "" }, { "docid": "719c945e9f45371f8422648e0e81178f", "text": "As technology in the cloud increases, there has been a lot of improvements in the maturity and firmness of cloud storage technologies. Many end-users and IT managers are getting very excited about the potential benefits of cloud storage, such as being able to store and retrieve data in the cloud and capitalizing on the promise of higher-performance, more scalable and cut-price storage. In this thesis, we present a typical Cloud Storage system architecture, a referral Cloud Storage model and Multi-Tenancy Cloud Storage model, value the past and the state-ofthe-art of Cloud Storage, and examine the Edge and problems that must be addressed to implement Cloud Storage. Use cases in diverse Cloud Storage offerings were also abridged. KEYWORDS—Cloud Storage, Cloud Computing, referral model, Multi-Tenancy, survey", "title": "" }, { "docid": "5956e9399cfe817aa1ddec5553883bef", "text": "Most existing zero-shot learning methods consider the problem as a visual semantic embedding one. Given the demonstrated capability of Generative Adversarial Networks(GANs) to generate images, we instead leverage GANs to imagine unseen categories from text descriptions and hence recognize novel classes with no examples being seen. Specifically, we propose a simple yet effective generative model that takes as input noisy text descriptions about an unseen class (e.g. Wikipedia articles) and generates synthesized visual features for this class. With added pseudo data, zero-shot learning is naturally converted to a traditional classification problem. 
Additionally, to preserve the inter-class discrimination of the generated features, a visual pivot regularization is proposed as an explicit supervision. Unlike previous methods using complex engineered regularizers, our approach can suppress the noise well without additional regularization. Empirically, we show that our method consistently outperforms the state of the art on the largest available benchmarks on Text-based Zero-shot Learning.", "title": "" }, { "docid": "b72d0d187fe12d1f006c8e17834af60e", "text": "Pseudoangiomatous stromal hyperplasia (PASH) is a rare benign mesenchymal proliferative lesion of the breast. In this study, we aimed to show a case of PASH with mammographic and sonographic features, which fulfill the criteria for benign lesions and to define its recently discovered elastography findings. A 49-year-old premenopausal female presented with breast pain in our outpatient surgery clinic. In ultrasound images, a hypoechoic solid mass located at the 3 o'clock position in the periareolar region of the right breast was observed. Due to it was not detected on earlier mammographies, the patient underwent a tru-cut biopsy, although the mass fulfilled the criteria for benign lesions on mammography, ultrasound, and elastography. Elastography is a new technique differentiating between benign and malignant lesions. It is also useful to determine whether a biopsy is necessary or follow-up is sufficient.", "title": "" }, { "docid": "c851bad8a1f7c8526d144453b3f2aa4f", "text": "Taxonomies of person characteristics are well developed, whereas taxonomies of psychologically important situation characteristics are underdeveloped. A working model of situation perception implies the existence of taxonomizable dimensions of psychologically meaningful, important, and consequential situation characteristics tied to situation cues, goal affordances, and behavior. Such dimensions are developed and demonstrated in a multi-method set of 6 studies. First, the \"Situational Eight DIAMONDS\" dimensions Duty, Intellect, Adversity, Mating, pOsitivity, Negativity, Deception, and Sociality (Study 1) are established from the Riverside Situational Q-Sort (Sherman, Nave, & Funder, 2010, 2012, 2013; Wagerman & Funder, 2009). Second, their rater agreement (Study 2) and associations with situation cues and goal/trait affordances (Studies 3 and 4) are examined. Finally, the usefulness of these dimensions is demonstrated by examining their predictive power of behavior (Study 5), particularly vis-à-vis measures of personality and situations (Study 6). Together, we provide extensive and compelling evidence that the DIAMONDS taxonomy is useful for organizing major dimensions of situation characteristics. We discuss the DIAMONDS taxonomy in the context of previous taxonomic approaches and sketch future research directions.", "title": "" }, { "docid": "aefa4559fa6f8e0c046cd7e02d3e1b92", "text": "The concept of smart city is considered as the new engine for economic and social growths since it is supported by the rapid development of information and communication technologies. However, each technology not only brings its advantages, but also the challenges that cities have to face in order to implement it. So, this paper addresses two research questions : « What are the most important technologies that drive the development of smart cities ?» and « what are the challenges that cities will face when adopting these technologies ? 
» Relying on a literature review of studies published between 1990 and 2017, the ensuing results show that Artificial Intelligence and Internet of Things represent the most used technologies for smart cities. So, the focus of this paper will be on these two technologies by showing their advantages and their challenges.", "title": "" }, { "docid": "123a21d9913767e1a8d1d043f6feab01", "text": "Permanent magnet synchronous machines generate parasitic torque pulsations owing to distortion of the stator flux linkage distribution, variable magnetic reluctance at the stator slots, and secondary phenomena. The consequences are speed oscillations which, although small in magnitude, deteriorate the performance of the drive in demanding applications. The parasitic effects are analysed and modelled using the complex state-variable approach. A fast current control system is employed to produce highfrequency electromagnetic torque components for compensation. A self-commissioning scheme is described which identifies the machine parameters, particularly the torque ripple functions which depend on the angular position of the rotor. Variations of permanent magnet flux density with temperature are compensated by on-line adaptation. The algorithms for adaptation and control are implemented in a standard microcontroller system without additional hardware. The effectiveness of the adaptive torque ripple compensation is demonstrated by experiments.", "title": "" }, { "docid": "ccc4994ba255084af5456925ba6c164e", "text": "This letter proposes a novel, small, printed monopole antenna for ultrawideband (UWB) applications with dual band-notch function. By cutting an inverted fork-shaped slit in the ground plane, additional resonance is excited, and hence much wider impedance bandwidth can be produced. To generate dual band-notch characteristics, we use a coupled inverted U-ring strip in the radiating patch. The measured results reveal that the presented dual band-notch monopole antenna offers a wide bandwidth with two notched bands, covering all the 5.2/5.8-GHz WLAN, 3.5/5.5-GHz WiMAX, and 4-GHz C-bands. The proposed antenna has a small size of 12<formula formulatype=\"inline\"><tex Notation=\"TeX\">$\\,\\times\\,$</tex> </formula>18 mm<formula formulatype=\"inline\"><tex Notation=\"TeX\">$^{2}$</tex> </formula> or about <formula formulatype=\"inline\"><tex Notation=\"TeX\">$0.15 \\lambda \\times 0.25 \\lambda$</tex></formula> at 4.2 GHz (first resonance frequency), which has a size reduction of 28% with respect to the previous similar antenna. Simulated and measured results are presented to validate the usefulness of the proposed antenna structure UWB applications.", "title": "" }, { "docid": "e75ec4137b0c559a1c375d97993448b0", "text": "In recent years, consumer-class UAVs have come into public view and cyber security starts to attract the attention of researchers and hackers. The tasks of positioning, navigation and return-to-home (RTH) of UAV heavily depend on GPS. However, the signal structure of civil GPS used by UAVs is completely open and unencrypted, and the signal received by ground devices is very weak. As a result, GPS signals are vulnerable to jamming and spoofing. The development of software define radio (SDR) has made GPS-spoofing easy and costless. GPS-spoofing may cause UAVs to be out of control or even hijacked. In this paper, we propose a novel method to detect GPS-spoofing based on monocular camera and IMU sensor of UAV. 
Our method was demonstrated on the UAV of DJI Phantom 4.", "title": "" }, { "docid": "bd20bbe7deb2383b6253ec3f576dcf56", "text": "Despite recent advances, the remaining bottlenecks in deep generative models are necessity of extensive training and difficulties with generalization from small number of training examples. We develop a new generative model called Generative Matching Network which is inspired by the recently proposed matching networks for one-shot learning in discriminative tasks. By conditioning on the additional input dataset, our model can instantly learn new concepts that were not available in the training data but conform to a similar generative process. The proposed framework does not explicitly restrict diversity of the conditioning data and also does not require an extensive inference procedure for training or adaptation. Our experiments on the Omniglot dataset demonstrate that Generative Matching Networks significantly improve predictive performance on the fly as more additional data is available and outperform existing state of the art conditional generative models.", "title": "" }, { "docid": "c17e6363762e0e9683b51c0704d43fa7", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "8f957dab2aa6b186b61bc309f3f2b5c3", "text": "Learning deeper convolutional neural networks has become a tendency in recent years. However, many empirical evidences suggest that performance improvement cannot be attained by simply stacking more layers. In this paper, we consider the issue from an information theoretical perspective, and propose a novel method Relay Backpropagation, which encourages the propagation of effective information through the network in training stage. By virtue of the method, we achieved the first place in ILSVRC 2015 Scene Classification Challenge. Extensive experiments on two challenging large scale datasets demonstrate the effectiveness of our method is not restricted to a specific dataset or network architecture.", "title": "" }, { "docid": "455c080ab112cd4f71a29ab84af019f5", "text": "We propose a novel image inpainting approach in which the exemplar and the sparse representation are combined together skillfully. In the process of image inpainting, often there will be such a situation: although the sum of squared differences (SSD) of exemplar patch is the smallest among all the candidate patches, there may be a noticeable visual discontinuity in the recovered image when using the exemplar patch to replace the target patch. In this case, we cleverly use the sparse representation of image over a redundant dictionary to recover the target patch, instead of using the exemplar patch to replace it, so that we can promptly prevent the occurrence and accumulation of errors, and obtain satisfied results. Experiments on a number of real and synthetic images demonstrate the effectiveness of proposed algorithm, and the recovered images can better meet the requirements of human vision.", "title": "" } ]
scidocsrr
acba07b0f0738c55be978ceeccf1a993
Emotion Recognition Based on Joint Visual and Audio Cues
[ { "docid": "8877d6753d6b7cd39ba36c074ca56b00", "text": "Perhaps the most fundamental application of affective computing will be Human-Computer Interaction (HCI) in which the computer should have the ability to detect and track the user's affective states, and make corresponding feedback. The human multi-sensor affect system defines the expectation of multimodal affect analyzer. In this paper, we present our efforts toward audio-visual HCI-related affect recognition. With HCI applications in mind, we take into account some special affective states which indicate users' cognitive/motivational states. Facing the fact that a facial expression is influenced by both an affective state and speech content, we apply a smoothing method to extract the information of the affective state from facial features. In our fusion stage, a voting method is applied to combine audio and visual modalities so that the final affect recognition accuracy is greatly improved. We test our bimodal affect recognition approach on 38 subjects with 11 HCI-related affect states. The extensive experimental results show that the average person-dependent affect recognition accuracy is almost 90% for our bimodal fusion.", "title": "" }, { "docid": "d9ffb9e4bba1205892351b1328977f6c", "text": "Bayesian network models provide an attractive framework for multimodal sensor fusion. They combine an intuitive graphical representation with efficient algorithms for inference and learning. However, the unsupervised nature of standard parameter learning algorithms for Bayesian networks can lead to poor performance in classification tasks. We have developed a supervised learning framework for Bayesian networks, which is based on the Adaboost algorithm of Schapire and Freund. Our framework covers static and dynamic Bayesian networks with both discrete and continuous states. We have tested our framework in the context of a novel multimodal HCI application: a speech-based command and control interface for a Smart Kiosk. We provide experimental evidence for the utility of our boosted learning approach.", "title": "" }, { "docid": "c8e321ac8b32643ac9cbe151bb9e5f8f", "text": "The most expressive way humans display emotions is through facial expressions. In this work we report on several advances we have made in building a system for classification of facial expressions from continuous video input. We introduce and test different Bayesian network classifiers for classifying expressions from video, focusing on changes in distribution assumptions, and feature dependency structures. In particular we use Naive–Bayes classifiers and change the distribution from Gaussian to Cauchy, and use Gaussian Tree-Augmented Naive Bayes (TAN) classifiers to learn the dependencies among different facial motion features. We also introduce a facial expression recognition from live video input using temporal cues. We exploit the existing methods and propose a new architecture of hidden Markov models (HMMs) for automatically segmenting and recognizing human facial expression from video sequences. The architecture performs both segmentation and recognition of the facial expressions automatically using a multi-level architecture composed of an HMM layer and a Markov model layer. We explore both person-dependent and person-independent recognition of expressions and compare the different methods. 2003 Elsevier Inc. All rights reserved. * Corresponding author. E-mail addresses: iracohen@ifp.uiuc.edu (I. Cohen), nicu@science.uva.nl (N. Sebe), ashutosh@ us.ibm.com (A. 
Garg), lawrence.chen@kodak.com (L. Chen), huang@ifp.uiuc.edu (T.S. Huang). 1077-3142/$ see front matter 2003 Elsevier Inc. All rights reserved. doi:10.1016/S1077-3142(03)00081-X I. Cohen et al. / Computer Vision and Image Understanding 91 (2003) 160–187 161", "title": "" } ]
[ { "docid": "e0ee4f306bb7539d408f606d3c036ac5", "text": "Despite the growing popularity of mobile web browsing, the energy consumed by a phone browser while surfing the web is poorly understood. We present an infrastructure for measuring the precise energy used by a mobile browser to render web pages. We then measure the energy needed to render financial, e-commerce, email, blogging, news and social networking sites. Our tools are sufficiently precise to measure the energy needed to render individual web elements, such as cascade style sheets (CSS), Javascript, images, and plug-in objects. Our results show that for popular sites, downloading and parsing cascade style sheets and Javascript consumes a significant fraction of the total energy needed to render the page. Using the data we collected we make concrete recommendations on how to design web pages so as to minimize the energy needed to render the page. As an example, by modifying scripts on the Wikipedia mobile site we reduced by 30% the energy needed to download and render Wikipedia pages with no change to the user experience. We conclude by estimating the point at which offloading browser computations to a remote proxy can save energy on the phone.", "title": "" }, { "docid": "10994a99bb4da87a34d835720d005668", "text": "Wireless sensor networks (WSNs), consisting of a large number of nodes to detect ambient environment, are widely deployed in a predefined area to provide more sophisticated sensing, communication, and processing capabilities, especially concerning the maintenance when hundreds or thousands of nodes are required to be deployed over wide areas at the same time. Radio frequency identification (RFID) technology, by reading the low-cost passive tags installed on objects or people, has been widely adopted in the tracing and tracking industry and can support an accurate positioning within a limited distance. Joint utilization of WSN and RFID technologies is attracting increasing attention within the Internet of Things (IoT) community, due to the potential of providing pervasive context-aware applications with advantages from both fields. WSN-RFID convergence is considered especially promising in context-aware systems with indoor positioning capabilities, where data from deployed WSN and RFID systems can be opportunistically exploited to refine and enhance the collected data with position information. In this papera, we design and evaluate a hybrid system which combines WSN and RFID technologies to provide an indoor positioning service with the capability of feeding position information into a general-purpose IoT environment. Performance of the proposed system is evaluated by means of simulations and a small-scale experimental set-up. The performed analysis demonstrates that the joint use of heterogeneous technologies can increase the robustness and the accuracy of the indoor positioning systems.", "title": "" }, { "docid": "1c6bf44a2fea9e9b1ffc015759f8986f", "text": "Convolutional neural networks (CNNs) typically suffer from slow convergence rates in training, which limits their wider application. This paper presents a new CNN learning approach, based on second-order methods, aimed at improving: a) Convergence rates of existing gradient-based methods, and b) Robustness to the choice of learning hyper-parameters (e.g., learning rate). We derive an efficient back-propagation algorithm for simultaneously computing both gradients and second derivatives of the CNN's learning objective. 
These are then input to a Long Short Term Memory (LSTM) to predict optimal updates of CNN parameters in each learning iteration. Both meta-learning of the LSTM and learning of the CNN are conducted jointly. Evaluation on image classification demonstrates that our second-order backpropagation has faster convergences rates than standard gradient-based learning for the same CNN, and that it converges to better optima leading to better performance under a budgeted time for learning. We also show that an LSTM learned to learn a small CNN network can be readily used for learning a larger network.", "title": "" }, { "docid": "564045d00d2e347252fda301a332f30a", "text": "In this contribution, the control of a reverse osmosis desalination plant by using an optimal multi-loop approach is presented. Controllers are assumed to be players of a cooperative game, whose solution is obtained by multi-objective optimization (MOO). The MOO problem is solved by applying a genetic algorithm and the final solution is found from this Pareto set. For the reverse osmosis plant a control scheme consisting of two PI control loops are proposed. Simulation results show that in some cases, as for example this desalination plant, multi-loop control with several controllers, which have been obtained by join multi-objective optimization, perform as good as more complex controllers but with less implementation effort.", "title": "" }, { "docid": "848e56ec20ccab212567087178e36979", "text": "The technologies of mobile communications pervade our society and wireless networks sense the movement of people, generating large volumes of mobility data, such as mobile phone call records and Global Positioning System (GPS) tracks. In this work, we illustrate the striking analytical power of massive collections of trajectory data in unveiling the complexity of human mobility. We present the results of a large-scale experiment, based on the detailed trajectories of tens of thousands private cars with on-board GPS receivers, tracked during weeks of ordinary mobile activity. We illustrate the knowledge discovery process that, based on these data, addresses some fundamental questions of mobility analysts: what are the frequent patterns of people’s travels? How big attractors and extraordinary events influence mobility? How to predict areas of dense traffic in the near future? How to characterize traffic jams and congestions? We also describe M-Atlas, the querying and mining language and system that makes this analytical process possible, providing the mechanisms to master the complexity of transforming raw GPS tracks into mobility knowledge. M-Atlas is centered onto the concept of a trajectory, and the mobility knowledge discovery process can be specified by M-Atlas queries that realize data transformations, data-driven estimation of the parameters of the mining methods, the quality assessment of the obtained results, the quantitative and visual exploration of the discovered behavioral patterns and models, the composition of mined patterns, models and data with further analyses and mining, and the incremental mining strategies to address scalability.", "title": "" }, { "docid": "e8e658d677a3b1a23650b25edd32fc84", "text": "The aim of the study is to facilitate the suture on the sacral promontory for laparoscopic sacrocolpopexy. 
We hypothesised that a new method of sacral anchorage using a biosynthetic material, the polyether ether ketone (PEEK) harpoon, might be adequate because of its tensile strength, might reduce complications owing to its well-known biocompatibility, and might shorten the duration of surgery. We verified the feasibility of insertion and quantified the stress resistance of the harpoons placed in the promontory in nine fresh cadavers, using four stress tests in each case. Mean values were analysed and compared using the Wilcoxon and Fisher’s exact tests. The harpoon resists for at least 30 s against a pulling force of 1 N, 5 N and 10 N. Maximum tensile strength is 21 N for the harpoon and 32 N for the suture. Harpoons broke in 6 % and threads in 22 % of cases. Harpoons detached owing to ligament rupture in 64 % of the cases. Regarding failures of the whole complex, the failure involves the harpoon in 92 % of cases and the thread in 56 %. The four possible placements of the harpoon in the promontory were equally safe in terms of resistance to traction. The PEEK harpoon can be easily anchored in the promontory. Thread is more resistant to traction than the harpoon, but the latter makes the surgical technique easier. Any of the four locations tested is feasible for anchoring the device.", "title": "" }, { "docid": "4d383a53c180d5dc4473ab9d7795639a", "text": "With pervasive applications of medical imaging in health-care, biomedical image segmentation plays a central role in quantitative analysis, clinical diagnosis, and medical intervention. Since manual annotation suffers limited reproducibility, arduous efforts, and excessive time, automatic segmentation is desired to process increasingly larger scale histopathological data. Recently, deep neural networks (DNNs), particularly fully convolutional networks (FCNs), have been widely applied to biomedical image segmentation, attaining much improved performance. At the same time, quantization of DNNs has become an active research topic, which aims to represent weights with less memory (precision) to considerably reduce memory and computation requirements of DNNs while maintaining acceptable accuracy. In this paper, we apply quantization techniques to FCNs for accurate biomedical image segmentation. Unlike existing literatures on quantization which primarily targets memory and computation complexity reduction, we apply quantization as a method to reduce overfitting in FCNs for better accuracy. Specifically, we focus on a state-of-the-art segmentation framework, suggestive annotation [26], which judiciously extracts representative annotation samples from the original training dataset, obtaining an effective small-sized balanced training dataset. We develop two new quantization processes for this framework: (1) suggestive annotation with quantization for highly representative training samples, and (2) network training with quantization for high accuracy. Extensive experiments on the MICCAI Gland dataset show that both quantization processes can improve the segmentation performance, and our proposed method exceeds the current state-of-the-art performance by up to 1%. In addition, our method has a reduction of up to 6.4x on memory usage.", "title": "" }, { "docid": "71b31941082d639dfc6178ff74fba487", "text": "This paper describes ETH Zurich’s submission to the TREC 2016 Clinical Decision Support (CDS) track. 
In three successive stages, we apply query expansion based on literal as well as semantic term matches, rank documents in a negation-aware manner and, finally, re-rank them based on clinical intent types as well as semantic and conceptual affinity to the medical case in question. Empirical results show that the proposed method can distill patient representations from raw clinical notes that result in a retrieval performance superior to that of manually constructed case descriptions.", "title": "" }, { "docid": "3be0bd7f02c941f32903f6ad2379f45b", "text": "Spinal cord injury induces the disruption of blood-spinal cord barrier and triggers a complex array of tissue responses, including endoplasmic reticulum (ER) stress and autophagy. However, the roles of ER stress and autophagy in blood-spinal cord barrier disruption have not been discussed in acute spinal cord trauma. In the present study, we respectively detected the roles of ER stress and autophagy in blood-spinal cord barrier disruption after spinal cord injury. Besides, we also detected the cross-talking between autophagy and ER stress both in vivo and in vitro. ER stress inhibitor, 4-phenylbutyric acid, and autophagy inhibitor, chloroquine, were respectively or combinedly administrated in the model of acute spinal cord injury rats. At day 1 after spinal cord injury, blood-spinal cord barrier was disrupted and activation of ER stress and autophagy were involved in the rat model of trauma. Inhibition of ER stress by treating with 4-phenylbutyric acid decreased blood-spinal cord barrier permeability, prevented the loss of tight junction (TJ) proteins and reduced autophagy activation after spinal cord injury. On the contrary, inhibition of autophagy by treating with chloroquine exacerbated blood-spinal cord barrier permeability, promoted the loss of TJ proteins and enhanced ER stress after spinal cord injury. When 4-phenylbutyric acid and chloroquine were combinedly administrated in spinal cord injury rats, chloroquine abolished the blood-spinal cord barrier protective effect of 4-phenylbutyric acid by exacerbating ER stress after spinal cord injury, indicating that the cross-talking between autophagy and ER stress may play a central role on blood-spinal cord barrier integrity in acute spinal cord injury. The present study illustrates that ER stress induced by spinal cord injury plays a detrimental role on blood-spinal cord barrier integrity, on the contrary, autophagy induced by spinal cord injury plays a furthersome role in blood-spinal cord barrier integrity in acute spinal cord injury.", "title": "" }, { "docid": "c27ba892408391234da524ffab0e7418", "text": "Sunlight and skylight are rarely rendered correctly in computer graphics. A major reason for this is high computational expense. Another is that precise atmospheric data is rarely available. We present an inexpensive analytic model that approximates full spectrum daylight for various atmospheric conditions. These conditions are parameterized using terms that users can either measure or estimate. We also present an inexpensive analytic model that approximates the effects of atmosphere (aerial perspective). These models are fielded in a number of conditions and intermediate results verified against standard literature from atmospheric science. 
Our goal is to achieve as much accuracy as possible without sacrificing usability.", "title": "" }, { "docid": "be3640467394a0e0b5a5035749b442e9", "text": "Data pre-processing is an important and critical step in the data mining process and it has a huge impact on the success of a data mining project [1]. Data pre-processing is a step of the Knowledge Discovery in Databases (KDD) process that reduces the complexity of the data and offers better conditions for subsequent analysis. Through this, the nature of the data is better understood and the data analysis is performed more accurately and efficiently. Data pre-processing is challenging as it involves extensive manual effort and time in developing the data operation scripts. There are a number of different tools and methods used for pre-processing, including: sampling, which selects a representative subset from a large population of data; transformation, which manipulates raw data to produce a single input; denoising, which removes noise from data; normalization, which organizes data for more efficient access; and feature extraction, which pulls out specified data that is significant in some particular context. Pre-processing techniques are also useful for association rule algorithms such as Apriori, Partition, and Pincer-Search, among many others.", "title": "" }, { "docid": "566913d3a3d2e8fe24d6f5ff78440b94", "text": "We describe a Digital Advertising System Simulation (DASS) for modeling advertising and its impact on user behavior. DASS is both flexible and general, and can be applied to research on a wide range of topics, such as digital attribution, ad fatigue, campaign optimization, and marketing mix modeling. This paper introduces the basic DASS simulation framework and illustrates its application to digital attribution. We show that common position-based attribution models fail to capture the true causal effects of advertising across several simple scenarios. These results lay a groundwork for the evaluation of more complex attribution models, and the development of improved models.", "title": "" }, { "docid": "3aa36b86391a2596ea1fe1fe75470362", "text": "Experimental and computational studies of the hovering performance of microcoaxial shrouded rotors were carried out. The ATI Mini Multi-Axis Force/Torque Transducer system was used to measure all six components of the force and moment. Meanwhile, numerical simulation of the flow field around the rotor was carried out using the sliding mesh method and the multiple reference frame technique in ANSYS FLUENT. The computational results agreed well with the experimental data. Several important factors, such as blade pitch angle, rotor spacing and tip clearance, which influence the performance of the shrouded coaxial rotor, are studied in detail using the CFD method in this paper. Results show that, evaluated in terms of Figure of Merit, the open coaxial rotor is suited for smaller pitch angle conditions while the shrouded coaxial rotor is suited for larger pitch angle conditions. The negative pressure region around the shroud lip is the main source of the thrust generation. In order to achieve better performance for the shrouded coaxial rotor, the tip clearance must be smaller. The thrust sharing of the upper- and lower-rotor is also discussed in this paper.", "title": "" }, { "docid": "785bd7171800d3f2f59f90838a84dc37", "text": "BACKGROUND\nCancer is considered to develop due to disruptions in the tissue microenvironment in addition to genetic disruptions in the tumor cells themselves.
The two most important microenvironmental disruptions in cancer are arguably tissue hypoxia and disrupted circadian rhythmicity. Endothelial cells, which line the luminal side of all blood vessels transport oxygen or endocrine circadian regulators to the tissue and are therefore of key importance for circadian disruption and hypoxia in tumors.\n\n\nSCOPE OF REVIEW\nHere I review recent findings on the role of circadian rhythms and hypoxia in cancer and metastasis, with particular emphasis on how these pathways link tumor metastasis to pathological functions of blood vessels. The involvement of disrupted cell metabolism and redox homeostasis in this context and the use of novel zebrafish models for such studies will be discussed.\n\n\nMAJOR CONCLUSIONS\nCircadian rhythms and hypoxia are involved in tumor metastasis on all levels from pathological deregulation of the cell to the tissue and the whole organism. Pathological tumor blood vessels cause hypoxia and disruption in circadian rhythmicity which in turn drives tumor metastasis. Zebrafish models may be used to increase our understanding of the mechanisms behind hypoxia and circadian regulation of metastasis.\n\n\nGENERAL SIGNIFICANCE\nDisrupted blood flow in tumors is currently seen as a therapeutic goal in cancer treatment, but may drive invasion and metastasis via pathological hypoxia and circadian clock signaling. Understanding the molecular details behind such regulation is important to optimize treatment for patients with solid tumors in the future. This article is part of a Special Issue entitled Redox regulation of differentiation and de-differentiation.", "title": "" }, { "docid": "a398f3f5b670a9d2c9ae8ad84a4a3cb8", "text": "This project deals with online simultaneous localization and mapping (SLAM) problem without taking any assistance from Global Positioning System (GPS) and Inertial Measurement Unit (IMU). The main aim of this project is to perform online odometry and mapping in real time using a 2-axis lidar mounted on a robot. This involves use of two algorithms, the first of which runs at a higher frequency and uses the collected data to estimate velocity of the lidar which is fed to the second algorithm, a scan registration and mapping algorithm, to perform accurate matching of point cloud data.", "title": "" }, { "docid": "fada1434ec6e060eee9a2431688f82f3", "text": "Neural language models (NLMs) have been able to improve machine translation (MT) thanks to their ability to generalize well to long contexts. Despite recent successes of deep neural networks in speech and vision, the general practice in MT is to incorporate NLMs with only one or two hidden layers and there have not been clear results on whether having more layers helps. In this paper, we demonstrate that deep NLMs with three or four layers outperform those with fewer layers in terms of both the perplexity and the translation quality. We combine various techniques to successfully train deep NLMs that jointly condition on both the source and target contexts. When reranking nbest lists of a strong web-forum baseline, our deep models yield an average boost of 0.5 TER / 0.5 BLEU points compared to using a shallow NLM. 
Additionally, we adapt our models to a new sms-chat domain and obtain a similar gain of 1.0 TER / 0.5 BLEU points.", "title": "" }, { "docid": "3ca76a840ac35d94677fa45c767e61f1", "text": "A three dimensional (3-D) imaging system is implemented by employing 2-D range migration algorithm (RMA) for frequency modulated continuous wave synthetic aperture radar (FMCW-SAR). The backscattered data of a 1-D synthetic aperture at specific altitudes are coherently integrated to form 2-D images. These 2-D images at different altitudes are stitched vertically to form a 3-D image. Numerical simulation for near-field scenario are also presented to validate the proposed algorithm.", "title": "" }, { "docid": "e82681b5140f3a9b283bbd02870f18d5", "text": "Employee turnover has been identified as a key issue for organizations because of its adverse impact on work place productivity and long term growth strategies. To solve this problem, organizations use machine learning techniques to predict employee turnover. Accurate predictions enable organizations to take action for retention or succession planning of employees. However, the data for this modeling problem comes from HR Information Systems (HRIS); these are typically under-funded compared to the Information Systems of other domains in the organization which are directly related to its priorities. This leads to the prevalence of noise in the data that renders predictive models prone to over-fitting and hence inaccurate. This is the key challenge that is the focus of this paper, and one that has not been addressed historically. The novel contribution of this paper is to explore the application of Extreme Gradient Boosting (XGBoost) technique which is more robust because of its regularization formulation. Data from the HRIS of a global retailer is used to compare XGBoost against six historically used supervised classifiers and demonstrate its significantly higher accuracy for predicting employee turnover. Keywords—turnover prediction; machine learning; extreme gradient boosting; supervised classification; regularization", "title": "" }, { "docid": "ba573c3dd5206e7f71be11d030060484", "text": "The availability of camera phones provides people with a mobile platform for decoding bar codes, whereas conventional scanners lack mobility. However, using a normal camera phone in such applications is challenging due to the out-of-focus problem. In this paper, we present the research effort on the bar code reading algorithms using a VGA camera phone, NOKIA 7650. EAN-13, a widely used 1D bar code standard, is taken as an example to show the efficiency of the method. A wavelet-based bar code region location and knowledge-based bar code segmentation scheme is applied to extract bar code characters from poor-quality images. All the segmented bar code characters are input to the recognition engine, and based on the recognition distance, the bar code character string with the smallest total distance is output as the final recognition result of the bar code. In order to train an efficient recognition engine, the modified Generalized Learning Vector Quantization (GLVQ) method is designed for optimizing a feature extraction matrix and the class reference vectors. 19 584 samples segmented from more than 1000 bar code images captured by NOKIA 7650 are involved in the training process. Testing on 292 bar code images taken by the same phone, the correct recognition rate of the entire bar code set reaches 85.62%. 
We are confident that auto focus or macro modes on camera phones will bring the presented method into real world mobile use.", "title": "" } ]
scidocsrr
9705b47395ef0884d8739af8b47e69b1
Tell me a story--a conceptual exploration of storytelling in healthcare education.
[ { "docid": "4ade01af5fd850722fd690a5d8f938f4", "text": "IT may appear blasphemous to paraphrase the title of the classic article of Vannevar Bush but it may be a mitigating factor that it is done to pay tribute to another legendary scientist, Eugene Garfield. His ideas of citationbased searching, resource discovery and quantitative evaluation of publications serve as the basis for many of the most innovative and powerful online information services these days. Bush 60 years ago contemplated – among many other things – an information workstation, the Memex. A researcher would use it to annotate, organize, link, store, and retrieve microfilmed documents. He is acknowledged today as the forefather of the hypertext system, which in turn, is the backbone of the Internet. He outlined his thoughts in an essay published in the Atlantic Monthly. Maybe because of using a nonscientific outlet the paper was hardly quoted and cited in scholarly and professional journals for 30 years. Understandably, the Atlantic Monthly was not covered by the few, specialized abstracting and indexing databases of scientific literature. Such general interest magazines are not source journals in either the Web of Science (WoS), or Scopus databases. However, records for items which cite the ‘As We May Think’ article of Bush (also known as the ‘Memex’ paper) are listed with appropriate bibliographic information. Google Scholar (G-S) lists the records for the Memex paper and many of its citing papers. It is a rather confusing list with many dead links or otherwise dysfunctional links, and a hodge-podge of information related to Bush. It is quite telling that (based on data from the 1945– 2005 edition of WoS) the article of Bush gathered almost 90% of all its 712 citations in WoS between 1975 and 2005, peaking in 1999 with 45 citations in that year alone. Undoubtedly, this proportion is likely to be distorted because far fewer source articles from far fewer journals were processed by the Institute for Scientific Information for 1945–1974 than for 1975–2005. Scopus identifies 267 papers citing the Bush article. The main reason for the discrepancy is that Scopus includes cited references only from 1995 onward, while WoS does so from 1945. Bush’s impatience with the limitations imposed by the traditional classification and indexing tools and practices of the time is palpable. It is worth to quote it as a reminder. Interestingly, he brings up the terms ‘web of trails’ and ‘association of thoughts’ which establishes the link between him and Garfield.", "title": "" } ]
[ { "docid": "4da3f01ac76da39be45ab39c1e46bcf0", "text": "Depth cameras are low-cost, plug & play solution to generate point cloud. 3D depth camera yields depth images which do not convey the actual distance. A 3D camera driver does not support raw depth data output, these are usually filtered and calibrated as per the sensor specifications and hence a method is required to map every pixel back to its original point in 3D space. This paper demonstrates the method to triangulate a pixel from the 2D depth image back to its actual position in 3D space. Further this method illustrates the independence of this mapping operation, which facilitates parallel computing. Triangulation method and ratios between the pixel positions and camera parameters are used to estimate the true position in 3D space. The algorithm performance can be increased by 70% by the usage of TPL libraries. This performance differs from processor to processor", "title": "" }, { "docid": "9d5d667c6d621bd90a688c993065f5df", "text": "Creative individuals increasingly rely on online crowdfunding platforms to crowdsource funding for new ventures. For novice crowdfunding project creators, however, there are few resources to turn to for assistance in the planning of crowdfunding projects. We are building a tool for novice project creators to get feedback on their project designs. One component of this tool is a comparison to existing projects. As such, we have applied a variety of machine learning classifiers to learn the concept of a successful online crowdfunding project at the time of project launch. Currently our classifier can predict with roughly 68% accuracy, whether a project will be successful or not. The classification results will eventually power a prediction segment of the proposed feedback tool. Future work involves turning the results of the machine learning algorithms into human-readable content and integrating this content into the feedback tool.", "title": "" }, { "docid": "80e26e5bcbadf034896fcd206cd16099", "text": "This paper focuses on localization that serves as a smart service. Among the primary services provided by Internet of Things (IoT), localization offers automatically discoverable services. Knowledge relating to an object's position, especially when combined with other information collected from sensors and shared with other smart objects, allows us to develop intelligent systems to fast respond to changes in an environment. Today, wireless sensor networks (WSNs) have become a critical technology for various kinds of smart environments through which different kinds of devices can connect with each other coinciding with the principles of IoT. Among various WSN techniques designed for positioning an unknown node, the trilateration approach based on the received signal strength is the most suitable for localization due to its implementation simplicity and low hardware requirement. However, its performance is susceptible to external factors, such as the number of people present in a room, the shape and dimension of an environment, and the positions of objects and devices. To improve the localization accuracy of trilateration, we develop a novel distributed localization algorithm with a dynamic-circle-expanding mechanism capable of more accurately establishing the geometric relationship between an unknown node and reference nodes. The results of real world experiments and computer simulation show that the average error of position estimation is 0.67 and 0.225 m in the best cases, respectively. 
This suggests that the proposed localization algorithm outperforms other existing methods.", "title": "" }, { "docid": "d0ad2b6a36dce62f650323cb5dd40bc9", "text": "If two hospitals are providing identical services in all respects, except for the brand name, why are customers willing to pay more for one hospital than the other? That is, the brand name is not just a name, but a name that contains value (brand equity). Brand equity is the value that the brand name endows to the product, such that consumers are willing to pay a premium price for products with the particular brand name. Accordingly, a company needs to manage its brand carefully so that its brand equity does not depreciate. Although measuring brand equity is important, managers have no brand equity index that is psychometrically robust and parsimonious enough for practice. Indeed, index construction is quite different from conventional scale development. Moreover, researchers might still be unaware of the potential appropriateness of formative indicators for operationalizing particular constructs. Toward this end, drawing on the brand equity literature and following the index construction procedure, this study creates a brand equity index for a hospital. The results reveal a parsimonious five-indicator brand equity index that can adequately capture the full domain of brand equity. This study also illustrates the differences between index construction and scale development.", "title": "" }, { "docid": "9a522060a52474850ff328cef5ea4121", "text": "Mild cognitive impairment (MCI) is the prodromal stage of Alzheimer's disease (AD). Identifying MCI subjects who are at high risk of converting to AD is crucial for effective treatments. In this study, a deep learning approach based on convolutional neural networks (CNN), is designed to accurately predict MCI-to-AD conversion with magnetic resonance imaging (MRI) data. First, MRI images are prepared with age-correction and other processing. Second, local patches, which are assembled into 2.5 dimensions, are extracted from these images. Then, the patches from AD and normal controls (NC) are used to train a CNN to identify deep learning features of MCI subjects. After that, structural brain image features are mined with FreeSurfer to assist CNN. Finally, both types of features are fed into an extreme learning machine classifier to predict the AD conversion. The proposed approach is validated on the standardized MRI datasets from the Alzheimer's Disease Neuroimaging Initiative (ADNI) project. This approach achieves an accuracy of 79.9% and an area under the receiver operating characteristic curve (AUC) of 86.1% in leave-one-out cross validations. Compared with other state-of-the-art methods, the proposed one outperforms others with higher accuracy and AUC, while keeping a good balance between the sensitivity and specificity. Results demonstrate great potentials of the proposed CNN-based approach for the prediction of MCI-to-AD conversion with solely MRI data. Age correction and assisted structural brain image features can boost the prediction performance of CNN.", "title": "" }, { "docid": "44bee5e310c91c778e874d347c64bc18", "text": "In this paper, we consider a deterministic global optimization algorithm for solving a general linear sum of ratios (LFP). First, an equivalent optimization problem (LFP1) of LFP is derived by exploiting the characteristics of the constraints of LFP. 
By a new linearizing method, the linearization relaxation function of the objective function of LFP1 is derived; then the linear relaxation programming (RLP) of LFP1 is constructed, and the proposed branch and bound algorithm is convergent to the global minimum through the successive refinement of the linear relaxation of the feasible region of the objective function and the solutions of a series of RLP. Finally, numerical experiments are given to illustrate the feasibility of the proposed algorithm. 2006 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "7c3c06529ae52055de668cbefce39c5f", "text": "Context-aware recommendation algorithms focus on refining recommendations by considering additional information available to the system. This topic has gained a lot of attention recently. Among others, several factorization methods were proposed to solve the problem, although most of them assume explicit feedback which strongly limits their real-world applicability. While these algorithms apply various loss functions and optimization strategies, the preference modeling under context is less explored due to the lack of tools allowing for easy experimentation with various models. As context dimensions are introduced beyond users and items, the space of possible preference models and the importance of proper modeling largely increases. In this paper we propose a general factorization framework (GFF), a single flexible algorithm that takes the preference model as an input and computes latent feature matrices for the input dimensions. GFF allows us to easily experiment with various linear models on any context-aware recommendation task, be it explicit or implicit feedback based. Its scaling properties make it usable under real-life circumstances as well. We demonstrate the framework's potential by exploring various preference models on a 4-dimensional context-aware problem with contexts that are available for almost any real-life dataset. We show in our experiments—performed on five real-life, implicit feedback datasets—that proper preference modelling significantly increases recommendation accuracy, and previously unused models outperform the traditional ones. Novel models in GFF also outperform state-of-the-art factorization algorithms. We also extend the method to be fully compliant with the Multidimensional Dataspace Model, one of the most extensive data models of context-enriched data. Extended GFF allows the seamless incorporation of information into the factorization framework beyond context, like item metadata, social networks, session information, etc. Preliminary experiments show great potential of this capability.", "title": "" }, { "docid": "c59cae78ce3482450776755b9d9d5199", "text": "Traditional information systems return answers after a user submits a complete query. Users often feel "left in the dark" when they have limited knowledge about the underlying data and have to use a try-and-see approach for finding information. A recent trend of supporting autocomplete in these systems is a first step toward solving this problem. In this paper, we study a new information-access paradigm, called "type-ahead search", in which the system searches the underlying data "on the fly" as the user types in query keywords. It extends autocomplete interfaces by allowing keywords to appear at different places in the underlying data. This framework allows users to explore data as they type, even in the presence of minor errors. 
We study research challenges in this framework for large amounts of data. Since each keystroke of the user could invoke a query on the backend, we need efficient algorithms to process each query within milliseconds. We develop various incremental-search algorithms for both single-keyword queries and multi-keyword queries, using previously computed and cached results in order to achieve a high interactive speed. We develop novel techniques to support fuzzy search by allowing mismatches between query keywords and answers. We have deployed several real prototypes using these techniques. One of them has been deployed to support type-ahead search on the UC Irvine people directory, which has been used regularly and well received by users due to its friendly interface and high efficiency.", "title": "" }, { "docid": "07a4f79dbe16be70877724b142013072", "text": "Safety planning in the construction industry is generally done separately from the project execution planning. This separation creates difficulties for safety engineers to analyze what, when, why and where safety measures are needed for preventing accidents. Lack of information and integration of available data (safety plan, project schedule, 2D project drawings) during the planning stage often results in scheduling work activities with overlapping space needs that then can create hazardous conditions, for example, work above other crew. These space requirements are time dependent and often neglected due to the manual effort that is required to handle the data. Representation of project-specific activity space requirements in 4D models hardly happen along with schedule and work break-down structure. Even with full cooperation of all related stakeholders, current safety planning and execution still largely depends on manual observation and past experiences. The traditional manual observation is inefficient, error-prone, and the observed result can be easily effected by subjective judgments. This paper will demonstrate the development of an automated safety code checking tool for Building Information Modeling (BIM), work breakdown structure, and project schedules in conjunction with safety criteria to reduce the potential for accidents on construction projects. The automated safety compliance rule checker code builds on existing applications for building code compliance checking, structural analysis, and constructability analysis etc. and also the advances in 4D simulations for scheduling. Preliminary results demonstrate a computer-based automated tool can assist in safety planning and execution of projects on a day to day basis.", "title": "" }, { "docid": "70374d2cbf730fab13c3e126359b59e8", "text": "We define a new distance measure the resistor-average distance between two probability distributions that is closely related to the Kullback-Leibler distance. While the KullbackLeibler distance is asymmetric in the two distributions, the resistor-average distance is not. It arises from geometric considerations similar to those used to derive the Chernoff distance. Determining its relation to well-known distance measures reveals a new way to depict how commonly used distance measures relate to each other.", "title": "" }, { "docid": "d3b0957b31f47620c0fa8e65a1cc086a", "text": "In this paper, we propose series of algorithms for detecting change points in time-series data based on subspace identification, meaning a geometric approach for estimating linear state-space models behind time-series data. 
Our algorithms are derived from the principle that the subspace spanned by the columns of an observability matrix and the one spanned by the subsequences of time-series data are approximately equivalent. In this paper, we derive a batch-type algorithm applicable to ordinary time-series data, i.e. consisting of only output series, and then introduce the online version of the algorithm and the extension to be available with input-output time-series data. We illustrate the effectiveness of our algorithms with comparative experiments using some artificial and real datasets.", "title": "" }, { "docid": "aa80e0ad489c03ec94a1835d6d4907a3", "text": "Cloud computing is a term coined to a network that offers incredible processing power, a wide array of storage space and unbelievable speed of computation. Social media channels, corporate structures and individual consumers are all switching to the magnificent world of cloud computing. The flip side to this coin is that with cloud storage emerges the security issues of confidentiality, data integrity and data availability. Since the “cloud” is a mere collection of tangible super computers spread across the world, authentication and authorization for data access is more than a necessity. Our work attempts to overcome these security threats. The proposed methodology suggests the encryption of the files to be uploaded on the cloud. The integrity and confidentiality of the data uploaded by the user is ensured doubly by not only encrypting it but also providing access to the data only on successful authentication. KeywordsCloud computing, security, encryption, password based AES algorithm", "title": "" }, { "docid": "2126c47fe320af2d908ec01a426419ce", "text": "Stretching has long been used in many physical activities to increase range of motion (ROM) around a joint. Stretching also has other acute effects on the neuromuscular system. For instance, significant reductions in maximal voluntary strength, muscle power or evoked contractile properties have been recorded immediately after a single bout of static stretching, raising interest in other stretching modalities. Thus, the effects of dynamic stretching on subsequent muscular performance have been questioned. This review aimed to investigate performance and physiological alterations following dynamic stretching. There is a substantial amount of evidence pointing out the positive effects on ROM and subsequent performance (force, power, sprint and jump). The larger ROM would be mainly attributable to reduced stiffness of the muscle-tendon unit, while the improved muscular performance to temperature and potentiation-related mechanisms caused by the voluntary contraction associated with dynamic stretching. Therefore, if the goal of a warm-up is to increase joint ROM and to enhance muscle force and/or power, dynamic stretching seems to be a suitable alternative to static stretching. Nevertheless, numerous studies reporting no alteration or even performance impairment have highlighted possible mitigating factors (such as stretch duration, amplitude or velocity). Accordingly, ballistic stretching, a form of dynamic stretching with greater velocities, would be less beneficial than controlled dynamic stretching. Notwithstanding, the literature shows that inconsistent description of stretch procedures has been an important deterrent to reaching a clear consensus. 
In this review, we highlight the need for future studies reporting homogeneous, clearly described stretching protocols, and propose a clarified stretching terminology and methodology.", "title": "" }, { "docid": "9f6da52c8ea3ba605ecbed71e020d31a", "text": "With the exponential growth of information being transmitted as a result of various networks, the issues related to providing security to transmit information have considerably increased. Mathematical models were proposed to consolidate the data being transmitted and to protect the same from being tampered with. Work was carried out on the application of 1D and 2D cellular automata (CA) rules for data encryption and decryption in cryptography. A lot more work needs to be done to develop suitable algorithms and 3D CA rules for encryption and description of 3D chaotic information systems. Suitable coding for the algorithms are developed and the results are evaluated for the performance of the algorithms. Here 3D cellular automata encryption and decryption algorithms are used to provide security of data by arranging plain texts and images into layers of cellular automata by using the cellular automata neighbourhood system. This has resulted in highest order of security for transmitted data.", "title": "" }, { "docid": "19067b3d0f951bad90c80688371532fc", "text": "Research in Artificial Intelligence is breaking technology barriers every day. New algorithms and high performance computing are making things possible which we could only have imagined earlier. Though the enhancements in AI are making life easier for human beings day by day, there is constant fear that AI based systems will pose a threat to humanity. People in AI community have diverse set of opinions regarding the pros and cons of AI mimicking human behavior. Instead of worrying about AI advancements, we propose a novel idea of cognitive agents, including both human and machines, living together in a complex adaptive ecosystem, collaborating on human computation for producing essential social goods while promoting sustenance, survival and evolution of the agents’ life cycle. We highlight several research challenges and technology barriers in achieving this goal. We propose a governance mechanism around this ecosystem to ensure ethical behaviors of all cognitive agents. Along with a novel set of use-cases of Cogniculture , we discuss the road map ahead", "title": "" }, { "docid": "7e4c00d8f17166cbfb3bdac8d5e5ad09", "text": "Twitter is now used to distribute substantive content such as breaking news, increasing the importance of assessing the credibility of tweets. As users increasingly access tweets through search, they have less information on which to base credibility judgments as compared to consuming content from direct social network connections. We present survey results regarding users' perceptions of tweet credibility. We find a disparity between features users consider relevant to credibility assessment and those currently revealed by search engines. We then conducted two experiments in which we systematically manipulated several features of tweets to assess their impact on credibility ratings. We show that users are poor judges of truthfulness based on content alone, and instead are influenced by heuristics such as user name when making credibility assessments. Based on these findings, we discuss strategies tweet authors can use to enhance their credibility with readers (and strategies astute readers should be aware of!). 
We propose design improvements for displaying social search results so as to better convey credibility.", "title": "" }, { "docid": "59323291555a82ef99013bd4510b3020", "text": "This paper aims to classify and analyze recent as well as classic image registration techniques. Image registration is the process of superimposing images of the same scene taken at different times, from different locations, and by different sensors. It is a key enabling technology in medical image analysis for integrating and analyzing information from various modalities. Basically, image registration finds temporal correspondences between the set of images and uses a transformation model to infer features from these correspondences. The approaches for image registration can be classified according to their nature, viz. area-based and feature-based, and dimensionality, viz. spatial domain and frequency domain. The procedure of image registration by intensity-based model, spatial domain transform, rigid transform and non-rigid transform based on the above-mentioned classification has been performed, and the quality of the image is measured by three quality parameters: SNR, PSNR and MSE. The techniques have been implemented, and it is inferred that the non-rigid transform exhibits higher perceptual quality and offers a visually sharper image than the other techniques. Problematic issues of image registration techniques and the outlook for future research are discussed. This work may serve as one of the comprehensive reference sources for researchers involved in image registration.", "title": "" }, { "docid": "68dc61e0c6b33729f08cdd73e8e86096", "text": "Many important data analysis applications present with severely imbalanced datasets with respect to the target variable. A typical example is medical image analysis, where positive samples are scarce, while performance is commonly estimated against the correct detection of these positive examples. We approach this challenge by formulating the problem as anomaly detection with generative models. We train a generative model without supervision on the 'negative' (common) datapoints and use this model to estimate the likelihood of unseen data. A successful model allows us to detect the 'positive' case as low likelihood datapoints. In this position paper, we present the use of state-of-the-art deep generative models (GAN and VAE) for the estimation of a likelihood of the data. Our results show that on the one hand both GANs and VAEs are able to separate the 'positive' and 'negative' samples in the MNIST case. On the other hand, for the NLST case, neither GANs nor VAEs were able to capture the complexity of the data and discriminate anomalies at the level that this task requires. These results show that even though there are a number of successes presented in the literature for using generative models in similar applications, there remain further challenges for broad successful implementation.", "title": "" }, { "docid": "a6a364819f397a8e28ac0b19480253cc", "text": "News agencies and other news providers or consumers are confronted with the task of extracting events from news articles. This is done i) either to monitor and, hence, to be informed about events of specific kinds over time and/or ii) to react to events immediately. In the past, several promising approaches to extracting events from text have been proposed. Besides purely statistically-based approaches there are methods to represent events in a semantically-structured form, such as graphs containing actions (predicates), participants (entities), etc. 
However, it turns out to be very difficult to automatically determine whether an event is real or not. In this paper, we give an overview of approaches which proposed solutions for this research problem. We show that there is no gold standard dataset where real events are annotated in text documents in a fine-grained, semantically-enriched way. We present a methodology of creating such a dataset with the help of crowdsourcing and present preliminary results.", "title": "" }, { "docid": "a6f08476ea81c50a36497bd65137ca16", "text": "In this paper we tackle the inversion of large-scale dense matrices via conventional matrix factorizations (LU, Cholesky, LDL ) and the Gauss-Jordan method on hybrid platforms consisting of a multi-core CPU and a many-core graphics processor (GPU). Specifically, we introduce the different matrix inversion algorithms using a unified framework based on the notation from the FLAME project; we develop hybrid implementations for those matrix operations underlying the algorithms, alternative to those in existing libraries for singleGPU systems; and we perform an extensive experimental study on a platform equipped with state-of-the-art general-purpose architectures from Intel and a “Fermi” GPU from NVIDIA that exposes the efficiency of the different inversion approaches. Our study and experimental results show the simplicity and performance advantage of the GJE-based inversion methods, and the difficulties associated with the symmetric indefinite case.", "title": "" } ]
scidocsrr
ad1d0433a6ca7d8d26521c8a6206608c
Actions speak as loud as words: predicting relationships from social behavior data
[ { "docid": "cae43bdbf48e694b7fb509ea3b3392f1", "text": "As user-generated content and interactions have overtaken the web as the default mode of use, questions of whom and what to trust have become increasingly important. Fortunately, online social networks and social media have made it easy for users to indicate whom they trust and whom they do not. However, this does not solve the problem since each user is only likely to know a tiny fraction of other users, we must have methods for inferring trust - and distrust - between users who do not know one another. In this paper, we present a new method for computing both trust and distrust (i.e., positive and negative trust). We do this by combining an inference algorithm that relies on a probabilistic interpretation of trust based on random graphs with a modified spring-embedding algorithm. Our algorithm correctly classifies hidden trust edges as positive or negative with high accuracy. These results are useful in a wide range of social web applications where trust is important to user behavior and satisfaction.", "title": "" }, { "docid": "b12d3dfe42e5b7ee06821be7dcd11ab9", "text": "Social media is a place where users present themselves to the world, revealing personal details and insights into their lives. We are beginning to understand how some of this information can be utilized to improve the users' experiences with interfaces and with one another. In this paper, we are interested in the personality of users. Personality has been shown to be relevant to many types of interactions, it has been shown to be useful in predicting job satisfaction, professional and romantic relationship success, and even preference for different interfaces. Until now, to accurately gauge users' personalities, they needed to take a personality test. This made it impractical to use personality analysis in many social media domains. In this paper, we present a method by which a user's personality can be accurately predicted through the publicly available information on their Twitter profile. We will describe the type of data collected, our methods of analysis, and the machine learning techniques that allow us to successfully predict personality. We then discuss the implications this has for social media design, interface design, and broader domains.", "title": "" } ]
[ { "docid": "5e0110f6ae9698e8dd92aad22f1d9fcf", "text": "Social networking sites (SNS) are especially attractive for adolescents, but it has also been shown that these users can suffer from negative psychological consequences when using these sites excessively. We analyze the role of fear of missing out (FOMO) and intensity of SNS use for explaining the link between psychopathological symptoms and negative consequences of SNS use via mobile devices. In an online survey, 1468 Spanish-speaking Latin-American social media users between 16 and 18 years old completed the Hospital Anxiety and Depression Scale (HADS), the Social Networking Intensity scale (SNI), the FOMO scale (FOMOs), and a questionnaire on negative consequences of using SNS via mobile device (CERM). Using structural equation modeling, it was found that both FOMO and SNI mediate the link between psychopathology and CERM, but by different mechanisms. Additionally, for girls, feeling depressed seems to trigger higher SNS involvement. For boys, anxiety triggers higher SNS involvement.", "title": "" }, { "docid": "0441fb016923cd0b7676d3219951c230", "text": "Globally modeling and reasoning over relations between regions can be beneficial for many computer vision tasks on both images and videos. Convolutional Neural Networks (CNNs) excel at modeling local relations by convolution operations, but they are typically inefficient at capturing global relations between distant regions and require stacking multiple convolution layers. In this work, we propose a new approach for reasoning globally in which a set of features are globally aggregated over the coordinate space and then projected to an interaction space where relational reasoning can be efficiently computed. After reasoning, relation-aware features are distributed back to the original coordinate space for down-stream tasks. We further present a highly efficient instantiation of the proposed approach and introduce the Global Reasoning unit (GloRe unit) that implements the coordinate-interaction space mapping by weighted global pooling and weighted broadcasting, and the relation reasoning via graph convolution on a small graph in interaction space. The proposed GloRe unit is lightweight, end-to-end trainable and can be easily plugged into existing CNNs for a wide range of tasks. Extensive experiments show our GloRe unit can consistently boost the performance of state-of-the-art backbone architectures, including ResNet [15, 16], ResNeXt [33], SE-Net [18] and DPN [9], for both 2D and 3D CNNs, on image classification, semantic segmentation and video action recognition task.", "title": "" }, { "docid": "48623054af5217d48b05aed57a67ae66", "text": "This paper proposes an ontology-based approach to analyzing and assessing the security posture for software products. It provides measurements of trust for a software product based on its security requirements and evidence of assurance, which are retrieved from an ontology built for vulnerability management. Our approach differentiates with the previous work in the following aspects: (1) It is a holistic approach emphasizing that the system assurance cannot be determined or explained by its component assurance alone. Instead, the software system as a whole determines its assurance level. (2) Our approach is based on widely accepted standards such as CVSS, CVE, CWE, CPE, and CAPEC. Our ontology integrated these standards seamlessly thus provides a solid foundation for security assessment. 
(3) Automated tools have been built to support our approach, delivering the environmental scores for software products.", "title": "" }, { "docid": "0c0388754f2964f1db05df3b62cd7389", "text": "Considerable research has been devoted to utilizing multimodal features for better understanding multimedia data. However, two core research issues have not yet been adequately addressed. First, given a set of features extracted from multiple media sources (e.g., extracted from the visual, audio, and caption track of videos), how do we determine the best modalities? Second, once a set of modalities has been identified, how do we best fuse them to map to semantics? In this paper, we propose a two-step approach. The first step finds <i>statistically independent modalities</i> from raw features. In the second step, we use <i>super-kernel fusion</i> to determine the optimal combination of individual modalities. We carefully analyze the tradeoffs between three design factors that affect fusion performance: <i>modality independence</i>, <i>curse of dimensionality</i>, and <i>fusion-model complexity</i>. Through analytical and empirical studies, we demonstrate that our two-step approach, which achieves a careful balance of the three design factors, can improve class-prediction accuracy over traditional techniques.", "title": "" }, { "docid": "8dd6a3cbe9ddb4c50beb83355db5aa5a", "text": "Fuzzy logic controllers have gained popularity in the past few decades with highly successful implementation in many fields. Fuzzy logic enables designers to control complex systems more effectively than traditional methods. Teaching students fuzzy logic in a laboratory can be a time-consuming and an expensive task. This paper presents a low-cost educational microcontroller-based tool for fuzzy logic controlled line following mobile robot. The robot is used in the second year of undergraduate teaching in an elective course in the department of computer engineering of the Near East University. Hardware details of the robot and the software implementing the fuzzy logic control algorithm are given in the paper. 2009 Wiley Periodicals, Inc. Comput Appl Eng Educ; Published online in Wiley InterScience (www.interscience.wiley.com); DOI 10.1002/cae.20347", "title": "" }, { "docid": "0da4b25ce3d4449147f7258d0189165f", "text": "We present Listen, Attend and Spell (LAS), a neural speech recognizer that transcribes speech utterances directly to characters without pronunciation models, HMMs or other components of traditional speech recognizers. In LAS, the neural network architecture subsumes the acoustic, pronunciation and language models making it not only an end-to-end trained system but an end-to-end model. In contrast to DNN-HMM, CTC and most other models, LAS makes no independence assumptions about the probability distribution of the output character sequences given the acoustic sequence. Our system has two components: a listener and a speller. The listener is a pyramidal recurrent network encoder that accepts filter bank spectra as inputs. The speller is an attention-based recurrent network decoder that emits each character conditioned on all previous characters, and the entire acoustic sequence. On a Google voice search task, LAS achieves a WER of 14.1% without a dictionary or an external language model and 10.3% with language model rescoring over the top 32 beams. 
In comparison, the state-of-the-art CLDNN-HMM model achieves a WER of 8.0% on the same set.", "title": "" }, { "docid": "62e445cabbb5c79375f35d7b93f9a30d", "text": "The recent outbreak of indie games has popularized volumetric terrains to a new level, although video games have used them for decades. These terrains contain geological data, such as materials or cave systems. To improve the exploration experience and due to the large amount of data needed to construct volumetric terrains, industry uses procedural methods to generate them. However, they use their own methods, which are focused on their specific problem domains, lacking customization features. Besides, the evaluation of the procedural terrain generators remains an open issue in this field since no standard metrics have been established yet. In this paper, we propose a new approach to procedural volumetric terrains. It generates completely customizable volumetric terrains with layered materials and other features (e.g., mineral veins, underground caves, material mixtures and underground material flow). The method allows the designer to specify the characteristics of the terrain using intuitive parameters. Additionally, it uses a specific representation for the terrain based on stacked material structures, reducing memory requirements. To overcome the problem in the evaluation of the generators, we propose a new set of metrics for the generated content.", "title": "" }, { "docid": "3afea784f4a9eb635d444a503266d7cd", "text": "Gallium nitride high-electron mobility transistors (GaN HEMTs) have attractive properties, low on-resistances and fast switching speeds. This paper presents the characteristics of a normally-on GaN HEMT that we fabricated. Further, the circuit operation of a Class-E amplifier is analyzed. Experimental results demonstrate the excellent performance of the gate drive circuit for the normally-on GaN HEMT and the 13.56MHz radio frequency (RF) power amplifier.", "title": "" }, { "docid": "61c3f890943c34736564680dca3aae4a", "text": "Secondary nocturnal enuresis accounts for about one quarter of patients with bed-wetting. Although a psychological cause is responsible in some children, various other causes are possible and should be considered. This article reviews the epidemiology, psychological and social impact, causes, investigation, management, and prognosis of secondary nocturnal enuresis.", "title": "" }, { "docid": "a2246533e2973193586e2a3c8e672c10", "text": "Krill Herd (KH) optimization algorithm was recently proposed based on herding behavior of krill individuals in the nature for solving optimization problems. In this paper, we develop Standard Krill Herd (SKH) algorithm and propose Fuzzy Krill Herd (FKH) optimization algorithm which is able to dynamically adjust the participation amount of exploration and exploitation by looking the progress of solving the problem in each step. In order to evaluate the proposed FKH algorithm, we utilize some standard benchmark functions and also Inventory Control Problem. Experimental results indicate the superiority of our proposed FKH optimization algorithm in comparison with the standard KH optimization algorithm.", "title": "" }, { "docid": "991a388d1159667a5b2494ded71c5abe", "text": "Organizations around the world have called for the responsible development of nanotechnology. The goals of this approach are to emphasize the importance of considering and controlling the potential adverse impacts of nanotechnology in order to develop its capabilities and benefits. 
A primary area of concern is the potential adverse impact on workers, since they are the first people in society who are exposed to the potential hazards of nanotechnology. Occupational safety and health criteria for defining what constitutes responsible development of nanotechnology are needed. This article presents five criterion actions that should be practiced by decision-makers at the business and societal levels-if nanotechnology is to be developed responsibly. These include (1) anticipate, identify, and track potentially hazardous nanomaterials in the workplace; (2) assess workers' exposures to nanomaterials; (3) assess and communicate hazards and risks to workers; (4) manage occupational safety and health risks; and (5) foster the safe development of nanotechnology and realization of its societal and commercial benefits. All these criteria are necessary for responsible development to occur. Since it is early in the commercialization of nanotechnology, there are still many unknowns and concerns about nanomaterials. Therefore, it is prudent to treat them as potentially hazardous until sufficient toxicology, and exposure data are gathered for nanomaterial-specific hazard and risk assessments. In this emergent period, it is necessary to be clear about the extent of uncertainty and the need for prudent actions.", "title": "" }, { "docid": "b83eb2f78c4b48cf9b1ca07872d6ea1a", "text": "Network Function Virtualization (NFV) is emerging as one of the most innovative concepts in the networking landscape. By migrating network functions from dedicated mid-dleboxes to general purpose computing platforms, NFV can effectively reduce the cost to deploy and to operate large networks. However, in order to achieve its full potential, NFV needs to encompass also the radio access network allowing Mobile Virtual Network Operators to deploy custom resource allocation solutions within their virtual radio nodes. Such requirement raises several challenges in terms of performance isolation and resource provisioning. In this work we formalize the Virtual Network Function (VNF) placement problem for radio access networks as an integer linear programming problem and we propose a VNF placement heuristic. Moreover, we also present a proof-of-concept implementation of an NFV management and orchestration framework for Enterprise WLANs. The proposed architecture builds upon a programmable network fabric where pure forwarding nodes are mixed with radio and packet processing nodes leveraging on general computing platforms.", "title": "" }, { "docid": "7697aa5665f4699f2000779db2b0d24f", "text": "The majority of smart devices used nowadays (e.g., smartphones, laptops, tablets) is capable of both Wi-Fi and Bluetooth wireless communications. Both network interfaces are identified by a unique 48-bits MAC address, assigned during the manufacturing process and unique worldwide. Such addresses, fundamental for link-layer communications and contained in every frame transmitted by the device, can be easily collected through packet sniffing and later used to perform higher level analysis tasks (user tracking, crowd density estimation, etc.). In this work we propose a system to pair the Wi-Fi and Bluetooth MAC addresses belonging to a physical unique device, starting from packets captured through a network of wireless sniffers. We propose several algorithms to perform such a pairing and we evaluate their performance through experiments in a controlled scenario. 
We show that the proposed algorithms can pair the MAC addresses with good accuracy. The findings of this paper may be useful to improve the precision of indoor localization and crowd density estimation systems and open some questions on the privacy issues of Wi-Fi and Bluetooth enabled devices.", "title": "" }, { "docid": "adf57fe7ec7ab1481561f7664110a1e8", "text": "This paper presents a scalable 28-GHz phased-array architecture suitable for fifth-generation (5G) communication links based on four-channel ( $2\\times 2$ ) transmit/receive (TRX) quad-core chips in SiGe BiCMOS with flip-chip packaging. Each channel of the quad-core beamformer chip has 4.6-dB noise figure (NF) in the receive (RX) mode and 10.5-dBm output 1-dB compression point (OP1dB) in the transmit (TX) mode with 6-bit phase control and 14-dB gain control. The phase change with gain control is only ±3°, allowing orthogonality between the variable gain amplifier and the phase shifter. The chip has high RX linearity (IP1dB = −22 dBm/channel) and consumes 130 mW in the RX mode and 200 mW in the TX mode at P1dB per channel. Advantages of the scalable all-RF beamforming architecture and circuit design techniques are discussed in detail. 4- and 32-element phased-arrays are demonstrated with detailed data link measurements using a single or eight of the four-channel TRX core chips on a low-cost printed circuit board with microstrip antennas. The 32-element array achieves an effective isotropic radiated power (EIRP) of 43 dBm at P1dB, a 45-dBm saturated EIRP, and a record-level system NF of 5.2 dB when the beamformer loss and transceiver NF are taken into account and can scan to ±50° in azimuth and ±25° in elevation with < −12-dB sidelobes and without any phase or amplitude calibration. A wireless link is demonstrated using two 32-element phased-arrays with a state-of-the-art data rate of 1.0–1.6 Gb/s in a single beam using 16-QAM waveforms over all scan angles at a link distance of 300 m.", "title": "" }, { "docid": "0496af98bbef3d4d6f5e7a67e9ef5508", "text": "Cancer is second only to heart disease as a cause of death in the US, with a further negative economic impact on society. Over the past decade, details have emerged which suggest that different glycosylphosphatidylinositol (GPI)-anchored proteins are fundamentally involved in a range of cancers. This post-translational glycolipid modification is introduced into proteins via the action of the enzyme GPI transamidase (GPI-T). In 2004, PIG-U, one of the subunits of GPI-T, was identified as an oncogene in bladder cancer, offering a direct connection between GPI-T and cancer. GPI-T is a membrane-bound, multi-subunit enzyme that is poorly understood, due to its structural complexity and membrane solubility. This review is divided into three sections. First, we describe our current understanding of GPI-T, including what is known about each subunit and their roles in the GPI-T reaction. Next, we review the literature connecting GPI-T to different cancers with an emphasis on the variations in GPI-T subunit over-expression. Finally, we discuss some of the GPI-anchored proteins known to be involved in cancer onset and progression and that serve as potential biomarkers for disease-selective therapies. 
Given that functions for only one of GPI-T's subunits have been robustly assigned, the separation between healthy and malignant GPI-T activity is poorly defined.", "title": "" }, { "docid": "e5f2e7b7dfdfaee33a2187a0a7183cfb", "text": "BACKGROUND\nPossible associations between television viewing and video game playing and children's aggression have become public health concerns. We did a systematic review of studies that examined such associations, focussing on children and young people with behavioural and emotional difficulties, who are thought to be more susceptible.\n\n\nMETHODS\nWe did computer-assisted searches of health and social science databases, gateways, publications from relevant organizations and for grey literature; scanned bibliographies; hand-searched key journals; and corresponded with authors. We critically appraised all studies.\n\n\nRESULTS\nA total of 12 studies: three experiments with children with behavioural and emotional difficulties found increased aggression after watching aggressive as opposed to low-aggressive content television programmes, one found the opposite and two no clear effect, one found such children no more likely than controls to imitate aggressive television characters. One case-control study and one survey found that children and young people with behavioural and emotional difficulties watched more television than controls; another did not. Two studies found that children and young people with behavioural and emotional difficulties viewed more hours of aggressive television programmes than controls. One study on video game use found that young people with behavioural and emotional difficulties viewed more minutes of violence and played longer than controls. In a qualitative study children with behavioural and emotional difficulties, but not their parents, did not associate watching television with aggression. All studies had significant methodological flaws. None was based on power calculations.\n\n\nCONCLUSION\nThis systematic review found insufficient, contradictory and methodologically flawed evidence on the association between television viewing and video game playing and aggression in children and young people with behavioural and emotional difficulties. If public health advice is to be evidence-based, good quality research is needed.", "title": "" }, { "docid": "d60f812bb8036a2220dab8740f6a74c4", "text": "UNLABELLED\nThe limit of the Colletotrichum gloeosporioides species complex is defined genetically, based on a strongly supported clade within the Colletotrichum ITS gene tree. All taxa accepted within this clade are morphologically more or less typical of the broadly defined C. gloeosporioides, as it has been applied in the literature for the past 50 years. We accept 22 species plus one subspecies within the C. gloeosporioides complex. These include C. asianum, C. cordylinicola, C. fructicola, C. gloeosporioides, C. horii, C. kahawae subsp. kahawae, C. musae, C. nupharicola, C. psidii, C. siamense, C. theobromicola, C. tropicale, and C. xanthorrhoeae, along with the taxa described here as new, C. aenigma, C. aeschynomenes, C. alatae, C. alienum, C. aotearoa, C. clidemiae, C. kahawae subsp. ciggaro, C. salsolae, and C. ti, plus the nom. nov. C. queenslandicum (for C. gloeosporioides var. minus). All of the taxa are defined genetically on the basis of multi-gene phylogenies. Brief morphological descriptions are provided for species where no modern description is available. 
Many of the species are unable to be reliably distinguished using ITS, the official barcoding gene for fungi. Particularly problematic are a set of species genetically close to C. musae and another set of species genetically close to C. kahawae, referred to here as the Musae clade and the Kahawae clade, respectively. Each clade contains several species that are phylogenetically well supported in multi-gene analyses, but within the clades branch lengths are short because of the small number of phylogenetically informative characters, and in a few cases individual gene trees are incongruent. Some single genes or combinations of genes, such as glyceraldehyde-3-phosphate dehydrogenase and glutamine synthetase, can be used to reliably distinguish most taxa and will need to be developed as secondary barcodes for species level identification, which is important because many of these fungi are of biosecurity significance. In addition to the accepted species, notes are provided for names where a possible close relationship with C. gloeosporioides sensu lato has been suggested in the recent literature, along with all subspecific taxa and formae speciales within C. gloeosporioides and its putative teleomorph Glomerella cingulata.\n\n\nTAXONOMIC NOVELTIES\nName replacement - C. queenslandicum B. Weir & P.R. Johnst. New species - C. aenigma B. Weir & P.R. Johnst., C. aeschynomenes B. Weir & P.R. Johnst., C. alatae B. Weir & P.R. Johnst., C. alienum B. Weir & P.R. Johnst, C. aotearoa B. Weir & P.R. Johnst., C. clidemiae B. Weir & P.R. Johnst., C. salsolae B. Weir & P.R. Johnst., C. ti B. Weir & P.R. Johnst. New subspecies - C. kahawae subsp. ciggaro B. Weir & P.R. Johnst. Typification: Epitypification - C. queenslandicum B. Weir & P.R. Johnst.", "title": "" }, { "docid": "32bb9f12da68d89a897c8fc7937c0a7d", "text": "In recent years, the videogame industry has been characterized by a great boost in gesture recognition and motion tracking, following the increasing request of creating immersive game experiences. The Microsoft Kinect sensor allows acquiring RGB, IR and depth images with a high frame rate. Because of the complementary nature of the information provided, it has proved an attractive resource for researchers with very different backgrounds. In summer 2014, Microsoft launched a new generation of Kinect on the market, based on time-of-flight technology. This paper proposes a calibration of Kinect for Xbox One imaging sensors, focusing on the depth camera. The mathematical model that describes the error committed by the sensor as a function of the distance between the sensor itself and the object has been estimated. All the analyses presented here have been conducted for both generations of Kinect, in order to quantify the improvements that characterize every single imaging sensor. Experimental results show that the quality of the delivered model improved applying the proposed calibration procedure, which is applicable to both point clouds and the mesh model created with the Microsoft Fusion Libraries.", "title": "" }, { "docid": "549d486d6ff362bc016c6ce449e29dc9", "text": "Aging is very often associated with magnesium (Mg) deficit. Total plasma magnesium concentrations are remarkably constant in healthy subjects throughout life, while total body Mg and Mg in the intracellular compartment tend to decrease with age. Dietary Mg deficiencies are common in the elderly population. 
Other frequent causes of Mg deficits in the elderly include reduced Mg intestinal absorption, reduced Mg bone stores, and excess urinary loss. Secondary Mg deficit in aging may result from different conditions and diseases often observed in the elderly (i.e. insulin resistance and/or type 2 diabetes mellitus) and drugs (i.e. use of hypermagnesuric diuretics). Chronic Mg deficits have been linked to an increased risk of numerous preclinical and clinical outcomes, mostly observed in the elderly population, including hypertension, stroke, atherosclerosis, ischemic heart disease, cardiac arrhythmias, glucose intolerance, insulin resistance, type 2 diabetes mellitus, endothelial dysfunction, vascular remodeling, alterations in lipid metabolism, platelet aggregation/thrombosis, inflammation, oxidative stress, cardiovascular mortality, asthma, chronic fatigue, as well as depression and other neuropsychiatric disorders. Both aging and Mg deficiency have been associated to excessive production of oxygen-derived free radicals and low-grade inflammation. Chronic inflammation and oxidative stress are also present in several age-related diseases, such as many vascular and metabolic conditions, as well as frailty, muscle loss and sarcopenia, and altered immune responses, among others. Mg deficit associated to aging may be at least one of the pathophysiological links that may help to explain the interactions between inflammation and oxidative stress with the aging process and many age-related diseases.", "title": "" }, { "docid": "939f05a2265c6ab21b273a8127806279", "text": "Acne is a common inflammatory disease. Scarring is an unwanted end point of acne. Both atrophic and hypertrophic scar types occur. Soft-tissue augmentation aims to improve atrophic scars. In this review, we will focus on the use of dermal fillers for acne scar improvement. Therefore, various filler types are characterized, and available data on their use in acne scar improvement are analyzed.", "title": "" } ]
scidocsrr
534d8debd1364fafb2acd2fe01e62619
Cost-Efficient Strategies for Restraining Rumor Spreading in Mobile Social Networks
[ { "docid": "d056e5ea017eb3e5609dcc978e589158", "text": "In this paper we study and evaluate rumor-like methods for combating the spread of rumors on a social network. We model rumor spread as a diffusion process on a network and suggest the use of an \"anti-rumor\" process similar to the rumor process. We study two natural models by which these anti-rumors may arise. The main metrics we study are the belief time, i.e., the duration for which a person believes the rumor to be true and point of decline, i.e., point after which anti-rumor process dominates the rumor process. We evaluate our methods by simulating rumor spread and anti-rumor spread on a data set derived from the social networking site Twitter and on a synthetic network generated according to the Watts and Strogatz model. We find that the lifetime of a rumor increases if the delay in detecting it increases, and the relationship is at least linear. Further our findings show that coupling the detection and anti-rumor strategy by embedding agents in the network, we call them beacons, is an effective means of fighting the spread of rumor, even if these beacons do not share information.", "title": "" } ]
[ { "docid": "37e644b7b2d47e6830e30ae191bc453c", "text": "Technological forecasting is now poised to respond to the emerging needs of private and public sector organizations in the highly competitive global environment. The history of the subject and its variant forms, including impact assessment, national foresight studies, roadmapping, and competitive technological intelligence, shows how it has responded to changing institutional motivations. Renewed focus on innovation, attention to science-based opportunities, and broad social and political factors will bring renewed attention to technological forecasting in industry, government, and academia. Promising new tools are anticipated, borrowing variously from fields such as political science, computer science, scientometrics, innovation management, and complexity science.  2001 Elsevier Science Inc. Introduction Technological forecasting—its purpose, methods, terminology, and uses—will be shaped in the future, as in the past, by the needs of corporations and government agencies.1 These have a continual pressing need to anticipate and cope with the direction and rate of technological change. The future of technological forecasting will also depend on the views of the public and their elected representatives about technological progress, economic competition, and the government’s role in technological development. In the context of this article, “technological forecasting” (TF) includes several new forms—for example, national foresight studies, roadmapping, and competitive technological intelligence—that have evolved to meet the changing demands of user institutions. It also encompasses technology assessment (TA) or social impact analysis, which emphasizes the downstream effects of technology’s invention, innovation, and evolution. VARY COATES is associated with the Institute for Technology Assessment, Washington, DC. MAHMUD FAROQUE is with George Mason University, Fairfax, VA. RICHARD KLAVANS is with CRP, Philadelphia, PA. KOTY LAPID is with Softblock, Beer Sheba, Israel. HAROLD LINSTONE is with Portland State University. CARL PISTORIUS is with the University of Pretoria, South Africa. ALAN PORTER is with the Georgia Institute of Technology, Atlanta, GA. We also thank Joseph Coates and Joseph Martino for helpful critiques. 1 The term “technological forecasting” is used in this article to apply to all purposeful and systematic attempts to anticipate and understand the potential direction, rate, characteristics, and effects of technological change, especially invention, innovation, adoption, and use. No distinction is intended between “technological forecasting” “technology forecasting,” or “technology foresight,” except as specifically described in the text. Technological Forecasting and Social Change 67, 1–17 (2001)  2001 Elsevier Science Inc. All rights reserved. 0040-1625/01/$–see front matter 655 Avenue of the Americas, New York, NY 10010 PII S0040-1625(00)00122-0", "title": "" }, { "docid": "fdbb5f67eb2f9b651c0d2e1cf8077923", "text": "The periodical maintenance of railway systems is very important in terms of maintaining safe and comfortable transportation. In particular, the monitoring and diagnosis of faults in the pantograph catenary system are required to provide a transmission from the catenary line to the electric energy locomotive. Surface wear that is caused by the interaction between the pantograph and catenary and nonuniform distribution on the surface of a pantograph of the contact points can cause serious accidents. 
In this paper, a novel approach is proposed for image processing-based monitoring and fault diagnosis in terms of the interaction and contact points between the pantograph and catenary in a moving train. For this purpose, the proposed method consists of two stages. In the first stage, the pantograph catenary interaction has been modeled; the simulation results were given a failure analysis with a variety of scenarios. In the second stage, the contact points between the pantograph and catenary were detected and implemented in real time with image processing algorithms using actual video images. The pantograph surface for a fault analysis was divided into three regions: safe, dangerous, and fault. The fault analysis of the system was presented using the number of contact points in each region. The experimental results demonstrate the effectiveness, applicability, and performance of the proposed approach.", "title": "" }, { "docid": "7c5ce3005c4529e0c34220c538412a26", "text": "Six studies investigate whether and how distant future time perspective facilitates abstract thinking and impedes concrete thinking by altering the level at which mental representations are construed. In Experiments 1-3, participants who envisioned their lives and imagined themselves engaging in a task 1 year later as opposed to the next day subsequently performed better on a series of insight tasks. In Experiments 4 and 5 a distal perspective was found to improve creative generation of abstract solutions. Moreover, Experiment 5 demonstrated a similar effect with temporal distance manipulated indirectly, by making participants imagine their lives in general a year from now versus tomorrow prior to performance. In Experiment 6, distant time perspective undermined rather than enhanced analytical problem solving.", "title": "" }, { "docid": "062d366387e6161ba6faadc32c53e820", "text": "Image processing has been proved to be effective tool for analysis in various fields and applications. Agriculture sector where the parameters like canopy, yield, quality of product were the important measures from the farmers&apos; point of view. Many times expert advice may not be affordable, majority times the availability of expert and their services may consume time. Image processing along with availability of communication network can change the situation of getting the expert advice well within time and at affordable cost since image processing was the effective tool for analysis of parameters. This paper intends to focus on the survey of application of image processing in agriculture field such as imaging techniques, weed detection and fruit grading. The analysis of the parameters has proved to be accurate and less time consuming as compared to traditional methods. Application of image processing can improve decision making for vegetation measurement, irrigation, fruit sorting, etc.", "title": "" }, { "docid": "7dfbb5e01383b5f50dbeb87d55ceb719", "text": "In recent years, a number of network forensics techniques have been proposed to investigate the increasing number of cybercrimes. Network forensics techniques assist in tracking internal and external network attacks by focusing on inherent network vulnerabilities and communication mechanisms. However, investigation of cybercrime becomes more challenging when cyber criminals erase the traces in order to avoid detection. Therefore, network forensics techniques employ mechanisms to facilitate investigation by recording every single packet and event that is disseminated into the network. 
As a result, it allows identification of the origin of the attack through reconstruction of the recorded data. In the current literature, network forensics techniques are studied on the basis of forensic tools, process models and framework implementations. However, a comprehensive study of cybercrime investigation using network forensics frameworks along with a critical review of present network forensics techniques is lacking. In other words, our study is motivated by the diversity of digital evidence and the difficulty of addressing numerous attacks in the network using network forensics techniques. Therefore, this paper reviews the fundamental mechanism of network forensics techniques to determine how network attacks are identified in the network. Through an extensive review of related literature, a thematic taxonomy is proposed for the classification of current network forensics techniques based on its implementation as well as target data sets involved in the conducting of forensic investigations. The critical aspects and significant features of the current network forensics techniques are investigated using qualitative analysis technique. We derive significant parameters from the literature for discussing the similarities and differences in existing network forensics techniques. The parameters include framework nature, mechanism, target dataset, target instance, forensic processing, time of investigation, execution definition, and objective function. Finally, open research challenges are discussed in network forensics to assist researchers in selecting the appropriate domains for further research and obtain ideas for exploring optimal techniques for investigating cyber-crimes. & 2016 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "fcf8649ff7c2972e6ef73f837a3d3f4d", "text": "The kitchen environment is one of the scenarios in the home where users can benefit from Ambient Assisted Living (AAL) applications. Moreover, it is the place where old people suffer from most domestic injuries. This paper presents a novel design, implementation and assessment of a Smart Kitchen which provides Ambient Assisted Living services; a smart environment that increases elderly and disabled people's autonomy in their kitchen-related activities through context and user awareness, appropriate user interaction and artificial intelligence. It is based on a modular architecture which integrates a wide variety of home technology (household appliances, sensors, user interfaces, etc.) and associated communication standards and media (power line, radio frequency, infrared and cabled). Its software architecture is based on the Open Services Gateway initiative (OSGi), which allows building a complex system composed of small modules, each one providing the specific functionalities required, and can be easily scaled to meet our needs. The system has been evaluated by a large number of real users (63) and carers (31) in two living labs in Spain and UK. Results show a large potential of system functionalities combined with good usability and physical, sensory and cognitive accessibility.", "title": "" }, { "docid": "2dfad4f4b0d69085341dfb64d6b37d54", "text": "Modern applications and progress in deep learning research have created renewed interest for generative models of text and of images. However, even today it is unclear what objective functions one should use to train and evaluate these models. In this paper we present two contributions. 
Firstly, we present a critique of scheduled sampling, a state-of-the-art training method that contributed to the winning entry to the MSCOCO image captioning benchmark in 2015. Here we show that despite this impressive empirical performance, the objective function underlying scheduled sampling is improper and leads to an inconsistent learning algorithm. Secondly, we revisit the problems that scheduled sampling was meant to address, and present an alternative interpretation. We argue that maximum likelihood is an inappropriate training objective when the end-goal is to generate natural-looking samples. We go on to derive an ideal objective function to use in this situation instead. We introduce a generalisation of adversarial training, and show how such method can interpolate between maximum likelihood training and our ideal training objective. To our knowledge this is the first theoretical analysis that explains why adversarial training tends to produce samples with higher perceived quality.", "title": "" }, { "docid": "3654827519075eac6bfe5ee442c6d4b2", "text": "We examined the relations among phonological awareness, music perception skills, and early reading skills in a population of 100 4- and 5-year-old children. Music skills were found to correlate significantly with both phonological awareness and reading development. Regression analyses indicated that music perception skills contributed unique variance in predicting reading ability, even when variance due to phonological awareness and other cognitive abilities (math, digit span, and vocabulary) had been accounted for. Thus, music perception appears to tap auditory mechanisms related to reading that only partially overlap with those related to phonological awareness, suggesting that both linguistic and nonlinguistic general auditory mechanisms are involved in reading.", "title": "" }, { "docid": "7843fb4bbf2e94a30c18b359076899ab", "text": "In the area of magnetic resonance imaging (MRI), an extensive range of non-linear reconstruction algorithms has been proposed which can be used with general Fourier subsampling patterns. However, the design of these subsampling patterns has typically been considered in isolation from the reconstruction rule and the anatomy under consideration. In this paper, we propose a learning-based framework for optimizing MRI subsampling patterns for a specific reconstruction rule and anatomy, considering both the noiseless and noisy settings. Our learning algorithm has access to a representative set of training signals, and searches for a sampling pattern that performs well on average for the signals in this set. We present a novel parameter-free greedy mask selection method and show it to be effective for a variety of reconstruction rules and performance metrics. Moreover, we also support our numerical findings by providing a rigorous justification of our framework via statistical learning theory.", "title": "" }, { "docid": "7e127a6f25e932a67f333679b0d99567", "text": "This paper presents a novel manipulator for human-robot interaction that has low mass and inertia without losing stiffness and payload performance. A lightweight tension amplifying mechanism that increases the joint stiffness in quadratic order is proposed. High stiffness is essential for precise and rapid manipulation, and low mass and inertia are important factors for safety due to low stored kinetic energy. The proposed tension amplifying mechanism was applied to a 1-DOF elbow joint and then extended to a 3-DOF wrist joint. 
The developed manipulator was analyzed in terms of inertia, stiffness, and strength properties. Its moving part weighs 3.37 kg, and its inertia is 0.57 kg·m2, which is similar to that of a human arm. The stiffness of the developed elbow joint is 1440Nm/rad, which is comparable to that of the joints with rigid components in industrial manipulators. A detailed description of the design is provided, and thorough analysis verifies the performance of the proposed mechanism.", "title": "" }, { "docid": "627aee14031293785224efdb7bac69f0", "text": "Data on characteristics of metal-oxide surge arresters indicates that for fast front surges, those with rise times less than 8μs, the peak of the voltage wave occurs before the peak of the current wave and the residual voltage across the arrester increases as the time to crest of the arrester discharge current decreases. Several models have been proposed to simulate this frequency-dependent characteristic. These models differ in the calculation and adjustment of their parameters. In the present paper, a simulation of metal oxide surge arrester (MOSA) dynamic behavior during fast electromagnetic transients on power systems is done. Some models proposed in the literature are used. The simulations are performed with the Alternative Transients Program (ATP) version of Electromagnetic Transient Program (EMTP) to evaluate some metal oxide surge arrester models and verify their accuracy.", "title": "" }, { "docid": "94b84ed0bb69b6c4fc7a268176146eea", "text": "We consider the problem of representing image matrices with a set of basis functions. One common solution for that problem is to first transform the 2D image matrices into 1D image vectors and then to represent those 1D image vectors with eigenvectors, as done in classical principal component analysis. In this paper, we adopt a natural representation for the 2D image matrices using eigenimages, which are 2D matrices with the same size of original images and can be directly computed from original 2D image matrices. We discuss how to compute those eigenimages effectively. Experimental result on ORL image database shows the advantages of eigenimages method in representing the 2D images.", "title": "" }, { "docid": "2e5ce96ba3c503704a9152ae667c24ec", "text": "We use methods of classical and quantum mechanics for mathematical modeling of price dynamics at the financial market. The Hamiltonian formalism on the price/price-change phase space is used to describe the classical-like evolution of prices. This classical dynamics of prices is determined by ”hard” conditions (natural resources, industrial production, services and so on). These conditions as well as ”hard” relations between traders at the financial market are mathematically described by the classical financial potential. At the real financial market ”hard” conditions are not the only source of price changes. The information exchange and market psychology play important (and sometimes determining) role in price dynamics. We propose to describe this ”soft” financial factors by using the pilot wave (Bohmian) model of quantum mechanics. The theory of financial mental (or psychological) waves is used to take into account market psychology. 
The real trajectories of prices are determined (by the financial analogue of the second Newton law) by two financial potentials: classical-like (”hard” market conditions) and quantum-like (”soft” market conditions).", "title": "" }, { "docid": "fa42192f3ffd08332e35b98019e622ff", "text": "Human immunodeficiency virus 1 (HIV-1) and other retroviruses synthesize a DNA copy of their genome after entry into the host cell. Integration of this DNA into the host cell's genome is an essential step in the viral replication cycle. The viral DNA is synthesized in the cytoplasm and is associated with viral and cellular proteins in a large nucleoprotein complex. Before integration into the host genome can occur, this complex must be transported to the nucleus and must cross the nuclear envelope. This Review summarizes our current knowledge of how this journey is accomplished.", "title": "" }, { "docid": "9b17dd1fc2c7082fa8daecd850fab91c", "text": "This paper presents all the stages of development of a solar tracker for a photovoltaic panel. The system was made with a microcontroller which was design as an embedded control. It has a data base of the angles of orientation horizontal axle, therefore it has no sensor inlet signal and it function as an open loop control system. Combined of above mention characteristics in one the tracker system is a new technique of the active type. It is also a rotational robot of 1 degree of freedom.", "title": "" }, { "docid": "a757624e5fd2d4a364f484d55a430702", "text": "The main challenge in P2P computing is to design and implement a robust and scalable distributed system composed of inexpensive, individually unreliable computers in unrelated administrative domains. The participants in a typical P2P system might include computers at homes, schools, and businesses, and can grow to several million concurrent participants.", "title": "" }, { "docid": "6149a6aaa9c39a1e02ab8fbe64fcb62b", "text": "The thoracic diaphragm is a dome-shaped septum, composed of muscle surrounding a central tendon, which separates the thoracic and abdominal cavities. The function of the diaphragm is to expand the chest cavity during inspiration and to promote occlusion of the gastroesophageal junction. This article provides an overview of the normal anatomy of the diaphragm.", "title": "" }, { "docid": "6524efda795834105bae7d65caf15c53", "text": "PURPOSE\nThis paper examines respondents' relationship with work following a stroke and explores their experiences including the perceived barriers to and facilitators of a return to employment.\n\n\nMETHOD\nOur qualitative study explored the experiences and recovery of 43 individuals under 60 years who had survived a stroke. Participants, who had experienced a first stroke less than three months before and who could engage in in-depth interviews, were recruited through three stroke services in South East England. Each participant was invited to take part in four interviews over an 18-month period and to complete a diary for one week each month during this period.\n\n\nRESULTS\nAt the time of their stroke a minority of our sample (12, 28% of the original sample) were not actively involved in the labour market and did not return to the work during the period that they were involved in the study. 
Of the 31 participants working at the time of the stroke, 13 had not returned to work during the period that they were involved in the study, six returned to work after three months and nine returned in under three months and in some cases virtually immediately after their stroke. The participants in our study all valued work and felt that working, especially in paid employment, was more desirable than not working. The participants who were not working at the time of their stroke or who had not returned to work during the period of the study also endorsed these views. However they felt that there were a variety of barriers and practical problems that prevented them working and in some cases had adjusted to a life without paid employment. Participants' relationship with work was influenced by barriers and facilitators. The positive valuations of work were modified by the specific context of stroke, for some participants work was a cause of stress and therefore potentially risky, for others it was a way of demonstrating recovery from stroke. The value and meaning varied between participants and this variation was related to past experience and biography. Participants who wanted to work indicated that their ability to work was influenced by the nature and extent of their residual disabilities. A small group of participants had such severe residual disabilities that managing everyday life was a challenge and that working was not a realistic prospect unless their situation changed radically. The remaining participants all reported residual disabilities. The extent to which these disabilities formed a barrier to work depended on an additional range of factors that acted as either barriers or facilitator to return to work. A flexible working environment and supportive social networks were cited as facilitators of return to paid employment.\n\n\nCONCLUSION\nParticipants in our study viewed return to work as an important indicator of recovery following a stroke. Individuals who had not returned to work felt that paid employment was desirable but they could not overcome the barriers. Individuals who returned to work recognized the barriers but had found ways of managing them.", "title": "" }, { "docid": "1168c9e6ce258851b15b7e689f60e218", "text": "Modern deep learning architectures produce highly accurate results on many challenging semantic segmentation datasets. State-of-the-art methods are, however, not directly transferable to real-time applications or embedded devices, since naïve adaptation of such systems to reduce computational cost (speed, memory and energy) causes a significant drop in accuracy. We propose ContextNet, a new deep neural network architecture which builds on factorized convolution, network compression and pyramid representation to produce competitive semantic segmentation in real-time with low memory requirement. ContextNet combines a deep network branch at low resolution that captures global context information efficiently with a shallow branch that focuses on highresolution segmentation details. We analyse our network in a thorough ablation study and present results on the Cityscapes dataset, achieving 66.1% accuracy at 18.3 frames per second at full (1024× 2048) resolution (23.2 fps with pipelined computations for streamed data).", "title": "" }, { "docid": "71efff25f494a8b7a83099e7bdd9d9a8", "text": "Background: Problems with intubation of the ampulla Vateri during diagnostic and therapeutic endoscopic maneuvers are a well-known feature. 
The ampulla Vateri was analyzed three-dimensionally to determine whether these difficulties have a structural background. Methods: Thirty-five human greater duodenal papillae were examined by light and scanning electron microscopy as well as immunohistochemically. Results: Histologically, highly vascularized finger-like mucosal folds project far into the lumen of the ampulla Vateri. The excretory ducts of seromucous glands containing many lysozyme-secreting Paneth cells open close to the base of the mucosal folds. Scanning electron microscopy revealed large mucosal folds inside the ampulla that continued into the pancreatic and bile duct, comparable to valves arranged in a row. Conclusions: Mucosal folds form pocket-like valves in the lumen of the ampulla Vateri. They allow a unidirectional flow of secretions into the duodenum and prevent reflux from the duodenum into the ampulla Vateri. Subepithelial mucous gland secretions functionally clean the valvular crypts and protect the epithelium. The arrangement of pocket-like mucosal folds may explain endoscopic difficulties experienced when attempting to penetrate the papilla of Vater during endoscopic retrograde cholangiopancreaticographic procedures.", "title": "" } ]
scidocsrr
39617ab96f7fadab45c84dec7c02a77e
A Self-Powered Insole for Human Motion Recognition
[ { "docid": "8e02a76799f72d86e7240384bea563fd", "text": "We have developed the suspended-load backpack, which converts mechanical energy from the vertical movement of carried loads (weighing 20 to 38 kilograms) to electricity during normal walking [generating up to 7.4 watts, or a 300-fold increase over previous shoe devices (20 milliwatts)]. Unexpectedly, little extra metabolic energy (as compared to that expended carrying a rigid backpack) is required during electricity generation. This is probably due to a compensatory change in gait or loading regime, which reduces the metabolic power required for walking. This electricity generation can help give field scientists, explorers, and disaster-relief workers freedom from the heavy weight of replacement batteries and thereby extend their ability to operate in remote areas.", "title": "" } ]
[ { "docid": "f4401e483c519e1f2d33ee18ea23b8d7", "text": "Cultivation of mindfulness, the nonjudgmental awareness of experiences in the present moment, produces beneficial effects on well-being and ameliorates psychiatric and stress-related symptoms. Mindfulness meditation has therefore increasingly been incorporated into psychotherapeutic interventions. Although the number of publications in the field has sharply increased over the last two decades, there is a paucity of theoretical reviews that integrate the existing literature into a comprehensive theoretical framework. In this article, we explore several components through which mindfulness meditation exerts its effects: (a) attention regulation, (b) body awareness, (c) emotion regulation (including reappraisal and exposure, extinction, and reconsolidation), and (d) change in perspective on the self. Recent empirical research, including practitioners' self-reports and experimental data, provides evidence supporting these mechanisms. Functional and structural neuroimaging studies have begun to explore the neuroscientific processes underlying these components. Evidence suggests that mindfulness practice is associated with neuroplastic changes in the anterior cingulate cortex, insula, temporo-parietal junction, fronto-limbic network, and default mode network structures. The authors suggest that the mechanisms described here work synergistically, establishing a process of enhanced self-regulation. Differentiating between these components seems useful to guide future basic research and to specifically target areas of development in the treatment of psychological disorders.", "title": "" }, { "docid": "f052fae696370910cc59f48552ddd889", "text": "Decisions involve many intangibles that need to be traded off. To do that, they have to be measured along side tangibles whose measurements must also be evaluated as to, how well, they serve the objectives of the decision maker. The Analytic Hierarchy Process (AHP) is a theory of measurement through pairwise comparisons and relies on the judgements of experts to derive priority scales. It is these scales that measure intangibles in relative terms. The comparisons are made using a scale of absolute judgements that represents, how much more, one element dominates another with respect to a given attribute. The judgements may be inconsistent, and how to measure inconsistency and improve the judgements, when possible to obtain better consistency is a concern of the AHP. The derived priority scales are synthesised by multiplying them by the priority of their parent nodes and adding for all such nodes. An illustration is included.", "title": "" }, { "docid": "cce2d168e49620ead88953617cce52b0", "text": "We analyze state-of-the-art deep learning models for three tasks: question answering on (1) images, (2) tables, and (3) passages of text. Using the notion of attribution (word importance), we find that these deep networks often ignore important question terms. Leveraging such behavior, we perturb questions to craft a variety of adversarial examples. Our strongest attacks drop the accuracy of a visual question answering model from 61.1% to 19%, and that of a tabular question answering model from 33.5% to 3.3%. Additionally, we show how attributions can strengthen attacks proposed by Jia and Liang (2017) on paragraph comprehension models. Our results demonstrate that attributions can augment standard measures of accuracy and empower investigation of model performance. 
When a model is accurate but for the wrong reasons, attributions can surface erroneous logic in the model that indicates inadequacies in the test data.", "title": "" }, { "docid": "85f2e049dc90bf08ecb0d34899d8b3c5", "text": "There is little doubt that the Internet represents the spearhead of the industrial revolution. I love new technologies and gadgets that promise new and better ways of doing things. I have many such gadgets myself and I even manage to use a few of them (though not without some pain). A new piece of technology is like a new relationship, fun and exciting at first, but eventually it requires some hard work to maintain, usually in the form of time and energy. I doubt technology's promise to improve the quality of life and I am still surprised how time-distorting and dissociating the computer and the Internet can be for me, along with the thousands of people I've interviewed, studied and treated in my clinical practice. It seems clear that the Internet can be used and abused in a compulsive fashion, and that there are numerous psychological factors that contribute to the Internet's power and appeal. It appears that the very same features that drive the potency of the Net are potentially habit-forming. This study examined the self-reported Internet behavior of nearly 18,000 people who answered a survey on the ABCNEWS.com web site. Results clearly support the psychoactive nature of the Internet, and the potential for compulsive use and abuse of the Internet for certain individuals. Introduction Technology, and most especially, computers and the Internet, seem to be at best easily overused/abused, and at worst, addictive. The combination of available stimulating content, ease of access, convenience, low cost, visual stimulation, autonomy, and anonymity—all contribute to a highly psychoactive experience. By psychoactive, that is to say mood altering, and potentially behaviorally impacting. In other words, these technologies affect the manner in which we live and love. It is my contention that some of these effects are indeed less than positive, and may contribute to various negative psychological effects. The Internet and other digital technologies are only the latest in a series of "improvements" to our world which may have unintended negative effects. The experience of problems with new and unknown technologies is far from new; we have seen countless examples of newer and better things that have had unintended and unexpected deleterious effects. Remember Thalidomide, PVC/PCB's, Atomic power, fossil fuels, even television, along with other seemingly innocuous conveniences which have been shown to be conveniently helpful, but on other levels harmful. Some of these harmful effects are obvious and tragic, while others are more subtle and insidious. Even seemingly innocuous advances such as the elevator, remote controls, credit card gas pumps, dishwashers, and drive-through everything, have all had unintended negative effects. They all save time and energy, but the energy they save may dissuade us from using our physical bodies as they were designed to be used. In short, we have convenienced ourselves into a sedentary lifestyle. Technology is amoral; it is not inherently good or evil, but it does impact the manner in which we live our lives. Americans love technology, and for some of us this trust and blind faith almost parallels a religious fanaticism.
Perhaps most of all, we love it because of the hope for the future it promises; it is this promise of a better today and a longer tomorrow which captivates us to attend to the call for new better things to come. We live in the age where computer and digital technology are always on the cusp of great things: newer, better ways of doing things (which in some ways is true). The old becomes obsolete within a year or two. Newer is always better. Computers and the Internet purport to make our lives easier, simpler, and therefore more fulfilling, but it may not be that simple. People have become physically and psychologically dependent on many behaviors and substances for centuries. This compulsive pattern does not reflect a casual interest, but rather consists of a driven pattern of use that can frequently escalate to negatively impact our lives. The key life-areas that seem to be impacted are marriages and relationships, employment, health, and legal/financial status. The fact that substances, such as alcohol and other mood-altering drugs can create a physical and/or psychological dependence is well known and accepted. And certain behaviors such as gambling, eating, work, exercise, shopping, and sex have gained more recent acceptance with regard to their addictive potential. More recently however, there has been an acknowledgement that the compulsive performance of these behaviors may mimic the compulsive process found with drugs, alcohol and other substances. This same process appears to also be found with certain aspects of the Internet. The Internet can and does produce clear alterations in mood; nearly 30 percent of Internet users admit to using the Net to alter their mood so as to relieve a negative mood state. In other words, they use the Internet like a drug (Greenfield, 1999). In addressing the phenomenon of Internet behavior, initial behavioral research (Young, 1996, 1998) focused on conceptual definitions of Internet use and abuse, and demonstrated similar patterns of abuse as found in compulsive gambling. There have been further recent studies on the nature and effects of the Internet. Cooper, Scherer, Boies, and Gordon (1998) examined sexuality on the Internet utilizing an extensive online survey of 9,177 Web users, and Greenfield (1999) surveyed nearly 18,000 Web users on ABCNEWS.com to examine Internet use and abuse behavior. The latter study did yield some interesting trends and patterns, but also raised further areas that require clarification. There has been very little research that actually examined and measured specific behavior related to Internet use. The Carnegie Mellon University study (Kraut, Patterson, Lundmark, Kiesler, Mukopadhyay, and Scherlis, 1998) did attempt to examine and verify actual Internet use among 173 people in 73 households. This initial study did seem to demonstrate that there may be some deleterious effects from heavy Internet use, which appeared to increase some measures of social isolation and depression. What seems to be abundantly clear from the limited research to date is that we know very little about the human/Internet interface. Theoretical suppositions abound, but we are only just beginning to understand the nature and implications of Internet use and abuse. There is an abundance of clinical, legal, and anecdotal evidence to suggest that there is something unique about being online that seems to produce a powerful impact on people.
It is my belief that as we expand our analysis of this new and exciting area we will likely discover that there are many subcategories of Internet abuse, some of which will undoubtedly exist as concomitant disorders alongside other addictions including sex, gambling, and compulsive shopping/spending. There are probably two types of Internet based problems: the first is defined as a primary problem where the Internet itself becomes the focus of the compulsive pattern, and secondary, where a preexisting problem (or compulsive behavior) is exacerbated via the use of the Internet. In a secondary problem, necessity is no longer the mother of invention, but rather convenience is. The Internet simply makes everything easier to acquire, and therefore that much more easily abused. The ease of access, availability, low cost, anonymity, timelessness, disinhibition, and loss of boundaries all appear to contribute to the total Internet experience. This has particular relevance when it comes to well-established forms of compulsive consumer behavior such as gambling, shopping, stock trading, and compulsive sexual behavior where traditional modalities of engaging in these behaviors pale in comparison to the speed and efficiency of the Internet. There has been considerable debate regarding the terms and definitions in describing pathological Internet behavior. Many terms have been used, including Internet abuse, Internet addiction, and compulsive Internet use. The concern over terminology seems spurious to me, as it seems irrelevant as to what the addictive process is labeled. The underlying neurochemical changes (probably Dopamine) that occur during any pleasurable act have proven themselves to be potentially habit-forming on a brain-behavior level. The net effect is ultimately the same with regard to potential life impact, which in the case of compulsive behavior can be quite large. Any time there is a highly pleasurable human behavior that can be acquired without human interface (as can be accomplished on the Net) there seems to be greater potential for abuse. The ease of purchasing a stock, gambling, or shopping online allows for a boundless and disinhibited experience. Without the normal human interaction there is a far greater likelihood of abusive and/or compulsive behavior in these areas. Research in the field of Internet behavior is in its relative infancy. This is in part due to the fact that the depth and breadth of the Internet and World Wide Web are changing at exponential rates. With thousands of new subscribers a day and approaching (perhaps exceeding) 200 million worldwide users, the Internet represents a communications, social, and economic revolution. The Net now serves at the pinnacle of the digital industrial revolution, and with any revolution come new problems and difficulties.", "title": "" }, { "docid": "a2314ce56557135146e43f0d4a02782d", "text": "This paper proposes a carrier-based pulse width modulation (CB-PWM) method with a synchronous switching technique for a Vienna rectifier. In this paper, a Vienna rectifier is one of the 3-level converter topologies. It is similar to a 3-level T-type topology that uses back-to-back switches. When the CB-PWM switching method is used, a Vienna rectifier is operated with six PWM signals. On the other hand, when the back-to-back switches are synchronized, PWM signals can be reduced to three from six.
However, the synchronous switching method has a problem that the current distortion around zero-crossing point is worse than one of the conventional CB-PWM switching method. To improve current distortions, this paper proposes a reactive current injection technique. The performance and effectiveness of the proposed synchronous switching method are verified by simulation with a 5-kW Vienna rectifier.", "title": "" }, { "docid": "faaa921bce23eeca714926acb1901447", "text": "This paper provides an overview along with our findings of the Chinese Spelling Check shared task at NLPTEA 2017. The goal of this task is to develop a computerassisted system to automatically diagnose typing errors in traditional Chinese sentences written by students. We defined six types of errors which belong to two categories. Given a sentence, the system should detect where the errors are, and for each detected error determine its type and provide correction suggestions. We designed, constructed, and released a benchmark dataset for this task.", "title": "" }, { "docid": "b6f0c5a136de9b85899814a436e7a497", "text": "The 'ferrule effect' is a long standing, accepted concept in dentistry that is a foundation principle for the restoration of teeth that have suffered advanced structure loss. A review of the literature based on a search in PubMed was performed looking at the various components of the ferrule effect, with particular attention to some of the less explored dimensions that influence the effectiveness of the ferrule when restoring severely broken down teeth. These include the width of the ferrule, the effect of a partial ferrule, the influence of both, the type of the restored tooth and the lateral loads present as well as the well established 2 mm ferrule height rule. The literature was collaborated and a classification based on risk assessment was derived from the available evidence. The system categorises teeth according to the effectiveness of ferrule effect that can be achieved based on the remaining amount of sound tooth structure. Furthermore, risk assessment for failure can be performed so that the practitioner and patient can better understand the prognosis of restoring a particular tooth. Clinical recommendations were extrapolated and presented as guidelines so as to improve the predictability and outcome of treatment when restoring structurally compromised teeth. The evidence relating to restoring the endodontic treated tooth with extensive destruction is deficient. This article aims to rethink ferrule by looking at other aspects of this accepted concept, and proposes a paradigm shift in the way it is thought of and utilised.", "title": "" }, { "docid": "0a1f6c27cd13735858e7a6686fc5c2c9", "text": "We address the problem of learning hierarchical deep neural network policies for reinforcement learning. In contrast to methods that explicitly restrict or cripple lower layers of a hierarchy to force them to use higher-level modulating signals, each layer in our framework is trained to directly solve the task, but acquires a range of diverse strategies via a maximum entropy reinforcement learning objective. Each layer is also augmented with latent random variables, which are sampled from a prior distribution during the training of that layer. The maximum entropy objective causes these latent variables to be incorporated into the layer’s policy, and the higher level layer can directly control the behavior of the lower layer through this latent space. 
Furthermore, by constraining the mapping from latent variables to actions to be invertible, higher layers retain full expressivity: neither the higher layers nor the lower layers are constrained in their behavior. Our experimental evaluation demonstrates that we can improve on the performance of single-layer policies on standard benchmark tasks simply by adding additional layers, and that our method can solve more complex sparse-reward tasks by learning higher-level policies on top of high-entropy skills optimized for simple low-level objectives.", "title": "" }, { "docid": "856a6fa093e0cf6e0512d83e1382d3c9", "text": "00Month2017 CORRIGENDUM: ACMG recommendations for reporting of incidental findings in clinical exome and genome sequencing Robert C. Green MD, MPH, Jonathan S. Berg MD, PhD, Wayne W. Grody MD, PhD, Sarah S. Kalia ScM, CGC, Bruce R. Korf MD, PhD, Christa L. Martin PhD, FACMG, Amy L. McGuire JD, PhD, Robert L. Nussbaum MD, Julianne M. O’Daniel MS, CGC, Kelly E. Ormond MS, CGC, Heidi L. Rehm PhD, FACMG, Michael S. Watson PhD, FACMG, Marc S. Williams MD, FACMG & Leslie G. Biesecker MD Genet Med (2013) 15, 565–574 doi:10.1038/gim.2013.73 In the published version of this paper, on page 567, on the 16th line in the last paragraph of the left column, the abbreviation of Expected Pathogenic is incorrect. The correct sentence should read, “For the purposes of these recommendations, variants fitting these descriptions were labeled as Known Pathogenic (KP) and Expected Pathogenic (EP), respectively.”", "title": "" }, { "docid": "665f109e8263b687764de476befcbab9", "text": "In this work we analyze the behavior on a company-internal social network site to determine which interaction patterns signal closeness between colleagues. Regression analysis suggests that employee behavior on social network sites (SNSs) reveals information about both professional and personal closeness. While some factors are predictive of general closeness (e.g. content recommendations), other factors signal that employees feel personal closeness towards their colleagues, but not professional closeness (e.g. mutual profile commenting). This analysis contributes to our understanding of how SNS behavior reflects relationship multiplexity: the multiple facets of our relationships with SNS connections.", "title": "" }, { "docid": "91b924c8dbb22ca4593150c5fadfd38b", "text": "This paper investigates the power allocation problem of full-duplex cooperative non-orthogonal multiple access (FD-CNOMA) systems, in which the strong users relay data for the weak users via a full duplex relaying mode. For the purpose of fairness, our goal is to maximize the minimum achievable user rate in a NOMA user pair. More specifically, we consider the power optimization problem for two different relaying schemes, i.e., the fixed relaying power scheme and the adaptive relaying power scheme. For the fixed relaying scheme, we demonstrate that the power allocation problem is quasi-concave and a closed-form optimal solution is obtained. Then, based on the derived results of the fixed relaying scheme, the optimal power allocation policy for the adaptive relaying scheme is also obtained by transforming the optimization objective function as a univariate function of the relay transmit power $P_R$. Simulation results show that the proposed FD- CNOMA scheme with adaptive relaying can always achieve better or at least the same performance as the conventional NOMA scheme. 
In addition, there exists a switching point between FD-CNOMA and half-duplex cooperative NOMA.", "title": "" }, { "docid": "159e040b0e74ad1b6124907c28e53daf", "text": "People (pedestrians, drivers, passengers in public transport) use different services on small mobile gadgets on a daily basis. So far, mobile applications don't react to context changes. Running services should adapt to the changing environment and new services should be installed and deployed automatically. We propose a classification of context elements that influence the behavior of the mobile services, focusing on the challenges of the transportation domain. Malware Detection on Mobile Devices Asaf Shabtai*, Ben-Gurion University, Israel Abstract: We present various approaches for mitigating malware on mobile devices which we have implemented and evaluated on Google Android. Our work is divided into the following three segments: a host-based intrusion detection framework; an implementation of SELinux in Android; and static analysis of Android application files. Dynamic Approximative Data Caching in Wireless Sensor Networks Nils Hoeller*, IFIS, University of Luebeck Abstract: Communication in Wireless Sensor Networks generally is the most energy consuming task. Retrieving query results from deep within the sensor network therefore consumes a lot of energy and hence shortens the network's lifetime. In this work optimizations for processing queries by using adaptive caching structures are discussed. Results can be retrieved from caches that are placed nearer to the query source. As a result the communication demand is reduced and hence energy is saved by using the cached results. To verify cache coherence in networks with non-reliable communication channels, an approximate update policy is presented. A degree of result quality can be defined for a query to find the adequate cache adaptively. Gossip-based Data Fusion Framework for Radio Resource Map Jin Yang*, Ilmenau University of Technology Abstract: In disaster scenarios, sensor networks are used to detect changes and estimate resource availability to further support the system recovery and rescue process. In this PhD thesis, sensor networks are used to detect available radio resources in order to form a global view of the radio resource map, based on locally sensed and measured data. Data fusion and harvesting techniques are employed for the generation and maintenance of this "radio resource map." In order to guarantee the flexibility and fault tolerance goals of disaster scenarios, a gossip protocol is used to exchange information. The radio propagation field knowledge is closely coupled to harvesting and fusion protocols in order to achieve efficient fusing of radio measurement data. For the evaluation, simulations will be used to measure the efficiency and robustness in relation to time critical applications and various deployment densities.
Actual radio data measurements within the Ilmenau area are being collected for further analysis of the map quality and in order to verify simulation results. Dynamic Social Grouping Based Routing in a Mobile Ad-Hoc Network Roy Cabaniss*, Missouri S&T Abstract: Trotta, University of Missouri, Kansas City, Srinivasa Vulli, Missouri University S&T The patterns of movement used by Mobile Ad-Hoc networks are application specific, in the sense that networks use nodes which travel in different paths. When these nodes are used in experiments involving social patterns, such as wildlife tracking, algorithms which detect and use these patterns can be used to improve routing efficiency. The intent of this paper is to introduce a routing algorithm which forms a series of social groups which accurately indicate a node’s regular contact patterns while dynamically shifting to represent changes to the social environment. With the social groups formed, a probabilistic routing schema is used to effectively identify which social groups have consistent contact with the base station, and route accordingly. The algorithm can be implemented dynamically, in the sense that the nodes initially have no awareness of their environment, and works to reduce overhead and message traffic while maintaining high delivery ratio.
MobileSOA framework for Context-Aware Mobile Applications Aaratee Shrestha*, University of Leipzig Abstract: Mobile application development is more challenging when context-awareness is taken into account. This research introduces the benefit of implementing a Mobile Service Oriented Architecture (SOA). A robust mobile SOA framework is designed for building and operating lightweight and flexible Context-Aware Mobile Application (CAMA). We develop a lightweight and flexible CAMA to show dynamic integration of the systems, where all operations run smoothly in response to the rapidly changing environment using local and remote services. Keywords-service oriented architecture (SOA); mobile service; context-awareness; contextaware mobile application (CAMA). Performance Analysis of Secure Hierarchical Data Aggregation in Wireless Sensor Networks Vimal Kumar*, Missouri S&T Abstract: Data aggregation is a technique used to conserve battery power in wireless sensor networks (WSN). While providing security in such a scenario it is also important that we minimize the number of security operations as they are computationally expensive, without compromising on the security. In this paper we evaluate the performance of such an end to end security algorithm. We provide our results from the implementation of the algorithm on mica2 motes and conclude how it is better than traditional hop by hop security. A Communication Efficient Framework for Finding Outliers in Wireless Sensor Networks Dylan McDonald*, MS&T Abstract: Outlier detection is a well studied problem in various fields. The unique challenges of wireless sensor networks make this problem especially challenging. Sensors can detect outliers for a plethora of reasons and these reasons need to be inferred in real time. Here, we present a new communication technique to find outliers in a wireless sensor network. Communication is minimized through controlling sensor when sensors are allowed to communicate.
At the same time, minimal assumptions are made about the nature of the data set as to ", "title": "" }, { "docid": "15486c4dc2dfc0f2f5ccfc0cf6197af4", "text": "Nostalgia is a frequently experienced complex emotion, understood by laypersons in the United Kingdom and United States of America to (a) refer prototypically to fond, self-relevant, social memories and (b) be more pleasant (e.g., happy, warm) than unpleasant (e.g., sad, regretful). This research examined whether people across cultures conceive of nostalgia in the same way. Students in 18 countries across 5 continents (N = 1,704) rated the prototypicality of 35 features of nostalgia. The samples showed high levels of agreement on the rank-order of features. In all countries, participants rated previously identified central (vs. peripheral) features as more prototypical of nostalgia, and showed greater interindividual agreement regarding central (vs. peripheral) features. Cluster analyses revealed subtle variation among groups of countries with respect to the strength of these pancultural patterns. All except African countries manifested the same factor structure of nostalgia features. Additional exemplars generated by participants in an open-ended format did not entail elaboration of the existing set of 35 features. Findings identified key points of cross-cultural agreement regarding conceptions of nostalgia, supporting the notion that nostalgia is a pancultural emotion.", "title": "" }, { "docid": "dc2d5f9bfe41246ae9883aa6c0537c40", "text": "Phosphatidylinositol 3-kinases (PI3Ks) are crucial coordinators of intracellular signalling in response to extracellular stimuli. Hyperactivation of PI3K signalling cascades is one of the most common events in human cancers. In this Review, we discuss recent advances in our knowledge of the roles of specific PI3K isoforms in normal and oncogenic signalling, the different ways in which PI3K can be upregulated, and the current state and future potential of targeting this pathway in the clinic.", "title": "" }, { "docid": "5b763dbb9f06ff67e44b5d38920e92bf", "text": "With the growing popularity of the internet, everything is available at our doorstep and convenience. The rapid increase in e-commerce applications has resulted in the increased usage of the credit card for offline and online payments. Though there are various benefits of using credit cards such as convenience, instant cash, but when it comes to security credit card holders, banks, and the merchants are affected when the card is being stolen, lost or misused without the knowledge of the cardholder (Fraud activity). Streaming analytics is a time-based processing of data and it is used to enable near real-time decision making by inspecting, correlating and analyzing the data even as it is streaming into applications and database from myriad different sources. We are making use of streaming analytics to detect and prevent the credit card fraud. Rather than singling out specific transactions, our solution analyses the historical transaction data to model a system that can detect fraudulent patterns. This model is then used to analyze transactions in real-time.", "title": "" }, { "docid": "fd5b9187c6720c3408b5c2324b03905d", "text": "Recent anchor-based deep face detectors have achieved promising performance, but they are still struggling to detect hard faces, such as small, blurred and partially occluded faces. 
A reason is that they treat all images and faces equally, without putting more effort on hard ones; however, many training images only contain easy faces, which are less helpful to achieve better performance on hard images. In this paper, we propose that the robustness of a face detector against hard faces can be improved by learning small faces on hard images. Our intuitions are (1) hard images are the images which contain at least one hard face, thus they facilitate training robust face detectors; (2) most hard faces are small faces and other types of hard faces can be easily converted to small faces by shrinking. We build an anchor-based deep face detector, which only output a single feature map with small anchors, to specifically learn small faces and train it by a novel hard image mining strategy. Extensive experiments have been conducted on WIDER FACE, FDDB, Pascal Faces, and AFW datasets to show the effectiveness of our method. Our method achieves APs of 95.7, 94.9 and 89.7 on easy, medium and hard WIDER FACE val dataset respectively, which surpass the previous state-of-the-arts, especially on the hard subset. Code and model are available at https://github.com/bairdzhang/smallhardface.", "title": "" }, { "docid": "221b5ba25bff2522ab3ca65ffc94723f", "text": "This paper describes the design and implementation of HERD, a key-value system designed to make the best use of an RDMA network. Unlike prior RDMA-based key-value systems, HERD focuses its design on reducing network round trips while using efficient RDMA primitives; the result is substantially lower latency, and throughput that saturates modern, commodity RDMA hardware.\n HERD has two unconventional decisions: First, it does not use RDMA reads, despite the allure of operations that bypass the remote CPU entirely. Second, it uses a mix of RDMA and messaging verbs, despite the conventional wisdom that the messaging primitives are slow. A HERD client writes its request into the server's memory; the server computes the reply. This design uses a single round trip for all requests and supports up to 26 million key-value operations per second with 5μs average latency. Notably, for small key-value items, our full system throughput is similar to native RDMA read throughput and is over 2X higher than recent RDMA-based key-value systems. We believe that HERD further serves as an effective template for the construction of RDMA-based datacenter services.", "title": "" }, { "docid": "a6bc752bd6a4fc070fa01a5322fb30a1", "text": "The formulation of a generalized area-based confusion matrix for exploring the accuracy of area estimates is presented. The generalized confusion matrix is appropriate for both traditional classiŽ cation algorithms and sub-pixel area estimation models. An error matrix, derived from the generalized confusion matrix, allows the accuracy of maps generated using area estimation models to be assessed quantitatively and compared to the accuracies obtained from traditional classiŽ cation techniques. The application of this approach is demonstrated for an area estimation model applied to Landsat data of an urban area of the United Kingdom.", "title": "" }, { "docid": "449f984469b40fe10f7a2e0e3a359d1d", "text": "The correlation of phenotypic outcomes with genetic variation and environmental factors is a core pursuit in biology and biomedicine. 
Numerous challenges impede our progress: patient phenotypes may not match known diseases, candidate variants may be in genes that have not been characterized, model organisms may not recapitulate human or veterinary diseases, filling evolutionary gaps is difficult, and many resources must be queried to find potentially significant genotype-phenotype associations. Non-human organisms have proven instrumental in revealing biological mechanisms. Advanced informatics tools can identify phenotypically relevant disease models in research and diagnostic contexts. Large-scale integration of model organism and clinical research data can provide a breadth of knowledge not available from individual sources and can provide contextualization of data back to these sources. The Monarch Initiative (monarchinitiative.org) is a collaborative, open science effort that aims to semantically integrate genotype-phenotype data from many species and sources in order to support precision medicine, disease modeling, and mechanistic exploration. Our integrated knowledge graph, analytic tools, and web services enable diverse users to explore relationships between phenotypes and genotypes across species.", "title": "" }, { "docid": "e0ec608baa5af1c35672efbccbd618df", "text": "The “similarity-attraction” effect stands as one of the most well-known findings in social psychology. However, some research contends that perceived but not actual similarity influences attraction. The current study is the first to examine the effects of actual and perceived similarity simultaneously during a face-to-face initial romantic encounter. Participants attending a speed-dating event interacted with ∼12 members of the opposite sex for 4 min each. Actual and perceived similarity for each pair were calculated from questionnaire responses assessed before the event and after each date. Data revealed that perceived, but not actual, similarity significantly predicted romantic liking in this speed-dating context. Furthermore, perceived similarity was a far weaker predictor of attraction when assessed using specific traits rather than generally. Over the past 60 years, researchers have examined thoroughly the role that similarity between partners plays in predicting interpersonal attraction. Until recently, the general consensus has been that participants report stronger attraction to objectively similar others (i.e., actual similarity) than to those with whom they share fewer traits, beliefs, and/or attitudes. The similarity-attraction effect, commonly dubbed “Byrne’s law of attraction” or “Byrne’s law of similarity,” is a central Natasha D. Tidwell, Department of Psychology, Texas A&M University; Paul W. Eastwick, Department of Psychology, Texas A&M University; Eli J. Finkel, Department of Psychology, Northwestern University. We thank Jacob Matthews for his masterful programming of the Northwestern Speed-dating Study and the Northwestern Speed-Dating Team for conducting the studies themselves. We also thank David Kenny for his assistance with the social relations model analyses. Correspondence should be addressed to Natasha D. Tidwell, Texas A&M University, Department of Psychology, 4235 TAMU, College Station, TX 778434235, e-mail: ndtidwell@gmail.com or Paul W. Eastwick, Texas A&M University, Department of Psychology, 4235 TAMU, College Station, TX 77843-4235, e-mail: eastwick@tamu.edu. 
feature of textbook reviews of attraction and relationship initiation.1 Research on the actual similarity-attraction effect has most frequently examined similarity of attitudes, finding that participants are more likely to become attracted to a stranger with whom they share many common attitudes than to one with whom they share few (Byrne, 1961; Byrne, Ervin, & Lamberth, 1970). Scholars have also found that actual similarity of personality traits predicts initial attraction, but the results are not as robust as those for attitude similarity (Klohnen & Luo, 2003). Furthermore, some research has suggested that actual similarity in external qualities (e.g., age, hairstyle) is more predictive of 1. Researchers have also found that actual similarity predicts satisfaction and stability in existing relationships (e.g., Gaunt, 2006; Luo et al., 2008; Luo & Klohnen, 2005), suggesting that Byrne’s law of attraction may extend well beyond initial attraction per se. Although we review prior work on similarity in both initial attraction and established relationship contexts below, the present data specifically examine the association between similarity and attraction in an initial face-toface encounter.", "title": "" } ]
scidocsrr
4c5c9a90a4890c72422be643dbd864ce
Operation of Compressor and Electronic Expansion Valve via Different Controllers
[ { "docid": "8b3ad3d48da22c529e65c26447265372", "text": "It is demonstrated that neural networks can be used effectively for the identification and control of nonlinear dynamical systems. The emphasis is on models for both identification and control. Static and dynamic backpropagation methods for the adjustment of parameters are discussed. In the models that are introduced, multilayer and recurrent networks are interconnected in novel configurations, and hence there is a real need to study them in a unified fashion. Simulation results reveal that the identification and adaptive control schemes suggested are practically feasible. Basic concepts and definitions are introduced throughout, and theoretical questions that have to be addressed are also described.", "title": "" } ]
[ { "docid": "fb11b937a3c07fd4b76cda1ed1eadc07", "text": "Depth information plays an important role in a variety of applications, including manufacturing, medical imaging, computer vision, graphics, and virtual/augmented reality (VR/AR). Depth sensing has thus attracted sustained attention from both academia and industry communities for decades. Mainstream depth cameras can be divided into three categories: stereo, time of flight (ToF), and structured light. Stereo cameras require no active illumination and can be used outdoors, but they are fragile for homogeneous surfaces. Recently, off-the-shelf light field cameras have demonstrated improved depth estimation capability with a multiview stereo configuration. ToF cameras operate at a high frame rate and fit time-critical scenarios well, but they are susceptible to noise and limited to low resolution [3]. Structured light cameras can produce high-resolution, high-accuracy depth, provided that a number of patterns are sequentially used. Due to its promising and reliable performance, the structured light approach has been widely adopted for three-dimensional (3-D) scanning purposes. However, achieving real-time depth with structured light either requires highspeed (and thus expensive) hardware or sacrifices depth resolution and accuracy by using a single pattern instead.", "title": "" }, { "docid": "af628819a5392543266668b94c579a96", "text": "Elephantopus scaber is an ethnomedicinal plant used by the Zhuang people in Southwest China to treat headaches, colds, diarrhea, hepatitis, and bronchitis. A new δ -truxinate derivative, ethyl, methyl 3,4,3',4'-tetrahydroxy- δ -truxinate (1), was isolated from the ethyl acetate extract of the entire plant, along with 4 known compounds. The antioxidant activity of these 5 compounds was determined by ABTS radical scavenging assay. Compound 1 was also tested for its cytotoxicity effect against HepG2 by MTT assay (IC50 = 60  μ M), and its potential anti-inflammatory, antibiotic, and antitumor bioactivities were predicted using target fishing method software.", "title": "" }, { "docid": "bdefc8bcd92aefe966d4fcd98ab1fdbb", "text": "The automatic identification system (AIS) tracks vessel movement by means of electronic exchange of navigation data between vessels, with onboard transceiver, terrestrial, and/or satellite base stations. The gathered data contain a wealth of information useful for maritime safety, security, and efficiency. Because of the close relationship between data and methodology in marine data mining and the importance of both of them in marine intelligence research, this paper surveys AIS data sources and relevant aspects of navigation in which such data are or could be exploited for safety of seafaring, namely traffic anomaly detection, route estimation, collision prediction, and path planning.", "title": "" }, { "docid": "f14272db4779239dc7d392ef7dfac52d", "text": "3 The Rotating Calipers Algorithm 3 3.1 Computing the Initial Rectangle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 3.2 Updating the Rectangle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 3.2.1 Distinct Supporting Vertices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 3.2.2 Duplicate Supporting Vertices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 3.2.3 Multiple Polygon Edges Attain Minimum Angle . . . . . . . . . . . . . . . . . . . . . 8 3.2.4 The General Update Step . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . 10", "title": "" }, { "docid": "19d4662287a5c3ce1cef85fa601b74ba", "text": "This paper compares two approaches in identifying outliers in multivariate datasets; Mahalanobis distance (MD) and robust distance (RD). MD has been known suffering from masking and swamping effects and RD is an approach that was developed to overcome problems that arise in MD. There are two purposes of this paper, first is to identify outliers using MD and RD and the second is to show that RD performs better than MD in identifying outliers. An observation is classified as an outlier if MD or RD is larger than a cut-off value. Outlier generating model is used to generate a set of data and MD and RD are computed from this set of data. The results showed that RD can identify outliers better than MD. However, in non-outliers data the performance for both approaches are similar. The results for RD also showed that RD can identify multivariate outliers much better when the number of dimension is large.", "title": "" }, { "docid": "33d530e8e74cd5b5dbfa2035a7608664", "text": "This paper presents an area-efficient ultra-low-power 32 kHz clock source for low power wireless communication systems using a temperature-compensated charge-pump-based digitally controlled oscillator (DCO). A highly efficient digital calibration method is proposed to achieve frequency stability over process variation and temperature drifts. This calibration method locks the DCO's output frequency to the reference clock of the wireless communication system during its active state. The introduced calibration scheme offers high jitter immunity and short locking periods overcoming frequency calibration errors for typical ultra-low-power DCO's. The circuit area of the proposed ultra-low-power clock source is 100μm × 140μm in a 130nm RF CMOS technology. In measurements the proposed ultra-low-power clock source achieves a frequency stability of 10 ppm/°C from 10 °C to 100 °C for temperature drifts of less than 1 °C/s with 80nW power consumption.", "title": "" }, { "docid": "5b16933905d36ba54ab74743251d7ca7", "text": "The explosive growth of the user-generated content on the Web has offered a rich data source for mining opinions. However, the large number of diverse review sources challenges the individual users and organizations on how to use the opinion information effectively. Therefore, automated opinion mining and summarization techniques have become increasingly important. Different from previous approaches that have mostly treated product feature and opinion extraction as two independent tasks, we merge them together in a unified process by using probabilistic models. Specifically, we treat the problem of product feature and opinion extraction as a sequence labeling task and adopt Conditional Random Fields models to accomplish it. As part of our work, we develop a computational approach to construct domain specific sentiment lexicon by combining semi-structured reviews with general sentiment lexicon, which helps to identify the sentiment orientations of opinions. Experimental results on two real world datasets show that the proposed method is effective.", "title": "" }, { "docid": "4b5c5b76d7370a82f96f36659cd63850", "text": "For force control of robot and collision detection with humans, robots that has joint torque sensors have been developed. However, existing torque sensors cannot measure correct torque because of crosstalk error. 
In order to solve this problem, we proposed a novel torque sensor that can measure the pure torque without crosstalk. The hexaform of the proposed sensor with truss structure increases deformation of the sensor and restoration, and the Wheatstone bridge circuit of strain gauge removes crosstalk error. Sensor performance is verified with FEM analysis.", "title": "" }, { "docid": "f09733894d94052707ed768aea8d26e6", "text": "The aim of this paper is to investigate the rules and constraints of code-switching (CS) in Hindi-English mixed language data. In this paper, we’ll discuss how we collected the mixed language corpus. This corpus is primarily made up of student interview speech. The speech was manually transcribed and verified by bilingual speakers of Hindi and English. The code-switching cases in the corpus are discussed and the reasons for code-switching are explained.", "title": "" }, { "docid": "2ca050b562ed14688dd9d68b454928e0", "text": "Electronic waste (e-waste) is one of the fastest-growing pollution problems worldwide given the presence of a variety of toxic substances which can contaminate the environment and threaten human health, if disposal protocols are not meticulously managed. This paper presents an overview of toxic substances present in e-waste, their potential environmental and human health impacts together with management strategies currently being used in certain countries. Several tools including life cycle assessment (LCA), material flow analysis (MFA), multi criteria analysis (MCA) and extended producer responsibility (EPR) have been developed to manage e-wastes especially in developed countries. The key to success in terms of e-waste management is to develop eco-design devices, properly collect e-waste, recover and recycle material by safe methods, dispose of e-waste by suitable techniques, forbid the transfer of used electronic devices to developing countries, and raise awareness of the impact of e-waste. No single tool is adequate but together they can complement each other to solve this issue.", "title": "" }, { "docid": "ced98c32f887001d40e783ab7b294e1a", "text": "This paper proposes a two-layer High Dynamic Range (HDR) coding scheme using a new tone mapping. Our tone mapping method transforms an HDR image onto a Low Dynamic Range (LDR) image by using a base map that is a smoothed version of the HDR luminance. In our scheme, the HDR image can be reconstructed from the tone mapped LDR image. Our method makes use of this property to realize a two-layer HDR coding by encoding both of the tone mapped LDR image and the base map. This paper validates the effectiveness of our approach through some experiments.", "title": "" }, { "docid": "e40a60aec86433eaac618e6b391e2a57", "text": "Marine microalgae have been used for a long time as food for humans, such as Arthrospira (formerly, Spirulina), and for animals in aquaculture. The biomass of these microalgae and the compounds they produce have been shown to possess several biological applications with numerous health benefits. The present review puts up-to-date the research on the biological activities and applications of polysaccharides, active biocompounds synthesized by marine unicellular algae, which are, most of the times, released into the surrounding medium (exo- or extracellular polysaccharides, EPS).
It goes through the most studied activities of sulphated polysaccharides (sPS) or their derivatives, but also highlights lesser known applications as hypolipidaemic or hypoglycaemic, or as biolubricant agents and drag-reducers. Therefore, the great potentials of sPS from marine microalgae to be used as nutraceuticals, therapeutic agents, cosmetics, or in other areas, such as engineering, are approached in this review.", "title": "" }, { "docid": "390b0dbd01e88fec7f7a4b59cb753978", "text": "In this paper, we propose a segmentation method based on normalized cut and superpixels. The method relies on color and texture cues for fast computation and efficient use of memory. The method is used for food image segmentation as part of a mobile food record system we have developed for dietary assessment and management. The accurate estimate of nutrients relies on correctly labelled food items and sufficiently well-segmented regions. Our method achieves competitive results using the Berkeley Segmentation Dataset and outperforms some of the most popular techniques in a food image dataset.", "title": "" }, { "docid": "9e5aa162d1eecefe11abe5ecefbc11e3", "text": "Efficient algorithms for 3D character control in continuous control setting remain an open problem in spite of the remarkable recent advances in the field. We present a sampling-based model-predictive controller that comes in the form of a Monte Carlo tree search (MCTS). The tree search utilizes information from multiple sources including two machine learning models. This allows rapid development of complex skills such as 3D humanoid locomotion with less than a million simulation steps, in less than a minute of computing on a modest personal computer. We demonstrate locomotion of 3D characters with varying topologies under disturbances such as heavy projectile hits and abruptly changing target direction. In this paper we also present a new way to combine information from the various sources such that minimal amount of information is lost. We furthermore extend the neural network, involved in the algorithm, to represent stochastic policies. Our approach yields a robust control algorithm that is easy to use. While learning, the algorithm runs in near real-time, and after learning the sampling budget can be reduced for real-time operation.", "title": "" }, { "docid": "b5df59d926ca4778c306b255d60870a1", "text": "In this paper the transcription and evaluation of the corpus DIMEx100 for Mexican Spanish is presented. First we describe the corpus and explain the linguistic and computational motivation for its design and collection process; then, the phonetic antecedents and the alphabet adopted for the transcription task are presented; the corpus has been transcribed at three different granularity levels, which are also specified in detail. The corpus statistics for each transcription level are also presented. A set of phonetic rules describing phonetic context observed empirically in spontaneous conversation is also validated with the transcription. The corpus has been used for the construction of acoustic models and a phonetic dictionary for the construction of a speech recognition system. Initial performance results suggest that the data can be used to train good quality acoustic models.", "title": "" }, { "docid": "e777bb21d57393a4848fcb04c6d5b913", "text": "A 2.5 GHz fully integrated voltage controlled oscillator (VCO) for wireless application has been designed in a 0.35μm CMOS technology. 
A method for compensating the effect of temperature on the carrier oscillation frequency has been presented in this work. We compare also different VCOs topologies in order to select one with low phase noise, low supply sensitivity and large tuning frequency. Good results are obtained with a simple NMOS –Gm VCO. This proposed VCO has a wide operating range from 300 MHz with a good linearity between the output frequency and the control input voltage, with a temperature coefficient of -5 ppm/°C from 20°C to 120°C range. The phase noise is about -135.2dBc/Hz at 1MHz from the carrier with a power consumption of 5mW.", "title": "" }, { "docid": "cb641fc639b86abadec4f85efc226c14", "text": "The modernization of the US electric power infrastructure, especially in lieu of its aging, overstressed networks; shifts in social, energy and environmental policies, and also new vulnerabilities, is a national concern. Our system are required to be more adaptive and secure more than every before. Consumers are also demanding increased power quality and reliability of supply and delivery. As such, power industries, government and national laboratories and consortia have developed increased interest in what is now called the Smart Grid of the future. The paper outlines Smart Grid intelligent functions that advance interactions of agents such as telecommunication, control, and optimization to achieve adaptability, self-healing, efficiency and reliability of power systems. The author also presents a special case for the development of Dynamic Stochastic Optimal Power Flow (DSOPF) technology as a tool needed in Smart Grid design. The integration of DSOPF to achieve the design goals with advanced DMS capabilities are discussed herein. This reference paper also outlines research focus for developing next generation of advance tools for efficient and flexible power systems operation and control.", "title": "" }, { "docid": "29ce7251e5237b0666cef2aee7167126", "text": "Chinese characters have a huge set of character categories, more than 20, 000 and the number is still increasing as more and more novel characters continue being created. However, the enormous characters can be decomposed into a compact set of about 500 fundamental and structural radicals. This paper introduces a novel radical analysis network (RAN) to recognize printed Chinese characters by identifying radicals and analyzing two-dimensional spatial structures among them. The proposed RAN first extracts visual features from input by employing convolutional neural networks as an encoder. Then a decoder based on recurrent neural networks is employed, aiming at generating captions of Chinese characters by detecting radicals and two-dimensional structures through a spatial attention mechanism. The manner of treating a Chinese character as a composition of radicals rather than a single character class largely reduces the size of vocabulary and enables RAN to possess the ability of recognizing unseen Chinese character classes, namely zero-shot learning.", "title": "" }, { "docid": "33285ad9f7bc6e33b48e3f1e27a1ccc9", "text": "Information visualization is a very important tool in BigData analytics. BigData, structured and unstructured data which contains images, videos, texts, audio and other forms of data, collected from multiple datasets, is too big, too complex and moves too fast to analyse using traditional methods. 
This has given rise to two issues; 1) how to reduce multidimensional data without the loss of any data patterns for multiple datasets, 2) how to visualize BigData patterns for analysis. In this paper, we have classified the BigData attributes into `5Ws' data dimensions, and then established a `5Ws' density approach that represents the characteristics of data flow patterns. We use parallel coordinates to display the `5Ws' sending and receiving densities which provide more analytic features for BigData analysis. The experiment shows that this new model with parallel coordinate visualization can be efficiently used for BigData analysis and visualization.", "title": "" }, { "docid": "6844deb3346756b1858778a4cec26098", "text": "Deep Learning has recently been introduced as a new alternative to perform Side-Channel analysis [1]. Until now, studies have been focused on applying Deep Learning techniques to perform Profiled SideChannel attacks where an attacker has a full control of a profiling device and is able to collect a large amount of traces for different key values in order to characterize the device leakage prior to the attack. In this paper we introduce a new method to apply Deep Learning techniques in a Non-Profiled context, where an attacker can only collect a limited number of side-channel traces for a fixed unknown key value from a closed device. We show that by combining key guesses with observations of Deep Learning metrics, it is possible to recover information about the secret key. The main interest of this method, is that it is possible to use the power of Deep Learning and Neural Networks in a Non-Profiled scenario. We show that it is possible to exploit the translation-invariance property of Convolutional Neural Networks [2] against de-synchronized traces and use Data Augmentation techniques also during Non-Profiled side-channel attacks. Additionally, the present work shows that in some conditions, this method can outperform classic Non-Profiled attacks as Correlation Power Analysis. We also highlight that it is possible to target masked implementations without leakages combination pre-preprocessing and with less assumptions than classic high-order attacks. To illustrate these properties, we present a series of experiments performed on simulated data and real traces collected from the ChipWhisperer board and from the ASCAD database [3]. The results of our experiments demonstrate the interests of this new method and show that this attack can be performed in practice.", "title": "" } ]
scidocsrr
8c6fd0aedbea7938ae0b08297b62d4a7
Screening for Depression Patients in Family Medicine
[ { "docid": "f84f279b6ef3b112a0411f5cba82e1b0", "text": "PHILADELPHIA The difficulties inherent in obtaining consistent and adequate diagnoses for the purposes of research and therapy have been pointed out by a number of authors. Pasamanick12 in a recent article viewed the low interclinician agreement on diagnosis as an indictment of the present state of psychiatry and called for \"the development of objective, measurable and verifiable criteria of classification based not on personal or parochial considerations, buton behavioral and other objectively measurable manifestations.\" Attempts by other investigators to subject clinical observations and judgments to objective measurement have resulted in a wide variety of psychiatric rating ~ c a l e s . ~ J ~ These have been well summarized in a review article by Lorr l1 on \"Rating Scales and Check Lists for the E v a 1 u a t i o n of Psychopathology.\" In the area of psychological testing, a variety of paper-andpencil tests have been devised for the purpose of measuring specific personality traits; for example, the Depression-Elation Test, devised by Jasper in 1930. This report describes the development of an instrument designed to measure the behavioral manifestations of depression. In the planning of the research design of a project aimed at testing certain psychoanalytic formulations of depression, the necessity for establishing an appropriate system for identifying depression was recognized. Because of the reports on the low degree of interclinician agreement on diagnosis,13 we could not depend on the clinical diagnosis, but had to formulate a method of defining depression that would be reliable and valid. The available instruments were not considered adequate for our purposes. The Minnesota Multiphasic Personality Inventory, for example, was not specifically designed", "title": "" } ]
[ { "docid": "5945132041b353b72af11e88b6ba5b97", "text": "Oblivious RAM (ORAM) protocols are powerful techniques that hide a client’s data as well as access patterns from untrusted service providers. We present an oblivious cloud storage system, ObliviSync, that specifically targets one of the most widely-used personal cloud storage paradigms: synchronization and backup services, popular examples of which are Dropbox, iCloud Drive, and Google Drive. This setting provides a unique opportunity because the above privacy properties can be achieved with a simpler form of ORAM called write-only ORAM, which allows for dramatically increased efficiency compared to related work. Our solution is asymptotically optimal and practically efficient, with a small constant overhead of approximately 4x compared with non-private file storage, depending only on the total data size and parameters chosen according to the usage rate, and not on the number or size of individual files. Our construction also offers protection against timing-channel attacks, which has not been previously considered in ORAM protocols. We built and evaluated a full implementation of ObliviSync that supports multiple simultaneous read-only clients and a single concurrent read/write client whose edits automatically and seamlessly propagate to the readers. We show that our system functions under high work loads, with realistic file size distributions, and with small additional latency (as compared to a baseline encrypted file system) when paired with Dropbox as the synchronization service.", "title": "" }, { "docid": "38666c5299ee67e336dc65f23f528a56", "text": "Different modalities of magnetic resonance imaging (MRI) can indicate tumor-induced tissue changes from different perspectives, thus benefit brain tumor segmentation when they are considered together. Meanwhile, it is always interesting to examine the diagnosis potential from single modality, considering the cost of acquiring multi-modality images. Clinically, T1-weighted MRI is the most commonly used MR imaging modality, although it may not be the best option for contouring brain tumor. In this paper, we investigate whether synthesizing FLAIR images from T1 could help improve brain tumor segmentation from the single modality of T1. This is achieved by designing a 3D conditional Generative Adversarial Network (cGAN) for FLAIR image synthesis and a local adaptive fusion method to better depict the details of the synthesized FLAIR images. The proposed method can effectively handle the segmentation task of brain tumors that vary in appearance, size and location across samples.", "title": "" }, { "docid": "28b2bbcfb8960ff40f2fe456a5b00729", "text": "This paper presents an adaptation of Lesk’s dictionary– based word sense disambiguation algorithm. Rather than using a standard dictionary as the source of glosses for our approach, the lexical database WordNet is employed. This provides a rich hierarchy of semantic relations that our algorithm can exploit. This method is evaluated using the English lexical sample data from the Senseval-2 word sense disambiguation exercise, and attains an overall accuracy of 32%. This represents a significant improvement over the 16% and 23% accuracy attained by variations of the Lesk algorithm used as benchmarks during the Senseval-2 comparative exercise among word sense disambiguation", "title": "" }, { "docid": "07af60525d625fd50e75f61dca4107db", "text": "Spell checking is a well-known task in Natural Language Processing. 
Nowadays, spell checkers are an important component of a number of computer software such as web browsers, word processors and others. Spelling error detection and correction is the process that will check the spelling of words in a document, and in occurrence of any error, list out the correct spelling in the form of suggestions. This survey paper covers different spelling error detection and correction techniques in various languages. KeywordsNLP, Spell Checker, Spelling Errors, Error detection techniques, Error correction techniques.", "title": "" }, { "docid": "d6c34d138692851efdbb807a89d0fcca", "text": "Vaccine hesitancy reflects concerns about the decision to vaccinate oneself or one's children. There is a broad range of factors contributing to vaccine hesitancy, including the compulsory nature of vaccines, their coincidental temporal relationships to adverse health outcomes, unfamiliarity with vaccine-preventable diseases, and lack of trust in corporations and public health agencies. Although vaccination is a norm in the U.S. and the majority of parents vaccinate their children, many do so amid concerns. The proportion of parents claiming non-medical exemptions to school immunization requirements has been increasing over the past decade. Vaccine refusal has been associated with outbreaks of invasive Haemophilus influenzae type b disease, varicella, pneumococcal disease, measles, and pertussis, resulting in the unnecessary suffering of young children and waste of limited public health resources. Vaccine hesitancy is an extremely important issue that needs to be addressed because effective control of vaccine-preventable diseases generally requires indefinite maintenance of extremely high rates of timely vaccination. The multifactorial and complex causes of vaccine hesitancy require a broad range of approaches on the individual, provider, health system, and national levels. These include standardized measurement tools to quantify and locate clustering of vaccine hesitancy and better understand issues of trust; rapid, independent, and transparent review of an enhanced and appropriately funded vaccine safety system; adequate reimbursement for vaccine risk communication in doctors' offices; and individually tailored messages for parents who have vaccine concerns, especially first-time pregnant women. The potential of vaccines to prevent illness and save lives has never been greater. Yet, that potential is directly dependent on parental acceptance of vaccines, which requires confidence in vaccines, healthcare providers who recommend and administer vaccines, and the systems to make sure vaccines are safe.", "title": "" }, { "docid": "a1fcf0d2b9a619c0a70b210c70cf4bfd", "text": "This paper demonstrates a reliable navigation of a mobile robot in outdoor environment. We fuse differential GPS and odometry data using the framework of extended Kalman filter to localize a mobile robot. And also, we propose an algorithm to detect curbs through the laser range finder. An important feature of road environment is the existence of curbs. The mobile robot builds the map of the curbs of roads and the map is used for tracking and localization. The navigation system for the mobile robot consists of a mobile robot and a control station. The mobile robot sends the image data from a camera to the control station. The control station receives and displays the image data and the teleoperator commands the mobile robot based on the image data. 
Since the image data does not contain enough data for reliable navigation, a hybrid strategy for reliable mobile robot in outdoor environment is suggested. When the mobile robot is faced with unexpected obstacles or the situation that, if it follows the command, it can happen to collide, it sends a warning message to the teleoperator and changes the mode from teleoperated to autonomous to avoid the obstacles by itself. After avoiding the obstacles or the collision situation, the mode of the mobile robot is returned to teleoperated mode. We have been able to confirm that the appropriate change of navigation mode can help the teleoperator perform reliable navigation in outdoor environment through experiments in the road.", "title": "" }, { "docid": "0ce06f95b1dafcac6dad4413c8b81970", "text": "User acceptance of artificial intelligence agents might depend on their ability to explain their reasoning, which requires adding an interpretability layer that facilitates users to understand their behavior. This paper focuses on adding an interpretable layer on top of Semantic Textual Similarity (STS), which measures the degree of semantic equivalence between two sentences. The interpretability layer is formalized as the alignment between pairs of segments across the two sentences, where the relation between the segments is labeled with a relation type and a similarity score. We present a publicly available dataset of sentence pairs annotated following the formalization. We then develop a system trained on this dataset which, given a sentence pair, explains what is similar and different, in the form of graded and typed segment alignments. When evaluated on the dataset, the system performs better than an informed baseline, showing that the dataset and task are well-defined and feasible. Most importantly, two user studies show how the system output can be used to automatically produce explanations in natural language. Users performed better when having access to the explanations, providing preliminary evidence that our dataset and method to automatically produce explanations is useful in real applications.", "title": "" }, { "docid": "b1f3f0dac49d6613f381b30ebf5b0ad7", "text": "In the current Web scenario a video browsing tool that produces on-the-fly storyboards is more and more a need. Video summary techniques can be helpful but, due to their long processing time, they are usually unsuitable for on-the-fly usage. Therefore, it is common to produce storyboards in advance, penalizing users customization. The lack of customization is more and more critical, as users have different demands and might access the Web with several different networking and device technologies. In this paper we propose STIMO, a summarization technique designed to produce on-the-fly video storyboards. STIMO produces still and moving storyboards and allows advanced users customization (e.g., users can select the storyboard length and the maximum time they are willing to wait to get the storyboard). STIMO is based on a fast clustering algorithm that selects the most representative video contents using HSV frame color distribution. Experimental results show that STIMO produces storyboards with good quality and in a time that makes on-the-fly usage possible.", "title": "" }, { "docid": "16afaad8bfdc64f9d97e9829f2029bc6", "text": "The combination of limited individual information and costly information acquisition in markets for experience goods leads us to believe that significant peer effects drive demand in these markets. 
In this paper we model the effects of peers on the demand patterns of products in the market experience goods microfunding. By analyzing data from an online crowdfunding platform from 2006 to 2010 we are able to ascertain that peer effects, and not network externalities, influence consumption.", "title": "" }, { "docid": "c6283ee48fd5115d28e4ea0812150f25", "text": "Stochastic regular bi-languages has been recently proposed to model the joint probability distributions appearing in some statistical approaches of Spoken Dialog Systems. To this end a deterministic and probabilistic finite state biautomaton was defined to model the distribution probabilities for the dialog model. In this work we propose and evaluate decision strategies over the defined probabilistic finite state bi-automaton to select the best system action at each step of the interaction. To this end the paper proposes some heuristic decision functions that consider both action probabilities learn from a corpus and number of known attributes at running time. We compare either heuristics based on a single next turn or based on entire paths over the automaton. Experimental evaluation was carried out to test the model and the strategies over the Let’s Go Bus Information system. The results obtained show good system performances. They also show that local decisions can lead to better system performances than best path-based decisions due to the unpredictability of the user behaviors.", "title": "" }, { "docid": "dffe5305558e10a0ceba499f3a01f4d8", "text": "A simple framework Probabilistic Multi-view Graph Embedding (PMvGE) is proposed for multi-view feature learning with many-to-many associations so that it generalizes various existing multi-view methods. PMvGE is a probabilistic model for predicting new associations via graph embedding of the nodes of data vectors with links of their associations. Multi-view data vectors with many-to-many associations are transformed by neural networks to feature vectors in a shared space, and the probability of new association between two data vectors is modeled by the inner product of their feature vectors. While existing multi-view feature learning techniques can treat only either of many-to-many association or non-linear transformation, PMvGE can treat both simultaneously. By combining Mercer’s theorem and the universal approximation theorem, we prove that PMvGE learns a wide class of similarity measures across views. Our likelihoodbased estimator enables efficient computation of non-linear transformations of data vectors in largescale datasets by minibatch SGD, and numerical experiments illustrate that PMvGE outperforms existing multi-view methods.", "title": "" }, { "docid": "477769b83e70f1d46062518b1d692664", "text": "Deep Neural Networks (DNNs) have been demonstrated to perform exceptionally well on most recognition tasks such as image classification and segmentation. However, they have also been shown to be vulnerable to adversarial examples. This phenomenon has recently attracted a lot of attention but it has not been extensively studied on multiple, large-scale datasets and complex tasks such as semantic segmentation which often require more specialised networks with additional components such as CRFs, dilated convolutions, skip-connections and multiscale processing. In this paper, we present what to our knowledge is the first rigorous evaluation of adversarial attacks on modern semantic segmentation models, using two large-scale datasets. 
We analyse the effect of different network architectures, model capacity and multiscale processing, and show that many observations made on the task of classification do not always transfer to this more complex task. Furthermore, we show how mean-field inference in deep structured models and multiscale processing naturally implement recently proposed adversarial defenses. Our observations will aid future efforts in understanding and defending against adversarial examples. Moreover, in the shorter term, we show which segmentation models should currently be preferred in safety-critical applications due to their inherent robustness.", "title": "" }, { "docid": "83e0fdbaa10c01aecdbe9cf853511230", "text": "We use an online travel context to test three aspects of communication", "title": "" }, { "docid": "58af6565b74f68371a1c61eab44a72c5", "text": "Recently, increased computational power and data availability, as well as algorithmic advances, have led machine learning (ML) techniques to impressive results in regression, classification, data generation and reinforcement learning tasks. Despite these successes, the proximity to the physical limits of chip fabrication alongside the increasing size of datasets is motivating a growing number of researchers to explore the possibility of harnessing the power of quantum computation to speed up classical ML algorithms. Here we review the literature in quantum ML and discuss perspectives for a mixed readership of classical ML and quantum computation experts. Particular emphasis will be placed on clarifying the limitations of quantum algorithms, how they compare with their best classical counterparts and why quantum resources are expected to provide advantages for learning problems. Learning in the presence of noise and certain computationally hard problems in ML are identified as promising directions for the field. Practical questions, such as how to upload classical data into quantum form, will also be addressed.", "title": "" }, { "docid": "ea937e1209c270a7b6ab2214e0989fed", "text": "With current projections regarding the growth of Internet sales, online retailing raises many questions about how to market on the Net. While convenience impels consumers to purchase items on the web, quality remains a significant factor in deciding where to shop online. The competition is increasing and personalization is considered to be the competitive advantage that will determine the winners in the market of online shopping in the following years. Recommender systems are a means of personalizing a site and a solution to the customer’s information overload problem. As such, many e-commerce sites already use them to facilitate the buying process. In this paper we present a recommender system for online shopping focusing on the specific characteristics and requirements of electronic retailing. We use a hybrid model supporting dynamic recommendations, which eliminates the problems the underlying techniques have when applied solely. At the end, we conclude with some ideas for further development and research in this area.", "title": "" }, { "docid": "fa9571673fe848d1d119e2d49f21d28d", "text": "Convolutional Neural Networks (CNNs) trained on large scale RGB databases have become the secret sauce in the majority of recent approaches for object categorization from RGB-D data. Thanks to colorization techniques, these methods exploit the filters learned from 2D images to extract meaningful representations in 2.5D. 
Still, the perceptual signature of these two kind of images is very different, with the first usually strongly characterized by textures, and the second mostly by silhouettes of objects. Ideally, one would like to have two CNNs, one for RGB and one for depth, each trained on a suitable data collection, able to capture the perceptual properties of each channel for the task at hand. This has not been possible so far, due to the lack of a suitable depth database. This paper addresses this issue, proposing to opt for synthetically generated images rather than collecting by hand a 2.5D large scale database. While being clearly a proxy for real data, synthetic images allow to trade quality for quantity, making it possible to generate a virtually infinite amount of data. We show that the filters learned from such data collection, using the very same architecture typically used on visual data, learns very different filters, resulting in depth features (a) able to better characterize the different facets of depth images, and (b) complementary with respect to those derived from CNNs pre-trained on 2D datasets. Experiments on two publicly available databases show the power of our approach.", "title": "" }, { "docid": "b54ca99ae8818517d5c04100bad0f3b4", "text": "Finding the sparsest solutions to a tensor complementarity problem is generally NP-hard due to the nonconvexity and noncontinuity of the involved 0 norm. In this paper, a special type of tensor complementarity problems with Z -tensors has been considered. Under some mild conditions, we show that to pursuit the sparsest solutions is equivalent to solving polynomial programming with a linear objective function. The involved conditions guarantee the desired exact relaxation and also allow to achieve a global optimal solution to the relaxednonconvexpolynomial programming problem. Particularly, in comparison to existing exact relaxation conditions, such as RIP-type ones, our proposed conditions are easy to verify. This research was supported by the National Natural Science Foundation of China (11301022, 11431002), the State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong University (RCS2014ZT20, RCS2014ZZ01), and the Hong Kong Research Grant Council (Grant No. PolyU 502111, 501212, 501913 and 15302114). B Ziyan Luo starkeynature@hotmail.com Liqun Qi liqun.qi@polyu.edu.hk Naihua Xiu nhxiu@bjtu.edu.cn 1 State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong University, Beijing 100044, People’s Repubic of China 2 Department of Applied Mathematics, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong, People’s Repubic of China 3 Department of Mathematics, School of Science, Beijing Jiaotong University, Beijing, People’s Repubic of China 123 Author's personal copy", "title": "" }, { "docid": "fbcad29c075e8d58b9f6df5ee70aa0be", "text": "We present a motion planning framework for autonomous on-road driving considering both the uncertainty caused by an autonomous vehicle and other traffic participants. The future motion of traffic participants is predicted using a local planner, and the uncertainty along the predicted trajectory is computed based on Gaussian propagation. For the autonomous vehicle, the uncertainty from localization and control is estimated based on a Linear-Quadratic Gaussian (LQG) framework. Compared with other safety assessment methods, our framework allows the planner to avoid unsafe situations more efficiently, thanks to the direct uncertainty information feedback to the planner. 
We also demonstrate our planner's ability to generate safer trajectories compared to planning only with a LQG framework.", "title": "" }, { "docid": "6fb8b461530af2c56ec0fac36dd85d3a", "text": "Psoriatic arthritis is one of the spondyloarthritis. It is a disease of clinical heterogenicity, which may affect peripheral joints, as well as axial spine, with presence of inflammatory lesions in soft tissue, in a form of dactylitis and enthesopathy. Plain radiography remains the basic imaging modality for PsA diagnosis, although early inflammatory changes affecting soft tissue and bone marrow cannot be detected with its use, or the image is indistinctive. Typical radiographic features of PsA occur in an advanced disease, mainly within the synovial joints, but also in fibrocartilaginous joints, such as sacroiliac joints, and additionally in entheses of tendons and ligaments. Moll and Wright classified PsA into 5 subtypes: asymmetric oligoarthritis, symmetric polyarthritis, arthritis mutilans, distal interphalangeal arthritis of the hands and feet and spinal column involvement. In this part of the paper we discuss radiographic features of the disease. The next one will address magnetic resonance imaging and ultrasonography.", "title": "" } ]
scidocsrr
d3b2ea56837b774bdd1ba56a171bd547
Automating image segmentation verification and validation by learning test oracles
[ { "docid": "a5fc5e1bf35863d030b20c219732bc2b", "text": "Measures of overlap of labelled regions of images, such as the Dice and Tanimoto coefficients, have been extensively used to evaluate image registration and segmentation algorithms. Modern studies can include multiple labels defined on multiple images yet most evaluation schemes report one overlap per labelled region, simply averaged over multiple images. In this paper, common overlap measures are generalized to measure the total overlap of ensembles of labels defined on multiple test images and account for fractional labels using fuzzy set theory. This framework allows a single \"figure-of-merit\" to be reported which summarises the results of a complex experiment by image pair, by label or overall. A complementary measure of error, the overlap distance, is defined which captures the spatial extent of the nonoverlapping part and is related to the Hausdorff distance computed on grey level images. The generalized overlap measures are validated on synthetic images for which the overlap can be computed analytically and used as similarity measures in nonrigid registration of three-dimensional magnetic resonance imaging (MRI) brain images. Finally, a pragmatic segmentation ground truth is constructed by registering a magnetic resonance atlas brain to 20 individual scans, and used with the overlap measures to evaluate publicly available brain segmentation algorithms", "title": "" }, { "docid": "892cfde6defce89783f0c290df4822f2", "text": "Metamorphic testing has been shown to be a simple yet effective technique in addressing the quality assurance of applications that do not have test oracles, i.e., for which it is difficult or impossible to know what the correct output should be for arbitrary input. In metamorphic testing, existing test case input is modified to produce new test cases in such a manner that, when given the new input, the application should produce an output that can easily be computed based on the original output. That is, if input x produces output f(x), then we create input x' such that we can predict f(x') based on f(x); if the application does not produce the expected output, then a defect must exist, and either f(x), or f(x') (or both) is wrong.\n In practice, however, metamorphic testing can be a manually intensive technique for all but the simplest cases. The transformation of input data can be laborious for large data sets, or practically impossible for input that is not in human-readable format. Similarly, comparing the outputs can be error-prone for large result sets, especially when slight variations in the results are not actually indicative of errors (i.e., are false positives), for instance when there is non-determinism in the application and multiple outputs can be considered correct.\n In this paper, we present an approach called Automated Metamorphic System Testing. This involves the automation of metamorphic testing at the system level by checking that the metamorphic properties of the entire application hold after its execution. The tester is able to easily set up and conduct metamorphic tests with little manual intervention, and testing can continue in the field with minimal impact on the user. Additionally, we present an approach called Heuristic Metamorphic Testing which seeks to reduce false positives and address some cases of non-determinism. 
We also describe an implementation framework called Amsterdam, and present the results of empirical studies in which we demonstrate the effectiveness of the technique on real-world programs without test oracles.", "title": "" }, { "docid": "d4aaea0107cbebd7896f4cb57fa39c05", "text": "A novel method is proposed for performing multilabel, interactive image segmentation. Given a small number of pixels with user-defined (or predefined) labels, one can analytically and quickly determine the probability that a random walker starting at each unlabeled pixel will first reach one of the prelabeled pixels. By assigning each pixel to the label for which the greatest probability is calculated, a high-quality image segmentation may be obtained. Theoretical properties of this algorithm are developed along with the corresponding connections to discrete potential theory and electrical circuits. This algorithm is formulated in discrete space (i.e., on a graph) using combinatorial analogues of standard operators and principles from continuous potential theory, allowing it to be applied in arbitrary dimension on arbitrary graphs", "title": "" } ]
[ { "docid": "69d42340c09303b69eafb19de7170159", "text": "Based on an example of translational motion, this report shows how to model and initialize the Kalman Filter. Basic rules about physical motion are introduced to point out, that the well-known laws of physical motion are a mere approximation. Hence, motion of non-constant velocity or acceleration is modelled by additional use of white noise. Special attention is drawn to the matrix initialization for use in the Kalman Filter, as, in general, papers and books do not give any hint on this; thus inducing the impression that initializing is not important and may be arbitrary. For unknown matrices many users of the Kalman Filter choose the unity matrix. Sometimes it works, sometimes it does not. In order to close this gap, initialization is shown on the example of human interactive motion. In contrast to measuring instruments with documented measurement errors in manuals, the errors generated by vision-based sensoring must be estimated carefully. Of course, the described methods may be adapted to other circumstances.", "title": "" }, { "docid": "47501c171c7b3f8e607550c958852be1", "text": "Fundus images provide an opportunity for early detection of diabetes. Generally, retina fundus images of diabetic patients exhibit exudates, which are lesions indicative of Diabetic Retinopathy (DR). Therefore, computational tools can be considered to be used in assisting ophthalmologists and medical doctor for the early screening of the disease. Hence in this paper, we proposed visualisation of exudates in fundus images using radar chart and Color Auto Correlogram (CAC) technique. The proposed technique requires that the Optic Disc (OD) from the fundus image be removed. Next, image normalisation was performed to standardise the colors in the fundus images. The exudates from the modified image are then extracted using Artificial Neural Network (ANN) and visualised using radar chart and CAC technique. The proposed technique was tested on 149 images of the publicly available MESSIDOR database. Experimental results suggest that the method has potential to be used for early indication of DR, by visualising the overlap between CAC features of the fundus images.", "title": "" }, { "docid": "07e91583f63660a6b4aa4bb2063bd2b7", "text": "ScanSAR interferometry is an attractive option for efficient topographic mapping of large areas and for monitoring of large-scale motions. Only ScanSAR interferometry made it possible to map almost the entire landmass of the earth in the 11-day Shuttle Radar Topography Mission. Also the operational satellites RADARSAT and ENVISAT offer ScanSAR imaging modes and thus allow for repeat-pass ScanSAR interferometry. This paper gives a complete description of ScanSAR and burst-mode interferometric signal properties and compares different processing algorithms. The problems addressed are azimuth scanning pattern synchronization, spectral shift filtering in the presence of high squint, Doppler centroid estimation, different phase-preserving ScanSAR processing algorithms, ScanSAR interferogram formation, coregistration, and beam alignment. Interferograms and digital elevation models from RADARSAT ScanSAR Narrow modes are presented. 
The novel “pack-and-go” algorithm for efficient burst-mode range processing and a new time-variant fast interpolator for interferometric coregistration are introduced.", "title": "" }, { "docid": "8cf10c84e6e389c0c10238477c619175", "text": "Based on self-determination theory, this study proposes and tests a motivational model of intraindividual changes in teacher burnout (emotional exhaustion, depersonalization, and reduced personal accomplishment). Participants were 806 French-Canadian teachers in public elementary and high schools. Results show that changes in teachers’ perceptions of classroom overload and students’ disruptive behavior are negatively related to changes in autonomous motivation, which in turn negatively predict changes in emotional exhaustion. Results also indicate that changes in teachers’ perceptions of students’ disruptive behaviors and school principal’s leadership behaviors are related to changes in self-efficacy, which in turn negatively predict changes in three burnout components. 2011 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "98e9d8fb4a04ad141b3a196fe0a9c08b", "text": "ÐGraphs are a powerful and universal data structure useful in various subfields of science and engineering. In this paper, we propose a new algorithm for subgraph isomorphism detection from a set of a priori known model graphs to an input graph that is given online. The new approach is based on a compact representation of the model graphs that is computed offline. Subgraphs that appear multiple times within the same or within different model graphs are represented only once, thus reducing the computational effort to detect them in an input graph. In the extreme case where all model graphs are highly similar, the run-time of the new algorithm becomes independent of the number of model graphs. Both a theoretical complexity analysis and practical experiments characterizing the performance of the new approach will be given. Index TermsÐGraph matching, graph isomorphism, subgraph isomorphism, preprocessing.", "title": "" }, { "docid": "5507f3199296478abbc6e106943a53ba", "text": "Hiding a secret is needed in many situations. One might need to hide a password, an encryption key, a secret recipe, and etc. Information can be secured with encryption, but the need to secure the secret key used for such encryption is important too. Imagine you encrypt your important files with one secret key and if such a key is lost then all the important files will be inaccessible. Thus, secure and efficient key management mechanisms are required. One of them is secret sharing scheme (SSS) that lets you split your secret into several parts and distribute them among selected parties. The secret can be recovered once these parties collaborate in some way. This paper will study these schemes and explain the need for them and their security. Across the years, various schemes have been presented. This paper will survey some of them varying from trivial schemes to threshold based ones. Explanations on these schemes constructions are presented. The paper will also look at some applications of SSS.", "title": "" }, { "docid": "7275ce89ea2f5ab8eb8b6651e2487dcb", "text": "A major challenge of semantic parsing is the vocabulary mismatch problem between natural language and target ontology. In this paper, we propose a sentence rewriting based semantic parsing method, which can effectively resolve the mismatch problem by rewriting a sentence into a new form which has the same structure with its target logical form. 
Specifically, we propose two sentence-rewriting methods for two common types of mismatch: a dictionary-based method for 1N mismatch and a template-based method for N-1 mismatch. We evaluate our sentence rewriting based semantic parser on the benchmark semantic parsing dataset – WEBQUESTIONS. Experimental results show that our system outperforms the base system with a 3.4% gain in F1, and generates logical forms more accurately and parses sentences more robustly.", "title": "" }, { "docid": "f0c9db6cab187463162c8bba71ea011a", "text": "Traditional Network-on-Chips (NoCs) employ simple arbitration strategies, such as round-robin or oldest-first, to decide which packets should be prioritized in the network. This is counter-intuitive since different packets can have very different effects on system performance due to, e.g., different level of memory-level parallelism (MLP) of applications. Certain packets may be performance-critical because they cause the processor to stall, whereas others may be delayed for a number of cycles with no effect on application-level performance as their latencies are hidden by other outstanding packets'latencies. In this paper, we define slack as a key measure that characterizes the relative importance of a packet. Specifically, the slack of a packet is the number of cycles the packet can be delayed in the network with no effect on execution time. This paper proposes new router prioritization policies that exploit the available slack of interfering packets in order to accelerate performance-critical packets and thus improve overall system performance. When two packets interfere with each other in a router, the packet with the lower slack value is prioritized. We describe mechanisms to estimate slack, prevent starvation, and combine slack-based prioritization with other recently proposed application-aware prioritization mechanisms.\n We evaluate slack-based prioritization policies on a 64-core CMP with an 8x8 mesh NoC using a suite of 35 diverse applications. For a representative set of case studies, our proposed policy increases average system throughput by 21.0% over the commonlyused round-robin policy. Averaged over 56 randomly-generated multiprogrammed workload mixes, the proposed policy improves system throughput by 10.3%, while also reducing application-level unfairness by 30.8%.", "title": "" }, { "docid": "1bbd0eca854737c94e62442ee4cedac8", "text": "Most convolutional neural networks (CNNs) lack midlevel layers that model semantic parts of objects. This limits CNN-based methods from reaching their full potential in detecting and utilizing small semantic parts in recognition. Introducing such mid-level layers can facilitate the extraction of part-specific features which can be utilized for better recognition performance. This is particularly important in the domain of fine-grained recognition. In this paper, we propose a new CNN architecture that integrates semantic part detection and abstraction (SPDACNN) for fine-grained classification. The proposed network has two sub-networks: one for detection and one for recognition. The detection sub-network has a novel top-down proposal method to generate small semantic part candidates for detection. The classification sub-network introduces novel part layers that extract features from parts detected by the detection sub-network, and combine them for recognition. 
As a result, the proposed architecture provides an end-to-end network that performs detection, localization of multiple semantic parts, and whole object recognition within one framework that shares the computation of convolutional filters. Our method outperforms state-of-theart methods with a large margin for small parts detection (e.g. our precision of 93.40% vs the best previous precision of 74.00% for detecting the head on CUB-2011). It also compares favorably to the existing state-of-the-art on finegrained classification, e.g. it achieves 85.14% accuracy on CUB-2011.", "title": "" }, { "docid": "f9f54cf8c057d2d9f9b559eb62a94e38", "text": "The proliferation of malware has presented a serious threat to the security of computer systems. Traditional signature-based anti-virus systems fail to detect polymorphic/metamorphic and new, previously unseen malicious executables. Data mining methods such as Naive Bayes and Decision Tree have been studied on small collections of executables. In this paper, resting on the analysis of Windows APIs called by PE files, we develop the Intelligent Malware Detection System (IMDS) using Objective-Oriented Association (OOA) mining based classification. IMDS is an integrated system consisting of three major modules: PE parser, OOA rule generator, and rule based classifier. An OOA_Fast_FP-Growth algorithm is adapted to efficiently generate OOA rules for classification. A comprehensive experimental study on a large collection of PE files obtained from the anti-virus laboratory of KingSoft Corporation is performed to compare various malware detection approaches. Promising experimental results demonstrate that the accuracy and efficiency of our IMDS system outperform popular anti-virus software such as Norton AntiVirus and McAfee VirusScan, as well as previous data mining based detection systems which employed Naive Bayes, Support Vector Machine (SVM) and Decision Tree techniques. Our system has already been incorporated into the scanning tool of KingSoft’s Anti-Virus software.", "title": "" }, { "docid": "b18f5df68581789312d48c65ba7afb9d", "text": "In this study, an efficient addressing scheme for radix-4 FFT processor is presented. The proposed method uses extra registers to buffer and reorder the data inputs of the butterfly unit. It avoids the modulo-r addition in the address generation; hence, the critical path is significantly shorter than the conventional radix-4 FFT implementations. A significant property of the proposed method is that the critical path of the address generator is independent from the FFT transform length N, making it extremely efficient for large FFT transforms. For performance evaluation, the new FFT architecture has been implemented by FPGA (Altera Stratix) hardware and also synthesized by CMOS 0.18µm technology. The results confirm the speed and area advantages for large FFTs. Although only radix-4 FFT address generation is presented in the paper, it can be used for higher radix FFT.", "title": "" }, { "docid": "b68e09f879e51aad3ed0ce8b696da957", "text": "The status of current model-driven engineering technologies has matured over the last years whereas the infrastructure supporting model management is still in its infancy. Infrastructural means include version control systems, which are successfully used for the management of textual artifacts like source code. Unfortunately, they are only limited suitable for models. Consequently, dedicated solutions emerge. 
These approaches are currently hard to compare, because no common quality measure has been established yet and no structured test cases are available. In this paper, we analyze the challenges coming along with merging different versions of one model and derive a first categorization of typical changes and the therefrom resulting conflicts. On this basis we create a set of test cases on which we apply state-of-the-art versioning systems and report our experiences.", "title": "" }, { "docid": "a4197ab8a70142ac331599c506996bc9", "text": "This paper presents the findings of two studies that replicate previous work by Fred Davis on the subject of perceived usefulness, ease of use, and usage of information technology. The two studies focus on evaluating the psychometric properties of the ease of use and usefulness scales, while examining the relationship between ease of use, usefulness, and system usage. Study 1 provides a strong assessment of the convergent validity of the two scales by examining heterogeneous user groups dealing with heterogeneous implementations of messaging technology. In addition, because one might expect users to share similar perspectives about voice and electronic mail, the study also represents a strong test of discriminant validity. In this study a total of 118 respondents from 10 different organizations were surveyed for their attitudes toward two messaging technologies: voice and electronic mail. Study 2 complements the approach taken in Study 1 by focusing on the ability to demonstrate discriminant validity. Three popular software applications (WordPerfect, Lotus 1-2-3, and Harvard Graphics) were examined based on the expectation that they would all be rated highly on both scales. In this study a total of 73 users rated the three packages in terms of ease of use and usefulness. The results of the studies demonstrate reliable and valid scales for measurement of perceived ease of use and usefulness. In addition, the paper tests the relationships between ease of use, usefulness, and usage using structural equation modelling. The results of this model are consistent with previous research for Study 1, suggesting that usefulness is an important determinant of system use. For Study 2 the results are somewhat mixed, but indicate the importance of both ease of use and usefulness. Differences in conditions of usage are explored to explain these findings.", "title": "" }, { "docid": "d638bf6a0ec3354dd6ba90df0536aa72", "text": "Selected elements of dynamical system (DS) theory approach to nonlinear time series analysis are introduced. Key role in this concept plays a method of time delay. The method enables us reconstruct phase space trajectory of DS without knowledge of its governing equations. Our variant is tested and compared with wellknown TISEAN package for Lorenz and Hénon systems. Introduction There are number of methods of nonlinear time series analysis (e.g. nonlinear prediction or noise reduction) that work in a phase space (PS) of dynamical systems. We assume that a given time series of some variable is generated by a dynamical system. A specific state of the system can be represented by a point in the phase space and time evolution of the system creates a trajectory in the phase space. From this point of view we consider our time series to be a projection of trajectory of DS to one (or more – when we have more simultaneously measured variables) coordinates of phase space. 
This view was enabled due to formulation of embedding theorem [1], [2] at the beginning of the 1980s. It says that it is possible to reconstruct the phase space from the time series. One of the most frequently used methods of phase space reconstruction is the method of time delay. The main task while using this method is to determine values of time delay τ and embedding dimension m. We tested individual steps of this method on simulated data generated by Lorenz and Hénon systems. We compared results computed by our own programs with outputs of program package TISEAN created by R. Hegger, H. Kantz, and T. Schreiber [3]. Method of time delay The most frequently used method of PS reconstruction is the method of time delay. If we have a time series of a scalar variable x(ti), i = 1, ..., N, we construct a vector in phase space in time ti as following: X(ti) = [x(ti), x(ti + τ), x(ti + 2τ), ..., x(ti + (m – 1)τ)], where i goes from 1 to N – (m – 1)τ, τ is time delay, m is a dimension of reconstructed space (embedding dimension) and M = N – (m – 1)τ is number of points (states) in the phase space. According to embedding theorem, when this is done in a proper way, dynamics reconstructed using this formula is equivalent to the dynamics on an attractor in the origin phase space in the sense that characteristic invariants of the system are conserved. The time delay method and related aspects are described in literature, e.g. [4]. We estimated the two parameters—time delay and embedding dimension—using algorithms below. Choosing a time delay To determine a suitable time delay we used average mutual information (AMI), a certain generalization of autocorrelation function. Average mutual information between sets of measurements A and B is defined [5]:", "title": "" }, { "docid": "57334078030a2b2d393a7c236d6a3a1c", "text": "Neural Architecture Search (NAS) aims at finding one “single” architecture that achieves the best accuracy for a given task such as image recognition. In this paper, we study the instance-level variation, and demonstrate that instance-awareness is an important yet currently missing component of NAS. Based on this observation, we propose InstaNAS for searching toward instance-level architectures; the controller is trained to search and form a “distribution of architectures” instead of a single final architecture. Then during the inference phase, the controller selects an architecture from the distribution, tailored for each unseen image to achieve both high accuracy and short latency. The experimental results show that InstaNAS reduces the inference latency without compromising classification accuracy. On average, InstaNAS achieves 48.9% latency reduction on CIFAR-10 and 40.2% latency reduction on CIFAR-100 with respect to MobileNetV2 architecture.", "title": "" }, { "docid": "51f47a5e873f7b24cd15aff4ceb8d35c", "text": "We introduce the Adaptive Skills, Adaptive Partitions (ASAP) framework that (1) learns skills (i.e., temporally extended actions or options) as well as (2) where to apply them. We believe that both (1) and (2) are necessary for a truly general skill learning framework, which is a key building block needed to scale up to lifelong learning agents. The ASAP framework can also solve related new tasks simply by adapting where it applies its existing learned skills. We prove that ASAP converges to a local optimum under natural conditions. 
Finally, our experimental results, which include a RoboCup domain, demonstrate the ability of ASAP to learn where to reuse skills as well as solve multiple tasks with considerably less experience than solving each task from scratch.", "title": "" }, { "docid": "148b7445ec2cd811d64fd81c61c20e02", "text": "Using sensors to measure parameters of interest in rotating environments and communicating the measurements in real-time over wireless links, requires a reliable power source. In this paper, we have investigated the possibility to generate electric power locally by evaluating six different energy-harvesting technologies. The applicability of the technology is evaluated by several parameters that are important to the functionality in an industrial environment. All technologies are individually presented and evaluated, a concluding table is also summarizing the technologies strengths and weaknesses. To support the technology evaluation on a more theoretical level, simulations has been performed to strengthen our claims. Among the evaluated and simulated technologies, we found that the variable reluctance-based harvesting technology is the strongest candidate for further technology development for the considered use-case.", "title": "" }, { "docid": "eab514f5951a9e2d3752002c7ba799d8", "text": "In industrial fabric productions, automated real time systems are needed to find out the minor defects. It will save the cost by not transporting defected products and also would help in making compmay image of quality fabrics by sending out only undefected products. A real time fabric defect detection system (FDDS), implementd on an embedded DSP platform is presented here. Textural features of fabric image are extracted based on gray level co-occurrence matrix (GLCM). A sliding window technique is used for defect detection where window moves over the whole image computing a textural energy from the GLCM of the fabric image. The energy values are compared to a reference and the deviations beyond a threshold are reported as defects and also visually represented by a window. The implementation is carried out on a TI TMS320DM642 platform and programmed using code composer studio software. The real time output of this implementation was shown on a monitor. KeywordsFabric Defects, Texture, Grey Level Co-occurrence Matrix, DSP Kit, Energy Computation, Sliding Window, FDDS", "title": "" }, { "docid": "5ae1191a27958704ab5f33749c6b30b5", "text": "Much of Bluetooth’s data remains confidential in practice due to the difficulty of eavesdropping it. We present mechanisms for doing so, therefore eliminating the data confidentiality properties of the protocol. As an additional security measure, devices often operate in “undiscoverable mode” in order to hide their identity and provide access control. We show how the full MAC address of such master devices can be obtained, therefore bypassing the access control of this feature. Our work results in the first open-source Bluetooth sniffer.", "title": "" }, { "docid": "657087aaadc0537e9fb19c422c27b485", "text": "Swarms of embedded devices provide new challenges for privacy and security. We propose Permissioned Blockchains as an effective way to secure and manage these systems of systems. A long view of blockchain technology yields several requirements absent in extant blockchain implementations. Our approach to Permissioned Blockchains meets the fundamental requirements for longevity, agility, and incremental adoption. 
Distributed Identity Management is an inherent feature of our Permissioned Blockchain and provides for resilient user and device identity and attribute management.", "title": "" } ]
scidocsrr
419e64d3afee302db4f7fabe52be4e3b
Offline signature verification using classifier combination of HOG and LBP features
[ { "docid": "7489989ecaa16bc699949608f9ffc8a1", "text": "A method for conducting off-line handwritten signature verification is described. It works at the global image level and measures the grey level variations in the image using statistical texture features. The co-occurrence matrix and local binary pattern are analysed and used as features. This method begins with a proposed background removal. A histogram is also processed to reduce the influence of different writing ink pens used by signers. Genuine samples and random forgeries have been used to train an SVM model and random and skilled forgeries have been used for testing it. Results are reasonable according to the state-of-the-art and approaches that use the same two databases: MCYT-75 and GPDS100 Corpuses. The combination of the proposed features and those proposed by other authors, based on geometric information, also promises improvements in performance. & 2010 Elsevier Ltd. All rights reserved.", "title": "" } ]
[ { "docid": "7e1f0cd43cdc9685474e19b7fd65791b", "text": "Understanding human actions is a key problem in computer vision. However, recognizing actions is only the first step of understanding what a person is doing. In this paper, we introduce the problem of predicting why a person has performed an action in images. This problem has many applications in human activity understanding, such as anticipating or explaining an action. To study this problem, we introduce a new dataset of people performing actions annotated with likely motivations. However, the information in an image alone may not be sufficient to automatically solve this task. Since humans can rely on their lifetime of experiences to infer motivation, we propose to give computer vision systems access to some of these experiences by using recently developed natural language models to mine knowledge stored in massive amounts of text. While we are still far away from fully understanding motivation, our results suggest that transferring knowledge from language into vision can help machines understand why people in images might be performing an action.", "title": "" }, { "docid": "dc2770a8318dd4aa1142efebe5547039", "text": "The purpose of this study was to describe how reaching onset affects the way infants explore objects and their own bodies. We followed typically developing infants longitudinally from 2 through 5 months of age. At each visit we coded the behaviors infants performed with their hand when an object was attached to it versus when the hand was bare. We found increases in the performance of most exploratory behaviors after the emergence of reaching. These increases occurred both with objects and with bare hands. However, when interacting with objects, infants performed the same behaviors they performed on their bare hands but they performed them more often and in unique combinations. The results support the tenets that: (1) the development of object exploration begins in the first months of life as infants learn to selectively perform exploratory behaviors on their bodies and objects, (2) the onset of reaching is accompanied by significant increases in exploration of both objects and one's own body, (3) infants adapt their self-exploratory behaviors by amplifying their performance and combining them in unique ways to interact with objects.", "title": "" }, { "docid": "f2707d7fcd5d8d9200d4cc8de8ff1042", "text": "This paper describes recent work on the “Crosswatch” project, which is a computer vision-based smartphone system developed for providing guidance to blind and visually impaired travelers at traffic intersections. A key function of Crosswatch is self-localization - the estimation of the user's location relative to the crosswalks in the current traffic intersection. Such information may be vital to users with low or no vision to ensure that they know which crosswalk they are about to enter, and are properly aligned and positioned relative to the crosswalk. However, while computer vision-based methods have been used for finding crosswalks and helping blind travelers align themselves to them, these methods assume that the entire crosswalk pattern can be imaged in a single frame of video, which poses a significant challenge for a user who lacks enough vision to know where to point the camera so as to properly frame the crosswalk. 
In this paper we describe work in progress that tackles the problem of crosswalk detection and self-localization, building on recent work describing techniques enabling blind and visually impaired users to acquire 360° image panoramas while turning in place on a sidewalk. The image panorama is converted to an aerial (overhead) view of the nearby intersection, centered on the location that the user is standing at, so as to facilitate matching with a template of the intersection obtained from Google Maps satellite imagery. The matching process allows crosswalk features to be detected and permits the estimation of the user's precise location relative to the crosswalk of interest. We demonstrate our approach on intersection imagery acquired by blind users, thereby establishing the feasibility of the approach.", "title": "" }, { "docid": "f9876540ce148d7b27bab53839f1bf19", "text": "Recent research endeavors have shown the potential of using feed-forward convolutional neural networks to accomplish fast style transfer for images. In this work, we take one step further to explore the possibility of exploiting a feed-forward network to perform style transfer for videos and simultaneously maintain temporal consistency among stylized video frames. Our feed-forward network is trained by enforcing the outputs of consecutive frames to be both well stylized and temporally consistent. More specifically, a hybrid loss is proposed to capitalize on the content information of input frames, the style information of a given style image, and the temporal information of consecutive frames. To calculate the temporal loss during the training stage, a novel two-frame synergic training mechanism is proposed. Compared with directly applying an existing image style transfer method to videos, our proposed method employs the trained network to yield temporally consistent stylized videos which are much more visually pleasant. In contrast to the prior video style transfer method which relies on time-consuming optimization on the fly, our method runs in real time while generating competitive visual results.", "title": "" }, { "docid": "eb6572344dbaf8e209388f888fba1c10", "text": "[Purpose] The present study was performed to evaluate the changes in the scapular alignment, pressure pain threshold and pain in subjects with scapular downward rotation after 4 weeks of wall slide exercise or sling slide exercise. [Subjects and Methods] Twenty-two subjects with scapular downward rotation participated in this study. The alignment of the scapula was measured using radiographic analysis (X-ray). Pain and pressure pain threshold were assessed using visual analogue scale and digital algometer. Patients were assessed before and after a 4 weeks of exercise. [Results] In the within-group comparison, the wall slide exercise group showed significant differences in the resting scapular alignment, pressure pain threshold, and pain after four weeks. The between-group comparison showed that there were significant differences between the wall slide group and the sling slide group after four weeks. [Conclusion] The results of this study found that the wall slide exercise may be effective at reducing pain and improving scapular alignment in subjects with scapular downward rotation.", "title": "" }, { "docid": "c39836282acc36e77c95e732f4f1c1bc", "text": "In this paper, a new dataset, HazeRD, is proposed for benchmarking dehazing algorithms under more realistic haze conditions. 
HazeRD contains fifteen real outdoor scenes, for each of which five different weather conditions are simulated. As opposed to prior datasets that made use of synthetically generated images or indoor images with unrealistic parameters for haze simulation, our outdoor dataset allows for more realistic simulation of haze with parameters that are physically realistic and justified by scattering theory. All images are of high resolution, typically six to eight megapixels. We test the performance of several state-of-the-art dehazing techniques on HazeRD. The results exhibit a significant difference among algorithms across the different datasets, reiterating the need for more realistic datasets such as ours and for more careful benchmarking of the methods.", "title": "" }, { "docid": "49680e94843e070a5ed0179798f66f33", "text": "Routing protocols for Wireless Sensor Networks (WSN) are designed to select parent nodes so that data packets can reach their destination in a timely and efficient manner. Typically neighboring nodes with strongest connectivity are more selected as parents. This Greedy Routing approach can lead to unbalanced routing loads in the network. Consequently, the network experiences the early death of overloaded nodes causing permanent network partition. Herein, we propose a framework for load balancing of routing in WSN. In-network path tagging is used to monitor network traffic load of nodes. Based on this, nodes are identified as being relatively overloaded, balanced or underloaded. A mitigation algorithm finds suitable new parents for switching from overloaded nodes. The routing engine of the child of the overloaded node is then instructed to switch parent. A key future of the proposed framework is that it is primarily implemented at the Sink and so requires few changes to existing routing protocols. The framework was implemented in TinyOS on TelosB motes and its performance was assessed in a testbed network and in TOSSIM simulation. The algorithm increased the lifetime of the network by 41 % as recorded in the testbed experiment. The Packet Delivery Ratio was also improved from 85.97 to 99.47 %. Finally a comparative study was performed using the proposed framework with various existing routing protocols.", "title": "" }, { "docid": "c9c44cc22c71d580f4b2a24cd91ac274", "text": "One of the first steps in the utterance interpretation pipeline of many task-oriented conversational AI systems is to identify user intents and the corresponding slots. Neural sequence labeling models have achieved very high accuracy on these tasks when trained on large amounts of training data. However, collecting this data is very time-consuming and therefore it is unfeasible to collect large amounts of data for many languages. For this reason, it is desirable to make use of existing data in a high-resource language to train models in low-resource languages. In this paper, we investigate the performance of three different methods for cross-lingual transfer learning, namely (1) translating the training data, (2) using cross-lingual pre-trained embeddings, and (3) a novel method of using a multilingual machine translation encoder as contextual word representations. We find that given several hundred training examples in the the target language, the latter two methods outperform translating the training data. Further, in very low-resource settings, we find that multilingual contextual word representations give better results than using crosslingual static embeddings. 
We release a dataset of around 57k annotated utterances in English (43k), Spanish (8.6k) and Thai (5k) for three task oriented domains at https://fb.me/multilingual_task_oriented_data.", "title": "" }, { "docid": "1969bf5a07349cc5a9b498e0437e41fe", "text": "In this work, we tackle the problem of instance segmentation, the task of simultaneously solving object detection and semantic segmentation. Towards this goal, we present a model, called MaskLab, which produces three outputs: box detection, semantic segmentation, and direction prediction. Building on top of the Faster-RCNN object detector, the predicted boxes provide accurate localization of object instances. Within each region of interest, MaskLab performs foreground/background segmentation by combining semantic and direction prediction. Semantic segmentation assists the model in distinguishing between objects of different semantic classes including background, while the direction prediction, estimating each pixel's direction towards its corresponding center, allows separating instances of the same semantic class. Moreover, we explore the effect of incorporating recent successful methods from both segmentation and detection (e.g., atrous convolution and hypercolumn). Our proposed model is evaluated on the COCO instance segmentation benchmark and shows comparable performance with other state-of-art models.", "title": "" }, { "docid": "49d6b3f314b61ace11afc5eea7b652e3", "text": "Euler diagrams visually represent containment, intersection and exclusion using closed curves. They first appeared several hundred years ago, however, there has been a resurgence in Euler diagram research in the twenty-first century. This was initially driven by their use in visual languages, where they can be used to represent logical expressions diagrammatically. This work lead to the requirement to automatically generate Euler diagrams from an abstract description. The ability to generate diagrams has accelerated their use in information visualization, both in the standard case where multiple grouping of data items inside curves is required and in the area-proportional case where the area of curve intersections is important. As a result, examining the usability of Euler diagrams has become an important aspect of this research. Usability has been investigated by empirical studies, but much research has concentrated on wellformedness, which concerns how curves and other features of the diagram interrelate. This work has revealed the drawability of Euler diagrams under various wellformedness properties and has developed embedding methods that meet these properties. Euler diagram research surveyed in this paper includes theoretical results, generation techniques, transformation methods and the development of automated reasoning systems for Euler diagrams. It also overviews application areas and the ways in which Euler diagrams have been extended.", "title": "" }, { "docid": "db1cdc2a4e3fe26146a1f9c8b0926f9e", "text": "Sememes are defined as the minimum semantic units of human languages. People have manually annotated lexical sememes for words and form linguistic knowledge bases. However, manual construction is time-consuming and labor-intensive, with significant annotation inconsistency and noise. In this paper, we for the first time explore to automatically predict lexical sememes based on semantic meanings of words encoded by word embeddings. Moreover, we apply matrix factorization to learn semantic relations between sememes and words. 
In experiments, we take a real-world sememe knowledge base HowNet for training and evaluation, and the results reveal the effectiveness of our method for lexical sememe prediction. Our method will be of great use for annotation verification of existing noisy sememe knowledge bases and annotation suggestion of new words and phrases.", "title": "" }, { "docid": "681641e2593cad85fb1633d1027a9a4f", "text": "Overview Aggressive driving is a major concern of the American public, ranking at or near the top of traffic safety issues in national surveys of motorists. However, the concept of aggressive driving is not well defined, and its overall impact on traffic safety has not been well quantified due to inadequacies and limitation of available data. This paper reviews published scientific literature on aggressive driving; discusses various definitions of aggressive driving; cites several specific behaviors that are typically associated with aggressive driving; and summarizes past research on the individuals or groups most likely to behave aggressively. Since adequate data to precisely quantify the percentage of fatal crashes that involve aggressive driving do not exist, in this review, we have quantified the number of fatal crashes in which one or more driver actions typically associated with aggressive driving were reported. We found these actions were reported in 56 percent of fatal crashes from 2003 through 2007, with excessive speed being the number one factor. Ideally, an estimate of the prevalence of aggressive driving would include only instances in which such actions were performed intentionally; however, available data on motor vehicle crashes do not contain such information, thus it is important to recognize that this 56 percent may to some degree overestimate the contribution of aggressive driving to fatal crashes. On the other hand, it is likely that aggressive driving contributes to at least some crashes in which it is not reported due to lack of evidence. Despite the clear limitations associated with our attempt to estimate the contribution of potentially-aggressive driver actions to fatal crashes, it is clear that aggressive driving poses a serious traffic safety threat. In addition, our review further indicated that the \" Do as I say, not as I do \" culture, previously reported in the Foundation's Traffic Safety Culture Index, very much applies to aggressive driving.", "title": "" }, { "docid": "237437eae6a6154fb3b32c4c6c1fed07", "text": "Ontology is playing an increasingly important role in knowledge management and the Semantic Web. This study presents a novel episode-based ontology construction mechanism to extract domain ontology from unstructured text documents. Additionally, fuzzy numbers for conceptual similarity computing are presented for concept clustering and taxonomic relation definitions. Moreover, concept attributes and operations can be extracted from episodes to construct a domain ontology, while non-taxonomic relations can be generated from episodes. The fuzzy inference mechanism is also applied to obtain new instances for ontology learning. Experimental results show that the proposed approach can effectively construct a Chinese domain ontology from unstructured text documents. 2006 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "462afb864b255f94deefb661174a598b", "text": "Due to the heterogeneous and resource-constrained characters of Internet of Things (IoT), how to guarantee ubiquitous network connectivity is challenging. 
Although LTE cellular technology is the most promising solution to provide network connectivity in IoTs, information diffusion by cellular network not only occupies its saturating bandwidth, but also costs additional fees. Recently, NarrowBand-IoT (NB-IoT), introduced by 3GPP, is designed for low-power massive devices, which intends to refarm wireless spectrum and increase network coverage. For the sake of providing high link connectivity and capacity, we stimulate effective cooperations among user equipments (UEs), and propose a social-aware group formation framework to allocate resource blocks (RBs) effectively following an in-band NB-IoT solution. Specifically, we first introduce a social-aware multihop device-to-device (D2D) communication scheme to upload information toward the eNodeB within an LTE, so that a logical cooperative D2D topology can be established. Then, we formulate the D2D group formation as a scheduling optimization problem for RB allocation, which selects the feasible partition for the UEs by jointly considering relay method selection and spectrum reuse for NB-IoTs. Since the formulated optimization problem has a high computational complexity, we design a novel heuristic with a comprehensive consideration of power control and relay selection. Performance evaluations based on synthetic and real trace simulations manifest that the presented method can significantly increase link connectivity, link capacity, network throughput, and energy efficiency comparing with the existing solutions.", "title": "" }, { "docid": "3440de9ea0f76ba39949edcb5e2a9b54", "text": "This document is not intended to create, does not create, and may not be relied upon to create any rights, substantive or procedural, enforceable by law by any party in any matter civil or criminal. Findings and conclusions of the research reported here are those of the authors and do not necessarily reflect the official position or policies of the U.S. Department of Justice. The products, manufacturers, and organizations discussed in this document are presented for informational purposes only and do not constitute product approval or endorsement by the Much of crime mapping is devoted to detecting high-crime-density areas known as hot spots. Hot spot analysis helps police identify high-crime areas, types of crime being committed, and the best way to respond. This report discusses hot spot analysis techniques and software and identifies when to use each one. The visual display of a crime pattern on a map should be consistent with the type of hot spot and possible police action. For example, when hot spots are at specific addresses, a dot map is more appropriate than an area map, which would be too imprecise. In this report, chapters progress in sophis­ tication. Chapter 1 is for novices to crime mapping. Chapter 2 is more advanced, and chapter 3 is for highly experienced analysts. The report can be used as a com­ panion to another crime mapping report ■ Identifying hot spots requires multiple techniques; no single method is suffi­ cient to analyze all types of crime. ■ Current mapping technologies have sig­ nificantly improved the ability of crime analysts and researchers to understand crime patterns and victimization. 
■ Crime hot spot maps can most effective­ ly guide police action when production of the maps is guided by crime theories (place, victim, street, or neighborhood).", "title": "" }, { "docid": "e4e97569f53ddde763f4f28559c96ba6", "text": "With a goal of understanding what drives generalization in deep networks, we consider several recently suggested explanations, including norm-based control, sharpness and robustness. We study how these measures can ensure generalization, highlighting the importance of scale normalization, and making a connection between sharpness and PAC-Bayes theory. We then investigate how well the measures explain different observed phenomena.", "title": "" }, { "docid": "5f4e761af11ace5a4d6819431893a605", "text": "The high power density converter is required due to the strict demands of volume and weight in more electric aircraft, which makes SiC extremely attractive for this application. In this work, a prototype of 50 kW SiC high power density converter with the topology of two-level three-phase voltage source inverter is demonstrated. This converter is driven at high switching speed based on the optimization in switching characterization. It operates at a switching frequency up to 100 kHz and a low dead time of 250 ns. And the converter efficiency is measured to be 99% at 40 kHz and 97.8% at 100 kHz.", "title": "" }, { "docid": "6cf4315ecce8a06d9354ca2f2684113c", "text": "BACKGROUND\nNutritional supplementation may be used to treat muscle loss with aging (sarcopenia). However, if physical activity does not increase, the elderly tend to compensate for the increased energy delivered by the supplements with reduced food intake, which results in a calorie substitution rather than supplementation. Thus, an effective supplement should stimulate muscle anabolism more efficiently than food or common protein supplements. We have shown that balanced amino acids stimulate muscle protein anabolism in the elderly, but it is unknown whether all amino acids are necessary to achieve this effect.\n\n\nOBJECTIVE\nWe assessed whether nonessential amino acids are required in a nutritional supplement to stimulate muscle protein anabolism in the elderly.\n\n\nDESIGN\nWe compared the response of muscle protein metabolism to either 18 g essential amino acids (EAA group: n = 6, age 69 +/- 2 y; +/- SD) or 40 g balanced amino acids (18 g essential amino acids + 22 g nonessential amino acids, BAA group; n = 8, age 71 +/- 2 y) given orally in small boluses every 10 min for 3 h to healthy elderly volunteers. Muscle protein metabolism was measured in the basal state and during amino acid administration via L-[ring-(2)H(5)]phenylalanine infusion, femoral arterial and venous catheterization, and muscle biopsies.\n\n\nRESULTS\nPhenylalanine net balance (in nmol x min(-1). 100 mL leg volume(-1)) increased from the basal state (P < 0.01), with no differences between groups (BAA: from -16 +/- 5 to 16 +/- 4; EAA: from -18 +/- 5 to 14 +/- 13) because of an increase (P < 0.01) in muscle protein synthesis and no change in breakdown.\n\n\nCONCLUSION\nEssential amino acids are primarily responsible for the amino acid-induced stimulation of muscle protein anabolism in the elderly.", "title": "" }, { "docid": "09168164e47fd781e4abeca45fb76c35", "text": "AUTOSAR is a standard for the development of software for embedded devices, primarily created for the automotive domain. It specifies a software architecture with more than 80 software modules that provide services to one or more software components. 
With the trend towards integrating safety-relevant systems into embedded devices, conformance with standards such as ISO 26262 [ISO11] or ISO/IEC 61508 [IEC10] becomes increasingly important. This article presents an approach to providing freedom from interference between software components by using the MPU available on many modern microcontrollers. Each software component gets its own dedicated memory area, a so-called memory partition. This concept is well known in other industries like the aerospace industry, where the IMA architecture is now well established. The memory partitioning mechanism is implemented by a microkernel, which integrates seamlessly into the architecture specified by AUTOSAR. The development has been performed as SEooC as described in ISO 26262, which is a new development approach. We describe the procedure for developing an SEooC. AUTOSAR: AUTomotive Open System ARchitecture, see [ASR12]. MPU: Memory Protection Unit. 3 IMA: Integrated Modular Avionics, see [RTCA11]. 4 SEooC: Safety Element out of Context, see [ISO11].", "title": "" } ]
scidocsrr
c7e75410ac860e6c15d26fac2db620a2
Vertical Versus Shared Leadership as Predictors of the Effectiveness of Change Management Teams: An Examination of Aversive, Directive, Transactional, Transformational, and Empowering Leader Behaviors
[ { "docid": "54850f62bf84e01716bc009f68aac3d7", "text": "© 1966 by the Massachusetts Institute of Technology. From Leadership and Motivation, Essays of Douglas McGregor, edited by W. G. Bennis and E. H. Schein (Cambridge, MA: MIT Press, 1966): 3–20. Reprinted with permission. I t has become trite to say that the most significant developments of the next quarter century will take place not in the physical but in the social sciences, that industry—the economic organ of society—has the fundamental know-how to utilize physical science and technology for the material benefit of mankind, and that we must now learn how to utilize the social sciences to make our human organizations truly effective. Many people agree in principle with such statements; but so far they represent a pious hope—and little else. Consider with me, if you will, something of what may be involved when we attempt to transform the hope into reality.", "title": "" }, { "docid": "a6872c1cab2577547c9a7643a6acd03e", "text": "Current theories and models of leadership seek to explain the influence of the hierarchical superior upon the satisfaction and performance of subordinates. While disagreeing with one another in important respects, these theories and models share an implicit assumption that while the style of leadership likely to be effective may vary according to the situation, some leadership style will be effective regardless of the situation. It has been found, however, that certain individual, task, and organizational variables act as \"substitutes for leadership,\" negating the hierarchical superior's ability to exert either positive or negative influence over subordinate attitudes and effectiveness. This paper identifies a number of such substitutes for leadership, presents scales of questionnaire items for their measurement, and reports some preliminary tests.", "title": "" } ]
[ { "docid": "6bbcbe9f4f4ede20d2b86f6da9167110", "text": "Avoiding vehicle-to-pedestrian crashes is a critical requirement for nowadays advanced driver assistant systems (ADAS) and future self-driving vehicles. Accordingly, detecting pedestrians from raw sensor data has a history of more than 15 years of research, with vision playing a central role. During the last years, deep learning has boosted the accuracy of image-based pedestrian detectors. However, detection is just the first step towards answering the core question, namely is the vehicle going to crash with a pedestrian provided preventive actions are not taken? Therefore, knowing as soon as possible if a detected pedestrian has the intention of crossing the road ahead of the vehicle is essential for performing safe and comfortable maneuvers that prevent a crash. However, compared to pedestrian detection, there is relatively little literature on detecting pedestrian intentions. This paper aims to contribute along this line by presenting a new vision-based approach which analyzes the pose of a pedestrian along several frames to determine if he or she is going to enter the road or not. We present experiments showing 750 ms of anticipation for pedestrians crossing the road, which at a typical urban driving speed of 50 km/h can provide 15 additional meters (compared to a pure pedestrian detector) for vehicle automatic reactions or to warn the driver. Moreover, in contrast with state-of-the-art methods, our approach is monocular, neither requiring stereo nor optical flow information.", "title": "" }, { "docid": "496e0a7bfd230f00bafefd6c1c8f29da", "text": "Modern society depends on information technology in nearly every facet of human activity including, finance, transportation, education, government, and defense. Organizations are exposed to various and increasing kinds of risks, including information technology risks. Several standards, best practices, and frameworks have been created to help organizations manage these risks. The purpose of this research work is to highlight the challenges facing enterprises in their efforts to properly manage information security risks when adopting international standards and frameworks. To assist in selecting the best framework to use in risk management, the article presents an overview of the most popular and widely used standards and identifies selection criteria. It suggests an approach to proper implementation as well. A set of recommendations is put forward with further research opportunities on the subject. KeywordsInformation security; risk management; security frameworks; security standards; security management.", "title": "" }, { "docid": "2b3c507c110452aa54c046f9e7f9200d", "text": "Word embeddings are crucial to many natural language processing tasks. The quality of embeddings relies on large nonnoisy corpora. Arabic dialects lack large corpora and are noisy, being linguistically disparate with no standardized spelling. We make three contributions to address this noise. First, we describe simple but effective adaptations to word embedding tools to maximize the informative content leveraged in each training sentence. Second, we analyze methods for representing disparate dialects in one embedding space, either by mapping individual dialects into a shared space or learning a joint model of all dialects. Finally, we evaluate via dictionary induction, showing that two metrics not typically reported in the task enable us to analyze our contributions’ effects on low and high frequency words. 
In addition to boosting performance between 2-53%, we specifically improve on noisy, low frequency forms without compromising accuracy on high frequency forms.", "title": "" }, { "docid": "e706c5071b87561f08ee8f9610e41e2e", "text": "Machine learning models are vulnerable to simple model stealing attacks if the adversary can obtain output labels for chosen inputs. To protect against these attacks, it has been proposed to limit the information provided to the adversary by omitting probability scores, significantly impacting the utility of the provided service. In this work, we illustrate how a service provider can still provide useful, albeit misleading, class probability information, while significantly limiting the success of the attack. Our defense forces the adversary to discard the class probabilities, requiring significantly more queries before they can train a model with comparable performance. We evaluate several attack strategies, model architectures, and hyperparameters under varying adversarial models, and evaluate the efficacy of our defense against the strongest adversary. Finally, we quantify the amount of noise injected into the class probabilities to mesure the loss in utility, e.g., adding 1.26 nats per query on CIFAR-10 and 3.27 on MNIST. Our evaluation shows our defense can degrade the accuracy of the stolen model at least 20%, or require up to 64 times more queries while keeping the accuracy of the protected model almost intact.", "title": "" }, { "docid": "1364758783c75a39112d01db7e7cfc63", "text": "Steganography plays an important role in secret communication in digital worlds and open environments like Internet. Undetectability and imperceptibility of confidential data are major challenges of steganography methods. This article presents a secure steganography method in frequency domain based on partitioning approach. The cover image is partitioned into 8×8 blocks and then integer wavelet transform through lifting scheme is performed for each block. The symmetric RC4 encryption method is applied to secret message to obtain high security and authentication. Tree Scan Order is performed in frequency domain to find proper location for embedding secret message. Secret message is embedded in cover image with minimal degrading of the quality. Experimental results demonstrate that the proposed method has achieved superior performance in terms of high imperceptibility of stego-image and it is secure against statistical attack in comparison with existing methods.", "title": "" }, { "docid": "a25338ae0035e8a90d6523ee5ef667f7", "text": "Activity recognition in video is dominated by low- and mid-level features, and while demonstrably capable, by nature, these features carry little semantic meaning. Inspired by the recent object bank approach to image representation, we present Action Bank, a new high-level representation of video. Action bank is comprised of many individual action detectors sampled broadly in semantic space as well as viewpoint space. Our representation is constructed to be semantically rich and even when paired with simple linear SVM classifiers is capable of highly discriminative performance. We have tested action bank on four major activity recognition benchmarks. In all cases, our performance is better than the state of the art, namely 98.2% on KTH (better by 3.3%), 95.0% on UCF Sports (better by 3.7%), 57.9% on UCF50 (baseline is 47.9%), and 26.9% on HMDB51 (baseline is 23.2%). 
Furthermore, when we analyze the classifiers, we find strong transfer of semantics from the constituent action detectors to the bank classifier.", "title": "" }, { "docid": "b525081979bebe54e2262086170cbb31", "text": " Activity recognition strategies assume large amounts of labeled training data which require tedious human labor to label.  They also use hand engineered features, which are not best for all applications, hence required to be done separately for each application.  Several recognition strategies have benefited from deep learning for unsupervised feature selection, which has two important property – fine tuning and incremental update. Question! Can deep learning be leveraged upon for continuous learning of activity models from streaming videos? Contributions", "title": "" }, { "docid": "cc204a8e12f47259059488bb421f8d32", "text": "Phishing is a web-based attack that uses social engineering techniques to exploit internet users and acquire sensitive data. Most phishing attacks work by creating a fake version of the real site's web interface to gain the user's trust.. We applied different methods for detecting phishing using known as well as new features. In this we used the heuristic-based approach to handle phishing attacks, in this approached several website features are collected and used to identify the type of the website. The heuristic-based approach can recognize newly created fake websites in real-time. One intelligent approach based on genetic algorithm seems a potential solution that may effectively detect phishing websites with high accuracy and prevent it by blocking them.", "title": "" }, { "docid": "a55881d3cd1091c0b7f614142022718c", "text": "Successful teams are characterized by high levels of trust between team members, allowing the team to learn from mistakes, take risks, and entertain diverse ideas. We investigated a robot's potential to shape trust within a team through the robot's expressions of vulnerability. We conducted a between-subjects experiment (N = 35 teams, 105 participants) comparing the behavior of three human teammates collaborating with either a social robot making vulnerable statements or with a social robot making neutral statements. We found that, in a group with a robot making vulnerable statements, participants responded more to the robot's comments and directed more of their gaze to the robot, displaying a higher level of engagement with the robot. Additionally, we discovered that during times of tension, human teammates in a group with a robot making vulnerable statements were more likely to explain their failure to the group, console team members who had made mistakes, and laugh together, all actions that reduce the amount of tension experienced by the team. These results suggest that a robot's vulnerable behavior can have \"ripple effects\" on their human team members' expressions of trust-related behavior.", "title": "" }, { "docid": "e8e8e6d288491e715177a03601500073", "text": "Protein–protein interactions constitute the regulatory network that coordinates diverse cellular functions. Co-immunoprecipitation (co-IP) is a widely used and effective technique to study protein–protein interactions in living cells. However, the time and cost for the preparation of a highly specific antibody is the major disadvantage associated with this technique. In the present study, a co-IP system was developed to detect protein–protein interactions based on an improved protoplast transient expression system by using commercially available antibodies. 
This co-IP system eliminates the need for specific antibody preparation and transgenic plant production. Leaf sheaths of rice green seedlings were used for the protoplast transient expression system which demonstrated high transformation and co-transformation efficiencies of plasmids. The transient expression system developed by this study is suitable for subcellular localization and protein detection. This work provides a rapid, reliable, and cost-effective system to study transient gene expression, protein subcellular localization, and characterization of protein–protein interactions in vivo.", "title": "" }, { "docid": "de0c3f4d5cbad1ce78e324666937c232", "text": "We propose an unsupervised method for learning multi-stage hierarchies of sparse convolutional features. While sparse coding has become an in creasingly popular method for learning visual features, it is most often traine d at the patch level. Applying the resulting filters convolutionally results in h ig ly redundant codes because overlapping patches are encoded in isolation. By tr aining convolutionally over large image windows, our method reduces the redudancy b etween feature vectors at neighboring locations and improves the efficienc y of the overall representation. In addition to a linear decoder that reconstruct s the image from sparse features, our method trains an efficient feed-forward encod er that predicts quasisparse features from the input. While patch-based training r arely produces anything but oriented edge detectors, we show that convolution al training produces highly diverse filters, including center-surround filters, corner detectors, cross detectors, and oriented grating detectors. We show that using these filters in multistage convolutional network architecture improves perfor mance on a number of visual recognition and detection tasks.", "title": "" }, { "docid": "f174469e907b60cd481da6b42bafa5f9", "text": "A static program checker that performs modular checking can check one program module for errors without needing to analyze the entire program. Modular checking requires that each module be accompanied by annotations that specify the module. To help reduce the cost of writing specifications, this paper presents Houdini, an annotation assistant for the modular checker ESC/Java. To infer suitable ESC/Java annotations for a given program, Houdini generates a large number of candidate annotations and uses ESC/Java to verify or refute each of these annotations. The paper describes the design, implementation, and preliminary evaluation of Houdini.", "title": "" }, { "docid": "a086686928333e06592cd901e8a346bd", "text": "BACKGROUND\nClosed-loop artificial pancreas device (APD) systems are externally worn medical devices that are being developed to enable people with type 1 diabetes to regulate their blood glucose levels in a more automated way. The innovative concept of this emerging technology is that hands-free, continuous, glycemic control can be achieved by using digital communication technology and advanced computer algorithms.\n\n\nMETHODS\nA horizon scanning review of this field was conducted using online sources of intelligence to identify systems in development. The systems were classified into subtypes according to their level of automation, the hormonal and glycemic control approaches used, and their research setting.\n\n\nRESULTS\nEighteen closed-loop APD systems were identified. All were being tested in clinical trials prior to potential commercialization. 
Six were being studied in the home setting, 5 in outpatient settings, and 7 in inpatient settings. It is estimated that 2 systems may become commercially available in the EU by the end of 2016, 1 during 2017, and 2 more in 2018.\n\n\nCONCLUSIONS\nThere are around 18 closed-loop APD systems progressing through early stages of clinical development. Only a few of these are currently in phase 3 trials and in settings that replicate real life.", "title": "" }, { "docid": "6c68bccf376da1f963aaa8ec5e08b646", "text": "The composition of the gut microbiota is in constant flow under the influence of factors such as the diet, ingested drugs, the intestinal mucosa, the immune system, and the microbiota itself. Natural variations in the gut microbiota can deteriorate to a state of dysbiosis when stress conditions rapidly decrease microbial diversity and promote the expansion of specific bacterial taxa. The mechanisms underlying intestinal dysbiosis often remain unclear given that combinations of natural variations and stress factors mediate cascades of destabilizing events. Oxidative stress, bacteriophages induction and the secretion of bacterial toxins can trigger rapid shifts among intestinal microbial groups thereby yielding dysbiosis. A multitude of diseases including inflammatory bowel diseases but also metabolic disorders such as obesity and diabetes type II are associated with intestinal dysbiosis. The characterization of the changes leading to intestinal dysbiosis and the identification of the microbial taxa contributing to pathological effects are essential prerequisites to better understand the impact of the microbiota on health and disease.", "title": "" }, { "docid": "742498bfa62278bd5c070145ad3750b0", "text": "In this paper we address the demand for flexibility and economic efficiency in industrial autonomous guided vehicle (AGV) systems by the use of cloud computing. We propose a cloud-based architecture that moves parts of mapping, localization and path planning tasks to a cloud server. We use a cooperative longterm Simultaneous Localization and Mapping (SLAM) approach which merges environment perception of stationary sensors and mobile robots into a central Holistic Environment Model (HEM). Further, we deploy a hierarchical cooperative path planning approach using Conflict-Based Search (CBS) to find optimal sets of paths which are then provided to the mobile robots. For communication we utilize the Manufacturing Service Bus (MSB) which is a component of the manufacturing cloud platform Virtual Fort Knox (VFK). We demonstrate the feasibility of this approach in a real-life industrial scenario. Additionally, we evaluate the system's communication and the planner for various numbers of agents.", "title": "" }, { "docid": "3a1b9a47a7fe51ab19f53ae6aaa18d6d", "text": "The overall context proposed in this paper is part of our long-standing goal to contribute to a group of community that suffers from Autism Spectrum Disorder (ASD); a lifelong developmental disability. The objective of this paper is to present the development of our pilot experiment protocol where children with ASD will be exposed to the humanoid robot NAO. This fully programmable humanoid offers an ideal research platform for human-robot interaction (HRI). This study serves as the platform for fundamental investigation to observe the initial response and behavior of the children in the said environment. The system utilizes external cameras, besides the robot's own visual system. 
Anticipated results are the real initial response and reaction of ASD children during the HRI with the humanoid robot. This shall leads to adaptation of new procedures in ASD therapy based on HRI, especially for a non-technical-expert person to be involved in the robotics intervention during the therapy session.", "title": "" }, { "docid": "823c00a4cbbfb3ca5fc302dfeff0fbb3", "text": "Given that the synthesis of cumulated knowledge is an essential condition for any field to grow and develop, we believe that the enhanced role of IS reviews requires that this expository form be given careful scrutiny. Over the past decade, several senior scholars have made calls for more review papers in our field. While the number of IS review papers has substantially increased in recent years, no prior research has attempted to develop a general framework to conduct and evaluate the rigor of standalone reviews. In this paper, we fill this gap. More precisely, we present a set of guidelines for guiding and evaluating IS literature reviews and specify to which review types they apply. To do so, we first distinguish between four broad categories of review papers and then propose a set of guidelines that are grouped according to the generic phases and steps of the review process. We hope our work will serve as a valuable source for those conducting, evaluating, and/or interpreting reviews in our field.", "title": "" }, { "docid": "56266e0f3be7a58cfed1c9bdd54798e5", "text": "In this paper, the design methods for four-way power combiners based on eight-port and nine-port mode networks are proposed. The eight-port mode network is fundamentally a two-stage binary four-way power combiner composed of three magic-Ts: two compact H-plane magic-Ts and one magic-T with coplanar arms. The two compact H-plane magic-Ts and the magic-T with coplanar arms function as the first and second stages, respectively. Thus, four-way coaxial-to-coaxial power combiners can be designed. A one-stage four-way power combiner based on a nine-port mode network is also proposed. Two matched coaxial ports and two matched rectangular ports are used to provide high isolation along the E-plane and the H-plane, respectively. The simulations agree well with the measured results. The designed four-way power combiners are superior in terms of their compact cross-sectional areas, a high degree of isolation, low insertion loss, low output-amplitude imbalance, and low phase imbalance, which make them well suited for solid-state power combination.", "title": "" }, { "docid": "a4c80a334a6f9cd70fe5c7000740c18f", "text": "CMOS SRAM cell is very less power consuming and have less read and write time. Higher cell ratios can decrease the read and write time and improve stability. PMOS transistor with less width reduces the power consumption. This paper implements 6T SRAM cell with reduced read and write time, area and power consumption. It has been noticed often that increased memory capacity increases the bit-line parasitic capacitance which in turn slows down voltage sensing and make bit-line voltage swings energy expensive. This result in slower and more energy hungry memories.. In this paper Two SRAM cell is being designed for 4 Kb of memory core with supply voltage 1.8 V. 
A technique of global bit line is used for reducing the power consumption and increasing the memory capacity.", "title": "" }, { "docid": "08d8e372c5ae4eef9848552ee87fbd64", "text": "What chiefly distinguishes cerebral cortex from other parts of the central nervous system is the great diversity of its cell types and inter-connexions. It would be astonishing if such a structure did not profoundly modify the response patterns of fibres coming into it. In the cat's visual cortex, the receptive field arrangements of single cells suggest that there is indeed a degree of complexity far exceeding anything yet seen at lower levels in the visual system. In a previous paper we described receptive fields of single cortical cells, observing responses to spots of light shone on one or both retinas (Hubel & Wiesel, 1959). In the present work this method is used to examine receptive fields of a more complex type (Part I) and to make additional observations on binocular interaction (Part II). This approach is necessary in order to understand the behaviour of individual cells, but it fails to deal with the problem of the relationship of one cell to its neighbours. In the past, the technique of recording evoked slow waves has been used with great success in studies of functional anatomy. It was employed by Talbot & Marshall (1941) and by Thompson, Woolsey & Talbot (1950) for mapping out the visual cortex in the rabbit, cat, and monkey. Daniel & Whitteiidge (1959) have recently extended this work in the primate. Most of our present knowledge of retinotopic projections, binocular overlap, and the second visual area is based on these investigations. Yet the method of evoked potentials is valuable mainly for detecting behaviour common to large populations of neighbouring cells; it cannot differentiate functionally between areas of cortex smaller than about 1 mm2. To overcome this difficulty a method has in recent years been developed for studying cells separately or in small groups during long micro-electrode penetrations through nervous tissue. Responses are correlated with cell location by reconstructing the electrode tracks from histological material. These techniques have been applied to CAT VISUAL CORTEX 107 the somatic sensory cortex of the cat and monkey in a remarkable series of studies by Mountcastle (1957) and Powell & Mountcastle (1959). Their results show that the approach is a powerful one, capable of revealing systems of organization not hinted at by the known morphology. In Part III of the present paper we use this method in studying the functional architecture of the visual cortex. It helped us attempt to explain on anatomical …", "title": "" } ]
scidocsrr
45ccd6eb242f7eb66191c737e6f6b719
Fundamental movement skills in children and adolescents: review of associated health benefits.
[ { "docid": "61eb3c9f401ec9d6e89264297395f9d3", "text": "PURPOSE\nCross-sectional evidence has demonstrated the importance of motor skill proficiency to physical activity participation, but it is unknown whether skill proficiency predicts subsequent physical activity.\n\n\nMETHODS\nIn 2000, children's proficiency in object control (kick, catch, throw) and locomotor (hop, side gallop, vertical jump) skills were assessed in a school intervention. In 2006/07, the physical activity of former participants was assessed using the Australian Physical Activity Recall Questionnaire. Linear regressions examined relationships between the reported time adolescents spent participating in moderate-to-vigorous or organized physical activity and their childhood skill proficiency, controlling for gender and school grade. A logistic regression examined the probability of participating in vigorous activity.\n\n\nRESULTS\nOf 481 original participants located, 297 (62%) consented and 276 (57%) were surveyed. All were in secondary school with females comprising 52% (144). Adolescent time in moderate-to-vigorous and organized activity was positively associated with childhood object control proficiency. Respective models accounted for 12.7% (p = .001), and 18.2% of the variation (p = .003). Object control proficient children became adolescents with a 10% to 20% higher chance of vigorous activity participation.\n\n\nCONCLUSIONS\nObject control proficient children were more likely to become active adolescents. Motor skill development should be a key strategy in childhood interventions aiming to promote long-term physical activity.", "title": "" } ]
[ { "docid": "ae46639adab554a921b5213b385a4472", "text": "We develop a framework for rendering photographic images by directly optimizing their perceptual similarity to the original visual scene. Specifically, over the set of all images that can be rendered on a given display, we minimize the normalized Laplacian pyramid distance (NLPD), a measure of perceptual dissimilarity that is derived from a simple model of the early stages of the human visual system. When rendering images acquired with a higher dynamic range than that of the display, we find that the optimization boosts the contrast of low-contrast features without introducing significant artifacts, yielding results of comparable visual quality to current state-of-the-art methods, but without manual intervention or parameter adjustment. We also demonstrate the effectiveness of the framework for a variety of other display constraints, including limitations on minimum luminance (black point), mean luminance (as a proxy for energy consumption), and quantized luminance levels (halftoning). We show that the method may generally be used to enhance details and contrast, and, in particular, can be used on images degraded by optical scattering (e.g., fog). Finally, we demonstrate the necessity of each of the NLPD components-an initial power function, a multiscale transform, and local contrast gain control-in achieving these results and we show that NLPD is competitive with the current state-of-the-art image quality metrics.", "title": "" }, { "docid": "22d78ead5b703225b34f3c29a5ff07ad", "text": "Children's experiences in early childhood have significant lasting effects in their overall development and in the United States today the majority of young children spend considerable amounts of time in early childhood education settings. At the national level, there is an expressed concern about the low levels of student interest and success in science, technology, engineering, and mathematics (STEM). Bringing these two conversations together our research focuses on how young children of preschool age exhibit behaviors that we consider relevant in engineering. There is much to be explored in STEM education at such an early age, and in order to proceed we created an experimental observation protocol in which we identified various pre-engineering behaviors based on pilot observations, related literature and expert knowledge. This protocol is intended for use by preschool teachers and other professionals interested in studying engineering in the preschool classroom.", "title": "" }, { "docid": "6c270eaa2b9b9a0e140e0d8879f5d383", "text": "More than 75% of hospital-acquired or nosocomial urinary tract infections are initiated by urinary catheters, which are used during the treatment of 15-25% of hospitalized patients. Among other purposes, urinary catheters are primarily used for draining urine after surgeries and for urinary incontinence. During catheter-associated urinary tract infections, bacteria travel up to the bladder and cause infection. A major cause of catheter-associated urinary tract infection is attributed to the use of non-ideal materials in the fabrication of urinary catheters. Such materials allow for the colonization of microorganisms, leading to bacteriuria and infection, depending on the severity of symptoms. The ideal urinary catheter is made out of materials that are biocompatible, antimicrobial, and antifouling. 
Although an abundance of research has been conducted over the last forty-five years on the subject, the ideal biomaterial, especially for long-term catheterization of more than a month, has yet to be developed. The aim of this review is to highlight the recent advances (over the past 10 years) in developing antimicrobial materials for urinary catheters and to outline future requirements and prospects that guide catheter materials selection and design.\n\n\nSTATEMENT OF SIGNIFICANCE\nThis review article intends to provide an expansive insight into the various antimicrobial agents currently being researched for urinary catheter coatings. According to CDC, approximately 75% of urinary tract infections are caused by urinary catheters and 15-25% of hospitalized patients undergo catheterization. In addition to these alarming statistics, the increasing cost and health related complications associated with catheter associated UTIs make the research for antimicrobial urinary catheter coatings even more pertinent. This review provides a comprehensive summary of the history, the latest progress in development of the coatings and a brief conjecture on what the future entails for each of the antimicrobial agents discussed.", "title": "" }, { "docid": "1d6e20debb1fc89079e0c5e4861e3ca4", "text": "BACKGROUND\nThe aims of this study were to identify the independent factors associated with intermittent addiction and addiction to the Internet and to examine the psychiatric symptoms in Korean adolescents when the demographic and Internet-related factors were controlled.\n\n\nMETHODS\nMale and female students (N = 912) in the 7th-12th grades were recruited from 2 junior high schools and 2 academic senior high schools located in Seoul, South Korea. Data were collected from November to December 2004 using the Internet-Related Addiction Scale and the Symptom Checklist-90-Revision. A total of 851 subjects were analyzed after excluding the subjects who provided incomplete data.\n\n\nRESULTS\nApproximately 30% (n = 258) and 4.3% (n = 37) of subjects showed intermittent Internet addiction and Internet addiction, respectively. Multivariate logistic regression analysis showed that junior high school students and students having a longer period of Internet use were significantly associated with intermittent addiction. In addition, male gender, chatting, and longer Internet use per day were significantly associated with Internet addiction. When the demographic and Internet-related factors were controlled, obsessive-compulsive and depressive symptoms were found to be independently associated factors for intermittent addiction and addiction to the Internet, respectively.\n\n\nCONCLUSIONS\nStaff working in junior or senior high schools should pay closer attention to those students who have the risk factors for intermittent addiction and addiction to the Internet. Early preventive intervention programs are needed that consider the individual severity level of Internet addiction.", "title": "" }, { "docid": "3f1a2efdff6be4df064f3f5b978febee", "text": "D-galactose injection has been shown to induce many changes in mice that represent accelerated aging. This mouse model has been widely used for pharmacological studies of anti-aging agents. The underlying mechanism of D-galactose induced aging remains unclear, however, it appears to relate to glucose and lipid metabolic disorders. Currently, there has yet to be a study that focuses on investigating gene expression changes in D-galactose aging mice.
In this study, integrated analysis of gas chromatography/mass spectrometry-based metabonomics and gene expression profiles was used to investigate the changes in transcriptional and metabolic profiles in mimetic aging mice injected with D-galactose. Our findings demonstrated that 48 mRNAs were differentially expressed between control and D-galactose mice, and 51 potential biomarkers were identified at the metabolic level. The effects of D-galactose on aging could be attributed to glucose and lipid metabolic disorders, oxidative damage, accumulation of advanced glycation end products (AGEs), reduction in abnormal substance elimination, cell apoptosis, and insulin resistance.", "title": "" }, { "docid": "03fc999e12a705e5228d44d97e126ee1", "text": "This paper describes a novel method called Deep Dynamic Neural Networks (DDNN) for multimodal gesture recognition. A semi-supervised hierarchical dynamic framework based on a Hidden Markov Model (HMM) is proposed for simultaneous gesture segmentation and recognition where skeleton joint information, depth and RGB images, are the multimodal input observations. Unlike most traditional approaches that rely on the construction of complex handcrafted features, our approach learns high-level spatiotemporal representations using deep neural networks suited to the input modality: a Gaussian-Bernoulli Deep Belief Network (DBN) to handle skeletal dynamics, and a 3D Convolutional Neural Network (3DCNN) to manage and fuse batches of depth and RGB images. This is achieved through the modeling and learning of the emission probabilities of the HMM required to infer the gesture sequence. This purely data driven approach achieves a Jaccard index score of 0.81 in the ChaLearn LAP gesture spotting challenge. The performance is on par with a variety of state-of-the-art hand-tuned feature-based approaches and other learning-based methods, therefore opening the door to the use of deep learning techniques in order to further explore multimodal time series data.", "title": "" }, { "docid": "b5dd3b83c680a9b3717597b92b03bb6b", "text": "In this correspondence we have not addressed the problem of constructing actual codebooks. Information theory indicates that, in principle, one can construct a codebook by drawing each component of each codeword independently, using the distribution obtained from the Blahut algorithm. This procedure is not in general practical. Practical ways to construct codewords may be found in the extensive literature on vector quantization (see, e.g., the tutorial paper by R. M. Gray [19] or the book [20]). It is not clear at this point if codebook constructing methods from the vector quantizer literature are practical in the setting of this correspondence. Alternatively, one can trade complexity and performance and construct a scalar quantizer. In this case, the distribution obtained from the Blahut algorithm may be used in the Max–Lloyd algorithm [21], [22]. Abstract—A hyperspectral image can be considered as an image cube where the third dimension is the spectral domain represented by hundreds of spectral wavelengths.
As a result, a hyperspectral image pixel is actually a column vector with dimension equal to the number of spectral bands and contains valuable spectral information that can be used to account for pixel variability, similarity, and discrimination. In this correspondence, we present a new hyperspectral measure, Spectral Information Measure (SIM), to describe spectral variability and two criteria, spectral information divergence and spectral discriminatory probability, for spectral similarity and discrimination, respectively. The spectral information measure is an information-theoretic measure which treats each pixel as a random variable using its spectral signature histogram as the desired probability distribution. Spectral Information Divergence (SID) compares the similarity between two pixels by measuring the probabilistic discrepancy between two corresponding spectral signatures. The spectral discriminatory probability calculates spectral probabilities of a spectral database (library) relative to a pixel to be identified so as to achieve material identification. In order to compare the discriminatory power of one spectral measure relative to another , a criterion is also introduced for performance evaluation, which is based on the power of discriminating one pixel from another relative to a reference pixel. The experimental results demonstrate that the new hyper-spectral measure can characterize spectral variability more effectively than the commonly used Spectral Angle Mapper (SAM).", "title": "" }, { "docid": "f6e080319e7455fda0695f324941edcb", "text": "The Internet of Things (IoT) is a distributed system of physical objects that requires the seamless integration of hardware (e.g., sensors, actuators, electronics) and network communications in order to collect and exchange data. IoT smart objects need to be somehow identified to determine the origin of the data and to automatically detect the elements around us. One of the best positioned technologies to perform identification is RFID (Radio Frequency Identification), which in the last years has gained a lot of popularity in applications like access control, payment cards or logistics. Despite its popularity, RFID security has not been properly handled in numerous applications. To foster security in such applications, this article includes three main contributions. First, in order to establish the basics, a detailed review of the most common flaws found in RFID-based IoT systems is provided, including the latest attacks described in the literature. Second, a novel methodology that eases the detection and mitigation of such flaws is presented. Third, the latest RFID security tools are analyzed and the methodology proposed is applied through one of them (Proxmark 3) to validate it. Thus, the methodology is tested in different scenarios where tags are commonly used for identification. In such systems it was possible to clone transponders, extract information, and even emulate both tags and readers. Therefore, it is shown that the methodology proposed is useful for auditing security and reverse engineering RFID communications in IoT applications. It must be noted that, although this paper is aimed at fostering RFID communications security in IoT applications, the methodology can be applied to any RFID communications protocol.", "title": "" }, { "docid": "b9a2a41e12e259fbb646ff92956e148e", "text": "The paper presents a concept where pairs of ordinary RFID tags are exploited for use as remotely read moisture sensors. 
The pair of tags is incorporated into one label where one of the tags is embedded in a moisture absorbent material and the other is left open. In a humid environment the moisture concentration is higher in the absorbent material than the surrounding environment which causes degradation to the embedded tag's antenna in terms of dielectric losses and change of input impedance. The level of relative humidity or the amount of water in the absorbent material is determined for a passive RFID system by comparing the difference in RFID reader output power required to power up respectively the open and embedded tag. It is similarly shown how the backscattered signal strength of a semi-active RFID system is proportional to the relative humidity and amount of water in the absorbent material. Typical applications include moisture detection in buildings, especially from leaking water pipe connections hidden beyond walls. Presented solution has a cost comparable to ordinary RFID tags, and the passive system also has infinite life time since no internal power supply is needed. The concept is characterized for two commercial RFID systems, one passive operating at 868 MHz and one semi-active operating at 2.45 GHz.", "title": "" }, { "docid": "0df681e77b30e9143f7563b847eca5c6", "text": "BRIDGE bot is a 158 g, 10.7 × 8.9 × 6.5 cm3, magnetic-wheeled robot designed to traverse and inspect steel bridges. Utilizing custom magnetic wheels, the robot is able to securely adhere to the bridge in any orientation. The body platform features flexible, multi-material legs that enable a variety of plane transitions as well as robot shape manipulation. The robot is equipped with a Cortex-M0 processor, inertial sensors, and a modular wireless radio. A camera is included to provide images for detection and evaluation of identified problems. The robot has been demonstrated moving through plane transitions from 45° to 340° as well as over obstacles up to 9.5 mm in height. Preliminary use of sensor feedback to improve plane transitions has also been demonstrated.", "title": "" }, { "docid": "3867ff9ac24349b17e50ec2a34e84da4", "text": "Each generation that enters the workforce brings with it its own unique perspectives and values, shaped by the times of their life, about work and the work environment; thus posing atypical human resources management challenges. Following the completion of an extensive quantitative study conducted in Cyprus, and by adopting a qualitative methodology, the researchers aim to further explore the occupational similarities and differences of the two prevailing generations, X and Y, currently active in the workplace. Moreover, the study investigates the effects of the perceptual generational differences on managing the diverse hospitality workplace. Industry implications, recommendations for stakeholders as well as directions for further scholarly research are discussed.", "title": "" }, { "docid": "e55fdc146f334c9257e5b2a3e9f2d2d9", "text": "Customer churn prediction models aim to detect customers with a high propensity to attrite. Predictive accuracy, comprehensibility, and justifiability are three key aspects of a churn prediction model. An accurate model permits to correctly target future churners in a retention marketing campaign, while a comprehensible and intuitive rule-set allows to identify the main drivers for customers to churn, and to develop an effective retention strategy in accordance with domain knowledge. 
This paper provides an extended overview of the literature on the use of data mining in customer churn prediction modeling. It is shown that only limited attention has been paid to the comprehensibility and the intuitiveness of churn prediction models. Therefore, two novel data mining techniques are applied to churn prediction modeling, and benchmarked to traditional rule induction techniques such as C4.5 and RIPPER. Both AntMiner+ and ALBA are shown to induce accurate as well as comprehensible classification rule-sets. AntMiner+ is a high performing data mining technique based on the principles of Ant Colony Optimization that allows to include domain knowledge by imposing monotonicity constraints on the final rule-set. ALBA on the other hand combines the high predictive accuracy of a non-linear support vector machine model with the comprehensibility of the rule-set format. The results of the benchmarking experiments show that ALBA improves learning of classification techniques, resulting in comprehensible models with increased performance. AntMiner+ results in accurate, comprehensible, but most importantly justifiable models, unlike the other modeling techniques included in this study. 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "b466803c9a9be5d38171ece8d207365e", "text": "A large number of saliency models, each based on a different hypothesis, have been proposed over the past 20 years. In practice, while subscribing to one hypothesis or computational principle makes a model that performs well on some types of images, it hinders the general performance of a model on arbitrary images and large-scale data sets. One natural approach to improve overall saliency detection accuracy would then be fusing different types of models. In this paper, inspired by the success of late-fusion strategies in semantic analysis and multi-modal biometrics, we propose to fuse the state-of-the-art saliency models at the score level in a para-boosting learning fashion. First, saliency maps generated by several models are used as confidence scores. Then, these scores are fed into our para-boosting learner (i.e., support vector machine, adaptive boosting, or probability density estimator) to generate the final saliency map. In order to explore the strength of para-boosting learners, traditional transformation-based fusion strategies, such as Sum, Min, and Max, are also explored and compared in this paper. To further reduce the computation cost of fusing too many models, only a few of them are considered in the next step. Experimental results show that score-level fusion outperforms each individual model and can further reduce the performance gap between the current models and the human inter-observer model.", "title": "" }, { "docid": "d1d862185a20e1f1efc7d3dc7ca8524b", "text": "In what ways do the online behaviors of wizards and ogres map to players’ actual leadership status in the offline world? What can we learn from players’ experience in Massively Multiplayer Online games (MMOGs) to advance our understanding of leadership, especially leadership in online settings (E-leadership)? As part of a larger agenda in the emerging field of empirically testing the ‘‘mapping’’ between the online and offline worlds, this study aims to tackle a central issue in the E-leadership literature: how have technology and technology mediated communications transformed leadership-diagnostic traits and behaviors? 
To answer this question, we surveyed over 18,000 players of a popular MMOG and also collected behavioral data of a subset of survey respondents over a four-month period. Motivated by leadership theories, we examined the connection between respondents’ offline leadership status and their in-game relationship-oriented and task-related-behaviors. Our results indicate that individuals’ relationship-oriented behaviors in the virtual world are particularly relevant to players’ leadership status in voluntary organizations, while their task-oriented behaviors are marginally linked to offline leadership status in voluntary organizations, but not in companies. 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "86de6e4d945f0d1fa7a0b699064d7bd5", "text": "BACKGROUND\nTo increase understanding of the relationships among sexual violence, paraphilias, and mental illness, the authors assessed the legal and psychiatric features of 113 men convicted of sexual offenses.\n\n\nMETHOD\n113 consecutive male sex offenders referred from prison, jail, or probation to a residential treatment facility received structured clinical interviews for DSM-IV Axis I and II disorders, including sexual disorders. Participants' legal, sexual and physical abuse, and family psychiatric histories were also evaluated. We compared offenders with and without paraphilias.\n\n\nRESULTS\nParticipants displayed high rates of lifetime Axis I and Axis II disorders: 96 (85%) had a substance use disorder; 84 (74%), a paraphilia; 66 (58%), a mood disorder (40 [35%], a bipolar disorder and 27 [24%], a depressive disorder); 43 (38%), an impulse control disorder; 26 (23%), an anxiety disorder; 10 (9%), an eating disorder; and 63 (56%), antisocial personality disorder. Presence of a paraphilia correlated positively with the presence of any mood disorder (p <.001), major depression (p =.007), bipolar I disorder (p =.034), any anxiety disorder (p=.034), any impulse control disorder (p =.006), and avoidant personality disorder (p =.013). Although offenders without paraphilias spent more time in prison than those with paraphilias (p =.019), paraphilic offenders reported more victims (p =.014), started offending at a younger age (p =.015), and were more likely to perpetrate incest (p =.005). Paraphilic offenders were also more likely to be convicted of (p =.001) or admit to (p <.001) gross sexual imposition of a minor. Nonparaphilic offenders were more likely to have adult victims exclusively (p =.002), a prior conviction for theft (p <.001), and a history of juvenile offenses (p =.058).\n\n\nCONCLUSIONS\nSex offenders in the study population displayed high rates of mental illness, substance abuse, paraphilias, personality disorders, and comorbidity among these conditions. Sex offenders with paraphilias had significantly higher rates of certain types of mental illness and avoidant personality disorder. Moreover, paraphilic offenders spent less time in prison but started offending at a younger age and reported more victims and more non-rape sexual offenses against minors than offenders without paraphilias. 
On the basis of our findings, we assert that sex offenders should be carefully evaluated for the presence of mental illness and that sex offender management programs should have a capacity for psychiatric treatment.", "title": "" }, { "docid": "3d9e279afe4ba8beb1effd4f26550f67", "text": "We propose and demonstrate a scheme for boosting the efficiency of entanglement distribution based on a decoherence-free subspace over lossy quantum channels. By using backward propagation of a coherent light, our scheme achieves an entanglement-sharing rate that is proportional to the transmittance T of the quantum channel in spite of encoding qubits in multipartite systems for the decoherence-free subspace. We experimentally show that highly entangled states, which can violate the Clauser-Horne-Shimony-Holt inequality, are distributed at a rate proportional to T.", "title": "" }, { "docid": "b8702cb8d18ae53664f3dfff95152764", "text": "Word Sense Disambiguation is a longstanding task in Natural Language Processing, lying at the core of human language understanding. However, the evaluation of automatic systems has been problematic, mainly due to the lack of a reliable evaluation framework. In this paper we develop a unified evaluation framework and analyze the performance of various Word Sense Disambiguation systems in a fair setup. The results show that supervised systems clearly outperform knowledge-based models. Among the supervised systems, a linear classifier trained on conventional local features still proves to be a hard baseline to beat. Nonetheless, recent approaches exploiting neural networks on unlabeled corpora achieve promising results, surpassing this hard baseline in most test sets.", "title": "" }, { "docid": "10f32a4e0671adaee3e18f20592c4619", "text": "This paper presents a novel flexible sliding thigh frame for a gait enhancing mechatronic system. With its two-layered unique structure, the frame is flexible in certain locations and directions, and stiff at certain other locations, so that it can fît well to the wearer's thigh and transmit the assisting torque without joint loading. The paper describes the basic mechanics of this 3D flexible frame and its stiffness characteristics. We implemented the 3D flexible frame on a gait enhancing mechatronic system and conducted experiments. The performance of the proposed mechanism is verified by simulation and experiments.", "title": "" }, { "docid": "8ce97c23c5714b2032cfd8098a59a8b4", "text": "In psychodynamic theory, trauma is associated with a life event, which is defined by its intensity, by the inability of the person to respond adequately and by its pathologic longlasting effects on the psychic organization. In this paper, we describe how neurobiological changes link to psychodynamic theory. Initially, Freud believed that all types of neurosis were the result of former traumatic experiences, mainly in the form of sexual trauma. According to the first Freudian theory (1890–1897), hysteric patients suffer mainly from relevant memories. In his later theory of ‘differed action’, i.e., the retroactive attribution of sexual or traumatic meaning to earlier events, Freud links the consequences of sexual trauma in childhood with the onset of pathology in adulthood (Boschan, 2008). The transmission of trauma from parents to children may take place from one generation to the other. The trauma that is being experienced by the child has an interpersonal character and is being reinforced by the parents’ own traumatic experience. 
The subject’s interpersonal exposure through the relationship with the direct victims has been recognized as a risk factor for the development of a post-traumatic stress disorder. Trauma may be transmitted from the mother to the foetus during the intrauterine life (Opendak & Sullivan, 2016). Empirical studies also demonstrate that in the first year of life infants that had witnessed violence against their mothers presented symptoms of a posttraumatic disorder. Traumatic symptomatology in infants includes eating difficulties, sleep disorders, high arousal level and excessive crying, affect disorders and relational problems with adults and peers. Infants that are directly dependant to the caregiver are more vulnerable and at a greater risk to suffer interpersonal trauma and its neurobiological consequences (Opendak & Sullivan, 2016). In older children symptoms were more related to the severity of violence they had been exposed to than to the mother’s actual emotional state, which shows that the relationship between mother’s and child’s trauma is different in each age stage. The type of attachment and the quality of the mother-child interactional relationship contribute also to the transmission of the trauma. According to Fonagy (2003), the mother who is experiencing trauma is no longer a source of security and becomes a source of danger. Thus, the mentalization ability may be destroyed by an attachment figure, which caused to the child enough stress related to its own thoughts and emotions to an extent, that the child avoids thoughts about the other’s subjective experience. At a neurobiological level, many studies have shown that the effects of environmental stress on the brain are being mediated through molecular and cellular mechanisms. More specifically, trauma causes changes at a chemical and anatomical level resulting in transforming the subject’s response to future stress. The imprinting mechanisms of traumatic experiences are directly related to the activation of the neurobiological circuits associated with emotion, in which amygdala play a central role. The traumatic experiences are strongly encoded in memory and difficult to be erased. Early stress may result in impaired cognitive function related to disrupted functioning of certain areas of the hippocampus in the short or long term. Infants or young children that have suffered a traumatic experience may are unable to recollect events in a conscious way. However, they may maintain latent memory of the reactions to the experience and the intensity of the emotion. The neurobiological data support the ‘deferred action’ of the psychodynamic theory according which when the impact of early interpersonal trauma is so pervasive, the effects can transcend into later stages, even after the trauma has stopped. The two approaches, psychodynamic and neurobiological, are not opposite, but complementary. Psychodynamic psychotherapists and neurobiologists, based on extended theoretical bases, combine data and enrich the understanding of psychiatric disorders in childhood. The study of interpersonal trauma offers a good example of how different approaches, biological and psychodynamic, may come closer and possibly be unified into a single model, which could result in more effective therapeutic approaches.", "title": "" }, { "docid": "75f5679d9c1bab3585c1bf28d50327d8", "text": "From medical charts to national census, healthcare has traditionally operated under a paper-based paradigm. 
However, the past decade has marked a long and arduous transformation bringing healthcare into the digital age. Ranging from electronic health records, to digitized imaging and laboratory reports, to public health datasets, today, healthcare now generates an incredible amount of digital information. Such a wealth of data presents an exciting opportunity for integrated machine learning solutions to address problems across multiple facets of healthcare practice and administration. Unfortunately, the ability to derive accurate and informative insights requires more than the ability to execute machine learning models. Rather, a deeper understanding of the data on which the models are run is imperative for their success. While a significant effort has been undertaken to develop models able to process the volume of data obtained during the analysis of millions of digitalized patient records, it is important to remember that volume represents only one aspect of the data. In fact, drawing on data from an increasingly diverse set of sources, healthcare data presents an incredibly complex set of attributes that must be accounted for throughout the machine learning pipeline. This chapter focuses on highlighting such challenges, and is broken down into three distinct components, each representing a phase of the pipeline. We begin with attributes of the data accounted for during preprocessing, then move to considerations during model building, and end with challenges to the interpretation of model output. For each component, we present a discussion around data as it relates to the healthcare domain and offer insight into the challenges each may impose on the efficiency of machine learning techniques.", "title": "" } ]
scidocsrr
4c9b2a96fac7e62bf1237a59fe45c80e
Multilevel secure data stream processing: Architecture and implementation
[ { "docid": "24da291ca2590eb614f94f8a910e200d", "text": "CQL, a continuous query language, is supported by the STREAM prototype data stream management system (DSMS) at Stanford. CQL is an expressive SQL-based declarative language for registering continuous queries against streams and stored relations. We begin by presenting an abstract semantics that relies only on “black-box” mappings among streams and relations. From these mappings we define a precise and general interpretation for continuous queries. CQL is an instantiation of our abstract semantics using SQL to map from relations to relations, window specifications derived from SQL-99 to map from streams to relations, and three new operators to map from relations to streams. Most of the CQL language is operational in the STREAM system. We present the structure of CQL's query execution plans as well as details of the most important components: operators, interoperator queues, synopses, and sharing of components among multiple operators and queries. Examples throughout the paper are drawn from the Linear Road benchmark recently proposed for DSMSs. We also curate a public repository of data stream applications that includes a wide variety of queries expressed in CQL. The relative ease of capturing these applications in CQL is one indicator that the language contains an appropriate set of constructs for data stream processing.", "title": "" } ]
[ { "docid": "96363ec5134359b5bf7c8b67f67971db", "text": "Self adaptive video games are important for rehabilitation at home. Recent works have explored different techniques with satisfactory results but these have a poor use of game design concepts like Challenge and Conservative Handling of Failure. Dynamic Difficulty Adjustment with Help (DDA-Help) approach is presented as a new point of view for self adaptive video games for rehabilitation. Procedural Content Generation (PCG) and automatic helpers are used to a different work on Conservative Handling of Failure and Challenge. An experience with amblyopic children showed the proposal effectiveness, increasing the visual acuity 2-3 levels following the Snellen Vision Test and improving the performance curve during the game time.", "title": "" }, { "docid": "d974b1ffafd9ad738303514f28a770b9", "text": "We introduce a new algorithm for reinforcement learning called Maximum a posteriori Policy Optimisation (MPO) based on coordinate ascent on a relative-entropy objective. We show that several existing methods can directly be related to our derivation. We develop two off-policy algorithms and demonstrate that they are competitive with the state-of-the-art in deep reinforcement learning. In particular, for continuous control, our method outperforms existing methods with respect to sample efficiency, premature convergence and robustness to hyperparameter settings.", "title": "" }, { "docid": "a6e0bbc761830bc74d58793a134fa75b", "text": "With the explosion of multimedia data, semantic event detection from videos has become a demanding and challenging topic. In addition, when the data has a skewed data distribution, interesting event detection also needs to address the data imbalance problem. The recent proliferation of deep learning has made it an essential part of many Artificial Intelligence (AI) systems. Till now, various deep learning architectures have been proposed for numerous applications such as Natural Language Processing (NLP) and image processing. Nonetheless, it is still impracticable for a single model to work well for different applications. Hence, in this paper, a new ensemble deep learning framework is proposed which can be utilized in various scenarios and datasets. The proposed framework is able to handle the over-fitting issue as well as the information losses caused by single models. Moreover, it alleviates the imbalanced data problem in real-world multimedia data. The whole framework includes a suite of deep learning feature extractors integrated with an enhanced ensemble algorithm based on the performance metrics for the imbalanced data. The Support Vector Machine (SVM) classifier is utilized as the last layer of each deep learning component and also as the weak learners in the ensemble module. The framework is evaluated on two large-scale and imbalanced video datasets (namely, disaster and TRECVID). The extensive experimental results illustrate the advantage and effectiveness of the proposed framework. It also demonstrates that the proposed framework outperforms several well-known deep learning methods, as well as the conventional features integrated with different classifiers.", "title": "" }, { "docid": "e9250f1b7c471c522d8a311a18f5c07b", "text": "In this paper, we explored a learning approach which combines different learning methods in inductive logic programming (ILP) to allow a learner to produce more expressive hypotheses than that of each individual learner.
Such a learning approach may be useful when the performance of the task depends on solving a large amount of classification problems and each has its own characteristics which may or may not fit a particular learning method. The task of semantic parser acquisition in two different domains was attempted and preliminary results demonstrated that such an approach is promising.", "title": "" }, { "docid": "9955b14187e172e34f233fec70ae0a38", "text": "Neural network language models (NNLM) have become an increasingly popular choice for large vocabulary continuous speech recognition (LVCSR) tasks, due to their inherent generalisation and discriminative power. This paper presents two techniques to improve performance of standard NNLMs. First, the form of NNLM is modelled by introducing an additional output layer node to model the probability mass of out-of-shortlist (OOS) words. An associated probability normalisation scheme is explicitly derived. Second, a novel NNLM adaptation method using a cascaded network is proposed. Consistent WER reductions were obtained on a state-of-the-art Arabic LVCSR task over conventional NNLMs. Further performance gains were also observed after NNLM adaptation.", "title": "" }, { "docid": "80fed8845ca14843855383d714600960", "text": "In this paper, a methodology is developed to use data acquisition derived from condition monitoring and standard diagnosis for rehabilitation purposes of transformers. The interpretation and understanding of the test data are obtained from international test standards to determine the current condition of transformers. In an attempt to ascertain monitoring priorities, the effective test methods are selected for transformer diagnosis. In particular, the standardization of diagnostic and analytical techniques are being improved that will enable field personnel to more easily use the test results and will reduce the need for interpretation by experts. In addition, the advanced method has the potential to reduce the time greatly and increase the accuracy of diagnostics. The important aim of the standardization is to develop the multiple diagnostic models that combine results from the different tests and give an overall assessment of reliability and maintenance for transformers.", "title": "" }, { "docid": "7e70955671d2ad8728fdba0fc3ec5548", "text": "Detection of drowsiness based on extraction of IMF's from EEG signal using EMD process and characterizing the features using trained Artificial Neural Network (ANN) is introduced in this paper. Our subjects are 8 volunteers who have not slept for the last 24 hours due to travelling. EEG signal was recorded when the subject is sitting on a chair facing a video camera and is obliged to see the camera only. ANN is trained using a utility made in Matlab to mark the EEG data for drowsy state and awake state and then extract IMF's of marked data using EMD to prepare feature inputs for the Neural Network. Once the neural network is trained, IMFs of new subjects' EEG signals are given as input and ANN will give output in two different states i.e. 'drowsy' or 'awake'. The system is tested on 8 different subjects and it provided good results with more than 84.8% of correct detection of drowsy states.", "title": "" }, { "docid": "17ed052368311073f7f18fd423c817e9", "text": "We adopt and analyze a synchronous K-step averaging stochastic gradient descent algorithm which we call K-AVG for solving large scale machine learning problems. We establish the convergence results of K-AVG for nonconvex objectives.
Our analysis of K-AVG applies to many existing variants of synchronous SGD. We explain why the K-step delay is necessary and leads to better performance than traditional parallel stochastic gradient descent which is equivalent to K-AVG with K = 1. We also show that K-AVG scales better with the number of learners than asynchronous stochastic gradient descent (ASGD). Another advantage of K-AVG over ASGD is that it allows larger stepsizes and facilitates faster convergence. On a cluster of 128 GPUs, K-AVG is faster than ASGD implementations and achieves better accuracies and faster convergence for training with the CIFAR-10 dataset.", "title": "" }, { "docid": "dd6ed8448043868d17ddb015c98a4721", "text": "Social networking sites, especially Facebook, are an integral part of the lifestyle of contemporary youth. The facilities are increasingly being used by older persons as well. Usage is mainly for social purposes, but the group and discussion facilities of Facebook hold potential for focused academic use. This paper describes and discusses a venture in which postgraduate distance-learning students joined an optional group for the purpose of discussions on academic, content-related topics, largely initiated by the students themselves. Learning and insight were enhanced by these discussions and the students, in their environment of distance learning, are benefiting by contact with fellow students.", "title": "" }, { "docid": "5d2190a63468e299bf755895488bd7ba", "text": "We use logical inference techniques for recognising textual entailment, with theorem proving operating on deep semantic interpretations as the backbone of our system. However, the performance of theorem proving on its own turns out to be highly dependent on a wide range of background knowledge, which is not necessarily included in publicly available knowledge sources. Therefore, we achieve robustness via two extensions. Firstly, we incorporate model building, a technique borrowed from automated reasoning, and show that it is a useful robust method to approximate entailment. Secondly, we use machine learning to combine these deep semantic analysis techniques with simple shallow word overlap. The resulting hybrid model achieves high accuracy on the RTE testset, given the state of the art. Our results also show that the various techniques that we employ perform very differently on some of the subsets of the RTE corpus and as a result, it is useful to use the nature of the dataset as a feature.", "title": "" }, { "docid": "850f51897e97048a376c60a3a989426f", "text": "With the advent of high dimensionality, adequate identification of relevant features of the data has become indispensable in real-world scenarios. In this context, the importance of feature selection is beyond doubt and different methods have been developed. However, with such a vast body of algorithms available, choosing the adequate feature selection method is not an easy-to-solve question and it is necessary to check their effectiveness on different situations. Nevertheless, the assessment of relevant features is difficult in real datasets and so an interesting option is to use artificial data. In this paper, several synthetic datasets are employed for this purpose, aiming at reviewing the performance of feature selection methods in the presence of a crescent number of irrelevant features, noise in the data, redundancy and interaction between attributes, as well as a small ratio between number of samples and number of features.
Seven filters, two embedded methods, and two wrappers are applied over eleven synthetic datasets, tested by four classifiers, so as to be able to choose a robust method, paving the way for its application to real datasets.", "title": "" }, { "docid": "0103439813a724a3df2e3bd827680abd", "text": "Unsupervised automatic topic discovery in micro-blogging social networks is a very challenging task, as it involves the analysis of very short, noisy, ungrammatical and uncontextual messages. Most of the current approaches to this problem are basically syntactic, as they focus either on the use of statistical techniques or on the analysis of the co-occurrences between the terms. This paper presents a novel topic discovery methodology, based on the mapping of hashtags to WordNet terms and their posterior clustering, in which semantics plays a centre role. The paper also presents a detailed case study in the field of Oncology, in which the discovered topics are thoroughly compared to a golden standard, showing promising results. 2015 Published by Elsevier Ltd.", "title": "" }, { "docid": "90abf21c7a6929a47d789c3e1c56f741", "text": "Nearly 40 years ago, Dr. R.J. Gibbons made the first reports of the clinical relevance of what we now know as bacterial biofilms when he published his observations of the role of polysaccharide glycocalyx formation on teeth by Streptococcus mutans [Sci. Am. 238 (1978) 86]. As the clinical relevance of bacterial biofilm formation became increasingly apparent, interest in the phenomenon exploded. Studies are rapidly shedding light on the biomolecular pathways leading to this sessile mode of growth but many fundamental questions remain. The intent of this review is to consider the reasons why bacteria switch from a free-floating to a biofilm mode of growth. The currently available wealth of data pertaining to the molecular genetics of biofilm formation in commonly studied, clinically relevant, single-species biofilms will be discussed in an effort to decipher the motivation behind the transition from planktonic to sessile growth in the human body. Four potential incentives behind the formation of biofilms by bacteria during infection are considered: (1) protection from harmful conditions in the host (defense), (2) sequestration to a nutrient-rich area (colonization), (3) utilization of cooperative benefits (community), (4) biofilms normally grow as biofilms and planktonic cultures are an in vitro artifact (biofilms as the default mode of growth).", "title": "" }, { "docid": "c0350ac9bd1c38252e04a3fd097ae6ee", "text": "In contrast to the increasing popularity of REpresentational State Transfer (REST), systematic testing of RESTful Application Programming Interfaces (API) has not attracted much attention so far. This paper describes different aspects of automated testing of RESTful APIs. Later, we focus on functional and security tests, for which we apply a technique called model-based software development. Based on an abstract model of the RESTful API that comprises resources, states and transitions a software generator not only creates the source code of the RESTful API but also creates a large number of test cases that can be immediately used to test the implementation. 
This paper describes the process of developing a software generator for test cases using state-of-the-art tools and provides an example to show the feasibility of our approach.", "title": "" }, { "docid": "4d2461f0fe7cd85ed2d4678f3a3b164b", "text": "BACKGROUND\nProblematic Internet addiction or excessive Internet use is characterized by excessive or poorly controlled preoccupations, urges, or behaviors regarding computer use and Internet access that lead to impairment or distress. Currently, there is no recognition of internet addiction within the spectrum of addictive disorders and, therefore, no corresponding diagnosis. It has, however, been proposed for inclusion in the next version of the Diagnostic and Statistical Manual of Mental Disorder (DSM).\n\n\nOBJECTIVE\nTo review the literature on Internet addiction over the topics of diagnosis, phenomenology, epidemiology, and treatment.\n\n\nMETHODS\nReview of published literature between 2000-2009 in Medline and PubMed using the term \"internet addiction.\n\n\nRESULTS\nSurveys in the United States and Europe have indicated prevalence rate between 1.5% and 8.2%, although the diagnostic criteria and assessment questionnaires used for diagnosis vary between countries. Cross-sectional studies on samples of patients report high comorbidity of Internet addiction with psychiatric disorders, especially affective disorders (including depression), anxiety disorders (generalized anxiety disorder, social anxiety disorder), and attention deficit hyperactivity disorder (ADHD). Several factors are predictive of problematic Internet use, including personality traits, parenting and familial factors, alcohol use, and social anxiety.\n\n\nCONCLUSIONS AND SCIENTIFIC SIGNIFICANCE\nAlthough Internet-addicted individuals have difficulty suppressing their excessive online behaviors in real life, little is known about the patho-physiological and cognitive mechanisms responsible for Internet addiction. Due to the lack of methodologically adequate research, it is currently impossible to recommend any evidence-based treatment of Internet addiction.", "title": "" }, { "docid": "412b616f4fcb9399c8220c542ecac83e", "text": "Image cropping aims at improving the aesthetic quality of images by adjusting their composition. Most weakly supervised cropping methods (without bounding box supervision) rely on the sliding window mechanism. The sliding window mechanism requires fixed aspect ratios and limits the cropping region with arbitrary size. Moreover, the sliding window method usually produces tens of thousands of windows on the input image which is very time-consuming. Motivated by these challenges, we firstly formulate the aesthetic image cropping as a sequential decision-making process and propose a weakly supervised Aesthetics Aware Reinforcement Learning (A2-RL) framework to address this problem. Particularly, the proposed method develops an aesthetics aware reward function which especially benefits image cropping. Similar to human's decision making, we use a comprehensive state representation including both the current observation and the historical experience. We train the agent using the actor-critic architecture in an end-to-end manner. The agent is evaluated on several popular unseen cropping datasets. 
Experiment results show that our method achieves the state-of-the-art performance with much fewer candidate windows and much less time compared with previous weakly supervised methods.", "title": "" }, { "docid": "d80070cf7ab3d3e75c2da1525e59be67", "text": "This paper presents for the first time the analysis and experimental validation of a six-slot four-pole synchronous reluctance motor with nonoverlapping fractional slot-concentrated windings. The machine exhibits high torque density and efficiency due to its high fill factor coils with very short end windings, facilitated by a segmented stator and bobbin winding of the coils. These advantages are coupled with its inherent robustness and low cost. The topology is presented as a logical step forward in advancing synchronous reluctance machines that have been universally wound with a sinusoidally distributed winding. The paper presents the motor design, performance evaluation through finite element studies and validation of the electromagnetic model, and thermal specification through empirical testing. It is shown that high performance synchronous reluctance motors can be constructed with single tooth wound coils, but considerations must be given regarding torque quality and the d-q axis inductances.", "title": "" }, { "docid": "8bd9a5cf3ca49ad8dd38750410a462b0", "text": "Most regional anesthesia in breast surgeries is performed as postoperative pain management under general anesthesia, and not as the primary anesthesia. Regional anesthesia has very few cardiovascular or pulmonary side-effects, as compared with general anesthesia. Pectoral nerve block is a relatively new technique, with fewer complications than other regional anesthesia. We performed Pecs I and Pec II block simultaneously as primary anesthesia under moderate sedation with dexmedetomidine for breast conserving surgery in a 49-year-old female patient with invasive ductal carcinoma. Block was uneventful and showed no complications. Thus, Pecs block with sedation could be an alternative to general anesthesia for breast surgeries.", "title": "" }, { "docid": "e84856804fd03b5334353937e9db4f81", "text": "The probabilistic method comes up in various fields in mathematics. In these notes, we will give a brief introduction to graph theory and applications of the probabilistic method in proving bounds for Ramsey numbers and a theorem in graph cuts. This method is based on the following idea: in order to prove the existence of an object with some desired property, one defines a probability space on some larger class of objects, and then shows that an element of this space has the desired property with positive probability. The elements contained in this probability space may be of any kind. We will illustrate the probabilistic method by giving applications in graph theory.", "title": "" }, { "docid": "7f5f267e7628f3d9968c940ee3a5a370", "text": "Let G=(V,E) be a complete undirected graph, with node set $V=\{v_1, \ldots, v_n\}$ and edge set E. The edges $(v_i, v_j) \in E$ have nonnegative weights that satisfy the triangle inequality. Given a set of integers $K = \{k_i\}_{i=1}^p$ ($\sum_{i=1}^p k_i \leq |V|$), the minimum K-cut problem is to compute disjoint subsets with sizes $\{k_i\}_{i=1}^p$, minimizing the total weight of edges whose two ends are in different subsets. We demonstrate that for any fixed p it is possible to obtain in polynomial time an approximation of at most three times the optimal value.
We also prove bounds on the ratio between the weights of maximum and minimum cuts.", "title": "" } ]
scidocsrr
ccb69c95b57ab3b3a726e8ee0c27059c
Improving ChangeDistiller Improving Abstract Syntax Tree based Source Code Change Detection
[ { "docid": "cd8eeaeb81423fcb1c383f2b60e928df", "text": "Detecting and representing changes to data is important for active databases, data warehousing, view maintenance, and version and configuration management. Most previous work in change management has dealt with flat-file and relational data; we focus on hierarchically structured data. Since in many cases changes must be computed from old and new versions of the data, we define the hierarchical change detection problem as the problem of finding a \"minimum-cost edit script\" that transforms one data tree to another, and we present efficient algorithms for computing such an edit script. Our algorithms make use of some key domain characteristics to achieve substantially better performance than previous, general-purpose algorithms. We study the performance of our algorithms both analytically and empirically, and we describe the application of our techniques to hierarchically structured documents.", "title": "" } ]
[ { "docid": "1945d4663a49a5e1249e43dc7f64d15b", "text": "The current generation of adolescents grows up in a media-saturated world. However, it is unclear how media influences the maturational trajectories of brain regions involved in social interactions. Here we review the neural development in adolescence and show how neuroscience can provide a deeper understanding of developmental sensitivities related to adolescents’ media use. We argue that adolescents are highly sensitive to acceptance and rejection through social media, and that their heightened emotional sensitivity and protracted development of reflective processing and cognitive control may make them specifically reactive to emotion-arousing media. This review illustrates how neuroscience may help understand the mutual influence of media and peers on adolescents’ well-being and opinion formation. The current generation of adolescents grows up in a media-saturated world. Here, Crone and Konijn review the neural development in adolescence and show how neuroscience can provide a deeper understanding of developmental sensitivities related to adolescents’ media use.", "title": "" }, { "docid": "963d6b615ffd025723c82c1aabdbb9c6", "text": "A single high-directivity microstrip patch antenna (MPA) having a rectangular profile, which can substitute a linear array is proposed. It is designed by using genetic algorithms with the advantage of not requiring a feeding network. The patch fits inside an area of 2.54 x 0.25, resulting in a broadside pattern with a directivity of 12 dBi and a fractional impedance bandwidth of 4 %. The antenna is fabricated and the measurements are in good agreement with the simulated results. The genetic MPA provides a similar directivity as linear arrays using a corporate or series feeding, with the advantage that the genetic MPA results in more bandwidth.", "title": "" }, { "docid": "909405e3c06f22273107cb70a40d88c6", "text": "This paper reports a 6-bit 220-MS/s time-interleaving successive approximation register analog-to-digital converter (SAR ADC) for low-power low-cost CMOS integrated systems. The major concept of the design is based on the proposed set-and-down capacitor switching method in the DAC capacitor array. Compared to the conventional switching method, the average switching energy is reduced about 81%. At 220-MS/s sampling rate, the measured SNDR and SFDR are 32.62 dB and 48.96 dB respectively. The resultant ENOB is 5.13 bits. The total power consumption is 6.8 mW. Fabricated in TSMC 0.18-µm 1P5M Digital CMOS technology, the ADC only occupies 0.032 mm2 active area.", "title": "" }, { "docid": "f8947be81285e037eef69c5d2fcb94fb", "text": "To build a flexible and an adaptable architecture network supporting variety of services and their respective requirements, 5G NORMA introduced a network of functions based architecture breaking the major design principles followed in the current network of entities based architecture. This revolution exploits the advantages of the new technologies like Software-Defined Networking (SDN) and Network Function Virtualization (NFV) in conjunction with the network slicing and multitenancy concepts. In this paper we focus on the concept of Software Defined for Mobile Network Control (SDM-C) network: its definition, its role in controlling the intra network slices resources, its specificity to be QoE aware thanks to the QoE/QoS monitoring and modeling component and its complementarity with the orchestration component called SDM-O. 
To operate multiple network slices on the same infrastructure efficiently through controlling resources and network functions sharing among instantiated network slices, a common entity named SDM-X is introduced. The proposed design brings a set of new capabilities to make the network energy efficient, a feature that is discussed through some use cases.", "title": "" }, { "docid": "49791684a7a455acc9daa2ca69811e74", "text": "This paper analyzes the basic method of digital video image processing, studies the vehicle license plate recognition system based on image processing in intelligent transport system, presents a character recognition approach based on neural network perceptron to solve the vehicle license plate recognition in real-time traffic flow. Experimental results show that the approach can achieve better positioning effect, has a certain robustness and timeliness.", "title": "" }, { "docid": "704f4681b724a0e4c7c10fd129f3378b", "text": "We present an asymptotic fully polynomial approximation scheme for strip-packing, or packing rectangles into a rectangle of fixed width and minimum height, a classical NP-hard cutting-stock problem. The algorithm finds a packing of n rectangles whose total height is within a factor of (1 + ε) of optimal (up to an additive term), and has running time polynomial both in n and in 1/ε. It is based on a reduction to fractional bin-packing.", "title": "" }, { "docid": "6470c8a921a9095adb96afccaa0bf97b", "text": "Complex tasks with a visually rich component, like diagnosing seizures based on patient video cases, not only require the acquisition of conceptual but also of perceptual skills. Medical education has found that besides biomedical knowledge (knowledge of scientific facts) clinical knowledge (actual experience with patients) is crucial. One important aspect of clinical knowledge that medical education has hardly focused on, yet, are perceptual skills, like visually searching, detecting, and interpreting relevant features. Research on instructional design has shown that in a visually rich, but simple classification task perceptual skills could be conveyed by means of showing the eye movements of a didactically behaving expert. The current study applied this method to medical education in a complex task. This was done by example video cases, which were verbally explained by an expert. In addition the experimental groups saw a display of the expert's eye movements recorded, while he performed the task.
Results show that blurring non-attended areas of the expert enhances diagnostic performance of epileptic seizures by medical students in contrast to displaying attended areas as a circle and to a control group without attention guidance. These findings show that attention guidance fosters learning of perceptual aspects of clinical knowledge, if implemented in a spotlight manner.", "title": "" }, { "docid": "3d7e7ec8d4d0c2b3167805b2c3ad6e94", "text": "The Electric Vehicle Routing Problem with Time Windows (EVRPTW) is an extension to the well-known Vehicle Routing Problem with Time Windows (VRPTW) where the fleet consists of electric vehicles (EVs). Since EVs have limited driving range due to their battery capacities they may need to visit recharging stations while servicing the customers along their route. The recharging may take place at any battery level and after the recharging the battery is assumed to be full. In this paper, we relax the full recharge restriction and allow partial recharging (EVRPTW-PR) which is more practical in the real world due to shorter recharging duration. We formulate this problem as 0-1 mixed integer linear program and develop an Adaptive Large Neighborhood Search (ALNS) algorithm to solve it efficiently. We apply several removal and insertion mechanisms by selecting them dynamically and adaptively based on their past performances, including new mechanisms specifically designed for EVRPTW and EVRPTWPR. We test the performance of ALNS by using benchmark instances from the recent literature. The computational results show that the proposed method is effective in finding high quality solutions and the partial recharging option may significantly improve the routing decisions.", "title": "" }, { "docid": "4cd868f43a4a468791d014515800fb04", "text": "Rescue operations play an important role in disaster management and in most of the cases rescue operation are challenged by the conditions where human intervention is highly unlikely allowed, in such cases a device which can replace human limitations with advanced technology in robotics and humanoids which can track or follow a route to find the targets. In this paper we use Cellular mobile communication technology as communication channel between the transmitter and the receiving robot device. A phone is established between the transmitter mobile phone and the one on robot with a DTMF decoder which receives the motion control commands from the keypad via mobile phone. The implemented system is built around on the ARM7 LPC2148. It processes the information came from sensors and DTMF module and send to the motor driver bridge to control the motors to change direction and position of the robot. This system is designed to use best in the conditions of accidents or incidents happened in coal mining, fire accidents, bore well incidents and so on.", "title": "" }, { "docid": "56f18b39a740dd65fc2907cdef90ac99", "text": "This paper describes a dynamic artificial neural network based mobile robot motion and path planning system. The method is able to navigate a robot car on flat surface among static and moving obstacles, from any starting point to any endpoint. The motion controlling ANN is trained online with an extended backpropagation through time algorithm, which uses potential fields for obstacle avoidance. The paths of the moving obstacles are predicted with other ANNs for better obstacle avoidance. 
The method is presented through the realization of the navigation system of a mobile robot.", "title": "" }, { "docid": "1277b7b45f5a54eec80eb8ab47ee3fbb", "text": "Latent variable models, and probabilistic graphical models more generally, provide a declarative language for specifying prior knowledge and structural relationships in complex datasets. They have a long and rich history in natural language processing, having contributed to fundamental advances such as statistical alignment for translation (Brown et al., 1993), topic modeling (Blei et al., 2003), unsupervised part-of-speech tagging (Brown et al., 1992), and grammar induction (Klein and Manning, 2004), among others. Deep learning, broadly construed, is a toolbox for learning rich representations (i.e., features) of data through numerical optimization. Deep learning is the current dominant paradigm in natural language processing, and some of the major successes include language modeling (Bengio et al., 2003; Mikolov et al., 2010; Zaremba et al., 2014), machine translation (Sutskever et al., 2014; Cho et al., 2014; Bahdanau et al., 2015; Vaswani et al., 2017), and natural language understanding tasks such as question answering and natural language inference.", "title": "" }, { "docid": "5c47c2de88f662f8c6e735b5bb9cd37a", "text": "Neural Machine Translation (NMT) models are often trained on heterogeneous mixtures of domains, from news to parliamentary proceedings, each with unique distributions and language. In this work we show that training NMT systems on naively mixed data can degrade performance versus models fit to each constituent domain. We demonstrate that this problem can be circumvented, and propose three models that do so by jointly learning domain discrimination and translation. We demonstrate the efficacy of these techniques by merging pairs of domains in three languages: Chinese, French, and Japanese. After training on composite data, each approach outperforms its domain-specific counterparts, with a model based on a discriminator network doing so most reliably. We obtain consistent performance improvements and an average increase of 1.1 BLEU.", "title": "" }, { "docid": "448dc3c1c5207e606f1bd3b386f8bbde", "text": "Variational autoencoders (VAE) are a powerful and widely-used class of models to learn complex data distributions in an unsupervised fashion. One important limitation of VAEs is the prior assumption that latent sample representations are independent and identically distributed. However, for many important datasets, such as time-series of images, this assumption is too strong: accounting for covariances between samples, such as those in time, can yield to a more appropriate model specification and improve performance in downstream tasks. In this work, we introduce a new model, the Gaussian Process (GP) Prior Variational Autoencoder (GPPVAE), to specifically address this issue. The GPPVAE aims to combine the power of VAEs with the ability to model correlations afforded by GP priors. To achieve efficient inference in this new class of models, we leverage structure in the covariance matrix, and introduce a new stochastic backpropagation strategy that allows for computing stochastic gradients in a distributed and low-memory fashion. 
We show that our method outperforms conditional VAEs (CVAEs) and an adaptation of standard VAEs in two image data applications.", "title": "" }, { "docid": "b9b5c187df7a83392244d51b2b4f30a7", "text": "OBJECTIVE\nTo compare the prevalence of anxiety, depression, and stress in medical students from all semesters of a Brazilian medical school and assess their respective associated factors.\n\n\nMETHOD\nA cross-sectional study of students from the twelve semesters of a Brazilian medical school was carried out. Students filled out a questionnaire including sociodemographics, religiosity (DUREL - Duke Religion Index), and mental health (DASS-21 - Depression, Anxiety, and Stress Scale). The students were compared for mental health variables (Chi-squared/ANOVA). Linear regression models were employed to assess factors associated with DASS-21 scores.\n\n\nRESULTS\n761 (75.4%) students answered the questionnaire; 34.6% reported depressive symptomatology, 37.2% showed anxiety symptoms, and 47.1% stress symptoms. Significant differences were found for: anxiety - ANOVA: [F = 2.536, p=0.004] between first and tenth (p=0.048) and first and eleventh (p=0.025) semesters; depression - ANOVA: [F = 2.410, p=0.006] between first and second semesters (p=0.045); and stress - ANOVA: [F = 2.968, p=0.001] between seventh and twelfth (p=0.044), tenth and twelfth (p=0.011), and eleventh and twelfth (p=0.001) semesters. The following factors were associated with (a) stress: female gender, anxiety, and depression; (b) depression: female gender, intrinsic religiosity, anxiety, and stress; and (c) anxiety: course semester, depression, and stress.\n\n\nCONCLUSION\nOur findings revealed high levels of depression, anxiety, and stress symptoms in medical students, with marked differences among course semesters. Gender and religiosity appeared to influence the mental health of the medical students.", "title": "" }, { "docid": "2cd1edeccd5d8b2f8471864a938e7438", "text": "A large body of evidence supports the hypothesis that mesolimbic dopamine (DA) mediates, in animal models, the reinforcing effects of central nervous system stimulants such as cocaine and amphetamine. The role DA plays in mediating amphetamine-type subjective effects of stimulants in humans remains to be established. Both amphetamine and cocaine increase norepinephrine (NE) via stimulation of release and inhibition of reuptake, respectively. If increases in NE mediate amphetamine-type subjective effects of stimulants in humans, then one would predict that stimulant medications that produce amphetamine-type subjective effects in humans should share the ability to increase NE. To test this hypothesis, we determined, using in vitro methods, the neurochemical mechanism of action of amphetamine, 3,4-methylenedioxymethamphetamine (MDMA), (+)-methamphetamine, ephedrine, phentermine, and aminorex. As expected, their rank order of potency for DA release was similar to their rank order of potency in published self-administration studies. Interestingly, the results demonstrated that the most potent effect of these stimulants is to release NE. Importantly, the oral dose of these stimulants, which produce amphetamine-type subjective effects in humans, correlated with the their potency in releasing NE, not DA, and did not decrease plasma prolactin, an effect mediated by DA release. 
These results suggest that NE may contribute to the amphetamine-type subjective effects of stimulants in humans.", "title": "" }, { "docid": "ced0dfa1447b86cc5af2952012960511", "text": "OBJECTIVE\nThe pathophysiology of peptic ulcer disease (PUD) in liver cirrhosis (LC) and chronic hepatitis has not been established. The aim of this study was to assess the role of portal hypertension from PUD in patients with LC and chronic hepatitis.\n\n\nMATERIALS AND METHODS\nWe analyzed the medical records of 455 hepatic vein pressure gradient (HVPG) and esophagogastroduodenoscopy patients who had LC or chronic hepatitis in a single tertiary hospital. The association of PUD with LC and chronic hepatitis was assessed by univariate and multivariate analysis.\n\n\nRESULTS\nA total of 72 PUD cases were detected. PUD was associated with LC more than with chronic hepatitis (odds ratio [OR]: 4.13, p = 0.03). In the univariate analysis, taking an ulcerogenic medication was associated with PUD in patients with LC (OR: 4.34, p = 0.04) and smoking was associated with PUD in patients with chronic hepatitis (OR: 3.61, p = 0.04). In the multivariate analysis, taking an ulcerogenic medication was associated with PUD in patients with LC (OR: 2.93, p = 0.04). However, HVPG was not related to PUD in patients with LC or chronic hepatitis.\n\n\nCONCLUSION\nAccording to the present study, patients with LC have a higher risk of PUD than those with chronic hepatitis. The risk factor was taking ulcerogenic medication. However, HVPG reflecting portal hypertension was not associated with PUD in LC or chronic hepatitis (Clinicaltrial number NCT01944878).", "title": "" }, { "docid": "27c0c6c43012139fc3e4ee64ae043c0b", "text": "This paper presents a method for measuring signal backscattering from RFID tags, and for calculating a tag's radar cross section (RCS). We derive a theoretical formula for the RCS of an RFID tag with a minimum-scattering antenna. We describe an experimental measurement technique, which involves using a network analyzer connected to an anechoic chamber with and without the tag. The return loss measured in this way allows us to calculate the backscattered power and to find the tag's RCS. Measurements were performed using an RFID tag operating in the UHF band. To determine whether the tag was turned on, we used an RFID tag tester. The tag's RCS was also calculated theoretically, using electromagnetic simulation software. The theoretical results were found to be in good agreement with experimental data", "title": "" }, { "docid": "b37064e74a2c88507eacb9062996a911", "text": "This article builds a theoretical framework to help explain governance patterns in global value chains. It draws on three streams of literature – transaction costs economics, production networks, and technological capability and firm-level learning – to identify three variables that play a large role in determining how global value chains are governed and change. These are: (1) the complexity of transactions, (2) the ability to codify transactions, and (3) the capabilities in the supply-base. The theory generates five types of global value chain governance – hierarchy, captive, relational, modular, and market – which range from high to low levels of explicit coordination and power asymmetry. 
The article highlights the dynamic and overlapping nature of global value chain governance through four brief industry case studies: bicycles, apparel, horticulture and electronics.", "title": "" }, { "docid": "d77a9e08115ecda71a126819bb6012d4", "text": "Music, an abstract stimulus, can arouse feelings of euphoria and craving, similar to tangible rewards that involve the striatal dopaminergic system. Using the neurochemical specificity of [11C]raclopride positron emission tomography scanning, combined with psychophysiological measures of autonomic nervous system activity, we found endogenous dopamine release in the striatum at peak emotional arousal during music listening. To examine the time course of dopamine release, we used functional magnetic resonance imaging with the same stimuli and listeners, and found a functional dissociation: the caudate was more involved during the anticipation and the nucleus accumbens was more involved during the experience of peak emotional responses to music. These results indicate that intense pleasure in response to music can lead to dopamine release in the striatal system. Notably, the anticipation of an abstract reward can result in dopamine release in an anatomical pathway distinct from that associated with the peak pleasure itself. Our results help to explain why music is of such high value across all human societies.", "title": "" } ]
scidocsrr
ac92d4f51a9bdcca07e144e93ce6a31a
Coupled-Resonator Filters With Frequency-Dependent Couplings: Coupling Matrix Synthesis
[ { "docid": "359d3e06c221e262be268a7f5b326627", "text": "A method for the synthesis of multicoupled resonators filters with frequency-dependent couplings is presented. A circuit model of the filter that accurately represents the frequency responses over a very wide frequency band is postulated. The two-port parameters of the filter based on the circuit model are obtained by circuit analysis. The values of the circuit elements are synthesized by equating the two-port parameters obtained from the circuit analysis and the filtering characteristic function. Solutions similar to the narrowband case (where all the couplings are assumed frequency independent) are obtained analytically when all coupling elements are either inductive or capacitive. The synthesis technique is generalized to include all types of coupling elements. Several examples of wideband filters are given to demonstrate the synthesis techniques.", "title": "" } ]
[ { "docid": "4ebdfc3fe891f11902fb94973b6be582", "text": "This work introduces the CASCADE error correction protocol and LDPC (Low-Density Parity Check) error correction codes which are both parity check based. We also give the results of computer simulations that are performed for comparing their performances (redundant information, success).", "title": "" }, { "docid": "561b37c506657693d27fa65341faf51e", "text": "Currently, much of machine learning is opaque, just like a “black box”. However, in order for humans to understand, trust and effectively manage the emerging AI systems, an AI needs to be able to explain its decisions and conclusions. In this paper, I propose an argumentation-based approach to explainable AI, which has the potential to generate more comprehensive explanations than existing approaches.", "title": "" }, { "docid": "27312c44c3e453ad9e5f35a45b50329c", "text": "The immunologic processes involved in Graves' disease (GD) have one unique characteristic--the autoantibodies to the TSH receptor (TSHR)--which have both linear and conformational epitopes. Three types of TSHR antibodies (stimulating, blocking, and cleavage) with different functional capabilities have been described in GD patients, which induce different signaling effects varying from thyroid cell proliferation to thyroid cell death. The establishment of animal models of GD by TSHR antibody transfer or by immunization with TSHR antigen has confirmed its pathogenic role and, therefore, GD is the result of a breakdown in TSHR tolerance. Here we review some of the characteristics of TSHR antibodies with a special emphasis on new developments in our understanding of what were previously called \"neutral\" antibodies and which we now characterize as autoantibodies to the \"cleavage\" region of the TSHR ectodomain.", "title": "" }, { "docid": "936cebe86936c6aa49758636554a4dc7", "text": "A new kind of distributed power divider/combiner circuit for use in octave bandwidth (or more) microstrip power transistor amplifier is presented. The design, characteristics and advantages are discussed. Experimental results on a 4-way divider are presented and compared with theory.", "title": "" }, { "docid": "97578b3a8f5f34c96e7888f273d4494f", "text": "We analyze the use, advantages, and drawbacks of graph kernels in chemoin-formatics, including a comparison of kernel-based approaches with other methodology, as well as examples of applications. Kernel-based machine learning [1], now widely applied in chemoinformatics, delivers state-of-the-art performance [2] in tasks like classification and regression. Molecular graph kernels [3] are a recent development where kernels are defined directly on the molecular structure graph. This allows the adaptation of methods from graph theory to structure graphs and their direct use with kernel learning algorithms. The main advantage of kernel learning, the so-called “kernel trick”, allows for a systematic, computationally feasible, and often globally optimal search for non-linear patterns, as well as the direct use of non-numerical inputs such as strings and graphs. A drawback is that solutions are expressed indirectly in terms of similarity to training samples, and runtimes that are typically quadratic or cubic in the number of training samples. Graph kernels [3] are positive semidefinite functions defined directly on graphs. The most important types are based on random walks, subgraph patterns, optimal assignments, and graphlets. 
Molecular structure graphs have strong properties that can be exploited [4], e.g., they are undirected, have no self-loops and no multiple edges, are connected (except for salts), annotated, often planar in the graph-theoretic sense, and their vertex degree is bounded by a small constant. In many applications, they are small. Many graph kernels are generalpurpose, some are suitable for structure graphs, and a few have been explicitly designed for them. We present three exemplary applications of the iterative similarity optimal assignment kernel [5], which was designed for the comparison of small structure graphs: The discovery of novel agonists of the peroxisome proliferator-activated receptor g [6] (ligand-based virtual screening), the estimation of acid dissociation constants [7] (quantitative structure-property relationships), and molecular de novo design [8].", "title": "" }, { "docid": "ba4ffbb6c3dc865f803cbe31b52919c5", "text": "This investigation is one in a series of studies that address the possibility of stroke rehabilitation using robotic devices to facilitate “adaptive training.” Healthy subjects, after training in the presence of systematically applied forces, typically exhibit a predictable “after-effect.” A critical question is whether this adaptive characteristic is preserved following stroke so that it might be exploited for restoring function. Another important question is whether subjects benefit more from training forces that enhance their errors than from forces that reduce their errors. We exposed hemiparetic stroke survivors and healthy age-matched controls to a pattern of disturbing forces that have been found by previous studies to induce a dramatic adaptation in healthy individuals. Eighteen stroke survivors made 834 movements in the presence of a robot-generated force field that pushed their hands proportional to its speed and perpendicular to its direction of motion — either clockwise or counterclockwise. We found that subjects could adapt, as evidenced by significant after-effects. After-effects were not correlated with the clinical scores that we used for measuring motor impairment. Further examination revealed that significant improvements occurred only when the training forces magnified the original errors, and not when the training forces reduced the errors or were zero. Within this constrained experimental task we found that error-enhancing therapy (as opposed to guiding the limb closer to the correct path) to be more effective than therapy that assisted the subject.", "title": "" }, { "docid": "258d0290b2cc7d083800d51dfa525157", "text": "In recent years, study of influence propagation in social networks has gained tremendous attention. In this context, we can identify three orthogonal dimensions—the number of seed nodes activated at the beginning (known as budget), the expected number of activated nodes at the end of the propagation (known as expected spread or coverage), and the time taken for the propagation. We can constrain one or two of these and try to optimize the third. In their seminal paper, Kempe et al. constrained the budget, left time unconstrained, and maximized the coverage: this problem is known as Influence Maximization (or MAXINF for short). In this paper, we study alternative optimization problems which are naturally motivated by resource and time constraints on viral marketing campaigns. 
In the first problem, termed minimum target set selection (or MINTSS for short), a coverage threshold η is given and the task is to find the minimum size seed set such that by activating it, at least η nodes are eventually activated in the expected sense. This naturally captures the problem of deploying a viral campaign on a budget. In the second problem, termed MINTIME, the goal is to minimize the time in which a predefined coverage is achieved. More precisely, in MINTIME, a coverage threshold η and a budget threshold k are given, and the task is to find a seed set of size at most k such that by activating it, at least η nodes are activated in the expected sense, in the minimum possible time. This problem addresses the issue of timing when deploying viral campaigns. Both these problems are NP-hard, which motivates our interest in their approximation. For MINTSS, we develop a simple greedy algorithm and show that it provides a bicriteria approximation. We also establish a generic hardness result suggesting that improving this bicriteria approximation is likely to be hard. For MINTIME, we show that even bicriteria and tricriteria approximations are hard under several conditions. We show, however, that if we allow the budget for number of seeds k to be boosted by a logarithmic factor and allow the coverage to fall short, then the problem can be solved exactly in PTIME, i.e., we can achieve the required coverage within the time achieved by the optimal solution to MINTIME with budget k and coverage threshold η. Finally, we establish the value of the approximation algorithms, by conducting an experimental evaluation, comparing their quality against that achieved by various heuristics.", "title": "" }, { "docid": "3a2ae63e5b8a9132e30a24373d9262e1", "text": "Nine projective linear measurements were taken to determine morphometric differences of the face among healthy young adult Chinese, Vietnamese, and Thais (60 in each group) and to assess the validity of six neoclassical facial canons in these populations. In addition, the findings in the Asian ethnic groups were compared to the data of 60 North American Caucasians. The canons served as criteria for determining the differences between the Asians and Caucasians. In neither Asian nor Caucasian subjects were the three sections of the facial profile equal. The validity of the five other facial canons was more frequent in Caucasians (range: 16.7–36.7%) than in Asians (range: 1.7–26.7%). Horizontal measurement results were significantly greater in the faces of the Asians (en–en, al–al, zy–zy) than in their white counterparts; as a result, the variation between the classical proportions and the actual measurements was significantly higher among Asians (range: 90–100%) than Caucasians (range: 13.3–48%). The dominant characteristics of the Asian face were a wider intercanthal distance in relation to a shorter palpebral fissure, a much wider soft nose within wide facial contours, a smaller mouth width, and a lower face smaller than the forehead height. In the absence of valid anthropometric norms of craniofacial measurements and proportion indices, our results, based on quantitative analysis of the main vertical and horizontal measurements of the face, offers surgeons guidance in judging the faces of Asian patients in preparation for corrective surgery.", "title": "" }, { "docid": "66fb14019184326107647df9771046f6", "text": "Word embeddings are well known to capture linguistic regularities of the language on which they are trained. 
Researchers also observe that these regularities can transfer across languages. However, previous endeavors to connect separate monolingual word embeddings typically require cross-lingual signals as supervision, either in the form of parallel corpus or seed lexicon. In this work, we show that such cross-lingual connection can actually be established without any form of supervision. We achieve this end by formulating the problem as a natural adversarial game, and investigating techniques that are crucial to successful training. We carry out evaluation on the unsupervised bilingual lexicon induction task. Even though this task appears intrinsically cross-lingual, we are able to demonstrate encouraging performance without any cross-lingual clues.", "title": "" }, { "docid": "c26eabb377db5f1033ec6d354d890a6f", "text": "Recurrent neural networks have recently shown significant potential in different language applications, ranging from natural language processing to language modelling. This paper introduces a research effort to use such networks to develop and evaluate natural language acquisition on a humanoid robot. Here, the problem is twofold. First, the focus will be put on using the gesture-word combination stage observed in infants to transition from single to multi-word utterances. Secondly, research will be carried out in the domain of connecting action learning with language learning. In the former, the long-short term memory architecture will be implemented, whilst in the latter multiple time-scale recurrent neural networks will be used. This will allow for comparison between the two architectures, whilst highlighting the strengths and shortcomings of both with respect to the language learning problem. Here, the main research efforts, challenges and expected outcomes are described.", "title": "" }, { "docid": "ec0da5cea716d1270b2143ffb6c610d6", "text": "This study focuses on the development of a web-based Attendance Register System or formerly known as ARS. The development of this system is motivated due to the fact that the students’ attendance records are one of the important elements that reflect their academic achievements in the higher academic institutions. However, the current practice implemented in most of the higher academic institutions in Malaysia is becoming more prone to human errors and frauds. Assisted by the System Development Life Cycle (SDLC) methodology, the ARS has been built using the web-based applications such as PHP, MySQL and Apache to cater the recording and reporting of the students’ attendances. The development of this prototype system is inspired by the feasibility study done in Universiti Teknologi MARA, Malaysia where 550 respondents have taken part in answering the questionnaires. From the analysis done, it has revealed that a more systematic and revolutionary system is indeed needed to be reinforced in order to improve the process of recording and reporting the attendances in the higher academic institution. ARS can be easily accessed by the lecturers via the Web and most importantly, the reports can be generated in realtime processing, thus, providing invaluable information about the students’ commitments in attending the classes. 
This paper will discuss in details the development of ARS from the feasibility study until the design phase.", "title": "" }, { "docid": "ee72a297c05a438a49e86a45b81db17f", "text": "Screening for cyclodextrin glycosyltransferase (CGTase)-producing alkaliphilic bacteria from samples collected from hyper saline soda lakes (Wadi Natrun Valley, Egypt), resulted in isolation of potent CGTase producing alkaliphilic bacterium, termed NPST-10. 16S rDNA sequence analysis identified the isolate as Amphibacillus sp. CGTase was purified to homogeneity up to 22.1 fold by starch adsorption and anion exchange chromatography with a yield of 44.7%. The purified enzyme was a monomeric protein with an estimated molecular weight of 92 kDa using SDS-PAGE. Catalytic activities of the enzyme were found to be 88.8 U mg(-1) protein, 20.0 U mg(-1) protein and 11.0 U mg(-1) protein for cyclization, coupling and hydrolytic activities, respectively. The enzyme was stable over a wide pH range from pH 5.0 to 11.0, with a maximal activity at pH 8.0. CGTase exhibited activity over a wide temperature range from 45 °C to 70 °C, with maximal activity at 50 °C and was stable at 30 °C to 55 °C for at least 1 h. Thermal stability of the purified enzyme could be significantly improved in the presence of CaCl(2). K(m) and V(max) values were estimated using soluble starch as a substrate to be 1.7 ± 0.15 mg/mL and 100 ± 2.0 μmol/min, respectively. CGTase was significantly inhibited in the presence of Co(2+), Zn(2+), Cu(2+), Hg(2+), Ba(2+), Cd(2+), and 2-mercaptoethanol. To the best of our knowledge, this is the first report of CGTase production by Amphibacillus sp. The achieved high conversion of insoluble raw corn starch into cyclodextrins (67.2%) with production of mainly β-CD (86.4%), makes Amphibacillus sp. NPST-10 desirable for the cyclodextrin production industry.", "title": "" }, { "docid": "29c32c8c447b498f43ec215633305923", "text": "A growing body of evidence suggests that empathy for pain is underpinned by neural structures that are also involved in the direct experience of pain. In order to assess the consistency of this finding, an image-based meta-analysis of nine independent functional magnetic resonance imaging (fMRI) investigations and a coordinate-based meta-analysis of 32 studies that had investigated empathy for pain using fMRI were conducted. The results indicate that a core network consisting of bilateral anterior insular cortex and medial/anterior cingulate cortex is associated with empathy for pain. Activation in these areas overlaps with activation during directly experienced pain, and we link their involvement to representing global feeling states and the guidance of adaptive behavior for both self- and other-related experiences. Moreover, the image-based analysis demonstrates that depending on the type of experimental paradigm this core network was co-activated with distinct brain regions: While viewing pictures of body parts in painful situations recruited areas underpinning action understanding (inferior parietal/ventral premotor cortices) to a stronger extent, eliciting empathy by means of abstract visual information about the other's affective state more strongly engaged areas associated with inferring and representing mental states of self and other (precuneus, ventral medial prefrontal cortex, superior temporal cortex, and temporo-parietal junction). 
In addition, only the picture-based paradigms activated somatosensory areas, indicating that previous discrepancies concerning somatosensory activity during empathy for pain might have resulted from differences in experimental paradigms. We conclude that social neuroscience paradigms provide reliable and accurate insights into complex social phenomena such as empathy and that meta-analyses of previous studies are a valuable tool in this endeavor.", "title": "" }, { "docid": "2d644e4146358131d43fbe25ba725c74", "text": "Neural interface technology has made enormous strides in recent years but stimulating electrodes remain incapable of reliably targeting specific cell types (e.g. excitatory or inhibitory neurons) within neural tissue. This obstacle has major scientific and clinical implications. For example, there is intense debate among physicians, neuroengineers and neuroscientists regarding the relevant cell types recruited during deep brain stimulation (DBS); moreover, many debilitating side effects of DBS likely result from lack of cell-type specificity. We describe here a novel optical neural interface technology that will allow neuroengineers to optically address specific cell types in vivo with millisecond temporal precision. Channelrhodopsin-2 (ChR2), an algal light-activated ion channel we developed for use in mammals, can give rise to safe, light-driven stimulation of CNS neurons on a timescale of milliseconds. Because ChR2 is genetically targetable, specific populations of neurons even sparsely embedded within intact circuitry can be stimulated with high temporal precision. Here we report the first in vivo behavioral demonstration of a functional optical neural interface (ONI) in intact animals, involving integrated fiberoptic and optogenetic technology. We developed a solid-state laser diode system that can be pulsed with millisecond precision, outputs 20 mW of power at 473 nm, and is coupled to a lightweight, flexible multimode optical fiber, approximately 200 microm in diameter. To capitalize on the unique advantages of this system, we specifically targeted ChR2 to excitatory cells in vivo with the CaMKIIalpha promoter. Under these conditions, the intensity of light exiting the fiber ( approximately 380 mW mm(-2)) was sufficient to drive excitatory neurons in vivo and control motor cortex function with behavioral output in intact rodents. No exogenous chemical cofactor was needed at any point, a crucial finding for in vivo work in large mammals. Achieving modulation of behavior with optical control of neuronal subtypes may give rise to fundamental network-level insights complementary to what electrode methodologies have taught us, and the emerging optogenetic toolkit may find application across a broad range of neuroscience, neuroengineering and clinical questions.", "title": "" }, { "docid": "d7b77fae980b3bc26ffb4917d6d093c1", "text": "This work presents a combination of a teach-and-replay visual navigation and Monte Carlo localization methods. It improves a reliable teach-and-replay navigation method by replacing its dependency on precise dead-reckoning by introducing Monte Carlo localization to determine robot position along the learned path. In consequence, the navigation method becomes robust to dead-reckoning errors, can be started from at any point in the map and can deal with the ‘kidnapped robot’ problem. Furthermore, the robot is localized with MCL only along the taught path, i.e. 
in one dimension, which does not require a high number of particles and significantly reduces the computational cost. Thus, the combination of MCL and teach-and-replay navigation mitigates the disadvantages of both methods. The method was tested using a P3-AT ground robot and a Parrot AR.Drone aerial robot over a long indoor corridor. Experiments show the validity of the approach and establish a solid base for continuing this work.", "title": "" }, { "docid": "8216a6da70affe452ec3c5998e3c77ba", "text": "In this paper, the performance of a rectangular microstrip patch antenna fed by microstrip line is designed to operate for ultra-wide band applications. It consists of a rectangular patch with U-shaped slot on one side of the substrate and a finite ground plane on the other side. The U-shaped slot and the finite ground plane are used to achieve an excellent impedance matching to increase the bandwidth. The proposed antenna is designed and optimized based on extensive 3D EM simulation studies. The proposed antenna is designed to operate over a frequency range from 3.6 to 15 GHz.", "title": "" }, { "docid": "e68da0df82ade1ef0ff2e0b26da4cb4e", "text": "What service-quality attributes must Internet banks offer to induce consumers to switch to online transactions and keep using them?", "title": "" }, { "docid": "0612781063f878c3b85321fd89026426", "text": "A lot of research has been done on multiple-valued logic (MVL) such as ternary logic in these years. MVL reduces the number of necessary operations and also decreases the chip area that would be used. Carbon nanotube field effect transistors (CNTFETs) are considered a viable alternative for silicon transistors (MOSFETs). Combining carbon nanotube transistors and MVL can produce a unique design that is faster and more flexible. In this paper, we design a new half adder and a new multiplier by nanotechnology using a ternary logic, which decreases the power consumption and chip surface and raises the speed. The presented design is simulated using CNTFET of Stanford University and HSPICE software, and the results are compared with those of other studies.", "title": "" }, { "docid": "0cccb226bb72be281ead8c614bd46293", "text": "We introduce a model for incorporating contextual information (such as geography) in learning vector-space representations of situated language. In contrast to approaches to multimodal representation learning that have used properties of the object being described (such as its color), our model includes information about the subject (i.e., the speaker), allowing us to learn the contours of a word’s meaning that are shaped by the context in which it is uttered. In a quantitative evaluation on the task of judging geographically informed semantic similarity between representations learned from 1.1 billion words of geo-located tweets, our joint model outperforms comparable independent models that learn meaning in isolation.", "title": "" }, { "docid": "c19b9828de0416b17d0e24b66c7cb0a5", "text": "Process monitoring using indirect methods leverages on the usage of sensors. Using sensors to acquire vital process related information also presents itself with the problem of big data management and analysis. Due to uncertainty in the frequency of events occurring, a higher sampling rate is often used in real-time monitoring applications to increase the chances of capturing and understanding all possible events related to the process. 
Advanced signal processing methods help to further decipher meaningful information from the acquired data. In this research work, the power spectral density (PSD) of sensor data acquired at sampling rates between 40 kHz and 51.2 kHz was calculated, and the correlation between PSD and the completed number of cycles/passes is presented. Here, the progress in the number of cycles/passes is the event this research work intends to classify, and the algorithm used to compute the PSD is Welch's estimate method. A comparison between Welch's estimate method and statistical methods is also discussed. A clear correlation was observed using Welch's estimate to classify the number of cycles/passes.", "title": "" } ]
scidocsrr
bd445f10eb1f0fc811869f66ed27b6d4
Pke: an Open Source Python-based Keyphrase Extraction Toolkit
[ { "docid": "3a37bf4ffad533746d2335f2c442a6d6", "text": "Keyphrase extraction is the task of identifying single or multi-word expressions that represent the main topics of a document. In this paper we present TopicRank, a graph-based keyphrase extraction method that relies on a topical representation of the document. Candidate keyphrases are clustered into topics and used as vertices in a complete graph. A graph-based ranking model is applied to assign a significance score to each topic. Keyphrases are then generated by selecting a candidate from each of the topranked topics. We conducted experiments on four evaluation datasets of different languages and domains. Results show that TopicRank significantly outperforms state-of-the-art methods on three datasets.", "title": "" } ]
[ { "docid": "90dfa19b821aeab985a96eba0c3037d3", "text": "Carcass mass and carcass clothing are factors of potential high forensic importance. In casework, corpses differ in mass and kind or extent of clothing; hence, a question arises whether methods for post-mortem interval estimation should take these differences into account. Unfortunately, effects of carcass mass and clothing on specific processes in decomposition and related entomological phenomena are unclear. In this article, simultaneous effects of these factors are analysed. The experiment followed a complete factorial block design with four levels of carcass mass (small carcasses 5–15 kg, medium carcasses 15.1–30 kg, medium/large carcasses 35–50 kg, large carcasses 55–70 kg) and two levels of carcass clothing (clothed and unclothed). Pig carcasses (N = 24) were grouped into three blocks, which were separated in time. Generally, carcass mass revealed significant and frequently large effects in almost all analyses, whereas carcass clothing had only minor influence on some phenomena related to the advanced decay. Carcass mass differently affected particular gross processes in decomposition. Putrefaction was more efficient in larger carcasses, which manifested itself through earlier onset and longer duration of bloating. On the other hand, active decay was less efficient in these carcasses, with relatively low average rate, resulting in slower mass loss and later onset of advanced decay. The average rate of active decay showed a significant, logarithmic increase with an increase in carcass mass, but only in these carcasses on which active decay was driven solely by larval blowflies. If a blowfly-driven active decay was followed by active decay driven by larval Necrodes littoralis (Coleoptera: Silphidae), which was regularly found in medium/large and large carcasses, the average rate showed only a slight and insignificant increase with an increase in carcass mass. These results indicate that lower efficiency of active decay in larger carcasses is a consequence of a multi-guild and competition-related pattern of this process. Pattern of mass loss in large and medium/large carcasses was not sigmoidal, but rather exponential. The overall rate of decomposition was strongly, but not linearly, related to carcass mass. In a range of low mass decomposition rate increased with an increase in mass, then at about 30 kg, there was a distinct decrease in rate, and again at about 50 kg, the rate slightly increased. Until about 100 accumulated degree-days larger carcasses gained higher total body scores than smaller carcasses. Afterwards, the pattern was reversed; moreover, differences between classes of carcasses enlarged with the progress of decomposition. In conclusion, current results demonstrate that cadaver mass is a factor of key importance for decomposition, and as such, it should be taken into account by decomposition-related methods for post-mortem interval estimation.", "title": "" }, { "docid": "49a041e18a063876dc595f33fe8239a8", "text": "Significant vulnerabilities have recently been identified in collaborative filtering recommender systems. These vulnerabilities mostly emanate from the open nature of such systems and their reliance on userspecified judgments for building profiles. Attackers can easily introduce biased data in an attempt to force the system to “adapt” in a manner advantageous to them. 
Our research in secure personalization is examining a range of attack models, from the simple to the complex, and a variety of recommendation techniques. In this chapter, we explore an attack model that focuses on a subset of users with similar tastes and show that such an attack can be highly successful against both user-based and item-based collaborative filtering. We also introduce a detection model that can significantly decrease the impact of this attack.", "title": "" }, { "docid": "c56c71775a0c87f7bb6c59d6607e5280", "text": "A correlational study examined relationships between motivational orientation, self-regulated learning, and classroom academic performance for 173 seventh graders from eight science and seven English classes. A self-report measure of student self-efficacy, intrinsic value, test anxiety, self-regulation, and use of learning strategies was administered, and performance data were obtained from work on classroom assignments. Self-efficacy and intrinsic value were positively related to cognitive engagement and performance. Regression analyses revealed that, depending on the outcome measure, self-regulation, self-efficacy, and test anxiety emerged as the best predictors of performance. Intrinsic value did not have a direct influence on performance but was strongly related to self-regulation and cognitive strategy use, regardless of prior achievement level. The implications of individual differences in motivational orientation for cognitive engagement and self-regulation in the classroom are discussed.", "title": "" }, { "docid": "f4d060cd114ffa2c028dada876fcb735", "text": "Mutations of SALL1 related to spalt of Drosophila have been found to cause Townes-Brocks syndrome, suggesting a function of SALL1 for the development of anus, limbs, ears, and kidneys. No function is yet known for SALL2, another human spalt-like gene. The structure of SALL2 is different from SALL1 and all other vertebrate spalt-like genes described in mouse, Xenopus, and Medaka, suggesting that SALL2-like genes might also exist in other vertebrates. Consistent with this hypothesis, we isolated and characterized a SALL2 homologous mouse gene, Msal-2. In contrast to other vertebrate spalt-like genes both SALL2 and Msal-2 encode only three double zinc finger domains, the most carboxyterminal of which only distantly resembles spalt-like zinc fingers. The evolutionary conservation of SALL2/Msal-2 suggests that two lines of sal-like genes with presumably different functions arose from an early evolutionary duplication of a common ancestor gene. Msal-2 is expressed throughout embryonic development but also in adult tissues, predominantly in brain. However, the function of SALL2/Msal-2 still needs to be determined.", "title": "" }, { "docid": "fc50b185323c45e3d562d24835e99803", "text": "The neuropeptide calcitonin gene-related peptide (CGRP) is implicated in the underlying pathology of migraine by promoting the development of a sensitized state of primary and secondary nociceptive neurons. The ability of CGRP to initiate and maintain peripheral and central sensitization is mediated by modulation of neuronal, glial, and immune cells in the trigeminal nociceptive signaling pathway. There is accumulating evidence to support a key role of CGRP in promoting cross excitation within the trigeminal ganglion that may help to explain the high co-morbidity of migraine with rhinosinusitis and temporomandibular joint disorder. 
In addition, there is emerging evidence that CGRP facilitates and sustains a hyperresponsive neuronal state in migraineurs mediated by reported risk factors such as stress and anxiety. In this review, the significant role of CGRP as a modulator of the trigeminal system will be discussed to provide a better understanding of the underlying pathology associated with the migraine phenotype.", "title": "" }, { "docid": "97f0e7c134d2852d0bfcfec63fb060d7", "text": "Action selection is a fundamental decision process for us, and depends on the state of both our body and the environment. Because signals in our sensory and motor systems are corrupted by variability or noise, the nervous system needs to estimate these states. To select an optimal action these state estimates need to be combined with knowledge of the potential costs or rewards of different action outcomes. We review recent studies that have investigated the mechanisms used by the nervous system to solve such estimation and decision problems, which show that human behaviour is close to that predicted by Bayesian Decision Theory. This theory defines optimal behaviour in a world characterized by uncertainty, and provides a coherent way of describing sensorimotor processes.", "title": "" }, { "docid": "f10ac6d718b07a22b798ef236454b806", "text": "The capability to operate cloud-native applications can generate enormous business growth and value. But enterprise architects should be aware that cloud-native applications are vulnerable to vendor lock-in. We investigated cloud-native application design principles, public cloud service providers, and industrial cloud standards. All results indicate that most cloud service categories seem to foster vendor lock-in situations which might be especially problematic for enterprise architectures. This might sound disillusioning at first. However, we present a reference model for cloud-native applications that relies only on a small subset of well standardized IaaS services. The reference model can be used for codifying cloud technologies. It can guide technology identification, classification, adoption, research and development processes for cloud-native application and for vendor lock-in aware enterprise architecture engineering methodologies.", "title": "" }, { "docid": "3b113b9b299987677daa2bebc7e7bf03", "text": "The restoration of endodontic tooth is always a challenge for the clinician, not only due to excessive loss of tooth structure but also invasion of the biological width due to large decayed lesions. In this paper, the 7 most common clinical scenarios in molars with class II lesions ever deeper were examined. This includes both the type of restoration (direct or indirect) and the management of the cavity margin, such as the need for deep margin elevation (DME) or crown lengthening. It is necessary to have the DME when the healthy tooth remnant is in the sulcus or at the epithelium level. For caries that reaches the connective tissue or the bone crest, crown lengthening is required. Endocrowns are a good treatment option in the endodontically treated tooth when the loss of structure is advanced.", "title": "" }, { "docid": "dbc468368059e6b676c8ece22b040328", "text": "In medical diagnoses and treatments, e.g., endoscopy, dosage transition monitoring, it is often desirable to wirelessly track an object that moves through the human GI tract. In this paper, we propose a magnetic localization and orientation system for such applications. 
This system uses a small magnet enclosed in the object to serve as excitation source, so it does not require the connection wire and power supply for the excitation signal. When the magnet moves, it establishes a static magnetic field around, whose intensity is related to the magnet's position and orientation. With the magnetic sensors, the magnetic intensities in some predetermined spatial positions can be detected, and the magnet's position and orientation parameters can be computed based on an appropriate algorithm. Here, we propose a real-time tracking system developed by a cubic magnetic sensor array made of Honeywell 3-axis magnetic sensors, HMC1043. Using some efficient software modules and calibration methods, the system can achieve satisfactory tracking accuracy if the cubic sensor array has enough number of 3-axis magnetic sensors. The experimental results show that the average localization error is 1.8 mm.", "title": "" }, { "docid": "ffbebb5d8f4d269353f95596c156ba5c", "text": "Decision trees and random forests are common classifiers with widespread use. In this paper, we develop two protocols for privately evaluating decision trees and random forests. We operate in the standard two-party setting where the server holds a model (either a tree or a forest), and the client holds an input (a feature vector). At the conclusion of the protocol, the client learns only the model’s output on its input and a few generic parameters concerning the model; the server learns nothing. The first protocol we develop provides security against semi-honest adversaries. Next, we show an extension of the semi-honest protocol that obtains one-sided security against malicious adversaries. We implement both protocols and show that both variants are able to process trees with several hundred decision nodes in just a few seconds and a modest amount of bandwidth. Compared to previous semi-honest protocols for private decision tree evaluation, we demonstrate tenfold improvements in computation and bandwidth.", "title": "" }, { "docid": "440a6b8b41a98e392ec13a5e13d7e7ba", "text": "A classical heuristic in software testing is to reward diversity, which implies that a higher priority must be assigned to test cases that differ the most from those already prioritized. This approach is commonly known as similarity-based test prioritization (SBTP) and can be realized using a variety of techniques. The objective of our study is to investigate whether SBTP is more effective at finding defects than random permutation, as well as determine which SBTP implementations lead to better results. To achieve our objective, we implemented five different techniques from the literature and conducted an experiment using the defects4j dataset, which contains 395 real faults from six real-world open-source Java programs. Findings indicate that running the most dissimilar test cases early in the process is largely more effective than random permutation (Vargha–Delaney A [VDA]: 0.76–0.99 observed using normalized compression distance). No technique was found to be superior with respect to the effectiveness. Locality-sensitive hashing was, to a small extent, less effective than other SBTP techniques (VDA: 0.38 observed in comparison to normalized compression distance), but its speed largely outperformed the other techniques (i.e., it was approximately 5–111 times faster). Our results bring to mind the well-known adage, “don’t put all your eggs in one basket”. 
To effectively consume a limited testing budget, one should spread it evenly across different parts of the system by running the most dissimilar test cases early in the testing process.", "title": "" }, { "docid": "64ec8a9073308280740c96fb0c8b4617", "text": "Lifting is a common manual material handling task performed in the workplaces. It is considered as one of the main risk factors for Work-related Musculoskeletal Disorders. To improve work place safety, it is necessary to assess musculoskeletal and biomechanical risk exposures associated with these tasks, which requires very accurate 3D pose. Existing approaches mainly utilize marker-based sensors to collect 3D information. However, these methods are usually expensive to setup, timeconsuming in process, and sensitive to the surrounding environment. In this study, we propose a multi-view based deep perceptron approach to address aforementioned limitations. Our approach consists of two modules: a \"view-specific perceptron\" network extracts rich information independently from the image of view, which includes both 2D shape and hierarchical texture information; while a \"multi-view integration\" network synthesizes information from all available views to predict accurate 3D pose. To fully evaluate our approach, we carried out comprehensive experiments to compare different variants of our design. The results prove that our approach achieves comparable performance with former marker-based methods, i.e. an average error of 14:72 ± 2:96 mm on the lifting dataset. The results are also compared with state-of-the-art methods on HumanEva- I dataset [1], which demonstrates the superior performance of our approach.", "title": "" }, { "docid": "7bf0b158d9fa4e62b38b6757887c13ed", "text": "Examinations are the most crucial section of any educational system. They are intended to measure student's knowledge, skills and aptitude. At any institute, a great deal of manual effort is required to plan and arrange examination. It includes making seating arrangement for students as well as supervision duty chart for invigilators. Many institutes performs this task manually using excel sheets. This results in excessive wastage of time and manpower. Automating the entire system can help solve the stated problem efficiently saving a lot of time. This paper presents the automatic exam seating allocation. It works in two modules First as, Students Seating Arrangement (SSA) and second as, Supervision Duties Allocation (SDA). It assigns the classrooms and the duties to the teachers in any institution. An input-output data is obtained from the real system which is found out manually by the organizers who set up the seating arrangement and chalk out the supervision duties. The results obtained using the real system and these two models are compared. The application shows that the modules are highly efficient, low-cost, and can be widely used in various colleges and universities.", "title": "" }, { "docid": "78ee892fada4ec9ff860072d0d0ecbe3", "text": "The popularity of FPGAs is rapidly growing due to the unique advantages that they offer. However, their distinctive features also raise new questions concerning the security and communication capabilities of an FPGA-based hardware platform. In this paper, we explore the some of the limits of FPGA side-channel communication. 
Specifically, we identify a previously unexplored capability that significantly increases both the potential benefits and risks associated with side-channel communication on an FPGA: an in-device receiver. We designed and implemented three new communication mechanisms: speed modulation, timing modulation and pin hijacking. These non-traditional interfacing techniques have the potential to provide reliable communication with an estimated maximum bandwidth of 3.3 bit/sec, 8 Kbits/sec, and 3.4 Mbits/sec, respectively.", "title": "" }, { "docid": "2bb936db4a73e009a86e2bff45f88313", "text": "Chimeric antigen receptors (CARs) have been used to redirect the specificity of autologous T cells against leukemia and lymphoma with promising clinical results. Extending this approach to allogeneic T cells is problematic as they carry a significant risk of graft-versus-host disease (GVHD). Natural killer (NK) cells are highly cytotoxic effectors, killing their targets in a non-antigen-specific manner without causing GVHD. Cord blood (CB) offers an attractive, allogeneic, off-the-self source of NK cells for immunotherapy. We transduced CB-derived NK cells with a retroviral vector incorporating the genes for CAR-CD19, IL-15 and inducible caspase-9-based suicide gene (iC9), and demonstrated efficient killing of CD19-expressing cell lines and primary leukemia cells in vitro, with marked prolongation of survival in a xenograft Raji lymphoma murine model. Interleukin-15 (IL-15) production by the transduced CB-NK cells critically improved their function. Moreover, iC9/CAR.19/IL-15 CB-NK cells were readily eliminated upon pharmacologic activation of the iC9 suicide gene. In conclusion, we have developed a novel approach to immunotherapy using engineered CB-derived NK cells, which are easy to produce, exhibit striking efficacy and incorporate safety measures to limit toxicity. This approach should greatly improve the logistics of delivering this therapy to large numbers of patients, a major limitation to current CAR-T-cell therapies.", "title": "" }, { "docid": "dc7474e5e82f06eb1feb7c579fd713a7", "text": "OBJECTIVE\nTo determine the current values and estimate the projected values (to the year 2041) for annual number of proximal femoral fractures (PFFs), age-adjusted rates of fracture, rates of death in the acute care setting, associated length of stay (LOS) in hospital, and seasonal variation by sex and age in elderly Canadians.\n\n\nDESIGN\nHospital discharge data for fiscal year 1993-94 from the Canadian Institute for Health Information were used to determine PFF incidence, and Statistics Canada population projections were used to estimate the rate and number of PFFs to 2041.\n\n\nSETTING\nCanada.\n\n\nPARTICIPANTS\nCanadian patients 65 years of age or older who underwent hip arthroplasty.\n\n\nOUTCOME MEASURES\nPFF rates, death rates and LOS by age, sex and province.\n\n\nRESULTS\nIn 1993-94 the incidence of PFF increased exponentially with increasing age. The age-adjusted rates were 479 per 100,000 for women and 187 per 100,000 for men. The number of PFFs was estimated at 23,375 (17,823 in women and 5552 in men), with a projected increase to 88,124 in 2041. The rate of death during the acute care stay increased exponentially with increasing age. The death rates for men were twice those for women. In 1993-94 an estimated 1570 deaths occurred in the acute care setting, and 7000 deaths were projected for 2041. 
LOS in the acute care setting increased with advancing age, as did variability in LOS, which suggests a more heterogeneous case mix with advancing age. The LOS for 1993-94 and 2041 was estimated at 465,000 and 1.8 million patient-days, respectively. Seasonal variability in the incidence of PFFs by sex was not significant. Significant season-province interactions were seen (p < 0.05); however, the differences in incidence were small (on the order of 2% to 3%) and were not considered to have a large effect on resource use in the acute care setting.\n\n\nCONCLUSIONS\nOn the assumption that current conditions contributing to hip fractures will remain constant, the number of PFFs will rise exponentially over the next 40 years. The results of this study highlight the serious implications for Canadians if incidence rates are not reduced by some form of intervention.", "title": "" }, { "docid": "23c00b95cbdc39bc040ea6c3e3e128d8", "text": "Network Security is one of the important concepts in data security, as the data to be uploaded should be made secure. To make data secure, there exist a number of algorithms like AES (Advanced Encryption Standard), IDEA (International Data Encryption Algorithm), etc. These techniques of making the data secure come under Cryptography. Involving the Internet of Things (IoT) in Cryptography is an emerging domain. IoT can be defined as controlling things located in any part of the world via the Internet. So, IoT involves data security, i.e., Cryptography. In this paper, we discuss how data can be made secure for IoT using Cryptography.", "title": "" }, { "docid": "35f2b171f4e8fbb469ef7198d8e2116e", "text": "Recent advances in computer vision technologies have made possible the development of intelligent monitoring systems for video surveillance and ambient-assisted living. By using this technology, these systems are able to automatically interpret visual data from the environment and perform tasks that would have been unthinkable years ago. These achievements represent a radical improvement, but they also pose a new threat to individuals' privacy. The new capabilities of such systems give them the ability to collect and index a huge amount of private information about each individual. Next-generation systems have to solve this issue in order to obtain the users' acceptance. Therefore, there is a need for mechanisms or tools to protect and preserve people's privacy. This paper seeks to clarify how privacy can be protected in imagery data, so, as a main contribution, a comprehensive classification of the protection methods for visual privacy as well as an up-to-date review of them are provided. A survey of the existing privacy-aware intelligent monitoring systems and a valuable discussion of important aspects of visual privacy are also provided.", "title": "" }, { "docid": "c50230c77645234564ab51a11fcf49d1", "text": "We present an image set classification algorithm based on unsupervised clustering of labeled training and unlabeled test data where labels are only used in the stopping criterion. The probability distribution of each class over the set of clusters is used to define a true set-based similarity measure. To this end, we propose an iterative sparse spectral clustering algorithm. In each iteration, a proximity matrix is efficiently recomputed to better represent the local subspace structure. Initial clusters capture the global data structure and finer clusters at the later stages capture the subtle class differences not visible at the global scale.
Image sets are compactly represented with multiple Grassmannian manifolds which are subsequently embedded in Euclidean space with the proposed spectral clustering algorithm. We also propose an efficient eigenvector solver which not only reduces the computational cost of spectral clustering by many folds but also improves the clustering quality and final classification results. Experiments on five standard datasets and comparison with seven existing techniques show the efficacy of our algorithm.", "title": "" }, { "docid": "4e122b71c30c6c0721d5065adcf0b52c", "text": "License plate recognition usually contains three steps, namely license plate detection/localization, character segmentation and character recognition. When reading characters on a license plate one by one after license plate detection step, it is crucial to accurately segment the characters. The segmentation step may be affected by many factors such as license plate boundaries (frames). The recognition accuracy will be significantly reduced if the characters are not properly segmented. This paper presents an efficient algorithm for character segmentation on a license plate. The algorithm follows the step that detects the license plates using an AdaBoost algorithm. It is based on an efficient and accurate skew and slant correction of license plates, and works together with boundary (frame) removal of license plates. The algorithm is efficient and can be applied in real-time applications. The experiments are performed to show the accuracy of segmentation.", "title": "" } ]
scidocsrr
6b59d358b108eda94fcea4c866c3c13e
Energy-Efficient Power Control: A Look at 5G Wireless Technologies
[ { "docid": "6a2d7b29a0549e99cdd31dbd2a66fc0a", "text": "We consider data transmissions in a full duplex (FD) multiuser multiple-input multiple-output (MU-MIMO) system, where a base station (BS) bidirectionally communicates with multiple users in the downlink (DL) and uplink (UL) channels on the same system resources. The system model of consideration has been thought to be impractical due to the self-interference (SI) between transmit and receive antennas at the BS. Interestingly, recent advanced techniques in hardware design have demonstrated that the SI can be suppressed to a degree that possibly allows for FD transmission. This paper goes one step further in exploring the potential gains in terms of the spectral efficiency (SE) and energy efficiency (EE) that can be brought by the FD MU-MIMO model. Toward this end, we propose low-complexity designs for maximizing the SE and EE, and evaluate their performance numerically. For the SE maximization problem, we present an iterative design that obtains a locally optimal solution based on a sequential convex approximation method. In this way, the nonconvex precoder design problem is approximated by a convex program at each iteration. Then, we propose a numerical algorithm to solve the resulting convex program based on the alternating and dual decomposition approaches, where analytical expressions for precoders are derived. For the EE maximization problem, using the same method, we first transform it into a concave-convex fractional program, which then can be reformulated as a convex program using the parametric approach. We will show that the resulting problem can be solved similarly to the SE maximization problem. Numerical results demonstrate that, compared to a half duplex system, the FD system of interest with the proposed designs achieves a better SE and a slightly smaller EE when the SI is small.", "title": "" } ]
[ { "docid": "88488d730255a534d3255eb5884a69a6", "text": "As Computer curricula have developed, Human-Computer Interaction has gradually become part of many of those curricula and the recent ACM/IEEE report on the core of Computing Science and Engineering, includes HumanComputer Interaction as one of the fundamental sub-areas that should be addressed by any such curricula. However, both technology and Human-Computer Interaction are evolving rapidly, thus a continuous effort is needed to maintain a program, bibliography and a set of practical assignments up to date and adapted to the current technology. This paper briefly presents an introductory course on Human-Computer Interaction offered to Electrical and Computer Engineering students at the University of Aveiro.", "title": "" }, { "docid": "4b6b9539468db238d92e9762b2650b61", "text": "The previous chapters gave an insightful introduction into the various facets of Business Process Management. We now share a rich understanding of the essential ideas behind designing and managing processes for organizational purposes. We have also learned about the various streams of research and development that have influenced contemporary BPM. As a matter of fact, BPM has become a holistic management discipline. As such, it requires that a plethora of facets needs to be addressed for its successful und sustainable application. This chapter provides a framework that consolidates and structures the essential factors that constitute BPM as a whole. Drawing from research in the field of maturity models, we suggest six core elements of BPM: strategic alignment, governance, methods, information technology, people, and culture. These six elements serve as the structure for this BPM Handbook. 1 Why Looking for BPM Core Elements? A recent global study by Gartner confirmed the significance of BPM with the top issue for CIOs identified for the sixth year in a row being the improvement of business processes (Gartner 2010). While such an interest in BPM is beneficial for professionals in this field, it also increases the expectations and the pressure to deliver on the promises of the process-centered organization. This context demands a sound understanding of how to approach BPM and a framework that decomposes the complexity of a holistic approach such as Business Process Management. A framework highlighting essential building blocks of BPM can particularly serve the following purposes: M. Rosemann (*) Information Systems Discipline, Faculty of Science and Technology, Queensland University of Technology, Brisbane, Australia e-mail: m.rosemann@qut.edu.au J. vom Brocke and M. Rosemann (eds.), Handbook on Business Process Management 1, International Handbooks on Information Systems, DOI 10.1007/978-3-642-00416-2_5, # Springer-Verlag Berlin Heidelberg 2010 107 l Project and Program Management: How can all relevant issues within a BPM approach be safeguarded? When implementing a BPM initiative, either as a project or as a program, is it essential to individually adjust the scope and have different BPM flavors in different areas of the organization? What competencies are relevant? What approach fits best with the culture and BPM history of the organization? What is it that needs to be taken into account “beyond modeling”? People for one thing play an important role like Hammer has pointed out in his chapter (Hammer 2010), but what might be further elements of relevance? 
In order to find answers to these questions, a framework articulating the core elements of BPM provides invaluable advice. l Vendor Management: How can service and product offerings in the field of BPM be evaluated in terms of their overall contribution to successful BPM? What portfolio of solutions is required to address the key issues of BPM, and to what extent do these solutions need to be sourced from outside the organization? There is, for example, a large list of providers of process-aware information systems, change experts, BPM training providers, and a variety of BPM consulting services. How can it be guaranteed that these offerings cover the required capabilities? In fact, the vast number of BPM offerings does not meet the requirements as distilled in this Handbook; see for example, Hammer (2010), Davenport (2010), Harmon (2010), and Rummler and Ramias (2010). It is also for the purpose of BPM make-or-buy decisions and the overall vendor management, that a framework structuring core elements of BPM is highly needed. l Complexity Management: How can the complexity that results from the holistic and comprehensive nature of BPM be decomposed so that it becomes manageable? How can a number of coexisting BPM initiatives within one organization be synchronized? An overarching picture of BPM is needed in order to provide orientation for these initiatives. Following a “divide-and-conquer” approach, a shared understanding of the core elements can help to focus on special factors of BPM. For each element, a specific analysis could be carried out involving experts from the various fields. Such an assessment should be conducted by experts with the required technical, business-oriented, and socio-cultural know-how. l Standards Management: What elements of BPM need to be standardized across the organization? What BPM elements need to be mandated for every BPM initiative? What BPM elements can be configured individually within each initiative? A comprehensive framework allows an element-by-element decision for the degrees of standardization that are required. For example, it might be decided that a company-wide process model repository will be “enforced” on all BPM initiatives, while performance management and cultural change will be decentralized activities. l Strategy Management: What is the BPM strategy of the organization? How does this strategy materialize in a BPM roadmap? How will the naturally limited attention of all involved stakeholders be distributed across the various BPM elements? How do we measure progression in a BPM initiative (“BPM audit”)? 108 M. Rosemann and J. vom Brocke", "title": "" }, { "docid": "8de530a30b8352e36b72f3436f47ffb2", "text": "This paper presents a Bayesian optimization method with exponential convergencewithout the need of auxiliary optimization and without the δ-cover sampling. Most Bayesian optimization methods require auxiliary optimization: an additional non-convex global optimization problem, which can be time-consuming and hard to implement in practice. Also, the existing Bayesian optimization method with exponential convergence [ 1] requires access to the δ-cover sampling, which was considered to be impractical [ 1, 2]. Our approach eliminates both requirements and achieves an exponential convergence rate.", "title": "" }, { "docid": "7190c91917d1e1280010c66139837568", "text": "GPUs and accelerators have become ubiquitous in modern supercomputing systems. 
Scientific applications from a wide range of fields are being modified to take advantage of their compute power. However, data movement continues to be a critical bottleneck in harnessing the full potential of a GPU. Data in the GPU memory has to be moved into the host memory before it can be sent over the network. MPI libraries like MVAPICH2 have provided solutions to alleviate this bottleneck using techniques like pipelining. GPUDirect RDMA is a feature introduced in CUDA 5.0, that allows third party devices like network adapters to directly access data in GPU device memory, over the PCIe bus. NVIDIA has partnered with Mellanox to make this solution available for InfiniBand clusters. In this paper, we evaluate the first version of GPUDirect RDMA for InfiniBand and propose designs in MVAPICH2 MPI library to efficiently take advantage of this feature. We highlight the limitations posed by current generation architectures in effectively using GPUDirect RDMA and address these issues through novel designs in MVAPICH2. To the best of our knowledge, this is the first work to demonstrate a solution for internode GPU-to-GPU MPI communication using GPUDirect RDMA. Results show that the proposed designs improve the latency of internode GPU-to-GPU communication using MPI Send/MPI Recv by 69% and 32% for 4Byte and 128KByte messages, respectively. The designs boost the uni-directional bandwidth achieved using 4KByte and 64KByte messages by 2x and 35%, respectively. We demonstrate the impact of the proposed designs using two end-applications: LBMGPU and AWP-ODC. They improve the communication times in these applications by up to 35% and 40%, respectively.", "title": "" }, { "docid": "5c05ad44ac2bf3fb26cea62d563435f8", "text": "We investigate the training and performance of generative adversarial networks using the Maximum Mean Discrepancy (MMD) as critic, termed MMD GANs. As our main theoretical contribution, we clarify the situation with bias in GAN loss functions raised by recent work: we show that gradient estimators used in the optimization process for both MMD GANs and Wasserstein GANs are unbiased, but learning a discriminator based on samples leads to biased gradients for the generator parameters. We also discuss the issue of kernel choice for the MMD critic, and characterize the kernel corresponding to the energy distance used for the Cramér GAN critic. Being an integral probability metric, the MMD benefits from training strategies recently developed for Wasserstein GANs. In experiments, the MMD GAN is able to employ a smaller critic network than the Wasserstein GAN, resulting in a simpler and faster-training algorithm with matching performance. We also propose an improved measure of GAN convergence, the Kernel Inception Distance, and show how to use it to dynamically adapt learning rates during GAN training.", "title": "" }, { "docid": "194bea0d713d5d167e145e43b3c8b4e2", "text": "Users can enjoy personalized services provided by various context-aware applications that collect users' contexts through sensor-equipped smartphones. Meanwhile, serious privacy concerns arise due to the lack of privacy preservation mechanisms. Currently, most mechanisms apply passive defense policies in which the released contexts from a privacy preservation system are always real, leading to a great probability with which an adversary infers the hidden sensitive contexts about the users. 
In this paper, we apply a deception policy for privacy preservation and present a novel technique, FakeMask, in which fake contexts may be released to provably preserve users' privacy. The output sequence of contexts by FakeMask can be accessed by untrusted context-aware applications or be used to answer queries from those applications. Since the output contexts may be different from the original contexts, an adversary has greater difficulty in inferring the real contexts. Therefore, FakeMask limits what adversaries can learn from the output sequence of contexts about the user being in sensitive contexts, even if the adversaries are powerful enough to have knowledge about the system and the temporal correlations among the contexts. The essence of FakeMask is a privacy checking algorithm which decides whether to release a fake context for the current context of the user. We present a novel privacy checking algorithm and an efficient one to accelerate the privacy checking process. Extensive evaluation experiments on real smartphone context traces of users demonstrate the improved performance of FakeMask over other approaches.", "title": "" }, { "docid": "b04ae3842293f5f81433afbaa441010a", "text": "Rootkit Trojans, which can take control of attacked computers, delete important files and even steal passwords, are now widespread. The Interrupt Descriptor Table (IDT) hook is a kernel-level rootkit technique used by Trojans. This paper presents a deep analysis of the IDT hook handling procedure of rootkit Trojans, building on the methods of previous researchers. We compare the IDT structure and programs to find how Trojan interrupt handler code can respond to the interrupt vector request in both real address mode and protected address mode. Finally, we analyze IDT hook detection methods for rootkit Trojans using Windbg or other professional tools.", "title": "" }, { "docid": "d5bf84e6b391bee0bec00924ed788bf8", "text": "In this paper, we explore the use of the Stellar Consensus Protocol (SCP) and its Federated Byzantine Agreement (FBA) algorithm for ensuring trust and reputation between federated, cloud-based platform instances (nodes) and their participants. Our approach is grounded in federated consensus mechanisms, which promise data quality managed through computational trust and data replication, without a centralized authority. We perform our experimentation on the NIMBLE cloud manufacturing platform, which is designed to support growth of B2B digital manufacturing communities and their businesses through federated platform services, managed by peer-to-peer networks. We discuss the message exchange flow between the NIMBLE application logic and Stellar consensus logic.", "title": "" }, { "docid": "e4d58b9b8775f2a30bc15fceed9cd8bf", "text": "Latency of interactive computer systems is a product of the processing, transport and synchronisation delays inherent to the components that create them. In a virtual environment (VE) system, latency is known to be detrimental to a user's sense of immersion, physical performance and comfort level. Accurately measuring the latency of a VE system for study or optimisation is not straightforward. A number of authors have developed techniques for characterising latency, which have become progressively more accessible and easier to use. In this paper, we characterise these techniques. We describe a simple mechanical simulator designed to simulate a VE with various amounts of latency that can be finely controlled (to within 3ms).
We develop a new latency measurement technique called Automated Frame Counting to assist in assessing latency using high-speed video (to within 1ms). We use the mechanical simulator to measure the accuracy of Steed's and Di Luca's measurement techniques, proposing improvements where they may be made. We use the methods to measure latency of a number of interactive systems that may be of interest to the VE engineer, with a significant level of confidence. All techniques were found to be highly capable; however, Steed's Method is both accurate and easy to use without requiring specialised hardware.", "title": "" }, { "docid": "87c3f3ab2c5c1e9a556ed6f467f613a9", "text": "In this study, we apply learning-to-rank algorithms to design trading strategies using relative performance of a group of stocks based on investors' sentiment toward these stocks. We show that learning-to-rank algorithms are effective in producing reliable rankings of the best and the worst performing stocks based on investors' sentiment. More specifically, we use the sentiment shock and trend indicators introduced in the previous studies, and we design stock selection rules of holding long positions of the top 25% stocks and short positions of the bottom 25% stocks according to rankings produced by learning-to-rank algorithms. We then apply two learning-to-rank algorithms, ListNet and RankNet, in stock selection processes and test long-only and long-short portfolio selection strategies using 10 years of market and news sentiment data. Through backtesting of these strategies from 2006 to 2014, we demonstrate that our portfolio strategies produce risk-adjusted returns superior to the S&P500 index return, the hedge fund industry average performance HFRIEMN, and some sentiment-based approaches without learning-to-rank algorithm during the same period.", "title": "" }, { "docid": "895f0424cb71c79b86ecbd11a4f2eb8e", "text": "A chronic alcoholic who had also been submitted to partial gastrectomy developed a syndrome of continuous motor unit activity responsive to phenytoin therapy. There were signs of minimal distal sensorimotor polyneuropathy. Symptoms of the syndrome of continuous motor unit activity were fasciculation, muscle stiffness, myokymia, impaired muscular relaxation and percussion myotonia. Electromyography at rest showed fasciculation, doublets, triplets, multiplets, trains of repetitive discharges and myotonic discharges. Trousseau's and Chvostek's signs were absent. No abnormalities of serum potassium, calcium, magnesium, creatine kinase, alkaline phosphatase, arterial blood gases and pH were demonstrated, but the serum Vitamin B12 level was reduced. The electrophysiological findings and muscle biopsy were compatible with a mixed sensorimotor polyneuropathy. Tests of neuromuscular transmission showed a significant decrement in the amplitude of the evoked muscle action potential in the abductor digiti minimi on repetitive nerve stimulation. These findings suggest that hyperexcitability and hyperactivity of the peripheral motor axons underlie the syndrome of continuous motor unit activity in the present case. A chronic alcoholic with subtotal gastrectomy suffered from a syndrome of continuous muscle fibre activity, which was treated with diphenylhydantoin. The patient showed minimal disturbances in the sense of a distal sensorimotor polyneuropathy. The symptoms of this syndrome consist of fasciculations, muscle stiffness, myokymia, impaired relaxation after voluntary activity, and myotonia on percussion of the muscle. The electromyogram at rest shows fasciculations, doublets, triplets, multiplets, trains of repetitive potentials and myotonic discharges. Trousseau's and Chvostek's signs were not demonstrable. At the same time, the serum levels of potassium, calcium, magnesium, creatine kinase and alkaline phosphatase, as well as the O2, CO2 and pH of the arterial blood, were within the normal range, but the serum level of Vitamin B12 was markedly reduced. The muscle biopsy and the electrophysiological changes point to a mixed sensorimotor polyneuropathy. The decrease in the amplitude of the evoked potentials recorded from the abductor digiti minimi on repetitive stimulation of the ulnar nerve represented a disturbance of neuromuscular transmission. On the basis of our clinical and electrophysiological findings, the hyperexcitability and hyperactivity of the peripheral motor axons may be regarded as the main mechanism of the syndrome of continuous motor unit activity.", "title": "" }, { "docid": "c47b59ea14b86fa18e69074129af72ec", "text": "Multiple networks naturally appear in numerous high-impact applications. Network alignment (i.e., finding the node correspondence across different networks) is often the very first step for many data mining tasks. Most, if not all, of the existing alignment methods are solely based on the topology of the underlying networks. Nonetheless, many real networks often have rich attribute information on nodes and/or edges. In this paper, we propose a family of algorithms FINAL to align attributed networks. The key idea is to leverage the node/edge attribute information to guide (topology-based) alignment process. We formulate this problem from an optimization perspective based on the alignment consistency principle, and develop effective and scalable algorithms to solve it. Our experiments on real networks show that (1) by leveraging the attribute information, our algorithms can significantly improve the alignment accuracy (i.e., up to a 30% improvement over the existing methods); (2) compared with the exact solution, our proposed fast alignment algorithm leads to a more than 10 times speed-up, while preserving a 95% accuracy; and (3) our on-query alignment method scales linearly, with an around 90% ranking accuracy compared with our exact full alignment method and a near real-time response time.", "title": "" }, { "docid": "8d5759855079e2ddaab2e920b93ca2a3", "text": "In a number of information security scenarios, human beings can be better than technical security measures at detecting threats. This is particularly the case when a threat is based on deception of the user rather than exploitation of a specific technical flaw, as is the case of spear-phishing, application spoofing, multimedia masquerading and other semantic social engineering attacks. Here, we put the concept of the human-as-a-security-sensor to the test with a first case study on a small number of participants subjected to different attacks in a controlled laboratory environment and provided with a mechanism to report these attacks if they spot them. A key challenge is to estimate the reliability of each report, which we address with a machine learning approach. For comparison, we evaluate the ability of known technical security countermeasures in detecting the same threats.
This initial proof of concept study shows that the concept is viable.", "title": "" }, { "docid": "0df3d30837edd0e7809ed77743a848db", "text": "Many language processing tasks can be reduced to breaking the text into segments with prescribed properties. Such tasks include sentence splitting, tokenization, named-entity extraction, and chunking. We present a new model of text segmentation based on ideas from multilabel classification. Using this model, we can naturally represent segmentation problems involving overlapping and non-contiguous segments. We evaluate the model on entity extraction and noun-phrase chunking and show that it is more accurate for overlapping and non-contiguous segments, but it still performs well on simpler data sets for which sequential tagging has been the best method.", "title": "" }, { "docid": "d168bdb3f1117aac53da1fbac0906887", "text": "Enforcing open source licenses such as the GNU General Public License (GPL), analyzing a binary for possible vulnerabilities, and code maintenance are all situations where it is useful to be able to determine the source code provenance of a binary. While previous work has either focused on computing binary-to-binary similarity or source-to-source similarity, BinPro is the first work we are aware of to tackle the problem of source-to-binary similarity. BinPro can match binaries with their source code even without knowing which compiler was used to produce the binary, or what optimization level was used with the compiler. To do this, BinPro utilizes machine learning to compute optimal code features for determining binary-to-source similarity and a static analysis pipeline to extract and compute similarity based on those features. Our experiments show that on average BinPro computes a similarity of 81% for matching binaries and source code of the same applications, and an average similarity of 25% for binaries and source code of similar but different applications. This shows that BinPro's similarity score is useful for determining if a binary was derived from a particular source code.", "title": "" }, { "docid": "f7ef3c104fe6c5f082e7dd060a82c03e", "text": "Research on the artificial muscle made of fishing lines or sewing threads, called the twisted and coiled polymer actuator (abbreviated as TCA in this paper), has attracted much interest recently. Since the TCA theoretically has a specific power surpassing that of human skeletal muscle, it is expected to become a new generation of artificial muscle actuator. So that the TCA can be utilized as a useful actuator, this paper introduces the fabrication and modeling of a temperature-controllable TCA. With an embedded micro thermistor, the TCA is able to measure temperature directly, and feedback control is realized. The safe range of force and temperature for continuous use of the TCA was identified through experiments, and closed-loop temperature control is successfully performed without breakage of the TCA.", "title": "" }, { "docid": "77812e38f7250bc23e5157554bb101bc", "text": "PinOS is an extension of the Pin dynamic instrumentation framework for whole-system instrumentation, i.e., to instrument both kernel and user-level code. It achieves this by interposing between the subject system and hardware using virtualization techniques. Specifically, PinOS is built on top of the Xen virtual machine monitor with Intel VT technology to allow instrumentation of unmodified OSes.
PinOS is based on software dynamic translation and hence can perform pervasive fine-grain instrumentation. By inheriting the powerful instrumentation API from Pin, plus introducing some new API for system-level instrumentation, PinOS can be used to write system-wide instrumentation tools for tasks like program analysis and architectural studies. As of today, PinOS can boot Linux on IA-32 in uniprocessor mode, and can instrument complex applications such as database and web servers.", "title": "" }, { "docid": "98110985cd175f088204db452a152853", "text": "We propose an automatic method to infer high dynamic range illumination from a single, limited field-of-view, low dynamic range photograph of an indoor scene. In contrast to previous work that relies on specialized image capture, user input, and/or simple scene models, we train an end-to-end deep neural network that directly regresses a limited field-of-view photo to HDR illumination, without strong assumptions on scene geometry, material properties, or lighting. We show that this can be accomplished in a three step process: 1) we train a robust lighting classifier to automatically annotate the location of light sources in a large dataset of LDR environment maps, 2) we use these annotations to train a deep neural network that predicts the location of lights in a scene from a single limited field-of-view photo, and 3) we fine-tune this network using a small dataset of HDR environment maps to predict light intensities. This allows us to automatically recover high-quality HDR illumination estimates that significantly outperform previous state-of-the-art methods. Consequently, using our illumination estimates for applications like 3D object insertion, produces photo-realistic results that we validate via a perceptual user study.", "title": "" }, { "docid": "0ff90bc5ff6ecbb5c9d89902fce1fa0a", "text": "Improving a decision maker's situational awareness of the cyber domain isn't greatly different than enabling situation awareness in more traditional domains. (Decision maker is used very loosely to describe anyone who uses information to make decisions within a complex dynamic environment; this is necessary because, as will be discussed, situation awareness is unique and dependent on the environment being considered, the context of the decision to be made, and the user of the information. Traditional domains could include land, air, or sea.) Situation awareness necessitates working with processes capable of identifying domain specific activities as well as processes capable of identifying activities that cross domains. These processes depend on the context of the environment, the domains, and the goals and interests of the decision maker but they can be defined to support any domain. This chapter will define situation awareness in its broadest sense, describe our situation awareness reference and process models, describe some of the applicable processes, and identify a set of metrics usable for measuring the performance of a capability supporting situation awareness. These techniques are independent of domain but this chapter will also describe how they apply to the cyber domain. 2.1 What is Situation Awareness (SA)? One of the challenges in working in this area is that there are a multitude of definitions and interpretations concerning the answer to this simple question. A keyword search (executed on 8 April 2009) of ‘situation awareness’ on Google yields over 18,000,000 links, the first page of which ranged from a Wikipedia page through the importance of “SA while driving” and ends with a link to a free internet radio show. Also on this first search page are several links to publications by Dr. Mica Endsley whose work in SA is arguably providing a standard for SA definitions and techniques particularly for dynamic environments. In [5], Dr. Endsley provides a general definition of SA in dynamic environments: “Situation awareness is the perception of the elements of the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status in the near future.” Also in [5], Endsley differentiates between situation awareness, “a state of knowledge”, and situation assessment, “process of achieving, acquiring, or maintaining SA.” This distinction becomes exceedingly important when trying to apply computer automation to SA. Since situation awareness is “a state of knowledge”, it resides primarily in the minds of humans (cognitive), while situation assessment as a process or set of processes lends itself to automated techniques. Endsley goes on to note that: “SA, decision making, and performance are different stages with different factors influencing them and with wholly different approaches for dealing with each of them; thus it is important to treat these constructs separately.” The “stages” that Endsley defines have a direct correlation with Boyd’s ubiquitous OODA loop with SA relating to Observe and Orient, decision making to Decide, and performance to Act. We’ll see these stages as well as Endsley’s three “levels” of SA (perception, comprehension, and projection) manifest themselves again throughout this discussion. As first mentioned, there are several definitions for SA, from the Army Field Manual 1-02 (September 2004), Situational Awareness is: “Knowledge and understanding of the current situation which promotes timely, relevant and accurate assessment of friendly, competitive and other operations within the battlespace in order to facilitate decision making. An informational perspective and skill that fosters an ability to determine quickly the context and relevance of events that are unfolding.”", "title": "" } ]
scidocsrr
ae969b4380a452408f920e23e7508508
Implementing Gender-Dependent Vowel-Level Analysis for Boosting Speech-Based Depression Recognition
[ { "docid": "b66be42a294208ec31d44e57ae434060", "text": "Emotional speech recognition aims to automatically classify speech units (e.g., utterances) into emotional states, such as anger, happiness, neutral, sadness and surprise. The major contribution of this paper is to rate the discriminating capability of a set of features for emotional speech recognition when gender information is taken into consideration. A total of 87 features has been calculated over 500 utterances of the Danish Emotional Speech database. The Sequential Forward Selection method (SFS) has been used in order to discover the 5-10 features which are able to classify the samples in the best way for each gender. The criterion used in SFS is the crossvalidated correct classification rate of a Bayes classifier where the class probability distribution functions (pdfs) are approximated via Parzen windows or modeled as Gaussians. When a Bayes classifier with Gaussian pdfs is employed, a correct classification rate of 61.1% is obtained for male subjects and a corresponding rate of 57.1% for female ones. In the same experiment, a random classification would result in a correct classification rate of 20%. When gender information is not considered a correct classification score of 50.6% is obtained.", "title": "" }, { "docid": "80bf80719a1751b16be2420635d34455", "text": "Mood disorders are inherently related to emotion. In particular, the behaviour of people suffering from mood disorders such as unipolar depression shows a strong temporal correlation with the affective dimensions valence, arousal and dominance. In addition to structured self-report questionnaires, psychologists and psychiatrists use in their evaluation of a patient's level of depression the observation of facial expressions and vocal cues. It is in this context that we present the fourth Audio-Visual Emotion recognition Challenge (AVEC 2014). This edition of the challenge uses a subset of the tasks used in a previous challenge, allowing for more focussed studies. In addition, labels for a third dimension (Dominance) have been added and the number of annotators per clip has been increased to a minimum of three, with most clips annotated by 5.
The challenge has two goals logically organised as sub-challenges: the first is to predict the continuous values of the affective dimensions valence, arousal and dominance at each moment in time. The second is to predict the value of a single self-reported severity of depression indicator for each recording in the dataset. This paper presents the challenge guidelines, the common data used, and the performance of the baseline system on the two tasks.", "title": "" } ]
[ { "docid": "6e4d8bde993e88fa2c729d2fafb6fd90", "text": "The plant hormones gibberellin and abscisic acid regulate gene expression, secretion and cell death in aleurone. The emerging picture is of gibberellin perception at the plasma membrane whereas abscisic acid acts at both the plasma membrane and in the cytoplasm - although gibberellin and abscisic acid receptors have yet to be identified. A range of downstream-signalling components and events has been implicated in gibberellin and abscisic acid signalling in aleurone. These include the Galpha subunit of a heterotrimeric G protein, a transient elevation in cGMP, Ca2+-dependent and Ca2+-independent events in the cytoplasm, reversible protein phosphory-lation, and several promoter cis-elements and transcription factors, including GAMYB. In parallel, molecular genetic studies on mutants of Arabidopsis that show defects in responses to these hormones have identified components of gibberellin and abscisic acid signalling. These two approaches are yielding results that raise the possibility that specific gibberellin and abscisic acid signalling components perform similar functions in aleurone and other tissues.", "title": "" }, { "docid": "45d551e2d813c37e032b90799c71f4c1", "text": "A process is described to produce single sheets of functionalized graphene through thermal exfoliation of graphite oxide. The process yields a wrinkled sheet structure resulting from reaction sites involved in oxidation and reduction processes. The topological features of single sheets, as measured by atomic force microscopy, closely match predictions of first-principles atomistic modeling. Although graphite oxide is an insulator, functionalized graphene produced by this method is electrically conducting.", "title": "" }, { "docid": "6c1d3eb9d3e39b25f32b77942b04d165", "text": "The aim of this study is to investigate the factors influencing the consumer acceptance of mobile banking in Bangladesh. The demographic, attitudinal, and behavioural characteristics of mobile bank users were examined. 292 respondents from seven major mobile financial service users of different mobile network operators participated in the consumer survey. Infrastructural facility, selfcontrol, social influence, perceived risk, ease of use, need for interaction, perceived usefulness, and customer service were found to influence consumer attitudes towards mobile banking services. The infrastructural facility of updated user friendly technology and its availability was found to be the most important factor that motivated consumers’ attitudes in Bangladesh towards mobile banking. The sample size was not necessarily representative of the Bangladeshi population as a whole as it ignored large rural population. This study identified two additional factors i.e. infrastructural facility and customer service relevant to mobile banking that were absent in previous researches. By addressing the concerns of and benefits sought by the consumers, marketers can create positive attractions and policy makers can set regulations for the expansion of mobile banking services in Bangladesh. This study offers an insight into mobile banking in Bangladesh focusing influencing factors, which has not previously been investigated.", "title": "" }, { "docid": "7113e007073184671d0bf5c9bdda1f5c", "text": "It is widely accepted that mineral flotation is a very challenging control problem due to chaotic nature of process. 
This paper introduces a novel approach of combining multi-camera system and expert controllers to improve flotation performance. The system has been installed into the zinc circuit of Pyhäsalmi Mine (Finland). Long-term data analysis in fact shows that the new approach has improved considerably the recovery of the zinc circuit, resulting in a substantial increase in the mill’s annual profit. r 2006 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "7d1a7bc7809a578cd317dfb8ba5b7678", "text": "In this paper, we introduce a new technology, which allows people to share taste and smell sensations digitally with a remote person through existing networking technologies such as the Internet. By introducing this technology, we expect people to share their smell and taste experiences with their family and friends remotely. Sharing these senses are immensely beneficial since those are strongly associated with individual memories, emotions, and everyday experiences. As the initial step, we developed a control system, an actuator, which could digitally stimulate the sense of taste remotely. The system uses two approaches to stimulate taste sensations digitally: the electrical and thermal stimulations on tongue. Primary results suggested that sourness and saltiness are the main sensations that could be evoked through this device. Furthermore, this paper focuses on future aspects of such technology for remote smell actuation followed by applications and possibilities for further developments.", "title": "" }, { "docid": "e6ca4f592446163124bcf00f87ccb8df", "text": "A full-vector beam propagation method based on a finite-element scheme for a helicoidal system is developed. The permittivity and permeability tensors of a straight waveguide are replaced with equivalent ones for a helicoidal system, obtained by transformation optics. A cylindrical, perfectly matched layer is implemented for the absorbing boundary condition. To treat wide-angle beam propagation, a second-order differentiation term with respect to the propagation direction is directly discretized without using a conventional Padé approximation. The transmission spectra of twisted photonic crystal fibers are thoroughly investigated, and it is found that the diameters of the air holes greatly affect the spectra. The calculated results are in good agreement with the recently reported measured results, showing the validity and usefulness of the method developed here.", "title": "" }, { "docid": "a6a7770857964e96f98bd4021d38f59f", "text": "During human evolutionary history, there were \"trade-offs\" between expending time and energy on child-rearing and mating, so both men and women evolved conditional mating strategies guided by cues signaling the circumstances. Many short-term matings might be successful for some men; others might try to find and keep a single mate, investing their effort in rearing her offspring. Recent evidence suggests that men with features signaling genetic benefits to offspring should be preferred by women as short-term mates, but there are trade-offs between a mate's genetic fitness and his willingness to help in child-rearing. 
It is these circumstances and the cues that signal them that underlie the variation in short- and long-term mating strategies between and within the sexes.", "title": "" }, { "docid": "b9f7c203717e6d2b0677b51f55e614f2", "text": "This paper demonstrates a computer-aided diagnosis (CAD) system for lung cancer classification of CT scans with unmarked nodules, a dataset from the Kaggle Data Science Bowl 2017. Thresholding was used as an initial segmentation approach to segment out lung tissue from the rest of the CT scan. Thresholding produced the next best lung segmentation. The initial approach was to directly feed the segmented CT scans into 3D CNNs for classification, but this proved to be inadequate. Instead, a modified U-Net trained on LUNA16 data (CT scans with labeled nodules) was used to first detect nodule candidates in the Kaggle CT scans. The U-Net nodule detection produced many false positives, so regions of CTs with segmented lungs where the most likely nodule candidates were located as determined by the U-Net output were fed into 3D Convolutional Neural Networks (CNNs) to ultimately classify the CT scan as positive or negative for lung cancer. The 3D CNNs produced a test set Accuracy of 86.6%. The performance of our CAD system outperforms the current CAD systems in literature which have several training and testing phases that each requires a lot of labeled data, while our CAD system has only three major phases (segmentation, nodule candidate detection, and malignancy classification), allowing more efficient training and detection and more generalizability to other cancers. Keywords—Lung Cancer; Computed Tomography; Deep Learning; Convolutional Neural Networks; Segmentation.", "title": "" }, { "docid": "883a22f7036514d87ce3af86b5853de3", "text": "A wideband integrated RF duplexer supports 3G/4G bands I, II, III, IV, and IX, and achieves a TX-to-RX isolation of more than 55dB in the transmit-band, and greater than 45dB in the corresponding receive-band across 200MHz of bandwidth. A 65nm CMOS duplexer/LNA achieves a transmit insertion loss of 2.5dB, and a cascaded receiver noise figure of 5dB with more than 27dB of gain, exceeding the commercial external duplexers performance at considerably lower cost and area.", "title": "" }, { "docid": "1f8128a4a525f32099d4fefe4bea1212", "text": "Information overload on the Web has created enormous challenges to customers selecting products for online purchases and to online businesses attempting to identify customers’ preferences efficiently. Various recommender systems employing different data representations and recommendation methods are currently used to address these challenges. In this research, we developed a graph model that provides a generic data representation and can support different recommendation methods. To demonstrate its usefulness and flexibility, we developed three recommendation methods: direct retrieval, association mining, and high-degree association retrieval. We used a data set from an online bookstore as our research test-bed. Evaluation results showed that combining product content information and historical customer transaction information achieved more accurate predictions and relevant recommendations than using only collaborative information. 
However, comparisons among different methods showed that high-degree association retrieval did not perform significantly better than the association mining method or the direct retrieval method in our test-bed.", "title": "" }, { "docid": "16fec520bf539ab23a5164ffef5561b4", "text": "This article traces the major trends in TESOL methods in the past 15 years. It focuses on the TESOL profession’s evolving perspectives on language teaching methods in terms of three perceptible shifts: (a) from communicative language teaching to task-based language teaching, (b) from method-based pedagogy to postmethod pedagogy, and (c) from systemic discovery to critical discourse. It is evident that during this transitional period, the profession has witnessed a heightened awareness about communicative and task-based language teaching, about the limitations of the concept of method, about possible postmethod pedagogies that seek to address some of the limitations of method, about the complexity of teacher beliefs that inform the practice of everyday teaching, and about the vitality of the macrostructures—social, cultural, political, and historical—that shape the microstructures of the language classroom. This article deals briefly with the changes and challenges the trend-setting transition seems to be bringing about in the profession’s collective thought and action.", "title": "" }, { "docid": "8a22f454a657768a3d5fd6e6ec743f5f", "text": "In recent years, deep learning techniques have been developed to improve the performance of program synthesis from input-output examples. Albeit its significant progress, the programs that can be synthesized by state-of-the-art approaches are still simple in terms of their complexity. In this work, we move a significant step forward along this direction by proposing a new class of challenging tasks in the domain of program synthesis from input-output examples: learning a context-free parser from pairs of input programs and their parse trees. We show that this class of tasks are much more challenging than previously studied tasks, and the test accuracy of existing approaches is almost 0%. We tackle the challenges by developing three novel techniques inspired by three novel observations, which reveal the key ingredients of using deep learning to synthesize a complex program. First, the use of a non-differentiable machine is the key to effectively restrict the search space. Thus our proposed approach learns a neural program operating a domain-specific non-differentiable machine. Second, recursion is the key to achieve generalizability. Thus, we bake-in the notion of recursion in the design of our non-differentiable machine. Third, reinforcement learning is the key to learn how to operate the non-differentiable machine, but it is also hard to train the model effectively with existing reinforcement learning algorithms from a cold boot. We develop a novel two-phase reinforcement learningbased search algorithm to overcome this issue. In our evaluation, we show that using our novel approach, neural parsing programs can be learned to achieve 100% test accuracy on test inputs that are 500× longer than the training samples.", "title": "" }, { "docid": "6e9edeffb12cf8e50223a933885bcb7c", "text": "Reversible data hiding in encrypted images (RDHEI) is an effective technique to embed data in the encrypted domain. 
An original image is encrypted with a secret key and during or after its transmission, it is possible to embed additional information in the encrypted image, without knowing the encryption key or the original content of the image. During the decoding process, the secret message can be extracted and the original image can be reconstructed. In the last few years, RDHEI has started to draw research interest. Indeed, with the development of cloud computing, data privacy has become a real issue. However, none of the existing methods allow us to hide a large amount of information in a reversible manner. In this paper, we propose a new reversible method based on MSB (most significant bit) prediction with a very high capacity. We present two approaches, these are: high capacity reversible data hiding approach with correction of prediction errors and high capacity reversible data hiding approach with embedded prediction errors. With this method, regardless of the approach used, our results are better than those obtained with current state of the art methods, both in terms of reconstructed image quality and embedding capacity.", "title": "" }, { "docid": "d34be0ce0f9894d6e219d12630166308", "text": "The need for curricular reform in K-4 mathematics is clear. Such reform must address both the content and emphasis of the curriculum as well as approaches to instruction. A longstanding preoccupation with computation and other traditional skills has dominated both what mathematics is taught and the way mathematics is taught at this level. As a result, the present K-4 curriculum is narrow in scope; fails to foster mathematical insight, reasoning, and problem solving; and emphasizes rote activities. Even more significant is that children begin to lose their belief that learning mathematics is a sense-making experience. They become passive receivers of rules and procedures rather than active participants in creating knowledge.", "title": "" }, { "docid": "9ffb4220530a4758ea6272edf6e7e531", "text": "Process mining allows analysts to exploit logs of historical executions of business processes to extract insights regarding the actual performance of these processes. One of the most widely studied process mining operations is automated process discovery. An automated process discovery method takes as input an event log, and produces as output a business process model that captures the control-flow relations between tasks that are observed in or implied by the event log. Various automated process discovery methods have been proposed in the past two decades, striking different tradeoffs between scalability, accuracy, and complexity of the resulting models. However, these methods have been evaluated in an ad-hoc manner, employing different datasets, experimental setups, evaluation measures, and baselines, often leading to incomparable conclusions and sometimes unreproducible results due to the use of closed datasets. This article provides a systematic review and comparative evaluation of automated process discovery methods, using an open-source benchmark and covering 12 publicly-available real-life event logs, 12 proprietary real-life event logs, and nine quality metrics. 
The results highlight gaps and unexplored tradeoffs in the field, including the lack of scalability of some methods and a strong divergence in their performance with respect to the different quality metrics used.", "title": "" }, { "docid": "fe0fa94ce6f02626fca12f21b60bec46", "text": "Solid waste management (SWM) is a major public health and environmental concern in urban areas of many developing countries. Nairobi’s solid waste situation, which could be taken to generally represent the status which is largely characterized by low coverage of solid waste collection, pollution from uncontrolled dumping of waste, inefficient public services, unregulated and uncoordinated private sector and lack of key solid waste management infrastructure. This paper recapitulates on the public-private partnership as the best system for developing countries; challenges, approaches, practices or systems of SWM, and outcomes or advantages to the approach; the literature review focuses on surveying information pertaining to existing waste management methodologies, policies, and research relevant to the SWM. Information was sourced from peer-reviewed academic literature, grey literature, publicly available waste management plans, and through consultation with waste management professionals. Literature pertaining to SWM and municipal solid waste minimization, auditing and management were searched for through online journal databases, particularly Web of Science, and Science Direct. Legislation pertaining to waste management was also researched using the different databases. Additional information was obtained from grey literature and textbooks pertaining to waste management topics. After conducting preliminary research, prevalent references of select sources were identified and scanned for additional relevant articles. Research was also expanded to include literature pertaining to recycling, composting, education, and case studies; the manuscript summarizes with future recommendationsin terms collaborations of public/ private patternships, sensitization of people, privatization is important in improving processes and modernizing urban waste management, contract private sector, integrated waste management should be encouraged, provisional government leaders need to alter their mind set, prepare a strategic, integrated SWM plan for the cities, enact strong and adequate legislation at city and national level, evaluate the real impacts of waste management systems, utilizing locally based solutions for SWM service delivery and design, location, management of the waste collection centersand recycling and compositing activities should be", "title": "" }, { "docid": "945dea6576c6131fc33cd14e5a2a0be8", "text": "■ This article recounts the development of radar signal processing at Lincoln Laboratory. The Laboratory’s significant efforts in this field were initially driven by the need to provide detected and processed signals for air and ballistic missile defense systems. The first processing work was on the Semi-Automatic Ground Environment (SAGE) air-defense system, which led to algorithms and techniques for detection of aircraft in the presence of clutter. 
This work was quickly followed by processing efforts in ballistic missile defense, first in surface-acoustic-wave technology, in concurrence with the initiation of radar measurements at the Kwajalein Missile Range, and then by exploitation of the newly evolving technology of digital signal processing, which led to important contributions for ballistic missile defense and Federal Aviation Administration applications. More recently, the Laboratory has pursued the computationally challenging application of adaptive processing for the suppression of jamming and clutter signals. This article discusses several important programs in these areas.", "title": "" }, { "docid": "8d4fdbdd76085391f2a80022f130459e", "text": "Recently completed whole-genome sequencing projects marked the transition from gene-based phylogenetic studies to phylogenomics analysis of entire genomes. We developed an algorithm MGRA for reconstructing ancestral genomes and used it to study the rearrangement history of seven mammalian genomes: human, chimpanzee, macaque, mouse, rat, dog, and opossum. MGRA relies on the notion of the multiple breakpoint graphs to overcome some limitations of the existing approaches to ancestral genome reconstructions. MGRA also generates the rearrangement-based characters guiding the phylogenetic tree reconstruction when the phylogeny is unknown.", "title": "" }, { "docid": "456327904250958baace54bde107f0f7", "text": "Dependability on AI models is of utmost importance to ensure full acceptance of the AI systems. One of the key aspects of the dependable AI system is to ensure that all its decisions are fair and not biased towards any individual. In this paper, we address the problem of detecting whether a model has an individual discrimination. Such a discrimination exists when two individuals who differ only in the values of their protected attributes (such as, gender/race) while the values of their non-protected ones are exactly the same, get different decisions. Measuring individual discrimination requires an exhaustive testing, which is infeasible for a nontrivial system. In this paper, we present an automated technique to generate test inputs, which is geared towards finding individual discrimination. Our technique combines the wellknown technique called symbolic execution along with the local explainability for generation of effective test cases. Our experimental results clearly demonstrate that our technique produces 3.72 times more successful test cases than the existing state-of-the-art across all our chosen benchmarks.", "title": "" }, { "docid": "7a1f244aae5f28cd9fb2d5ba54113c28", "text": "Next generation sequencing (NGS) technology has revolutionized genomic and genetic research. The pace of change in this area is rapid with three major new sequencing platforms having been released in 2011: Ion Torrent’s PGM, Pacific Biosciences’ RS and the Illumina MiSeq. Here we compare the results obtained with those platforms to the performance of the Illumina HiSeq, the current market leader. In order to compare these platforms, and get sufficient coverage depth to allow meaningful analysis, we have sequenced a set of 4 microbial genomes with mean GC content ranging from 19.3 to 67.7%. Together, these represent a comprehensive range of genome content. Here we report our analysis of that sequence data in terms of coverage distribution, bias, GC distribution, variant detection and accuracy. 
Sequence generated by Ion Torrent, MiSeq and Pacific Biosciences technologies displays near perfect coverage behaviour on GC-rich, neutral and moderately AT-rich genomes, but a profound bias was observed upon sequencing the extremely AT-rich genome of Plasmodium falciparum on the PGM, resulting in no coverage for approximately 30% of the genome. We analysed the ability to call variants from each platform and found that we could call slightly more variants from Ion Torrent data compared to MiSeq data, but at the expense of a higher false positive rate. Variant calling from Pacific Biosciences data was possible but higher coverage depth was required. Context specific errors were observed in both PGM and MiSeq data, but not in that from the Pacific Biosciences platform. All three fast turnaround sequencers evaluated here were able to generate usable sequence. However there are key differences between the quality of that data and the applications it will support.", "title": "" } ]
scidocsrr
d234c5f58bdf816d4e53862e5714cf5c
How Random Walks Can Help Tourism
[ { "docid": "ae9469b80390e5e2e8062222423fc2cd", "text": "Social media such as those residing in the popular photo sharing websites is attracting increasing attention in recent years. As a type of user-generated data, wisdom of the crowd is embedded inside such social media. In particular, millions of users upload to Flickr their photos, many associated with temporal and geographical information. In this paper, we investigate how to rank the trajectory patterns mined from the uploaded photos with geotags and timestamps. The main objective is to reveal the collective wisdom recorded in the seemingly isolated photos and the individual travel sequences reflected by the geo-tagged photos. Instead of focusing on mining frequent trajectory patterns from geo-tagged social media, we put more effort into ranking the mined trajectory patterns and diversifying the ranking results. Through leveraging the relationships among users, locations and trajectories, we rank the trajectory patterns. We then use an exemplar-based algorithm to diversify the results in order to discover the representative trajectory patterns. We have evaluated the proposed framework on 12 different cities using a Flickr dataset and demonstrated its effectiveness.", "title": "" }, { "docid": "51d950dfb9f71b9c8948198c147b9884", "text": "Collaborative filtering is the most popular approach to build recommender systems and has been successfully employed in many applications. However, it cannot make recommendations for so-called cold start users that have rated only a very small number of items. In addition, these methods do not know how confident they are in their recommendations. Trust-based recommendation methods assume the additional knowledge of a trust network among users and can better deal with cold start users, since users only need to be simply connected to the trust network. On the other hand, the sparsity of the user item ratings forces the trust-based approach to consider ratings of indirect neighbors that are only weakly trusted, which may decrease its precision. In order to find a good trade-off, we propose a random walk model combining the trust-based and the collaborative filtering approach for recommendation. The random walk model allows us to define and to measure the confidence of a recommendation. We performed an evaluation on the Epinions dataset and compared our model with existing trust-based and collaborative filtering methods.", "title": "" } ]
[ { "docid": "573fd558864a9c05fef5935a6074c3bc", "text": "Recurrent Neural Networks (RNNs) play a major role in the field of sequential learning, and have outperformed traditional algorithms on many benchmarks. Training deep RNNs still remains a challenge, and most of the state-of-the-art models are structured with a transition depth of 2-4 layers. Recurrent Highway Networks (RHNs) were introduced in order to tackle this issue. These have achieved state-of-the-art performance on a few benchmarks using a depth of 10 layers. However, the performance of this architecture suffers from a bottleneck, and ceases to improve when an attempt is made to add more layers. In this work, we analyze the causes for this, and postulate that the main source is the way that the information flows through time. We introduce a novel and simple variation for the RHN cell, called Highway State Gating (HSG), which allows adding more layers, while continuing to improve performance. By using a gating mechanism for the state, we allow the net to ”choose” whether to pass information directly through time, or to gate it. This mechanism also allows the gradient to back-propagate directly through time and, therefore, results in a slightly faster convergence. We use the Penn Treebank (PTB) dataset as a platform for empirical proof of concept. Empirical results show that the improvement due to Highway State Gating is for all depths, and as the depth increases, the improvement also increases.", "title": "" }, { "docid": "0b29e6813c08637d8df1a472e0e323b6", "text": "A significant number of promising applications for vehicular ad hoc networks (VANETs) are becoming a reality. Most of these applications require a variety of heterogenous content to be delivered to vehicles and to their on-board users. However, the task of content delivery in such dynamic and large-scale networks is easier said than done. In this article, we propose a classification of content delivery solutions applied to VANETs while highlighting their new characteristics and describing their underlying architectural design. First, the two fundamental building blocks that are part of an entire content delivery system are identified: replica allocation and content delivery. The related solutions are then classified according to their architectural definition. Within each category, solutions are described based on the techniques and strategies that have been adopted. As result, we present an in-depth discussion on the architecture, techniques, and strategies adopted by studies in the literature that tackle problems related to vehicular content delivery networks.", "title": "" }, { "docid": "1d1f14cb78693e56d014c89eacfcc3ef", "text": "We undertook a meta-analysis of six Crohn's disease genome-wide association studies (GWAS) comprising 6,333 affected individuals (cases) and 15,056 controls and followed up the top association signals in 15,694 cases, 14,026 controls and 414 parent-offspring trios. We identified 30 new susceptibility loci meeting genome-wide significance (P < 5 × 10−8). A series of in silico analyses highlighted particular genes within these loci and, together with manual curation, implicated functionally interesting candidate genes including SMAD3, ERAP2, IL10, IL2RA, TYK2, FUT2, DNMT3A, DENND1B, BACH2 and TAGAP. 
Combined with previously confirmed loci, these results identify 71 distinct loci with genome-wide significant evidence for association with Crohn's disease.", "title": "" }, { "docid": "eee9b5301c83faf4fe8fd786f0d99efd", "text": "We present a named entity recognition and classification system that uses only probabilistic character-level features. Classifications by multiple orthographic tries are combined in a hidden Markov model framework to incorporate both internal and contextual evidence. As part of the system, we perform a preprocessing stage in which capitalisation is restored to sentence-initial and all-caps words with high accuracy. We report f-values of 86.65 and 79.78 for English, and 50.62 and 54.43 for the German datasets.", "title": "" }, { "docid": "408d3db3b2126990611fdc3a62a985ea", "text": "Multi-choice reading comprehension is a challenging task, which involves the matching between a passage and a question-answer pair. This paper proposes a new co-matching approach to this problem, which jointly models whether a passage can match both a question and a candidate answer. Experimental results on the RACE dataset demonstrate that our approach achieves state-of-the-art performance.", "title": "" }, { "docid": "62a7c4bd564a7741cd966f3e11487236", "text": "This paper presents an implementation method for the people counting system which detects and tracks moving people using a fixed single camera. The main contribution of this paper is the novel head detection method based on body’s geometry. A novel body descriptor is proposed for finding people’s head which is defined as Body Feature Rectangle (BFR). First, a vertical projection method is used to get the line which divides touching persons into individuals. Second, a special inscribed rectangle is found to locate the neck position which describes the torso area. Third, locations of people’s heads can be got according to its neck-positions. Last, a robust counting method named MEA is proposed to get the real counts of walking people flows. The proposed method can divide the multiple-people image into individuals whatever people merge with each other or not. Moreover, the passing people can be counted accurately under the influence of wearing hats. Experimental results show that our proposed method can nearly reach to an accuracy of 100% if the number of a people-merging pattern is less than six. Keywords-People Counting; Head Detection; BFR; People-flow Tracking", "title": "" }, { "docid": "bc3c7f4fb6d9a2fd12fb702a69a35b23", "text": "Vestibular migraine is a chameleon among the episodic vertigo syndromes because considerable variation characterizes its clinical manifestation. The attacks may last from seconds to days. About one-third of patients presents with monosymptomatic attacks of vertigo or dizziness without headache or other migrainous symptoms. During attacks most patients show spontaneous or positional nystagmus and in the attack-free interval minor ocular motor and vestibular deficits. Women are significantly more often affected than men. Symptoms may begin at any time in life, with the highest prevalence in young adults and between the ages of 60 and 70. Over the last 10 years vestibular migraine has evolved into a medical entity in dizziness units. It is the most common cause of spontaneous recurrent episodic vertigo and accounts for approximately 10% of patients with vertigo and dizziness. Its broad spectrum poses a diagnostic problem of how to rule out Menière's disease or vestibular paroxysmia. 
Vestibular migraine should be included in the International Headache Classification of Headache Disorders (ICHD) as a subcategory of migraine. It should, however, be kept separate and distinct from basilar-type migraine and benign paroxysmal vertigo of childhood. We prefer the term \"vestibular migraine\" to \"migrainous vertigo,\" because the latter may also refer to various vestibular and non-vestibular symptoms. Antimigrainous medication to treat the single attack and to prevent recurring attacks appears to be effective, but the published evidence is weak. A randomized, double-blind, placebo-controlled study is required to evaluate medical treatment of this condition.", "title": "" }, { "docid": "5f1a273e8419836388faa49df63330c4", "text": "In this paper, the traditional k-modes clustering algorithm is extended by weighting attribute value matches in dissimilarity computation. The use of attribute value weighting technique makes it possible to generate clusters with stronger intra-similarities, and therefore achieve better clustering performance. Experimental results on real life datasets show that these value weighting based k-modes algorithms are superior to the standard k-modes algorithm with respect to clustering accuracy.", "title": "" }, { "docid": "8a42bc2dec684cf087d19bbbd2e815f8", "text": "Carefully managing the presentation of self via technology is a core practice on all modern social media platforms. Recently, selfies have emerged as a new, pervasive genre of identity performance. In many ways unique, selfies bring us fullcircle to Goffman—blending the online and offline selves together. In this paper, we take an empirical, Goffman-inspired look at the phenomenon of selfies. We report a large-scale, mixed-method analysis of the categories in which selfies appear on Instagram—an online community comprising over 400M people. Applying computer vision and network analysis techniques to 2.5M selfies, we present a typology of emergent selfie categories which represent emphasized identity statements. To the best of our knowledge, this is the first large-scale, empirical research on selfies. We conclude, contrary to common portrayals in the press, that selfies are really quite ordinary: they project identity signals such as wealth, health and physical attractiveness common to many online media, and to offline life.", "title": "" }, { "docid": "81b82ae24327c7d5c0b0bf4a04904826", "text": "AIM\nTo identify key predictors and moderators of mental health 'help-seeking behavior' in adolescents.\n\n\nBACKGROUND\nMental illness is highly prevalent in adolescents and young adults; however, individuals in this demographic group are among the least likely to seek help for such illnesses. Very little quantitative research has examined predictors of help-seeking behaviour in this demographic group.\n\n\nDESIGN\nA cross-sectional design was used.\n\n\nMETHODS\nA group of 180 volunteers between the ages of 17-25 completed a survey designed to measure hypothesized predictors and moderators of help-seeking behaviour. Predictors included a range of health beliefs, personality traits and attitudes. Data were collected in August 2010 and were analysed using two standard and three hierarchical multiple regression analyses.\n\n\nFINDINGS\nThe standard multiple regression analyses revealed that extraversion, perceived benefits of seeking help, perceived barriers to seeking help and social support were direct predictors of help-seeking behaviour. 
Tests of moderated relationships (using hierarchical multiple regression analyses) indicated that perceived benefits were more important than barriers in predicting help-seeking behaviour. In addition, perceived susceptibility did not predict help-seeking behaviour unless individuals were health conscious to begin with or they believed that they would benefit from help.\n\n\nCONCLUSION\nA range of personality traits, attitudes and health beliefs can predict help-seeking behaviour for mental health problems in adolescents. The variable 'Perceived Benefits' is of particular importance as it is: (1) a strong and robust predictor of help-seeking behaviour; and (2) a factor that can theoretically be modified based on health promotion programmes.", "title": "" }, { "docid": "2f04cd1b83b2ec17c9930515e8b36b95", "text": "Traditionally, visualization design assumes that the e↵ectiveness of visualizations is based on how much, and how clearly, data are presented. We argue that visualization requires a more nuanced perspective. Data are not ends in themselves, but means to an end (such as generating knowledge or assisting in decision-making). Focusing on the presentation of data per se can result in situations where these higher goals are ignored. This is especially the case for situations where cognitive or perceptual biases make the presentation of “just” the data as misleading as willful distortion. We argue that we need to de-sanctify data, and occasionally promote designs which distort or obscure data in service of understanding. We discuss examples of beneficial embellishment, distortion, and obfuscation in visualization, and argue that these examples are representative of a wider class of techniques for going beyond simplistic presentations of data.", "title": "" }, { "docid": "ae94106e02e05a38aa50842d7978c2c0", "text": "Fast and reliable face and facial feature detection are required abilities for any Human Computer Interaction approach based on Computer Vision. Since the publication of the Viola-Jones object detection framework and the more recent open source implementation, an increasing number of applications have appeared, particularly in the context of facial processing. In this respect, the OpenCV community shares a collection of public domain classifiers for this scenario. However, as far as we know these classifiers have never been evaluated and/or compared. In this paper we analyze the individual performance of all those public classifiers getting the best performance for each target. These results are valid to define a baseline for future approaches. Additionally we propose a simple hierarchical combination of those classifiers to increase the facial feature detection rate while reducing the face false detection rate.", "title": "" }, { "docid": "db26de1462b3e8e53bf54846849ae2c2", "text": "The design and development of process-aware information systems is often supported by specifying requirements as business process models. Although this approach is generally accepted as an effective strategy, it remains a fundamental challenge to adequately validate these models given the diverging skill set of domain experts and system analysts. As domain experts often do not feel confident in judging the correctness and completeness of process models that system analysts create, the validation often has to regress to a discourse using natural language. In order to support such a discourse appropriately, so-called verbalization techniques have been defined for different types of conceptual models. 
However, there is currently no sophisticated technique available that is capable of generating natural-looking text from process models. In this paper, we address this research gap and propose a technique for generating natural language texts from business process models. A comparison with manually created process descriptions demonstrates that the generated texts are superior in terms of completeness, structure, and linguistic complexity. An evaluation with users further demonstrates that the texts are very understandable and effectively allow the reader to infer the process model semantics. Hence, the generated texts represent a useful input for process model validation.", "title": "" }, { "docid": "9ddddb7775122ed13544b37c70607507", "text": "We present results from a multi-generational study of collocated group console gaming. We examine the intergenerational gaming practices of four generations of gamers, from ages 3 to 83 and, in particular, the roles that gamers of different generations take on when playing together in groups. Our findings highlight the extent to which existing gaming technologies are amenable to interactions within collocated intergenerational groups and the broader set of roles that have emerged in these computer-mediated interactions than have previously been documented by studies of more traditional collocated, intergenerational interactions. We articulate attributes of the games that encourage intergenerational interaction.", "title": "" }, { "docid": "1c365e6256ae1c404c6f3f145eb04924", "text": "Progress in signal processing continues to enable welcome advances in high-frequency (HF) radio performance and efficiency. The latest data waveforms use channels wider than 3 kHz to boost data throughput and robustness. This has driven the need for a more capable Automatic Link Establishment (ALE) system that links faster and adapts the wideband HF (WBHF) waveform to efficiently use available spectrum. In this paper, we investigate the possibility and advantages of using various non-scanning ALE techniques with the new wideband ALE (WALE) to further improve spectrum awareness and linking speed.", "title": "" }, { "docid": "24bd1a178fde153c8ee8a4fa332611cf", "text": "This paper proposes a comprehensive methodology for the design of a controllable electric vehicle charger capable of making the most of the interaction with an autonomous smart energy management system (EMS) in a residential setting. Autonomous EMSs aim achieving the potential benefits associated with energy exchanges between consumers and the grid, using bidirectional and power-controllable electric vehicle chargers. A suitable design for a controllable charger is presented, including the sizing of passive elements and controllers. This charger has been implemented using an experimental setup with a digital signal processor to validate its operation. The experimental results obtained foresee an adequate interaction between the proposed charger and a compatible autonomous EMS in a typical residential setting.", "title": "" }, { "docid": "c773efb805899ee9e365b5f19ddb40bc", "text": "In this paper, we overview the 2009 Simulated Car Racing Championship-an event comprising three competitions held in association with the 2009 IEEE Congress on Evolutionary Computation (CEC), the 2009 ACM Genetic and Evolutionary Computation Conference (GECCO), and the 2009 IEEE Symposium on Computational Intelligence and Games (CIG). First, we describe the competition regulations and the software framework. 
Then, the five best teams describe the methods of computational intelligence they used to develop their drivers and the lessons they learned from the participation in the championship. The organizers provide short summaries of the other competitors. Finally, we summarize the championship results, followed by a discussion about what the organizers learned about 1) the development of high-performing car racing controllers and 2) the organization of scientific competitions.", "title": "" }, { "docid": "90b502cb72488529ec0d389ca99b57b8", "text": "The cross-entropy loss commonly used in deep learning is closely related to the defining properties of optimal representations, but does not enforce some of the key properties. We show that this can be solved by adding a regularization term, which is in turn related to injecting multiplicative noise in the activations of a Deep Neural Network, a special case of which is the common practice of dropout. We show that our regularized loss function can be efficiently minimized using Information Dropout, a generalization of dropout rooted in information theoretic principles that automatically adapts to the data and can better exploit architectures of limited capacity. When the task is the reconstruction of the input, we show that our loss function yields a Variational Autoencoder as a special case, thus providing a link between representation learning, information theory and variational inference. Finally, we prove that we can promote the creation of optimal disentangled representations simply by enforcing a factorized prior, a fact that has been observed empirically in recent work. Our experiments validate the theoretical intuitions behind our method, and we find that Information Dropout achieves a comparable or better generalization performance than binary dropout, especially on smaller models, since it can automatically adapt the noise to the structure of the network, as well as to the test sample.", "title": "" }, { "docid": "2ae96a524ba3b6c43ea6bfa112f71a30", "text": "Accurate quantification of gluconeogenic flux following alcohol ingestion in overnight-fasted humans has yet to be reported. [2-¹³C₁]glycerol, [U-¹³C₆]glucose, [1-²H₁]galactose, and acetaminophen were infused in normal men before and after the consumption of 48 g alcohol or a placebo to quantify gluconeogenesis, glycogenolysis, hepatic glucose production, and intrahepatic gluconeogenic precursor availability. Gluconeogenesis decreased 45% vs. the placebo (0.56 ± 0.05 to 0.44 ± 0.04 mg ⋅ kg⁻¹ ⋅ min⁻¹ vs. 0.44 ± 0.05 to 0.63 ± 0.09 mg ⋅ kg⁻¹ ⋅ min⁻¹, respectively, P < 0.05) in the 5 h after alcohol ingestion, and total gluconeogenic flux was lower after alcohol compared with placebo. Glycogenolysis fell over time after both the alcohol and placebo cocktails, from 1.46-1.47 mg ⋅ kg⁻¹ ⋅ min⁻¹ to 1.35 ± 0.17 mg ⋅ kg⁻¹ ⋅ min⁻¹ (alcohol) and 1.26 ± 0.20 mg ⋅ kg⁻¹ ⋅ min⁻¹, respectively (placebo, P < 0.05 vs. baseline). Hepatic glucose output decreased 12% after alcohol consumption, from 2.03 ± 0.21 to 1.79 ± 0.21 mg ⋅ kg⁻¹ ⋅ min⁻¹ (P < 0.05 vs. baseline), but did not change following the placebo. Estimated intrahepatic gluconeogenic precursor availability decreased 61% following alcohol consumption (P < 0.05 vs. baseline) but was unchanged after the placebo (P < 0.05 between treatments).
We conclude from these results that gluconeogenesis is inhibited after alcohol consumption in overnight-fasted men, with a somewhat larger decrease in availability of gluconeogenic precursors but a smaller effect on glucose production and no effect on plasma glucose concentrations. Thus inhibition of flux into the gluconeogenic precursor pool is compensated by changes in glycogenolysis, the fate of triose-phosphates, and peripheral tissue utilization of plasma glucose.", "title": "" } ]
scidocsrr
2b87c4c7c558342c8daf9fbc3234cb48
The particle swarm optimization algorithm: convergence analysis and parameter selection
[ { "docid": "555ad116b9b285051084423e2807a0ba", "text": "The performance of particle swarm optimization using an inertia weight is compared with performance using a constriction factor. Five benchmark functions are used for the comparison. It is concluded that the best approach is to use the constriction factor while limiting the maximum velocity Vmax to the dynamic range of the variable Xmax on each dimension. This approach provides performance on the benchmark functions superior to any other published results known by the authors.", "title": "" } ]
[ { "docid": "e198dab977ba3e97245ecdd07fd25690", "text": "The majority of the human genome consists of non-coding regions that have been called junk DNA. However, recent studies have unveiled that these regions contain cis-regulatory elements, such as promoters, enhancers, silencers, insulators, etc. These regulatory elements can play crucial roles in controlling gene expressions in specific cell types, conditions, and developmental stages. Disruption to these regions could contribute to phenotype changes. Precisely identifying regulatory elements is key to deciphering the mechanisms underlying transcriptional regulation. Cis-regulatory events are complex processes that involve chromatin accessibility, transcription factor binding, DNA methylation, histone modifications, and the interactions between them. The development of next-generation sequencing techniques has allowed us to capture these genomic features in depth. Applied analysis of genome sequences for clinical genetics has increased the urgency for detecting these regions. However, the complexity of cis-regulatory events and the deluge of sequencing data require accurate and efficient computational approaches, in particular, machine learning techniques. In this review, we describe machine learning approaches for predicting transcription factor binding sites, enhancers, and promoters, primarily driven by next-generation sequencing data. Data sources are provided in order to facilitate testing of novel methods. The purpose of this review is to attract computational experts and data scientists to advance this field.", "title": "" }, { "docid": "895da346d947feba89cb171accb3f142", "text": "A six-phase six-step voltage-fed induction motor is presented. The inverter is a transistorized six-step voltage source inverter, while the motor is a modified standard three-phase squirrel-cage motor. The stator is rewound with two three-phase winding sets displaced from each other by 30 electrical degrees. A model for the system is developed to simulate the drive and predict its performance. The simulation results for steady-state conditions and experimental measurements show very good correlation. It is shown that this winding configuration results in the elimination of all air-gap flux time harmonics of the order (6v ±1, v = 1,3,5,...). Consequently, all rotor copper losses produced by these harmonics as well as all torque harmonics of the order (6v, v = 1,3,5,...) are eliminated. A comparison between-the measured instantaneous torque of both three-phase and six-phase six-step voltage-fed induction machines shows the advantage of the six-phase system over the three-phase system in eliminating the sixth harmonic dominant torque ripple.", "title": "" }, { "docid": "c692dd35605c4af62429edef6b80c121", "text": "As one of the most important mid-level features of music, chord contains rich information of harmonic structure that is useful for music information retrieval. In this paper, we present a chord recognition system based on the N-gram model. The system is time-efficient, and its accuracy is comparable to existing systems. We further propose a new method to construct chord features for music emotion classification and evaluate its performance on commercial song recordings. 
Experimental results demonstrate the advantage of using chord features for music classification and retrieval.", "title": "" }, { "docid": "dd8222a589e824b5189194ab697f27d7", "text": "Facial expression recognition has been investigated for many years, and there are two popular models: Action Units (AUs) and the Valence-Arousal space (V-A space) that have been widely used. However, most of the databases for estimating V-A intensity are captured in laboratory settings, and the benchmarks \"in-the-wild\" do not exist. Thus, the First Affect-In-The-Wild Challenge released a database for V-A estimation while the videos were captured in wild condition. In this paper, we propose an integrated deep learning framework for facial attribute recognition, AU detection, and V-A estimation. The key idea is to apply AUs to estimate the V-A intensity since both AUs and V-A space could be utilized to recognize some emotion categories. Besides, the AU detector is trained based on the convolutional neural network (CNN) for facial attribute recognition. In experiments, we will show the results of the above three tasks to verify the performances of our proposed network framework.", "title": "" }, { "docid": "b98585e7ed4b34afb72f81aeae2ebdcc", "text": "The capability of transcribing music audio into music notation is a fascinating example of human intelligence. It involves perception (analyzing complex auditory scenes), cognition (recognizing musical objects), knowledge representation (forming musical structures), and inference (testing alternative hypotheses). Automatic music transcription (AMT), i.e., the design of computational algorithms to convert acoustic music signals into some form of music notation, is a challenging task in signal processing and artificial intelligence. It comprises several subtasks, including multipitch estimation (MPE), onset and offset detection, instrument recognition, beat and rhythm tracking, interpretation of expressive timing and dynamics, and score typesetting.", "title": "" }, { "docid": "acf4645478c28811d41755b0ed81fb39", "text": "Make more knowledge even in less time every day. You may not always spend your time and money to go abroad and get the experience and knowledge by yourself. Reading is a good alternative to do in getting this desirable knowledge and experience. You may gain many things from experiencing directly, but of course it will spend much money. So here, by reading social network data analytics social network data analytics, you can take more advantages with limited budget.", "title": "" }, { "docid": "eaad298fce83ade590a800d2318a2928", "text": "Space vector modulation (SVM) is the best modulation technique to drive 3-phase load such as 3-phase induction motor. In this paper, the pulse width modulation strategy with SVM is analyzed in detail. The modulation strategy uses switching time calculator to calculate the timing of voltage vector applied to the three-phase balanced-load. The principle of the space vector modulation strategy is performed using Matlab/Simulink. The simulation result indicates that this algorithm is flexible and suitable to use for advance vector control. The strategy of the switching minimizes the distortion of load current as well as loss due to minimize number of commutations in the inverter.", "title": "" }, { "docid": "10aca07789cf8e465443ac9813eef189", "text": "INTRODUCTION\nThe faculty of Medicine, (FOM) Makerere University Kampala was started in 1924 and has been running a traditional curriculum for 79 years. 
A few years back it embarked on changing its curriculum from traditional to Problem Based Learning (PBL) and Community Based Education and Service (COBES) as well as early clinical exposure. This curriculum has been implemented since the academic year 2003/2004. The study was done to describe the steps taken to change and implement the curriculum at the Faculty of Medicine, Makerere University Kampala.\n\n\nOBJECTIVE\nTo describe the steps taken to change and implement the new curriculum at the Faculty of Medicine.\n\n\nMETHODS\nThe stages taken during the process were described and analysed.\n\n\nRESULTS\nThe following stages were recognized characterization of Uganda's health status, analysis of government policy, analysis of old curriculum, needs assessment, adoption of new model (SPICES), workshop/retreats for faculty sensitization, incremental development of programs by faculty, implementation of new curriculum.\n\n\nCONCLUSION\nThe FOM has successfully embarked on curriculum change. This has not been without challenges. However, challenges have been taken on and handled as they arose and this has led to the implementation of new curriculum. Problem based learning can be adopted even in a low resourced country like Uganda.", "title": "" }, { "docid": "2f2c99ac066dd2875fcfa2dc42467757", "text": "The popularity of wireless networks has increased in recent years and is becoming a common addition to LANs. In this paper we investigate a novel use for a wireless network based on the IEEE 802.11 standard: inferring the location of a wireless client from signal quality measures. Similar work has been limited to prototype systems that rely on nearest-neighbor techniques to infer location. In this paper, we describe Nibble, a Wi-Fi location service that uses Bayesian networks to infer the location of a device. We explain the general theory behind the system and how to use the system, along with describing our experiences at a university campus building and at a research lab. We also discuss how probabilistic modeling can be applied to a diverse range of applications that use sensor data.", "title": "" }, { "docid": "9978f33847a09c651ccce68c3b88287f", "text": "We propose a method for discovering the dependency relationships between the topics of documents shared in social networks using the latent social interactions, attempting to answer the question: given a seemingly new topic, from where does this topic evolve? In particular, we seek to discover the pair-wise probabilistic dependency in topics of documents which associate social actors from a latent social network, where these documents are being shared. By viewing the evolution of topics as a Markov chain, we estimate a Markov transition matrix of topics by leveraging social interactions and topic semantics. Metastable states in a Markov chain are applied to the clustering of topics. Applied to the CiteSeer dataset, a collection of documents in academia, we show the trends of research topics, how research topics are related and which are stable. We also show how certain social actors, authors, impact these topics and propose new ways for evaluating author impact.", "title": "" }, { "docid": "261ef8b449727b615f8cd5bd458afa91", "text": "Luck (2009) argues that gamers face a dilemma when it comes to performing certain virtual acts. Most gamers regularly commit acts of virtual murder, and take these acts to be morally permissible. 
They are permissible because unlike real murder, no one is harmed in performing them; their only victims are computer-controlled characters, and such characters are not moral patients. What Luck points out is that this justification equally applies to virtual pedophelia, but gamers intuitively think that such acts are not morally permissible. The result is a dilemma: either gamers must reject the intuition that virtual pedophelic acts are impermissible and so accept partaking in such acts, or they must reject the intuition that virtual murder acts are permissible, and so abstain from many (if not most) extant games. While the prevailing solution to this dilemma has been to try and find a morally relevant feature to distinguish the two cases, I argue that a different route should be pursued. It is neither the case that all acts of virtual murder are morally permissible, nor are all acts of virtual pedophelia impermissible. Our intuitions falter and produce this dilemma because they are not sensitive to the different contexts in which games present virtual acts.", "title": "" }, { "docid": "024168795536bc141bb07af74486ef78", "text": "Video-based person re-identification matches video clips of people across non-overlapping cameras. Most existing methods tackle this problem by encoding each video frame in its entirety and computing an aggregate representation across all frames. In practice, people are often partially occluded, which can corrupt the extracted features. Instead, we propose a new spatiotemporal attention model that automatically discovers a diverse set of distinctive body parts. This allows useful information to be extracted from all frames without succumbing to occlusions and misalignments. The network learns multiple spatial attention models and employs a diversity regularization term to ensure multiple models do not discover the same body part. Features extracted from local image regions are organized by spatial attention model and are combined using temporal attention. As a result, the network learns latent representations of the face, torso and other body parts using the best available image patches from the entire video sequence. Extensive evaluations on three datasets show that our framework outperforms the state-of-the-art approaches by large margins on multiple metrics.", "title": "" }, { "docid": "139d9d5866a1e455af954b2299bdbcf6", "text": "1. Introduction. Reasoning about knowledge and belief has long been an issue of concern in philosophy and artificial intelligence (cf. [Hil],[MH],[Mo]). Recently we have argued that reasoning about knowledge is also crucial in understanding and reasoning about protocols in distributed systems, since messages can be viewed as changing the state of knowledge of a system [HM]; knowledge also seems to be of vital importance in cryptography theory [Me] and database theory. In order to formally reason about knowledge, we need a good semantic model. Part of the difficulty in providing such a model is that there is no agreement on exactly what the properties of knowledge are or should be. (* This author's work was supported in part by DARPA contract N00039-82-C-0250.) For example, is it the case that you know what facts you know? Do you know what you don't know? Do you know only true things, or can something you \"know\" actually be false? Possible-worlds semantics provide a good formal tool for \"customizing\" a logic so that, by making minor changes in the semantics, we can capture different sets of axioms.
The idea, first formalized by Hintikka [Hil], is that in each state of the world, an agent (or knower or player: we use all these words interchangeably) has other states or worlds that he considers possible. An agent knows p exactly if p is true in all the worlds that he considers possible. As Kripke pointed out [Kr], by imposing various conditions on this possibility relation, we can capture a number of interesting axioms. For example, if we require that the real world always be one of the possible worlds (which amounts to saying that the possibility relation is reflexive), then it follows that you can't know anything false. Similarly, we can show that if the relation is transitive, then you know what you know. If the relation is transitive and symmetric, then you also know what you don't know. (The one-knower models where the possibility relation is reflexive corresponds to the classical modal logic T, while the reflexive and transitive case corresponds to S4, and the reflexive, symmetric and transitive case corresponds to S5.) Once we have a general framework for modelling knowledge, a reasonable question to ask is how hard it is to reason about knowledge. In particular, how hard is it to decide if a given formula is valid or satisfiable? The answer to this question depends crucially on the choice of axioms. For example, in the one-knower case, Ladner [La] has shown that for T and S4 the problem of deciding satisfiability is complete in polynomial space, while for S5 it is NP-complete, and thus no harder than the satisfiability problem for propositional logic. Our aim in this paper is to reexamine the possible-worlds framework for knowledge and belief with four particular points of emphasis: (1) we show how general techniques for finding decision procedures and complete axiomatizations apply to models for knowledge and belief, (2) we show how sensitive the difficulty of the decision procedure is to such issues as the choice of modal operators and the axiom system, (3) we discuss how notions of common knowledge and implicit knowledge among a group of agents fit into the possible-worlds framework, and, finally, (4) we consider to what extent the possible-worlds approach is a viable one for modelling knowledge and belief. We begin in Section 2 by reviewing possible-world semantics in detail, and proving that the many-knower versions of T, S4, and S5 do indeed capture some of the more common axiomatizations of knowledge. In Section 3 we turn to complexity-theoretic issues. We review some standard notions from complexity theory, and then reprove and extend Ladner's results to show that the decision procedures for the many-knower versions of T, S4, and S5 are all complete in polynomial space. This suggests that for S5, reasoning about many agents' knowledge is qualitatively harder than just reasoning about one agent's knowledge of the real world and of his own knowledge. In Section 4 we turn our attention to modifying the model so that it can deal with belief rather than knowledge, where one can believe something that is false. This turns out to be somewhat more complicated than dropping the assumption of reflexivity, but it can still be done in the possible-worlds framework. Results about decision procedures and complete axiomatizations for belief parallel those for knowledge.
In Section 5 we consider what happens when operators for common knowledge and implicit knowledge are added to the language. A group has common knowledge of a fact p exactly when everyone knows that everyone knows that everyone knows ... that p is true. (Common knowledge is essentially what McCarthy's \"fool\" knows; cf. [MSHI].) A group has implicit knowledge of p if, roughly speaking, when the agents pool their knowledge together they can deduce p. (Note our usage of the notion of \"implicit knowledge\" here differs slightly from the way it is used in [Lev2] and [FH].) As shown in [HM1], common knowledge is an essential state for reaching agreements and coordinating action. (* A problem is said to be complete with respect to a complexity class if, roughly speaking, it is the hardest problem in that class; see Section 3 for more details.) For very similar reasons, common knowledge also seems to play an important role in human understanding of speech acts (cf. [CM]). The notion of implicit knowledge arises when reasoning about what states of knowledge a group can attain through communication, and thus is also crucial when reasoning about the efficacy of speech acts and about communication protocols in distributed systems. It turns out that adding an implicit knowledge operator to the language does not substantially change the complexity of deciding the satisfiability of formulas in the language, but this is not the case for common knowledge. Using standard techniques from PDL (Propositional Dynamic Logic; cf. [FL],[Pr]), we can show that when we add common knowledge to the language, the satisfiability problem for the resulting logic (whether it is based on T, S4, or S5) is complete in deterministic exponential time, as long as there are at least two knowers. Thus, adding a common knowledge operator renders the decision procedure qualitatively more complex. (Common knowledge does not seem to be of much interest in the case of one knower. In fact, in the case of S4 and S5, if there is only one knower, knowledge and common knowledge are identical.) We conclude in Section 6 with some discussion of the appropriateness of the possible-worlds approach for capturing knowledge and belief, particularly in light of our results on computational complexity. Detailed proofs of the theorems stated here, as well as further discussion of these results, can be found in the full paper ([HM2]). 2.2 Possible-worlds semantics: Following Hintikka [Hil], Sato [Sa], Moore [Mo], and others, we use a possible-worlds semantics to model knowledge. This provides us with a general framework for our semantical investigations of knowledge and belief. (Everything we say about \"knowledge\" in this subsection applies equally well to belief.) The essential idea behind possible-worlds semantics is that an agent's state of knowledge corresponds to the extent to which he can determine what world he is in. In a given world, we can associate with each agent the set of worlds that, according to the agent's knowledge, could possibly be the real world. An agent is then said to know a fact p exactly if p is true in all the worlds in this set; he does not know p if there is at least one world that he considers possible where p does not hold. (* We discuss the ramifications of this point in Section 6. ** The name K(m) is inspired by the fact that for one knower, the system reduces to the well-known modal logic K.)
All that can be said is that we are modelling a rather idealised reasoner, who knows all tautologies and all the logical consequences of his knowledge. If we take the classical interpretation of knowledge as true, justified belief, then an axiom such as A3 seems to be necessary. On the other hand, philosophers have shown that axiom A5 does not hold with respect to this interpretation ([Len]). However, the S5 axioms do capture an interesting interpretation of knowledge appropriate for reasoning about distributed systems (see [HM1] and Section 6). We continue here with our investigation of all these logics, deferring further comments on their appropriateness to Section 6. Theorem 3 implies that the provable formulas of K(m) correspond precisely to the formulas that are valid for Kripke worlds. As Kripke showed [Kr], there are simple conditions that we can impose on the possibility relations Pi so that the valid formulas of the resulting worlds are exactly the provable formulas of T(m), S4(m), and S5(m) respectively. We will try to motivate these conditions, but first we need a few definitions. (* Since Lemma 4(b) says that a relation that is both reflexive and Euclidean must also be transitive, the reader may suspect that axiom A4 is redundant in S5. This indeed is the case.)", "title": "" }, { "docid": "9dac75a40e421163c4e05cfd5d36361f", "text": "In recent years, many data mining methods have been proposed for finding useful and structured information from market basket data. The association rule model was recently proposed in order to discover useful patterns and dependencies in such data. This paper discusses a method for indexing market basket data efficiently for similarity search. The technique is likely to be very useful in applications which utilize the similarity in customer buying behavior in order to make peer recommendations. We propose an index called the signature table, which is very flexible in supporting a wide range of similarity functions. The construction of the index structure is independent of the similarity function, which can be specified at query time. The resulting similarity search algorithm shows excellent scalability with increasing memory availability and database size.", "title": "" }, { "docid": "b3c9d10efd071659336a1521ce0f8465", "text": "The traditional diet in Okinawa is anchored by root vegetables (principally sweet potatoes), green and yellow vegetables, soybean-based foods, and medicinal plants. Marine foods, lean meats, fruit, medicinal garnishes and spices, tea, alcohol are also moderately consumed. Many characteristics of the traditional Okinawan diet are shared with other healthy dietary patterns, including the traditional Mediterranean diet, DASH diet, and Portfolio diet. All these dietary patterns are associated with reduced risk for cardiovascular disease, among other age-associated diseases. Overall, the important shared features of these healthy dietary patterns include: high intake of unrefined carbohydrates, moderate protein intake with emphasis on vegetables/legumes, fish, and lean meats as sources, and a healthy fat profile (higher in mono/polyunsaturated fats, lower in saturated fat; rich in omega-3). The healthy fat intake is likely one mechanism for reducing inflammation, optimizing cholesterol, and other risk factors.
Additionally, the lower caloric density of plant-rich diets results in lower caloric intake with concomitant high intake of phytonutrients and antioxidants. Other shared features include low glycemic load, less inflammation and oxidative stress, and potential modulation of aging-related biological pathways. This may reduce risk for chronic age-associated diseases and promote healthy aging and longevity.", "title": "" }, { "docid": "94ec2b6c24cbbbb8a648bd83873aa0c5", "text": "s since January 1975, a full-text search capacity, and a personal archive for saving articles and search results of interest. All articles can be printed in a format that is virtually identical to that of the typeset pages. Beginning six months after publication, the full text of all Original Articles and Special Articles is available free to nonsubscribers who have completed a brief registration. Copyright © 2003 Massachusetts Medical Society. All rights reserved. Downloaded from www.nejm.org at UNIV OF CINCINNATI SERIALS DEPT on August 8, 2007 .", "title": "" }, { "docid": "4185d65971d7345afbd7189368ed9303", "text": "Ticket annotation and search has become an essential research subject for the successful delivery of IT operational analytics. Millions of tickets are created yearly to address business users' IT related problems. In IT service desk management, it is critical to first capture the pain points for a group of tickets to determine root cause; secondly, to obtain the respective distributions in order to layout the priority of addressing these pain points. An advanced ticket analytics system utilizes a combination of topic modeling, clustering and Information Retrieval (IR) technologies to address the above issues and the corresponding architecture which integrates of these features will allow for a wider distribution of this technology and progress to a significant financial benefit for the system owner. Topic modeling has been used to extract topics from given documents; in general, each topic is represented by a unigram language model. However, it is not clear how to interpret the results in an easily readable/understandable way until now. Due to the inefficiency to render top concepts using existing techniques, in this paper, we propose a probabilistic framework, which consists of language modeling (especially the topic models), Part-Of-Speech (POS) tags, query expansion, retrieval modeling and so on for the practical challenge. The rigorously empirical experiments demonstrate the consistent and utility performance of the proposed method on real datasets.", "title": "" }, { "docid": "d22e8f2029e114b0c648a2cdfba4978a", "text": "This paper considers innovative marketing within the context of a micro firm, exploring how such firm’s marketing practices can take advantage of digital media. Factors that influence a micro firm’s innovative activities are examined and the development and implementation of digital media in the firm’s marketing practice is explored. Despite the significance of marketing and innovation to SMEs, a lack of literature and theory on innovation in marketing theory exists. Research suggests that small firms’ marketing practitioners and entrepreneurs have identified their marketing focus on the 4Is. This paper builds on knowledge in innovation and marketing and examines the process in a micro firm. A qualitative approach is applied using action research and case study approach. 
The relevant literature is reviewed as the starting point to diagnose problems and issues anticipated by business practitioners. A longitudinal study is used to illustrate the process of actions taken with evaluations and reflections presented. The exploration illustrates that in practice much of the marketing activities within micro firms are driven by incremental innovation. This research emphasises that integrating Information Communication Technologies (ICTs) successfully in marketing requires marketers to take an active managerial role far beyond their traditional areas of competence and authority.", "title": "" }, { "docid": "cf6a7252039826211635cc9221f1db66", "text": "Blockchain technologies are gaining massive momentum in the last few years. Blockchains are distributed ledgers that enable parties who do not fully trust each other to maintain a set of global states. The parties agree on the existence, values, and histories of the states. As the technology landscape is expanding rapidly, it is both important and challenging to have a firm grasp of what the core technologies have to offer, especially with respect to their data processing capabilities. In this paper, we first survey the state of the art, focusing on private blockchains (in which parties are authenticated). We analyze both in-production and research systems in four dimensions: distributed ledger, cryptography, consensus protocol, and smart contract. We then present BLOCKBENCH, a benchmarking framework for understanding performance of private blockchains against data processing workloads. We conduct a comprehensive evaluation of three major blockchain systems based on BLOCKBENCH, namely Ethereum, Parity, and Hyperledger Fabric. The results demonstrate several trade-offs in the design space, as well as big performance gaps between blockchain and database systems. Drawing from design principles of database systems, we discuss several research directions for bringing blockchain performance closer to the realm of databases.", "title": "" } ]
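The verbal definitions in the Halpern–Moses excerpt above (iterated "everyone knows" for common knowledge, pooled knowledge for implicit knowledge) have a compact standard formulation. The notation below is the usual textbook one and is offered only as a reading aid; it is not necessarily the notation of the cited paper:

```latex
% E_G: "everyone in group G knows";  K_i: "agent i knows"
E_G\,\varphi \;:=\; \bigwedge_{i \in G} K_i\,\varphi
% Common knowledge: everyone knows, everyone knows that everyone knows, ...
C_G\,\varphi \;:=\; \bigwedge_{k \ge 1} E_G^{\,k}\,\varphi
% Implicit (distributed) knowledge: \varphi holds at every world the agents
% jointly consider possible, i.e. after pooling their knowledge:
(M,w) \models D_G\,\varphi
  \iff (M,v) \models \varphi \ \text{for all}\ v \in \bigcap_{i \in G} \mathcal{P}_i(w)
```

On this reading, the excerpt's complexity claim is that satisfiability for the language extended with the common knowledge operator becomes complete for deterministic exponential time once there are at least two knowers, while adding the implicit knowledge operator leaves the complexity essentially unchanged.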
scidocsrr
9277968249a44de6d80e829cdafc1e57
A Vision-Based Counting and Recognition System for Flying Insects in Intelligent Agriculture
[ { "docid": "84ca7dc9cac79fe14ea2061919c44a05", "text": "We describe two new color indexing techniques. The rst one is a more robust version of the commonly used color histogram indexing. In the index we store the cumulative color histograms. The L 1-, L 2-, or L 1-distance between two cumulative color histograms can be used to deene a similarity measure of these two color distributions. We show that while this method produces only slightly better results than color histogram methods, it is more robust with respect to the quantization parameter of the histograms. The second technique is an example of a new approach to color indexing. Instead of storing the complete color distributions, the index contains only their dominant features. We implement this approach by storing the rst three moments of each color channel of an image in the index, i.e., for a HSV image we store only 9 oating point numbers per image. The similarity function which is used for the retrieval is a weighted sum of the absolute diierences between corresponding moments. Our tests clearly demonstrate that a retrieval based on this technique produces better results and runs faster than the histogram-based methods.", "title": "" } ]
[ { "docid": "c4676e3c0fea689408e27ee197f993a3", "text": "20140530 is provided in screen-viewable form for personal use only by members of MIT CogNet. Unauthorized use or dissemination of this information is expressly forbidden. If you have any questions about this material, please contact cognetadmin@cognet.mit.edu.", "title": "" }, { "docid": "19d2c60e0c293d8104c0e6b4005c996e", "text": "An electronic scanning antenna (ESA) that uses a beam former, such as a Rotman lens, has the ability to form multiple beams for shared-aperture applications. This characteristic makes the antenna suitable for integration into systems exploiting the multi-function radio frequency (MFRF) concept, meeting the needs for a future combat system (FCS) RF sensor. An antenna which electronically scans 45/spl deg/ in azimuth has been built and successfully tested at ARL to demonstrate this multiple-beam, shared-aperture approach at K/sub a/ band. Subsequent efforts are focused on reducing the component size and weight while extending the scanning ability of the antenna to a full hemisphere with both azimuth and elevation scanning. Primary emphasis has been on the beamformer, a Rotman lens or similar device, and the switches used to select the beams. Approaches described include replacing the cavity Rotman lens used in the prototype MFRF system with a dielectrically loaded Rotman lens having a waveguide-fed cavity, a microstrip-fed parallel plate, or a surface-wave configuration in order to reduce the overall size. The paper discusses the challenges and progress in the development of Rotman lens beam formers to support such an antenna.", "title": "" }, { "docid": "4e7582d4e8db248f10f8fbe97522190a", "text": "Recent advances in semantic epistemolo-gies and flexible symmetries offer a viable alternative to the lookaside buffer. Here, we verify the analysis of systems. Though such a hypothesis is never an appropriate purpose, it mostly conflicts with the need to provide model checking to scholars. We show that though link-level acknowledge-99] can be made electronic, game-theoretic, and virtual, model checking and architecture can agree to solve this question.", "title": "" }, { "docid": "43e3d3639d30d9e75da7e3c5a82db60a", "text": "This paper studies deep network architectures to address the problem of video classification. A multi-stream framework is proposed to fully utilize the rich multimodal information in videos. Specifically, we first train three Convolutional Neural Networks to model spatial, short-term motion and audio clues respectively. Long Short Term Memory networks are then adopted to explore long-term temporal dynamics. With the outputs of the individual streams, we propose a simple and effective fusion method to generate the final predictions, where the optimal fusion weights are learned adaptively for each class, and the learning process is regularized by automatically estimated class relationships. Our contributions are two-fold. First, the proposed multi-stream framework is able to exploit multimodal features that are more comprehensive than those previously attempted. Second, we demonstrate that the adaptive fusion method using the class relationship as a regularizer outperforms traditional alternatives that estimate the weights in a “free” fashion. 
Our framework produces significantly better results than the state of the arts on two popular benchmarks, 92.2% on UCF-101 (without using audio) and 84.9% on Columbia Consumer Videos.", "title": "" }, { "docid": "21147cc465a671b2513bf87edb202b6d", "text": "We present a new off-line electronic cash system based on a problem, called the representation problem, of which little use has been made in literature thus far. Our system is the first to be based entirely on discrete logarithms. Using the representation problem as a basic concept, some techniques are introduced that enable us to construct protocols for withdrawal and payment that do not use the cut and choose methodology of earlier systems. As a consequence, our cash system is much more efficient in both computation and communication complexity than previously proposed systems. Another important aspect of our system concerns its provability. Contrary to previously proposed systems, its correctness can be mathematically proven to a very great extent. Specifically, if we make one plausible assumption concerning a single hash-function, the ability to break the system seems to imply that one can break the Diffie-Hellman problem. Our system offers a number of extensions that are hard to achieve in previously known systems. In our opinion the most interesting of these is that the entire cash system (including all the extensions) can be incorporated straightforwardly in a setting based on wallets with observers, which has the important advantage that double-spending can be prevented in the first place, rather than detecting the identity of a double-spender after the fact. In particular, it can be incorporated even under the most stringent requirements conceivable about the privacy of the user, which seems to be impossible to do with previously proposed systems. Another benefit of our system is that framing attempts by a bank have negligible probability of success (independent of computing power) by a simple mechanism from within the system, which is something that previous solutions lack entirely. Furthermore, the basic cash system can be extended to checks, multi-show cash and divisibility, while retaining its computational efficiency. Although in this paper we only make use of the representation problem in groups of prime order, similar intractable problems hold in RSA-groups (with computational equivalence to factoring and computing RSA-roots). We discuss how one can use these problems to construct an efficient cash system with security related to factoring or computation of RSA-roots, in an analogous way to the discrete log based system. Finally, we discuss a decision problem (the decision variant of the Diffie-Hellman problem) that is strongly related to undeniable signatures, which to our knowledge has never been stated in literature and of which we do not know whether it is in BPP. A proof of its status would be of interest to discrete log based cryptography in general. Using the representation problem, we show in the appendix how to batch the confirmation protocol of undeniable signatures such that polynomially many undeniable signatures can be verified in four moves. AMS Subject Classification (1991): 94A60 CR Subject Classification (1991): D.4.6", "title": "" }, { "docid": "fa665333f76eaa4dd5861d3b127b0f40", "text": "A four-layer transmitarray operating at 30 GHz is designed using a dual-resonant double square ring as the unit cell element.
The two resonances of the double ring are used to increase the per-layer phase variation while maintaining a wide transmission magnitude bandwidth of the unit cell. The design procedure for both the single-layer unit cell and the cascaded connection of four layers is described and it leads to a 50% increase in the -1 dB gain bandwidth over that of previous transmitarrays. Results of a 7.5% -1 dB gain bandwidth and 47% radiation efficiency are reported.", "title": "" }, { "docid": "e95fa624bb3fd7ea45650213088a43b0", "text": "In recent years, much research has been conducted on image super-resolution (SR). To the best of our knowledge, however, few SR methods were concerned with compressed images. The SR of compressed images is a challenging task due to the complicated compression artifacts, while many images suffer from them in practice. The intuitive solution for this difficult task is to decouple it into two sequential but independent subproblems, i.e., compression artifacts reduction (CAR) and SR. Nevertheless, some useful details may be removed in CAR stage, which is contrary to the goal of SR and makes the SR stage more challenging. In this paper, an end-to-end trainable deep convolutional neural network is designed to perform SR on compressed images (CISRDCNN), which reduces compression artifacts and improves image resolution jointly. Experiments on compressed images produced by JPEG (we take the JPEG as an example in this paper) demonstrate that the proposed CISRDCNN yields state-of-the-art SR performance on commonly used test images and imagesets. The results of CISRDCNN on real low quality web images are also very impressive, with obvious quality enhancement. Further, we explore the application of the proposed SR method in low bit-rate image coding, leading to better rate-distortion performance than JPEG.", "title": "" }, { "docid": "a6defeca542d1586e521a56118efc56f", "text": "We expose and explore technical and trust issues that arise in acquiring forensic evidence from infrastructure-as-aservice cloud computing and analyze some strategies for addressing these challenges. First, we create a model to show the layers of trust required in the cloud. Second, we present the overarching context for a cloud forensic exam and analyze choices available to an examiner. Third, we provide for the first time an evaluation of popular forensic acquisition tools including Guidance EnCase and AccesData Forensic Toolkit, and show that they can successfully return volatile and non-volatile data from the cloud. We explain, however, that with those techniques judge and jury must accept a great deal of trust in the authenticity and integrity of the data from many layers of the cloud model. In addition, we explore four other solutions for acquisition—Trusted Platform Modules, the management plane, forensics as a service, and legal solutions, which assume less trust but require more cooperation from the cloud service provider. Our work lays a foundation for future development of new acquisition methods for the cloud that will be trustworthy and forensically sound. Our work also helps forensic examiners, law enforcement, and the court evaluate confidence in evidence from the cloud.", "title": "" }, { "docid": "eaf3d25c7babb067e987b2586129e0e4", "text": "Iterative refinement reduces the roundoff errors in the computed solution to a system of linear equations. Only one step requires higher precision arithmetic. 
If sufficiently high precision is used, the final result is shown to be very accurate.", "title": "" }, { "docid": "ebca43d1e96ead6d708327d807b9e72f", "text": "Weakly supervised semantic segmentation has been a subject of increased interest due to the scarcity of fully annotated images. We introduce a new approach for solving weakly supervised semantic segmentation with deep Convolutional Neural Networks (CNNs). The method introduces a novel layer which applies simplex projection on the output of a neural network using area constraints of class objects. The proposed method is general and can be seamlessly integrated into any CNN architecture. Moreover, the projection layer allows strongly supervised models to be adapted to weakly supervised models effortlessly by substituting ground truth labels. Our experiments have shown that applying such an operation on the output of a CNN improves the accuracy of semantic segmentation in a weakly supervised setting with image-level labels.", "title": "" }, { "docid": "90faa9a8dc3fd87614a61bfbdf24cab6", "text": "The methods proposed recently for specializing word embeddings according to a particular perspective generally rely on external knowledge. In this article, we propose Pseudofit, a new method for specializing word embeddings according to semantic similarity without any external knowledge. Pseudofit exploits the notion of pseudo-sense for building several representations for each word and uses these representations for making the initial embeddings more generic. We illustrate the interest of Pseudofit for acquiring synonyms and study several variants of Pseudofit according to this perspective.", "title": "" }, { "docid": "e9b6bceebe87a5a97fbcbb01f6e6544b", "text": "OBJECTIVES\nTo investigate the prevalence, location, size and course of the anastomosis between the dental branch of the posterior superior alveolar artery (PSAA), known as alveolar antral artery (AAA), and the infraorbital artery (IOA).\n\n\nMATERIAL AND METHODS\nThe first part of the study was performed on 30 maxillary sinuses deriving from 15 human cadaver heads. In order to visualize such anastomosis, the vascular network afferent to the sinus was injected with liquid latex mixed with green India ink through the external carotid artery. The second part of the study consisted of 100 CT scans from patients scheduled for sinus lift surgery.\n\n\nRESULTS\nAn anastomosis between the AAA and the IOA was found by dissection in the context of the sinus anterolateral wall in 100% of cases, while a well-defined bony canal was detected radiographically in 94 out of 200 sinuses (47% of cases). The mean vertical distance from the lowest point of this bony canal to the alveolar crest was 11.25 ± 2.99 mm (SD) in maxillae examined by CT. The canal diameter was <1 mm in 55.3% of cases, 1-2 mm in 40.4% of cases and 2-3 mm in 4.3% of cases. In 100% of cases, the AAA was found to be partially intra-osseous, that is between the Schneiderian membrane and the lateral bony wall of the sinus, in the area selected for sinus antrostomy.\n\n\nCONCLUSIONS\nA sound knowledge of the maxillary sinus vascular anatomy and its careful analysis by CT scan is essential to prevent complications during surgical interventions involving this region.", "title": "" }, { "docid": "8800dba6bb4cea195c8871eb5be5b0a8", "text": "Text summarization and sentiment classification, in NLP, are two main tasks implemented on text analysis, focusing on extracting the major idea of a text at different levels. 
Based on the characteristics of both, sentiment classification can be regarded as a more abstractive summarization task. According to the scheme, a Self-Attentive Hierarchical model for jointly improving text Summarization and Sentiment Classification (SAHSSC) is proposed in this paper. This model jointly performs abstractive text summarization and sentiment classification within a hierarchical end-to-end neural framework, in which the sentiment classification layer on top of the summarization layer predicts the sentiment label in the light of the text and the generated summary. Furthermore, a self-attention layer is also proposed in the hierarchical framework, which is the bridge that connects the summarization layer and the sentiment classification layer and aims at capturing emotional information at text-level as well as summary-level. The proposed model can generate a more relevant summary and lead to a more accurate summary-aware sentiment prediction. Experimental results evaluated on SNAP amazon online review datasets show that our model outperforms the state-of-the-art baselines on both abstractive text summarization and sentiment classification by a considerable margin.", "title": "" }, { "docid": "d67ee0219625f02ff7023e4d0d39e8d8", "text": "In information retrieval, pseudo-relevance feedback (PRF) refers to a strategy for updating the query model using the top retrieved documents. PRF has been proven to be highly effective in improving the retrieval performance. In this paper, we look at the PRF task as a recommendation problem: the goal is to recommend a number of terms for a given query along with weights, such that the final weights of terms in the updated query model better reflect the terms' contributions in the query. To do so, we propose RFMF, a PRF framework based on matrix factorization which is a state-of-the-art technique in collaborative recommender systems. Our purpose is to predict the weight of terms that have not appeared in the query and matrix factorization techniques are used to predict these weights. In RFMF, we first create a matrix whose elements are computed using a weight function that shows how much a term discriminates the query or the top retrieved documents from the collection. Then, we re-estimate the created matrix using a matrix factorization technique. Finally, the query model is updated using the re-estimated matrix. RFMF is a general framework that can be employed with any retrieval model. In this paper, we implement this framework for two widely used document retrieval frameworks: language modeling and the vector space model. Extensive experiments over several TREC collections demonstrate that the RFMF framework significantly outperforms competitive baselines. These results indicate the potential of using other recommendation techniques in this task.", "title": "" }, { "docid": "0d1193978e4f8be0b78c6184d7ece3fe", "text": "Network representations of systems from various scientific and societal domains are neither completely random nor fully regular, but instead appear to contain recurring structural building blocks [1]. These features tend to be shared by networks belonging to the same broad class, such as the class of social networks or the class of biological networks. At a finer scale of classification within each such class, networks describing more similar systems tend to have more similar features. 
This occurs presumably because networks representing similar purposes or constructions would be expected to be generated by a shared set of domain specific mechanisms, and it should therefore be possible to classify these networks into categories based on their features at various structural levels. Here we describe and demonstrate a new, hybrid approach that combines manual selection of features of potential interest with existing automated classification methods. In particular, selecting well-known and well-studied features that have been used throughout social network analysis and network science [2, 3] and then classifying with methods such as random forests [4] that are of special utility in the presence of feature collinearity, we find that we achieve higher accuracy, in shorter computation time, with greater interpretability of the network classification results. Past work in the area of network classification has primarily focused on distinguishing networks from different categories using two different broad classes of approaches. In the first approach, network classification is carried out by examining certain specific structural features and investigating whether networks belonging to the same category are similar across one or more dimensions as defined by these features [5, 6, 7, 8]. In other words, in this approach the investigator manually chooses the structural characteristics of interest and more or less manually (informally) determines the regions of the feature space that correspond to different classes. These methods are scalable to large networks and yield results that are easily interpreted in terms of the characteristics of interest, but in practice they tend to lead to suboptimal classification accuracy. In the second approach, network classification is done by using very flexible machine learning classifiers that, when presented with a network as an input, classify its category or class as an output. To somewhat oversimplify, the first approach relies on manual feature specification followed by manual selection of a classification system, whereas the second approach is its opposite, relying on automated feature detection followed by automated classification. While …", "title": "" }, { "docid": "e584549afba4c444c32dfe67ee178a84", "text": "Bayesian networks (BNs) provide a means for representing, displaying, and making available in a usable form the knowledge of experts in a given field. In this paper, we look at the performance of an expert constructed BN compared with other machine learning (ML) techniques for predicting the outcome (win, lose, or draw) of matches played by Tottenham Hotspur Football Club. The period under study was 1995–1997 – the expert BN was constructed at the start of that period, based almost exclusively on subjective judgement. Our objective was to determine retrospectively the comparative accuracy of the expert BN compared to some alternative ML models that were built using data from the two-year period. The additional ML techniques considered were: MC4, a decision tree learner; Naive Bayesian learner; Data Driven Bayesian (a BN whose structure and node probability tables are learnt entirely from data); and a K-nearest neighbour learner. The results show that the expert BN is generally superior to the other techniques for this domain in predictive accuracy. The results are even more impressive for BNs given that, in a number of key respects, the study assumptions place them at a disadvantage.
For example, we have assumed that the BN prediction is ‘incorrect’ if a BN predicts more than one outcome as equally most likely (whereas, in fact, such a prediction would prove valuable to somebody who could place an ‘each way’ bet on the outcome). Although the expert BN has now long been irrelevant (since it contains variables relating to key players who have retired or left the club) the results here tend to confirm the excellent potential of BNs when they are built by a reliable domain expert. The ability to provide accurate predictions without requiring much learning data are an obvious bonus in any domain where data are scarce. Moreover, the BN was relatively simple for the expert to build and its structure could be used again in this and similar types of problems. © 2006 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "cb632cd4d78d85834838b7ac7a126efc", "text": "We present an approach to combining distributional semantic representations induced from text corpora with manually constructed lexical-semantic networks. While both kinds of semantic resources are available with high lexical coverage, our aligned resource combines the domain specificity and availability of contextual information from distributional models with the conciseness and high quality of manually crafted lexical networks. We start with a distributional representation of induced senses of vocabulary terms, which are accompanied with rich context information given by related lexical items. We then automatically disambiguate such representations to obtain a full-fledged proto-conceptualization, i.e. a typed graph of induced word senses. In a final step, this proto-conceptualization is aligned to a lexical ontology, resulting in a hybrid aligned resource. Moreover, unmapped induced senses are associated with a semantic type in order to connect them to the core resource. Manual evaluations against ground-truth judgments for different stages of our method as well as an extrinsic evaluation on a knowledge-based Word Sense Disambiguation benchmark all indicate the high quality of the new hybrid resource. Additionally, we show the benefits of enriching top-down lexical knowledge resources with bottom-up distributional information from text for addressing high-end knowledge acquisition tasks such as cleaning hypernym graphs and learning taxonomies from scratch.", "title": "" }, { "docid": "0ef2a90669c0469df0dc2281a414cf37", "text": "Web Intelligence is a direction for scientific research that explores practical applications of Artificial Intelligence to the next generation of Web-empowered systems. In this paper, we present a Web-based intelligent tutoring system for computer programming. The decision making process conducted in our intelligent system is guided by Bayesian networks, which are a formal framework for uncertainty management in Artificial Intelligence based on probability theory. Whereas many tutoring systems are static HTML Web pages of a class textbook or lecture notes, our intelligent system can help a student navigate through the online course materials, recommend learning goals, and generate appropriate reading sequences.", "title": "" }, { "docid": "96f13d8d4e12ef65948216286a0982c9", "text": "Regression test case selection techniques attempt to increase the testing effectiveness based on the measurement capabilities, such as cost, coverage, and fault detection. This systematic literature review presents state-of-the-art research in effective regression test case selection techniques.
We examined 47 empirical studies published between 2007 and 2015. The selected studies are categorized according to the selection procedure, empirical study design, and adequacy criteria with respect to their effectiveness measurement capability and methods used to measure the validity of these results.\n The results showed that mining and learning-based regression test case selection was reported in 39% of the studies, unit level testing was reported in 18% of the studies, and object-oriented environment (Java) was used in 26% of the studies. Structural faults, the most common target, was used in 55% of the studies. Overall, only 39% of the studies conducted followed experimental guidelines and are reproducible.\n There are 7 different cost measures, 13 different coverage types, and 5 fault-detection metrics reported in these studies. It is also observed that 70% of the studies being analyzed used cost as the effectiveness measure compared to 31% that used fault-detection capability and 16% that used coverage.", "title": "" } ]
scidocsrr
3973b47c48100d90604ee1a64dbea1df
Hierarchical Parsing Net: Semantic Scene Parsing From Global Scene to Objects
[ { "docid": "4d2be7aac363b77c6abd083947bc28c7", "text": "Scene parsing is challenging for unrestricted open vocabulary and diverse scenes. In this paper, we exploit the capability of global context information by different-region-based context aggregation through our pyramid pooling module together with the proposed pyramid scene parsing network (PSPNet). Our global prior representation is effective to produce good quality results on the scene parsing task, while PSPNet provides a superior framework for pixel-level prediction. The proposed approach achieves state-of-the-art performance on various datasets. It came first in ImageNet scene parsing challenge 2016, PASCAL VOC 2012 benchmark and Cityscapes benchmark. A single PSPNet yields the new record of mIoU accuracy 85.4% on PASCAL VOC 2012 and accuracy 80.2% on Cityscapes.", "title": "" } ]
[ { "docid": "d11c2dd512f680e79706f73d4cd3d0aa", "text": "We describe the class of convexified convolutional neural networks (CCNNs), which capture the parameter sharing of convolutional neural networks in a convex manner. By representing the nonlinear convolutional filters as vectors in a reproducing kernel Hilbert space, the CNN parameters can be represented in terms of a lowrank matrix, and the rank constraint can be relaxed so as to obtain a convex optimization problem. For learning two-layer convolutional neural networks, we prove that the generalization error obtained by a convexified CNN converges to that of the best possible CNN. For learning deeper networks, we train CCNNs in a layerwise manner. Empirically, we find that CCNNs achieve competitive or better performance than CNNs trained by backpropagation, SVMs, fully-connected neural networks, stacked denoising auto-encoders, and other baseline methods.", "title": "" }, { "docid": "b35d58ad8987bb4fd9d7df2c09a4daab", "text": "Visual search is necessary for rapid scene analysis because information processing in the visual system is limited to one or a few regions at one time [3]. To select potential regions or objects of interest rapidly with a task-independent manner, the so-called \"visual saliency\", is important for reducing the complexity of scenes. From the perspective of engineering, modeling visual saliency usually facilitates subsequent higher visual processing, such as image re-targeting [10], image compression [12], object recognition [16], etc. Visual attention model is deeply studied in recent decades. Most of existing models are built on the biologically-inspired architecture based on the famous Feature Integration Theory (FIT) [19, 20]. For instance, Itti et al. proposed a famous saliency model which computes the saliency map with local contrast in multiple feature dimensions, such as color, orientation, etc. [15] [23]. However, FIT-based methods perhaps risk being immersed in local saliency (e.g., object boundaries), because they employ local contrast of features in limited regions and ignore the global information. Visual attention models usually provide location information of the potential objects, but miss some object-related information (e.g., object surfaces) that is necessary for further object detection and recognition. Distinguished from FIT, Guided Search Theory (GST) [3] [24] provides a mechanism to search the regions of interest (ROI) or objects with the guidance from scene layout or top-down sources. The recent version of GST claims that the visual system searches objects of interest along two parallel pathways, i.e., the non-selective pathway and the selective pathway [3]. This new visual search strategy allows observers to extract spatial layout (or gist) information rapidly from entire scene via non-selective pathway. Then, this context information of scene acts as top-down modulation to guide the salient object search along the selective pathway. This two-pathway-based search strategy provides a parallel processing of global and local information for rapid visual search. Referring to the GST, we assume that the non-selective pathway provides \"where\" information and prior of multiple objects for visual search, a counterpart to visual selective saliency, and we use certain simple and fast fixation prediction method to provide an initial estimate of where the objects present. 
At the same time, the bottom-up visual selective pathway extracts fine image features in multiple cue channels, which could be regarded as a counterpart to the \"what\" pathway in visual system for object recognition. When these bottom-up features meet \"where\" information of objects, the visual system …", "title": "" }, { "docid": "06ae65d560af6e99cdc96495d32379d1", "text": "Recent advances in signal processing and machine learning techniques have enabled the application of Brain-Computer Interface (BCI) technologies to fields such as medicine, industry, and recreation; however, BCIs still suffer from the requirement of frequent calibration sessions due to the intra- and inter-individual variability of brain-signals, which makes calibration suppression through transfer learning an area of increasing interest for the development of practical BCI systems. In this paper, we present an unsupervised transfer method (spectral transfer using information geometry, STIG), which ranks and combines unlabeled predictions from an ensemble of information geometry classifiers built on data from individual training subjects. The STIG method is validated in both off-line and real-time feedback analysis during a rapid serial visual presentation task (RSVP). For detection of single-trial, event-related potentials (ERPs), the proposed method can significantly outperform existing calibration-free techniques as well as outperform traditional within-subject calibration techniques when limited data is available. This method demonstrates that unsupervised transfer learning for single-trial detection in ERP-based BCIs can be achieved without the requirement of costly training data, representing a step-forward in the overall goal of achieving a practical user-independent BCI system.", "title": "" }, { "docid": "e9f9be3fad4a9a71e26a75023929147d", "text": "BACKGROUND\nAesthetic surgery of female genitalia is an uncommon procedure, and of the techniques available, labia minora reduction can achieve excellent results. Recently, more conservative labia minora reduction techniques have been developed, because the simple isolated strategy of straight amputation does not ensure a favorable outcome. This study was designed to review a series of labia minora reductions using inferior wedge resection and superior pedicle flap reconstruction.\n\n\nMETHODS\nTwenty-one patients underwent inferior wedge resection and superior pedicle flap reconstruction. The mean follow-up was 46 months. Aesthetic results and postoperative outcomes were collected retrospectively and evaluated.\n\n\nRESULTS\nTwenty patients (95.2 percent) underwent bilateral procedures, and 90.4 percent of patients had a congenital labia minora hypertrophy. Five complications occurred in 21 patients (23.8 percent). Wound-healing problems were observed more frequently. The cosmetic result was considered to be good or very good in 85.7 percent of patients, and 95.2 percent were very satisfied with the procedure. All complications except one were observed immediately after the procedure.\n\n\nCONCLUSIONS\nThe results of this study demonstrate that inferior wedge resection and superior pedicle flap reconstruction is a simple and consistent technique and deserves a place among the main procedures available. The complications observed were not unexpected and did not extend hospital stay or interfere with the normal postoperative period. 
The success of the procedure depends on patient selection, careful preoperative planning, and adequate intraoperative management.", "title": "" }, { "docid": "881a495a8329c71a0202c3510e21b15d", "text": "We apply basic statistical reasoning to signal reconstruction by machine learning – learning to map corrupted observations to clean signals – with a simple and powerful conclusion: it is possible to learn to restore images by only looking at corrupted examples, at performance at and sometimes exceeding training using clean data, without explicit image priors or likelihood models of the corruption. In practice, we show that a single model learns photographic noise removal, denoising synthetic Monte Carlo images, and reconstruction of undersampled MRI scans – all corrupted by different processes – based on noisy data only.", "title": "" }, { "docid": "5d3ae892c7cbe056734c9b098e018377", "text": "Information on the Nuclear Magnetic Resonance Gyro under development by Northrop Grumman Corporation is presented. The basics of Operation are summarized, a review of the completed phases is presented, and the current state of development and progress in phase 4 is discussed. Many details have been left out for the sake of brevity, but the principles are still complete.", "title": "" }, { "docid": "b9efcefffc894501f7cfc42d854d6068", "text": "Disruption of electric power operations can be catastrophic on the national security and economy. Due to the complexity of widely dispersed assets and the interdependency between computer, communication, and power systems, the requirement to meet security and quality compliance on the operations is a challenging issue. In recent years, NERC's cybersecurity standard was initiated to require utilities compliance on cybersecurity in control systems - NERC CIP 1200. This standard identifies several cyber-related vulnerabilities that exist in control systems and recommends several remedial actions (e.g., best practices). This paper is an overview of the cybersecurity issues for electric power control and automation systems, the control architectures, and the possible methodologies for vulnerability assessment of existing systems.", "title": "" }, { "docid": "3d10793b2e4e63e7d639ff1e4cdf04b6", "text": "Research in signal processing shows that a variety of transforms have been introduced to map the data from the original space into the feature space, in order to efficiently analyze a signal. These techniques differ in their basis functions, that is used for projecting the signal into a higher dimensional space. One of the widely used schemes for quasi-stationary and non-stationary signals is the time-frequency (TF) transforms, characterized by specific kernel functions. This work introduces a novel class of Ramanujan Fourier Transform (RFT) based TF transform functions, constituted by Ramanujan sums (RS) basis. The proposed special class of transforms offer high immunity to noise interference, since the computation is carried out only on co-resonant components, during analysis of signals. Further, we also provide a 2-D formulation of the RFT function. Experimental validation using synthetic examples, indicates that this technique shows potential for obtaining relatively sparse TF-equivalent representation and can be optimized for characterization of certain real-life signals.", "title": "" }, { "docid": "c8e8d82af2d8d2c6c51b506b4f26533f", "text": "We present an efficient method for detecting anomalies in videos. 
Recent applications of convolutional neural networks have shown promises of convolutional layers for object detection and recognition, especially in images. However, convolutional neural networks are supervised and require labels as learning signals. We propose a spatiotemporal architecture for anomaly detection in videos including crowded scenes. Our architecture includes two main components, one for spatial feature representation, and one for learning the temporal evolution of the spatial features. Experimental results on Avenue, Subway and UCSD benchmarks confirm that the detection accuracy of our method is comparable to state-of-the-art methods at a considerable speed of up to 140 fps.", "title": "" }, { "docid": "d2c6e2e807376b63828da4037028f891", "text": "Cortical circuits in the brain are refined by experience during critical periods early in postnatal life. Critical periods are regulated by the balance of excitatory and inhibitory (E/I) neurotransmission in the brain during development. There is now increasing evidence of E/I imbalance in autism, a complex genetic neurodevelopmental disorder diagnosed by abnormal socialization, impaired communication, and repetitive behaviors or restricted interests. The underlying cause is still largely unknown and there is no fully effective treatment or cure. We propose that alteration of the expression and/or timing of critical period circuit refinement in primary sensory brain areas may significantly contribute to autistic phenotypes, including cognitive and behavioral impairments. Dissection of the cellular and molecular mechanisms governing well-established critical periods represents a powerful tool to identify new potential therapeutic targets to restore normal plasticity and function in affected neuronal circuits.", "title": "" }, { "docid": "4b049e3fee1adfba2956cb9111a38bd2", "text": "This paper presents an optimization based algorithm for underwater image de-hazing problem. Underwater image de-hazing is the most prominent area in research. Underwater images are corrupted due to absorption and scattering. With the effect of that, underwater images have the limitation of low visibility, low color and poor natural appearance. To avoid the mentioned problems, Enhanced fuzzy intensification method is proposed. For each color channel, enhanced fuzzy membership function is derived. Second, the correction of fuzzy based pixel intensification is carried out for each channel to remove haze and to enhance visibility and color. The post processing of fuzzy histogram equalization is implemented for red channel alone when the captured image is having highest value of red channel pixel values. The proposed method provides better results in terms maximum entropy and PSNR with minimum MSE with very minimum computational time compared to existing methodologies.", "title": "" }, { "docid": "925aacab817a20ff527afd4100c2a8bd", "text": "This paper presents an efficient design approach for band-pass post filters in waveguides, based on mode-matching technique. With this technique, the characteristics of symmetrical cylindrical post arrangements in the cross-section of the considered waveguides can be analyzed accurately and quickly. Importantly, the approach is applicable to post filters in waveguide but can be extended to Substrate Integrated Waveguide (SIW) technologies. 
The fast computations provide accurate relationships for the K factors as a function of the post radii and the distances between posts, and allow analyzing the influence of machining tolerances on the filter performance. The computations are used to choose reasonable posts for designing band-pass filters, while the error analysis helps to judge whether a given machining precision is sufficient. The approach is applied to a Chebyshev band-pass post filter and a band-pass SIW filter with a center frequency of 10.5 GHz and a fractional bandwidth of 9.52% with verification via full-wave simulations using HFSS and measurements on manufactured prototypes.", "title": "" }, { "docid": "5bf8b65e644f0db9920d3dd7fdf4d281", "text": "Software developers face a number of challenges when creating applications that attempt to keep important data confidential. Even with diligent attention paid to correct software design and implementation practices, secrets can still be exposed through a single flaw in any of the privileged code on the platform, code which may have been written by thousands of developers from hundreds of organizations throughout the world. Intel is developing innovative security technology which provides the ability for software developers to maintain control of the security of sensitive code and data by creating trusted domains within applications to protect critical information during execution and at rest. This paper will describe how this technology has been effectively used in lab exercises to protect private information in applications including enterprise rights management, video chat, trusted financial transactions, and others. Examples will include both protection of local processing and the establishment of secure communication with cloud services. It will illustrate useful software design patterns that can be followed to create many additional types of trusted software solutions.", "title": "" }, { "docid": "9911063e58b5c2406afd761d8826538a", "text": "BACKGROUND\nThe purpose of our study was to evaluate inter-observer reliability of the Three-Column classifications with conventional Schatzker and AO/OTA of Tibial Plateau Fractures.\n\n\nMETHODS\n50 cases involving all kinds of the fracture patterns were collected from 278 consecutive patients with tibial plateau fractures who were internal fixed in department of Orthopedics and Trauma III in Shanghai Sixth People's Hospital. The series were arranged randomly, numbered 1 to 50. Four observers were chosen to classify these cases. Before the research, a classification training session was held to each observer. They were given as much time as they required evaluating the radiographs accurately and independently. The classification choices made at the first viewing were not available during the second viewing. The observers were not provided with any feedback after the first viewing. The kappa statistic was used to analyze the inter-observer reliability of the three fracture classification made by the four observers.\n\n\nRESULTS\nThe mean kappa values for inter-observer reliability regarding Schatzker classification was 0.567 (range: 0.513-0.589), representing \"moderate agreement\". The mean kappa values for inter-observer reliability regarding AO/ASIF classification systems was 0.623 (range: 0.510-0.710) representing \"substantial agreement\". 
The mean kappa values for inter-observer reliability regarding Three-Column classification systems was 0.766 (range: 0.706-0.890), representing \"substantial agreement\".\n\n\nCONCLUSION\nThree-Column classification, which is dependent on the understanding of the fractures using CT scans as well as the 3D reconstruction can identity the posterior column fracture or fragment. It showed \"substantial agreement\" in the assessment of inter-observer reliability, higher than the conventional Schatzker and AO/OTA classifications. We finally conclude that Three-Column classification provides a higher agreement among different surgeons and could be popularized and widely practiced in other clinical centers.", "title": "" }, { "docid": "ad9f3510ffaf7d0bdcf811a839401b83", "text": "The stator permanent magnet (PM) machines have simple and robust rotor structure as well as high torque density. The hybrid excitation topology can realize flux regulation and wide constant power operating capability of the stator PM machines when used in dc power systems. This paper compares and analyzes the electromagnetic performance of different hybrid excitation stator PM machines according to different combination modes of PMs, excitation winding, and iron flux bridge. Then, the control strategies for voltage regulation of dc power systems are discussed based on different critical control variables including the excitation current, the armature current, and the electromagnetic torque. Furthermore, an improved direct torque control (DTC) strategy is investigated to improve system performance. A parallel hybrid excitation flux-switching generator employing the improved DTC which shows excellent dynamic and steady-state performance has been achieved experimentally.", "title": "" }, { "docid": "2c2fd7484d137a2ac01bdd4d3f176b44", "text": "This paper presents a novel two-stage low dropout regulator (LDO) that minimizes output noise via a pre-regulator stage and achieves high power supply rejection via a simple subtractor circuit in the power driver stage. The LDO is fabricated with a standard 0.35mum CMOS process and occupies 0.26 mm2 and 0.39mm2 for single and dual output respectively. Measurement showed PSR is 60dB at 10kHz and integrated noise is 21.2uVrms ranging from 1kHz to 100kHz", "title": "" }, { "docid": "c501b2c5d67037b7ca263ec9c52503a9", "text": "Edith Penrose’s (1959) book, The Theory of the Growth of the Firm, is considered by many scholars in the strategy field to be the seminal work that provided the intellectual foundations for the modern, resource-based theory of the firm. However, the present paper suggests that Penrose’s direct or intended contribution to resource-based thinking has been misinterpreted. Penrose never aimed to provide useful strategy prescriptions for managers to create a sustainable stream of rents; rather, she tried to rigorously describe the processes through which firms grow. In her theory, rents were generally assumed not to occur. If they arose this reflected an inefficient macro-level outcome of an otherwise efficient micro-level growth process. Nevertheless, her ideas have undoubtedly stimulated ‘good conversation’ within the strategy field in the spirit of Mahoney and Pandian (1992); their emerging use by some scholars as building blocks in models that show how sustainable competitive advantage and rents can be achieved is undeniable, although such use was never intended by Edith Penrose herself. 
Copyright  2002 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "d2268cd9a2ea751ea2080a4d86e32e17", "text": "Predicting panic is of critical importance in many areas of human and animal behavior, notably in the context of economics. The recent financial crisis is a case in point. Panic may be due to a specific external threat or self-generated nervousness. Here we show that the recent economic crisis and earlier large single-day panics were preceded by extended periods of high levels of market mimicry--direct evidence of uncertainty and nervousness, and of the comparatively weak influence of external news. High levels of mimicry can be a quite general indicator of the potential for self-organized crises.", "title": "" }, { "docid": "35a063ab339f32326547cc54bee334be", "text": "We present a model for attacking various cryptographic schemes by taking advantage of random hardware faults. The model consists of a black-box containing some cryptographic secret. The box interacts with the outside world by following a cryptographic protocol. The model supposes that from time to time the box is affected by a random hardware fault causing it to output incorrect values. For example, the hardware fault flips an internal register bit at some point during the computation. We show that for many digital signature and identification schemes these incorrect outputs completely expose the secrets stored in the box. We present the following results: (1) The secret signing key used in an implementation of RSA based on the Chinese Remainder Theorem (CRT) is completely exposed from a single erroneous RSA signature, (2) for non-CRT implementations of RSA the secret key is exposed given a large number (e.g. 1000) of erroneous signatures, (3) the secret key used in Fiat—Shamir identification is exposed after a small number (e.g. 10) of faulty executions of the protocol, and (4) the secret key used in Schnorr's identification protocol is exposed after a much larger number (e.g. 10,000) of faulty executions. Our estimates for the number of necessary faults are based on standard security parameters such as a 1024-bit modulus, and a 2 -40 identification error probability. Our results demonstrate the importance of preventing errors in cryptographic computations. We conclude the paper with various methods for preventing these attacks.", "title": "" }, { "docid": "8272e9a13d2cae8b76cfc3e64b14297d", "text": "Whether they are made to entertain you, or to educate you, good video games engage you. Significant research has tried to understand engagement in games by measuring player experience (PX). Traditionally, PX evaluation has focused on the enjoyment of game, or the motivation of players; these factors no doubt contribute to engagement, but do decisions regarding play environment (e.g., the choice of game controller) affect the player more deeply than that? We apply self-determination theory (specifically satisfaction of needs and self-discrepancy represented using the five factors model of personality) to explain PX in an experiment with controller type as the manipulation. Our study shows that there are a number of effects of controller on PX and in-game player personality. These findings provide both a lens with which to view controller effects in games and a guide for controller choice in the design of new games. Our research demonstrates that including self-characteristics assessment in the PX evaluation toolbox is valuable and useful for understanding player experience.", "title": "" } ]
scidocsrr
1c83c6ee0cf7f8b5e72e8b9a00e6b0fe
Deep automatic license plate recognition system
[ { "docid": "b4316fcbc00b285e11177811b61d2b99", "text": "Automatic license plate recognition (ALPR) is one of the most important aspects of applying computer techniques towards intelligent transportation systems. In order to recognize a license plate efficiently, however, the location of the license plate, in most cases, must be detected in the first place. Due to this reason, detecting the accurate location of a license plate from a vehicle image is considered to be the most crucial step of an ALPR system, which greatly affects the recognition rate and speed of the whole system. In this paper, a region-based license plate detection method is proposed. In this method, firstly, mean shift is used to filter and segment a color vehicle image in order to get candidate regions. These candidate regions are then analyzed and classified in order to decide whether a candidate region contains a license plate. Unlike other existing license plate detection methods, the proposed method focuses on regions, which demonstrates to be more robust to interference characters and more accurate when compared with other methods.", "title": "" }, { "docid": "8d5dd3f590dee87ea609278df3572f6e", "text": "In this work we present a framework for the recognition of natural scene text. Our framework does not require any human-labelled data, and performs word recognition on the whole image holistically, departing from the character based recognition systems of the past. The deep neural network models at the centre of this framework are trained solely on data produced by a synthetic text generation engine – synthetic data that is highly realistic and sufficient to replace real data, giving us infinite amounts of training data. This excess of data exposes new possibilities for word recognition models, and here we consider three models, each one “reading” words in a different way: via 90k-way dictionary encoding, character sequence encoding, and bag-of-N-grams encoding. In the scenarios of language based and completely unconstrained text recognition we greatly improve upon state-of-the-art performance on standard datasets, using our fast, simple machinery and requiring zero data-acquisition costs.", "title": "" }, { "docid": "7c796d22e9875bc4fe1a5267d28e5d40", "text": "A simple approach to learning invariances in image classification consists in augmenting the training set with transformed versions of the original images. However, given a large set of possible transformations, selecting a compact subset is challenging. Indeed, all transformations are not equally informative and adding uninformative transformations increases training time with no gain in accuracy. We propose a principled algorithm -- Image Transformation Pursuit (ITP) -- for the automatic selection of a compact set of transformations. ITP works in a greedy fashion, by selecting at each iteration the one that yields the highest accuracy gain. ITP also allows to efficiently explore complex transformations, that combine basic transformations. We report results on two public benchmarks: the CUB dataset of bird images and the ImageNet 2010 challenge. Using Fisher Vector representations, we achieve an improvement from 28.2% to 45.2% in top-1 accuracy on CUB, and an improvement from 70.1% to 74.9% in top-5 accuracy on ImageNet. We also show significant improvements for deep convnet features: from 47.3% to 55.4% on CUB and from 77.9% to 81.4% on ImageNet.", "title": "" } ]
[ { "docid": "f4adaf2cbb8d176b72939a9a81c92da7", "text": "This paper describes a new method for recognizing overtraced strokes to 2D geometric primitives, which are further interpreted as 2D line drawings. This method can support rapid grouping and fitting of overtraced polylines or conic curves based on the classified characteristics of each stroke during its preprocessing stage. The orientation and its endpoints of a classified stroke are used in the stroke grouping process. The grouped strokes are then fitted with 2D geometry. This method can deal with overtraced sketch strokes in both solid and dash linestyles, fit grouped polylines as a whole polyline and simply fit conic strokes without computing the direction of a stroke. It avoids losing joint information due to segmentation of a polyline into line-segments. The proposed method has been tested with our freehand sketch recognition system (FSR), which is robust and easier to use by removing some limitations embedded with most existing sketching systems which only accept non-overtraced stroke drawing. The test results showed that the proposed method can support freehand sketching based conceptual design with no limitations on drawing sequence, directions and overtraced cases while achieving a satisfactory interpretation rate.", "title": "" }, { "docid": "8234cd43e0bfba657bf81b6ca9b6825a", "text": "We derive upper and lower limits on the majority vote accuracy with respect to individual accuracy p, the number of classifiers in the pool (L), and the pairwise dependence between classifiers, measured by Yule’s Q statistic. Independence between individual classifiers is typically viewed as an asset in classifier fusion. We show that the majority vote with dependent classifiers can potentially offer a dramatic improvement both over independent classifiers and over an individual classifier with accuracy p. A functional relationship between the limits and the pairwise dependence Q is derived. Two patterns of the joint distribution for classifier outputs (correct/incorrect) are identified to derive the limits: the pattern of success and the pattern of failure. The results support the intuition that negative pairwise dependence is beneficial although not straightforwardly related to the accuracy. The pattern of success showed that for the highest improvement over p, all pairs of classifiers in the pool should have the same negative dependence.", "title": "" }, { "docid": "b354f4f9bd12caef2a22ebfeae315cb5", "text": "In order to advance action generation and creation in robots beyond simple learned schemas we need computational tools that allow us to automatically interpret and represent human actions. This paper presents a system that learns manipulation action plans by processing unconstrained videos from the World Wide Web. Its goal is to robustly generate the sequence of atomic actions of seen longer actions in video in order to acquire knowledge for robots. The lower level of the system consists of two convolutional neural network (CNN) based recognition modules, one for classifying the hand grasp type and the other for object recognition. The higher level is a probabilistic manipulation action grammar based parsing module that aims at generating visual sentences for robot manipulation. 
Experiments conducted on a publicly available unconstrained video dataset show that the system is able to learn manipulation actions by “watching” unconstrained videos with high accuracy.", "title": "" }, { "docid": "997502358acec488a3c02b4c711c6fc2", "text": "This study presents the first results of an analysis primarily based on semi-structured interviews with government officials and managers who are responsible for smart city initiatives in four North American cities—Philadelphia and Seattle in the United States, Quebec City in Canada, and Mexico City in Mexico. With the reference to the Smart City Initiatives Framework that we suggested in our previous research, this study aims to build a new understanding of smart city initiatives. Main findings are categorized into eight aspects including technology, management and organization, policy context, governance, people and communities, economy, built infrastructure, and natural environ-", "title": "" }, { "docid": "0070d6e21bdb8bac260178603cfbf67d", "text": "Sound is a medium that conveys functional and emotional information in a form of multilayered streams. With the use of such advantage, robot sound design can open a way for being more efficient communication in human-robot interaction. As the first step of research, we examined how individuals perceived the functional and emotional intention of robot sounds and whether the perceived information from sound is associated with their previous experience with science fiction movies. The sound clips were selected based on the context of the movie scene (i.e., Wall-E, R2-D2, BB8, Transformer) and classified as functional (i.e., platform, monitoring, alerting, feedback) and emotional (i.e., positive, neutral, negative). A total of 12 participants were asked to identify the perceived properties for each of the 30 items. We found that the perceived emotional and functional messages varied from those originally intended and differed by previous experience.", "title": "" }, { "docid": "6dcf25b450d8c4eea6b61556d505a729", "text": "Skip connections made the training of very deep neural networks possible and have become an indispendable component in a variety of neural architectures. A satisfactory explanation for their success remains elusive. Here, we present an explanation for the benefits of skip connections in training very deep neural networks. We argue that skip connections help break symmetries inherent in the loss landscapes of deep networks, leading to drastically simplified landscapes. In particular, skip connections between adjacent layers in a multilayer network break the permutation symmetry of nodes in a given layer, and the recently proposed DenseNet architecture, where each layer projects skip connections to every layer above it, also breaks the rescaling symmetry of connectivity matrices between different layers. This hypothesis is supported by evidence from a toy model with binary weights and from experiments with fully-connected networks suggesting (i) that skip connections do not necessarily improve training unless they help break symmetries and (ii) that alternative ways of breaking the symmetries also lead to significant performance improvements in training deep networks, hence there is nothing special about skip connections in this respect. 
We find, however, that skip connections confer additional benefits over and above symmetry-breaking, such as the ability to deal effectively with the vanishing gradients problem.", "title": "" }, { "docid": "18f13858b5f9e9a8e123d80b159c4d72", "text": "Cryptocurrency, and its underlying technologies, has been gaining popularity for transaction management beyond financial transactions. Transaction information is maintained in the blockchain, which can be used to audit the integrity of the transaction. The focus on this paper is the potential availability of block-chain technology of other transactional uses. Block-chain is one of the most stable open ledgers that preserves transaction information, and is difficult to forge. Since the information stored in block-chain is not related to personally identifiable information, it has the characteristics of anonymity. Also, the block-chain allows for transparent transaction verification since all information in the block-chain is open to the public. These characteristics are the same as the requirements for a voting system. That is, strong robustness, anonymity, and transparency. In this paper, we propose an electronic voting system as an application of blockchain, and describe block-chain based voting at a national level through examples.", "title": "" }, { "docid": "9bbc3e426c7602afaa857db85e754229", "text": "Knowledge bases of real-world facts about entities and their relationships are useful resources for a variety of natural language processing tasks. However, because knowledge bases are typically incomplete, it is useful to be able to perform link prediction, i.e., predict whether a relationship not in the knowledge base is likely to be true. This paper combines insights from several previous link prediction models into a new embedding model STransE that represents each entity as a lowdimensional vector, and each relation by two matrices and a translation vector. STransE is a simple combination of the SE and TransE models, but it obtains better link prediction performance on two benchmark datasets than previous embedding models. Thus, STransE can serve as a new baseline for the more complex models in the link prediction task.", "title": "" }, { "docid": "bdce7ff18a7a5b7ed8c09fa98e426378", "text": "Following Ebbinghaus (1885/1964), a number of procedures have been devised to measure short-term memory using immediate serial recall: digit span, Knox's (1913) cube imitation test and Corsi's (1972) blocks task. Understanding the cognitive processes involved in these tasks was obstructed initially by the lack of a coherent concept of short-term memory and later by the mistaken assumption that short-term and long-term memory reflected distinct processes as well as different kinds of experimental task. Despite its apparent conceptual simplicity, a variety of cognitive mechanisms are responsible for short-term memory, and contemporary theories of working memory have helped to clarify these. Contrary to the earliest writings on the subject, measures of short-term memory do not provide a simple measure of mental capacity, but they do provide a way of understanding some of the key mechanisms underlying human cognition.", "title": "" }, { "docid": "bec4932c66f8a8a87c1967ca42ad4315", "text": "Nowadays, the number of layers and of neurons in each layer of a deep network are typically set manually. 
While very deep and wide networks have proven effective in general, they come at a high memory and computation cost, thus making them impractical for constrained platforms. These networks, however, are known to have many redundant parameters, and could thus, in principle, be replaced by more compact architectures. In this paper, we introduce an approach to automatically determining the number of neurons in each layer of a deep network during learning. To this end, we propose to make use of a group sparsity regularizer on the parameters of the network, where each group is defined to act on a single neuron. Starting from an overcomplete network, we show that our approach can reduce the number of parameters by up to 80% while retaining or even improving the network accuracy.", "title": "" }, { "docid": "554d234697cd98bf790444fe630c179b", "text": "This paper presents a novel approach for search engine results clustering that relies on the semantics of the retrieved documents rather than the terms in those documents. The proposed approach takes into consideration both lexical and semantics similarities among documents and applies activation spreading technique in order to generate semantically meaningful clusters. This approach allows documents that are semantically similar to be clustered together rather than clustering documents based on similar terms. A prototype is implemented and several experiments are conducted to test the prospered solution. The result of the experiment confirmed that the proposed solution achieves remarkable results in terms of precision.", "title": "" }, { "docid": "28d01dba790cf55591a84ef88b70ebbf", "text": "A novel method for simultaneous keyphrase extraction and generic text summarization is proposed by modeling text documents as weighted undirected and weighted bipartite graphs. Spectral graph clustering algorithms are useed for partitioning sentences of the documents into topical groups with sentence link priors being exploited to enhance clustering quality. Within each topical group, saliency scores for keyphrases and sentences are generated based on a mutual reinforcement principle. The keyphrases and sentences are then ranked according to their saliency scores and selected for inclusion in the top keyphrase list and summaries of the document. The idea of building a hierarchy of summaries for documents capturing different levels of granularity is also briefly discussed. Our method is illustrated using several examples from news articles, news broadcast transcripts and web documents.", "title": "" }, { "docid": "229605eada4ca390d17c5ff168c6199a", "text": "The sharing economy is a new online community that has important implications for offline behavior. This study evaluates whether engagement in the sharing economy is associated with an actor’s aversion to risk. Using a web-based survey and a field experiment, we apply an adaptation of Holt and Laury’s (2002) risk lottery game to a representative sample of sharing economy participants. We find that frequency of activity in the sharing economy predicts risk aversion, but only in interaction with satisfaction. While greater satisfaction with sharing economy websites is associated with a decrease in risk aversion, greater frequency of usage is associated with greater risk aversion. 
This analysis shows the limitations of a static perspective on how risk attitudes relate to participation in the sharing economy.", "title": "" }, { "docid": "ada4554ce6e6180075459557409f524d", "text": "The conceptualization of the notion of a system in systems engineering, as exemplified in, for instance, the engineering standard IEEE Std 1220-1998, is problematic when applied to the design of socio-technical systems. This is argued using Intelligent Transportation Systems as an example. A preliminary conceptualization of socio-technical systems is introduced which includes technical and social elements and actors, as well as four kinds of relations. Current systems engineering practice incorporates technical elements and actors in the system but sees social elements exclusively as contextual. When designing socio-technical systems, however, social elements and the corresponding relations must also be considered as belonging to the system.", "title": "" }, { "docid": "479089fb59b5b810f95272d04743f571", "text": "We address offensive tactic recognition in broadcast basketball videos. As a crucial component towards basketball video content understanding, tactic recognition is quite challenging because it involves multiple independent players, each of which has respective spatial and temporal variations. Motivated by the observation that most intra-class variations are caused by non-key players, we present an approach that integrates key player detection into tactic recognition. To save the annotation cost, our approach can work on training data with only video-level tactic annotation, instead of key players labeling. Specifically, this task is formulated as an MIL (multiple instance learning) problem where a video is treated as a bag with its instances corresponding to subsets of the five players. We also propose a representation to encode the spatio-temporal interaction among multiple players. It turns out that our approach not only effectively recognizes the tactics but also precisely detects the key players.", "title": "" }, { "docid": "a25839666b7e208810979dc93d20f950", "text": "Energy consumption management has become an essential concept in cloud computing. In this paper, we propose a new power aware load balancing, named Bee-MMT (artificial bee colony algorithm-Minimal migration time), to decline power consumption in cloud computing; as a result of this decline, CO2 production and operational cost will be decreased. According to this purpose, an algorithm based on artificial bee colony algorithm (ABC) has been proposed to detect over utilized hosts and then migrate one or more VMs from them to reduce their utilization; following that we detect underutilized hosts and, if it is possible, migrate all VMs which have been allocated to these hosts and then switch them to the sleep mode. However, there is a trade-off between energy consumption and providing high quality of service to the customers. Consequently, we consider SLA Violation as a metric to qualify the QOS that require to satisfy the customers. 
The results show that the proposed method can achieve greater power consumption saving than other methods like LR-MMT (local regression-Minimal migration time), DVFS (Dynamic Voltage Frequency Scaling), IQR-MMT (Interquartile Range-MMT), MAD-MMT (Median Absolute Deviation) and non-power aware.", "title": "" }, { "docid": "c9878a454c91fec094fce02e1ac49348", "text": "Autonomous walking bipedal machines, possibly useful for rehabilitation and entertainment purposes, need a high energy efficiency, offered by the concept of ‘Passive Dynamic Walking’ (exploitation of the natural dynamics of the robot). 2D passive dynamic bipeds have been shown to be inherently stable, but in the third dimension two problematic degrees of freedom are introduced: yaw and roll. We propose a design for a 3D biped with a pelvic body as a passive dynamic compensator, which will compensate for the undesired yaw and roll motion, and allow the rest of the robot to move as if it were a 2D machine. To test our design, we perform numerical simulations on a multibody model of the robot. With limit cycle analysis we calculate the stability of the robot when walking at its natural speed. The simulation shows that the compensator, indeed, effectively compensates for both the yaw and the roll motion, and that the walker is stable.", "title": "" }, { "docid": "1994e427b1d00f1f64ed91559ffa5daa", "text": "We started investigating the collection of HTML tables on the Web and developed the WebTables system a few years ago [4]. Since then, our work has been motivated by applying WebTables in a broad set of applications at Google, resulting in several product launches. In this paper, we describe the challenges faced, lessons learned, and new insights that we gained from our efforts. The main challenges we faced in our efforts were (1) identifying tables that are likely to contain high-quality data (as opposed to tables used for navigation, layout, or formatting), and (2) recovering the semantics of these tables or signals that hint at their semantics. The result is a semantically enriched table corpus that we used to develop several services. First, we created a search engine for structured data whose index includes over a hundred million HTML tables. Second, we enabled users of Google Docs (through its Research Panel) to find relevant data tables and to insert such data into their documents as needed. Most recently, we brought WebTables to a much broader audience by using the table corpus to provide richer tabular snippets for fact-seeking web search queries on Google.com.", "title": "" }, { "docid": "a1348a9823fc85d22bc73f3fe177e0ba", "text": "Ultrasound imaging makes use of backscattering of waves during their interaction with scatterers present in biological tissues. Simulation of synthetic ultrasound images is a challenging problem on account of inability to accurately model various factors of which some include intra-/inter scanline interference, transducer to surface coupling, artifacts on transducer elements, inhomogeneous shadowing and nonlinear attenuation. Current approaches typically solve wave space equations making them computationally expensive and slow to operate. We propose a generative adversarial network (GAN) inspired approach for fast simulation of patho-realistic ultrasound images. We apply the framework to intravascular ultrasound (IVUS) simulation. A stage 0 simulation performed using pseudo B-mode ultrasound image simulator yields speckle mapping of a digitally defined phantom. 
The stage I GAN subsequently refines them to preserve tissue specific speckle intensities. The stage II GAN further refines them to generate high resolution images with patho-realistic speckle profiles. We evaluate patho-realism of simulated images with a visual Turing test indicating an equivocal confusion in discriminating simulated from real. We also quantify the shift in tissue specific intensity distributions of the real and simulated images to prove their similarity.", "title": "" }, { "docid": "1772d22c19635b6636e42f8bb1b1a674", "text": "• MacArthur Fellowship, 2010 • Guggenheim Fellowship, 2010 • Li Ka Shing Foundation Women in Science Distinguished Lectu re Series Award, 2010 • MIT Technology Review TR-35 Award (recognizing the world’s top innovators under the age of 35), 2009. • Okawa Foundation Research Award, 2008. • Sloan Research Fellow, 2007. • Best Paper Award, 2007 USENIX Security Symposium. • George Tallman Ladd Research Award, Carnegie Mellon Univer sity, 2007. • Highest ranked paper, 2006 IEEE Security and Privacy Sympos ium; paper invited to a special issue of the IEEE Transactions on Dependable and Secure Computing. • NSF CAREER Award on “Exterminating Large Scale Internet Att acks”, 2005. • IBM Faculty Award, 2005. • Highest ranked paper, 1999 IEEE Computer Security Foundati on Workshop; paper invited to a special issue of Journal of Computer Security.", "title": "" } ]
scidocsrr
611e2f512e9bcf17f66f557d8a61e545
Visual Analytics for MOOC Data
[ { "docid": "c995426196ad943df2f5a4028a38b781", "text": "Today it is quite common for people to exchange hundreds of comments in online conversations (e.g., blogs). Often, it can be very difficult to analyze and gain insights from such long conversations. To address this problem, we present a visual text analytic system that tightly integrates interactive visualization with novel text mining and summarization techniques to fulfill information needs of users in exploring conversations. At first, we perform a user requirement analysis for the domain of blog conversations to derive a set of design principles. Following these principles, we present an interface that visualizes a combination of various metadata and textual analysis results, supporting the user to interactively explore the blog conversations. We conclude with an informal user evaluation, which provides anecdotal evidence about the effectiveness of our system and directions for further design.", "title": "" } ]
[ { "docid": "eb888ba37e7e97db36c330548569508d", "text": "Since the first online demonstration of Neural Machine Translation (NMT) by LISA (Bahdanau et al., 2014), NMT development has recently moved from laboratory to production systems as demonstrated by several entities announcing rollout of NMT engines to replace their existing technologies. NMT systems have a large number of training configurations and the training process of such systems is usually very long, often a few weeks, so role of experimentation is critical and important to share. In this work, we present our approach to production-ready systems simultaneously with release of online demonstrators covering a large variety of languages ( 12 languages, for32 language pairs). We explore different practical choices: an efficient and evolutive open-source framework; data preparation; network architecture; additional implemented features; tuning for production; etc. We discuss about evaluation methodology, present our first findings and we finally outline further work. Our ultimate goal is to share our expertise to build competitive production systems for ”generic” translation. We aim at contributing to set up a collaborative framework to speed-up adoption of the technology, foster further research efforts and enable the delivery and adoption to/by industry of use-case specific engines integrated in real production workflows. Mastering of the technology would allow us to build translation engines suited for particular needs, outperforming current simplest/uniform systems.", "title": "" }, { "docid": "821be0a049a74abf5b009b012022af2f", "text": "BACKGROUND\nIn theory, infections that arise after female genital mutilation (FGM) in childhood might ascend to the internal genitalia, causing inflammation and scarring and subsequent tubal-factor infertility. Our aim was to investigate this possible association between FGM and primary infertility.\n\n\nMETHODS\nWe did a hospital-based case-control study in Khartoum, Sudan, to which we enrolled women (n=99) with primary infertility not caused by hormonal or iatrogenic factors (previous abdominal surgery), or the result of male-factor infertility. These women underwent diagnostic laparoscopy. Our controls were primigravidae women (n=180) recruited from antenatal care. We used exact conditional logistic regression, stratifying for age and controlling for socioeconomic status, level of education, gonorrhoea, and chlamydia, to compare these groups with respect to FGM.\n\n\nFINDINGS\nOf the 99 infertile women examined, 48 had adnexal pathology indicative of previous inflammation. After controlling for covariates, these women had a significantly higher risk than controls of having undergone the most extensive form of FGM, involving the labia majora (odds ratio 4.69, 95% CI 1.49-19.7). Among women with primary infertility, both those with tubal pathology and those with normal laparoscopy findings were at a higher risk than controls of extensive FGM, both with borderline significance (p=0.054 and p=0.055, respectively). The anatomical extent of FGM, rather than whether or not the vulva had been sutured or closed, was associated with primary infertility.\n\n\nINTERPRETATION\nOur findings indicate a positive association between the anatomical extent of FGM and primary infertility. Laparoscopic postinflammatory adnexal changes are not the only explanation for this association, since cases without such pathology were also affected. 
The association between FGM and primary infertility is highly relevant for preventive work against this ancient practice.", "title": "" }, { "docid": "5d6c2580602945084d5a643c335c40f2", "text": "Probabilistic topic models are a suite of algorithms whose aim is to discover the hidden thematic structure in large archives of documents. In this article, we review the main ideas of this field, survey the current state-of-the-art, and describe some promising future directions. We first describe latent Dirichlet allocation (LDA) [8], which is the simplest kind of topic model. We discuss its connections to probabilistic modeling, and describe two kinds of algorithms for topic discovery. We then survey the growing body of research that extends and applies topic models in interesting ways. These extensions have been developed by relaxing some of the statistical assumptions of LDA, incorporating meta-data into the analysis of the documents, and using similar kinds of models on a diversity of data types such as social networks, images and genetics. Finally, we give our thoughts as to some of the important unexplored directions for topic modeling. These include rigorous methods for checking models built for data exploration, new approaches to visualizing text and other high dimensional data, and moving beyond traditional information engineering applications towards using topic models for more scientific ends.", "title": "" }, { "docid": "66e8940044bb58971da01cc059b8ef09", "text": "The use of Bayesian methods for data analysis is creating a revolution in fields ranging from genetics to marketing. Yet, results of our literature review, including more than 10,000 articles published in 15 journals from January 2001 and December 2010, indicate that Bayesian approaches are essentially absent from the organizational sciences. Our article introduces organizational science researchers to Bayesian methods and describes why and how they should be used. We use multiple linear regression as the framework to offer a step-by-step demonstration, including the use of software, regarding how to implement Bayesian methods. We explain and illustrate how to determine the prior distribution, compute the posterior distribution, possibly accept the null value, and produce a write-up describing the entire Bayesian process, including graphs, results, and their interpretation. We also offer a summary of the advantages of using Bayesian analysis and examples of how specific published research based on frequentist analysis-based approaches failed to benefit from the advantages offered by a Bayesian approach and how using Bayesian analyses would have led to richer and, in some cases, different substantive conclusions. We hope that our article will serve as a catalyst for the adoption of Bayesian methods in organizational science research.", "title": "" }, { "docid": "162823edcbd50579a1d386f88931d59d", "text": "Elevated liver enzymes are a common scenario encountered by physicians in clinical practice. For many physicians, however, evaluation of such a problem in patients presenting with no symptoms can be challenging. Evidence supporting a standardized approach to evaluation is lacking. Although alterations of liver enzymes could be a normal physiological phenomenon in certain cases, it may also reflect potential liver injury in others, necessitating its further assessment and management. 
In this article, we provide a guide to primary care clinicians to interpret abnormal elevation of liver enzymes in asymptomatic patients using a step-wise algorithm. Adopting a schematic approach that classifies enzyme alterations on the basis of pattern (hepatocellular, cholestatic and isolated hyperbilirubinemia), we review an approach to abnormal alteration of liver enzymes within each section, the most common causes of enzyme alteration, and suggest initial investigations.", "title": "" }, { "docid": "f008e38cd63db0e4cf90705cc5e8860e", "text": "6  Abstract— The purpose of this paper is to propose a MATLAB/ Simulink simulators for PV cell/module/array based on the Two-diode model of a PV cell.This model is known to have better accuracy at low irradiance levels which allows for more accurate prediction of PV systems performance.To reduce computational time , the input parameters are reduced as the values of Rs and Rp are estimated by an efficient iteration method. Furthermore ,all of the inputs to the simulators are information available on a standard PV module datasheet. The present paper present first abrief introduction to the behavior and functioning of a PV device and write the basic equation of the two-diode model,without the intention of providing an indepth analysis of the photovoltaic phenomena and the semicondutor physics. The introduction on PV devices is followed by the modeling and simulation of PV cell/PV module/PV array, which is the main subject of this paper. A MATLAB Simulik based simulation study of PV cell/PV module/PV array is carried out and presented .The simulation model makes use of the two-diode model basic circuit equations of PV solar cell, taking the effect of sunlight irradiance and cell temperature into consideration on the output current I-V characteristic and output power P-V characteristic . A particular typical 50W solar panel was used for model evaluation. The simulation results , compared with points taken directly from the data sheet and curves pubblished by the manufacturers, show excellent correspondance to the model.", "title": "" }, { "docid": "33f86056827e1e8958ab17e11d7e4136", "text": "The successful integration of Information and Communications Technology (ICT) into the teaching and learning of English Language is largely dependent on the level of teacher’s ICT competence, the actual utilization of ICT in the language classroom and factors that challenge teachers to use it in language teaching. The study therefore assessed the Secondary School English language teachers’ ICT literacy, the extent of ICT utilization in English language teaching and the challenges that prevent language teachers to integrate ICT in teaching. To answer the problems, three sets of survey questionnaires were distributed to 30 English teachers in the 11 schools of Cluster 1 (CarCanMadCarLan). Data gathered were analyzed using descriptive statistics and frequency count. The results revealed that the teachers’ ICT literacy was moderate. The findings provided evidence that there was only a limited use of ICT in language teaching. Feedback gathered from questionnaires show that teachers faced many challenges that demotivate them from using ICT in language activities. Based on these findings, it is recommended the teachers must be provided with intensive ICT-based trainings to equip them with knowledge of ICT and its utilization in language teaching. 
School administrators as well as stakeholders may look for interventions to upgrade school’s ICTbased resources for its optimum use in teaching and learning. Most importantly, a larger school-wide ICT development plan may be implemented to ensure coherence of ICT implementation in the teaching-learning activities. ‘ICT & Innovations in Education’ International Journal International Electronic Journal | ISSN 2321 – 7189 | www.ictejournal.com Volume 2, Issue 1 | February 2014", "title": "" }, { "docid": "5f92491cb7da547ba3ea6945832342ac", "text": "SwitchKV is a new key-value store system design that combines high-performance cache nodes with resourceconstrained backend nodes to provide load balancing in the face of unpredictable workload skew. The cache nodes absorb the hottest queries so that no individual backend node is over-burdened. Compared with previous designs, SwitchKV exploits SDN techniques and deeply optimized switch hardware to enable efficient contentbased routing. Programmable network switches keep track of cached keys and route requests to the appropriate nodes at line speed, based on keys encoded in packet headers. A new hybrid caching strategy keeps cache and switch forwarding rules updated with low overhead and ensures that system load is always well-balanced under rapidly changing workloads. Our evaluation results demonstrate that SwitchKV can achieve up to 5× throughput and 3× latency improvements over traditional system designs.", "title": "" }, { "docid": "a2a4936ca3600dc4fb2369c43ffc9016", "text": "Intuitive and efficient retrieval of motion capture data is essential for effective use of motion capture databases. In this paper, we describe a system that allows the user to retrieve a particular sequence by performing an approximation of the motion with an instrumented puppet. This interface is intuitive because both adults and children have experience playacting with puppets and toys to express particular behaviors or to tell stories with style and emotion. The puppet has 17 degrees of freedom and can therefore represent a variety of motions. We develop a novel similarity metric between puppet and human motion by computing the reconstruction errors of the puppet motion in the latent space of the human motion and those of the human motion in the latent space of the puppet motion. This metric works even for relatively large databases. We conducted a user study of the system and subjects could find the desired motion with reasonable accuracy from a database consisting of everyday, exercise, and acrobatic behaviors.", "title": "" }, { "docid": "59aa4318fa39c1d6ec086af7041148b2", "text": "Two of the most important outcomes of learning analytics are predicting students’ learning and providing effective feedback. Learning Management Systems (LMS), which are widely used to support online and face-to-face learning, provide extensive research opportunities with detailed records of background data regarding users’ behaviors. The purpose of this study was to investigate the effects of undergraduate students’ LMS learning behaviors on their academic achievements. In line with this purpose, the participating students’ online learning behaviors in LMS were examined by using learning analytics for 14 weeks, and the relationship between students’ behaviors and their academic achievements was analyzed, followed by an analysis of their views about the influence of LMS on their academic achievement. 
The present study, in which quantitative and qualitative data were collected, was carried out with the explanatory mixed method. A total of 71 undergraduate students participated in the study. The results revealed that the students used LMSs as a support to face-to-face education more intensively on course days (at the beginning of the related lessons and at nights on course days) and that they activated the content elements the most. Lastly, almost all the students agreed that LMSs helped increase their academic achievement only when LMSs included such features as effectiveness, interaction, reinforcement, attractive design, social media support, and accessibility.", "title": "" }, { "docid": "3c8cc4192ee6ddd126e53c8ab242f396", "text": "There are several approaches for automated functional web testing and the choice among them depends on a number of factors, including the tools used for web testing and the costs associated with their adoption. In this paper, we present an empirical cost/benefit analysis of two different categories of automated functional web testing approaches: (1) capture-replay web testing (in particular, using Selenium IDE); and, (2) programmable web testing (using Selenium WebDriver). On a set of six web applications, we evaluated the costs of applying these testing approaches both when developing the initial test suites from scratch and when the test suites are maintained, upon the release of a new software version. Results indicate that, on the one hand, the development of the test suites is more expensive in terms of time required (between 32% and 112%) when the programmable web testing approach is adopted, but on the other hand, test suite maintenance is less expensive when this approach is used (with a saving between 16% and 51%). We found that, in the majority of the cases, after a small number of releases (from one to three), the cumulative cost of programmable web testing becomes lower than the cost involved with capture-replay web testing and the cost saving gets amplified over the successive releases.", "title": "" }, { "docid": "7f65d625ca8f637a6e2e9cb7006d1778", "text": "Recent work in machine learning for information extraction has focused on two distinct sub-problems: the conventional problem of filling template slots from natural language text, and the problem of wrapper induction, learning simple extraction procedures (“wrappers”) for highly structured text such as Web pages produced by CGI scripts. For suitably regular domains, existing wrapper induction algorithms can efficiently learn wrappers that are simple and highly accurate, but the regularity bias of these algorithms makes them unsuitable for most conventional information extraction tasks. Boosting is a technique for improving the performance of a simple machine learning algorithm by repeatedly applying it to the training set with different example weightings. We describe an algorithm that learns simple, low-coverage wrapper-like extraction patterns, which we then apply to conventional information extraction problems using boosting. The result is BWI, a trainable information extraction system with a strong precision bias and F1 performance better than state-of-the-art techniques in many domains.", "title": "" }, { "docid": "78982bfdcf476081bd708c8aa2e5c5bd", "text": "Simultaneous Localization And Mapping (SLAM) is a fundamental problem in mobile robotics. While sparse point-based SLAM methods provide accurate camera localization, the generated maps lack semantic information. 
On the other hand, state of the art object detection methods provide rich information about entities present in the scene from a single image. This work incorporates a real-time deep-learned object detector to the monocular SLAM framework for representing generic objects as quadrics that permit detections to be seamlessly integrated while allowing the real-time performance. Finer reconstruction of an object, learned by a CNN network, is also incorporated and provides a shape prior for the quadric leading further refinement. To capture the dominant structure of the scene, additional planar landmarks are detected by a CNN-based plane detector and modelled as landmarks in the map. Experiments show that the introduced plane and object landmarks and the associated constraints, using the proposed monocular plane detector and incorporated object detector, significantly improve camera localization and lead to a richer semantically more meaningful map.", "title": "" }, { "docid": "cefcd78be7922f4349f1bb3aa59d2e1d", "text": "The paper presents performance analysis of modified SEPIC dc-dc converter with low input voltage and wide output voltage range. The operational analysis and the design is done for the 380W power output of the modified converter. The simulation results of modified SEPIC converter are obtained with PI controller for the output voltage. The results obtained with the modified converter are compared with the basic SEPIC converter topology for the rise time, peak time, settling time and steady state error of the output response for open loop. Voltage tracking curve is also shown for wide output voltage range. I. Introduction Dc-dc converters are widely used in regulated switched mode dc power supplies and in dc motor drive applications. The input to these converters is often an unregulated dc voltage, which is obtained by rectifying the line voltage and it will therefore fluctuate due to variations of the line voltages. Switched mode dc-dc converters are used to convert this unregulated dc input into a controlled dc output at a desired voltage level. The recent growth of battery powered applications and low voltage storage elements are increasing the demand of efficient step-up dc–dc converters. Typical applications are in adjustable speed drives, switch-mode power supplies, uninterrupted power supplies, and utility interface with nonconventional energy sources, battery energy storage systems, battery charging for electric vehicles, and power supplies for telecommunication systems etc.. These applications demand high step-up static gain, high efficiency and reduced weight, volume and cost. The step-up stage normally is the critical point for the design of high efficiency converters due to the operation with high input current and high output voltage [1]. The boost converter topology is highly effective in these applications but at low line voltage in boost converter, the switching losses are high because the input current has the maximum value and the highest step-up conversion is required. The inductor has to be oversized for the large current at low line input. As a result, a boost converter designed for universal-input applications is heavily oversized compared to a converter designed for a narrow range of input ac line voltage [2]. However, recently new non-isolated dc–dc converter topologies with basic boost are proposed, showing that it is possible to obtain high static gain, low voltage stress and low losses, improving the performance with respect to the classical topologies. 
Some single stage high power factor rectifiers are presented in [3-6]. A new …", "title": "" }, { "docid": "33c06f0ee7d3beb0273a47790f2a84cd", "text": "This study presents the clinical results of a surgical technique that expands a narrow ridge when its orofacial width precludes the placement of dental implants. In 170 people, 329 implants were placed in sites needing ridge enlargement using the endentulous ridge expansion procedure. This technique involves a partial-thickness flap, crestal and vertical intraosseous incisions into the ridge, and buccal displacement of the buccal cortical plate, including a portion of the underiying spongiosa. Implants were placed in the expanded ridge and allowed to heal for 4 to 5 months. When indicated, the implants were exposed during a second-stage surgery to allow visualization of the implant site. Occlusal loading was applied during the following 3 to 5 months by provisional prostheses. The final phase was the placement of the permanent prostheses. The results yielded a success rate of 98.8%.", "title": "" }, { "docid": "e546f81fbdc57765956c22d94c9f54ac", "text": "Internet technology is revolutionizing education. Teachers are developing massive open online courses (MOOCs) and using innovative practices such as flipped learning in which students watch lectures at home and engage in hands-on, problem solving activities in class. This work seeks to explore the design space afforded by these novel educational paradigms and to develop technology for improving student learning. Our design, based on the technique of adaptive content review, monitors student attention during educational presentations and determines which lecture topic students might benefit the most from reviewing. An evaluation of our technology within the context of an online art history lesson demonstrated that adaptively reviewing lesson content improved student recall abilities 29% over a baseline system and was able to match recall gains achieved by a full lesson review in less time. Our findings offer guidelines for a novel design space in dynamic educational technology that might support both teachers and online tutoring systems.", "title": "" }, { "docid": "76c42d10b008bdcbfd90d6eb238280c9", "text": "In this paper a review of architectures suitable for nonlinear real-time audio signal processing is presented. The computational and structural complexity of neural networks (NNs) represent in fact, the main drawbacks that can hinder many practical NNs multimedia applications. In particular e,cient neural architectures and their learning algorithm for real-time on-line audio processing are discussed. Moreover, applications in the -elds of (1) audio signal recovery, (2) speech quality enhancement, (3) nonlinear transducer linearization, (4) learning based pseudo-physical sound synthesis, are brie1y presented and discussed. c © 2003 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "b31ce7aa527336d10a5ddb2540e9c61c", "text": "OBJECTIVE\nOptimal mental health care is dependent upon sensitive and early detection of mental health problems. We have introduced a state-of-the-art method for the current study for remote behavioral monitoring that transports assessment out of the clinic and into the environments in which individuals negotiate their daily lives. The objective of this study was to examine whether the information captured with multimodal smartphone sensors can serve as behavioral markers for one's mental health. 
We hypothesized that (a) unobtrusively collected smartphone sensor data would be associated with individuals' daily levels of stress, and (b) sensor data would be associated with changes in depression, stress, and subjective loneliness over time.\n\n\nMETHOD\nA total of 47 young adults (age range: 19-30 years) were recruited for the study. Individuals were enrolled as a single cohort and participated in the study over a 10-week period. Participants were provided with smartphones embedded with a range of sensors and software that enabled continuous tracking of their geospatial activity (using the Global Positioning System and wireless fidelity), kinesthetic activity (using multiaxial accelerometers), sleep duration (modeled using device-usage data, accelerometer inferences, ambient sound features, and ambient light levels), and time spent proximal to human speech (i.e., speech duration using microphone and speech detection algorithms). Participants completed daily ratings of stress, as well as pre- and postmeasures of depression (Patient Health Questionnaire-9; Spitzer, Kroenke, & Williams, 1999), stress (Perceived Stress Scale; Cohen et al., 1983), and loneliness (Revised UCLA Loneliness Scale; Russell, Peplau, & Cutrona, 1980).\n\n\nRESULTS\nMixed-effects linear modeling showed that sensor-derived geospatial activity (p < .05), sleep duration (p < .05), and variability in geospatial activity (p < .05), were associated with daily stress levels. Penalized functional regression showed associations between changes in depression and sensor-derived speech duration (p < .05), geospatial activity (p < .05), and sleep duration (p < .05). Changes in loneliness were associated with sensor-derived kinesthetic activity (p < .01).\n\n\nCONCLUSIONS AND IMPLICATIONS FOR PRACTICE\nSmartphones can be harnessed as instruments for unobtrusive monitoring of several behavioral indicators of mental health. Creative leveraging of smartphone sensing could provide novel opportunities for close-to-invisible psychiatric assessment at a scale and efficiency that far exceeds what is currently feasible with existing assessment technologies.", "title": "" }, { "docid": "94f94af75b17c0b4a2ad59908e07e462", "text": "Metric learning has the aim to improve classification accuracy by learning a distance measure which brings data points from the same class closer together and pushes data points from different classes further apart. Recent research has demonstrated that metric learning approaches can also be applied to trees, such as molecular structures, abstract syntax trees of computer programs, or syntax trees of natural language, by learning the cost function of an edit distance, i.e. the costs of replacing, deleting, or inserting nodes in a tree. However, learning such costs directly may yield an edit distance which violates metric axioms, is challenging to interpret, and may not generalize well. In this contribution, we propose a novel metric learning approach for trees which we call embedding edit distance learning (BEDL) and which learns an edit distance indirectly by embedding the tree nodes as vectors, such that the Euclidean distance between those vectors supports class discrimination. We learn such embeddings by reducing the distance to prototypical trees from the same class and increasing the distance to prototypical trees from different classes. 
In our experiments, we show that BEDL improves upon the state-of-the-art in metric learning for trees on six benchmark data sets, ranging from computer science over biomedical data to a natural-language processing data set containing over 300,000 nodes.", "title": "" }, { "docid": "5547f8ad138a724c2cc05ce65f50ebd2", "text": "As machine learning (ML) technology continues to spread by rapid evolution, the system or service using Machine Learning technology, called ML product, makes big impact on our life, society and economy. Meanwhile, Quality Assurance (QA) for ML product is quite more difficult than hardware, non-ML software and service because performance of ML technology is much better than non-ML technology in exchange for the characteristics of ML product, e.g. low explainability. We must keep rapid evolution and reduce quality risk of ML product simultaneously. In this paper, we show a Quality Assurance Framework for Machine Learning product. Scope of QA in this paper is limited to product evaluation. First, a policy of QA for ML Product is proposed. General principles of product evaluation is introduced and applied to ML product evaluation as a part of the policy. They are composed of A-ARAI: Allowability, Achievability, Robustness, Avoidability and Improvability. A strategy of ML Product Evaluation is constructed as another part of the policy. Quality Integrity Level for ML product is also modelled. Second, we propose a test architecture of ML product testing. It consists of test levels and fundamental test types of ML product testing, including snapshot testing, learning testing and confrontation testing. Finally, we defines QA activity levels for ML product.", "title": "" } ]
scidocsrr
6cb4e14d677a2519baac016b6799858f
Impacts of implementing Enterprise Content Management Systems
[ { "docid": "8e6677e03f964984e87530afad29aef3", "text": "University of Jyväskylä, Department of Computer Science and Information Systems, PO Box 35, FIN-40014, Finland; Agder University College, Department of Information Systems, PO Box 422, 4604, Kristiansand, Norway; University of Toronto, Faculty of Information Studies, 140 St. George Street, Toronto, ON M5S 3G6, Canada; University of Oulu, Department of Information Processing Science, University of Oulu, PO Box 3000, FIN-90014, Finland Abstract Innovations in network technologies in the 1990’s have provided new ways to store and organize information to be shared by people and various information systems. The term Enterprise Content Management (ECM) has been widely adopted by software product vendors and practitioners to refer to technologies used to manage the content of assets like documents, web sites, intranets, and extranets In organizational or inter-organizational contexts. Despite this practical interest ECM has received only little attention in the information systems research community. This editorial argues that ECM provides an important and complex subfield of Information Systems. It provides a framework to stimulate and guide future research, and outlines research issues specific to the field of ECM. European Journal of Information Systems (2006) 15, 627–634. doi:10.1057/palgrave.ejis.3000648", "title": "" } ]
[ { "docid": "e08e42c8f146e6a74213643e306446c6", "text": "Disclaimer The opinions and positions expressed in this practice guide are the authors' and do not necessarily represent the opinions and positions of the Institute of Education Sciences or the U.S. Department of Education. This practice guide should be reviewed and applied according to the specific needs of the educators and education agencies using it and with full realization that it represents only one approach that might be taken, based on the research that was available at the time of publication. This practice guide should be used as a tool to assist in decision-making rather than as a \" cookbook. \" Any references within the document to specific education products are illustrative and do not imply endorsement of these products to the exclusion of other products that are not referenced. Alternative Formats On request, this publication can be made available in alternative formats, such as Braille, large print, audiotape, or computer diskette. For more information, call the Alternative Format Center at (202) 205-8113.", "title": "" }, { "docid": "edeefde21bbe1ace9a34a0ebe7bc6864", "text": "Social media platforms provide active communication channels during mass convergence and emergency events such as disasters caused by natural hazards. As a result, first responders, decision makers, and the public can use this information to gain insight into the situation as it unfolds. In particular, many social media messages communicated during emergencies convey timely, actionable information. Processing social media messages to obtain such information, however, involves solving multiple challenges including: parsing brief and informal messages, handling information overload, and prioritizing different types of information found in messages. These challenges can be mapped to classical information processing operations such as filtering, classifying, ranking, aggregating, extracting, and summarizing. We survey the state of the art regarding computational methods to process social media messages and highlight both their contributions and shortcomings. In addition, we examine their particularities, and methodically examine a series of key subproblems ranging from the detection of events to the creation of actionable and useful summaries. Research thus far has, to a large extent, produced methods to extract situational awareness information from social media. In this survey, we cover these various approaches, and highlight their benefits and shortcomings. We conclude with research challenges that go beyond situational awareness, and begin to look at supporting decision making and coordinating emergency-response actions.", "title": "" }, { "docid": "134ecc62958fa9bb930ff934c5fad7a3", "text": "We extend our methods from [24] to reprove the Local Langlands Correspondence for GLn over p-adic fields as well as the existence of `-adic Galois representations attached to (most) regular algebraic conjugate self-dual cuspidal automorphic representations, for which we prove a local-global compatibility statement as in the book of Harris-Taylor, [10]. In contrast to the proofs of the Local Langlands Correspondence given by Henniart, [13], and Harris-Taylor, [10], our proof completely by-passes the numerical Local Langlands Correspondence of Henniart, [11]. 
Instead, we make use of a previous result from [24] describing the inertia-invariant nearby cycles in certain regular situations.", "title": "" }, { "docid": "34f6603912c9775fc48329e596467107", "text": "Turbo generator with evaporative cooling stator and air cooling rotor possesses many excellent qualities for mid unit. The stator bars and core are immerged in evaporative coolant, which could be cooled fully. The rotor bars are cooled by air inner cooling mode, and the cooling effect compared with hydrogen and water cooling mode is limited. So an effective ventilation system has to been employed to insure the reliability of rotor. This paper presents the comparisons of stator temperature distribution between evaporative cooling mode and air cooling mode, and the designing of rotor ventilation system combined with evaporative cooling stator.", "title": "" }, { "docid": "646097feed29f603724f7ec6b8bbeb8b", "text": "Online reviews provide valuable information about products and services to consumers. However, spammers are joining the community trying to mislead readers by writing fake reviews. Previous attempts for spammer detection used reviewers' behaviors, text similarity, linguistics features and rating patterns. Those studies are able to identify certain types of spammers, e.g., those who post many similar reviews about one target entity. However, in reality, there are other kinds of spammers who can manipulate their behaviors to act just like genuine reviewers, and thus cannot be detected by the available techniques. In this paper, we propose a novel concept of a heterogeneous review graph to capture the relationships among reviewers, reviews and stores that the reviewers have reviewed. We explore how interactions between nodes in this graph can reveal the cause of spam and propose an iterative model to identify suspicious reviewers. This is the first time such intricate relationships have been identified for review spam detection. We also develop an effective computation method to quantify the trustiness of reviewers, the honesty of reviews, and the reliability of stores. Different from existing approaches, we don't use review text information. Our model is thus complementary to existing approaches and able to find more difficult and subtle spamming activities, which are agreed upon by human judges after they evaluate our results.", "title": "" }, { "docid": "a203839d7ec2ca286ac93435aa552159", "text": "Boxer is a semantic parser for English texts with many input and output possibilities, and various ways to perform meaning analysis based on Discourse Representation Theory. This involves the various ways that meaning representations can be computed, as well as their possible semantic ingredients.", "title": "" }, { "docid": "c6ad38fa33666cf8d28722b9a1127d07", "text": "Weakly-supervised semantic image segmentation suffers from lacking accurate pixel-level annotations. In this paper, we propose a novel graph convolutional network-based method, called GraphNet, to learn pixel-wise labels from weak annotations. Firstly, we construct a graph on the superpixels of a training image by combining the low-level spatial relation and high-level semantic content. Meanwhile, scribble or bounding box annotations are embedded into the graph, respectively. Then, GraphNet takes the graph as input and learns to predict high-confidence pseudo image masks by a convolutional network operating directly on graphs. At last, a segmentation network is trained supervised by these pseudo image masks. 
We comprehensively conduct experiments on the PASCAL VOC 2012 and PASCAL-CONTEXT segmentation benchmarks. Experimental results demonstrate that GraphNet is effective to predict the pixel labels with scribble or bounding box annotations. The proposed framework yields state-of-the-art results in the community.", "title": "" }, { "docid": "cc34a912fb5e1fbb2a1b87d1c79ac01f", "text": "Amyotrophic lateral sclerosis (ALS) is a devastating neurodegenerative disorder characterized by death of motor neurons leading to muscle wasting, paralysis, and death, usually within 2-3 years of symptom onset. The causes of ALS are not completely understood, and the neurodegenerative processes involved in disease progression are diverse and complex. There is substantial evidence implicating oxidative stress as a central mechanism by which motor neuron death occurs, including elevated markers of oxidative damage in ALS patient spinal cord and cerebrospinal fluid and mutations in the antioxidant enzyme superoxide dismutase 1 (SOD1) causing approximately 20% of familial ALS cases. However, the precise mechanism(s) by which mutant SOD1 leads to motor neuron degeneration has not been defined with certainty, and the ultimate trigger for increased oxidative stress in non-SOD1 cases remains unclear. Although some antioxidants have shown potential beneficial effects in animal models, human clinical trials of antioxidant therapies have so far been disappointing. Here, the evidence implicating oxidative stress in ALS pathogenesis is reviewed, along with how oxidative damage triggers or exacerbates other neurodegenerative processes, and we review the trials of a variety of antioxidants as potential therapies for ALS.", "title": "" }, { "docid": "aeb56fbd60165c34c91fa0366c335f7d", "text": "The advent of technology in the 1990s was seen as having the potential to revolutionise electronic management of student assignments. While there were advantages and disadvantages, the potential was seen as a necessary part of the future of this aspect of academia. A number of studies (including Dalgarno et al in 2006) identified issues that supported positive aspects of electronic assignment management but consistently identified drawbacks, suggesting that the maximum achievable potential for these processes may have been reached. To confirm the perception that the technology and process are indeed ‘marking time’ a further study was undertaken at the University of South Australia (UniSA). This paper deals with the study of online receipt, assessment and feedback of assessment utilizing UniSA technology referred to as AssignIT. The study identified that students prefer a paperless approach to marking however there are concerns with the nature, timing and quality of feedback. Staff have not embraced all of the potential elements of electronic management of assignments, identified Occupational Health Safety and Welfare issues, and tended to drift back to traditional manual marking processes through a lack of understanding or confidence in their ability to properly use the technology.", "title": "" }, { "docid": "5aa8c418b63a3ecb71dc60d4863f35cc", "text": "Based on the sense definition of words available in the Bengali WordNet, an attempt is made to classify the Bengali sentences automatically into different groups in accordance with their underlying senses. The input sentences are collected from 50 different categories of the Bengali text corpus developed in the TDIL project of the Govt. 
of India, while information about the different senses of particular ambiguous lexical item is collected from Bengali WordNet. In an experimental basis we have used Naive Bayes probabilistic model as a useful classifier of sentences. We have applied the algorithm over 1747 sentences that contain a particular Bengali lexical item which, because of its ambiguous nature, is able to trigger different senses that render sentences in different meanings. In our experiment we have achieved around 84% accurate result on the sense classification over the total input sentences. We have analyzed those residual sentences that did not comply with our experiment and did affect the results to note that in many cases, wrong syntactic structures and less semantic information are the main hurdles in semantic classification of sentences. The applicational relevance of this study is attested in automatic text classification, machine learning, information extraction, and word sense disambiguation.", "title": "" }, { "docid": "9b470feac9ae4edd11b87921934c9fc2", "text": "Cutaneous melanoma may in some instances be confused with seborrheic keratosis, which is a very common neoplasia, more often mistaken for actinic keratosis and verruca vulgaris. Melanoma may clinically resemble seborrheic keratosis and should be considered as its possible clinical simulator. We report a case of melanoma with dermatoscopic characteristics of seborrheic keratosis and emphasize the importance of the dermatoscopy algorithm in differentiating between a melanocytic and a non-melanocytic lesion, of the excisional biopsy for the establishment of the diagnosis of cutaneous tumors, and of the histopathologic examination in all surgically removed samples.", "title": "" }, { "docid": "1564a94998151d52785dd0429b4ee77d", "text": "Location management refers to the problem of updating and searching the current location of mobile nodes in a wireless network. To make it efficient, the sum of update costs of location database must be minimized. Previous work relying on fixed location databases is unable to fully exploit the knowledge of user mobility patterns in the system so as to achieve this minimization. The study presents an intelligent location management approach which has interacts between intelligent information system and knowledge-base technologies, so we can dynamically change the user patterns and reduce the transition between the VLR and HLR. The study provides algorithms are ability to handle location registration and call delivery.", "title": "" }, { "docid": "2343e18c8a36bc7da6357086c10f43d4", "text": "Sensor networks offer a powerful combination of distributed sensing, computing and communication. They lend themselves to countless applications and, at the same time, offer numerous challenges due to their peculiarities, primarily the stringent energy constraints to which sensing nodes are typically subjected. The distinguishing traits of sensor networks have a direct impact on the hardware design of the nodes at at least four levels: power source, processor, communication hardware, and sensors. Various hardware platforms have already been designed to test the many ideas spawned by the research community and to implement applications to virtually all fields of science and technology. 
We are convinced that CAS will be able to provide a substantial contribution to the development of this exciting field.", "title": "" }, { "docid": "a56d43bd191147170e1df87878ca1b11", "text": "Although problem solving is regarded by most educators as among the most important learning outcomes, few instructional design prescriptions are available for designing problem-solving instruction and engaging learners. This paper distinguishes between well-structured problems and ill-structured problems. Well-structured problems are constrained problems with convergent solutions that engage the application of a limited number of rules and principles within welldefined parameters. Ill-structured problems possess multiple solutions, solution paths, fewer parameters which are less manipulable, and contain uncertainty about which concepts, rules, and principles are necessary for the solution or how they are organized and which solution is best. For both types of problems, this paper presents models for how learners solve them and models for designing instruction to support problem-solving skill development. The model for solving wellstructured problems is based on information processing theories of learning, while the model for solving ill-structured problems relies on an emerging theory of ill-structured problem solving and on constructivist and situated cognition approaches to learning. PROBLEM: INSTRUCTIONAL-DESIGN MODELS FOR PROBLEM SOLVING", "title": "" }, { "docid": "a9d4c193693b060f6f2527e92c07e110", "text": "We introduce a novel method for describing and controlling a 3D smoke simulation. Using harmonic analysis and principal component analysis, we define an underlying description of the fluid flow that is compact and meaningful to non-expert users. The motion of the smoke can be modified with high level tools, such as animated current curves, attractors and tornadoes. Our simulation is controllable, interactive and stable for arbitrarily long periods of time. The simulation's computational cost increases linearly in the number of motion samples and smoke particles. Our adaptive smoke particle representation conveniently incorporates the surface-like characteristics of real smoke.", "title": "" }, { "docid": "3d173f723b4f60e2bb15efe22af5e450", "text": "Microblogging websites such as twitter and Sina Weibo have attracted many users to share their experiences and express their opinions on a variety of topics. Sentiment classification of microblogging texts is of great significance in analyzing users' opinion on products, persons and hot topics. However, conventional bag-of-words-based sentiment classification methods may meet some problems in processing Chinese microblogging texts because they does not consider semantic meanings of texts. In this paper, we proposed a global RNN-based sentiment method, which use the outputs of all the time-steps as features to extract the global information of texts, for sentiment classification of Chinese microblogging texts and explored different RNN-models. The experiments on two Chinese microblogging datasets show that the proposed method achieves better performance than conventional bag-of-words-based methods.", "title": "" }, { "docid": "1bdf1bfe81bf6f947df2254ae0d34227", "text": "We investigate the problem of incorporating higher-level symbolic score-like information into Automatic Music Transcription (AMT) systems to improve their performance. 
We use recurrent neural networks (RNNs) and their variants as music language models (MLMs) and present a generative architecture for combining these models with predictions from a frame level acoustic classifier. We also compare different neural network architectures for acoustic modeling. The proposed model computes a distribution over possible output sequences given the acoustic input signal and we present an algorithm for performing a global search for good candidate transcriptions. The performance of the proposed model is evaluated on piano music from the MAPS dataset and we observe that the proposed model consistently outperforms existing transcription methods.", "title": "" }, { "docid": "fdd63e1c0027f21af7dea9db9e084b26", "text": "To bring down the number of traffic accidents and increase people’s mobility companies, such as Robot Engineering Systems (RES) try to put automated vehicles on the road. RES is developing the WEpod, a shuttle capable of autonomously navigating through mixed traffic. This research has been done in cooperation with RES to improve the localization capabilities of the WEpod. The WEpod currently localizes using its GPS and lidar sensors. These have proven to be not accurate and reliable enough to safely navigate through traffic. Therefore, other methods of localization and mapping have been investigated. The primary method investigated in this research is monocular Simultaneous Localization and Mapping (SLAM). Based on literature and practical studies, ORB-SLAM has been chosen as the implementation of SLAM. Unfortunately, ORB-SLAM is unable to initialize the setup when applied on WEpod images. Literature has shown that this problem can be solved by adding depth information to the inputs of ORB-SLAM. Obtaining depth information for the WEpod images is not an arbitrary task. The sensors on the WEpod are not capable of creating the required dense depth-maps. A Convolutional Neural Network (CNN) could be used to create the depth-maps. This research investigates whether adding a depth-estimating CNN solves this initialization problem and increases the tracking accuracy of monocular ORB-SLAM. A well performing CNN is chosen and combined with ORB-SLAM. Images pass through the depth estimating CNN to obtain depth-maps. These depth-maps together with the original images are used in ORB-SLAM, keeping the whole setup monocular. ORB-SLAM with the CNN is first tested on the Kitti dataset. The Kitti dataset is used since monocular ORBSLAM initializes on Kitti images and ground-truth depth-maps can be obtained for Kitti images. Monocular ORB-SLAM’s tracking accuracy has been compared to ORB-SLAM with ground-truth depth-maps and to ORB-SLAM with estimated depth-maps. This comparison shows that adding estimated depth-maps increases the tracking accuracy of ORB-SLAM, but not as much as the ground-truth depth images. The same setup is tested on WEpod images. The CNN is fine-tuned on 7481 Kitti images as well as on 642 WEpod images. The performance on WEpod images of both CNN versions are compared, and used in combination with ORB-SLAM. The CNN fine-tuned on the WEpod images does not perform well, missing details in the estimated depth-maps. However, this is enough to solve the initialization problem of ORB-SLAM. The combination of ORB-SLAM and the Kitti fine-tuned CNN has a better tracking accuracy than ORB-SLAM with the WEpod fine-tuned CNN. It has been shown that the initialization problem on WEpod images is solved as well as the tracking accuracy is increased. 
These results show that the initialization problem of monocular ORB-SLAM on WEpod images is solved by adding the CNN. This makes it applicable to improve the current localization methods on the WEpod. Using only this setup for localization on the WEpod is not possible yet, more research is necessary. Adding this setup to the current localization methods of the WEpod could increase the localization of the WEpod. This would make it safer for the WEpod to navigate through traffic. This research sets the next step into creating a fully autonomous vehicle which reduces traffic accidents and increases the mobility of people.", "title": "" }, { "docid": "70cad4982e42d44eec890faf6ddc5c75", "text": "Both translation arrest and proteasome stress associated with accumulation of ubiquitin-conjugated protein aggregates were considered as a cause of delayed neuronal death after transient global brain ischemia; however, exact mechanisms as well as possible relationships are not fully understood. The aim of this study was to compare the effect of chemical ischemia and proteasome stress on cellular stress responses and viability of neuroblastoma SH-SY5Y and glioblastoma T98G cells. Chemical ischemia was induced by transient treatment of the cells with sodium azide in combination with 2-deoxyglucose. Proteasome stress was induced by treatment of the cells with bortezomib. Treatment of SH-SY5Y cells with sodium azide/2-deoxyglucose for 15 min was associated with cell death observed 24 h after treatment, while glioblastoma T98G cells were resistant to the same treatment. Treatment of both SH-SY5Y and T98G cells with bortezomib was associated with cell death, accumulation of ubiquitin-conjugated proteins, and increased expression of Hsp70. These typical cellular responses to proteasome stress, observed also after transient global brain ischemia, were not observed after chemical ischemia. Finally, chemical ischemia, but not proteasome stress, was in SH-SY5Y cells associated with increased phosphorylation of eIF2α, another typical cellular response triggered after transient global brain ischemia. Our results showed that short chemical ischemia of SH-SY5Y cells is not sufficient to induce both proteasome stress associated with accumulation of ubiquitin-conjugated proteins and stress response at the level of heat shock proteins despite induction of cell death and eIF2α phosphorylation.", "title": "" }, { "docid": "5f68e7d03c48d842add703ce0492c453", "text": "This paper presents a summary of the available single-phase ac-dc topologies used for EV/PHEV, level-1 and -2 on-board charging and for providing reactive power support to the utility grid. It presents the design motives of single-phase on-board chargers in detail and makes a classification of the chargers based on their future vehicle-to-grid usage. The pros and cons of each different ac-dc topology are discussed to shed light on their suitability for reactive power support. This paper also presents and analyzes the differences between charging-only operation and capacitive reactive power operation that results in increased demand from the dc-link capacitor (more charge/discharge cycles and increased second harmonic ripple current). Moreover, battery state of charge is spared from losses during reactive power operation, but converter output power must be limited below its rated power rating to have the same stress on the dc-link capacitor.", "title": "" } ]
scidocsrr
0a7dd51ff1a23ab6f28c0dcf7963e1eb
Radio Frequency Time-of-Flight Distance Measurement for Low-Cost Wireless Sensor Localization
[ { "docid": "df09834abe25199ac7b3205d657fffb2", "text": "In modern wireless communications products it is required to incorporate more and more different functions to comply with current market trends. A very attractive function with steadily growing market penetration is local positioning. To add this feature to low-cost mass-market devices without additional power consumption, it is desirable to use commercial communication chips and standards for localization of the wireless units. In this paper we present a concept to measure the distance between two IEEE 802.15.4 (ZigBee) compliant devices. The presented prototype hardware consists of a low- cost 2.45 GHz ZigBee chipset. For localization we use standard communication packets as transmit signals. Thus simultaneous data transmission and transponder localization is feasible. To achieve high positioning accuracy even in multipath environments, a coherent synthesis of measurements in multiple channels and a special signal phase evaluation concept is applied. With this technique the full available ISM bandwidth of 80 MHz is utilized. In first measurements with two different frequency references-a low-cost oscillator and a temperatur-compensated crystal oscillator-a positioning bias error of below 16 cm and 9 cm was obtained. The standard deviation was less than 3 cm and 1 cm, respectively. It is demonstrated that compared to signal correlation in time, the phase processing technique yields an accuracy improvement of roughly an order of magnitude.", "title": "" } ]
[ { "docid": "ff572d9c74252a70a48d4ba377f941ae", "text": "This paper considers how design fictions in the form of 'imaginary abstracts' can be extended into complete 'fictional papers'. Imaginary abstracts are a type of design fiction that are usually included within the content of 'real' research papers, they comprise brief accounts of fictional problem frames, prototypes, user studies and findings. Design fiction abstracts have been proposed as a means to move beyond solutionism to explore the potential societal value and consequences of new HCI concepts. In this paper we contrast the properties of imaginary abstracts, with the properties of a published paper that presents fictional research, Game of Drones. Extending the notion of imaginary abstracts so that rather than including fictional abstracts within a 'non-fiction' research paper, Game of Drones is fiction from start to finish (except for the concluding paragraph where the fictional nature of the paper is revealed). In this paper we review the scope of design fiction in HCI research before contrasting the properties of imaginary abstracts with the properties of our example fictional research paper. We argue that there are clear merits and weaknesses to both approaches, but when used tactfully and carefully fictional research papers may further empower HCI's burgeoning design discourse with compelling new methods.", "title": "" }, { "docid": "5ac63b0be4561f126c90b65e834e1d14", "text": "Conventional security exploits have relied on overwriting the saved return pointer on the stack to hijack the path of execution. Under Sun Microsystem’s Sparc processor architecture, we were able to implement a kernel modification to transparently and automatically guard applications’ return pointers. Our implementation called StackGhost under OpenBSD 2.8 acts as a ghost in the machine. StackGhost advances exploit prevention in that it protects every application run on the system without their knowledge nor does it require their source or binary modification. We will document several of the methods devised to preserve the sanctity of the system and will explore the performance ramifications of StackGhost.", "title": "" }, { "docid": "ad59ca3f7c945142baf9353eeb68e504", "text": "This essay considers dynamic security design and corporate financing, with particular emphasis on informational micro-foundations. The central idea is that firm insiders must retain an appropriate share of firm risk, either to align their incentives with those of outside investors (moral hazard) or to signal favorable information about the quality of the firm’s assets. Informational problems lead to inevitable inefficiencies imperfect risk sharing, the possibility of bankruptcy, investment distortions, etc. The design of contracts that minimize these inefficiencies is a central question. This essay explores the implications of dynamic security design on firm operations and asset prices.", "title": "" }, { "docid": "e0c3dfd45d422121e203955979e23719", "text": "Machine Learning (ML) models are applied in a variety of tasks such as network intrusion detection or malware classification. Yet, these models are vulnerable to a class of malicious inputs known as adversarial examples. These are slightly perturbed inputs that are classified incorrectly by the ML model. The mitigation of these adversarial inputs remains an open problem. 
As a step towards a model-agnostic defense against adversarial examples, we show that they are not drawn from the same distribution as the original data, and can thus be detected using statistical tests. As the number of malicious points included in samples presented to the test diminishes, its detection confidence decreases. Hence, we introduce a complementary approach to identify specific inputs that are adversarial among sets of inputs flagged by the statistical test. Specifically, we augment our ML model with an additional output, in which the model is trained to classify all adversarial inputs. We evaluate our approach on multiple adversarial example crafting methods (including the fast gradient sign and Jacobian-based saliency map methods) with several datasets. The statistical test flags sample sets containing adversarial inputs with confidence above 80%. Furthermore, our augmented model either detects adversarial examples with high accuracy (> 80%) or increases the adversary’s cost (the perturbation added) by more than 150%. In this way, we show that statistical properties of adversarial examples are essential to their detection.", "title": "" }, { "docid": "7091deeeea31ed1e2e8fba821e85db6e", "text": "Protein folding is a complex process that can lead to disease when it fails. Especially poorly understood are the very early stages of protein folding, which are likely defined by intrinsic local interactions between amino acids close to each other in the protein sequence. We here present EFoldMine, a method that predicts, from the primary amino acid sequence of a protein, which amino acids are likely involved in early folding events. The method is based on early folding data from hydrogen deuterium exchange (HDX) data from NMR pulsed labelling experiments, and uses backbone and sidechain dynamics as well as secondary structure propensities as features. The EFoldMine predictions give insights into the folding process, as illustrated by a qualitative comparison with independent experimental observations. Furthermore, on a quantitative proteome scale, the predicted early folding residues tend to become the residues that interact the most in the folded structure, and they are often residues that display evolutionary covariation. The connection of the EFoldMine predictions with both folding pathway data and the folded protein structure suggests that the initial statistical behavior of the protein chain with respect to local structure formation has a lasting effect on its subsequent states.", "title": "" }, { "docid": "d5665efd0e4a91e9be4c84fecd5fd4ad", "text": "Hardware accelerators are being increasingly deployed to boost the performance and energy efficiency of deep neural network (DNN) inference. In this paper we propose Thundervolt, a new framework that enables aggressive voltage underscaling of high-performance DNN accelerators without compromising classification accuracy even in the presence of high timing error rates. Using post-synthesis timing simulations of a DNN accelerator modeled on the Google TPU, we show that Thundervolt enables between 34%-57% energy savings on state-of-the-art speech and image recognition benchmarks with less than 1% loss in classification accuracy and no performance loss.
Further, we show that Thundervolt is synergistic with and can further increase the energy efficiency of commonly used run-timeDNNpruning techniques like Zero-Skip.", "title": "" }, { "docid": "06f421d0f63b9dc08777c573840654d5", "text": "This paper presents the implementation of a modified state observer-based adaptive dynamic inverse controller for the Black Kite micro aerial vehicle. The pitch and velocity adaptations are computed by the modified state observer in the presence of turbulence to simulate atmospheric conditions. This state observer uses the estimation error to generate the adaptations and, hence, is more robust than model reference adaptive controllers which use modeling or tracking error. In prior work, a traditional proportional-integral-derivative control law was tested in simulation for its adaptive capability in the longitudinal dynamics of the Black Kite micro aerial vehicle. This controller tracks the altitude and velocity commands during normal conditions, but fails in the presence of both parameter uncertainties and system failures. The modified state observer-based adaptations, along with the proportional-integral-derivative controller enables tracking despite these conditions. To simulate flight of the micro aerial vehicle with turbulence, a Dryden turbulence model is included. The turbulence levels used are based on the absolute load factor experienced by the aircraft. The length scale was set to 2.0 meters with a turbulence intensity of 5.0 m/s that generates a moderate turbulence. Simulation results for various flight conditions show that the modified state observer-based adaptations were able to adapt to the uncertainties and the controller tracks the commanded altitude and velocity. The summary of results for all of the simulated test cases and the response plots of various states for typical flight cases are presented.", "title": "" }, { "docid": "93810dab9ff258d6e11edaffa1e4a0ff", "text": "Ishaq, O. 2016. Image Analysis and Deep Learning for Applications in Microscopy. Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology 1371. 76 pp. Uppsala: Acta Universitatis Upsaliensis. ISBN 978-91-554-9567-1. Quantitative microscopy deals with the extraction of quantitative measurements from samples observed under a microscope. Recent developments in microscopy systems, sample preparation and handling techniques have enabled high throughput biological experiments resulting in large amounts of image data, at biological scales ranging from subcellular structures such as fluorescently tagged nucleic acid sequences to whole organisms such as zebrafish embryos. Consequently, methods and algorithms for automated quantitative analysis of these images have become increasingly important. These methods range from traditional image analysis techniques to use of deep learning architectures. Many biomedical microscopy assays result in fluorescent spots. Robust detection and precise localization of these spots are two important, albeit sometimes overlapping, areas for application of quantitative image analysis. We demonstrate the use of popular deep learning architectures for spot detection and compare them against more traditional parametric model-based approaches. Moreover, we quantify the effect of pre-training and change in the size of training sets on detection performance. 
Thereafter, we determine the potential of training deep networks on synthetic and semi-synthetic datasets and their comparison with networks trained on manually annotated real data. In addition, we present a two-alternative forced-choice based tool for assisting in manual annotation of real image data. On a spot localization track, we parallelize a popular compressed sensing based localization method and evaluate its performance in conjunction with different optimizers, noise conditions and spot densities. We investigate its sensitivity to different point spread function estimates. Zebrafish is an important model organism, attractive for whole-organism image-based assays for drug discovery campaigns. The effect of drug-induced neuronal damage may be expressed in the form of zebrafish shape deformation. First, we present an automated method for accurate quantification of tail deformations in multi-fish micro-plate wells using image analysis techniques such as illumination correction, segmentation, generation of branch-free skeletons of partial tail-segments and their fusion to generate complete tails. Later, we demonstrate the use of a deep learning-based pipeline for classifying micro-plate wells as either drug-affected or negative controls, resulting in competitive performance, and compare the performance from deep learning against that from traditional image analysis approaches.", "title": "" }, { "docid": "149ffd270f39a330f4896c7d3aa290be", "text": "The pathogenesis underlining many neurodegenerative diseases remains incompletely understood. The lack of effective biomarkers and disease preventative medicine demands the development of new techniques to efficiently probe the mechanisms of disease and to detect early biomarkers predictive of disease onset. Raman spectroscopy is an established technique that allows the label-free fingerprinting and imaging of molecules based on their chemical constitution and structure. While analysis of isolated biological molecules has been widespread in the chemical community, applications of Raman spectroscopy to study clinically relevant biological species, disease pathogenesis, and diagnosis have been rapidly increasing since the past decade. The growing number of biomedical applications has shown the potential of Raman spectroscopy for detection of novel biomarkers that could enable the rapid and accurate screening of disease susceptibility and onset. Here we provide an overview of Raman spectroscopy and related techniques and their application to neurodegenerative diseases. We further discuss their potential utility in research, biomarker detection, and diagnosis. Challenges to routine use of Raman spectroscopy in the context of neuroscience research are also presented.", "title": "" }, { "docid": "7d0d68f2dd9e09540cb2ba71646c21d2", "text": "INTRODUCTION: Back in time dentists used to place implants in locations with sufficient bone-dimensions only, with less regard to placement of final definitive restoration but most of the times, the placement of implant is not as accurate as intended and even a minor variation in comparison to ideal placement causes difficulties in fabrication of final prosthesis. The use of bone substitutes and membranes is now one of the standard therapeutic approaches. In order to accelerate healing of bone graft over the bony defect, numerous techniques utilizing platelet and fibrinogen concentrates have been introduced in the literature.. 
OBJECTIVES: This study was designed to evaluate the efficacy of using Autologous Concentrated Growth Factors (CGF) Enriched Bone Graft Matrix (Sticky Bone) and CGF-Enriched Fibrin Membrane in management of dehiscence defect around dental implant in narrow maxillary anterior ridge. MATERIALS AND METHODS: Eleven DIO implants were inserted in six adult patients presenting an upper alveolar ridge width of less than 4mm determined by cone beam computed tomography (CBCT). After implant placement, the resultant vertical labial dehiscence defect was augmented utilizing Sticky Bone and CGF-Enriched Fibrin Membrane. Three CBCTs were made, pre-operatively, immediately postoperatively and six-months post-operatively. The change in vertical defect size was calculated radiographically then statistically analyzed. RESULTS: Vertical dehiscence defect was sufficiently recovered in 5 implant-sites while in the other 6 sites it was decreased to mean value of 1.25 mm ± 0.69 SD, i.e., the defect coverage in 6 implants occurred with mean value of 4.59 mm ±0.49 SD. Also the results of the present study showed that the mean of average implant stability was 59.89 mm ± 3.92. CONCLUSIONS: The combination of PRF mixed with CGF with bone graft (allograft) can increase the quality (density) of the newly formed bone and enhance the rate of new bone formation.", "title": "" }, { "docid": "b045350bfb820634046bff907419d1bf", "text": "Action recognition and human pose estimation are closely related but both problems are generally handled as distinct tasks in the literature. In this work, we propose a multitask framework for jointly 2D and 3D pose estimation from still images and human action recognition from video sequences. We show that a single architecture can be used to solve the two problems in an efficient way and still achieves state-of-the-art results. Additionally, we demonstrate that optimization from end-to-end leads to significantly higher accuracy than separated learning. The proposed architecture can be trained with data from different categories simultaneously in a seamless way. The reported results on four datasets (MPII, Human3.6M, Penn Action and NTU) demonstrate the effectiveness of our method on the targeted tasks.", "title": "" }, { "docid": "a7dff1f19690e31f90e0fa4a85db5d97", "text": "This paper presents BOOM version 2, an updated version of the Berkeley Out-of-Order Machine first presented in [3]. The design exploration was performed through synthesis, place and route using the foundry-provided standard-cell library and the memory compiler in the TSMC 28 nm HPM process (high performance mobile). BOOM is an open-source processor that implements the RV64G RISC-V Instruction Set Architecture (ISA). Like most contemporary high-performance cores, BOOM is superscalar (able to execute multiple instructions per cycle) and out-of-order (able to execute instructions as their dependencies are resolved and not restricted to their program order). BOOM is implemented as a parameterizable generator written using the Chisel hardware construction language [2] that can be used to generate synthesizable implementations targeting both FPGAs and ASICs. BOOMv2 is an update in which the design effort has been informed by analysis of synthesized, placed and routed data provided by a contemporary industrial tool flow.
We also had access to standard single- and dual-ported memory compilers provided by the foundry, allowing us to explore design trade-offs using different SRAM memories and comparing against synthesized flip-flop arrays. The main distinguishing features of BOOMv2 include an updated 3-stage front-end design with a bigger set-associative Branch Target Buffer (BTB); a pipelined register rename stage; split floating point and integer register files; a dedicated floating point pipeline; separate issue windows for floating point, integer, and memory micro-operations; and separate stages for issue-select and register read. Managing the complexity of the register file was the largest obstacle to improving BOOM’s clock frequency. We spent considerable effort on placing-and-routing a semi-custom 9-port register file to explore the potential improvements over a fully synthesized design, in conjunction with microarchitectural techniques to reduce the size and port count of the register file. BOOMv2 has a 37 fanout-of-four (FO4) inverter delay after synthesis and 50 FO4 after place-and-route, a 24% reduction from BOOMv1’s 65 FO4 after place-and-route. Unfortunately, instruction per cycle (IPC) performance drops up to 20%, mostly due to the extra latency between load instructions and dependent instructions. However, the new BOOMv2 physical design paves the way for IPC recovery later. BOOMv1-2f3i int/idiv/fdiv", "title": "" }, { "docid": "66108bc186971cc1f69a20e7b7e0283f", "text": "Mining frequent itemsets and association rules is a popular and well researched approach for discovering interesting relationships between variables in large databases. The R package arules presented in this paper provides a basic infrastructure for creating and manipulating input data sets and for analyzing the resulting itemsets and rules. The package also includes interfaces to two fast mining algorithms, the popular C implementations of Apriori and Eclat by Christian Borgelt. These algorithms can be used to mine frequent itemsets, maximal frequent itemsets, closed frequent itemsets and association rules.", "title": "" }, { "docid": "f78fcf875104f8bab2fa465c414331c6", "text": "In this paper, we present a systematic framework for recognizing realistic actions from videos “in the wild”. Such unconstrained videos are abundant in personal collections as well as on the Web. Recognizing action from such videos has not been addressed extensively, primarily due to the tremendous variations that result from camera motion, background clutter, changes in object appearance, and scale, etc. The main challenge is how to extract reliable and informative features from the unconstrained videos. We extract both motion and static features from the videos. Since the raw features of both types are dense yet noisy, we propose strategies to prune these features. We use motion statistics to acquire stable motion features and clean static features. Furthermore, PageRank is used to mine the most informative static features. In order to further construct compact yet discriminative visual vocabularies, a divisive information-theoretic algorithm is employed to group semantically related features. Finally, AdaBoost is chosen to integrate all the heterogeneous yet complementary features for recognition.
We have tested the framework on the KTH dataset and our own dataset consisting of 11 categories of actions collected from YouTube and personal videos, and have obtained impressive results for action recognition and action localization.", "title": "" }, { "docid": "53d1ddf4809ab735aa61f4059a1a38b1", "text": "In this paper we present a wearable Haptic Feedback Device to convey intuitive motion direction to the user through haptic feedback based on vibrotactile illusions. Vibrotactile illusions occur on the skin when two or more vibrotactile actuators in proximity are actuated in coordinated sequence, causing the user to feel combined sensations, instead of separate ones. By combining these illusions we can produce various sensation patterns that are discernible by the user, thus allowing to convey different information with each pattern. A method to provide information about direction through vibrotactile illusions is introduced on this paper. This method uses a grid of vibrotactile actuators around the arm actuated in coordination. The sensation felt on the skin is consistent with the desired direction of motion, so the desired motion can be intuitively understood. We show that the users can recognize the conveyed direction, and implemented a proof of concept of the proposed method to guide users' elbow flexion/extension motion.", "title": "" }, { "docid": "994f37328a1e27290af874769d41c5e7", "text": "In the article by Powers et al, “2018 Guidelines for the Early Management of Patients With Acute Ischemic Stroke: A Guideline for Healthcare Professionals From the American Heart Association/American Stroke Association,” which published ahead of print January 24, 2018, and appeared in the March 2018 issue of the journal (Stroke. 2018;49:e46–e110. DOI: 10.1161/ STR.0000000000000158), a few corrections were needed. 1. On page e46, the text above the byline read: Reviewed for evidence-based integrity and endorsed by the American Association of Neurological Surgeons and Congress of Neurological Surgeons Endorsed by the Society for Academic Emergency Medicine It has been updated to read: Reviewed for evidence-based integrity and endorsed by the American Association of Neurological Surgeons and Congress of Neurological Surgeons Endorsed by the Society for Academic Emergency Medicine and Neurocritical Care Society The American Academy of Neurology affirms the value of this guideline as an educational tool for neurologists. 2. On page e60, in the section “2.2. Brain Imaging,” in the knowledge byte text below recommendation 12: • The seventh sentence read, “Therefore, only the eligibility criteria from these trials should be used for patient selection.” It has been updated to read, “Therefore, only the eligibility criteria from one or the other of these trials should be used for patient selection.” • The eighth sentence read, “...at this time, the DAWN and DEFUSE 3 eligibility should be strictly adhered to in clinical practice.” It has been updated to read, “...at this time, the DAWN or DEFUSE 3 eligibility should be strictly adhered to in clinical practice.” 3. On page e73, in the section “3.7. Mechanical Thrombectomy,” recommendation 8 read, “In selected patients with AIS within 6 to 24 hours....” It has been updated to read, “In selected patients with AIS within 16 to 24 hours....” 4. On page e73, in the section “3.7. 
Mechanical Thrombectomy,” in the knowledge byte text below recommendation 8: • The seventh sentence read, “Therefore, only the eligibility criteria from these trials should be used for patient selection.” It has been updated to read, “Therefore, only the eligibility criteria from one or the other of these trials should be used for patient selection.” • The eighth sentence read, “...at this time, the DAWN and DEFUSE-3 eligibility should be strictly adhered to in clinical practice.” It has been updated to read, “...at this time, the DAWN or DEFUSE-3 eligibility should be strictly adhered to in clinical practice.” 5. On page e76, in the section “3.10. Anticoagulants,” in the knowledge byte text below recommendation 1, the third sentence read, “...(LMWH, 64.2% versus aspirin, 6.52%; P=0.33).” It has been updated to read, “...(LMWH, 64.2% versus aspirin, 62.5%; P=0.33).” These corrections have been made to the current online version of the article, which is available at http://stroke.ahajournals.org/lookup/doi/10.1161/STR.0000000000000158. Correction", "title": "" }, { "docid": "e67b9b48507dcabae92debdb9df9cb08", "text": "This paper presents an annotation scheme for events that negatively or positively affect entities (benefactive/malefactive events) and for the attitude of the writer toward their agents and objects. Work on opinion and sentiment tends to focus on explicit expressions of opinions. However, many attitudes are conveyed implicitly, and benefactive/malefactive events are important for inferring implicit attitudes. We describe an annotation scheme and give the results of an inter-annotator agreement study. The annotated corpus is available online.", "title": "" }, { "docid": "4b4306cddcbf62a93dab81676e2b4461", "text": "The use of drones in agriculture is becoming more and more popular. The paper presents a novel approach to distinguish between different field's plowing techniques by means of an RGB-D sensor. The presented system can be easily integrated in commercially available Unmanned Aerial Vehicles (UAVs). In order to successfully classify the plowing techniques, two different measurement algorithms have been developed. Experimental tests show that the proposed methodology is able to provide a good classification of the field's plowing depths.", "title": "" }, { "docid": "e35669db2d6c016cf71107eb00db820d", "text": "Mobile payments will gain significant traction in the coming years as the mobile and payment technologies mature and become widely available. Various technologies are competing to become the established standards for physical and virtual mobile payments, yet it is ultimately the users who will determine the level of success of the technologies through their adoption. Only if it becomes easier and cheaper to transact business using mobile payment applications than by using conventional methods will they become popular, either with users or providers. This document is a state of the art review of mobile payment technologies. It covers all of the technologies involved in a mobile payment solution, including mobile networks in section 2, mobile services in section 3, mobile platforms in section 4, mobile commerce in section 5 and different mobile payment solutions in sections 6 to 8.", "title": "" }, { "docid": "f75ae6fedddde345109d33499853256d", "text": "Deaths due to prescription and illicit opioid overdose have been rising at an alarming rate, particularly in the USA. 
Although naloxone injection is a safe and effective treatment for opioid overdose, it is frequently unavailable in a timely manner due to legal and practical restrictions on its use by laypeople. As a result, an effort spanning decades has resulted in the development of strategies to make naloxone available for layperson or \"take-home\" use. This has included the development of naloxone formulations that are easier to administer for nonmedical users, such as intranasal and autoinjector intramuscular delivery systems, efforts to distribute naloxone to potentially high-impact categories of nonmedical users, as well as efforts to reduce regulatory barriers to more widespread distribution and use. Here we review the historical and current literature on the efficacy and safety of naloxone for use by nonmedical persons, provide an evidence-based discussion of the controversies regarding the safety and efficacy of different formulations of take-home naloxone, and assess the status of current efforts to increase its public distribution. Take-home naloxone is safe and effective for the treatment of opioid overdose when administered by laypeople in a community setting, shortening the time to reversal of opioid toxicity and reducing opioid-related deaths. Complementary strategies have together shown promise for increased dissemination of take-home naloxone, including 1) provision of education and training; 2) distribution to critical populations such as persons with opioid addiction, family members, and first responders; 3) reduction of prescribing barriers to access; and 4) reduction of legal recrimination fears as barriers to use. Although there has been considerable progress in decreasing the regulatory and legal barriers to effective implementation of community naloxone programs, significant barriers still exist, and much work remains to be done to integrate these programs into efforts to provide effective treatment of opioid use disorders.", "title": "" } ]
scidocsrr
be8810dc31c4b77df6092a2b3d52911e
YAMAMA: Yet Another Multi-Dialect Arabic Morphological Analyzer
[ { "docid": "4292a60a5f76fd3e794ce67d2ed6bde3", "text": "If two translation systems differ differ in performance on a test set, can we trust that this indicates a difference in true system quality? To answer this question, we describe bootstrap resampling methods to compute statistical significance of test results, and validate them on the concrete example of the BLEU score. Even for small test sizes of only 300 sentences, our methods may give us assurances that test result differences are real.", "title": "" }, { "docid": "aafda1cab832f1fe92ce406676e3760f", "text": "In this paper, we present MADAMIRA, a system for morphological analysis and disambiguation of Arabic that combines some of the best aspects of two previously commonly used systems for Arabic processing, MADA (Habash and Rambow, 2005; Habash et al., 2009; Habash et al., 2013) and AMIRA (Diab et al., 2007). MADAMIRA improves upon the two systems with a more streamlined Java implementation that is more robust, portable, extensible, and is faster than its ancestors by more than an order of magnitude. We also discuss an online demo (see http://nlp.ldeo.columbia.edu/madamira/) that highlights these aspects.", "title": "" } ]
[ { "docid": "530ef3f5d2f7cb5cc93243e2feb12b8e", "text": "Online personal health record (PHR) enables patients to manage their own medical records in a centralized way, which greatly facilitates the storage, access and sharing of personal health data. With the emergence of cloud computing, it is attractive for the PHR service providers to shift their PHR applications and storage into the cloud, in order to enjoy the elastic resources and reduce the operational cost. However, by storing PHRs in the cloud, the patients lose physical control to their personal health data, which makes it necessary for each patient to encrypt her PHR data before uploading to the cloud servers. Under encryption, it is challenging to achieve fine-grained access control to PHR data in a scalable and efficient way. For each patient, the PHR data should be encrypted so that it is scalable with the number of users having access. Also, since there are multiple owners (patients) in a PHR system and every owner would encrypt her PHR files using a different set of cryptographic keys, it is important to reduce the key distribution complexity in such multi-owner settings. Existing cryptographic enforced access control schemes are mostly designed for the single-owner scenarios. In this paper, we propose a novel framework for access control to PHRs within cloud computing environment. To enable fine-grained and scalable access control for PHRs, we leverage attribute based encryption (ABE) techniques to encrypt each patients’ PHR data. To reduce the key distribution complexity, we divide the system into multiple security domains, where each domain manages only a subset of the users. In this way, each patient has full control over her own privacy, and the key management complexity is reduced dramatically. Our proposed scheme is also flexible, in that it supports efficient and on-demand revocation of user access rights, and break-glass access under emergency scenarios.", "title": "" }, { "docid": "d452700b9c919ba62156beecb0d50b91", "text": "In this paper we propose a solution to the problem of body part segmentation in noisy silhouette images. In developing this solution we revisit the issue of insufficient labeled training data, by investigating how synthetically generated data can be used to train general statistical models for shape classification. In our proposed solution we produce sequences of synthetically generated images, using three dimensional rendering and motion capture information. Each image in these sequences is labeled automatically as it is generated and this labeling is based on the hand labeling of a single initial image.We use shape context features and Hidden Markov Models trained based on this labeled synthetic data. This model is then used to segment silhouettes into four body parts; arms, legs, body and head. Importantly, in all the experiments we conducted the same model is employed with no modification of any parameters after initial training.", "title": "" }, { "docid": "f5d04dd0fe3e717bbbab23eb8330109c", "text": "Unmanned Aerial Vehicle (UAV) surveillance systems allow for highly advanced and safe surveillance of hazardous locations. Further, multi-purpose drones can be widely deployed for not only gathering information but also analyzing the situation from sensed data. However, mobile drone systems have limited computing resources and battery power which makes it a challenge to use these systems for long periods of time or in fully autonomous modes. 
In this paper, we propose an Adaptive Computation Offloading Drone System (ACODS) architecture with reliable communication for increasing drone operating time. We design not only the response time prediction module for mission critical task offloading decision but also task offloading management module via the Multipath TCP (MPTCP). Through performance evaluation via our prototype implementation, we show that the proposed algorithm achieves significant increase in drone operation time and significantly reduces the response time.", "title": "" }, { "docid": "b3962fd4000fced796f3764d009c929e", "text": "Low-field extremity magnetic resonance imaging (lfMRI) is currently commercially available and has been used clinically to evaluate rheumatoid arthritis (RA). However, one disadvantage of this new modality is that the field of view (FOV) is too small to assess hand and wrist joints simultaneously. Thus, we have developed a new lfMRI system, compacTscan, with a FOV that is large enough to simultaneously assess the entire wrist to proximal interphalangeal joint area. In this work, we examined its clinical value compared to conventional 1.5 tesla (T) MRI. The comparison involved evaluating three RA patients by both 0.3 T compacTscan and 1.5 T MRI on the same day. Bone erosion, bone edema, and synovitis were estimated by our new compact MRI scoring system (cMRIS) and the kappa coefficient was calculated on a joint-by-joint basis. We evaluated a total of 69 regions. Bone erosion was detected in 49 regions by compacTscan and in 48 regions by 1.5 T MRI, while the total erosion score was 77 for compacTscan and 76.5 for 1.5 T MRI. These findings point to excellent agreement between the two techniques (kappa = 0.833). Bone edema was detected in 14 regions by compacTscan and in 19 by 1.5 T MRI, and the total edema score was 36.25 by compacTscan and 47.5 by 1.5 T MRI. Pseudo-negative findings were noted in 5 regions. However, there was still good agreement between the techniques (kappa = 0.640). Total number of evaluated joints was 33. Synovitis was detected in 13 joints by compacTscan and 14 joints by 1.5 T MRI, while the total synovitis score was 30 by compacTscan and 32 by 1.5 T MRI. Thus, although 1 pseudo-positive and 2 pseudo-negative findings resulted from the joint evaluations, there was again excellent agreement between the techniques (kappa = 0.827). Overall, the data obtained by our compacTscan system showed high agreement with those obtained by conventional 1.5 T MRI with regard to diagnosis and the scoring of bone erosion, edema, and synovitis. We conclude that compacTscan is useful for diagnosis and estimation of disease activity in patients with RA.", "title": "" }, { "docid": "d54e33049b3f5170ec8bd09d8f17c05c", "text": "Deep learning algorithms seek to exploit the unknown structure in the input distribution in order to discover good representations, often at multiple levels, with higher-level learned features defined in terms of lower-level features. The objective is to make these higherlevel representations more abstract, with their individual features more invariant to most of the variations that are typically present in the training distribution, while collectively preserving as much as possible of the information in the input. Ideally, we would like these representations to disentangle the unknown factors of variation that underlie the training distribution. 
Such unsupervised learning of representations can be exploited usefully under the hypothesis that the input distribution P (x) is structurally related to some task of interest, say predicting P (y|x). This paper focusses on why unsupervised pre-training of representations can be useful, and how it can be exploited in the transfer learning scenario, where we care about predictions on examples that are not from the same distribution as the training distribution.", "title": "" }, { "docid": "8433df9d46df33f1389c270a8f48195d", "text": "BACKGROUND\nFingertip injuries involve varying degree of fractures of the distal phalanx and nail bed or nail plate disruptions. The treatment modalities recommended for these injuries include fracture fixation with K-wire and meticulous repair of nail bed after nail removal and later repositioning of nail or stent substitute into the nail fold by various methods. This study was undertaken to evaluate the functional outcome of vertical figure-of-eight tension band suture for finger nail disruptions with fractures of distal phalanx.\n\n\nMATERIALS AND METHODS\nA series of 40 patients aged between 4 and 58 years, with 43 fingernail disruptions and fracture of distal phalanges, were treated with vertical figure-of-eight tension band sutures without formal fixation of fracture fragments and the results were reviewed. In this method, the injuries were treated by thoroughly cleaning the wound, reducing the fracture fragments, anatomical replacement of nail plate, and securing it by vertical figure-of-eight tension band suture.\n\n\nRESULTS\nAll patients were followed up for a minimum of 3 months. The clinical evaluation of the patients was based on radiological fracture union and painless pinch to determine fingertip stability. Every single fracture united and every fingertip was clinically stable at the time of final followup. We also evaluated our results based on visual analogue scale for pain and range of motion of distal interphalangeal joint. Two sutures had to be revised due to over tensioning and subsequent vascular compromise within minutes of repair; however, this did not affect the final outcome.\n\n\nCONCLUSION\nThis technique is simple, secure, and easily reproducible. It neither requires formal repair of injured nail bed structures nor fixation of distal phalangeal fracture and results in uncomplicated reformation of nail plate and uneventful healing of distal phalangeal fractures.", "title": "" }, { "docid": "175f82940aa18fe390d1ef03835de8cc", "text": "We address personalization issues of image captioning, which have not been discussed yet in previous research. For a query image, we aim to generate a descriptive sentence, accounting for prior knowledge such as the users active vocabularies in previous documents. As applications of personalized image captioning, we tackle two post automation tasks: hashtag prediction and post generation, on our newly collected Instagram dataset, consisting of 1.1M posts from 6.3K users. We propose a novel captioning model named Context Sequence Memory Network (CSMN). Its unique updates over previous memory network models include (i) exploiting memory as a repository for multiple types of context information, (ii) appending previously generated words into memory to capture long-term information without suffering from the vanishing gradient problem, and (iii) adopting CNN memory structure to jointly represent nearby ordered memory slots for better context understanding. 
With quantitative evaluation and user studies via Amazon Mechanical Turk, we show the effectiveness of the three novel features of CSMN and its performance enhancement for personalized image captioning over state-of-the-art captioning models.", "title": "" }, { "docid": "ee80447709188fab5debfcf9b50a9dcb", "text": "Prior research by Kornell and Bjork (2007) and Hartwig and Dunlosky (2012) has demonstrated that college students tend to employ study strategies that are far from optimal. We examined whether individuals in the broader—and typically older—population might hold different beliefs about how best to study and learn, given their more extensive experience outside of formal coursework and deadlines. Via a web-based survey, however, we found striking similarities: Learners’ study decisions tend to be driven by deadlines, and the benefits of activities such as self-testing and reviewing studied materials are mostly unappreciated. We also found evidence, however, that one’s mindset with respect to intelligence is related to one’s habits and beliefs: Individuals who believe that intelligence can be increased through effort were more likely to value the pedagogical benefits of self-testing, to restudy, and to be intrinsically motivated to learn, compared to individuals who believe that intelligence is fixed. © 2014 Society for Applied Research in Memory and Cognition. Published by Elsevier Inc. All rights reserved. With the world’s knowledge at our fingertips, there are increasing opportunities to learn on our own, not only during the years of formal education, but also across our lifespan as our careers, hobbies, and interests change. The rapid pace of technological change has also made such self-directed learning necessary: the ability to effectively self-regulate one’s learning—monitoring one’s own learning and implementing beneficial study strategies—is, arguably, more important than ever before. Decades of research have revealed the efficacy of various study strategies (see Dunlosky, Rawson, Marsh, Nathan, & Willingham, 2013, for a review of effective—and less effective—study techniques). Bjork (1994) coined the term, “desirable difficulties,” to refer to the set of study conditions or study strategies that appear to slow down the acquisition of to-be-learned materials and make the learning process seem more effortful, but then enhance long-term retention and transfer, presumably because contending with those difficulties engages processes that support learning and retention. Examples of desirable difficulties include generating information or testing oneself (instead of reading or re-reading information—a relatively passive activity), spacing out repeated study opportunities (instead of cramming), and varying conditions of practice (rather than keeping those conditions constant and predictable). Many recent findings, however—both survey-based and experimental—have revealed that learners continue to study in non-optimal ways.
Learners do not appear, for example, to understand two of the most robust effects from the cognitive psychology literature—namely, the testing effect (that practicing retrieval leads to better long-term retention, compared even to re-reading; e.g., Roediger & Karpicke, 2006a) and the spacing effect (that spacing repeated study sessions leads to better long-term retention than does massing repetitions; e.g., Cepeda, Pashler, Vul, Wixted, & Rohrer, 2006; Dempster, 1988). A survey of 472 undergraduate students by Kornell and Bjork (2007)—which was replicated by Hartwig and Dunlosky (2012)—showed that students underappreciate the learning benefits of testing. Similarly, Karpicke, Butler, and Roediger (2009) surveyed students’ study strategies and found that re-reading was by far the most popular study strategy and that self-testing tended to be used only to assess whether some level of learning had been achieved, not to enhance subsequent recall. Even when students have some appreciation of effective strategies they often do not implement those strategies. Susser and McCabe (2013), for example, showed that even though students reported understanding the benefits of spaced learning over massed learning, they often do not space their study sessions on a given topic, particularly if their upcoming test is going to have a multiple-choice format, or if they think the material is relatively easy, or if they are simply too busy. In fact, Kornell and Bjork’s (2007) survey showed that students’ study decisions tended to be driven by impending deadlines, rather than by learning goals,", "title": "" }, { "docid": "397036a265637f5a84256bdba80d93a2", "text": "The primary goal of seismic provisions in building codes is to protect life safety through the prevention of structural collapse. To evaluate the extent to which current and past building code provisions meet this objective, the authors have conducted detailed assessments of collapse risk of reinforced-concrete moment frame buildings, including both ‘ductile’ frames that conform to current building code requirements, and ‘non-ductile’ frames that are designed according to out-dated (pre-1975) building codes. Many aspects of the assessment process can have a significant impact on the evaluated collapse performance; this study focuses on methods of representing modeling parameter uncertainties in the collapse assessment process. Uncertainties in structural component strength, stiffness, deformation capacity, and cyclic deterioration are considered for non-ductile and ductile frame structures of varying heights. To practically incorporate these uncertainties in the face of the computationally intensive nonlinear response analyses needed to simulate collapse, the modeling uncertainties are assessed through a response surface, which describes the median collapse capacity as a function of the model random variables. The response surface is then used in conjunction with Monte Carlo methods to quantify the effect of these modeling uncertainties on the calculated collapse fragilities.
Comparisons of the response surface based approach and a simpler approach, namely the first-order second-moment (FOSM) method, indicate that FOSM can lead to inaccurate results in some cases, particularly when the modeling uncertainties cause a shift in the prediction of the median collapse point. An alternate simplified procedure is proposed that combines aspects of the response surface and FOSM methods, providing an efficient yet accurate technique to characterize model uncertainties, accounting for the shift in median response. The methodology for incorporating uncertainties is presented here with emphasis on the collapse limit state, but is also appropriate for examining the effects of modeling uncertainties on other structural limit states. 2008 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "0f79acbbac311e9005112da00ee4a692", "text": "Eight years ago the journal Transcultural Psychiatry published the results of an epidemiological study (Chandler and Lalonde 1998) in which the highly variable rates of youth suicide among British Columbia’s First Nations were related to six markers of “cultural continuity” – community-level variables meant to document the extent to which each of the province’s almost 200 Aboriginal “bands” had taken steps to preserve their cultural past and to secure future control of their civic lives. Two key findings emerged from these earlier efforts. The first was that, although the province-wide rate of Aboriginal youth suicide was sharply elevated (more than 5 times the national average), this commonly reported summary statistic was labelled an “actuarial fiction” that failed to capture the local reality of even one of the province’s First Nations communities. Counting up all of the deaths by suicide and then simply dividing through by the total number of available Aboriginal youth obscures what is really interesting – the dramatic differences in the incidence of youth suicide that actually distinguish one band or tribal council from the next. In fact, more than half of the province’s bands reported no youth suicides during the 6-year period (1987-1992) covered by this study, while more than 90% of the suicides occurred in less than 10% of the bands. Clearly, our data demonstrated, youth suicide is not an “Aboriginal” problem per se but a problem confined to only some Aboriginal communities. Second, all six of the “cultural continuity” factors originally identified – measures intended to mark the degree to which individual Aboriginal communities had successfully taken steps to secure their cultural past in light of an imagined future – proved to be strongly related to the presence or absence of youth suicide. Every community characterized by all six of these protective factors experienced no youth suicides during the 6-year reporting period, whereas those bands in which none of these factors were present suffered suicide rates more than 10 times the national average. Because these findings were seen by us, and have come to be seen by others,1 not only as clarifying the link between cultural continuity and reduced suicide risk but also as having important policy implications, we have undertaken to replicate and broaden our earlier research efforts. We have done this in three ways. 
First, we have extended our earlier examination of the community-by-community incidence of Aboriginal youth suicides to include also the additional", "title": "" }, { "docid": "14f127a8dd4a0fab5acd9db2a3924657", "text": "Pesticides (herbicides, fungicides or insecticides) play an important role in agriculture to control the pests and increase the productivity to meet the demand of foods by a remarkably growing population. Pesticides application thus became one of the important inputs for the high production of corn and wheat in USA and UK, respectively. It also increased the crop production in China and India [1-4]. Although extensive use of pesticides improved in securing enough crop production worldwide however; these pesticides are equally toxic or harmful to nontarget organisms like mammals, birds etc and thus their presence in excess can cause serious health and environmental problems. Pesticides have thus become environmental pollutants as they are often found in soil, water, atmosphere and agricultural products, in harmful levels, posing an environmental threat. Its residual presence in agricultural products and foods can also exhibit acute or chronic toxicity on human health. Even at low levels, it can cause adverse effects on humans, plants, animals and ecosystems. Thus, monitoring of these pesticide and its residues become extremely important to ensure that agricultural products have permitted levels of pesticides [5-6]. Majority of pesticides belong to four classes, namely organochlorines, organophosphates, carbamates and pyrethroids. Organophosphates pesticides are a class of insecticides, of which many are highly toxic [7]. Until the 21st century, they were among the most widely used insecticides which included parathion, malathion, methyl parathion, chlorpyrifos, diazinon, dichlorvos, dimethoate, monocrotophos and profenofos. Organophosphate pesticides cause toxicity by inhibiting acetylcholinesterase enzyme [8]. It acts as a poison to insects and other animals, such as birds, amphibians and mammals, primarily by phosphorylating the acetylcholinesterase enzyme (AChE) present at nerve endings. This leads to the loss of available AChE and because of the excess acetylcholine (ACh, the impulse-transmitting substance), the effected organ becomes over stimulated. The enzyme is critical to control the transmission of nerve impulse from nerve fibers to the smooth and skeletal muscle cells, secretary cells and autonomic ganglia, and within the central nervous system (CNS). Once the enzyme reaches a critical level due to inactivation by phosphorylation, symptoms and signs of cholinergic poisoning get manifested [9].", "title": "" }, { "docid": "8cd666c0796c0fe764bc8de0d7a20fa3", "text": "$$\\mathcal{Q}$$ -learning (Watkins, 1989) is a simple way for agents to learn how to act optimally in controlled Markovian domains. It amounts to an incremental method for dynamic programming which imposes limited computational demands. It works by successively improving its evaluations of the quality of particular actions at particular states. This paper presents and proves in detail a convergence theorem for $$\\mathcal{Q}$$ -learning based on that outlined in Watkins (1989). We show that $$\\mathcal{Q}$$ -learning converges to the optimum action-values with probability 1 so long as all actions are repeatedly sampled in all states and the action-values are represented discretely. 
We also sketch extensions to the cases of non-discounted, but absorbing, Markov environments, and where many $$\\mathcal{Q}$$ values can be changed each iteration, rather than just one.", "title": "" }, { "docid": "28bc08b0e0f71b99a7f223b2285f2725", "text": "Evidence has accrued to suggest that there are 2 distinct dimensions of narcissism, which are often labeled grandiose and vulnerable narcissism. Although individuals high on either of these dimensions interact with others in an antagonistic manner, they differ on other central constructs (e.g., Neuroticism, Extraversion). In the current study, we conducted an exploratory factor analysis of 3 prominent self-report measures of narcissism (N=858) to examine the convergent and discriminant validity of the resultant factors. A 2-factor structure was found, which supported the notion that these scales include content consistent with 2 relatively distinct constructs: grandiose and vulnerable narcissism. We then compared the similarity of the nomological networks of these dimensions in relation to indices of personality, interpersonal behavior, and psychopathology in a sample of undergraduates (n=238). Overall, the nomological networks of vulnerable and grandiose narcissism were unrelated. The current results support the need for a more explicit parsing of the narcissism construct at the level of conceptualization and assessment.", "title": "" }, { "docid": "7e61652a45c490c230d368d653ef63e8", "text": "Deep embeddings answer one simple question: How similar are two images? Learning these embeddings is the bedrock of verification, zero-shot learning, and visual search. The most prominent approaches optimize a deep convolutional network with a suitable loss function, such as contrastive loss or triplet loss. While a rich line of work focuses solely on the loss functions, we show in this paper that selecting training examples plays an equally important role. We propose distance weighted sampling, which selects more informative and stable examples than traditional approaches. In addition, we show that a simple margin based loss is sufficient to outperform all other loss functions. We evaluate our approach on the Stanford Online Products, CAR196, and the CUB200-2011 datasets for image retrieval and clustering, and on the LFW dataset for face verification. Our method achieves state-of-the-art performance on all of them.", "title": "" }, { "docid": "ba302b1ee508edc2376160b3ad0a751f", "text": "During the last years terrestrial laser scanning became a standard method of data acquisition for various applications in close range domain, like industrial production, forest inventories, plant engineering and construction, car navigation and – one of the most important fields – the recording and modelling of buildings. To use laser scanning data in an adequate way, a quality assessment of the laser scanner is inevitable. In the literature some publications can be found concerning the data quality of terrestrial laser scanners. Most of these papers concentrate on the geometrical accuracy of the scanner (errors of instrument axis, range accuracy using target etc.). In this paper a special aspect of quality assessment will be discussed: the influence of different materials and object colours on the recorded measurements of a TLS. The effects on the geometric accuracy as well as on the simultaneously acquired intensity values are the topics of our investigations. A TRIMBLE GX scanner was used for several test series. 
The study of different effects refer to materials commonly used at building façades, i.e. grey scaled and coloured sheets, various species of wood, a metal plate, plasters of different particle size, light-transmissive slides and surfaces of different conditions of wetness. The tests concerning a grey wedge show a dependence on the brightness where the mean square error (MSE) decrease from black to white, and therefore, confirm previous results of other research groups. Similar results had been obtained with coloured sheets. In this context an important result is that the accuracy of measurements at night-time has proved to be much better than at day time. While different species of wood and different conditions of wetness have no significant effect on the range accuracy the study of a metal plate delivers MSE values considerably higher than the accuracy of the scanner, if the angle of incidence is approximately orthogonal. Also light-transmissive slides cause enormous MSE values. It can be concluded that high precision measurements should be carried out at night-time and preferable on bright surfaces without specular characteristics.", "title": "" }, { "docid": "4f1c2748a5f2e50ac1efe80c5bcd3a37", "text": "Recently the RoboCup@Work league emerged in the world's largest robotics competition, intended for competitors wishing to compete in the field of mobile robotics for manipulation tasks in industrial environments. This competition consists of several tasks with one reflected in this work (Basic Navigation Test). This project involves the simulation in Virtual Robot Experimentation Platform (V-REP) of the behavior of a KUKA youBot. The goal is to verify that the robots can navigate in their environment, in a standalone mode, in a robust and secure way. To achieve the proposed objectives, it was necessary to create a program in Lua and test it in simulation. This involved the study of robot kinematics and mechanics, Simultaneous Localization And Mapping (SLAM) and perception from sensors. In this work is introduced an algorithm developed for a KUKA youBot platform to perform the SLAM while reaching for the goal position, which works according to the requirements of this competition BNT. This algorithm also minimizes the errors in the built map and in the path travelled by the robot.", "title": "" }, { "docid": "fc26ebb8329c84d96a714065117dda02", "text": "Technological advances in genomics and imaging have led to an explosion of molecular and cellular profiling data from large numbers of samples. This rapid increase in biological data dimension and acquisition rate is challenging conventional analysis strategies. Modern machine learning methods, such as deep learning, promise to leverage very large data sets for finding hidden structure within them, and for making accurate predictions. In this review, we discuss applications of this new breed of analysis approaches in regulatory genomics and cellular imaging. We provide background of what deep learning is, and the settings in which it can be successfully applied to derive biological insights. 
In addition to presenting specific applications and providing tips for practical use, we also highlight possible pitfalls and limitations to guide computational biologists when and how to make the most use of this new technology.", "title": "" }, { "docid": "dd5c0dc27c0b195b1b8f2c6e6a5cea88", "text": "The increasing dependence on information networks for business operations has focused managerial attention on managing risks posed by failure of these networks. In this paper, we develop models to assess the risk of failure on the availability of an information network due to attacks that exploit software vulnerabilities. Software vulnerabilities arise from software installed on the nodes of the network. When the same software stack is installed on multiple nodes on the network, software vulnerabilities are shared among them. These shared vulnerabilities can result in correlated failure of multiple nodes resulting in longer repair times and greater loss of availability of the network. Considering positive network effects (e.g., compatibility) alone without taking the risks of correlated failure and the resulting downtime into account would lead to overinvestment in homogeneous software deployment. Exploiting characteristics unique to information networks, we present a queuing model that allows us to quantify downtime loss faced by a rm as a function of (1) investment in security technologies to avert attacks, (2) software diversification to limit the risk of correlated failure under attacks, and (3) investment in IT resources to repair failures due to attacks. The novelty of this method is that we endogenize the failure distribution and the node correlation distribution, and show how the diversification strategy and other security measures/investments may impact these two distributions, which in turn determine the security loss faced by the firm. We analyze and discuss the effectiveness of diversification strategy under different operating conditions and in the presence of changing vulnerabilities. We also take into account the benefits and costs of a diversification strategy. Our analysis provides conditions under which diversification strategy is advantageous.", "title": "" }, { "docid": "4e63f4a95d501641b80fcdf9bc0f89f6", "text": "Streptococcus milleri was isolated from the active lesions of three patients with perineal hidradenitis suppurativa. In each patient, elimination of this organism by appropriate antibiotic therapy was accompanied by marked clinical improvement.", "title": "" }, { "docid": "db597c88e71a8397b81216282d394623", "text": "In many real applications, graph data is subject to uncertainties due to incompleteness and imprecision of data. Mining such uncertain graph data is semantically different from and computationally more challenging than mining conventional exact graph data. This paper investigates the problem of mining uncertain graph data and especially focuses on mining frequent subgraph patterns on an uncertain graph database. A novel model of uncertain graphs is presented, and the frequent subgraph pattern mining problem is formalized by introducing a new measure, called expected support. This problem is proved to be NP-hard. An approximate mining algorithm is proposed to find a set of approximately frequent subgraph patterns by allowing an error tolerance on expected supports of discovered subgraph patterns. 
The algorithm uses efficient methods to determine whether a subgraph pattern can be output or not and a new pruning method to reduce the complexity of examining subgraph patterns. Analytical and experimental results show that the algorithm is very efficient, accurate, and scalable for large uncertain graph databases. To the best of our knowledge, this paper is the first one to investigate the problem of mining frequent subgraph patterns from uncertain graph data.", "title": "" } ]
scidocsrr
0693209386b1531a62d4e5726c021392
Loughborough University Institutional Repository Understanding Generation Y and their use of social media : a review and research agenda
[ { "docid": "b4880ddb59730f465f585f3686d1d2b1", "text": "The authors study the effect of word-of-mouth (WOM) marketing on member growth at an Internet social networking site and compare it with traditional marketing vehicles. Because social network sites record the electronic invitations sent out by existing members, outbound WOM may be precisely tracked. WOM, along with traditional marketing, can then be linked to the number of new members subsequently joining the site (signups). Due to the endogeneity among WOM, new signups, and traditional marketing activity, the authors employ a Vector Autoregression (VAR) modeling approach. Estimates from the VAR model show that word-of-mouth referrals have substantially longer carryover effects than traditional marketing actions. The long-run elasticity of signups with respect to WOM is estimated to be 0.53 (substantially larger than the average advertising elasticities reported in the literature) and the WOM elasticity is about 20 times higher than the elasticity for marketing events, and 30 times that of media appearances. Based on revenue from advertising impressions served to a new member, the monetary value of a WOM referral can be calculated; this yields an upper bound estimate for the financial incentives the firm might offer to stimulate word-of-mouth.", "title": "" } ]
[ { "docid": "fe397e4124ef517268aaabd999bc02c4", "text": "A new frequency-reconfigurable quasi-Yagi dipole antenna is presented. It consists of a driven dipole element with two varactors in two arms, a director with an additional varactor, a truncated ground plane reflector, a microstrip-to-coplanar-stripline (CPS) transition, and a novel biasing circuit. The effective electrical length of the director element and that of the driven arms are adjusted together by changing the biasing voltages. A 35% continuously frequency-tuning bandwidth, from 1.80 to 2.45 GHz, is achieved. This covers a number of wireless communication systems, including 3G UMTS, US WCS, and WLAN. The length-adjustable director allows the endfire pattern with relatively high gain to be maintained over the entire tuning bandwidth. Measured results show that the gain varies from 5.6 to 7.6 dBi and the front-to-back ratio is better than 10 dB. The H-plane cross polarization is below -15 dB, and that in the E-plane is below -20 dB.", "title": "" }, { "docid": "7e1c0505e40212ef0e8748229654169f", "text": "This article addresses the concept of quality risk in outsourcing. Recent trends in outsourcing extend a contract manufacturer’s (CM’s) responsibility to several functional areas, such as research and development and design in addition to manufacturing. This trend enables an original equipment manufacturer (OEM) to focus on sales and pricing of its product. However, increasing CM responsibilities also suggest that the OEM’s product quality is mainly determined by its CM. We identify two factors that cause quality risk in this outsourcing relationship. First, the CM and the OEM may not be able to contract on quality; second, the OEM may not know the cost of quality to the CM. We characterize the effects of these two quality risk factors on the firms’ profits and on the resulting product quality. We determine how the OEM’s pricing strategy affects quality risk. We show, for example, that the effect of noncontractible quality is higher than the effect of private quality cost information when the OEM sets the sales price after observing the product’s quality. We also show that committing to a sales price mitigates the adverse effect of quality risk. To obtain these results, we develop and analyze a three-stage decision model. This model is also used to understand the impact of recent information technologies on profits and product quality. For example, we provide a decision tree that an OEM can use in deciding whether to invest in an enterprise-wide quality management system that enables accounting of quality-related activities across the supply chain. © 2009 Wiley Periodicals, Inc. Naval Research Logistics 56: 669–685, 2009", "title": "" }, { "docid": "46072702edbe5177e48510fe37b77943", "text": "Due to the explosive increase of online images, content-based image retrieval has gained a lot of attention. The success of deep learning techniques such as convolutional neural networks have motivated us to explore its applications in our context. The main contribution of our work is a novel end-to-end supervised learning framework that learns probability-based semantic-level similarity and feature-level similarity simultaneously. The main advantage of our novel hashing scheme that it is able to reduce the computational cost of retrieval significantly at the state-of-the-art efficiency level. 
We report on comprehensive experiments using public available datasets such as Oxford, Holidays and ImageNet 2012 retrieval datasets.", "title": "" }, { "docid": "7d0020ff1a7500df1458ddfd568db7b4", "text": "In this position paper, we address the problems of automated road congestion detection and alerting systems and their security properties. We review different theoretical adaptive road traffic control approaches, and three widely deployed adaptive traffic control systems (ATCSs), namely, SCATS, SCOOT and InSync. We then discuss some related research questions, and the corresponding possible approaches, as well as the adversary model and potential attack scenarios. Two theoretical concepts of automated road congestion alarm systems (including system architecture, communication protocol, and algorithms) are proposed on top of ATCSs, such as SCATS, SCOOT and InSync, by incorporating secure wireless vehicle-to-infrastructure (V2I) communications. Finally, the security properties of the proposed system have been discussed and analysed using the ProVerif protocol verification tool.", "title": "" }, { "docid": "0882fc46d918957e73d0381420277bdc", "text": "The term ‘resource use efficiency in agriculture’ may be broadly defined to include the concepts of technical efficiency, allocative efficiency and environmental efficiency. An efficient farmer allocates his land, labour, water and other resources in an optimal manner, so as to maximise his income, at least cost, on sustainable basis. However, there are countless studies showing that farmers often use their resources sub-optimally. While some farmers may attain maximum physical yield per unit of land at a high cost, some others achieve maximum profit per unit of inputs used. Also in the process of achieving maximum yield and returns, some farmers may ignore the environmentally adverse consequences, if any, of their resource use intensity. Logically all enterprising farmers would try to maximise their farm returns by allocating resources in an efficient manner. But as resources (both qualitatively and quantitatively) and managerial efficiency of different farmers vary widely, the net returns per unit of inputs used also vary significantly from farm to farm. Also a farmer’s access to technology, credit, market and other infrastructure and policy support, coupled with risk perception and risk management capacity under erratic weather and price situations would determine his farm efficiency. Moreover, a farmer knowingly or unknowingly may over-exploit his land and water resources for maximising farm income in the short run, thereby resulting in soil and water degradation and rapid depletion of ground water, and also posing a problem of sustainability of agriculture in the long run. In fact, soil degradation, depletion of groundwater and water pollution due to farmers’ managerial inefficiency or otherwise, have a social cost, while farmers who forego certain agricultural practices which cause any such sustainability problem may have a high opportunity cost. Furthermore, a farmer may not be often either fully aware or properly guided and aided for alternative, albeit best possible uses of his scarce resources like land and water. Thus, there are economic as well as environmental aspects of resource use efficiency. 
In addition, from the point of view of public exchequer, the resource use efficiency would mean that public investment, subsidies and credit for agriculture are", "title": "" }, { "docid": "f611ccffbe10acb7dcbd6cb8f7ffaeaa", "text": "We study the problem of single-image depth estimation for images in the wild. We collect human annotated surface normals and use them to help train a neural network that directly predicts pixel-wise depth. We propose two novel loss functions for training with surface normal annotations. Experiments on NYU Depth, KITTI, and our own dataset demonstrate that our approach can significantly improve the quality of depth estimation in the wild.", "title": "" }, { "docid": "6cf2ffb0d541320b1ad04dc3b9e1c9a4", "text": "Prediction of potential fraudulent activities may prevent both the stakeholders and the appropriate regulatory authorities of national or international level from being deceived. The objective difficulties on collecting adequate data that are obsessed by completeness affects the reliability of the most supervised Machine Learning methods. This work examines the effectiveness of forecasting fraudulent financial statements using semi-supervised classification techniques (SSC) that require just a few labeled examples for achieving robust learning behaviors mining useful data patterns from a larger pool of unlabeled examples. Based on data extracted from Greek firms, a number of comparisons between supervised and semi-supervised algorithms has been conducted. According to the produced results, the later algorithms are favored being examined over several scenarios of different Labeled Ratio (R) values.", "title": "" }, { "docid": "b4ed57258b85ab4d81d5071fc7ad2cc9", "text": "We present LEAR (Lexical Entailment AttractRepel), a novel post-processing method that transforms any input word vector space to emphasise the asymmetric relation of lexical entailment (LE), also known as the IS-A or hyponymy-hypernymy relation. By injecting external linguistic constraints (e.g., WordNet links) into the initial vector space, the LE specialisation procedure brings true hyponymyhypernymy pairs closer together in the transformed Euclidean space. The proposed asymmetric distance measure adjusts the norms of word vectors to reflect the actual WordNetstyle hierarchy of concepts. Simultaneously, a joint objective enforces semantic similarity using the symmetric cosine distance, yielding a vector space specialised for both lexical relations at once. LEAR specialisation achieves state-of-the-art performance in the tasks of hypernymy directionality, hypernymy detection, and graded lexical entailment, demonstrating the effectiveness and robustness of the proposed asymmetric specialisation model.", "title": "" }, { "docid": "04756d4dfc34215c8acb895ecfcfb406", "text": "The author describes five separate projects he has undertaken in the intersection of computer science and Canadian income tax law. 
They are:A computer-assisted instruction (CAI) course for teaching income tax, programmed using conventional CAI techniques;\nA “document modeling” computer program for generating the documentation for a tax-based transaction and advising the lawyer-user as to what decisions should be made and what the tax effects will be, programmed in a conventional language;\nA prototype expert system for determining the income tax effects of transactions and tax-defined relationships, based on a PROLOG representation of the rules of the Income Tax Act;\nAn intelligent CAI (ICAI) system for generating infinite numbers of randomized quiz questions for students, computing the answers, and matching wrong answers to particular student errors, based on a PROLOG representation of the rules of the Income Tax Act; and\nA Hypercard stack for providing information about income tax, enabling both education and practical research to follow the user's needs path.\n\nThe author shows that non-AI approaches are a way to produce packages quickly and efficiently. Their primary disadvantage is the massive rewriting required when the tax law changes. AI approaches based on PROLOG, on the other hand, are harder to develop to a practical level but will be easier to audit and maintain. The relationship between expert systems and CAI is discussed.", "title": "" }, { "docid": "9500dfc92149c5a808cec89b140fc0c3", "text": "We present a new approach to the geometric alignment of a point cloud to a surface and to related registration problems. The standard algorithm is the familiar ICP algorithm. Here we provide an alternative concept which relies on instantaneous kinematics and on the geometry of the squared distance function of a surface. The proposed algorithm exhibits faster convergence than ICP; this is supported both by results of a local convergence analysis and by experiments.", "title": "" }, { "docid": "a2258145e9366bfbf515b3949b2d70fa", "text": "Affect intensity (AI) may reconcile 2 seemingly paradoxical findings: Women report more negative affect than men but equal happiness as men. AI describes people's varying response intensity to identical emotional stimuli. A college sample of 66 women and 34 men was assessed on both positive and negative affect using 4 measurement methods: self-report, peer report, daily report, and memory performance. A principal-components analysis revealed an affect balance component and an AI component. Multimeasure affect balance and AI scores were created, and t tests were computed that showed women to be as happy as and more intense than men. Gender accounted for less than 1% of the variance in happiness but over 13% in AI. Thus, depression findings of more negative affect in women do not conflict with well-being findings of equal happiness across gender. Generally, women's more intense positive emotions balance their higher negative affect.", "title": "" }, { "docid": "47505c95f8a3cf136b3b5a76847990fc", "text": "We present a hybrid algorithm to compute the convex hull of points in three or higher dimensional spaces. Our formulation uses a GPU-based interior point filter to cull away many of the points that do not lie on the boundary. The convex hull of remaining points is computed on a CPU. The GPU-based filter proceeds in an incremental manner and computes a pseudo-hull that is contained inside the convex hull of the original points. The pseudo-hull computation involves only localized operations and maps well to GPU architectures. 
Furthermore, the underlying approach extends to high dimensional point sets and deforming points. In practice, our culling filter can reduce the number of candidate points by two orders of magnitude. We have implemented the hybrid algorithm on commodity GPUs, and evaluated its performance on several large point sets. In practice, the GPU-based filtering algorithm can cull up to 85M interior points per second on an NVIDIA GeForce GTX 580 and the hybrid algorithm improves the overall performance of convex hull computation by 10 − 27 times (for static point sets) and 22 − 46 times (for deforming point sets).", "title": "" }, { "docid": "a83ba31bdf54c9dec09788bfb1c972fc", "text": "In 1999, ISPOR formed the Quality of Life Special Interest group (QoL-SIG)--Translation and Cultural Adaptation group (TCA group) to stimulate discussion on and create guidelines and standards for the translation and cultural adaptation of patient-reported outcome (PRO) measures. After identifying a general lack of consistency in current methods and published guidelines, the TCA group saw a need to develop a holistic perspective that synthesized the full spectrum of published methods. This process resulted in the development of Translation and Cultural Adaptation of Patient Reported Outcomes Measures--Principles of Good Practice (PGP), a report on current methods, and an appraisal of their strengths and weaknesses. The TCA Group undertook a review of evidence from current practice, a review of the literature and existing guidelines, and consideration of the issues facing the pharmaceutical industry, regulators, and the broader outcomes research community. Each approach to translation and cultural adaptation was considered systematically in terms of rationale, components, key actors, and the potential benefits and risks associated with each approach and step. The results of this review were subjected to discussion and challenge within the TCA group, as well as consultation with the outcomes research community at large. Through this review, a consensus emerged on a broad approach, along with a detailed critique of the strengths and weaknesses of the differing methodologies. The results of this review are set out as \"Translation and Cultural Adaptation of Patient Reported Outcomes Measures--Principles of Good Practice\" and are reported in this document.", "title": "" }, { "docid": "ba65c99adc34e05cf0cd1b5618a21826", "text": "We investigate a family of bugs in blockchain-based smart contracts, which we call event-ordering (or EO) bugs. These bugs are intimately related to the dynamic ordering of contract events, i.e., calls of its functions on the blockchain, and enable potential exploits of millions of USD worth of Ether. Known examples of such bugs and prior techniques to detect them have been restricted to a small number of event orderings, typicall 1 or 2. Our work provides a new formulation of this general class of EO bugs as finding concurrency properties arising in long permutations of such events. The technical challenge in detecting our formulation of EO bugs is the inherent combinatorial blowup in path and state space analysis, even for simple contracts. We propose the first use of partial-order reduction techniques, using happen-before relations extracted automatically for contracts, along with several other optimizations built on a dynamic symbolic execution technique. We build an automatic tool called ETHRACER that requires no hints from users and runs directly on Ethereum bytecode. 
It flags 7-11% of over ten thousand contracts analyzed in roughly 18.5 minutes per contract, providing compact event traces that human analysts can run as witnesses. These witnesses are so compact that confirmations require only a few minutes of human effort. Half of the flagged contracts have subtle EO bugs, including in ERC-20 contracts that carry hundreds of millions of dollars worth of Ether. Thus, ETHRACER is effective at detecting a subtle yet dangerous class of bugs which existing tools miss.", "title": "" }, { "docid": "70c6da9da15ad40b4f64386b890ccf51", "text": "In this paper, we describe a positioning control for a SCARA robot using a recurrent neural network. The simultaneous perturbation optimization method is used for the learning rule of the recurrent neural network. Then the recurrent neural network learns inverse dynamics of the SCARA robot. We present details of the control scheme using the simultaneous perturbation. Moreover, we consider an example for two target positions using an actual SCARA robot. The result is shown.", "title": "" }, { "docid": "0ec0b6797069ee5bd737ea787cba43ef", "text": "Evaluation of retrieval performance is a crucial problem in content-based image retrieval (CBIR). Many different methods for measuring the performance of a system have been created and used by researchers. This article discusses the advantages and shortcomings of the performance measures currently used. Problems such as a common image database for performance comparisons and a means of getting relevance judgments (or ground truth) for queries are explained. The relationship between CBIR and information retrieval (IR) is made clear, since IR researchers have decades of experience with the evaluation problem. Many of their solutions can be used for CBIR, despite the differences between the fields. Several methods used in text retrieval are explained. Proposals for performance measures and means of developing a standard test suite for CBIR, similar to that used in IR at the annual Text REtrieval Conference (TREC), are presented. MULLER, Henning, et al. Performance Evaluation in Content-Based Image Retrieval: Overview and Proposals. Genève : 1999", "title": "" }, { "docid": "c26e9f486621e37d66bf0925d8ff2a3e", "text": "We report the first two Malaysian children with partial deletion 9p syndrome, a well delineated but rare clinical entity. Both patients had trigonocephaly, arching eyebrows, anteverted nares, long philtrum, abnormal ear lobules, congenital heart lesions and digital anomalies. In addition, the first patient had underdeveloped female genitalia and anterior anus. The second patient had hypocalcaemia and high arched palate and was initially diagnosed with DiGeorge syndrome. Chromosomal analysis revealed a partial deletion at the short arm of chromosome 9. Karyotyping should be performed in patients with craniostenosis and multiple abnormalities as an early syndromic diagnosis confers prognostic, counselling and management implications.", "title": "" }, { "docid": "c9c98e50a49bbc781047dc425a2d6fa1", "text": "Understanding wound healing today involves much more than simply stating that there are three phases: \"inflammation, proliferation, and maturation.\" Wound healing is a complex series of reactions and interactions among cells and \"mediators.\" Each year, new mediators are discovered and our understanding of inflammatory mediators and cellular interactions grows. 
This article will attempt to provide a concise report of the current literature on wound healing by first reviewing the phases of wound healing followed by \"the players\" of wound healing: inflammatory mediators (cytokines, growth factors, proteases, eicosanoids, kinins, and more), nitric oxide, and the cellular elements. The discussion will end with a pictorial essay summarizing the wound-healing process.", "title": "" }, { "docid": "ceedf70c92099fc8612a38f91f2c9507", "text": "Recent work has demonstrated the value of social media monitoring for health surveillance (e.g., tracking influenza or depression rates). It is an open question whether such data can be used to make causal inferences (e.g., determining which activities lead to increased depression rates). Even in traditional, restricted domains, estimating causal effects from observational data is highly susceptible to confounding bias. In this work, we estimate the effect of exercise on mental health from Twitter, relying on statistical matching methods to reduce confounding bias. We train a text classifier to estimate the volume of a user’s tweets expressing anxiety, depression, or anger, then compare two groups: those who exercise regularly (identified by their use of physical activity trackers like Nike+), and a matched control group. We find that those who exercise regularly have significantly fewer tweets expressing depression or anxiety; there is no significant difference in rates of tweets expressing anger. We additionally perform a sensitivity analysis to investigate how the many experimental design choices in such a study impact the final conclusions, including the quality of the classifier and the construction of the control group.", "title": "" }, { "docid": "fd32bf580b316634e44a8c37adfab2eb", "text": "In a previous paper we reported the successful use of graph coloring techniques for doing global register allocation in an experimental PL/I optimizing compiler. When the compiler cannot color the register conflict graph with a number of colors equal to the number of available machine registers, it must add code to spill and reload registers to and from storage. Previously the compiler produced spill code whose quality sometimes left much to be desired, and the ad hoc techniques used took considerable amounts of compile time. We have now discovered how to extend the graph coloring approach so that it naturally solves the spilling problem. Spill decisions are now made on the basis of the register conflict graph and cost estimates of the value of keeping the result of a computation in a register rather than in storage. This new approach produces better object code and takes much less compile time.", "title": "" } ]
scidocsrr
794d168e82a8e468067707d0e2c62f40
Signed networks in social media
[ { "docid": "31a1a5ce4c9a8bc09cbecb396164ceb4", "text": "In trying out this hypothesis we shall understand by attitude the positive or negative relationship of a person p to another person o or to an impersonal entity x which may be a situation, an event, an idea, or a thing, etc. Examples are: to like, to love, to esteem, to value, and their opposites. A positive relation of this kind will be written L, a negative one ~L. Thus, pLo means p likes, loves, or values o, or, expressed differently, o is positive for p.", "title": "" } ]
[ { "docid": "4d4219d8e4fd1aa86724f3561aea414b", "text": "Trajectory search has long been an attractive and challenging topic which blooms various interesting applications in spatial-temporal databases. In this work, we study a new problem of searching trajectories by locations, in which context the query is only a small set of locations with or without an order specified, while the target is to find the k Best-Connected Trajectories (k-BCT) from a database such that the k-BCT best connect the designated locations geographically. Different from the conventional trajectory search that looks for similar trajectories w.r.t. shape or other criteria by using a sample query trajectory, we focus on the goodness of connection provided by a trajectory to the specified query locations. This new query can benefit users in many novel applications such as trip planning.\n In our work, we firstly define a new similarity function for measuring how well a trajectory connects the query locations, with both spatial distance and order constraint being considered. Upon the observation that the number of query locations is normally small (e.g. 10 or less) since it is impractical for a user to input too many locations, we analyze the feasibility of using a general-purpose spatial index to achieve efficient k-BCT search, based on a simple Incremental k-NN based Algorithm (IKNN). The IKNN effectively prunes and refines trajectories by using the devised lower bound and upper bound of similarity. Our contributions mainly lie in adapting the best-first and depth-first k-NN algorithms to the basic IKNN properly, and more importantly ensuring the efficiency in both search effort and memory usage. An in-depth study on the adaption and its efficiency is provided. Further optimization is also presented to accelerate the IKNN algorithm. Finally, we verify the efficiency of the algorithm by extensive experiments.", "title": "" }, { "docid": "a65d1881f5869f35844064d38b684ac8", "text": "Skilled artists, using traditional media or modern computer painting tools, can create a variety of expressive styles that are very appealing in still images, but have been unsuitable for animation. The key difficulty is that existing techniques lack adequate temporal coherence to animate these styles effectively. Here we augment the range of practical animation styles by extending the guided texture synthesis method of Image Analogies [Hertzmann et al. 2001] to create temporally coherent animation sequences. To make the method art directable, we allow artists to paint portions of keyframes that are used as constraints. The in-betweens calculated by our method maintain stylistic continuity and yet change no more than necessary over time.", "title": "" }, { "docid": "350f7694198d1b2c0a2c8cc1b75fc3c2", "text": "We present a methodology, called fast repetition rate (FRR) fluorescence, that measures the functional absorption cross-section (sigmaPS II) of Photosystem II (PS II), energy transfer between PS II units (p), photochemical and nonphotochemical quenching of chlorophyll fluorescence, and the kinetics of electron transfer on the acceptor side of PS II. The FRR fluorescence technique applies a sequence of subsaturating excitation pulses ('flashlets') at microsecond intervals to induce fluorescence transients. This approach is extremely flexible and allows the generation of both single-turnover (ST) and multiple-turnover (MT) flashes. 
Using a combination of ST and MT flashes, we investigated the effect of excitation protocols on the measured fluorescence parameters. The maximum fluorescence yield induced by an ST flash applied shortly (10 μs to 5 ms) following an MT flash increased to a level comparable to that of an MT flash, while the functional absorption cross-section decreased by about 40%. We interpret this phenomenon as evidence that an MT flash induces an increase in the fluorescence-rate constant, concomitant with a decrease in the photosynthetic-rate constant in PS II reaction centers. The simultaneous measurements of sigmaPS II, p, and the kinetics of Q-A reoxidation, which can be derived only from a combination of ST and MT flash fluorescence transients, permits robust characterization of the processes of photosynthetic energy-conversion.", "title": "" }, { "docid": "2f83b2ef8f71c56069304b0962074edc", "text": "Abstract: Printed antennas are becoming one of the most popular designs in personal wireless communications systems. In this paper, the design of a novel tapered meander line antenna is presented. The design analysis and characterization of the antenna is performed using the finite difference time domain technique and experimental verifications are performed to ensure the effectiveness of the numerical model. The new design features an operating frequency of 2.55 GHz with a 230 MHz bandwidth, which supports future generations of mobile communication systems.", "title": "" }, { "docid": "5d851687f9a69db7419ff054623f03d8", "text": "Attention mechanisms are a design trend of deep neural networks that stands out in various computer vision tasks. Recently, some works have attempted to apply attention mechanisms to single image super-resolution (SR) tasks. However, they apply the mechanisms to SR in the same or similar ways used for high-level computer vision problems without much consideration of the different nature between SR and other problems. In this paper, we propose a new attention method, which is composed of new channelwise and spatial attention mechanisms optimized for SR and a new fused attention to combine them. Based on this, we propose a new residual attention module (RAM) and a SR network using RAM (SRRAM). We provide in-depth experimental analysis of different attention mechanisms in SR. It is shown that the proposed method can construct both deep and lightweight SR networks showing improved performance in comparison to existing state-of-the-art methods.", "title": "" }, { "docid": "8eb96feea999ce77f2b56b7941af2587", "text": "The term cyber security is often used interchangeably with the term information security. This paper argues that, although there is a substantial overlap between cyber security and information security, these two concepts are not totally analogous. Moreover, the paper posits that cyber security goes beyond the boundaries of traditional information security to include not only the protection of information resources, but also that of other assets, including the person him/herself. In information security, reference to the human factor usually relates to the role(s) of humans in the security process. In cyber security this factor has an additional dimension, namely, the humans as potential targets of cyber attacks or even unknowingly participating in a cyber attack. This additional dimension has ethical implications for society as a whole, since the protection of certain vulnerable groups, for example children, could be seen as a societal responsibility.
© 2013 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "0cd1f01d1b2a5afd8c6eba13ef5082fa", "text": "Automatic differentiation—the mechanical transformation of numeric computer programs to calculate derivatives efficiently and accurately—dates to the origin of the computer age. Reverse mode automatic differentiation both antedates and generalizes the method of backwards propagation of errors used in machine learning. Despite this, practitioners in a variety of fields, including machine learning, have been little influenced by automatic differentiation, and make scant use of available tools. Here we review the technique of automatic differentiation, describe its two main modes, and explain how it can benefit machine learning practitioners. To reach the widest possible audience our treatment assumes only elementary differential calculus, and does not assume any knowledge of linear algebra.", "title": "" }, { "docid": "1c4e1feed1509e0a003dca23ad3a902c", "text": "With an expansive and ubiquitously available gold mine of educational data, Massive Open Online courses (MOOCs) have become the an important foci of learning analytics research. The hope is that this new surge of development will bring the vision of equitable access to lifelong learning opportunities within practical reach. MOOCs offer many valuable learning experiences to students, from video lectures, readings, assignments and exams, to opportunities to connect and collaborate with others through threaded discussion forums and other Web 2.0 technologies. Nevertheless, despite all this potential, MOOCs have so far failed to produce evidence that this potential is being realized in the current instantiation of MOOCs. In this work, we primarily explore video lecture interaction in Massive Open Online Courses (MOOCs), which is central to student learning experience on these educational platforms. As a research contribution, we operationalize video lecture clickstreams of students into behavioral actions, and construct a quantitative information processing index, that can aid instructors to better understand MOOC hurdles and reason about unsatisfactory learning outcomes. Our results illuminate the effectiveness of developing such a metric inspired by cognitive psychology, towards answering critical questions regarding students’ engagement, their future click interactions and participation trajectories that lead to in-video dropouts. We leverage recurring click behaviors to differentiate distinct video watching profiles for students in MOOCs. Additionally, we discuss about prediction of complete course dropouts, incorporating diverse perspectives from statistics and machine learning, to offer a more nuanced view into how the second generation of MOOCs be benefited, if course instructors were to better comprehend factors that lead to student attrition. Implications for research and practice are discussed.", "title": "" }, { "docid": "0d30cfe8755f146ded936aab55cb80d3", "text": "In this study, we investigated a pattern-recognition technique based on an artificial neural network (ANN), which is called a massive training artificial neural network (MTANN), for reduction of false positives in computerized detection of lung nodules in low-dose computed tomography (CT) images. The MTANN consists of a modified multilayer ANN, which is capable of operating on image data directly.
The MTANN is trained by use of a large number of subregions extracted from input images together with the teacher images containing the distribution for the \"likelihood of being a nodule.\" The output image is obtained by scanning an input image with the MTANN. The distinction between a nodule and a non-nodule is made by use of a score which is defined from the output image of the trained MTANN. In order to eliminate various types of non-nodules, we extended the capability of a single MTANN, and developed a multiple MTANN (Multi-MTANN). The Multi-MTANN consists of plural MTANNs that are arranged in parallel. Each MTANN is trained by using the same nodules, but with a different type of non-nodule. Each MTANN acts as an expert for a specific type of non-nodule, e.g., five different MTANNs were trained to distinguish nodules from various-sized vessels; four other MTANNs were applied to eliminate some other opacities. The outputs of the MTANNs were combined by using the logical AND operation such that each of the trained MTANNs eliminated none of the nodules, but removed the specific type of non-nodule with which the MTANN was trained, and thus removed various types of non-nodules. The Multi-MTANN consisting of nine MTANNs was trained with 10 typical nodules and 10 non-nodules representing each of nine different non-nodule types (90 training non-nodules overall) in a training set. The trained Multi-MTANN was applied to the reduction of false positives reported by our current computerized scheme for lung nodule detection based on a database of 63 low-dose CT scans (1765 sections), which contained 71 confirmed nodules including 66 biopsy-confirmed primary cancers, from a lung cancer screening program. The Multi-MTANN was applied to 58 true positives (nodules from 54 patients) and 1726 false positives (non-nodules) reported by our current scheme in a validation test; these were different from the training set. The results indicated that 83% (1424/1726) of non-nodules were removed with a reduction of one true positive (nodule), i.e., a classification sensitivity of 98.3% (57 of 58 nodules). By using the Multi-MTANN, the false-positive rate of our current scheme was improved from 0.98 to 0.18 false positives per section (from 27.4 to 4.8 per patient) at an overall sensitivity of 80.3% (57/71).", "title": "" }, { "docid": "e4c33ca67526cb083cae1543e5564127", "text": "Given e-commerce scenarios that user profiles are invisible, session-based recommendation is proposed to generate recommendation results from short sessions. Previous work only considers the user's sequential behavior in the current session, whereas the user's main purpose in the current session is not emphasized. In this paper, we propose a novel neural networks framework, i.e., Neural Attentive Recommendation Machine (NARM), to tackle this problem. Specifically, we explore a hybrid encoder with an attention mechanism to model the user's sequential behavior and capture the user's main purpose in the current session, which are combined as a unified session representation later. We then compute the recommendation scores for each candidate item with a bi-linear matching scheme based on this unified session representation. We train NARM by jointly learning the item and session representations as well as their matchings. We carried out extensive experiments on two benchmark datasets. Our experimental results show that NARM outperforms state-of-the-art baselines on both datasets. 
Furthermore, we also find that NARM achieves a significant improvement on long sessions, which demonstrates its advantages in modeling the user's sequential behavior and main purpose simultaneously.", "title": "" }, { "docid": "9464f2e308b5c8ab1f2fac1c008042c0", "text": "Data governance has become a significant approach that drives decision making in public organisations. Thus, the loss of data governance is a concern to decision makers, acting as a barrier to achieving their business plans in many countries and also influencing both operational and strategic decisions. The adoption of cloud computing is a recent trend in public sector organisations, that are looking to move their data into the cloud environment. The literature shows that data governance is one of the main concerns of decision makers who are considering adopting cloud computing; it also shows that data governance in general and for cloud computing in particular is still being researched and requires more attention from researchers. However, in the absence of a cloud data governance framework, this paper seeks to develop a conceptual framework for cloud data governance-driven decision making in the public sector.", "title": "" }, { "docid": "96af2e34acf9f1e9c0c57cc24795d0f9", "text": "Poker games provide a useful testbed for modern Artificial Intelligence techniques. Unlike many classical game domains such as chess and checkers, poker includes elements of imperfect information, stochastic events, and one or more adversarial agents to interact with. Furthermore, in poker it is possible to win or lose by varying degrees. Therefore, it can be advantageous to adapt ones’ strategy to exploit a weak opponent. A poker agent must address these challenges, acting in uncertain environments and exploiting other agents, in order to be highly successful. Arguably, poker games more closely resemble many real world problems than games with perfect information. In this brief paper, we outline Polaris, a Texas Hold’em poker program. Polaris recently defeated top human professionals at the Man vs. Machine Poker Championship and it is currently the reigning AAAI Computer Poker Competition winner in the limit equilibrium and no-limit events.", "title": "" }, { "docid": "fcbb5b1adf14b443ef0d4a6f939140fe", "text": "In this paper we make the case for IoT edge offloading, which strives to exploit the resources on edge computing devices by offloading fine-grained computation tasks from the cloud closer to the users and data generators (i.e., IoT devices). The key motive is to enhance performance, security and privacy for IoT services. Our proposal bridges the gap between cloud computing and IoT by applying a divide and conquer approach over the multi-level (cloud, edge and IoT) information pipeline. To validate the design of IoT edge offloading, we developed a unikernel-based prototype and evaluated the system under various hardware and network conditions. Our experimentation has shown promising results and revealed the limitation of existing IoT hardware and virtualization platforms, shedding light on future research of edge computing and IoT.", "title": "" }, { "docid": "11a1c92620d58100194b735bfc18c695", "text": "Stabilization by static output feedback (SOF) is a long-standing open problem in control: given an n by n matrix A and rectangular matrices B and C, find a p by q matrix K such that A + BKC is stable. 
Low-order controller design is a practically important problem that can be cast in the same framework, with (p+k)(q+k) design parameters instead of pq, where k is the order of the controller, and k << n. Robust stabilization further demands stability in the presence of perturbation and satisfactory transient as well as asymptotic system response. We formulate two related nonsmooth, nonconvex optimization problems over K, respectively with the following objectives: minimization of the ε-pseudospectral abscissa of A+BKC, for a fixed ε ≥ 0, and maximization of the complex stability radius of A + BKC. Finding global optimizers of these functions is hard, so we use a recently developed gradient sampling method that approximates local optimizers. For modest-sized systems, local optimization can be carried out from a large number of starting points with no difficulty. The best local optimizers may then be investigated as candidate solutions to the static output feedback or low-order controller design problem. We show results for two problems published in the control literature. The first is a turbo-generator example that allows us to show how different choices of the optimization objective lead to stabilization with qualitatively different properties, conveniently visualized by pseudospectral plots. The second is a well known model of a Boeing 767 aircraft at a flutter condition. For this problem, we are not aware of any SOF stabilizing K published in the literature. Our method was not only able to find an SOF stabilizing K, but also to locally optimize the complex stability radius of A + BKC. We also found locally optimizing order–1 and order–2 controllers for this problem. All optimizers are visualized using pseudospectral plots.", "title": "" }, { "docid": "080e7880623a09494652fd578802c156", "text": "Whole-cell biosensors are a good alternative to enzyme-based biosensors since they offer the benefits of low cost and improved stability. In recent years, live cells have been employed as biosensors for a wide range of targets. In this review, we will focus on the use of microorganisms that are genetically modified with the desirable outputs in order to improve the biosensor performance. Different methodologies based on genetic/protein engineering and synthetic biology to construct microorganisms with the required signal outputs, sensitivity, and selectivity will be discussed.", "title": "" }, { "docid": "8724a0d439736a419835c1527f01fe43", "text": "Shuffled frog-leaping algorithm (SFLA) is a new memetic meta-heuristic algorithm with efficient mathematical function and global search capability. Traveling salesman problem (TSP) is a complex combinatorial optimization problem, which is typically used as benchmark for testing the effectiveness as well as the efficiency of a newly proposed optimization algorithm. When applying the shuffled frog-leaping algorithm in TSP, memeplex and submemeplex are built and the evolution of the algorithm, especially the local exploration in submemeplex is carefully adapted based on the prototype SFLA. Experimental results show that the shuffled frog leaping algorithm is efficient for small-scale TSP. Particularly for TSP with 51 cities, the algorithm manages to find six tours which are shorter than the optimal tour provided by TSPLIB.
The shortest tour length is 428.87 instead of 429.98 which can be found cited elsewhere.", "title": "" }, { "docid": "827396df94e0bca08cee7e4d673044ef", "text": "Localization in Wireless Sensor Networks (WSNs) is regarded as an emerging technology for numerous cyberphysical system applications, which equips wireless sensors with the capability to report data that is geographically meaningful for location based services and applications. However, due to the increasingly pervasive existence of smart sensors in WSN, a single localization technique that affects the overall performance is not sufficient for all applications. Thus, there have been many significant advances on localization techniques in WSNs in the past few years. The main goal in this paper is to present the state-of-the-art research results and approaches proposed for localization in WSNs. Specifically, we present the recent advances on localization techniques in WSNs by considering a wide variety of factors and categorizing them in terms of data processing (centralized vs. distributed), transmission range (range free vs. range based), mobility (static vs. mobile), operating environments (indoor vs. outdoor), node density (sparse vs dense), routing, algorithms, etc. The recent localization techniques in WSNs are also summarized in the form of tables. With this paper, readers can have a more thorough understanding of localization in sensor networks, as well as research trends and future research directions in this area.", "title": "" }, { "docid": "fb7fc0398c951a584726a31ae307c53c", "text": "In this paper, we use a advanced method called Faster R-CNN to detect traffic signs. This new method represents the highest level in object recognition, which don't need to extract image feature manually anymore and can segment image to get candidate region proposals automatically. Our experiment is based on a traffic sign detection competition in 2016 by CCF and UISEE company. The mAP(mean average precision) value of the result is 0.3449 that means Faster R-CNN can indeed be applied in this field. Even though the experiment did not achieve the best results, we explore a new method in the area of the traffic signs detection. We believe that we can get a better achievement in the future.", "title": "" }, { "docid": "45885c7c86a05d2ba3979b689f7ce5c8", "text": "Existing Markov Chain Monte Carlo (MCMC) methods are either based on generalpurpose and domain-agnostic schemes, which can lead to slow convergence, or problem-specific proposals hand-crafted by an expert. In this paper, we propose ANICE-MC, a novel method to automatically design efficient Markov chain kernels tailored for a specific domain. First, we propose an efficient likelihood-free adversarial training method to train a Markov chain and mimic a given data distribution. Then, we leverage flexible volume preserving flows to obtain parametric kernels for MCMC. Using a bootstrap approach, we show how to train efficient Markov chains to sample from a prescribed posterior distribution by iteratively improving the quality of both the model and the samples. 
Empirical results demonstrate that A-NICE-MC combines the strong guarantees of MCMC with the expressiveness of deep neural networks, and is able to significantly outperform competing methods such as Hamiltonian Monte Carlo.", "title": "" }, { "docid": "190ec7d12156c298e8a545a5655df969", "text": "The Linked Movie Database (LinkedMDB) project provides a demonstration of the first open linked dataset connecting several major existing (and highly popular) movie web resources. The database exposed by LinkedMDB contains millions of RDF triples with hundreds of thousands of RDF links to existing web data sources that are part of the growing Linking Open Data cloud, as well as to popular movierelated web pages such as IMDb. LinkedMDB uses a novel way of creating and maintaining large quantities of high quality links by employing state-of-the-art approximate join techniques for finding links, and providing additional RDF metadata about the quality of the links and the techniques used for deriving them.", "title": "" } ]
scidocsrr
7d2dcba86295187b3e3b788600ae3558
Model-based Software Testing
[ { "docid": "e94596df0531345dcb3026e9d3edcf2b", "text": "The use of context-free grammars to improve functional testing of very-large-scale integrated circuits is described. It is shown that enhanced context-free grammars are effective tools for generating test data. The discussion covers preliminary considerations, the first tests, generating systematic tests, and testing subroutines. The author's experience using context-free grammars to generate tests for VLSI circuit simulators indicates that they are remarkably effective tools that virtually anyone can use to debug virtually any program.<<ETX>>", "title": "" } ]
[ { "docid": "d395193924613f6818511650d24cf9ae", "text": "Assortment planning of substitutable products is a major operational issue that arises in many industries, such as retailing, airlines and consumer electronics. We consider a single-period joint assortment and inventory planning problem under dynamic substitution with stochastic demands, and provide complexity and algorithmic results as well as insightful structural characterizations of near-optimal solutions for important variants of the problem. First, we show that the assortment planning problem is NP-hard even for a very simple consumer choice model, where each customer is willing to buy only two products. In fact, we show that the problem is hard to approximate within a factor better than 1− 1/e. Secondly, we show that for several interesting and practical choice models, one can devise a polynomial-time approximation scheme (PTAS), i.e., the problem can be solved efficiently to within any level of accuracy. To the best of our knowledge, this is the first efficient algorithm with provably near-optimal performance guarantees for assortment planning problems under dynamic substitution. Quite surprisingly, the algorithm we propose stocks only a constant number of different product types; this constant depends only on the desired accuracy level. This provides an important managerial insight that assortments with a relatively small number of product types can obtain almost all of the potential revenue. Furthermore, we show that our algorithm can be easily adapted for more general choice models, and present numerical experiments to show that it performs significantly better than other known approaches.", "title": "" }, { "docid": "c82c32d057557903184e55f0f76c7a4e", "text": "An experimental program of steel panel shear walls is outlined and some results are presented. The tested specimens utilized low yield strength (LYS) steel infill panels and reduced beam sections (RBS) at the beam-ends. Two specimens make allowances for penetration of the panel by utilities, which would exist in a retrofit situation. The first, consisting of multiple holes, or perforations, in the steel panel, also has the characteristic of further reducing the corresponding solid panel strength (as compared with the use of traditional steel). The second such specimen utilizes quarter-circle cutouts in the panel corners, which are reinforced to transfer the panel forces to the adjacent framing.", "title": "" }, { "docid": "659cc5b1999c962c9fb0b3544c8b928a", "text": "During the recent years the mainstream framework for HCI research — the informationprocessing cognitive psychology —has gained more and more criticism because of serious problems in applying it both in research and practical design. In a debate within HCI research the capability of information processing psychology has been questioned and new theoretical frameworks searched. This paper presents an overview of the situation and discusses potentials of Activity Theory as an alternative framework for HCI research and design.", "title": "" }, { "docid": "3fd7a368b1b35f96593ac79d8a1658bc", "text": "Musical training has emerged as a useful framework for the investigation of training-related plasticity in the human brain. Learning to play an instrument is a highly complex task that involves the interaction of several modalities and higher-order cognitive functions and that results in behavioral, structural, and functional changes on time scales ranging from days to years. 
While early work focused on comparison of musical experts and novices, more recently an increasing number of controlled training studies provide clear experimental evidence for training effects. Here, we review research investigating brain plasticity induced by musical training, highlight common patterns and possible underlying mechanisms of such plasticity, and integrate these studies with findings and models for mechanisms of plasticity in other domains.", "title": "" }, { "docid": "369cdea246738d5504669e2f9581ae70", "text": "Content Security Policy (CSP) is an emerging W3C standard introduced to mitigate the impact of content injection vulnerabilities on websites. We perform a systematic, large-scale analysis of four key aspects that impact on the effectiveness of CSP: browser support, website adoption, correct configuration and constant maintenance. While browser support is largely satisfactory, with the exception of few notable issues, our analysis unveils several shortcomings relative to the other three aspects. CSP appears to have a rather limited deployment as yet and, more crucially, existing policies exhibit a number of weaknesses and misconfiguration errors. Moreover, content security policies are not regularly updated to ban insecure practices and remove unintended security violations. We argue that many of these problems can be fixed by better exploiting the monitoring facilities of CSP, while other issues deserve additional research, being more rooted into the CSP design.", "title": "" }, { "docid": "7eba71bb191a31bd87cd9d2678a7b860", "text": "In winter, rainbow smelt (Osmerus mordax) accumulate glycerol and produce an antifreeze protein (AFP), which both contribute to freeze resistance. The role of differential gene expression in the seasonal pattern of these adaptations was investigated. First, cDNAs encoding smelt and Atlantic salmon (Salmo salar) phosphoenolpyruvate carboxykinase (PEPCK) and smelt glyceraldehyde-3-phosphate dehydrogenase (GAPDH) were cloned so that all sequences required for expression analysis would be available. Using quantitative PCR, expression of beta actin in rainbow smelt liver was compared with that of GAPDH in order to determine its validity as a reference gene. Then, levels of glycerol-3-phosphate dehydrogenase (GPDH), PEPCK, and AFP relative to beta actin were measured in smelt liver over a fall-winter-spring interval. Levels of GPDH mRNA increased in the fall just before plasma glycerol accumulation, implying a driving role in glycerol synthesis. GPDH mRNA levels then declined during winter, well in advance of serum glycerol, suggesting the possibility of GPDH enzyme or glycerol conservation in smelt during the winter months. PEPCK mRNA levels rose in parallel with serum glycerol in the fall, consistent with an increasing requirement for amino acids as metabolic precursors, remained elevated for much of the winter, and then declined in advance of the decline in plasma glycerol. AFP mRNA was elevated at the onset of fall sampling in October and remained elevated until April, implying separate regulation from GPDH and PEPCK. 
Thus, winter freezing point depression in smelt appears to result from a seasonal cycle of GPDH gene expression, with an ensuing increase in the expression of PEPCK, and a similar but independent cycle of AFP gene expression.", "title": "" }, { "docid": "da17a995148ffcb4e219bb3f56f5ce4a", "text": "As education communities grow more interested in STEM (science, technology, engineering, and mathematics), schools have integrated more technology and engineering opportunities into their curricula. Makerspaces for all ages have emerged as a way to support STEM learning through creativity, community building, and hands-on learning. However, little research has evaluated the learning that happens in these spaces, especially in young children. One framework that has been used successfully as an evaluative tool in informal and technology-rich learning spaces is Positive Technological Development (PTD). PTD is an educational framework that describes positive behaviors children exhibit while engaging in digital learning experiences. In this exploratory case study, researchers observed children in a makerspace to determine whether the environment (the space and teachers) contributed to children’s Positive Technological Development. N = 20 children and teachers from a Kindergarten classroom were observed over 6 hours as they engaged in makerspace activities. The children’s activity, teacher’s facilitation, and the physical space were evaluated for alignment with the PTD framework. Results reveal that children showed high overall PTD engagement, and that teachers and the space supported children’s learning in complementary aspects of PTD. Recommendations for practitioners hoping to design and implement a young children’s makerspace are discussed.", "title": "" }, { "docid": "f9ee2d57aa034ea14749de81e241d856", "text": "Advances in computing technology and computer graphics engulfed with huge collections of data have introduced new visualization techniques. This gives users many choices of visualization techniques to gain an insight about the dataset at hand. However, selecting the most suitable visualization for a given dataset and the task to be performed on the data is subjective. The work presented here introduces a set of visualization metrics to quantify visualization techniques. Based on a comprehensive literature survey, we propose effectiveness, expressiveness, readability, and interactivity as the visualization metrics. Using these metrics, a framework for optimizing the layout of a visualization technique is also presented. The framework is based on an evolutionary algorithm (EA) which uses treemaps as a case study. The EA starts with a randomly initialized population, where each chromosome of the population represents one complete treemap. Using the genetic operators and the proposed visualization metrics as an objective function, the EA finds the optimum visualization layout. The visualizations that evolved are compared with the state-of-the-art treemap visualization tool through a user study. The user study utilizes benchmark tasks for the evaluation. A comparison is also performed using direct assessment, where internal and external visualization metrics are used. Results are further verified using analysis of variance (ANOVA) test. The results suggest better performance of the proposed metrics and the EA-based framework for optimizing visualization layout. The proposed methodology can also be extended to other visualization techniques. © 2017 Elsevier Inc. 
All rights reserved.", "title": "" }, { "docid": "bfe58868ab05a6ba607ef1f288d37f33", "text": "There is much debate as to whether online offenders are a distinct group of sex offenders or if they are simply typical sex offenders using a new technology. A meta-analysis was conducted to examine the extent to which online and offline offenders differ on demographic and psychological variables. Online offenders were more likely to be Caucasian and were slightly younger than offline offenders. In terms of psychological variables, online offenders had greater victim empathy, greater sexual deviancy, and lower impression management than offline offenders. Both online and offline offenders reported greater rates of childhood physical and sexual abuse than the general population. Additionally, online offenders were more likely to be Caucasian, younger, single, and unemployed compared with the general population. Many of the observed differences can be explained by assuming that online offenders, compared with offline offenders, have greater self-control and more psychological barriers to acting on their deviant interests.", "title": "" }, { "docid": "7f0023af2f3df688aa58ae3317286727", "text": "Time-parameterized queries (TP queries for short) retrieve (i) the actual result at the time that the query is issued, (ii) the validity period of the result given the current motion of the query and the database objects, and (iii) the change that causes the expiration of the result. Due to the highly dynamic nature of several spatio-temporal applications, TP queries are important both as standalone methods, as well as building blocks of more complex operations. However, little work has been done towards their efficient processing. In this paper, we propose a general framework that covers time-parameterized variations of the most common spatial queries, namely window queries, k-nearest neighbors and spatial joins. In particular, each of these TP queries is reduced to nearest neighbor search where the distance functions are defined according to the query type. This reduction allows the application and extension of well-known branch and bound techniques to the current problem. The proposed methods can be applied with mobile queries, mobile objects or both, given a suitable indexing method. Our experimental evaluation is based on R-trees and their extensions for dynamic objects.", "title": "" }, { "docid": "1eac633f903fb5fa1d37405ef0ca59a5", "text": "OBJECTIVE\nTo examine psychometric properties of the Self-Care Inventory-revised (SCI-R), a self-report measure of perceived adherence to diabetes self-care recommendations, among adults with diabetes.\n\n\nRESEARCH DESIGN AND METHODS\nWe used three data sets of adult type 1 and type 2 diabetic patients to examine psychometric properties of the SCI-R. Principal component and factor analyses examined whether a general factor or common factors were present. Associations with measures of theoretically related concepts were examined to assess SCI-R concurrent and convergent validity. Internal reliability coefficients were calculated. Responsiveness was assessed using paired t tests, effect size, and Guyatt's statistic for type 1 patients who completed psychoeducation.\n\n\nRESULTS\nPrincipal component and factor analyses identified a general factor but no consistent common factors. Internal consistency of the SCI-R was alpha = 0.87. Correlation with a measure of frequency of diabetes self-care behaviors was r = 0.63, providing evidence for SCI-R concurrent validity. 
The SCI-R correlated with diabetes-related distress (r = -0.36), self-esteem (r = 0.25), self-efficacy (r = 0.47), depression (r = -0.22), anxiety (r = -0.24), and HbA(1c) (r = -0.37), supporting construct validity. Responsiveness analyses showed SCI-R scores improved with diabetes psychoeducation with a medium effect size of 0.62 and a Guyatt's statistic of 0.85.\n\n\nCONCLUSIONS\nThe SCI-R is a brief, psychometrically sound measure of perceptions of adherence to recommended diabetes self-care behaviors of adults with type 1 or type 2 diabetes.", "title": "" }, { "docid": "9d34b66f9d387cb61e358c46568f03dd", "text": "This and the companion paper present an analysis of the amplitude and time-dependent changes of the apparent frequency of a seven-story reinforced-concrete hotel building in Van Nuys, Calif. Data of recorded response to 12 earthquakes are used, representing very small, intermediate, and large excitations (peak ground velocity, vmax = 0.6 2 11, 23, and 57 cm/s, causing no minor and major damage). This paper presents a description of the building structure, foundation, and surrounding soil, the strong motion data used in the analysis, the soil-structure interaction model assumed, and results of Fourier analysis of the recorded response. The results show that the apparent frequency changes form one earthquake to another. The general trend is a reduction with increasing amplitudes of motion. The smallest values (measured during the damaging motions) are 0.4 and 0.5 Hz for the longitudinal and transverse directions. The largest values are 1.1 and 1.4 Hz, respectively, determined from response to ambient noise after the damage occurred. This implies 64% reduction of the system frequency, or a factor '3 change, from small to large response amplitudes, and is interpreted to be caused by nonlinearities in the soil.", "title": "" }, { "docid": "201377a4c2d29c907c33f8cdfe6d7084", "text": "•Sequence of tokens mapped to word embeddings. •Bidirectional LSTM builds context-dependent representations for each word. •A small feedforward layer encourages generalisation. •Conditional Random Field (CRF) at the top outputs the most optimal label sequence for the sentence. •Unable to model unseen words, learns poor representations for infrequent words, and unable to capture character-level patterns.", "title": "" }, { "docid": "cc56706151e027c89eea5639486d4cd3", "text": "To refine user interest profiling, this paper focuses on extending scientific subject ontology via keyword clustering and on improving the accuracy and effectiveness of recommendation of the electronic academic publications in online services. A clustering approach is proposed for domain keywords for the purpose of the subject ontology extension. Based on the keyword clusters, the construction of user interest profiles is presented on a rather fine granularity level. In the construction of user interest profiles, we apply two types of interest profiles: explicit profiles and implicit profiles. The explicit eighted keyword graph", "title": "" }, { "docid": "f4e67e19f5938f475a2757282082b695", "text": "Classrooms are complex social systems, and student-teacher relationships and interactions are also complex, multicomponent systems. 
We posit that the nature and quality of relationship interactions between teachers and students are fundamental to understanding student engagement, can be assessed through standardized observation methods, and can be changed by providing teachers knowledge about developmental processes relevant for classroom interactions and personalized feedback/support about their interactive behaviors and cues. When these supports are provided to teachers’ interactions, student engagement increases. In this chapter, we focus on the theoretical and empirical links between interactions and engagement and present an approach to intervention designed to increase the quality of such interactions and, in turn, increase student engagement and, ultimately, learning and development. Recognizing general principles of development in complex systems, a theory of the classroom as a setting for development, and a theory of change specific to this social setting are the ultimate goals of this work. Engagement, in this context, is both an outcome in its own right.", "title": "" }, { "docid": "34d16a5eb254846f431e2c716309e20a", "text": "AIM\nWe investigated the uptake and pharmacokinetics of l-ergothioneine (ET), a dietary thione with free radical scavenging and cytoprotective capabilities, after oral administration to humans, and its effect on biomarkers of oxidative damage and inflammation.\n\n\nRESULTS\nAfter oral administration, ET is avidly absorbed and retained by the body with significant elevations in plasma and whole blood concentrations, and relatively low urinary excretion (<4% of administered ET). ET levels in whole blood were highly correlated to levels of hercynine and S-methyl-ergothioneine, suggesting that they may be metabolites. After ET administration, some decreasing trends were seen in biomarkers of oxidative damage and inflammation, including allantoin (urate oxidation), 8-hydroxy-2'-deoxyguanosine (DNA damage), 8-iso-PGF2α (lipid peroxidation), protein carbonylation, and C-reactive protein. However, most of the changes were non-significant.\n\n\nINNOVATION\nThis is the first study investigating the administration of pure ET to healthy human volunteers and monitoring its uptake and pharmacokinetics. This compound is rapidly gaining attention due to its unique properties, and this study lays the foundation for future human studies.\n\n\nCONCLUSION\nThe uptake and retention of ET by the body suggests an important physiological function. The decreasing trend of oxidative damage biomarkers is consistent with animal studies suggesting that ET may function as a major antioxidant but perhaps only under conditions of oxidative stress. Antioxid. Redox Signal.
26, 193-206.", "title": "" }, { "docid": "883042a6004a5be3865da51da20fa7c9", "text": "Green Mining is a field of MSR that studies software energy consumption and relies on software performance data. Unfortunately there is a severe lack of publicly available software power use performance data. This means that green mining researchers must generate this data themselves by writing tests, building multiple revisions of a product, and then running these tests multiple times (10+) for each software revision while measuring power use. Then, they must aggregate these measurements to estimate the energy consumed by the tests for each software revision. This is time consuming and is made more difficult by the constraints of mobile devices and their OSes. In this paper we propose, implement, and demonstrate Green Miner: the first dedicated hardware mining software repositories testbed. The Green Miner physically measures the energy consumption of mobile devices (Android phones) and automates the testing of applications, and the reporting of measurements back to developers and researchers. The Green Miner has already produced valuable results for commercial Android application developers, and has been shown to replicate other power studies' results.", "title": "" }, { "docid": "ba9ee073a073c31bfa0d1845a90f12ca", "text": "Nowadays, health disease are increasing day by day due to life style, hereditary. Especially, heart disease has become more common these days, i.e. life of people is at risk. Each individual has different values for Blood pressure, cholesterol and pulse rate. But according to medically proven results the normal values of Blood pressure is 120/90, cholesterol is and pulse rate is 72. This paper gives the survey about different classification techniques used for predicting the risk level of each person based on age, gender, Blood pressure, cholesterol, pulse rate. The patient risk level is classified using datamining classification techniques such as Naïve Bayes, KNN, Decision Tree Algorithm, Neural Network. etc., Accuracy of the risk level is high when using more number of attributes.", "title": "" }, { "docid": "b5b08bdd830144741cf900f6d41fe87d", "text": "A wealth of research has established that practice tests improve memory for the tested material. Although the benefits of practice tests are well documented, the mechanisms underlying testing effects are not well understood. We propose the mediator effectiveness hypothesis, which states that more-effective mediators (that is, information linking cues to targets) are generated during practice involving tests with restudy versus during restudy only. Effective mediators must be retrievable at time of test and must elicit the target response. We evaluated these two components of mediator effectiveness for learning foreign language translations during practice involving either test-restudy or restudy only. Supporting the mediator effectiveness hypothesis, test-restudy practice resulted in mediators that were more likely to be retrieved and more likely to elicit targets on a final test.", "title": "" }, { "docid": "bf14f996f9013351aca1e9935157c0e3", "text": "Attributed graphs are becoming important tools for modeling information networks, such as the Web and various social networks (e.g. Facebook, LinkedIn, Twitter). However, it is computationally challenging to manage and analyze attributed graphs to support effective decision making. 
In this paper, we propose, Pagrol, a parallel graph OLAP (Online Analytical Processing) system over attributed graphs. In particular, Pagrol introduces a new conceptual Hyper Graph Cube model (which is an attributed-graph analogue of the data cube model for relational DBMS) to aggregate attributed graphs at different granularities and levels. The proposed model supports different queries as well as a new set of graph OLAP Roll-Up/Drill-Down operations. Furthermore, on the basis of Hyper Graph Cube, Pagrol provides an efficient MapReduce-based parallel graph cubing algorithm, MRGraph-Cubing, to compute the graph cube for an attributed graph. Pagrol employs numerous optimization techniques: (a) a self-contained join strategy to minimize I/O cost; (b) a scheme that groups cuboids into batches so as to minimize redundant computations; (c) a cost-based scheme to allocate the batches into bags (each with a small number of batches); and (d) an efficient scheme to process a bag using a single MapReduce job. Results of extensive experimental studies using both real Facebook and synthetic datasets on a 128-node cluster show that Pagrol is effective, efficient and scalable.", "title": "" } ]
scidocsrr
590b171dde0c348430ff6e9098d7a4c6
Machine learning, medical diagnosis, and biomedical engineering research - commentary
[ { "docid": "ea8716e339cdc51210f64436a5c91c44", "text": "Feature selection has been the focus of interest for quite some time and much work has been done. With the creation of huge databases and the consequent requirements for good machine learning techniques, new problems arise and novel approaches to feature selection are in demand. This survey is a comprehensive overview of many existing methods from the 1970’s to the present. It identifies four steps of a typical feature selection method, and categorizes the different existing methods in terms of generation procedures and evaluation functions, and reveals hitherto unattempted combinations of generation procedures and evaluation functions. Representative methods are chosen from each category for detailed explanation and discussion via example. Benchmark datasets with different characteristics are used for comparative study. The strengths and weaknesses of different methods are explained. Guidelines for applying feature selection methods are given based on data types and domain characteristics. This survey identifies the future research areas in feature selection, introduces newcomers to this field, and paves the way for practitioners who search for suitable methods for solving domain-specific real-world applications. (Intelligent Data Analysis, Vol. I, no. 3, http:llwwwelsevier.co&ocate/ida)", "title": "" } ]
[ { "docid": "a827d89c56521de7dff8a59039c52181", "text": "A set of tools is being prepared in the frame of ESA activity [18191/04/NL] labelled: \"Mars Rover Chassis Evaluation Tools\" to support design, selection and optimisation of space exploration rovers in Europe. This activity is carried out jointly by Contraves Space as prime contractor, EPFL, DLR, Surrey Space Centre and EADS Space Transportation. This paper describes the current results of this study and its intended used for selection, design and optimisation on different wheeled vehicles. These tools would also allow future developments for a more efficient motion control on rover. INTRODUCTION AND MOTIVATION A set of tools is being developed to support the design of planetary rovers in Europe. The RCET will enable accurate predictions and characterisations of rover performances as related to the locomotion subsystem. This infrastructure consists of both S/W and H/W elements that will be interwoven to result in a user-friendly environment. The actual need for mobility increased in terms of range and duration. In this respect, redesigning specific aspects of the past rover concepts, in particular the development of most suitable all terrain performances is appropriate [9]. Analysis and design methodologies for terrestrial surface vehicles to operate on unprepared surfaces have been successfully applied to planet rover developments for the first time during the Apollo LRV manned lunar rover programme of the late 1960’s and early 1970’s [1,2]. Key to this accomplishment and to rational surface vehicle designs in general are quantitative descriptions of the terrain and of the interaction between the terrain and the vehicle. Not only the wheel/ground interaction is essential for efficient locomotion, but also the rover kinematics concepts. In recent terrestrial off-the-road vehicle development and acquisition, especially in the military, the so-called ‘Virtual Proving Ground’ (VPG) Simulation Technology has become essential. The integrated environments previously available to design engineers involved sophisticated hardware and software and cost hundreds of thousands of Euros. The experimentation and operational costs associated with the use of such instruments were even more alarming. The promise of VPG is to lower the risk and cost in vehicle definition and design by allowing early concept characterisation and trade-off’s based on numerical models without having to rely on prototyping for concept assessment. A similar approach is proposed for future European planetary rover programmes and is to be enabled by RCET. The first part of this paper describes the methodology used in the RCET activity and gives an overview of the different tools under development. The next section details the theory and modules used for the simulation. Finally the last section relates the first results, the future work and concludes this paper. In Proceedings of the 8th ESA Workshop on Advanced Space Technologies for Robotics and Automation 'ASTRA 2004' ESTEC, Noordwijk, The Netherlands, November 2 4, 2004", "title": "" }, { "docid": "2c328d1dd45733ad8063ea89a6b6df43", "text": "We present Residual Policy Learning (RPL): a simple method for improving nondifferentiable policies using model-free deep reinforcement learning. RPL thrives in complex robotic manipulation tasks where good but imperfect controllers are available. 
In these tasks, reinforcement learning from scratch remains data-inefficient or intractable, but learning a residual on top of the initial controller can yield substantial improvement. We study RPL in five challenging MuJoCo tasks involving partial observability, sensor noise, model misspecification, and controller miscalibration. By combining learning with control algorithms, RPL can perform long-horizon, sparse-reward tasks for which reinforcement learning alone fails. Moreover, we find that RPL consistently and substantially improves on the initial controllers. We argue that RPL is a promising approach for combining the complementary strengths of deep reinforcement learning and robotic control, pushing the boundaries of what either can achieve independently.", "title": "" }, { "docid": "0b10bd76d0d78e609c6397b60257a2ed", "text": "Persistent increase in population of world is demanding more and more supply of food. Hence there is a significant need of advancement in cultivation to meet up the future food needs. It is important to know moisture levels in soil to maximize the output. But most of farmers cannot afford high cost devices to measure soil moisture. Our research work in this paper focuses on home-made low cost moisture sensor with accuracy. In this paper we present a method to manufacture soil moisture sensor to estimate moisture content in soil hence by providing information about required water supply for good cultivation. This sensor is tested with several samples of soil and able to meet considerable accuracy. Measuring soil moisture is an effective way to determine condition of soil and get information about the quantity of water that need to be supplied for cultivation. Two separate methods are illustrated in this paper to determine soil moisture over an area and along the depth.", "title": "" }, { "docid": "0038c1aaa5d9823f44c118a7048d574a", "text": "We present the design and implementation of a system which allows a standard paper-based exam to be graded via tablet computers. The paper exam is given normally in a course, with a specialized footer that allows for automated recognition of each exam page. The exam pages are then scanned in via a high-speed scanner, graded by one or more people using tablet computers, and returned electronically to the students. The system provides many advantages over regular paper-based exam grading, and boasts a faster grading experience than traditional grading methods.", "title": "" }, { "docid": "7e8feb5f8d816a0c0626f6fdc4db7c04", "text": "In this paper, we analyze if cascade usage of the context encoder with increasing input can improve the results of the inpainting. For this purpose, we train context encoder for 64x64 pixels images in a standard way and use its resized output to fill in the missing input region of the 128x128 context encoder, both in training and evaluation phase. As the result, the inpainting is visibly more plausible. In order to thoroughly verify the results, we introduce normalized squared-distortion, a measure for quantitative inpainting evaluation, and we provide its mathematical explanation. This is the first attempt to formalize the inpainting measure, which is based on the properties of latent feature representation, instead of L2 reconstruction loss.", "title": "" }, { "docid": "038e48bcae7346ef03a318bb3a280bcc", "text": "Low back pain (LBP) is a problem worldwide with a lifetime prevalence reported to be as high as 84%. 
The lifetime prevalence of low back pain is reported to be as high as 84%, and the prevalence of chronic low back pain is about 23%, with 11–12% of the population being disabled by low back pain [1]. LBP is defined as pain experienced between the twelfth rib and the inferior gluteal fold, with or without associated leg pain [2]. Based on the etiology LBP is classified as Specific Low Back Pain and Non-specific Low Back Pain. Of all the LBP patients 10% are attributed to Specific and 90% are attributed to NonSpecific Low Back Pain (NSLBP) [3]. Specific LBP are those back pains which have specific etiology causes like Sponylolisthesis, Spondylosis, Ankylosing Spondylitis, Prolapsed disc etc.", "title": "" }, { "docid": "67d8680a41939c58a866f684caa514a3", "text": "Triboelectric effect works on the principle of triboelectrification and electrostatic induction. This principle is used to generate voltage by converting mechanical energy into electrical energy. This paper presents the charging behavior of different capacitors by rubbing of two different materials using mechanical motion. The numerical and simulation modeling, describes the charging performance of a TENG with a bridge rectifier. It is also demonstrated that a 10 μF capacitor can be charged to a maximum of 24.04 volt in 300 seconds and it is also provide 2800 μJ/cm3 maximum energy density. Such system can be used for ultralow power electronic devices, biomedical devices and self-powered appliances etc.", "title": "" }, { "docid": "09f19a5e4751dc3ee4aa38817aafd3cf", "text": "Article history: Received 10 September 2012 Received in revised form 12 March 2013 Accepted 24 March 2013 Available online 23 April 2013", "title": "" }, { "docid": "44e28ba2149dce27fd0ccc9ed2065feb", "text": "Flip chip assembly technology is an attractive solution for high I/O density and fine-pitch microelectronics packaging. Recently, high efficient GaN-based light-emitting diodes (LEDs) have undergone a rapid development and flip chip bonding has been widely applied to fabricate high-brightness GaN micro-LED arrays [1]. The flip chip GaN LED has some advantages over the traditional top-emission LED, including improved current spreading, higher light extraction efficiency, better thermal dissipation capability and the potential of further optical component integration [2, 3]. With the advantages of flip chip assembly, micro-LED (μLED) arrays with high I/O density can be performed with improved luminous efficiency than conventional p-side-up micro-LED arrays and are suitable for many potential applications, such as micro-displays, bio-photonics and visible light communications (VLC), etc. In particular, μLED array based selif-emissive micro-display has the promising to achieve high brightness and contrast, reliability, long-life and compactness, which conventional micro-displays like LCD, OLED, etc, cannot compete with. In this study, GaN micro-LED array device with flip chip assembly package process was presented. The bonding quality of flip chip high density micro-LED array is tested by daisy chain test. The p-n junction tests of the devices are measured for electrical characteristics. The illumination condition of each micro-diode pixel was examined under a forward bias. Failure mode analysis was performed using cross sectioning and scanning electron microscopy (SEM). 
Finally, the fully packaged micro-LED array device is demonstrated as a prototype of dice projector system.", "title": "" }, { "docid": "a112cd88f637ecb0465935388bc65ca4", "text": "This paper shows a Class-E RF power amplifier designed to obtain a flat-top transistor-voltage waveform whose peak value is 81% of the peak value of the voltage of a “Classical” Class-E amplifier.", "title": "" }, { "docid": "1de10e40580ba019045baaa485f8e729", "text": "Automated labeling of anatomical structures in medical images is very important in many neuroscience studies. Recently, patch-based labeling has been widely investigated to alleviate the possible mis-alignment when registering atlases to the target image. However, the weights used for label fusion from the registered atlases are generally computed independently and thus lack the capability of preventing the ambiguous atlas patches from contributing to the label fusion. More critically, these weights are often calculated based only on the simple patch similarity, thus not necessarily providing optimal solution for label fusion. To address these limitations, we propose a generative probability model to describe the procedure of label fusion in a multi-atlas scenario, for the goal of labeling each point in the target image by the best representative atlas patches that also have the largest labeling unanimity in labeling the underlying point correctly. Specifically, sparsity constraint is imposed upon label fusion weights, in order to select a small number of atlas patches that best represent the underlying target patch, thus reducing the risks of including the misleading atlas patches. The labeling unanimity among atlas patches is achieved by exploring their dependencies, where we model these dependencies as the joint probability of each pair of atlas patches in correctly predicting the labels, by analyzing the correlation of their morphological error patterns and also the labeling consensus among atlases. The patch dependencies will be further recursively updated based on the latest labeling results to correct the possible labeling errors, which falls to the Expectation Maximization (EM) framework. To demonstrate the labeling performance, we have comprehensively evaluated our patch-based labeling method on the whole brain parcellation and hippocampus segmentation. Promising labeling results have been achieved with comparison to the conventional patch-based labeling method, indicating the potential application of the proposed method in the future clinical studies.", "title": "" }, { "docid": "753a4af9741cd3fec4e0e5effaf5fc67", "text": "With the growing volume of online information, recommender systems have been an effective strategy to overcome information overload. The utility of recommender systems cannot be overstated, given their widespread adoption in many web applications, along with their potential impact to ameliorate many problems related to over-choice. In recent years, deep learning has garnered considerable interest in many research fields such as computer vision and natural language processing, owing not only to stellar performance but also to the attractive property of learning feature representations from scratch. The influence of deep learning is also pervasive, recently demonstrating its effectiveness when applied to information retrieval and recommender systems research. The field of deep learning in recommender system is flourishing. 
This article aims to provide a comprehensive review of recent research efforts on deep learning-based recommender systems. More concretely, we provide and devise a taxonomy of deep learning-based recommendation models, along with a comprehensive summary of the state of the art. Finally, we expand on current trends and provide new perspectives pertaining to this new and exciting development of the field.", "title": "" }, { "docid": "18c507d6624f153cb1b7beaf503b0d54", "text": "The critical period hypothesis for language acquisition (CP) proposes that the outcome of language acquisition is not uniform over the lifespan but rather is best during early childhood. The CP hypothesis was originally proposed for spoken language but recent research has shown that it applies equally to sign language. This paper summarizes a series of experiments designed to investigate whether and how the CP affects the outcome of sign language acquisition. The results show that the CP has robust effects on the development of sign language comprehension. Effects are found at all levels of linguistic structure (phonology, morphology and syntax, the lexicon and semantics) and are greater for first as compared to second language acquisition. In addition, CP effects have been found on all measures of language comprehension examined to date, namely, working memory, narrative comprehension, sentence memory and interpretation, and on-line grammatical processing. The nature of these effects with respect to a model of language comprehension is discussed.", "title": "" }, { "docid": "84f9a6913a7689a5bbeb04f3173237b2", "text": "BACKGROUND\nPsychosocial treatments are the mainstay of management of autism in the UK but there is a notable lack of a systematic evidence base for their effectiveness. Randomised controlled trial (RCT) studies in this area have been rare but are essential because of the developmental heterogeneity of the disorder. We aimed to test a new theoretically based social communication intervention targeting parental communication in a randomised design against routine care alone.\n\n\nMETHODS\nThe intervention was given in addition to existing care and involved regular monthly therapist contact for 6 months with a further 6 months of 2-monthly consolidation sessions. It aimed to educate parents and train them in adapted communication tailored to their child's individual competencies. Twenty-eight children with autism were randomised between this treatment and routine care alone, stratified for age and baseline severity. Outcome was measured at 12 months from commencement of intervention, using standardised instruments.\n\n\nRESULTS\nAll cases studied met full Autism Diagnostic Interview (ADI) criteria for classical autism. Treatment and controls had similar routine care during the study period and there were no study dropouts after treatment had started. The active treatment group showed significant improvement compared with controls on the primary outcome measure--Autism Diagnostic Observation Schedule (ADOS) total score, particularly in reciprocal social interaction--and on secondary measures of expressive language, communicative initiation and parent-child interaction. Suggestive but non-significant results were found in Vineland Adaptive Behaviour Scales (Communication Sub-domain) and ADOS stereotyped and restricted behaviour domain.\n\n\nCONCLUSIONS\nA Randomised Treatment Trial design of this kind in classical autism is feasible and acceptable to patients. 
This pilot study suggests significant additional treatment benefits following a targeted (but relatively non-intensive) dyadic social communication treatment, when compared with routine care. The study needs replication on larger and independent samples. It should encourage further RCT designs in this area.", "title": "" }, { "docid": "2afb992058eb720ff0baf4216e3a22c2", "text": "In most cases authors are permitted to post their version of the article (e.g. in Word or Tex form) to their personal website or institutional repository. Authors requiring further information regarding Elsevier's archiving and manuscript policies are encouraged to visit: Summary. — A longitudinal anthropological study of cotton farming in Warangal District of Andhra Pradesh, India, compares a group of villages before and after adoption of Bt cotton. It distinguishes \" field-level \" and \" farm-level \" impacts. During this five-year period yields rose by 18% overall, with greater increases among poor farmers with the least access to information. Insecticide sprayings dropped by 55%, although predation by non-target pests was rising. However shifting from the field to the historically-situated context of the farm recasts insect attacks as a symptom of larger problems in agricultural decision-making. Bt cotton's opponents have failed to recognize real benefits at the field level, while its backers have failed to recognize systemic problems that Bt cotton may exacerbate.", "title": "" }, { "docid": "a740207cc7d4a0db263dae2b7c9402d9", "text": "In this paper we propose a Deep Autoencoder Mixture Clustering (DAMIC) algorithm based on a mixture of deep autoencoders where each cluster is represented by an autoencoder. A clustering network transforms the data into another space and then selects one of the clusters. Next, the autoencoder associated with this cluster is used to reconstruct the data-point. The clustering algorithm jointly learns the nonlinear data representation and the set of autoencoders. The optimal clustering is found by minimizing the reconstruction loss of the mixture of autoencoder network. Unlike other deep clustering algorithms, no regularization term is needed to avoid data collapsing to a single point. Our experimental evaluations on image and text corpora show significant improvement over state-of-the-art methods.", "title": "" }, { "docid": "a8abc8da0f2d5f8055c4ed6ea2294c6c", "text": "This paper presents the design of a modulated metasurface (MTS) antenna capable to provide both right-hand (RH) and left-hand (LH) circularly polarized (CP) boresight radiation at Ku-band (13.5 GHz). This antenna is based on the interaction of two cylindrical-wavefront surface wave (SW) modes of transverse electric (TE) and transverse magnetic (TM) types with a rotationally symmetric, anisotropic-modulated MTS placed on top of a grounded slab. A properly designed centered circular waveguide feed excites the two orthogonal (decoupled) SW modes and guarantees the balance of the power associated with each of them. By a proper selection of the anisotropy and modulation of the MTS pattern, the phase velocities of the two modes are synchronized, and leakage is generated in broadside direction with two orthogonal linear polarizations. When the circular waveguide is excited with two mutually orthogonal TE11 modes in phase-quadrature, an LHCP or RHCP antenna is obtained. 
This paper explains the feeding system and the MTS requirements that guarantee the balanced conditions of the TM/TE SWs and consequent generation of dual CP boresight radiation.", "title": "" }, { "docid": "c5cfe386f6561eab1003d5572443612e", "text": "Agri-Food is the largest manufacturing sector in the UK. It supports a food chain that generates over {\\pounds}108bn p.a., with 3.9m employees in a truly international industry and exports {\\pounds}20bn of UK manufactured goods. However, the global food chain is under pressure from population growth, climate change, political pressures affecting migration, population drift from rural to urban regions and the demographics of an aging global population. These challenges are recognised in the UK Industrial Strategy white paper and backed by significant investment via a Wave 2 Industrial Challenge Fund Investment (\"Transforming Food Production: from Farm to Fork\"). Robotics and Autonomous Systems (RAS) and associated digital technologies are now seen as enablers of this critical food chain transformation. To meet these challenges, this white paper reviews the state of the art in the application of RAS in Agri-Food production and explores research and innovation needs to ensure these technologies reach their full potential and deliver the necessary impacts in the Agri-Food sector.", "title": "" }, { "docid": "b24fa0e9c208bf8ea0ea5f3fe0453884", "text": "Bacteria and fungi are ubiquitous in the atmosphere. The diversity and abundance of airborne microbes may be strongly influenced by atmospheric conditions or even influence atmospheric conditions themselves by acting as ice nucleators. However, few comprehensive studies have described the diversity and dynamics of airborne bacteria and fungi based on culture-independent techniques. We document atmospheric microbial abundance, community composition, and ice nucleation at a high-elevation site in northwestern Colorado. We used a standard small-subunit rRNA gene Sanger sequencing approach for total microbial community analysis and a bacteria-specific 16S rRNA bar-coded pyrosequencing approach (4,864 sequences total). During the 2-week collection period, total microbial abundances were relatively constant, ranging from 9.6 x 10(5) to 6.6 x 10(6) cells m(-3) of air, and the diversity and composition of the airborne microbial communities were also relatively static. Bacteria and fungi were nearly equivalent, and members of the proteobacterial groups Burkholderiales and Moraxellaceae (particularly the genus Psychrobacter) were dominant. These taxa were not always the most abundant in freshly fallen snow samples collected at this site. Although there was minimal variability in microbial abundances and composition within the atmosphere, the number of biological ice nuclei increased significantly during periods of high relative humidity. However, these changes in ice nuclei numbers were not associated with changes in the relative abundances of the most commonly studied ice-nucleating bacteria.", "title": "" }, { "docid": "9bbf9422ae450a17e0c46d14acf3a3e3", "text": "This short paper outlines how polynomial chaos theory (PCT) can be utilized for manipulator dynamic analysis and controller design in a 4-DOF selective compliance assembly robot-arm-type manipulator with variation in both the link masses and payload. It includes a simple linear control algorithm into the formulation to show the capability of the PCT framework.", "title": "" } ]
scidocsrr
1d42f6b0d2e62e2463e4a2b36186afc3
Generation Alpha at the Intersection of Technology, Play and Motivation
[ { "docid": "e0ef97db18a47ba02756ba97830a0d0c", "text": "This article reviews the literature concerning the introduction of interactive whiteboards (IWBs) in educational settings. It identifies common themes to emerge from a burgeoning and diverse literature, which includes reports and summaries available on the Internet. Although the literature reviewed is overwhelmingly positive about the impact and the potential of IWBs, it is primarily based on the views of teachers and pupils. There is insufficient evidence to identify the actual impact of such technologies upon learning either in terms of classroom interaction or upon attainment and achievement. This article examines this issue in light of varying conceptions of interactivity and research into the effects of learning with verbal and visual information.", "title": "" }, { "docid": "e5a3119470420024b99df2d6eb14b966", "text": "Why should wait for some days to get or receive the rules of play game design fundamentals book that you order? Why should you take it if you can get the faster one? You can find the same book that you order right here. This is it the book that you can receive directly after purchasing. This rules of play game design fundamentals is well known book in the world, of course many people will try to own it. Why don't you become the first? Still confused with the way?", "title": "" }, { "docid": "9d6a0b31bf2b64f1ec624222a2222e2a", "text": "This is the translation of a paper by Marc Prensky, the originator of the famous metaphor digital natives digital immigrants. Here, ten years after the birth of that successful metaphor, Prensky outlines that, while the distinction between digital natives and immigrants will progressively become less important, new concepts will be needed to represent the continuous evolution of the relationship between man and digital technologies. In this paper Prensky introduces the concept of digital wisdom, a human quality which develops as a result of the empowerment that the natural human skills can receive through a creative and clever use of digital technologies. KEY-WORDS Digital natives, digital immigrants, digital wisdom, digital empowerment. Prensky M. (2010). H. Sapiens Digitale: dagli Immigrati digitali e nativi digitali alla saggezza digitale. TD-Tecnologie Didattiche, 50, pp. 17-24 17 I problemi del mondo d’oggi non possono essere risolti facendo ricorso allo stesso tipo di pensiero che li ha creati", "title": "" } ]
[ { "docid": "d4c8e9ff4129b2e6e7671f11667c57d5", "text": "Currently, the number of surveillance cameras is rapidly increasing responding to security issues. But constructing an intelligent detection system is not easy because it needs high computing performance. This study aims to construct a real-world video surveillance system that can effectively detect moving person using limited resources. To this end, we propose a simple framework to detect and recognize moving objects using outdoor CCTV video footages by combining background subtraction and Convolutional Neural Networks (CNNs). A background subtraction algorithm is first applied to each video frame to find the regions of interest (ROIs). A CNN classification is then carried out to classify the obtained ROIs into one of the predefined classes. Our approach much reduces the computation complexity in comparison to other object detection algorithms. For the experiments, new datasets are constructed by filming alleys and playgrounds, places where crimes are likely to occur. Different image sizes and experimental settings are tested to construct the best classifier for detecting people. The best classification accuracy of 0.85 was obtained for a test set from the same camera with training set and 0.82 with different cameras.", "title": "" }, { "docid": "b40129a15767189a7a595db89c066cf8", "text": "To increase reliability of face recognition system, the system must be able to distinguish real face from a copy of face such as a photograph. In this paper, we propose a fast and memory efficient method of live face detection for embedded face recognition system, based on the analysis of the movement of the eyes. We detect eyes in sequential input images and calculate variation of each eye region to determine whether the input face is a real face or not. Experimental results show that the proposed approach is competitive and promising for live face detection. Keywords—Liveness Detection, Eye detection, SQI.", "title": "" }, { "docid": "41468ef8950c372586485725478c80db", "text": "Sobolevicanthus transvaalensis n.sp. is described from the Cape Teal, Anas capensis Gmelin, 1789, collected in the Republic of South Africa. The new species possesses 8 skrjabinoid hooks 78–88 μm long (mean 85 μm) and a short claviform cirrus-sac 79–143 μm long and resembles S. javanensis (Davis, 1945) and S. terraereginae (Johnston, 1913). It can be distinguished from S. javanensis by its shorter cirrus-sac and smaller cirrus diameter, and by differences in the morphology of the accessory sac and vagina and in their position relative to the cirrus-sac. It can be separated from S. terraereginae on the basis of cirrus length and diameter. The basal diameter of the cirrus in S. terraereginae is three times that in S. transvaalensis. ac]19830414", "title": "" }, { "docid": "1e57a3da54c0d37bc47134961feaf981", "text": "Software Development Life Cycle (SDLC) is a process consisting of various phases like requirements analysis, designing, coding, testing and implementation & maintenance of a software system as well as the way, in which these phases are implemented. Research studies reveal that the initial two phases, viz. requirements and design are the skeleton of the entire development life cycle. 
Designing has several sub-activities such as Architectural, Function-Oriented and Object- Oriented design, which aim to transform the requirements into detailed specifications covering all facets of the system in a proper way, but at the same time, there exists various related challenges too. One of the foremost challenges is the minimum interaction between construction and design teams causing numerous problems during design such as: production delays, incomplete designs, rework, change orders, etc. Prior research studies reveal that Artificial Intelligence (AI) techniques may eliminate these problems by offering several tools/techniques to automate certain processes up to a certain extent. In this paper, our major aim is to identify the challenges in each of the stages of the design phase and possibility of AI techniques to overcome these identified issues. In addition, the paper also explores the relationship between these issues and their possible AI solution/s through Venn-Diagram. For some of the issues, there exist more than one AI techniques but for some issues, no AI technique/s have been found to overcome the same and accordingly, those issues are still open for further research.", "title": "" }, { "docid": "9c9e3261c293aedea006becd2177a6d5", "text": "This paper proposes a motion-focusing method to extract key frames and generate summarization synchronously for surveillance videos. Within each pre-segmented video shot, the proposed method focuses on one constant-speed motion and aligns the video frames by fixing this focused motion into a static situation. According to the relative motion theory, the other objects in the video are moving relatively to the selected kind of motion. This method finally generates a summary image containing all moving objects and embedded with spatial and motional information, together with key frames to provide details corresponding to the regions of interest in the summary image. We apply this method to the lane surveillance system and the results provide us a new way to understand the video efficiently.", "title": "" }, { "docid": "704961413b936703a1a6fe26bc64f256", "text": "The rise of cloud computing brings virtualization technology continues to heat up. Based on Xen's I/O virtualization subsystem, under the virtual machine environment which has multi-type tasks, the existing schedulers can't achieve response with I/O-bound tasks in time. This paper presents ECredit scheduler combined complexity evaluation of I/O task in Xen's I/O virtualization subsystem. It prioritizes the I/O-bound task realizing fair scheduling. The experiments show that the optimized scheduling algorithm can reduce the response time of I/O-bound task and improve the performance of the virtual system.", "title": "" }, { "docid": "bf333ff6237d875c34a5c62b0216d5d9", "text": "The design of tall buildings essentially involves a conceptual design, approximate analysis, preliminary design and optimization, to safely carry gravity and lateral loads. The design criteria are, strength, serviceability, stability and human comfort. The strength is satisfied by limit stresses, while serviceability is satisfied by drift limits in the range of H/500 to H/1000. Stability is satisfied by sufficient factor of safety against buckling and P-Delta effects. The factor of safety is around 1.67 to 1.92. The human comfort aspects are satisfied by accelerations in the range of 10 to 25 milli-g, where g=acceleration due to gravity, about 981cms/sec^2. 
The aim of the structural engineer is to arrive at suitable structural schemes, to satisfy these criteria, and assess their structural weights in weight/unit area in square feet or square meters. This initiates structural drawings and specifications to enable construction engineers to proceed with fabrication and erection operations. The weight of steel in lbs/sqft or in kg/sqm is often a parameter the architects and construction managers are looking for from the structural engineer. This includes the weights of floor system, girders, braces and columns. The premium for wind, is optimized to yield drifts in the range of H/500, where H is the height of the tall building. Herein, some aspects of the design of gravity system, and the lateral system, are explored. Preliminary design and optimization steps are illustrated with examples of actual tall buildings designed by CBM Engineers, Houston, Texas, with whom the author has been associated with during the past 3 decades. Dr.Joseph P.Colaco, its President, has been responsible for the tallest buildings in Los Angeles, Houston, St. Louis, Dallas, New Orleans, and Washington, D.C, and with the author in its design staff as a Senior Structural Engineer. Research in the development of approximate methods of analysis, and preliminary design and optimization, has been conducted at WPI, with several of the author’s graduate students. These are also illustrated. Software systems to do approximate analysis of shear-wall frame, framed-tube, out rigger braced tall buildings are illustrated. Advanced Design courses in reinforced and pre-stressed concrete, as well as structural steel design at WPI, use these systems. Research herein, was supported by grants from NSF, Bethlehem Steel, and Army.", "title": "" }, { "docid": "ddecb743bc098a3e31ca58bc17810cf1", "text": "Maxout network is a powerful alternate to traditional sigmoid neural networks and is showing success in speech recognition. However, maxout network is prone to overfitting thus regularization methods such as dropout are often needed. In this paper, a stochastic pooling regularization method for max-out networks is proposed to control overfitting. In stochastic pooling, a distribution is produced for each pooling region by the softmax normalization of the piece values. The active piece is selected based on the distribution during training, and an effective probability weighting is conducted during testing. We apply the stochastic pooling maxout (SPM) networks within the DNN-HMM framework and evaluate its effectiveness under a low-resource speech recognition condition. On benchmark test sets, the SPM network yields 4.7-8.6% relative improvements over the baseline maxout network. Further evaluations show the superiority of stochastic pooling over dropout for low-resource speech recognition.", "title": "" }, { "docid": "b754b1d245aa68aeeb37cf78cf54682f", "text": "This paper postulates that water structure is altered by biomolecules as well as by disease-enabling entities such as certain solvated ions, and in turn water dynamics and structure affect the function of biomolecular interactions. Although the structural and dynamical alterations are subtle, they perturb a well-balanced system sufficiently to facilitate disease. We propose that the disruption of water dynamics between and within cells underlies many disease conditions. 
We survey recent advances in magnetobiology, nanobiology, and colloid and interface science that point compellingly to the crucial role played by the unique physical properties of quantum coherent nanomolecular clusters of magnetized water in enabling life at the cellular level by solving the “problems” of thermal diffusion, intracellular crowding, and molecular self-assembly. Interphase water and cellular surface tension, normally maintained by biological sulfates at membrane surfaces, are compromised by exogenous interfacial water stressors such as cationic aluminum, with consequences that include greater local water hydrophobicity, increased water tension, and interphase stretching. The ultimate result is greater “stiffness” in the extracellular matrix and either the “soft” cancerous state or the “soft” neurodegenerative state within cells. Our hypothesis provides a basis for understanding why so many idiopathic diseases of today are highly stereotyped and pluricausal. OPEN ACCESS Entropy 2013, 15 3823", "title": "" }, { "docid": "473baf99a816e24cec8dec2b03eb0958", "text": "We propose a method that allows an unskilled user to create an accurate physical replica of a digital 3D model. We use a projector/camera pair to scan a work in progress, and project multiple forms of guidance onto the object itself that indicate which areas need more material, which need less, and where any ridges, valleys or depth discontinuities are. The user adjusts the model using the guidance and iterates, making the shape of the physical object approach that of the target 3D model over time. We show how this approach can be used to create a duplicate of an existing object, by scanning the object and using that scan as the target shape. The user is free to make the reproduction at a different scale and out of different materials: we turn a toy car into cake. We extend the technique to support replicating a sequence of models to create stop-motion video. We demonstrate an end-to-end system in which real-world performance capture data is retargeted to claymation. Our approach allows users to easily and accurately create complex shapes, and naturally supports a large range of materials and model sizes.", "title": "" }, { "docid": "3d1c1e507ed603488742666a9cfb45f2", "text": "This page is dedicated to design science research in Information Systems (IS). Design science research is yet another \"lens\" or set of synthetic and analytical techniques and perspectives (complementing the Positivist and Interpretive perspectives) for performing research in IS. Design science research involves the creation of new knowledge through design of novel or innovative artifacts (things or processes that have or can have material existence) and analysis of the use and/or performance of such artifacts along with reflection and abstraction—to improve and understand the behavior of aspects of Information Systems. Such artifacts include—but certainly are not limited to—algorithms (e.g. for information retrieval), human/computer interfaces, and system design methodologies or languages. Design science researchers can be found in many disciplines and fields, notably Engineering and Computer Science; they use a variety of approaches, methods and techniques. 
In Information Systems, following a number of years of a general shift in IS research away from technological to managerial and organizational issues, an increasing number of observers are calling for a return to an exploration of the \"IT\" that underlies all IS research (Orlikowski and Iacono, 2001) thus underlining the need for IS design science research.", "title": "" }, { "docid": "bd2c3ee69cda5c08eb106e0994a77186", "text": "This paper explores the combination of self-organizing map (SOM) and feedback, in order to represent sequences of inputs. In general, neural networks with time-delayed feedback represent time implicitly, by combining current inputs and past activities. It has been difficult to apply this approach to SOM, because feedback generates instability during learning. We demonstrate a solution to this problem, based on a nonlinearity. The result is a generalization of SOM that learns to represent sequences recursively. We demonstrate that the resulting representations are adapted to the temporal statistics of the input series.", "title": "" }, { "docid": "3188d901ab997dcabc795ad3da6af659", "text": "This paper is about detecting incorrect arcs in a dependency parse for sentences that contain grammar mistakes. Pruning these arcs results in well-formed parse fragments that can still be useful for downstream applications. We propose two automatic methods that jointly parse the ungrammatical sentence and prune the incorrect arcs: a parser retrained on a parallel corpus of ungrammatical sentences with their corrections, and a sequence-to-sequence method. Experimental results show that the proposed strategies are promising for detecting incorrect syntactic dependencies as well as incorrect semantic dependencies.", "title": "" }, { "docid": "7074c90ee464e4c1d0e3515834835817", "text": "Deep convolutional neural networks (CNNs) have achieved breakthrough performance in many pattern recognition tasks such as image classification. However, the development of high-quality deep models typically relies on a substantial amount of trial-and-error, as there is still no clear understanding of when and why a deep model works. In this paper, we present a visual analytics approach for better understanding, diagnosing, and refining deep CNNs. We formulate a deep CNN as a directed acyclic graph. Based on this formulation, a hybrid visualization is developed to disclose the multiple facets of each neuron and the interactions between them. In particular, we introduce a hierarchical rectangle packing algorithm and a matrix reordering algorithm to show the derived features of a neuron cluster. We also propose a biclustering-based edge bundling method to reduce visual clutter caused by a large number of connections between neurons. We evaluated our method on a set of CNNs and the results are generally favorable.", "title": "" }, { "docid": "6fb8a5456a2bb0ce21f8ac0664aac6eb", "text": "For autonomous driving, moving objects like vehicles and pedestrians are of critical importance as they primarily influence the maneuvering and braking of the car. Typically, they are detected by motion segmentation of dense optical flow augmented by a CNN based object detector for capturing semantics. In this paper, our aim is to jointly model motion and appearance cues in a single convolutional network. We propose a novel two-stream architecture for joint learning of object detection and motion segmentation. We designed three different flavors of our network to establish systematic comparison. 
It is shown that the joint training of tasks significantly improves accuracy compared to training them independently. Although motion segmentation has relatively fewer data than vehicle detection. The shared fusion encoder benefits from the joint training to learn a generalized representation. We created our own publicly available dataset (KITTI MOD) by extending KITTI object detection to obtain static/moving annotations on the vehicles. We compared against MPNet as a baseline, which is the current state of the art for CNN-based motion detection. It is shown that the proposed two-stream architecture improves the mAP score by 21.5% in KITTI MOD. We also evaluated our algorithm on the non-automotive DAVIS dataset and obtained accuracy close to the state-of-the-art performance. The proposed network runs at 8 fps on a Titan X GPU using a basic VGG16 encoder.", "title": "" }, { "docid": "685e6338727b4ab899cffe2bbc1a20fc", "text": "Existing code similarity comparison methods, whether source or binary code based, are mostly not resilient to obfuscations. In the case of software plagiarism, emerging obfuscation techniques have made automated detection increasingly difficult. In this paper, we propose a binary-oriented, obfuscation-resilient method based on a new concept, longest common subsequence of semantically equivalent basic blocks, which combines rigorous program semantics with longest common subsequence based fuzzy matching. We model the semantics of a basic block by a set of symbolic formulas representing the input-output relations of the block. This way, the semantics equivalence (and similarity) of two blocks can be checked by a theorem prover. We then model the semantics similarity of two paths using the longest common subsequence with basic blocks as elements. This novel combination has resulted in strong resiliency to code obfuscation. We have developed a prototype and our experimental results show that our method is effective and practical when applied to real-world software.", "title": "" }, { "docid": "e6555beb963f40c39089959a1c417c2f", "text": "In this paper, we consider the problem of insufficient runtime and memory-space complexities of deep convolutional neural networks for visual emotion recognition. A survey of recent compression methods and efficient neural networks architectures is provided. We experimentally compare the computational speed and memory consumption during the training and the inference stages of such methods as the weights matrix decomposition, binarization and hashing. It is shown that the most efficient optimization can be achieved with the matrices decomposition and hashing. Finally, we explore the possibility to distill the knowledge from the large neural network, if only large unlabeled sample of facial images is available.", "title": "" }, { "docid": "9a4ca8c02ffb45013115124011e7417e", "text": "Now, we come to offer you the right catalogues of book to open. multisensor data fusion a review of the state of the art is one of the literary work in this world in suitable to be reading material. That's not only this book gives reference, but also it will show you the amazing benefits of reading a book. Developing your countless minds is needed; moreover you are kind of people with great curiosity. So, the book is very appropriate for you.", "title": "" }, { "docid": "b044bab52a36945cfc9d7948468a78ee", "text": "Recently, the speech recognition is very attractive for researchers because of the very significant related applications. 
For this reason, novel research in this area has been of great importance in the academic community. The aim of this work is to find out a new and appropriate feature extraction method for Arabic language recognition. In the present study, wavelet packet transform (WPT) with modular arithmetic and neural network were investigated for Arabic vowels recognition. The number of repetitions of the remainder was computed for a speech signal. 266 coefficients are given to a probabilistic neural network (PNN) for classification. The claimed results showed that the proposed method can make an effectual analysis with classification rates that may reach 97%. Four published methods were studied for comparison. The proposed modular wavelet packet and neural networks (MWNN) expert system could obtain the best recognition rate. [Emad F. Khalaf, Khaled Daqrouq Ali Morfeq. Arabic Vowels Recognition by Modular Arithmetic and Wavelets using Neural Network. Life Sci J 2014;11(3):33-41]. (ISSN:1097-8135). http://www.lifesciencesite.com. 6", "title": "" }, { "docid": "9f1441bc10d7b0234a3736ce83d5c14b", "text": "Conservation of genetic diversity, one of the three main forms of biodiversity, is a fundamental concern in conservation biology as it provides the raw material for evolutionary change and thus the potential to adapt to changing environments. By means of meta-analyses, we tested the generality of the hypotheses that habitat fragmentation affects genetic diversity of plant populations and that certain life history and ecological traits of plants can determine differential susceptibility to genetic erosion in fragmented habitats. Additionally, we assessed whether certain methodological approaches used by authors influence the ability to detect fragmentation effects on plant genetic diversity. We found overall large and negative effects of fragmentation on genetic diversity and outcrossing rates but no effects on inbreeding coefficients. Significant increases in inbreeding coefficient in fragmented habitats were only observed in studies analyzing progenies. The mating system and the rarity status of plants explained the highest proportion of variation in the effect sizes among species. The age of the fragment was also decisive in explaining variability among effect sizes: the larger the number of generations elapsed in fragmentation conditions, the larger the negative magnitude of effect sizes on heterozygosity. Our results also suggest that fragmentation is shifting mating patterns towards increased selfing. We conclude that current conservation efforts in fragmented habitats should be focused on common or recently rare species and mainly outcrossing species and outline important issues that need to be addressed in future research on this area.", "title": "" } ]
scidocsrr
bb8fcd3d1a60426e69032232797ee101
An End-to-End Text-Independent Speaker Identification System on Short Utterances
[ { "docid": "b83e537a2c8dcd24b096005ef0cb3897", "text": "We present Deep Speaker, a neural speaker embedding system that maps utterances to a hypersphere where speaker similarity is measured by cosine similarity. The embeddings generated by Deep Speaker can be used for many tasks, including speaker identification, verification, and clustering. We experiment with ResCNN and GRU architectures to extract the acoustic features, then mean pool to produce utterance-level speaker embeddings, and train using triplet loss based on cosine similarity. Experiments on three distinct datasets suggest that Deep Speaker outperforms a DNN-based i-vector baseline. For example, Deep Speaker reduces the verification equal error rate by 50% (relatively) and improves the identification accuracy by 60% (relatively) on a text-independent dataset. We also present results that suggest adapting from a model trained with Mandarin can improve accuracy for English speaker recognition.", "title": "" }, { "docid": "83525470a770a036e9c7bb737dfe0535", "text": "It is known that the performance of the i-vectors/PLDA based speaker verification systems is affected in the cases of short utterances and limited training data. The performance degradation appears because the shorter the utterance, the less reliable the extracted i-vector is, and because the total variability covariance matrix and the underlying PLDA matrices need a significant amount of data to be robustly estimated. Considering the “MIT Mobile Device Speaker Verification Corpus” (MIT-MDSVC) as a representative dataset for robust speaker verification tasks on limited amount of training data, this paper investigates which configuration and which parameters lead to the best performance of an i-vectors/PLDA based speaker verification. The i-vectors/PLDA based system achieved good performance only when the total variability matrix and the underlying PLDA matrices were trained with data belonging to the enrolled speakers. This way of training means that the system should be fully retrained when new enrolled speakers were added. The performance of the system was more sensitive to the amount of training data of the underlying PLDA matrices than to the amount of training data of the total variability matrix. Overall, the Equal Error Rate performance of the i-vectors/PLDA based system was around 1% below the performance of a GMM-UBM system on the chosen dataset. The paper presents at the end some preliminary experiments in which the utterances comprised in the CSTR VCTK corpus were used besides utterances from MIT-MDSVC for training the total variability covariance matrix and the underlying PLDA matrices.", "title": "" } ]
[ { "docid": "bb50f0ad981d3f81df6810322da7bd71", "text": "Scale-model laboratory tests of a surface effect ship (SES) conducted in a near-shore transforming wave field are discussed. Waves approaching a beach in a wave tank were used to simulate transforming sea conditions and a series of experiments were conducted with a 1:30 scale model SES traversing in heads seas. Pitch and heave motion of the vehicle were recorded in support of characterizing the seakeeping response of the vessel in developing seas. The aircushion pressure and the vessel speed were varied over a range of values and the corresponding vehicle responses were analyzed to identify functional dependence on these parameters. The results show a distinct correlation between the air-cushion pressure and the response amplitude of both pitch and heave.", "title": "" }, { "docid": "07e67cee1d0edcd3793bd2eb7520d864", "text": "Content-based image retrieval (CBIR) has attracted much attention due to the exponential growth of digital image collections that have become available in recent years. Relevance feedback (RF) in the context of search engines is a query expansion technique, which is based on relevance judgments about the top results that are initially returned for a given query. RF can be obtained directly from end users, inferred indirectly from user interactions with a result list, or even assumed (aka pseudo relevance feedback). RF information is used to generate a new query, aiming to re-focus the query towards more relevant results.\n This paper presents a methodology for use of signature based image retrieval with a user in the loop to improve retrieval performance. The significance of this study is twofold. First, it shows how to effectively use explicit RF with signature based image retrieval to improve retrieval quality and efficiency. Second, this approach provides a mechanism for end users to refine their image queries. This is an important contribution because, to date, there is no effective way to reformulate an image query; our approach provides a solution to this problem.\n Empirical experiments have been carried out to study the behaviour and optimal parameter settings of this approach. Empirical evaluations based on standard benchmarks demonstrate the effectiveness of the proposed approach in improving the performance of CBIR in terms of recall, precision, speed and scalability.", "title": "" }, { "docid": "8dc2f16d4f4ed1aa0acf6a6dca0ccc06", "text": "This is the second paper in a four-part series detailing the relative merits of the treatment strategies, clinical techniques and dental materials for the restoration of health, function and aesthetics for the dentition. In this paper the management of wear in the anterior dentition is discussed, using three case studies as illustration.", "title": "" }, { "docid": "a9f23b7a6e077d7e9ca1a3165948cdf3", "text": "In most problem-solving activities, feedback is received at the end of an action sequence. This creates a credit-assignment problem where the learner must associate the feedback with earlier actions, and the interdependencies of actions require the learner to either remember past choices of actions (internal state information) or rely on external cues in the environment (external state information) to select the right actions. We investigated the nature of explicit and implicit learning processes in the credit-assignment problem using a probabilistic sequential choice task with and without external state information. 
We found that when explicit memory encoding was dominant, subjects were faster to select the better option in their first choices than in the last choices; when implicit reinforcement learning was dominant subjects were faster to select the better option in their last choices than in their first choices. However, implicit reinforcement learning was only successful when distinct external state information was available. The results suggest the nature of learning in credit assignment: an explicit memory encoding process that keeps track of internal state information and a reinforcement-learning process that uses state information to propagate reinforcement backwards to previous choices. However, the implicit reinforcement learning process is effective only when the valences can be attributed to the appropriate states in the system – either internally generated states in the cognitive system or externally presented stimuli in the environment.", "title": "" }, { "docid": "73af8236cc76e386aa76c6d20378d774", "text": "Turkish Wikipedia Named-Entity Recognition and Text Categorization (TWNERTC) dataset is a collection of automatically categorized and annotated sentences obtained from Wikipedia. We constructed large-scale gazetteers by using a graph crawler algorithm to extract relevant entity and domain information from a semantic knowledge base, Freebase1. The constructed gazetteers contains approximately 300K entities with thousands of fine-grained entity types under 77 different domains. Since automated processes are prone to ambiguity, we also introduce two new content specific noise reduction methodologies. Moreover, we map fine-grained entity types to the equivalent four coarse-grained types, person, loc, org, misc. Eventually, we construct six different dataset versions and evaluate the quality of annotations by comparing ground truths from human annotators. We make these datasets publicly available to support studies on Turkish named-entity recognition (NER) and text categorization (TC).", "title": "" }, { "docid": "3a18b210d3e9f0f0cf883953b8fdd242", "text": "Short-term traffic forecasting is becoming more important in intelligent transportation systems. The k-nearest neighbours (kNN) method is widely used for short-term traffic forecasting. However, the self-adjustment of kNN parameters has been a problem due to dynamic traffic characteristics. This paper proposes a fully automatic dynamic procedure kNN (DP-kNN) that makes the kNN parameters self-adjustable and robust without predefined models or training for the parameters. A real-world dataset with more than one year traffic records is used to conduct experiments. The results show that DP-kNN can perform better than manually adjusted kNN and other benchmarking methods in terms of accuracy on average. This study also discusses the difference between holiday and workday traffic prediction as well as the usage of neighbour distance measurement.", "title": "" }, { "docid": "7b385edcbb0e3fa5bfffca2e1a9ecf13", "text": "A goal of runtime software-fault monitoring is to observe software behavior to determine whether it complies with its intended behavior. Monitoring allows one to analyze and recover from detected faults, providing additional defense against catastrophic failure. Although runtime monitoring has been in use for over 30 years, there is renewed interest in its application to fault detection and recovery, largely because of the increasing complexity and ubiquitous nature of software systems. 
We present taxonomy that developers and researchers can use to analyze and differentiate recent developments in runtime software fault-monitoring approaches. The taxonomy categorizes the various runtime monitoring research by classifying the elements that are considered essential for building a monitoring system, i.e., the specification language used to define properties; the monitoring mechanism that oversees the program's execution; and the event handler that captures and communicates monitoring results. After describing the taxonomy, the paper presents the classification of the software-fault monitoring systems described in the literature.", "title": "" }, { "docid": "5e50ff15898a96b9dec220331c62820d", "text": "BACKGROUND AND PURPOSE\nPatients with atrial fibrillation and previous ischemic stroke (IS)/transient ischemic attack (TIA) are at high risk of recurrent cerebrovascular events despite anticoagulation. In this prespecified subgroup analysis, we compared warfarin with edoxaban in patients with versus without previous IS/TIA.\n\n\nMETHODS\nENGAGE AF-TIMI 48 (Effective Anticoagulation With Factor Xa Next Generation in Atrial Fibrillation-Thrombolysis in Myocardial Infarction 48) was a double-blind trial of 21 105 patients with atrial fibrillation randomized to warfarin (international normalized ratio, 2.0-3.0; median time-in-therapeutic range, 68.4%) versus once-daily edoxaban (higher-dose edoxaban regimen [HDER], 60/30 mg; lower-dose edoxaban regimen, 30/15 mg) with 2.8-year median follow-up. Primary end points included all stroke/systemic embolic events (efficacy) and major bleeding (safety). Because only HDER is approved, we focused on the comparison of HDER versus warfarin.\n\n\nRESULTS\nOf 5973 (28.3%) patients with previous IS/TIA, 67% had CHADS2 (congestive heart failure, hypertension, age, diabetes, prior stroke/transient ischemic attack) >3 and 36% were ≥75 years. Compared with 15 132 without previous IS/TIA, patients with previous IS/TIA were at higher risk of both thromboembolism and bleeding (stroke/systemic embolic events 2.83% versus 1.42% per year; P<0.001; major bleeding 3.03% versus 2.64% per year; P<0.001; intracranial hemorrhage, 0.70% versus 0.40% per year; P<0.001). Among patients with previous IS/TIA, annualized intracranial hemorrhage rates were lower with HDER than with warfarin (0.62% versus 1.09%; absolute risk difference, 47 [8-85] per 10 000 patient-years; hazard ratio, 0.57; 95% confidence interval, 0.36-0.92; P=0.02). No treatment subgroup interactions were found for primary efficacy (P=0.86) or for intracranial hemorrhage (P=0.28).\n\n\nCONCLUSIONS\nPatients with atrial fibrillation with previous IS/TIA are at high risk of recurrent thromboembolism and bleeding. HDER is at least as effective and is safer than warfarin, regardless of the presence or the absence of previous IS or TIA.\n\n\nCLINICAL TRIAL REGISTRATION\nURL: http://www.clinicaltrials.gov. Unique identifier: NCT00781391.", "title": "" }, { "docid": "13ad3f52725d8417668ca12d5070482b", "text": "Decoronation of ankylosed teeth in infraposition was introduced in 1984 by Malmgren and co-workers (1). This method is used all over the world today. It has been clinically shown that the procedure preserves the alveolar width and rebuilds lost vertical bone of the alveolar ridge in growing individuals. 
The biological explanation is that the decoronated root serves as a matrix for new bone development during resorption of the root and that the lost vertical alveolar bone is rebuilt during eruption of adjacent teeth. First a new periosteum is formed over the decoronated root, allowing vertical alveolar growth. Then the interdental fibers that have been severed by the decoronation procedure are reorganized between adjacent teeth. The continued eruption of these teeth mediates marginal bone apposition via the dental-periosteal fiber complex. The erupting teeth are linked with the periosteum covering the top of the alveolar socket and indirectly via the alveolar gingival fibers, which are inserted in the alveolar crest and in the lamina propria of the interdental papilla. Both structures can generate a traction force resulting in bone apposition on top of the alveolar crest. This theoretical biological explanation is based on known anatomical features, known eruption processes and clinical observations.", "title": "" }, { "docid": "39e9fe27f70f54424df1feec453afde3", "text": "Ontology is a sub-field of Philosophy. It is the study of the nature of existence and a branch of metaphysics concerned with identifying the kinds of things that actually exists and how to describe them. It describes formally a domain of discourse. Ontology is used to capture knowledge about some domain of interest and to describe the concepts in the domain and also to express the relationships that hold between those concepts. Ontology consists of finite list of terms (or important concepts) and the relationships among the terms (or Classes of Objects). Relationships typically include hierarchies of classes. It is an explicit formal specification of conceptualization and the science of describing the kind of entities in the world and how they are related (W3C). Web Ontology Language (OWL) is a language for defining and instantiating web ontologies (a W3C Recommendation). OWL ontology includes description of classes, properties and their instances. OWL is used to explicitly represent the meaning of terms in vocabularies and the relationships between those terms. Such representation of terms and their interrelationships is called ontology. OWL has facilities for expressing meaning and semantics and the ability to represent machine interpretable content on the Web. OWL is designed for use by applications that need to process the content of information instead of just presenting information to humans. This is used for knowledge representation and also is useful to derive logical consequences from OWL formal semantics.", "title": "" }, { "docid": "f3599d23a21ca906e615025ac3715131", "text": "This literature review synthesized the existing research on cloud computing from a business perspective by investigating 60 sources and integrates their results in order to offer an overview about the existing body of knowledge. Using an established framework our results are structured according to the four dimensions following: cloud computing characteristics, adoption determinants, governance mechanisms, and business impact. This work reveals a shifting focus from technological aspects to a broader understanding of cloud computing as a new IT delivery model. There is a growing consensus about its characteristics and design principles. Unfortunately, research on factors driving or inhibiting the adoption of cloud services, as well as research investigating its business impact empirically, is still limited. 
This may be attributed to cloud computing being a rather recent research topic. Research on structures, processes and employee qualification to govern cloud services is at an early stage as well.", "title": "" }, { "docid": "363236815299994c5d155ab2c64b4387", "text": "The objective of this work is to infer the 3D shape of an object from a single image. We use sculptures as our training and test bed, as these have great variety in shape and appearance. To achieve this we build on the success of multiple view geometry (MVG) which is able to accurately provide correspondences between images of 3D objects under varying viewpoint and illumination conditions, and make the following contributions: first, we introduce a new loss function that can harness image-to-image correspondences to provide a supervisory signal to train a deep network to infer a depth map. The network is trained end-to-end by differentiating through the camera. Second, we develop a processing pipeline to automatically generate a large scale multi-view set of correspondences for training the network. Finally, we demonstrate that we can indeed obtain a depth map of a novel object from a single image for a variety of sculptures with varying shape/texture, and that the network generalises at test time to new domains (e.g. synthetic images).", "title": "" }, { "docid": "b3da0c6745883ae3da10e341abc3bf4d", "text": "Electrophysiological recording studies in the dorsocaudal region of medial entorhinal cortex (dMEC) of the rat reveal cells whose spatial firing fields show a remarkably regular hexagonal grid pattern (Fyhn et al., 2004; Hafting et al., 2005). We describe a symmetric, locally connected neural network, or spin glass model, that spontaneously produces a hexagonal grid of activity bumps on a two-dimensional sheet of units. The spatial firing fields of the simulated cells closely resemble those of dMEC cells. A collection of grids with different scales and/or orientations forms a basis set for encoding position. Simulations show that the animal's location can easily be determined from the population activity pattern. Introducing an asymmetry in the model allows the activity bumps to be shifted in any direction, at a rate proportional to velocity, to achieve path integration. Furthermore, information about the structure of the environment can be superimposed on the spatial position signal by modulation of the bump activity levels without significantly interfering with the hexagonal periodicity of firing fields. Our results support the conjecture of Hafting et al. (2005) that an attractor network in dMEC may be the source of path integration information afferent to hippocampus.", "title": "" }, { "docid": "2adde1812974f2d5d35d4c7e31ca7247", "text": "All currently available network intrusion detection (ID) systems rely upon a mechanism of data collection---passive protocol analysis---which is fundamentally flawed. In passive protocol analysis, the intrusion detection system (IDS) unobtrusively watches all traffic on the network, and scrutinizes it for patterns of suspicious activity. We outline in this paper two basic problems with the reliability of passive protocol analysis: (1) there isn't enough information on the wire on which to base conclusions about what is actually happening on networked machines, and (2) the fact that the system is passive makes it inherently \"fail-open,\" meaning that a compromise in the availability of the IDS doesn't compromise the availability of the network. 
We define three classes of attacks which exploit these fundamental problems---insertion, evasion, and denial of service attacks --and describe how to apply these three types of attacks to IP and TCP protocol analysis. We present the results of tests of the efficacy of our attacks against four of the most popular network intrusion detection systems on the market. All of the ID systems tested were found to be vulnerable to each of our attacks. This indicates that network ID systems cannot be fully trusted until they are fundamentally redesigned.", "title": "" }, { "docid": "4b7eb2b8f4d4ec135ab1978b4811eca4", "text": "This paper focuses on the problem of vision-based obstacle detection and tracking for unmanned aerial vehicle navigation. A real-time object localization and tracking strategy from monocular image sequences is developed by effectively integrating the object detection and tracking into a dynamic Kalman model. At the detection stage, the object of interest is automatically detected and localized from a saliency map computed via the image background connectivity cue at each frame; at the tracking stage, a Kalman filter is employed to provide a coarse prediction of the object state, which is further refined via a local detector incorporating the saliency map and the temporal information between two consecutive frames. Compared with existing methods, the proposed approach does not require any manual initialization for tracking, runs much faster than the state-of-the-art trackers of its kind, and achieves competitive tracking performance on a large number of image sequences. Extensive experiments demonstrate the effectiveness and superior performance of the proposed approach.", "title": "" }, { "docid": "e0e00fdfecc4a23994315579938f740e", "text": "Budget allocation in online advertising deals with distributing the campaign (insertion order) level budgets to different sub-campaigns which employ different targeting criteria and may perform differently in terms of return-on-investment (ROI). In this paper, we present the efforts at Turn on how to best allocate campaign budget so that the advertiser or campaign-level ROI is maximized. To do this, it is crucial to be able to correctly determine the performance of sub-campaigns. This determination is highly related to the action-attribution problem, i.e. to be able to find out the set of ads, and hence the sub-campaigns that provided them to a user, that an action should be attributed to. For this purpose, we employ both last-touch (last ad gets all credit) and multi-touch (many ads share the credit) attribution methodologies. We present the algorithms deployed at Turn for the attribution problem, as well as their parallel implementation on the large advertiser performance datasets. We conclude the paper with our empirical comparison of last-touch and multi-touch attribution-based budget allocation in a real online advertising setting.", "title": "" }, { "docid": "3f7c16788bceba51f0cbf0e9c9592556", "text": "Centralised patient monitoring systems are in huge demand as they not only reduce the labour work and cost but also the time of the clinical hospitals. Earlier wired communication was used but now Zigbee which is a wireless mesh network is preferred as it reduces the cost. 
Zigbee is also preferred over Bluetooth and infrared wireless communication because it is energy efficient, has low cost and long distance range (several miles). In this paper we propose wireless transmission of data between a patient unit and a centralised unit using a Zigbee module. The paper is divided into two sections. The first is the patient monitoring system for multiple patients and the second is the centralised patient monitoring system. These two systems communicate using wireless transmission technology, i.e. Zigbee. In the first section we have patient monitoring of multiple patients. Each patient's physiological parameters, such as ECG, temperature and heartbeat, are measured at their respective unit. If any physiological parameter exceeds its threshold value, an emergency alarm sounds and an LED blinks at that patient unit. This allows a doctor to read various physiological parameters of a patient in real time. The values are displayed on the LCD at each patient unit. In the same way, the physiological parameters of multiple patients are measured using dedicated sensors to form the multi-patient monitoring system. In the second section, the centralised patient monitoring system is built, in which the parameters of all patients are displayed on a central monitor using MATLAB. The ECG graph is also displayed on the central monitor using MATLAB software. The central LCD also displays parameters like heartbeat and temperature. The module is less expensive, consumes low power and has good range.", "title": "" }, { "docid": "22c749b089f0bdd1a3296f59fa9cdfc5", "text": "Inspection of printed circuit board (PCB) has been a crucial process in the electronic manufacturing industry to guarantee product quality & reliability, cut manufacturing cost and increase production. The PCB inspection involves detection of defects in the PCB and classification of those defects in order to identify the roots of defects. In this paper, all 14 types of defects are detected and classified in all possible classes using a referential inspection approach. The proposed algorithm is mainly divided into five stages: Image registration, Pre-processing, Image segmentation, Defect detection and Defect classification. The algorithm is able to perform inspection even when the captured test image is rotated, scaled and translated with respect to the template image, which makes the algorithm rotation, scale and translation invariant. The novelty of the algorithm lies in its robustness in analyzing a defect in its different possible appearances and severities. In addition, the algorithm takes only 2.528 s to inspect a PCB image. The efficacy of the proposed algorithm is verified by conducting experiments on different PCB images, and the results show that the proposed algorithm is suitable for automatic visual inspection of PCBs.", "title": "" }, { "docid": "aff804f90fd1ffba5ee8c06e96ddd11b", "text": "The area of machine learning has made considerable progress over the past decade, enabled by the widespread availability of large datasets, as well as by improved algorithms and models. 
Given the large computational demands of machine learning workloads, parallelism, implemented either through single-node concurrency or through multi-node distribution, has been a third key ingredient to advances in machine learning.\n The goal of this tutorial is to provide the audience with an overview of standard distribution techniques in machine learning, with an eye towards the intriguing trade-offs between synchronization and communication costs of distributed machine learning algorithms, on the one hand, and their convergence, on the other.The tutorial will focus on parallelization strategies for the fundamental stochastic gradient descent (SGD) algorithm, which is a key tool when training machine learning models, from classical instances such as linear regression, to state-of-the-art neural network architectures.\n The tutorial will describe the guarantees provided by this algorithm in the sequential case, and then move on to cover both shared-memory and message-passing parallelization strategies, together with the guarantees they provide, and corresponding trade-offs. The presentation will conclude with a broad overview of ongoing research in distributed and concurrent machine learning. The tutorial will assume no prior knowledge beyond familiarity with basic concepts in algebra and analysis.", "title": "" }, { "docid": "9706819b5e4805b41e3907a7b1688578", "text": "While advances in computing resources have made processing enormous amounts of data possible, human ability to identify patterns in such data has not scaled accordingly. Thus, efficient computational methods for condensing and simplifying data are becoming vital for extracting actionable insights. In particular, while data summarization techniques have been studied extensively, only recently has summarizing interconnected data, or graphs, become popular. This survey is a structured, comprehensive overview of the state-of-the-art methods for summarizing graph data. We first broach the motivation behind and the challenges of graph summarization. We then categorize summarization approaches by the type of graphs taken as input and further organize each category by core methodology. Finally, we discuss applications of summarization on real-world graphs and conclude by describing some open problems in the field.", "title": "" } ]
scidocsrr
ebe0ad792e0d01a575c7b500a962f2b5
Prevalence and Predictors of Video Game Addiction: A Study Based on a National Representative Sample of Gamers
[ { "docid": "39682fc0385d7bc85267479bf20326b3", "text": "This study assessed how problem video game playing (PVP) varies with game type, or \"genre,\" among adult video gamers. Participants (n=3,380) were adults (18+) who reported playing video games for 1 hour or more during the past week and completed a nationally representative online survey. The survey asked about characteristics of video game use, including titles played in the past year and patterns of (problematic) use. Participants self-reported the extent to which characteristics of PVP (e.g., playing longer than intended) described their game play. Five percent of our sample reported moderate to extreme problems. PVP was concentrated among persons who reported playing first-person shooter, action adventure, role-playing, and gambling games most during the past year. The identification of a subset of game types most associated with problem use suggests new directions for research into the specific design elements and reward mechanics of \"addictive\" video games and those populations at greatest risk of PVP with the ultimate goal of better understanding, preventing, and treating this contemporary mental health problem.", "title": "" } ]
[ { "docid": "d90add899632bab1c5c2637c7080f717", "text": "Software Testing plays a important role in Software development because it can minimize the development cost. We Propose a Technique for Test Sequence Generation using UML Model Sequence Diagram.UML models give a lot of information that should not be ignored in testing. In This paper main features extract from Sequence Diagram after that we can write the Java Source code for that Features According to ModelJunit Library. ModelJUnit is a extended library of JUnit Library. By using that Source code we can Generate Test Case Automatic and Test Coverage. This paper describes a systematic Test Case Generation Technique performed on model based testing (MBT) approaches By Using Sequence Diagram.", "title": "" }, { "docid": "5921f0049596d52bd3aea33e4537d026", "text": "Various lines of evidence indicate that men generally experience greater sexual arousal (SA) to erotic stimuli than women. Yet, little is known regarding the neurobiological processes underlying such a gender difference. To investigate this issue, functional magnetic resonance imaging was used to compare the neural correlates of SA in 20 male and 20 female subjects. Brain activity was measured while male and female subjects were viewing erotic film excerpts. Results showed that the level of perceived SA was significantly higher in male than in female subjects. When compared to viewing emotionally neutral film excerpts, viewing erotic film excerpts was associated, for both genders, with bilateral blood oxygen level dependent (BOLD) signal increases in the anterior cingulate, medial prefrontal, orbitofrontal, insular, and occipitotemporal cortices, as well as in the amygdala and the ventral striatum. Only for the group of male subjects was there evidence of a significant activation of the thalamus and hypothalamus, a sexually dimorphic area of the brain known to play a pivotal role in physiological arousal and sexual behavior. When directly compared between genders, hypothalamic activation was found to be significantly greater in male subjects. Furthermore, for male subjects only, the magnitude of hypothalamic activation was positively correlated with reported levels of SA. These findings reveal the existence of similarities and dissimilarities in the way the brain of both genders responds to erotic stimuli. They further suggest that the greater SA generally experienced by men, when viewing erotica, may be related to the functional gender difference found here with respect to the hypothalamus.", "title": "" }, { "docid": "51979e7cca3940cb1629f58feb8712b4", "text": "OBJECTIVES\nThe goal of this survey is to discuss the impact of the growing availability of electronic health record (EHR) data on the evolving field of Clinical Research Informatics (CRI), which is the union of biomedical research and informatics.\n\n\nRESULTS\nMajor challenges for the use of EHR-derived data for research include the lack of standard methods for ensuring that data quality, completeness, and provenance are sufficient to assess the appropriateness of its use for research. 
Areas that need continued emphasis include methods for integrating data from heterogeneous sources, guidelines (including explicit phenotype definitions) for using these data in both pragmatic clinical trials and observational investigations, strong data governance to better understand and control quality of enterprise data, and promotion of national standards for representing and using clinical data.\n\n\nCONCLUSIONS\nThe use of EHR data has become a priority in CRI. Awareness of underlying clinical data collection processes will be essential in order to leverage these data for clinical research and patient care, and will require multi-disciplinary teams representing clinical research, informatics, and healthcare operations. Considerations for the use of EHR data provide a starting point for practical applications and a CRI research agenda, which will be facilitated by CRI's key role in the infrastructure of a learning healthcare system.", "title": "" }, { "docid": "c6576bb8585fff4a9ac112943b1e0785", "text": "Three-dimensional (3D) kinematic models are widely-used in videobased figure tracking. We show that these models can suffer from singularities when motion is directed along the viewing axis of a single camera. The single camera case is important because it arises in many interesting applications, such as motion capture from movie footage, video surveillance, and vision-based user-interfaces. We describe a novel two-dimensional scaled prismatic model (SPM) for figure registration. In contrast to 3D kinematic models, the SPM has fewer singularity problems and does not require detailed knowledge of the 3D kinematics. We fully characterize the singularities in the SPM and demonstrate tracking through singularities using synthetic and real examples. We demonstrate the application of our model to motion capture from movies. Fred Astaire is tracked in a clip from the film “Shall We Dance”. We also present the use of monocular hand tracking in a 3D user-interface. These results demonstrate the benefits of the SPM in tracking with a single source of video.", "title": "" }, { "docid": "24b45f8f41daccf4bddb45f0e2b3d057", "text": "Risk assessment is a systematic process for integrating professional judgments about relevant risk factors, their relative significance and probable adverse conditions and/or events leading to identification of auditable activities (IIA, 1995, SIAS No. 9). Internal auditors utilize risk measures to allocate critical audit resources to compliance, operational, or financial activities within the organization (Colbert, 1995). In information rich environments, risk assessment involves recognizing patterns in the data, such as complex data anomalies and discrepancies, that perhaps conceal one or more error or hazard conditions (e.g. Coakley and Brown, 1996; Bedard and Biggs, 1991; Libby, 1985). This research investigates whether neural networks can help enhance auditors' risk assessments. Neural networks, an emerging artificial intelligence technology, are a powerful non-linear optimization and pattern recognition tool (Haykin, 1994; Bishop, 1995). Several successful, real-world business neural network application decision aids have already been built (Burger and Traver, 1996). Neural network modeling may prove invaluable in directing internal auditor attention to those aspects of financial, operating, and compliance data most informative of high-risk audit areas, thus enhancing audit efficiency and effectiveness. 
This paper defines risk in an internal auditing context, describes contemporary approaches to performing risk assessments, provides an overview of the backpropagation neural network architecture, outlines the methodology adopted for conducting this research project including a Delphi study and comparison with statistical approaches, and presents preliminary results, which indicate that internal auditors could benefit from using neural network technology for assessing risk. Copyright  1999 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "eb962e14f34ea53dec660dfe304756b0", "text": "It is difficult to train a personalized task-oriented dialogue system because the data collected from each individual is often insufficient. Personalized dialogue systems trained on a small dataset can overfit and make it difficult to adapt to different user needs. One way to solve this problem is to consider a collection of multiple users’ data as a source domain and an individual user’s data as a target domain, and to perform a transfer learning from the source to the target domain. By following this idea, we propose the “PETAL” (PErsonalized Task-oriented diALogue), a transfer learning framework based on POMDP to learn a personalized dialogue system. The system first learns common dialogue knowledge from the source domain and then adapts this knowledge to the target user. This framework can avoid the negative transfer problem by considering differences between source and target users. The policy in the personalized POMDP can learn to choose different actions appropriately for different users. Experimental results on a real-world coffee-shopping data and simulation data show that our personalized dialogue system can choose different optimal actions for different users, and thus effectively improve the dialogue quality under the personalized setting.", "title": "" }, { "docid": "354b35bb1c51442a7e855824ab7b91e0", "text": "Educational games and intelligent tutoring systems (ITS) both support learning by doing, although often in different ways. The current classroom experiment compared a popular commercial game for equation solving, DragonBox and a research-based ITS, Lynnette with respect to desirable educational outcomes. The 190 participating 7th and 8th grade students were randomly assigned to work with either system for 5 class periods. We measured out-of-system transfer of learning with a paper and pencil pre- and post-test of students’ equation-solving skill. We measured enjoyment and accuracy of self-assessment with a questionnaire. The students who used DragonBox solved many more problems and enjoyed the experience more, but the students who used Lynnette performed significantly better on the post-test. Our analysis of the design features of both systems suggests possible explanations and spurs ideas for how the strengths of the two systems might be combined. The study shows that intuitions about what works, educationally, can be fallible. Therefore, there is no substitute for rigorous empirical evaluation of educational technologies.", "title": "" }, { "docid": "50c0ebb4a984ea786eb86af9849436f3", "text": "We systematically reviewed school-based skills building behavioural interventions for the prevention of sexually transmitted infections. References were sought from 15 electronic resources, bibliographies of systematic reviews/included studies and experts. Two authors independently extracted data and quality-assessed studies. 
Fifteen randomized controlled trials (RCTs), conducted in the United States, Africa or Europe, met the inclusion criteria. They were heterogeneous in terms of intervention length, content, intensity and providers. Data from 12 RCTs passed quality assessment criteria and provided evidence of positive changes in non-behavioural outcomes (e.g. knowledge and self-efficacy). Intervention effects on behavioural outcomes, such as condom use, were generally limited and did not demonstrate a negative impact (e.g. earlier sexual initiation). Beneficial effect on at least one, but never all behavioural outcomes assessed was reported by about half the studies, but this was sometimes limited to a participant subgroup. Sexual health education for young people is important as it increases knowledge upon which to make decisions about sexual behaviour. However, a number of factors may limit intervention impact on behavioural outcomes. Further research could draw on one of the more effective studies reviewed and could explore the effectiveness of 'booster' sessions as young people move from adolescence to young adulthood.", "title": "" }, { "docid": "e3709e9df325e7a7927e882a40222b26", "text": "In this paper, we present a system that automatically extracts the pros and cons from online reviews. Although many approaches have been developed for extracting opinions from text, our focus here is on extracting the reasons of the opinions, which may themselves be in the form of either fact or opinion. Leveraging online review sites with author-generated pros and cons, we propose a system for aligning the pros and cons to their sentences in review texts. A maximum entropy model is then trained on the resulting labeled set to subsequently extract pros and cons from online review sites that do not explicitly provide them. Our experimental results show that our resulting system identifies pros and cons with 66% precision and 76% recall.", "title": "" }, { "docid": "e771009a5e1810c45db20ed70b314798", "text": "BACKGROUND\nTo identify sources of race/ethnic differences related to post-traumatic stress disorder (PTSD), we compared trauma exposure, risk for PTSD among those exposed to trauma, and treatment-seeking among Whites, Blacks, Hispanics and Asians in the US general population.\n\n\nMETHOD\nData from structured diagnostic interviews with 34 653 adult respondents to the 2004-2005 wave of the National Epidemiologic Survey on Alcohol and Related Conditions (NESARC) were analysed.\n\n\nRESULTS\nThe lifetime prevalence of PTSD was highest among Blacks (8.7%), intermediate among Hispanics and Whites (7.0% and 7.4%) and lowest among Asians (4.0%). Differences in risk for trauma varied by type of event. Whites were more likely than the other groups to have any trauma, to learn of a trauma to someone close, and to learn of an unexpected death, but Blacks and Hispanics had higher risk of child maltreatment, chiefly witnessing domestic violence, and Asians, Black men, and Hispanic women had higher risk of war-related events than Whites. Among those exposed to trauma, PTSD risk was slightly higher among Blacks [adjusted odds ratio (aOR) 1.22] and lower among Asians (aOR 0.67) compared with Whites, after adjustment for characteristics of trauma exposure. All minority groups were less likely to seek treatment for PTSD than Whites (aOR range: 0.39-0.61), and fewer than half of minorities with PTSD sought treatment (range: 32.7-42.0%).\n\n\nCONCLUSIONS\nWhen PTSD affects US race/ethnic minorities, it is usually untreated. 
Large disparities in treatment indicate a need for investment in accessible and culturally sensitive treatment options.", "title": "" }, { "docid": "9a38b18bd69d17604b6e05b9da450c2d", "text": "New invention of advanced technology, enhanced capacity of storage media, maturity of information technology and popularity of social media, business intelligence and Scientific invention, produces huge amount of data which made ample set of information that is responsible for birth of new concept well known as big data. Big data analytics is the process of examining large amounts of data. The analysis is done on huge amount of data which is structure, semi structure and unstructured. In big data, data is generated at exponentially for reason of increase use of social media, email, document and sensor data. The growth of data has affected all fields, whether it is business sector or the world of science. In this paper, the process of system is reviewed for managing &quot;Big Data&quot; and today&apos;s activities on big data tools and techniques.", "title": "" }, { "docid": "51c14998480e2b1063b727bf3e4f4ad0", "text": "With the rapid growth of multimedia information, the font library has become a part of people’s work life. Compared to the Western alphabet language, it is difficult to create new font due to huge quantity and complex shape. At present, most of the researches on automatic generation of fonts use traditional methods requiring a large number of rules and parameters set by experts, which are not widely adopted. This paper divides Chinese characters into strokes and generates new font strokes by fusing the styles of two existing font strokes and assembling them into new fonts. This approach can effectively improve the efficiency of font generation, reduce the costs of designers, and is able to inherit the style of existing fonts. In the process of learning to generate new fonts, the popular of deep learning areas, Generative Adversarial Nets has been used. Compared with the traditional method, it can generate higher quality fonts without well-designed and complex loss function.", "title": "" }, { "docid": "3692954147d1a60fb683001bd379047f", "text": "OBJECTIVE\nThe current study aimed to compare the Philadelphia collar and an open-design cervical collar with regard to user satisfaction and cervical range of motion in asymptomatic adults.\n\n\nDESIGN\nSeventy-two healthy subjects (36 women, 36 men) aged 18 to 29 yrs were recruited for this study. Neck movements, including active flexion, extension, right/left lateral flexion, and right/left axial rotation, were assessed in each subject under three conditions--without wearing a collar and while wearing two different cervical collars--using a dual digital inclinometer. Subject satisfaction was assessed using a five-item self-administered questionnaire.\n\n\nRESULTS\nBoth Philadelphia and open-design collars significantly reduced cervical motions (P < 0.05). Compared with the Philadelphia collar, the open-design collar more greatly reduced cervical motions in three planes and the differences were statistically significant except for limiting flexion. 
Satisfaction scores for Philadelphia and open-design collars were 15.89 (3.87) and 19.94 (3.11), respectively.\n\n\nCONCLUSION\nBased on the data of the 72 subjects presented in this study, the open-design collar adequately immobilized the cervical spine as a semirigid collar and was considered cosmetically acceptable, at least for subjects aged younger than 30 yrs.", "title": "" }, { "docid": "6d2e7ce04b96a98cc2828dc33c111bd1", "text": "This study explores how customer relationship management (CRM) systems support customer knowledge creation processes [48], including socialization, externalization, combination and internalization. CRM systems are categorized as collaborative, operational and analytical. An analysis of CRM applications in three organizations reveals that analytical systems strongly support the combination process. Collaborative systems provide the greatest support for externalization. Operational systems facilitate socialization with customers, while collaborative systems are used for socialization within an organization. Collaborative and analytical systems both support the internalization process by providing learning opportunities. Three-way interactions among CRM systems, types of customer knowledge, and knowledge creation processes are explored. 2013 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "2c2e57a330157cf28e4d6d6466132432", "text": "This paper presents an automatic method to track soccer players in soccer video recorded from a single camera where the occurrence of pan-tilt-zoom can take place. The automatic object tracking is intended to support texture extraction in a free viewpoint video authoring application for soccer video. To ensure that the identity of the tracked object can be correctly obtained, background segmentation is performed and automatically removes commercial billboards whenever it overlaps with the soccer player. Next, object tracking is performed by an attribute matching algorithm for all objects in the temporal domain to find and maintain the correlation of the detected objects. The attribute matching process finds the best match between two objects in different frames according to their pre-determined attributes: position, size, dominant color and motion information. Utilizing these attributes, the experimental results show that the tracking process can handle occlusion problems such as occlusion involving more than three objects and occluded objects with similar color and moving direction, as well as correctly identify objects in the presence of camera movements. key words: free viewpoint, attribute matching, automatic object tracking, soccer video", "title": "" }, { "docid": "02dce03a41dbe6734cd3ce945db6fcb8", "text": "Antigen-presenting, major histocompatibility complex (MHC) class II-rich dendritic cells are known to arise from bone marrow. However, marrow lacks mature dendritic cells, and substantial numbers of proliferating less-mature cells have yet to be identified. The methodology for inducing dendritic cell growth that was recently described for mouse blood now has been modified to MHC class II-negative precursors in marrow. A key step is to remove the majority of nonadherent, newly formed granulocytes by gentle washes during the first 2-4 d of culture. This leaves behind proliferating clusters that are loosely attached to a more firmly adherent \"stroma.\" At days 4-6 the clusters can be dislodged, isolated by 1-g sedimentation, and upon reculture, large numbers of dendritic cells are released. 
The latter are readily identified on the basis of their distinct cell shape, ultrastructure, and repertoire of antigens, as detected with a panel of monoclonal antibodies. The dendritic cells express high levels of MHC class II products and act as powerful accessory cells for initiating the mixed leukocyte reaction. Neither the clusters nor mature dendritic cells are generated if macrophage colony-stimulating factor rather than granulocyte/macrophage colony-stimulating factor (GM-CSF) is applied. Therefore, GM-CSF generates all three lineages of myeloid cells (granulocytes, macrophages, and dendritic cells). Since > 5 x 10(6) dendritic cells develop in 1 wk from precursors within the large hind limb bones of a single animal, marrow progenitors can act as a major source of dendritic cells. This feature should prove useful for future molecular and clinical studies of this otherwise trace cell type.", "title": "" }, { "docid": "3192a76e421d37fbe8619a3bc01fb244", "text": "• Develop and implement an internally consistent set of goals and functional policies (this is, a solution to the agency problem) • These internally consistent set of goals and policies aligns the firm’s strengths and weaknesses with external (industry) opportunities and threats (SWOT) in a dynamic balance • The firm’s strategy has to be concerned with the exploitation of its “distinctive competences” (early reference to RBV)", "title": "" }, { "docid": "ac6d474171bfe6bc2457bfb3674cc5a6", "text": "The energy consumption problem in the mobile industry has become crucial. For the sustainable growth of the mobile industry, energy efficiency (EE) of wireless systems has to be significantly improved. Plenty of efforts have been invested in achieving green wireless communications. This article provides an overview of network energy saving studies currently conducted in the 3GPP LTE standard body. The aim is to gain a better understanding of energy consumption and identify key EE research problems in wireless access networks. Classifying network energy saving technologies into the time, frequency, and spatial domains, the main solutions in each domain are described briefly. As presently the attention is mainly focused on solutions involving a single radio base station, we believe network solutions involving multiple networks/systems will be the most promising technologies toward green wireless access networks.", "title": "" }, { "docid": "bab429bf74fe4ce3f387a716964a867f", "text": "We propose a new regularization method based on virtual adversarial loss: a new measure of local smoothness of the conditional label distribution given input. Virtual adversarial loss is defined as the robustness of the conditional label distribution around each input data point against local perturbation. Unlike adversarial training, our method defines the adversarial direction without label information and is hence applicable to semi-supervised learning. Because the directions in which we smooth the model are only \"virtually\" adversarial, we call our method virtual adversarial training (VAT). The computational cost of VAT is relatively low. For neural networks, the approximated gradient of virtual adversarial loss can be computed with no more than two pairs of forward- and back-propagations. In our experiments, we applied VAT to supervised and semi-supervised learning tasks on multiple benchmark datasets. 
With a simple enhancement of the algorithm based on the entropy minimization principle, our VAT achieves state-of-the-art performance for semi-supervised learning tasks on SVHN and CIFAR-10.", "title": "" }, { "docid": "a4f960905077291bd6da9359fd803a9c", "text": "In this paper, we propose a new framework named Data Augmentation for Domain-Invariant Learning (DADIL). In the field of manufacturing, labeling sensor data as normal or abnormal is helpful for improving productivity and avoiding problems. In practice, however, the status of equipment may change due to changes in maintenance and settings (referred to as a “domain change”), which makes it difficult to collect sufficient homogeneous data. Therefore, it is important to develop a discriminative model that can use a limited number of data samples. Moreover, real data might contain noise that could have a negative impact. We focus on the following aspect: The difficulties of a domain change are also due to the limited data. Although the number of data samples in each domain is low, we make use of data augmentation which is a promising way to mitigate the influence of noise and enhance the performance of discriminative models. In our data augmentation method, we generate “pseudo data” by combining the data for each label regardless of the domain and extract a domain-invariant representation for classification. We experimentally show that this representation is effective for obtaining the label precisely using real datasets.", "title": "" } ]
scidocsrr
7ad2a261e3f57a43e48e1cc309174cfc
Degeneration in VAE: in the Light of Fisher Information Loss
[ { "docid": "ff59e2a5aa984dec7805a4d9d55e69e5", "text": "We introduce Natural Neural Networks, a novel family of algorithms that speed up convergence by adapting their internal representation during training to improve conditioning of the Fisher matrix. In particular, we show a specific example that employs a simple and efficient reparametrization of the neural network weights by implicitly whitening the representation obtained at each layer, while preserving the feed-forward computation of the network. Such networks can be trained efficiently via the proposed Projected Natural Gradient Descent algorithm (PRONG), which amortizes the cost of these reparametrizations over many parameter updates and is closely related to the Mirror Descent online learning algorithm. We highlight the benefits of our method on both unsupervised and supervised learning tasks, and showcase its scalability by training on the large-scale ImageNet Challenge dataset.", "title": "" } ]
[ { "docid": "03bd5c0e41aa5948a5545fa3fca75bc2", "text": "In the application of lead-acid series batteries, the voltage imbalance of each battery should be considered. Therefore, additional balancer circuits must be integrated into the battery. An active battery balancing circuit with an auxiliary storage can employ a sequential battery imbalance detection algorithm by comparing the voltage of a battery and auxiliary storage. The system is being in balance if the battery voltage imbalance is less than 10mV/cell. In this paper, a new algorithm is proposed so that the battery voltage balancing time can be improved. The battery balancing system is based on the LTC3305 working principle. The simulation verifies that the proposed algorithm can achieve permitted battery voltage imbalance faster than that of the previous algorithm.", "title": "" }, { "docid": "1dfe7a3e875436db76496931db34c7db", "text": "Biologically detailed single neuron and network models are important for understanding how ion channels, synapses and anatomical connectivity underlie the complex electrical behavior of the brain. While neuronal simulators such as NEURON, GENESIS, MOOSE, NEST, and PSICS facilitate the development of these data-driven neuronal models, the specialized languages they employ are generally not interoperable, limiting model accessibility and preventing reuse of model components and cross-simulator validation. To overcome these problems we have used an Open Source software approach to develop NeuroML, a neuronal model description language based on XML (Extensible Markup Language). This enables these detailed models and their components to be defined in a standalone form, allowing them to be used across multiple simulators and archived in a standardized format. Here we describe the structure of NeuroML and demonstrate its scope by converting into NeuroML models of a number of different voltage- and ligand-gated conductances, models of electrical coupling, synaptic transmission and short-term plasticity, together with morphologically detailed models of individual neurons. We have also used these NeuroML-based components to develop an highly detailed cortical network model. NeuroML-based model descriptions were validated by demonstrating similar model behavior across five independently developed simulators. Although our results confirm that simulations run on different simulators converge, they reveal limits to model interoperability, by showing that for some models convergence only occurs at high levels of spatial and temporal discretisation, when the computational overhead is high. Our development of NeuroML as a common description language for biophysically detailed neuronal and network models enables interoperability across multiple simulation environments, thereby improving model transparency, accessibility and reuse in computational neuroscience.", "title": "" }, { "docid": "4cd7f19d0413f9bab1a2cda5a5b7a9a4", "text": "Web-based learning plays a vital role in the modern education system, where different technologies are being emerged to enhance this E-learning process. Therefore virtual and online laboratories are gaining popularity due to its easy implementation and accessibility worldwide. These types of virtual labs are useful where the setup of the actual laboratory is complicated due to several factors such as high machinery or hardware cost. 
This paper presents a very efficient method of building a model using JavaScript Web Graphics Library with HTML5 enabled and having controllable features inbuilt. This type of program is free from any web browser plug-ins or application and also server independent. Proprietary software has always been a bottleneck in the development of such platforms. This approach rules out this issue and can easily applicable. Here the framework has been discussed and neatly elaborated with an example of a simplified robot configuration.", "title": "" }, { "docid": "97a7c48145d682a9ed45109d83c82a73", "text": "We introduce a large dataset of narrative texts and questions about these texts, intended to be used in a machine comprehension task that requires reasoning using commonsense knowledge. Our dataset complements similar datasets in that we focus on stories about everyday activities, such as going to the movies or working in the garden, and that the questions require commonsense knowledge, or more specifically, script knowledge, to be answered. We show that our mode of data collection via crowdsourcing results in a substantial amount of such inference questions. The dataset forms the basis of a shared task on commonsense and script knowledge organized at SemEval 2018 and provides challenging test cases for the broader natural language understanding community.", "title": "" }, { "docid": "e60c295d02b87d4c88e159a3343e0dcb", "text": "In 2163 personally interviewed female twins from a population-based registry, the pattern of age at onset and comorbidity of the simple phobias (animal and situational)--early onset and low rates of comorbidity--differed significantly from that of agoraphobia--later onset and high rates of comorbidity. Consistent with an inherited \"phobia proneness\" but not a \"social learning\" model of phobias, the familial aggregation of any phobia, agoraphobia, social phobia, and animal phobia appeared to result from genetic and not from familial-environmental factors, with estimates of heritability of liability ranging from 30% to 40%. The best-fitting multivariate genetic model indicated the existence of genetic and individual-specific environmental etiologic factors common to all four phobia subtypes and others specific for each of the individual subtypes. This model suggested that (1) environmental experiences that predisposed to all phobias were most important for agoraphobia and social phobia and relatively unimportant for the simple phobias, (2) environmental experiences that uniquely predisposed to only one phobia subtype had a major impact on simple phobias, had a modest impact on social phobia, and were unimportant for agoraphobia, and (3) genetic factors that predisposed to all phobias were most important for animal phobia and least important for agoraphobia. Simple phobias appear to arise from the joint effect of a modest genetic vulnerability and phobia-specific traumatic events in childhood, while agoraphobia and, to a somewhat lesser extent, social phobia result from the combined effect of a slightly stronger genetic influence and nonspecific environmental experiences.", "title": "" }, { "docid": "d4cdea26217e90002a3c4522124872a2", "text": "Recently, several methods for single image super-resolution(SISR) based on deep neural networks have obtained high performance with regard to reconstruction accuracy and computational performance. This paper details the methodology and results of the New Trends in Image Restoration and Enhancement (NTIRE) challenge. 
The task of this challenge is to restore rich details (high frequencies) in a high resolution image for a single low resolution input image based on a set of prior examples with low and corresponding high resolution images. The challenge has two tracks. We present a super-resolution (SR) method, which uses three losses assigned with different weights to be regarded as optimization target. Meanwhile, the residual blocks are also used for obtaining significant improvement in the evaluation. The final model consists of 9 weight layers with four residual blocks and reconstructs the low resolution image with three color channels simultaneously, which shows better performance on these two tracks and benchmark datasets.", "title": "" }, { "docid": "e73de1e6f191fef625f75808d7fbfbb1", "text": "Colon cancer is one of the most prevalent diseases across the world. Numerous epidemiological studies indicate that diets rich in fruit, such as berries, provide significant health benefits against several types of cancer, including colon cancer. The anticancer activities of berries are attributed to their high content of phytochemicals and to their relevant antioxidant properties. In vitro and in vivo studies have demonstrated that berries and their bioactive components exert therapeutic and preventive effects against colon cancer by the suppression of inflammation, oxidative stress, proliferation and angiogenesis, through the modulation of multiple signaling pathways such as NF-κB, Wnt/β-catenin, PI3K/AKT/PKB/mTOR, and ERK/MAPK. Based on the exciting outcomes of preclinical studies, a few berries have advanced to the clinical phase. A limited number of human studies have shown that consumption of berries can prevent colorectal cancer, especially in patients at high risk (familial adenopolyposis or aberrant crypt foci, and inflammatory bowel diseases). In this review, we aim to highlight the findings of berries and their bioactive compounds in colon cancer from in vitro and in vivo studies, both on animals and humans. Thus, this review could be a useful step towards the next phase of berry research in colon cancer.", "title": "" }, { "docid": "8f24898cb21a259d9260b67202141d49", "text": "PROBLEM\nHow can human contributions to accidents be reconstructed? Investigators can easily take the position a of retrospective outsider, looking back on a sequence of events that seems to lead to an inevitable outcome, and pointing out where people went wrong. This does not explain much, however, and may not help prevent recurrence.\n\n\nMETHOD AND RESULTS\nThis paper examines how investigators can reconstruct the role that people contribute to accidents in light of what has recently become known as the new view of human error. The commitment of the new view is to move controversial human assessments and actions back into the flow of events of which they were part and which helped bring them forth, to see why assessments and actions made sense to people at the time. The second half of the paper addresses one way in which investigators can begin to reconstruct people's unfolding mindsets.\n\n\nIMPACT ON INDUSTRY\nIn an era where a large portion of accidents are attributed to human error, it is critical to understand why people did what they did, rather than judging them for not doing what we now know they should have done. 
This paper helps investigators avoid the traps of hindsight by presenting a method with which investigators can begin to see how people's actions and assessments actually made sense at the time.", "title": "" }, { "docid": "f9f26d8ff95aff0a361fcb321e57a779", "text": "A novel algorithm for the detection of underwater man-made objects in forwardlooking sonar imagery is proposed. The algorithm takes advantage of the integral-image representation to quickly compute features, and progressively reduces the computational load by working on smaller portions of the image along the detection process phases. By adhering to the proposed scheme, real-time detection on sonar data onboard an autonomous vehicle is made possible. The proposed method does not require training data, as it dynamically takes into account environmental characteristics of the sensed sonar data. The proposed approach has been implemented and integrated into the software system of the Gemellina autonomous surface vehicle, and is able to run in real time. The validity of the proposed approach is demonstrated on real experiments carried out at sea with the Gemellina autonomous surface vehicle.", "title": "" }, { "docid": "d0a6ca9838f8844077fdac61d1d75af1", "text": "Depth-first search, as developed by Tarjan and coauthors, is a fundamental technique of efficient algorithm design for graphs [23]. This note presents depth-first search algorithms for two basic problems, strong and biconnected components. Previous algorithms either compute auxiliary quantities based on the depth-first search tree (e.g., LOWPOINT values) or require two passes. We present one-pass algorithms that only maintain a representation of the depth-first search path. This gives a simplified view of depth-first search without sacrificing efficiency. In greater detail, most depth-first search algorithms (e.g., [23,10,11]) compute so-called LOWPOINT values that are defined in terms of the depth-first search tree. Because of the success of this method LOWPOINT values have become almost synonymous with depth-first search. LOWPOINT values are regarded as crucial in the strong and biconnected component algorithms, e.g., [14, pp. 94, 514]. Tarjan’s LOWPOINT method for strong components is presented in texts [1, 7,14,16,17,21]. The strong component algorithm of Kosaraju and Sharir [22] is often viewed as conceptu-", "title": "" }, { "docid": "e7e8fe5532d1cb32a7233bc4c99ac3b8", "text": "The concept of network slicing opens the possibilities to address the complex requirements of multi-tenancy in 5G. To this end, SDN/NFV can act as technology enabler. This paper presents a centralised and dynamic approach for creating and provisioning network slices for virtual network operators' consumption to offer services to their end customers, focusing on an SDN wireless backhaul use case. We demonstrate our approach for dynamic end-to-end slice and service provisioning in a testbed.", "title": "" }, { "docid": "f75b11bc21dc711b76a7a375c2a198d3", "text": "In many application areas like e-science and data-warehousing detailed information about the origin of data is required. This kind of information is often referred to as data provenance or data lineage. The provenance of a data item includes information about the processes and source data items that lead to its creation and current representation. The diversity of data representation models and application domains has lead to a number of more or less formal definitions of provenance. 
Most of them are limited to a special application domain, data representation model or data processing facility. Not surprisingly, the associated implementations are also restricted to some application domain and depend on a special data model. In this paper we give a survey of data provenance models and prototypes, present a general categorization scheme for provenance models and use this categorization scheme to study the properties of the existing approaches. This categorization enables us to distinguish between different kinds of provenance information and could lead to a better understanding of provenance in general. Besides the categorization of provenance types, it is important to include the storage, transformation and query requirements for the different kinds of provenance information and application domains in our considerations. The analysis of existing approaches will assist us in revealing open research problems in the area of data provenance.", "title": "" }, { "docid": "d1a4abaa57f978858edf0d7b7dc506ba", "text": "Abstraction in imagery results from the strategic simplification and elimination of detail to clarify the visual structure of the depicted shape. It is a mainstay of artistic practice and an important ingredient of effective visual communication. We develop a computational method for the abstract depiction of 2D shapes. Our approach works by organizing the shape into parts using a new synthesis of holistic features of the part shape, local features of the shape boundary, and global aspects of shape organization. Our abstractions are new shapes with fewer and clearer parts.", "title": "" }, { "docid": "9cf8a2f73a906f7dc22c2d4fbcf8fa6b", "text": "In this paper the effect of spoilers on the aerodynamic characteristics of an airfoil was observed by CFD. NACA 2415 was chosen as the experimental airfoil, and the spoiler was extended from five different positions based on the chord length C. The airfoil section is designed with a spoiler extended at an angle of 7 degrees with the horizontal. The spoiler extends to 0.15C. The geometry of the 2-D airfoil with and without the spoiler was designed in GAMBIT. The numerical simulation was performed by ANSYS Fluent to observe the effect of spoiler position on the aerodynamic characteristics of this particular airfoil. The results obtained from the computational process were plotted on graphs, and the conceptual assumptions were verified: the lift is reduced and the drag is increased, which obeys the basic function of a spoiler. I. INTRODUCTION An airplane wing has a special shape called an airfoil. As a wing moves through air, the air is split and passes above and below the wing. The wing's upper surface is shaped so the air rushing over the top speeds up and stretches out. This decreases the air pressure above the wing. The air flowing below the wing moves in a straighter line, so its speed and air pressure remain the same. Since high air pressure always moves toward low air pressure, the air below the wing pushes upward toward the air above the wing. The wing is in the middle, and the whole wing is "lifted". The faster an airplane moves, the more lift there is, and when the force of lift is greater than the force of gravity, the airplane is able to fly. [1] A spoiler, sometimes called a lift dumper, is a device intended to reduce lift in an aircraft. Spoilers are plates on the top surface of a wing which can be extended upward into the airflow and spoil it. 
By doing so, the spoiler creates a carefully controlled stall over the portion of the wing behind it, greatly reducing the lift of that wing section. Spoilers are designed to reduce lift also making considerable increase in drag. Spoilers increase drag and reduce lift on the wing. If raised on only one wing, they aid roll control, causing that wing to drop. If the spoilers rise symmetrically in flight, the aircraft can either be slowed in level flight or can descend rapidly without an increase in airspeed. When the …", "title": "" }, { "docid": "9ce232e2a49652ee7fbfe24c6913d52a", "text": "Anthropometric quantities are widely used in epidemiologic research as possible confounders, risk factors, or outcomes. 3D laser-based body scans (BS) allow evaluation of dozens of quantities in short time with minimal physical contact between observers and probands. The aim of this study was to compare BS with classical manual anthropometric (CA) assessments with respect to feasibility, reliability, and validity. We performed a study on 108 individuals with multiple measurements of BS and CA to estimate intra- and inter-rater reliabilities for both. We suggested BS equivalents of CA measurements and determined validity of BS considering CA the gold standard. Throughout the study, the overall concordance correlation coefficient (OCCC) was chosen as indicator of agreement. BS was slightly more time consuming but better accepted than CA. For CA, OCCCs for intra- and inter-rater reliability were greater than 0.8 for all nine quantities studied. For BS, 9 of 154 quantities showed reliabilities below 0.7. BS proxies for CA measurements showed good agreement (minimum OCCC > 0.77) after offset correction. Thigh length showed higher reliability in BS while upper arm length showed higher reliability in CA. Except for these issues, reliabilities of CA measurements and their BS equivalents were comparable.", "title": "" }, { "docid": "56667d286f69f8429be951ccf5d61c24", "text": "As the Internet of Things (IoT) is emerging as an attractive paradigm, a typical IoT architecture that U2IoT (Unit IoT and Ubiquitous IoT) model has been presented for the future IoT. Based on the U2IoT model, this paper proposes a cyber-physical-social based security architecture (IPM) to deal with Information, Physical, and Management security perspectives, and presents how the architectural abstractions support U2IoT model. In particular, 1) an information security model is established to describe the mapping relations among U2IoT, security layer, and security requirement, in which social layer and additional intelligence and compatibility properties are infused into IPM; 2) physical security referring to the external context and inherent infrastructure are inspired by artificial immune algorithms; 3) recommended security strategies are suggested for social management control. The proposed IPM combining the cyber world, physical world and human social provides constructive proposal towards the future IoT security and privacy protection.", "title": "" }, { "docid": "886c284d72a01db9bc4eb9467e14bbbb", "text": "The Bitcoin cryptocurrency introduced a novel distributed consensus mechanism relying on economic incentives. While a coalition controlling a majority of computational power may undermine the system, for example by double-spending funds, it is often assumed it would be incentivized not to attack to protect its long-term stake in the health of the currency. 
We show how an attacker might purchase mining power (perhaps at a cost premium) for a short duration via bribery. Indeed, bribery can even be performed in-band with the system itself enforcing the bribe. A bribing attacker would not have the same concerns about the long-term health of the system, as their majority control is inherently short-lived. New modeling assumptions are needed to explain why such attacks have not been observed in practice. The need for all miners to avoid short-term profits by accepting bribes further suggests a potential tragedy of the commons which has not yet been analyzed.", "title": "" }, { "docid": "b4586447ef1536f23793651fcd9d71b8", "text": "State monitoring is widely used for detecting critical events and abnormalities of distributed systems. As the scale of such systems grows and the degree of workload consolidation increases in Cloud data centers, node failures and performance interferences, especially transient ones, become the norm rather than the exception. Hence, distributed state monitoring tasks are often exposed to impaired communication caused by such dynamics on different nodes. Unfortunately, existing distributed state monitoring approaches are often designed under the assumption of always-online distributed monitoring nodes and reliable inter-node communication. As a result, these approaches often produce misleading results which in turn introduce various problems to Cloud users who rely on state monitoring results to perform automatic management tasks such as auto-scaling. This paper introduces a new state monitoring approach that tackles this challenge by exposing and handling communication dynamics such as message delay and loss in Cloud monitoring environments. Our approach delivers two distinct features. First, it quantitatively estimates the accuracy of monitoring results to capture uncertainties introduced by messaging dynamics. This feature helps users to distinguish trustworthy monitoring results from ones heavily deviated from the truth, yet significantly improves monitoring utility compared with simple techniques that invalidate all monitoring results generated with the presence of messaging dynamics. Second, our approach also adapts to non-transient messaging issues by reconfiguring distributed monitoring algorithms to minimize monitoring errors. Our experimental results show that, even under severe message loss and delay, our approach consistently improves monitoring accuracy, and when applied to Cloud application auto-scaling, outperforms existing state monitoring techniques in terms of the ability to correctly trigger dynamic provisioning.", "title": "" }, { "docid": "c432a44e48e777a7a3316c1474f0aa12", "text": "In this paper, we present an algorithm that generates high dynamic range (HDR) images from multi-exposed low dynamic range (LDR) stereo images. The vast majority of cameras in the market only capture a limited dynamic range of a scene. Our algorithm first computes the disparity map between the stereo images. The disparity map is used to compute the camera response function which in turn results in the scene radiance maps. A refinement step for the disparity map is then applied to eliminate edge artifacts in the final HDR image. Existing methods generate HDR images of good quality for still or slow motion scenes, but give defects when the motion is fast. 
Our algorithm can deal with images taken during fast motion scenes and tolerate saturation and radiometric changes better than other stereo matching algorithms.", "title": "" }, { "docid": "0151ad8176711618e6cd5b0e20abf0cb", "text": "Skeleton-based action recognition has made great progress recently, but many problems still remain unsolved. For example, the representations of skeleton sequences captured by most of the previous methods lack spatial structure information and detailed temporal dynamics features. In this paper, we propose a novel model with spatial reasoning and temporal stack learning (SR-TSL) for skeleton-based action recognition, which consists of a spatial reasoning network (SRN) and a temporal stack learning network (TSLN). The SRN can capture the high-level spatial structural information within each frame by a residual graph neural network, while the TSLN can model the detailed temporal dynamics of skeleton sequences by a composition of multiple skip-clip LSTMs. During training, we propose a clip-based incremental loss to optimize the model. We perform extensive experiments on the SYSU 3D Human-Object Interaction dataset and NTU RGB+D dataset and verify the effectiveness of each network of our model. The comparison results illustrate that our approach achieves much better results than the state-of-the-art methods.", "title": "" } ]
scidocsrr
78ef5df49c026a283f4a35ecc8afc66a
A vision system for traffic sign detection and recognition
[ { "docid": "cdf2235bea299131929700406792452c", "text": "Real-time detection of traffic signs, the task of pinpointing a traffic sign's location in natural images, is a challenging computer vision task of high industrial relevance. Various algorithms have been proposed, and advanced driver assistance systems supporting detection and recognition of traffic signs have reached the market. Despite the many competing approaches, there is no clear consensus on what the state-of-the-art in this field is. This can be accounted to the lack of comprehensive, unbiased comparisons of those methods. We aim at closing this gap by the “German Traffic Sign Detection Benchmark” presented as a competition at IJCNN 2013 (International Joint Conference on Neural Networks). We introduce a real-world benchmark data set for traffic sign detection together with carefully chosen evaluation metrics, baseline results, and a web-interface for comparing approaches. In our evaluation, we separate sign detection from classification, but still measure the performance on relevant categories of signs to allow for benchmarking specialized solutions. The considered baseline algorithms represent some of the most popular detection approaches such as the Viola-Jones detector based on Haar features and a linear classifier relying on HOG descriptors. Further, a recently proposed problem-specific algorithm exploiting shape and color in a model-based Houghlike voting scheme is evaluated. Finally, we present the best-performing algorithms of the IJCNN competition.", "title": "" }, { "docid": "0789a3b04923fb5d586971ccaf75aec6", "text": "In this paper, we propose a high-performance traffic sign recognition (TSR) framework to rapidly detect and recognize multiclass traffic signs in high-resolution images. This framework includes three parts: a novel region-of-interest (ROI) extraction method called the high-contrast region extraction (HCRE), the split-flow cascade tree detector (SFC-tree detector), and a rapid occlusion-robust traffic sign classification method based on the extended sparse representation classification (ESRC). Unlike the color-thresholding or extreme region extraction methods used by previous ROI methods, the ROI extraction method of the HCRE is designed to extract ROI with high local contrast, which can keep a good balance of the detection rate and the extraction rate. The SFC-tree detector can detect a large number of different types of traffic signs in high-resolution images quickly. The traffic sign classification method based on the ESRC is designed to classify traffic signs with partial occlusion. Instead of solving the sparse representation problem using an overcomplete dictionary, the classification method based on the ESRC utilizes a content dictionary and an occlusion dictionary to sparsely represent traffic signs, which can largely reduce the dictionary size in the occlusion-robust dictionaries and achieve high accuracy. The experiments demonstrate the advantage of the proposed approach, and our TSR framework can rapidly detect and recognize multiclass traffic signs with high accuracy.", "title": "" } ]
[ { "docid": "68cf9884548278e2b4dcec62e29d3122", "text": "BACKGROUND\nVitamin D is crucial for maintenance of musculoskeletal health, and might also have a role in extraskeletal tissues. Determinants of circulating 25-hydroxyvitamin D concentrations include sun exposure and diet, but high heritability suggests that genetic factors could also play a part. We aimed to identify common genetic variants affecting vitamin D concentrations and risk of insufficiency.\n\n\nMETHODS\nWe undertook a genome-wide association study of 25-hydroxyvitamin D concentrations in 33 996 individuals of European descent from 15 cohorts. Five epidemiological cohorts were designated as discovery cohorts (n=16 125), five as in-silico replication cohorts (n=9367), and five as de-novo replication cohorts (n=8504). 25-hydroxyvitamin D concentrations were measured by radioimmunoassay, chemiluminescent assay, ELISA, or mass spectrometry. Vitamin D insufficiency was defined as concentrations lower than 75 nmol/L or 50 nmol/L. We combined results of genome-wide analyses across cohorts using Z-score-weighted meta-analysis. Genotype scores were constructed for confirmed variants.\n\n\nFINDINGS\nVariants at three loci reached genome-wide significance in discovery cohorts for association with 25-hydroxyvitamin D concentrations, and were confirmed in replication cohorts: 4p12 (overall p=1.9x10(-109) for rs2282679, in GC); 11q12 (p=2.1x10(-27) for rs12785878, near DHCR7); and 11p15 (p=3.3x10(-20) for rs10741657, near CYP2R1). Variants at an additional locus (20q13, CYP24A1) were genome-wide significant in the pooled sample (p=6.0x10(-10) for rs6013897). Participants with a genotype score (combining the three confirmed variants) in the highest quartile were at increased risk of having 25-hydroxyvitamin D concentrations lower than 75 nmol/L (OR 2.47, 95% CI 2.20-2.78, p=2.3x10(-48)) or lower than 50 nmol/L (1.92, 1.70-2.16, p=1.0x10(-26)) compared with those in the lowest quartile.\n\n\nINTERPRETATION\nVariants near genes involved in cholesterol synthesis, hydroxylation, and vitamin D transport affect vitamin D status. Genetic variation at these loci identifies individuals who have substantially raised risk of vitamin D insufficiency.\n\n\nFUNDING\nFull funding sources listed at end of paper (see Acknowledgments).", "title": "" }, { "docid": "24c00b40221b905943efbda6a7d5121f", "text": "In four experiments, this research sheds light on aesthetic experiences by rigorously investigating behavioral, neural, and psychological properties of package design. We find that aesthetic packages significantly increase the reaction time of consumers' choice responses; that they are chosen over products with well-known brands in standardized packages, despite higher prices; and that they result in increased activation in the nucleus accumbens and the ventromedial prefrontal cortex, according to functional magnetic resonance imaging (fMRI). The results suggest that reward value plays an important role in aesthetic product experiences. Further, a closer look at psychometric and neuroimaging data finds that a paper-and-pencil measure of affective product involvement correlates with aesthetic product experiences in the brain. Implications for future aesthetics research, package designers, and product managers are discussed. © 2010 Society for Consumer Psychology. Published by Elsevier Inc. 
All rights reserved.", "title": "" }, { "docid": "6495f8c0217be9aea23e694abae248f1", "text": "This paper describes the interactive narrative experiences in Babyz, an interactive entertainment product for the PC currently in development at PF Magic / Mindscape in San Francisco, to be released in October 1999. Babyz are believable agents designed and implemented in the tradition of Dogz and Catz, Your Virtual Petz. As virtual human characters, Babyz are more intelligent, expressive and communicative than their Petz predecessors, allowing for both broader and deeper narrative possibilities. Babyz are designed with behaviors to support entertaining short-term narrative experiences, as well as long-term emotional relationships and narratives.", "title": "" }, { "docid": "d08529ef66abefda062a414acb278641", "text": "Spend your few moment to read a book even only few pages. Reading book is not obligation and force for everybody. When you don't want to read, you can get punishment from the publisher. Read a book becomes a choice of your different characteristics. Many people with reading habit will always be enjoyable to read, or on the contrary. For some reasons, this inductive logic programming techniques and applications tends to be the representative book in this website.", "title": "" }, { "docid": "dde695574d7007f6f6c5fc06b2d4468a", "text": "A model of positive psychological functioning that emerges from diverse domains of theory and philosophy is presented. Six key dimensions of wellness are defined, and empirical research summarizing their empirical translation and sociodemographic correlates is presented. Variations in well-being are explored via studies of discrete life events and enduring human experiences. Life histories of the psychologically vulnerable and resilient, defined via the cross-classification of depression and well-being, are summarized. Implications of the focus on positive functioning for research on psychotherapy, quality of life, and mind/body linkages are reviewed.", "title": "" }, { "docid": "bdd9760446a6412195e0742b5f1c7035", "text": "Cyanobacteria are found globally due to their adaptation to various environments. The occurrence of cyanobacterial blooms is not a new phenomenon. The bloom-forming and toxin-producing species have been a persistent nuisance all over the world over the last decades. Evidence suggests that this trend might be attributed to a complex interplay of direct and indirect anthropogenic influences. To control cyanobacterial blooms, various strategies, including physical, chemical, and biological methods have been proposed. Nevertheless, the use of those strategies is usually not effective. The isolation of natural compounds from many aquatic and terrestrial plants and seaweeds has become an alternative approach for controlling harmful algae in aquatic systems. Seaweeds have received attention from scientists because of their bioactive compounds with antibacterial, antifungal, anti-microalgae, and antioxidant properties. The undesirable effects of cyanobacteria proliferations and potential control methods are here reviewed, focusing on the use of potent bioactive compounds, isolated from seaweeds, against microalgae and cyanobacteria growth.", "title": "" }, { "docid": "a96209a2f6774062537baff5d072f72f", "text": "In recent years, extensive research has been conducted in the area of Service Level Agreement (SLA) for utility computing systems. An SLA is a formal contract used to guarantee that consumers’ service quality expectation can be achieved. 
In utility computing systems, the level of customer satisfaction is crucial, making SLAs significantly important in these environments. A fundamental issue is the management of SLAs, including SLA autonomy management or trade-offs among multiple Quality of Service (QoS) parameters. Many SLA languages and frameworks have been developed as solutions; however, there is no overall classification for these extensive works. Therefore, the aim of this chapter is to present a comprehensive survey of how SLAs are created, managed and used in utility computing environments. We discuss existing use cases from Grid and Cloud computing systems to identify the level of SLA realization in state-of-the-art systems and emerging challenges for future research.", "title": "" }, { "docid": "4f509a4fdc6bbffa45c214bc9267ea79", "text": "Memory units have been widely used to enrich the capabilities of deep networks in capturing long-term dependencies in reasoning and prediction tasks, but little investigation exists on deep generative models (DGMs) which are good at inferring high-level invariant representations from unlabeled data. This paper presents a deep generative model with a possibly large external memory and an attention mechanism to capture the local detail information that is often lost in the bottom-up abstraction process in representation learning. By adopting a smooth attention model, the whole network is trained end-to-end by optimizing a variational bound of data likelihood via auto-encoding variational Bayesian methods, where an asymmetric recognition network is learnt jointly to infer high-level invariant representations. The asymmetric architecture can reduce the competition between bottom-up invariant feature extraction and top-down generation of instance details. Our experiments on several datasets demonstrate that memory can significantly boost the performance of DGMs on various tasks, including density estimation, image generation, and missing value imputation, and DGMs with memory can achieve state-of-the-art quantitative results.", "title": "" }, { "docid": "ec8f8f8611a4db6d70ba7913c3b80687", "text": "Identifying building footprints is a critical and challenging problem in many remote sensing applications. Solutions to this problem have been investigated using a variety of sensing modalities as input. In this work, we consider the detection of building footprints from 3D Digital Surface Models (DSMs) created from commercial satellite imagery along with RGB orthorectified imagery. Recent public challenges (SpaceNet 1 and 2, DSTL Satellite Imagery Feature Detection Challenge, and the ISPRS Test Project on Urban Classification) approach this problem using other sensing modalities or higher resolution data. As a result of these challenges and other work, most publicly available automated methods for building footprint detection using 2D and 3D data sources as input are meant for high-resolution 3D lidar and 2D airborne imagery, or make use of multispectral imagery as well to aid detection. Performance is typically degraded as the fidelity and post spacing of the 3D lidar data or the 2D imagery is reduced. Furthermore, most software packages do not work well enough with this type of data to enable a fully automated solution. We describe a public benchmark dataset consisting of 50 cm DSMs created from commercial satellite imagery, as well as coincident 50 cm RGB orthorectified imagery products.
The dataset includes ground truth building outlines and we propose representative quantitative metrics for evaluating performance. In addition, we provide lessons learned and hope to promote additional research in this field by releasing this public benchmark dataset to the community.", "title": "" }, { "docid": "085ec38c3e756504be93ac0b94483cea", "text": "Low power wide area (LPWA) networks are making spectacular progress from design, standardization, to commercialization. At this time of fast-paced adoption, it is of utmost importance to analyze how well these technologies will scale as the number of devices connected to the Internet of Things inevitably grows. In this letter, we provide a stochastic geometry framework for modeling the performance of a single gateway LoRa network, a leading LPWA technology. Our analysis formulates the unique peculiarities of LoRa, including its chirp spread-spectrum modulation technique, regulatory limitations on radio duty cycle, and use of ALOHA protocol on top, all of which are not as common in today’s commercial cellular networks. We show that the coverage probability drops exponentially as the number of end-devices grows due to interfering signals using the same spreading sequence. We conclude that this fundamental limiting factor is perhaps more significant toward LoRa scalability than for instance spectrum restrictions. Our derivations for co-spreading factor interference found in LoRa networks enables rigorous scalability analysis of such networks.", "title": "" }, { "docid": "a009fc320c5a61d8d8df33c19cd6037f", "text": "Over the past decade, crowdsourcing has emerged as a cheap and efficient method of obtaining solutions to simple tasks that are difficult for computers to solve but possible for humans. The popularity and promise of crowdsourcing markets has led to both empirical and theoretical research on the design of algorithms to optimize various aspects of these markets, such as the pricing and assignment of tasks. Much of the existing theoretical work on crowdsourcing markets has focused on problems that fall into the broad category of online decision making; task requesters or the crowdsourcing platform itself make repeated decisions about prices to set, workers to filter out, problems to assign to specific workers, or other things. Often these decisions are complex, requiring algorithms that learn about the distribution of available tasks or workers over time and take into account the strategic (or sometimes irrational) behavior of workers.\n As human computation grows into its own field, the time is ripe to address these challenges in a principled way. However, it appears very difficult to capture all pertinent aspects of crowdsourcing markets in a single coherent model. In this paper, we reflect on the modeling issues that inhibit theoretical research on online decision making for crowdsourcing, and identify some steps forward. This paper grew out of the authors' own frustration with these issues, and we hope it will encourage the community to attempt to understand, debate, and ultimately address them.", "title": "" }, { "docid": "ba41dfe1382ae0bc45d82d197b124382", "text": "Business Intelligence (BI) deals with integrated approaches to management support. 
Currently, there are constraints to BI adoption in a new era of analytic data management for business intelligence. These constraints are: the integrated infrastructures that are subject to BI have become complex, costly, and inflexible; the effort required to consolidate and cleanse enterprise data; and the performance impact on existing infrastructure / inadequate IT infrastructure. So, in this paper Cloud computing will be used as a possible remedy for these issues. We will present a new environment for business intelligence that makes it possible to shorten BI implementation windows, reduce the cost of BI programs compared with traditional on-premise BI software, add environments for testing, proof-of-concepts and upgrades, and offer users the potential for faster deployments and increased flexibility. Also, Cloud computing enables organizations to analyze terabytes of data faster and more economically than ever before. Business intelligence (BI) in the cloud can be like a big puzzle. Users can jump in and put together small pieces of the puzzle, but until the whole thing is complete the user will lack an overall view of the big picture. In this paper, reading each section will fill in a piece of the puzzle.", "title": "" }, { "docid": "5869ef6be3ca9a36dbf964c41e9b17b1", "text": "The Short Messaging Service (SMS) is one of the most successful cellular services, generating millions of dollars in revenue for mobile operators yearly. Current estimations indicate that billions of SMSs are sent every day. Nevertheless, text messaging is becoming a source of customer dissatisfaction due to the rapid surge of messaging abuse activities. Although spam is a well-tackled problem in the email world, SMS spam experiences a yearly growth larger than 500%. In this paper we expand our previous analysis on SMS spam traffic from a tier-1 cellular operator presented in [1], aiming to highlight the main characteristics of such messaging fraud activity. Communication patterns of spammers are compared to those of legitimate cell-phone users and Machine to Machine (M2M) connected appliances. The results indicate that M2M systems exhibit communication profiles similar to spammers, which could mislead spam filters. We find the main geographical sources of messaging abuse in the US. We also find evidence of spammer mobility, voice and data traffic resembling the behavior of legitimate customers. Finally, we include new findings on the invariance of the main characteristics of spam messages and spammers over time. Also, we present results that indicate a clear device reuse strategy in SMS spam activities.", "title": "" }, { "docid": "7c0ef25b2a4d777456facdfc526cf206", "text": "The paper presents a novel approach to unsupervised text summarization. The novelty lies in exploiting the diversity of concepts in text for summarization, which has not received much attention in the summarization literature.
A diversity-based approach here is a principled generalization of the Maximal Marginal Relevance criterion by Carbonell and Goldstein (1998).\nWe propose, in addition, an information-centric approach to evaluation, where the quality of summaries is judged not in terms of how well they match human-created summaries but in terms of how well they represent their source documents in IR tasks such as document retrieval and text categorization.\nTo find the effectiveness of our approach under the proposed evaluation scheme, we set out to examine how a system with the diversity functionality performs against one without, using the BMIR-J2 corpus, a test data set developed by a Japanese research consortium. The results demonstrate a clear superiority of a diversity-based approach to a non-diversity-based approach.", "title": "" }, { "docid": "45940a48b86645041726120fb066a1fa", "text": "For large state-space Markovian Decision Problems, Monte-Carlo planning is one of the few viable approaches to find near-optimal solutions. In this paper we introduce a new algorithm, UCT, that applies bandit ideas to guide Monte-Carlo planning. In finite-horizon or discounted MDPs the algorithm is shown to be consistent and finite sample bounds are derived on the estimation error due to sampling. Experimental results show that in several domains, UCT is significantly more efficient than its alternatives.", "title": "" }, { "docid": "9379cad59abab5e12c97a9b92f4aeb93", "text": "SigTur/E-Destination is a Web-based system that provides personalized recommendations of touristic activities in the region of Tarragona. The activities are properly classified and labeled according to a specific ontology, which guides the reasoning process. The recommender takes into account many different kinds of data: demographic information, travel motivations, the actions of the user on the system, the ratings provided by the user, the opinions of users with similar demographic characteristics or similar tastes, etc. The system has been fully designed and implemented in the Science and Technology Park of Tourism and Leisure. The paper presents a numerical evaluation of the correlation between the recommendations and the user’s motivations, and a qualitative evaluation performed by end users. © 2012 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "641049f7bdf194b3c326298c5679c469", "text": "Acknowledgements Research in areas where there are many possible paths to follow requires a keen eye for crucial issues. The study of learning systems is such an area. Through the years of working with Andy Barto and Rich Sutton, I have observed many instances of \" fluff cutting \" and the exposure of basic issues. I thank both Andy and Rich for the insights that have rubbed off on me. I also thank Andy for opening up an infinite world of perspectives on learning, ranging from engineering principles to neural processing theories. I thank Rich for showing me the most important step in doing \" science \" —simplify your questions by isolating the issues. Several people contributed to the readability of this dissertation. Andy spent much time carefully reading several drafts. Through his efforts the clarity is much improved. I thank Paul Utgoff, Michael Arbib, and Bill Kilmer for reading drafts of this dissertation and providing valuable criticisms. Paul provided a non-connectionist perspective that widened my view considerably.
He never hesitated to work out differences in terms and methodologies that have been developed through research with connectionist vs. symbolic representations. I thank for commenting on an early draft and for many interesting discussions. and the AFOSR for starting and maintaining the research project that supported the work reported in this dis-sertation. I thank Susan Parker for the skill with which she administered the project. And I thank the COINS Department at UMass and the RCF Staff for the maintenance of the research computing environment. Much of the computer graphics software used to generate figures of this dissertation is based on graphics tools provided by Rich Sutton and Andy Cromarty. Most importantly, I thank Stacey and Joseph for always being there to lift my spirits while I pursued distant milestones and to share my excitement upon reaching them. Their faith and confidence helped me maintain a proper perspective. The difficulties of learning in multilayered networks of computational units has limited the use of connectionist systems in complex domains. This dissertation elucidates the issues of learning in a network's hidden units, and reviews methods for addressing these issues that have been developed through the years. Issues of learning in hidden units are shown to be analogous to learning issues for multilayer systems employing symbolic representations. Comparisons of a number of algorithms for learning in hidden units are made by applying them in …", "title": "" }, { "docid": "d23c5fc626d0f7b1d9c6c080def550b8", "text": "Gamification of education is a developing approach for increasing learners’ motivation and engagement by incorporating game design elements in educational environments. With the growing popularity of gamification and yet mixed success of its application in educational contexts, the current review is aiming to shed a more realistic light on the research in this field by focusing on empirical evidence rather than on potentialities, beliefs or preferences. Accordingly, it critically examines the advancement in gamifying education. The discussion is structured around the used gamification mechanisms, the gamified subjects, the type of gamified learning activities, and the study goals, with an emphasis on the reliability and validity of the reported outcomes. To improve our understanding and offer a more realistic picture of the progress of gamification in education, consistent with the presented evidence, we examine both the outcomes reported in the papers and how they have been obtained. While the gamification in education is still a growing phenomenon, the review reveals that (i) insufficient evidence exists to support the long-term benefits of gamification in educational contexts; (ii) the practice of gamifying learning has outpaced researchers’ understanding of its mechanisms and methods; (iii) the knowledge of how to gamify an activity in accordance with the specifics of the educational context is still limited. The review highlights the need for systematically designed studies and rigorously tested approaches confirming the educational benefits of gamification, if gamified learning is to become a recognized instructional approach.", "title": "" }, { "docid": "2b745b41b0495ab7adad321080ce2228", "text": "In any teaching and learning setting, there are some variables that play a highly significant role in both teachers’ and learners’ performance. Two of these influential psychological domains in educational context include self-efficacy and burnout. 
This study is conducted to investigate the relationship between the self-efficacy of Iranian teachers of English and their reports of burnout. The data was collected through application of two questionnaires. The Maslach Burnout Inventory (MBI; Maslach& Jackson 1981, 1986) and Teacher Efficacy Scales (Woolfolk& Hoy, 1990) were administered to ten university teachers. After obtaining the raw data, the SPSS software (version 16) was used to change the data into numerical interpretable forms. In order to determine the relationship between self-efficacy and teachers’ burnout, correlational analysis was employed. The results showed that participants’ self-efficacy has a reverse relationship with their burnout.", "title": "" }, { "docid": "636ace52ca3377809326735810a08310", "text": "BACKGROUND\nAlthough many patients with venous thromboembolism require extended treatment, it is uncertain whether it is better to use full- or lower-intensity anticoagulation therapy or aspirin.\n\n\nMETHODS\nIn this randomized, double-blind, phase 3 study, we assigned 3396 patients with venous thromboembolism to receive either once-daily rivaroxaban (at doses of 20 mg or 10 mg) or 100 mg of aspirin. All the study patients had completed 6 to 12 months of anticoagulation therapy and were in equipoise regarding the need for continued anticoagulation. Study drugs were administered for up to 12 months. The primary efficacy outcome was symptomatic recurrent fatal or nonfatal venous thromboembolism, and the principal safety outcome was major bleeding.\n\n\nRESULTS\nA total of 3365 patients were included in the intention-to-treat analyses (median treatment duration, 351 days). The primary efficacy outcome occurred in 17 of 1107 patients (1.5%) receiving 20 mg of rivaroxaban and in 13 of 1127 patients (1.2%) receiving 10 mg of rivaroxaban, as compared with 50 of 1131 patients (4.4%) receiving aspirin (hazard ratio for 20 mg of rivaroxaban vs. aspirin, 0.34; 95% confidence interval [CI], 0.20 to 0.59; hazard ratio for 10 mg of rivaroxaban vs. aspirin, 0.26; 95% CI, 0.14 to 0.47; P<0.001 for both comparisons). Rates of major bleeding were 0.5% in the group receiving 20 mg of rivaroxaban, 0.4% in the group receiving 10 mg of rivaroxaban, and 0.3% in the aspirin group; the rates of clinically relevant nonmajor bleeding were 2.7%, 2.0%, and 1.8%, respectively. The incidence of adverse events was similar in all three groups.\n\n\nCONCLUSIONS\nAmong patients with venous thromboembolism in equipoise for continued anticoagulation, the risk of a recurrent event was significantly lower with rivaroxaban at either a treatment dose (20 mg) or a prophylactic dose (10 mg) than with aspirin, without a significant increase in bleeding rates. (Funded by Bayer Pharmaceuticals; EINSTEIN CHOICE ClinicalTrials.gov number, NCT02064439 .).", "title": "" } ]
scidocsrr
855900f2bbf809a36d65c33235267922
Manuka: A Batch-Shading Architecture for Spectral Path Tracing in Movie Production
[ { "docid": "c491e39bbfb38f256e770d730a50b2e1", "text": "Monte Carlo integration is firmly established as the basis for most practical realistic image synthesis algorithms because of its flexibility and generality. However, the visual quality of rendered images often suffers from estimator variance, which appears as visually distracting noise. Adaptive sampling and reconstruction algorithms reduce variance by controlling the sampling density and aggregating samples in a reconstruction step, possibly over large image regions. In this paper we survey recent advances in this area. We distinguish between “a priori” methods that analyze the light transport equations and derive sampling rates and reconstruction filters from this analysis, and “a posteriori” methods that apply statistical techniques to sets of samples to drive the adaptive sampling and reconstruction process. They typically estimate the errors of several reconstruction filters, and select the best filter locally to minimize error. We discuss advantages and disadvantages of recent state-of-the-art techniques, and provide visual and quantitative comparisons. Some of these techniques are proving useful in real-world applications, and we aim to provide an overview for practitioners and researchers to assess these approaches. In addition, we discuss directions for potential further improvements.", "title": "" } ]
[ { "docid": "26d06b650cffb1bf50d059087b307119", "text": "Algorithms and decision making based on Big Data have become pervasive in all aspects of our daily lives lives (offline and online), as they have become essential tools in personal finance, health care, hiring, housing, education, and policies. It is therefore of societal and ethical importance to ask whether these algorithms can be discriminative on grounds such as gender, ethnicity, or health status. It turns out that the answer is positive: for instance, recent studies in the context of online advertising show that ads for high-income jobs are presented to men much more often than to women [Datta et al., 2015]; and ads for arrest records are significantly more likely to show up on searches for distinctively black names [Sweeney, 2013]. This algorithmic bias exists even when there is no discrimination intention in the developer of the algorithm. Sometimes it may be inherent to the data sources used (software making decisions based on data can reflect, or even amplify, the results of historical discrimination), but even when the sensitive attributes have been suppressed from the input, a well trained machine learning algorithm may still discriminate on the basis of such sensitive attributes because of correlations existing in the data. These considerations call for the development of data mining systems which are discrimination-conscious by-design. This is a novel and challenging research area for the data mining community.\n The aim of this tutorial is to survey algorithmic bias, presenting its most common variants, with an emphasis on the algorithmic techniques and key ideas developed to derive efficient solutions. The tutorial covers two main complementary approaches: algorithms for discrimination discovery and discrimination prevention by means of fairness-aware data mining. We conclude by summarizing promising paths for future research.", "title": "" }, { "docid": "110a60612f701575268fe3dbcf0d338f", "text": "The Danish and Swedish male top football divisions were studied prospectively from January to June 2001. Exposure to football and injury incidence, severity and distribution were compared between the countries. Swedish players had greater exposure to training (171 vs. 123 h per season, P<0.001), whereas exposure to matches did not differ between the countries. There was a higher risk for injury during training in Denmark than in Sweden (11.8 vs. 6.0 per 1000 h, P<0.01), whereas for match play there was no difference (28.2 vs. 26.2 per 1000 h). The risk for incurring a major injury (absence from football more than 4 weeks) was greater in Denmark (1.8 vs. 0.7 per 1000 h, P = 0.002). The distribution of injuries according to type and location was similar in both countries. Of all injuries in Denmark and Sweden, overuse injury accounted for 39% and 38% (NS), and re-injury for 30% and 24% (P = 0.032), respectively. The greater training exposure and the long pre-season period in Sweden may explain some of the reported differences.", "title": "" }, { "docid": "4deea3312fe396f81919b07462551682", "text": "The purpose of this paper is to explore applications of blockchain technology related to the 4th Industrial Revolution (Industry 4.0) and to present an example where blockchain is employed to facilitate machine-to-machine (M2M) interactions and establish a M2M electricity market in the context of the chemical industry. 
The presented scenario includes two electricity producers and one electricity consumer trading with each other over a blockchain. The producers publish exchange offers of energy (in kWh) for currency (in USD) in a data stream. The consumer reads the offers, analyses them and attempts to satisfy its energy demand at a minimum cost. When an offer is accepted it is executed as an atomic exchange (multiple simultaneous transactions). Additionally, this paper describes and discusses the research and application landscape of blockchain technology in relation to the Industry 4.0. It concludes that this technology has significant under-researched potential to support and enhance the efficiency gains of the revolution and identifies areas for future research. Producer 2 • Issue energy • Post purchase offers (as atomic transactions) Consumer • Look through the posted offers • Choose cheapest and satisfy its own demand Blockchain Stream Published offers are visible here Offer sent", "title": "" }, { "docid": "3171587b5b4554d151694f41206bcb4e", "text": "Embedded systems are ubiquitous in society and can contain information that could be used in criminal cases for example in a serious road traffic accident where the car management systems could provide vital forensic information concerning the engine speed etc. A critical review of a number of methods and procedures for the analysis of embedded systems were compared against a ‘standard’ methodology for use in a Forensic Computing Investigation. A Unified Forensic Methodology (UFM) has been developed that is forensically sound and capable of dealing with the analysis of a wide variety of Embedded Systems.", "title": "" }, { "docid": "9c2609adae64ec8d0b4e2cc987628c05", "text": "We propose a novel method capable of retrieving clips from untrimmed videos based on natural language queries. This cross-modal retrieval task plays a key role in visual-semantic understanding, and requires localizing clips in time and computing their similarity to the query sentence. Current methods generate sentence and video embeddings and then compare them using a late fusion approach, but this ignores the word order in queries and prevents more fine-grained comparisons. Motivated by the need for fine-grained multi-modal feature fusion, we propose a novel early fusion embedding approach that combines video and language information at the word level. Furthermore, we use the inverse task of dense video captioning as a side-task to improve the learned embedding. Our full model combines these components with an efficient proposal pipeline that performs accurate localization of potential video clips. We present a comprehensive experimental validation on two large-scale text-to-clip datasets (Charades-STA and DiDeMo) and attain state-ofthe-art retrieval results with our model.", "title": "" }, { "docid": "106fefb169c7e95999fb411b4e07954e", "text": "Additional contents in web pages, such as navigation panels, advertisements, copyrights and disclaimer notices, are typically not related to the main subject and may hamper the performance of Web data mining. They are traditionally taken as noises and need to be removed properly. To achieve this, two intuitive and crucial kinds of information—the textual information and the visual information of web pages—is considered in this paper. Accordingly, Text Density and Visual Importance are defined for the Document Object Model (DOM) nodes of a web page. Furthermore, a content extraction method with these measured values is proposed. 
It is a fast, accurate and general method for extracting content from diverse web pages. And with the employment of DOM nodes, the original structure of the web page can be preserved. Evaluated with the CleanEval benchmark and with randomly selected pages from well-known Web sites, where various web domains and styles are tested, the effect of the method is demonstrated. The average F1-scores with our method were 8.7 % higher than the best scores among several alternative methods.", "title": "" }, { "docid": "1783f837b61013391f3ff4f03ac6742e", "text": "Nowadays, many methods have been applied for data transmission of MWD system. Magnetic induction is one of the alternative technique. In this paper, detailed discussion on magnetic induction communication system is provided. The optimal coil configuration is obtained by theoretical analysis and software simulations. Based on this coil arrangement, communication characteristics of path loss and bit error rate are derived.", "title": "" }, { "docid": "758692d2c0f1c2232a4c705b0a14c19f", "text": "Process-driven spreadsheet queuing simulation is a better vehicle for understanding queue behavior than queuing theory or dedicated simulation software. Spreadsheet queuing simulation has many pedagogical benefits in a business school end-user modeling course, including developing students' intuition , giving them experience with active modeling skills, and providing access to tools. Spreadsheet queuing simulations are surprisingly easy to program, even for queues with balking and reneging. The ease of prototyping in spreadsheets invites thoughtless design, so careful spreadsheet programming practice is important. Spreadsheet queuing simulation is inferior to dedicated simulation software for analyzing queues but is more likely to be available to managers and students. Q ueuing theory has always been a staple in survey courses on management science. Although it is a powerful tool for computing certain steady-state performance measures, queuing theory is a poor vehicle for teaching students about what transpires in queues. Process-driven spreadsheet queuing simulation is a much better vehicle. Although Evans and Olson [1998, p. 170] state that \" a serious limitation of spreadsheets for waiting-line models is that it is not possible to include behavior such as balking \" and Liberatore and Ny-dick [forthcoming] indicate that a limitation of spreadsheet simulation is the in", "title": "" }, { "docid": "b7b3690f547e479627cc1262ae080b8f", "text": "This article investigates the vulnerabilities of Supervisory Control and Data Acquisition (SCADA) systems which monitor and control the modern day irrigation canal systems. This type of monitoring and control infrastructure is also common for many other water distribution systems. We present a linearized shallow water partial differential equation (PDE) system that can model water flow in a network of canal pools which are equipped with lateral offtakes for water withdrawal and are connected by automated gates. The knowledge of the system dynamics enables us to develop a deception attack scheme based on switching the PDE parameters and proportional (P) boundary control actions, to withdraw water from the pools through offtakes. We briefly discuss the limits on detectability of such attacks. 
We use a known formulation based on low frequency approximation of the PDE model and an associated proportional integral (PI) controller, to create a stealthy deception scheme capable of compromising the performance of the closed-loop system. We test the proposed attack scheme in simulation, using a shallow water solver; and show that the attack is indeed realizable in practice by implementing it on a physical canal in Southern France: the Gignac canal. A successful field experiment shows that the attack scheme enables us to steal water stealthily from the canal until the end of the attack.", "title": "" }, { "docid": "ec6f93bdc15283b46bc4c1a0ce1a38c8", "text": "This paper advocates the exploration of the full state of recorded real-time strategy (RTS) games, by human or robotic players, to discover how to reason about tactics and strategy. We present a dataset of StarCraft games encompassing the most of the games’ state (not only player’s orders). We explain one of the possible usages of this dataset by clustering armies on their compositions. This reduction of armies compositions to mixtures of Gaussian allow for strategic reasoning at the level of the components. We evaluated this clustering method by predicting the outcomes of battles based on armies compositions’ mixtures components.", "title": "" }, { "docid": "00fdc3da831aadbad0fd3410ffb0f8bb", "text": "Removing undesired reflections from a photo taken in front of a glass is of great importance for enhancing the efficiency of visual computing systems. Various approaches have been proposed and shown to be visually plausible on small datasets collected by their authors. A quantitative comparison of existing approaches using the same dataset has never been conducted due to the lack of suitable benchmark data with ground truth. This paper presents the first captured Single-image Reflection Removal dataset ‘SIR2’ with 40 controlled and 100 wild scenes, ground truth of background and reflection. For each controlled scene, we further provide ten sets of images under varying aperture settings and glass thicknesses. We perform quantitative and visual quality comparisons for four state-of-the-art single-image reflection removal algorithms using four error metrics. Open problems for improving reflection removal algorithms are discussed at the end.", "title": "" }, { "docid": "3d490d7d30dcddc3f1c0833794a0f2df", "text": "Purpose-This study attempts to investigate (1) the effect of meditation experience on employees’ self-directed learning (SDL) readiness and organizational innovative (OI) ability as well as organizational performance (OP), and (2) the relationships among SDL, OI, and OP. Design/methodology/approach-This study conducts an empirical study of 15 technological companies (n = 412) in Taiwan, utilizing the collected survey data to test the relationships among the three dimensions. Findings-Results show that: (1) The employees’ meditation experience significantly and positively influenced employees’ SDL readiness, companies’ OI capability and OP; (2) The study found that SDL has a direct and significant impact on OI; and OI has direct and significant influences on OP. 
Research limitation/implications-The generalization of the present study is constrained by (1) the existence of possible biases of the participants, (2) the variations of length, type and form of meditation demonstrated by the employees in these high tech companies, and (3) the fact that local data collection in Taiwan may present different cultural characteristics which may be quite different from those in other areas or countries. Managerial implications are presented at the end of the work. Practical implications-The findings indicate that SDL can only impact organizational innovation through employees “openness to a challenge”, “inquisitive nature”, self-understanding and acceptance of responsibility for learning. Such finding implies better organizational innovative capability under such conditions, thus organizations may encourage employees to take risks or accept new opportunities through various incentives, such as monetary rewards or public recognitions. More specifically, the present study discovers that while administration innovation is the most important element influencing an organization’s financial performance, market innovation is the key component in an organization’s market performance. Social implications-The present study discovers that meditation experience positively", "title": "" }, { "docid": "dc20661ca4dbf21e4dcdeeabbab7cf14", "text": "We present our approach for developing a laboratory information management system (LIMS) software by combining Björners software triptych methodology (from domain models via requirements to software) with Arlow and Neustadt archetypes and archetype patterns based initiative. The fundamental hypothesis is that through this Archetypes Based Development (ABD) approach to domains, requirements and software, it is possible to improve the software development process as well as to develop more dependable software. We use ADB in developing LIMS software for the Clinical and Biomedical Proteomics Group (CBPG), University of Leeds.", "title": "" }, { "docid": "2aabe5c6f1ccb8dfd241f0c208609738", "text": "Exposing the weaknesses of neural models is crucial for improving their performance and robustness in real-world applications. One common approach is to examine how input perturbations affect the output. Our analysis takes this to an extreme on natural language processing tasks by removing as many words as possible from the input without changing the model prediction. For question answering and natural language inference, this often reduces the inputs to just one or two words, while model confidence remains largely unchanged. This is an undesireable behavior: the model gets the Right Answer for the Wrong Reason (RAWR). We introduce a simple training technique that mitigates this problem while maintaining performance on regular examples.", "title": "" }, { "docid": "bf7cd2303c325968879da72966054427", "text": "Object detection methods fall into two categories, i.e., two-stage and single-stage detectors. The former is characterized by high detection accuracy while the latter usually has considerable inference speed. Hence, it is imperative to fuse their metrics for a better accuracy vs. speed trade-off. To this end, we propose a dual refinement network (DRN) to boost the performance of the single-stage detector. 
Inheriting from the advantages of two-stage approaches (i.e., two-step regression and accurate features for detection), anchor refinement and feature offset refinement are conducted in anchor-offset detection, where the detection head is comprised of deformable convolutions. Moreover, to leverage contextual information for describing objects, we design a multi-deformable head, in which multiple detection paths with different receptive field sizes devote themselves to detecting objects. Extensive experiments on PASCAL VOC and ImageNet VID datasets are conducted, and we achieve the state-of-the-art results and a better accuracy vs. speed trade-off, i.e., 81.4% mAP vs. 42.3 FPS on VOC2007 test set. Codes will be publicly available.", "title": "" }, { "docid": "567f48fef5536e9f44a6c66deea5375b", "text": "The principle of control signal amplification is found in all actuation systems, from engineered devices through to the operation of biological muscles. However, current engineering approaches require the use of hard and bulky external switches or valves, incompatible with both the properties of emerging soft artificial muscle technology and those of the bioinspired robotic systems they enable. To address this deficiency a biomimetic molecular-level approach is developed that employs light, with its excellent spatial and temporal control properties, to actuate soft, pH-responsive hydrogel artificial muscles. Although this actuation is triggered by light, it is largely powered by the resulting excitation and runaway chemical reaction of a light-sensitive acid autocatalytic solution in which the actuator is immersed. This process produces actuation strains of up to 45% and a three-fold chemical amplification of the controlling light-trigger, realising a new strategy for the creation of highly functional soft actuating systems.", "title": "" }, { "docid": "728bab0ecb94c38368e867545bfea88e", "text": "We present a hierarchical control approach that can be used to fulfill autonomous flight, including vertical takeoff, landing, hovering, transition, and level flight, of a quadrotor tail-sitter vertical takeoff and landing unmanned aerial vehicle (VTOL UAV). A unified attitude controller, together with a moment allocation scheme between elevons and motor differential thrust, is developed for all flight modes. A comparison study via real flight tests is performed to verify the effectiveness of using elevons in addition to motor differential thrust. With the well-designed switch scheme proposed in this paper, the aircraft can transit between different flight modes with negligible altitude drop or gain. Intensive flight tests have been performed to verify the effectiveness of the proposed control approach in both manual and fully autonomous flight mode.", "title": "" }, { "docid": "64bbb86981bf3cc575a02696f64109f6", "text": "We use computational techniques to extract a large number of different features from the narrative speech of individuals with primary progressive aphasia (PPA). We examine several different types of features, including part-of-speech, complexity, context-free grammar, fluency, psycholinguistic, vocabulary richness, and acoustic, and discuss the circumstances under which they can be extracted. We consider the task of training a machine learning classifier to determine whether a participant is a control, or has the fluent or nonfluent variant of PPA. 
We first evaluate the individual feature sets on their classification accuracy, then perform an ablation study to determine the optimal combination of feature sets. Finally, we rank the features in four practical scenarios: given audio data only, given unsegmented transcripts only, given segmented transcripts only, and given both audio and segmented transcripts. We find that psycholinguistic features are highly discriminative in most cases, and that acoustic, context-free grammar, and part-of-speech features can also be important in some circumstances.", "title": "" }, { "docid": "e42357ff2f957f6964bab00de4722d52", "text": "We model a degraded image as an original image that has been subject to linear frequency distortion and additive noise injection. Since the psychovisual effects of frequency distortion and noise injection are independent, we decouple these two sources of degradation and measure their effect on the human visual system. We develop a distortion measure (DM) of the effect of frequency distortion, and a noise quality measure (NQM) of the effect of additive noise. The NQM, which is based on Peli's (1990) contrast pyramid, takes into account the following: 1) variation in contrast sensitivity with distance, image dimensions, and spatial frequency; 2) variation in the local luminance mean; 3) contrast interaction between spatial frequencies; 4) contrast masking effects. For additive noise, we demonstrate that the nonlinear NQM is a better measure of visual quality than peak signal-to noise ratio (PSNR) and linear quality measures. We compute the DM in three steps. First, we find the frequency distortion in the degraded image. Second, we compute the deviation of this frequency distortion from an allpass response of unity gain (no distortion). Finally, we weight the deviation by a model of the frequency response of the human visual system and integrate over the visible frequencies. We demonstrate how to decouple distortion and additive noise degradation in a practical image restoration system.", "title": "" }, { "docid": "7c2cb105e5fad90c90aea0e59aae5082", "text": "Life often presents us with situations in which it is important to assess the “true” qualities of a person or object, but in which some factor(s) might have affected (or might yet affect) our initial perceptions in an undesired way. For example, in the Reginald Denny case following the 1993 Los Angeles riots, jurors were asked to determine the guilt or innocence of two African-American defendants who were charged with violently assaulting a Caucasion truck driver. Some of the jurors in this case might have been likely to realize that in their culture many of the popular media portrayals of African-Americans are violent in nature. Yet, these jurors ideally would not want those portrayals to influence their perceptions of the particular defendants in the case. In fact, the justice system is based on the assumption that such portrayals will not influence jury verdicts. In our work on bias correction, we have been struck by the variety of potentially biasing factors that can be identified-including situational influences such as media, social norms, and general culture, and personal influences such as transient mood states, motives (e.g., to manage impressions or agree with liked others), and salient beliefs-and we have been impressed by the apparent ubiquity of correction phenomena (which appear to span many areas of psychological inquiry). Yet, systematic investigations of bias correction are in their early stages. 
Although various researchers have discussed the notion of effortful cognitive processes overcoming initial (sometimes “automatic”) biases in a variety of settings (e.g., Brewer, 1988; Chaiken, Liberman, & Eagly, 1989; Devine, 1989; Kruglanski & Freund, 1983; Neuberg & Fiske, 1987; Petty & Cacioppo, 1986), little attention has been given, until recently, to the specific processes by which biases are overcome when effort is targeted toward “correction of bias.” That is, when", "title": "" } ]
scidocsrr