On taming the space of dialogue by deaf people during the COVID-19 pandemic

This article is empirical. The aim of the research was to diagnose the specificity of subjective experiences related to the impact of the pandemic situation on the shaping of dialogical space. The focus was on the following problem: how do deaf people perceive their experiences of creating a space in which authentic dialogue takes place? The research used the individual case study method. The analysis of empirical material, obtained through narrative interviews with deaf students, allowed us to learn about their personal experiences of creating a space in which dialogue takes place in a pandemic situation. Qualitative analysis revealed three areas of reflection among the respondents: dialogue as a form of communication, the subject of dialogue, and the value of dialogue. The collected narratives revealed emotional experiences that influenced the deaf students' interpretation of events.
Antenatal magnesium sulfate to prevent cerebral palsy

Magnesium sulfate given to women before birth at <30 weeks gestation reduces the risk of cerebral palsy in their children. Our study aimed to assess the impact of a local quality improvement programme, primarily using plan-do-study-act cycles, to increase the use of antenatal magnesium sulfate. After implementing our quality improvement programme, an average of 86% of babies delivered at <30 weeks gestation were exposed to antenatal magnesium sulfate, compared with a historical baseline rate of 63%. Our study strengthens the case for embedding quality improvement programmes in maternal perinatal care to reduce the impact of cerebral palsy on families and society. Embedding a quality improvement programme for antenatal magnesium administration is feasible and can reduce that impact.
// What will this program print? Explain the output.
package main

// import "fmt" // only needed by the commented-out Printf calls below

type customError struct {
	msg string
}

func (e *customError) Error() string {
	return e.msg
}

func test() *customError {
	{
		// do something
	}
	return nil
}

func main() {
	var err error
	// An interface holding a nil *customError is itself not nil:
	// the interface carries a non-nil type even though its value is nil.
	err = test()
	// If the result keeps its concrete pointer type instead, the nil
	// check behaves as expected:
	// err2 := test()
	//
	// fmt.Printf("%#v\n", err)
	// fmt.Printf("%#v\n", err2)
	// if err2 != nil {
	// 	println("error")
	// 	return
	// }
	if err != nil {
		println("error")
		return
	}
	println("ok")
}
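The program prints "error": test() returns a nil *customError, and assigning it to an error interface produces an interface value with a non-nil type descriptor and a nil pointer, which compares unequal to nil. A minimal sketch of the usual fix, declaring the return type as the error interface and returning a literal nil:

```go
package main

// test returns the error interface type with an explicit nil, so the
// caller's err != nil check behaves as expected (no typed-nil trap).
func test() error {
	return nil
}

func main() {
	if err := test(); err != nil {
		println("error")
		return
	}
	println("ok") // this branch now runs
}
```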
// Imports added for compilation; Negotiator, Socket, Transport, Operation,
// Handshake and Dispatcher are framework types from the enclosing package.
import java.io.IOException;

import javax.net.ssl.SSLEngine;

/**
 * The <code>OperationFactory</code> is used to create operations
 * for the transport processor. Depending on the configuration of the
 * pipeline object this will create different operations. Typically
 * this will create an SSL handshake operation if the pipeline has
 * an <code>SSLEngine</code> instance. This allows the transport
 * processor to complete the handshake before handing the transport
 * to the transporter for processing.
 *
 * @author Niall Gallagher
 */
class OperationFactory {

    /**
     * This is the negotiator used to process the created transport.
     */
    private final Negotiator negotiator;

    /**
     * This is the threshold for the asynchronous buffers to use.
     */
    private final int threshold;

    /**
     * This is the size of the buffers to be used by the transport.
     */
    private final int buffer;

    /**
     * This is the number of buffers that can be queued by the factory.
     */
    private final int queue;

    /**
     * This determines if the SSL handshake is for the client side.
     */
    private final boolean client;

    /**
     * Constructor for the <code>OperationFactory</code> object. This
     * uses the negotiator provided to hand off the created transport
     * when it has been created. All operations created typically
     * execute in an asynchronous thread.
     *
     * @param negotiator the negotiator used to process transports
     * @param threshold number of bytes that can be copied for queuing
     */
    public OperationFactory(Negotiator negotiator, int threshold) {
        this(negotiator, threshold, 4096);
    }

    /**
     * Constructor for the <code>OperationFactory</code> object. This
     * uses the negotiator provided to hand off the created transport
     * when it has been created. All operations created typically
     * execute in an asynchronous thread.
     *
     * @param negotiator the negotiator used to process transports
     * @param threshold number of bytes that can be copied for queuing
     * @param buffer this is the size of the buffers for the transport
     */
    public OperationFactory(Negotiator negotiator, int threshold, int buffer) {
        this(negotiator, threshold, buffer, 3);
    }

    /**
     * Constructor for the <code>OperationFactory</code> object. This
     * uses the negotiator provided to hand off the created transport
     * when it has been created. All operations created typically
     * execute in an asynchronous thread.
     *
     * @param negotiator the negotiator used to process transports
     * @param threshold number of bytes that can be copied for queuing
     * @param buffer this is the size of the buffers for the transport
     * @param queue this is the number of buffers that can be queued
     */
    public OperationFactory(Negotiator negotiator, int threshold, int buffer, int queue) {
        this(negotiator, threshold, buffer, queue, false);
    }

    /**
     * Constructor for the <code>OperationFactory</code> object. This
     * uses the negotiator provided to hand off the created transport
     * when it has been created. All operations created typically
     * execute in an asynchronous thread.
     *
     * @param negotiator the negotiator used to process transports
     * @param threshold number of bytes that can be copied for queuing
     * @param buffer this is the size of the buffers for the transport
     * @param queue this is the number of buffers that can be queued
     * @param client determines if the SSL handshake is for a client
     */
    public OperationFactory(Negotiator negotiator, int threshold, int buffer, int queue, boolean client) {
        this.negotiator = negotiator;
        this.threshold = threshold;
        this.buffer = buffer;
        this.client = client;
        this.queue = queue;
    }

    /**
     * This method is used to create an <code>Operation</code> object to
     * process the next phase of the negotiation. The operations that
     * are created using this factory ensure the processing can be
     * done asynchronously, which reduces the overhead the connection
     * thread has when handing the pipelines over for processing.
     *
     * @param socket this is the pipeline that is to be processed
     *
     * @return this returns the operation used for processing
     */
    public Operation getInstance(Socket socket) throws IOException {
        return getInstance(socket, socket.getEngine());
    }

    /**
     * This method is used to create an <code>Operation</code> object to
     * process the next phase of the negotiation. The operations that
     * are created using this factory ensure the processing can be
     * done asynchronously, which reduces the overhead the connection
     * thread has when handing the pipelines over for processing.
     *
     * @param socket this is the pipeline that is to be processed
     * @param engine this is the engine used for SSL negotiations
     *
     * @return this returns the operation used for processing
     */
    private Operation getInstance(Socket socket, SSLEngine engine) throws IOException {
        Transport transport = new SocketTransport(socket, negotiator, threshold, queue, buffer);

        if(engine != null) {
            return new Handshake(transport, negotiator, client);
        }
        return new Dispatcher(transport, negotiator);
    }
}
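The branch the factory takes here (handshake first when an SSLEngine is present, otherwise hand straight to the negotiator) maps naturally onto Go's crypto/tls. The sketch below is a loose analogue under that reading, not the Simple framework's API; dispatch is a hypothetical stand-in for the negotiator:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net"
)

// operationFor mirrors OperationFactory.getInstance in spirit: if the
// connection carries TLS state, schedule the handshake before handing
// the connection over; otherwise dispatch it directly.
func operationFor(conn net.Conn, dispatch func(net.Conn) error) func() error {
	if tc, ok := conn.(*tls.Conn); ok {
		return func() error {
			if err := tc.Handshake(); err != nil {
				return err
			}
			return dispatch(tc)
		}
	}
	return func() error { return dispatch(conn) }
}

func main() {
	client, server := net.Pipe() // plain, non-TLS in-memory pipe
	defer client.Close()
	op := operationFor(server, func(c net.Conn) error {
		fmt.Println("dispatching plain connection")
		return c.Close()
	})
	_ = op() // runs the dispatch branch; a *tls.Conn would handshake first
}
```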
package service

import (
	"context"

	"gim/internal/logic/domain/message/repo"
)

type seqService struct{}

var SeqService = new(seqService)

// GetUserNext returns the next sequence number for the given user.
func (*seqService) GetUserNext(ctx context.Context, userId int64) (int64, error) {
	return repo.SeqRepo.Incr(repo.SeqObjectTypeUser, userId)
}
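The repo package is not shown. As one plausible reading, SeqRepo.Incr could be backed by an atomic Redis INCR keyed on object type and id; the sketch below is hypothetical (the key scheme, the string-typed SeqObjectTypeUser and the go-redis client are all assumptions, not gim's actual code):

```go
package repo

import (
	"context"
	"fmt"

	"github.com/redis/go-redis/v9"
)

// SeqObjectTypeUser namespaces per-user sequences (value is illustrative).
const SeqObjectTypeUser = "user"

// seqRepo hands out monotonically increasing sequence numbers via
// Redis INCR; this is a hypothetical backing store, not gim's own.
type seqRepo struct{ rdb *redis.Client }

var SeqRepo = &seqRepo{rdb: redis.NewClient(&redis.Options{Addr: "localhost:6379"})}

// Incr atomically increments and returns the sequence for one object.
func (r *seqRepo) Incr(objectType string, objectId int64) (int64, error) {
	key := fmt.Sprintf("seq:%s:%d", objectType, objectId)
	return r.rdb.Incr(context.Background(), key).Result()
}
```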
The Influence of Response Mode on Study Results: Offering Cigarette Smokers a Choice of Postal or Online Completion of a Survey Background It is unclear whether offering online data collection to study participants affects compliance or produces bias. Objective To compare response rates, baseline characteristics, test-retest reliability, and outcomes between cigarette smokers who chose to complete a survey by mail versus those who chose to complete it online. Methods We surveyed cigarette smokers who intended to stop smoking within the next 30 days to determine barriers to calling a smoking quit line. Participants were offered the choice of completing a paper version of the survey sent through the mail or an online version at a password-protected website. Participants were called 2 months later to determine if they had made a quit attempt and/or called a smoking quit line since the baseline survey. We compared characteristics and outcomes among those who chose postal versus online completion. We measured test-retest reliability of the baseline survey by resurveying a semirandom sample of participants within 10 days of the original survey. Results Of 697 eligible respondents to newspaper ads in 12 US cities, 438 (63%) chose to receive a mailed paper survey and 259 (37%) chose an Internet survey. Survey return rates were the same for the 2 modes (92% versus 92%, P =.82). Online respondents were younger (mean of 46 versus 51 years old for postal, P <.001), more likely to be white (76% versus 62%, P <.001), less likely to be African American (18% versus 30%, P <.001), more highly educated (34% college graduate versus 23%, P <.001), more likely to intend to stop smoking in the next 30 days (47% definitely versus 30%, P <.001), and more likely to have heard of a smoking quit line (51% versus 40%, P =.008). Participants did not differ on gender (54% female for online versus 55% for postal, P =.72) or cigarettes smoked per day (mean of 19 versus 21, P =.30). Online respondents had slightly fewer missing items on the 79-item survey (mean of 1.7% missing versus 2.3%, P =.02). Loss to follow-up at 2 months was similar (16% for online and 15% for postal, P =.74). There was no significant difference between online and postal respondents in having called a smoking quit line during the 2-month follow-up period (20% versus 24%, P =.22) or in having made a quit attempt (76% versus 79%, P =.41). Conclusions Cigarette smokers who chose to complete a survey using the Internet differed in several ways from those who chose mailed surveys. However, more importantly, online and postal responses produced similar outcomes.
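A back-of-envelope check of comparisons like the quit-line result is possible with a two-proportion z-test; a minimal sketch (the counts below are approximated from the reported percentages and attrition, and the original analysis may have used a different test):

```go
package main

import (
	"fmt"
	"math"
)

// twoProportionZ returns the z statistic and two-sided p-value for
// comparing success counts x1/n1 vs x2/n2 with a pooled-variance z-test.
func twoProportionZ(x1, n1, x2, n2 float64) (z, p float64) {
	p1, p2 := x1/n1, x2/n2
	pool := (x1 + x2) / (n1 + n2)
	se := math.Sqrt(pool * (1 - pool) * (1/n1 + 1/n2))
	z = (p1 - p2) / se
	// two-sided p-value from the standard normal CDF
	p = 2 * (1 - 0.5*(1+math.Erf(math.Abs(z)/math.Sqrt2)))
	return z, p
}

func main() {
	// Quit-line calls during follow-up, online vs postal: roughly 20%
	// of ~217 online vs 24% of ~372 postal respondents (counts are
	// approximations derived from the abstract, not the raw data).
	z, p := twoProportionZ(43, 217, 89, 372)
	fmt.Printf("z = %.2f, p = %.2f\n", z, p) // p in the vicinity of .2
}
```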
Characteristics and reliability of a polysilicon thin-film transistor with a multi-trenched body This paper describes a polysilicon thin-film transistor (TFT) with a multi-trenched body that has been fabricated and found to suppress the off-state leakage current without degrading the on-state current or other electric properties. The thin-film structure minimizes carrier scattering through the polysilicon's grain-boundary traps. In addition to this effect, our multi-trenched structure reduces the off-state current by 50% compared to a conventional TFT. The effects of high temperature and dc hot-carrier stress are also measured in comparison to a conventional TFT. At 100 °C, the multi-trenched-body TFT has a higher output saturation current. After 10000 s of stress testing, the trench-bodied structure continues to outperform the conventional TFT.
Plastid genome sequencing reveals biogeographical structure and extensive population genetic variation in wild populations of Phalaris arundinacea L. in northwestern Europe

New and comprehensive collections of the perennial rhizomatous reed canary grass (Phalaris arundinacea) were made in NW Europe along north-to-south and east-to-west clines from Denmark, Germany, Ireland, Poland, Sweden and the United Kingdom. Rhizome, seed and leaf samples were taken for analysis and genetic resource conservation. A subsample covering the geographical range was characterized using plastid genome sequencing and SNP discovery, generated using a long-read PCR and MiSeq sequencing approach. Samples were also subject to flow cytometry and all found to be tetraploid. New sequences were assembled against a Lolium perenne (perennial ryegrass) reference genome, and an average of approximately 60% of each genome was aligned (81 064 bp). Genetic variation was high among the 48 sequenced genotypes, with a total of 1793 SNPs, equating to 23 SNPs per kbp. SNPs were subject to principal coordinate and Structure analyses to detect population genetic groupings and to examine phylogeographical pattern. Results indicate substantial genetic variation and population genetic structuring of this allogamous species at a broad geographical scale in NW Europe, with plastid genetic diversity organized more across an east-to-west than a north-to-south cline.

Introduction

Phalaris arundinacea, or reed canary grass, is a C3 photosynthetic perennial patch-forming species with a circumpolar boreotemperate distribution (Anderson, 1961;a). It is important for grazing but has also come under scrutiny as a biomass and bioenergy crop (;;Sanderson & Adler, 2008;;). It is native to Asia, Europe and North America and is believed to have been cultivated in Europe since the mid-18th century, starting in Scandinavia. It is also widely naturalized outside its native range (Voshell & Hilu, 2014), and populations from Europe were introduced to North America after European colonization and have mixed with native populations (a;Waggy, 2010). In fact, agricultural expansion combined with hybridization of multiply introduced Eurasian genotypes has increased the genetic variability of the species in North America (Lavergne & Molofsky, 2007;). Reed canary grass is an outbreeding species with good dispersal ability (;Voshell & Hilu, 2014). In addition to seed dispersal, it can also spread via extensive rhizome systems and can dominate large areas with a dense broad-leaved canopy. It is also very deep rooted, a trait that allows it to occupy apparently dry areas in open ground. It tolerates flooding and can grow in shallow water due to increased aerenchyma in its roots (Cope & Gray, 2009;). Its closest relatives are P. caesia, P. rotgesii and P. lindigii (which are sometimes considered conspecific), but it is also closely allied with P. aquatica, P. minor, P. maderensis, P. coerulescens, P. appendiculata and P. paradoxa in a lineage with x = 7 as its basic chromosome number, compared with some other Phalaris that have x = 6 (;;;Voshell & Hilu, 2014). Phalaris arundinacea has traditionally been used for grazing, hay production and soil conservation and has a popular ornamental variegated variety (var. picta). It has attracted attention as a bioenergy feedstock because of its high biomass yields and broad adaptation to a wide range of habitats, including pastures, damp woodland, forest margins and disturbed areas.
It thrives in a range of soils, particularly clays, and is rarely found in soils below pH 5 (Cope & Gray, 2009). It is therefore a prime candidate species for growth on marginal land that is unsuitable or unproductive for other agricultural uses. The challenge is to develop new high-yielding genotypes suitable for growth in a range of habitats and on marginal land (;;). Biomass yield and stress tolerance are key limiting factors for the production of P. arundinacea as a bioenergy feedstock (b;;). It generally has better cold and waterlogging tolerance than some other potential bioenergy crops such as Miscanthus and has been utilized in Scandinavia for bioenergy and fodder (). One limitation is that it is considered invasive in some countries (;Green & Galatowitsch, 2001;Lavergne & Molofsky, 2004;Barney & DiTomaso, 2011;). However, there is huge potential to utilize the natural genetic variation of this phenotypically variable species in breeding programmes (;;;).

Several methods exist for plastid genome sequencing (;). Here we examined the utility of a long-read PCR and MiSeq DNA sequencing approach developed by Uribe-Convers et al. to sequence a large proportion of the plastid genome in the sampled populations of Phalaris, as part of the EU-funded GrassMargins project (grassmargins.com), which aimed to develop new bioenergy crops suitable for marginal lands. No reference plastid genome for Phalaris is available, so we aligned the data to the related grass Lolium perenne (). We conducted extensive sampling of new germplasm across north-west Europe over latitudinal and longitudinal distances of 3000 and 2500 km, respectively. Populations were sampled for ex situ conservation from a broad range of habitats and assessed for plastid genome variation, diversity, phylogeographic pattern and gene pool structure.

Sample collection and ploidy measurement

Samples of P. arundinacea were collected in Denmark, Germany, Ireland, Poland, Sweden and the United Kingdom (Table 1, Fig. 1) over a distance of 3000 km along a latitudinal gradient (51°37′41.92″N to 65°59′58.04″N) and over a distance of 2500 km along a longitudinal gradient (8°45′42.29″W to 23°11′13.20″E). Ten populations per country were collected, each comprising 30 accessions. Seed from 30 plants, rhizome from 15 plants and representative herbarium specimens were taken at each site. Seed was stored in paper envelopes, and rhizomes were kept fresh in approximately 50 mL of potting compost in an aerated ziplock bag until plants could be grown from them. Plants were screened by flow cytometry for ploidy estimation against P. arundinacea genotypes whose tetraploidy was known from standard Feulgen staining of mitotic root-tip meristems (following ) and light microscopy at ×1000 magnification. Fresh leaves of 80 genotypes and four Phalaris standards were sent by courier to a commercial company, Plant Cytometry Services, the Netherlands (www.plantcytometry.nl), for flow cytometry analyses. The DNA content of samples was compared to known internal controls (Vinca major) with DAPI fluorochrome staining, and the resulting ratios were used to determine ploidy with reference to Phalaris standards of known ploidy. Because of time and money constraints, not all samples were sequenced to assess plastid DNA variation. Instead, a subset of 48 accessions grown from rhizome was used for sequencing, with approximately one to two plants per sampled population chosen on the basis of survival and health of the rhizome-grown material while maximizing the number of populations (Table 1).
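As a toy illustration of how the ratio-based ploidy call works, a sketch using the reported standard ratios (the 1.43 reference mean and 0.10 tolerance are assumptions for illustration, not the laboratory's calibration):

```go
package main

import "fmt"

// callPloidy compares a sample's DAPI fluorescence ratio (sample DNA
// content / Vinca major internal standard) against the mean ratio of
// reference plants of known ploidy. Tetraploid standards in this study
// averaged about 1.43; the 0.10 tolerance is an assumed cut-off.
func callPloidy(ratio float64) string {
	const tetraploidRef, tol = 1.43, 0.10
	if diff := ratio - tetraploidRef; diff > -tol && diff < tol {
		return "tetraploid (2n = 4x = 28)"
	}
	return "unresolved: re-check against other ploidy standards"
}

func main() {
	for _, r := range []float64{1.42, 1.46, 1.50} {
		fmt.Printf("ratio %.2f -> %s\n", r, callPloidy(r))
	}
}
```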
Long-read PCRs and next-generation sequencing

DNA was extracted from fresh or dried leaf tissue with a DNeasy Plant Extraction kit (Qiagen, Valencia, CA, USA); the quantity and quality of DNA of each sample were checked using a NanoDrop 2000 spectrophotometer (Thermo Scientific, Wilmington, DE, USA) and diluted to between 30 and 100 ng µL⁻¹. The primers and the long-read PCR protocol were as in Appendix 1 of Uribe-Convers et al. All regions (named 1-16; Fig. S1) described in that paper were tested and amplified on four samples of P. arundinacea. After two rounds of optimization (increasing/decreasing the number of cycles, the annealing temperature and/or the time of the elongation step), regions 2, 5, 11 and 14 were left out because of inconsistent amplification. The other regions were successfully amplified in the 48 samples with or without optimization (Table S1). PCR products were checked on a 1% agarose gel and purified by precipitation in a 20% polyethylene glycol 8000 (PEG) solution.

Data analyses

The Galaxy public online server (http://usegalaxy.org/) was used to analyse the MiSeq data (;;). FastQ files were imported into the server, and read quality was checked using FastQC. Reads were then cleaned at high stringency (minimum quality = 30, maximum number of bases allowed outside of quality range = 5) and assembled against a reference genome (L. perenne L., GenBank accession no. NC009950, Diekmann et al.) using BWA with default parameters (Li & Durbin, 2009). SNPs and INDELs were called using the MPileup tool, visually checked in IGV v2.3 (Broad Institute, www.broadinstitute.org), then filtered (minimum coverage depth = 5) using the SnpSift filter tool (). The individual files were combined, filtered again (maximum-likelihood estimate of the first alternative allele frequency, AF1 = 1) and then recoded into Excel. Genetic structure was investigated using STRUCTURE v2.3.4 (;), which applies a Markov chain Monte Carlo (MCMC) algorithm. This procedure clusters individuals into populations and estimates the proportion of membership in each population for each individual. An admixture model with correlated allele frequencies was used, the K value was set from 2 to 15, and 10 runs were performed for each value of K. The length of the burn-in period was set to 3000 iterations, and the MCMC chains after burn-in were run for an additional 2000 iterations. The optimal value of K was determined by examination of the ΔK statistic () using Structure Harvester (Earl & vonHoldt, 2012). The frequency distribution for the most probable number of K clusters was mapped in ARCGIS 10.1 (ESRI) for each sample. Finally, principal coordinate analyses were run in GENALEX 6.5 (Peakall & Smouse, 2012).
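The analysis above was run through Galaxy's hosted tools. As a rough command-line equivalent of the align-and-call steps, a sketch that shells out to BWA, SAMtools and BCFtools (file names are placeholders, and the modern bcftools mpileup/call pair stands in for the MPileup tool used on Galaxy):

```go
package main

import (
	"log"
	"os/exec"
)

// run executes one pipeline step, aborting on the first failure.
func run(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = log.Writer(), log.Writer()
	if err := cmd.Run(); err != nil {
		log.Fatalf("%s failed: %v", name, err)
	}
}

func main() {
	// Align cleaned reads to the L. perenne plastid reference
	// (GenBank NC009950), sort, then call variants. The paper's
	// minimum coverage depth of 5 would be applied as a later filter.
	run("bwa", "index", "NC009950.fa")
	run("sh", "-c", "bwa mem NC009950.fa sample_R1.fq sample_R2.fq > sample.sam")
	run("sh", "-c", "samtools sort -o sample.bam sample.sam && samtools index sample.bam")
	run("sh", "-c", "bcftools mpileup -f NC009950.fa sample.bam | bcftools call -mv -Ov -o sample.vcf")
}
```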
Results

All genotypes screened for DNA content by flow cytometry were uniformly tetraploid (Table S2). DNA content ratios relative to the internal standard were 1.41, 1.42, 1.44 and 1.46 for the known tetraploid plants and ranged from 1.42 to 1.50 for the other 80 Phalaris samples included here. All samples screened for ploidy and collected in NW Europe were therefore confirmed as tetraploid. Long-read PCR was successful for most primer combinations of Uribe-Convers et al.; with just 10 PCRs per sample, it is possible to amplify approximately two-thirds of the plastid genome. The amplicons were quantified, combined and sequenced using an Illumina MiSeq. Of 10 106 895 reads, 9 762 130 passed the filter and 97.75% of the reads were assigned to an individual, for an average of 198 802 reads per individual. The reads were not trimmed because of excellent per-base sequence quality (from the FastQC reports). Over 89% of the reads passed the high-stringency filter (8 509 126 reads). Reads were assembled to the reference genome and, on average, 57.9% of the genome (78 294 bp) was aligned (ranging from 41 981 to 97 555 bp, Table 2). The resulting assemblies had an average depth of coverage of 97× (minimum 30×, maximum 241×). Sequences can be found in GenBank under BioProject ID PRJNA301092. A total of 93.3% of the SNPs passed the filter, for a total of 2385 SNPs and INDELs. After removing the SNPs only found between Phalaris and the reference, 1793 SNPs were discovered among the Phalaris samples (Table 3), an average of 23 SNPs per kbp. The average number of SNPs in coding regions per kbp was 13. SNP content per gene was highly variable, ranging from 0 in the photosystem II reaction centre protein J (psbJ), the ribosomal protein L2 (rpl2), the ribosomal protein S18 (rps18) and the transfer RNA gene trnI-GAU to 113 in the ribosomal protein S12 (rps12).

Fig. 1 Sampling locations of Phalaris arundinacea in NW Europe. Plants were collected across north-to-south (3000 km) and east-to-west (2500 km) gradients.

The SNP data for Phalaris genotypes were subject to principal coordinate and Structure analyses to define cytoplasmic gene pools and examine clustering of individuals in a phylogeographic context. Axes 1 and 2 of the principal coordinate analysis (PC1 and PC2) explained 19.8% and 7.1% of the variation, respectively, and, although not entirely discrete (separated), a number of general groupings can be determined. The samples from the eastern part of the sampling range (Poland and Germany) cluster to the right of the plot and the western samples from Ireland and Britain towards the left. Some rough groupings by country were also resolved. Structuring is more evident across an east-to-west than a north-to-south cline. The Structure analysis revealed four clusters (Fig. S2) with the ΔK statistic (), and membership of genotypes to these clusters follows some, but not strong, geographical pattern (Figs 3 and 4). One cluster type is more common in the east (blue in Fig. 3) and another more common in the west and Sweden (red in Fig. 3). Some structure can be seen when cluster membership is mapped in ARCGIS (Fig. 4), especially for the Polish and Swedish material. The yellow and blue cluster types are less common in the west, and the blue type is particularly prevalent in the east.

Discussion

The utility of the long-read PCR MiSeq whole-genome plastid DNA sequencing method for Phalaris

A number of methods exist for whole-genome sequencing of plant plastid DNA (;). We applied an amplification-based method of plastid DNA isolation and enrichment using the long-read PCR primers of Uribe-Convers et al. We found that amplification and MiSeq sequencing were highly effective using the protocols and 'universal' primers described in Uribe-Convers et al., with nearly 90% of reads passing the high-stringency filter. The universal primers were designed for eudicots but showed good application to the monocot grass species studied here.
In addition, when the MiSeq reads were assembled to the reference genome for each genotype, an average of 57.9% of the genome (78 294 bp) was aligned (ranging from 41 981 to 97 555 bp, Table 3). The resulting assemblies had an average depth of coverage of 97× (minimum 30×, maximum 241×). Previous studies have used molecular markers to study Phalaris diversity, including microsatellites (), minisatellites (), ISSRs () and AFLPs (a), but these are not as easily transferable among laboratories as the SNP markers developed here. We have submitted the sequences to GenBank under BioProject ID PRJNA301092, and the data are available for future comparisons and analyses. It would, for example, be useful to extend the sampling to southern and eastern Europe, and the data can be used directly for that purpose. No complete plastid genome has been published for P. arundinacea, and we had to align our data to the related grass species L. perenne (). However, we have sequenced a maximum of 85% of the plastid genome of Phalaris here. Such data will be highly valuable for various aspects of comparative genomics in the future, with the added value of having population-level data (not just a single genotype). Nuclear SNP markers have also been developed and used for P. arundinacea by Ramstein et al. and M. Klaas, S. Barth & T.R. Hodkinson (in preparation) using genotyping by sequencing (GBS). Ramstein et al. used GBS to generate 5228 nuclear SNP markers and applied them in a genome-wide association study with 35 traits, but they did not assess plastid genome variation or population genetic variation at a broad geographical scale. Populations of P. arundinacea have been assessed by M. Klaas, S. Barth & T.R. Hodkinson (in preparation) with approximately the same set of plants included here. It will be useful to compare and interpret our plastid DNA results with those of nuclear SNP analysis in future studies. Combined, they will provide an excellent resource to organize and manage the genetic resources of Phalaris and to study its molecular ecology, including its invasion dynamics.

Diversity and plastid DNA variability

The extent of SNP discovery in the 48 sampled plants from the European populations indicates that it is not necessary to amplify and sequence the entire plastid genome of each P. arundinacea genotype, because sufficient diversity is recorded in the partial genomes sequenced in this study (approximately 60% of the genome). Plastid DNA is maternally inherited, and each locus would be expected to be linked and to provide similar population genetic and phylogeographic signal, apart from some differences that could be attributed to variation in mutation rate. In fact, the mutation rate is high and not uniform among genes (Table 3). The samples of P. arundinacea were collected in a broad range of habitats separated by large geographical distances. Genotypes were sampled from Poland, Germany, Denmark, Sweden, Ireland and the United Kingdom over a distance of 3000 km along a latitudinal gradient and over a distance of 2500 km along a longitudinal gradient (Fig. 1; Table 1). The sampling strategy aimed to encompass maximum geographical variation, with plants collected from waterlogged and dry well-drained sites, from freshwater and saline habitats, and from large and small stands (Table 1). The results of our plastid DNA study indicate that a high genetic diversity of Phalaris has been collected. High levels of plastid variability have also been recorded in other grasses such as L.
perenne, using plastid SSR markers, by McGrath et al. Lolium is also an outbreeding wind-pollinated species but differs in its habitat preferences from P. arundinacea. High variability was found in genic as well as non-genic regions. It is not known what adaptive significance these variants might have in European ecosystems, but it is clear that breeders have a high genetic variability of cytotypes to work from. The plastid types will often, but not always (because of uniparental inheritance, plastid capture and lower mutation rates; ), reflect patterns of nuclear genetic variability.

Population structure and differentiation

The principal coordinate analysis (Fig. 2) showed some geographical patterning of the genotypes despite the percentage of variation explained by each axis being low (PC1 20% and PC2 7%). Groupings can be seen in the PC1 vs. PC2 plot, but there is also considerable overlap among country groups in ordination space, reflecting a relatively high degree of gene flow. The samples from the eastern part of the sampling range (Poland and Germany) are grouped to the right-hand part of the plot and the western samples from Ireland and Britain towards the left. Structuring is more evident across an east-to-west than a north-to-south cline. This might reflect the high proportion of more northern samples that had an oceanic climate (Britain, Ireland, Denmark, Sweden) rather than the more continental climate of eastern Germany and Poland. The Structure analysis identified four major clusters (Figs 3 and S2), and a pattern similar to the principal coordinate analysis is recovered (Figs 3 and 4). One cluster type is more common in the east, including Poland and Germany (shown in blue in Fig. 3), and another is more common in northern Europe, including Ireland, Sweden and the United Kingdom (red in Fig. 3). Considerable admixture is also evident in some genotypes but not in others. The third and fourth clusters (green and yellow) do not relate to geographical distribution and are relatively randomly distributed across the range (Fig. 3), except that yellow is rarer in Ireland and the United Kingdom. Dilution of population genetic structuring relative to geographical proximity is expected because reed canary grass is a common outbreeding, wind-pollinated species with a gametophytic self-incompatibility system (). It also has good dispersal ability via its seeds, which are produced in profusion, are dispersed over the water and germinate in spring (Cope & Gray, 2009; Voshell & Hilu, 2014). It can also regenerate and spread via extensive rhizome systems over large areas. It is also possible that genotypes have been deliberately or inadvertently moved by humans for forage or habitat restoration. Unlike many other perennial rhizomatous grasses such as Dactylis, Lolium and Festuca, P. arundinacea is not extensively planted and moved around Europe by seed. We sampled populations with no known history of cultivation and do not expect to have sampled commercial cultivars or material introgressed via cultivar/wild population hybridization.

Fig. 3 Estimated population structure of Phalaris arundinacea using plastid SNP data and Structure analysis. Each individual is represented by a vertical bar, with SNP variation partitioned into coloured segments that represent the individual's estimated membership fractions in each of the K = 4 clusters (cluster 1 green; cluster 2 red; cluster 3 blue; cluster 4 yellow) determined as optimal by the ΔK statistic of Evanno et al. Samples are arranged according to country.
In a separate study within the grassmargins.com project, we sampled Dactylis glomerata across the same geographical range and did not detect the same extent of population genetic structuring relating to geographical location (data not shown), possibly due to the higher level of planting and human-mediated seed movement in that forage species. Polyploidy does not explain any of the observed structuring of genotypes, as all samples were recorded as tetraploid (Table S2). A range of ploidy levels has been recorded in European P. arundinacea, but the most common type is tetraploid. Our wide sampling of north-western European material indicates that tetraploid genotypes are indeed the dominant type, and perhaps the only type. Phalaris is variable morphologically in its vegetative and reproductive characters, and because of this some taxonomists recognize infraspecific categories, including subspecies and varieties. Cytologically, diploid (2n = 2x = 14), tetraploid (2n = 4x = 28) and hexaploid (2n = 6x = 42) cytotypes have been recorded (Anderson, 1961; McWilliam & Neal-Smith, 1962;). These three cytotypes were recognized as different taxa (Baldini, 1993, 1995) on the basis of morphological differences, chromosome number, ecology and geography: P. rotgesii (Husn.) Baldini (2x), restricted to Corsica and Sardinia; P. arundinacea L. (4x), with a holarctic distribution; and P. caesia Nees (6x), which originated in the Mediterranean area and spread to Eastern and Southern Africa. Several studies also indicate the presence of aneuploids and other meiotic irregularities ().

Conclusions

Our analysis of near-whole-plastid genome data in P. arundinacea clearly defines groupings for genetic resource characterization and has helped define useful breeding material for future evaluation. It has provided valuable data on plastid genome variation of P. arundinacea across north-west Europe that can be compared to nuclear GBS data in future studies. Plastid SNP variation is high, and genotypes show some broad-scale population genetic structuring, particularly along an east-to-west cline, despite the good dispersal ability of P. arundinacea via seed and its allogamous wind-pollinated breeding system.

Fig. 4 Cluster membership (K = 4) for the Structure analysis mapped in ARCGIS. Coloured segments represent the individual's estimated membership fractions in each of the K = 4 clusters (cluster 1 green; cluster 2 red; cluster 3 blue; cluster 4 yellow) determined as optimal by the ΔK statistic of Evanno et al.

Supporting Information

Additional Supporting Information may be found online in the supporting information tab for this article:
Figure S1. Long-read primers used for plastid genome sequencing.
Figure S2. Optimal value of K determined by the ΔK statistic () using Structure Harvester (Earl & vonHoldt, 2012).
Table S1. Parameters used for long-read PCR for each primer pair of Uribe-Convers et al.
Table S2. Flow cytometry results for Phalaris arundinacea ploidy determination.
/* Copyright 2013 The jeo project. All rights reserved.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package io.jeo.vector;

import java.util.List;

import org.junit.Test;

import static org.junit.Assert.*;

public class VectorQueryTest {

    @Test
    public void testGetOrderedFields() {
        Schema schema = new SchemaBuilder("widgets")
            .fields("sp:String,ip:Integer,pp:Point:srid=4326").schema();

        // Request two known fields plus one ("blah") that is not in the
        // schema: the unknown field is dropped and the result follows
        // schema order, not request order.
        List<String> fields = new VectorQuery().fields("pp", "ip", "blah").fieldsIn(schema);
        assertEquals(2, fields.size());
        assertEquals("ip", fields.get(0));
        assertEquals("pp", fields.get(1));
    }
}
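What the test pins down is that fieldsIn intersects the requested fields with the schema and returns them in schema order, dropping unknown names like "blah". A standalone sketch of that behaviour in Go (a loose analogue, not jeo's API):

```go
package main

import "fmt"

// fieldsIn keeps only the requested fields that exist in the schema,
// returned in schema order, mirroring what the test above asserts.
func fieldsIn(schema, requested []string) []string {
	want := make(map[string]bool, len(requested))
	for _, f := range requested {
		want[f] = true
	}
	var out []string
	for _, f := range schema { // schema order wins over request order
		if want[f] {
			out = append(out, f)
		}
	}
	return out
}

func main() {
	schema := []string{"sp", "ip", "pp"}
	fmt.Println(fieldsIn(schema, []string{"pp", "ip", "blah"})) // [ip pp]
}
```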
''' Provides the core API for Cheetah. See the docstring in the Template class and the Users' Guide for more information ''' ################################################################################ ## DEPENDENCIES import sys # used in the error handling code import re # used to define the internal delims regex import new # used to bind methods and create dummy modules import logging import string import os.path import time # used in the cache refresh code from random import randrange import imp import inspect import StringIO import traceback import pprint import cgi # Used by .webInput() if the template is a CGI script. import types from types import StringType, ClassType try: from types import StringTypes except ImportError: StringTypes = (types.StringType, types.UnicodeType) try: from threading import Lock except ImportError: class Lock: def acquire(self): pass def release(self): pass try: x = set() except NameError: # Python 2.3 compatibility from sets import Set as set from Cheetah.Version import convertVersionStringToTuple, MinCompatibleVersionTuple from Cheetah.Version import MinCompatibleVersion # Base classes for Template from Cheetah.Servlet import Servlet # More intra-package imports ... from Cheetah.Parser import ParseError, SourceReader from Cheetah.Compiler import Compiler, DEFAULT_COMPILER_SETTINGS from Cheetah import ErrorCatchers # for placeholder tags from Cheetah import Filters # the output filters from Cheetah.convertTmplPathToModuleName import convertTmplPathToModuleName from Cheetah.Utils.Misc import checkKeywords # Used in Template.__init__ from Cheetah.Utils.Indenter import Indenter # Used in Template.__init__ and for # placeholders from Cheetah.NameMapper import NotFound, valueFromSearchList from Cheetah.CacheStore import MemoryCacheStore, MemcachedCacheStore from Cheetah.CacheRegion import CacheRegion from Cheetah.Utils.WebInputMixin import _Converter, _lookup, NonNumericInputError from Cheetah.Unspecified import Unspecified # Decide whether to use the file modification time in file's cache key __checkFileMtime = True def checkFileMtime(value): globals()['__checkFileMtime'] = value class Error(Exception): pass class PreprocessError(Error): pass def hashList(l): hashedList = [] for v in l: if isinstance(v, dict): v = hashDict(v) elif isinstance(v, list): v = hashList(v) hashedList.append(v) return hash(tuple(hashedList)) def hashDict(d): items = sorted(d.items()) hashedList = [] for k, v in items: if isinstance(v, dict): v = hashDict(v) elif isinstance(v, list): v = hashList(v) hashedList.append((k, v)) return hash(tuple(hashedList)) ################################################################################ ## MODULE GLOBALS AND CONSTANTS def _genUniqueModuleName(baseModuleName): """The calling code is responsible for concurrency locking. """ if baseModuleName not in sys.modules: finalName = baseModuleName else: finalName = ('cheetah_%s_%s_%s'%(baseModuleName, str(time.time()).replace('.', '_'), str(randrange(10000, 99999)))) return finalName # Cache of a cgi.FieldStorage() instance, maintained by .webInput(). # This is only relavent to templates used as CGI scripts. _formUsedByWebInput = None def updateLinecache(filename, src): import linecache size = len(src) mtime = time.time() lines = src.splitlines() fullname = filename linecache.cache[filename] = size, mtime, lines, fullname class CompileCacheItem(object): pass class TemplatePreprocessor(object): ''' This is used with the preprocessors argument to Template.compile(). 
See the docstring for Template.compile ** Preprocessors are an advanced topic ** ''' def __init__(self, settings): self._settings = settings def preprocess(self, source, file): """Create an intermediate template and return the source code it outputs """ settings = self._settings if not source: # @@TR: this needs improving if isinstance(file, (str, unicode)): # it's a filename. f = open(file) source = f.read() f.close() elif hasattr(file, 'read'): source = file.read() file = None templateAPIClass = settings.templateAPIClass possibleKwArgs = [ arg for arg in inspect.getargs(templateAPIClass.compile.im_func.func_code)[0] if arg not in ('klass', 'source', 'file',)] compileKwArgs = {} for arg in possibleKwArgs: if hasattr(settings, arg): compileKwArgs[arg] = getattr(settings, arg) tmplClass = templateAPIClass.compile(source=source, file=file, **compileKwArgs) tmplInstance = tmplClass(**settings.templateInitArgs) outputSource = settings.outputTransformer(tmplInstance) outputFile = None return outputSource, outputFile class Template(Servlet): ''' This class provides a) methods used by templates at runtime and b) methods for compiling Cheetah source code into template classes. This documentation assumes you already know Python and the basics of object oriented programming. If you don't know Python, see the sections of the Cheetah Users' Guide for non-programmers. It also assumes you have read about Cheetah's syntax in the Users' Guide. The following explains how to use Cheetah from within Python programs or via the interpreter. If you statically compile your templates on the command line using the 'cheetah' script, this is not relevant to you. Statically compiled Cheetah template modules/classes (e.g. myTemplate.py: MyTemplateClasss) are just like any other Python module or class. Also note, most Python web frameworks (Webware, Aquarium, mod_python, Turbogears, CherryPy, Quixote, etc.) provide plugins that handle Cheetah compilation for you. There are several possible usage patterns: 1) tclass = Template.compile(src) t1 = tclass() # or tclass(namespaces=[namespace,...]) t2 = tclass() # or tclass(namespaces=[namespace2,...]) outputStr = str(t1) # or outputStr = t1.aMethodYouDefined() Template.compile provides a rich and very flexible API via its optional arguments so there are many possible variations of this pattern. One example is: tclass = Template.compile('hello $name from $caller', baseclass=dict) print tclass(name='world', caller='me') See the Template.compile() docstring for more details. 2) tmplInstance = Template(src) # or Template(src, namespaces=[namespace,...]) outputStr = str(tmplInstance) # or outputStr = tmplInstance.aMethodYouDefined(...args...) Notes on the usage patterns: usage pattern 1) This is the most flexible, but it is slightly more verbose unless you write a wrapper function to hide the plumbing. Under the hood, all other usage patterns are based on this approach. Templates compiled this way can #extend (subclass) any Python baseclass: old-style or new-style (based on object or a builtin type). usage pattern 2) This was Cheetah's original usage pattern. It returns an instance, but you can still access the generated class via tmplInstance.__class__. If you want to use several different namespace 'searchLists' with a single template source definition, you're better off with Template.compile (1). Limitations (use pattern 1 instead): - Templates compiled this way can only #extend subclasses of the new-style 'object' baseclass. Cheetah.Template is a subclass of 'object'. 
You also can not #extend dict, list, or other builtin types. - If your template baseclass' __init__ constructor expects args there is currently no way to pass them in. If you need to subclass a dynamically compiled Cheetah class, do something like this: from Cheetah.Template import Template T1 = Template.compile('$meth1 #def meth1: this is meth1 in T1') T2 = Template.compile('#implements meth1\nthis is meth1 redefined in T2', baseclass=T1) print T1, T1() print T2, T2() Note about class and instance attribute names: Attributes used by Cheetah have a special prefix to avoid confusion with the attributes of the templates themselves or those of template baseclasses. Class attributes which are used in class methods look like this: klass._CHEETAH_useCompilationCache (_CHEETAH_xxx) Instance attributes look like this: klass._CHEETAH__globalSetVars (_CHEETAH__xxx with 2 underscores) ''' # this is used by ._addCheetahPlumbingCodeToClass() _CHEETAH_requiredCheetahMethods = ( '_initCheetahInstance', 'searchList', 'errorCatcher', 'getVar', 'varExists', 'getFileContents', 'i18n', 'runAsMainProgram', 'respond', 'shutdown', 'webInput', 'serverSidePath', 'generatedClassCode', 'generatedModuleCode', '_getCacheStore', '_getCacheStoreIdPrefix', '_createCacheRegion', 'getCacheRegion', 'getCacheRegions', 'refreshCache', '_handleCheetahInclude', '_getTemplateAPIClassForIncludeDirectiveCompilation', ) _CHEETAH_requiredCheetahClassMethods = ('subclass',) _CHEETAH_requiredCheetahClassAttributes = ('cacheRegionClass', 'cacheStore', 'cacheStoreIdPrefix', 'cacheStoreClass') ## the following are used by .compile(). Most are documented in its docstring. _CHEETAH_cacheModuleFilesForTracebacks = False _CHEETAH_cacheDirForModuleFiles = None # change to a dirname _CHEETAH_compileCache = dict() # cache store for compiled code and classes # To do something other than simple in-memory caching you can create an # alternative cache store. It just needs to support the basics of Python's # mapping/dict protocol. 
E.g.: # class AdvCachingTemplate(Template): # _CHEETAH_compileCache = MemoryOrFileCache() _CHEETAH_compileLock = Lock() # used to prevent race conditions _CHEETAH_defaultMainMethodName = None _CHEETAH_compilerSettings = None _CHEETAH_compilerClass = Compiler _CHEETAH_compilerInstance = None _CHEETAH_cacheCompilationResults = True _CHEETAH_useCompilationCache = True _CHEETAH_keepRefToGeneratedCode = True _CHEETAH_defaultBaseclassForTemplates = None _CHEETAH_defaultClassNameForTemplates = None # defaults to DEFAULT_COMPILER_SETTINGS['mainMethodName']: _CHEETAH_defaultMainMethodNameForTemplates = None _CHEETAH_defaultModuleNameForTemplates = 'DynamicallyCompiledCheetahTemplate' _CHEETAH_defaultModuleGlobalsForTemplates = None _CHEETAH_preprocessors = None _CHEETAH_defaultPreprocessorClass = TemplatePreprocessor ## The following attributes are used by instance methods: _CHEETAH_generatedModuleCode = None NonNumericInputError = NonNumericInputError _CHEETAH_cacheRegionClass = CacheRegion _CHEETAH_cacheStoreClass = MemoryCacheStore #_CHEETAH_cacheStoreClass = MemcachedCacheStore _CHEETAH_cacheStore = None _CHEETAH_cacheStoreIdPrefix = None @classmethod def _getCompilerClass(klass, source=None, file=None): return klass._CHEETAH_compilerClass @classmethod def _getCompilerSettings(klass, source=None, file=None): return klass._CHEETAH_compilerSettings @classmethod def compile(klass, source=None, file=None, returnAClass=True, compilerSettings=Unspecified, compilerClass=Unspecified, moduleName=None, className=Unspecified, mainMethodName=Unspecified, baseclass=Unspecified, moduleGlobals=Unspecified, cacheCompilationResults=Unspecified, useCache=Unspecified, preprocessors=Unspecified, cacheModuleFilesForTracebacks=Unspecified, cacheDirForModuleFiles=Unspecified, commandlineopts=None, keepRefToGeneratedCode=Unspecified, ): """ The core API for compiling Cheetah source code into template classes. This class method compiles Cheetah source code and returns a python class. You then create template instances using that class. All Cheetah's other compilation API's use this method under the hood. Internally, this method a) parses the Cheetah source code and generates Python code defining a module with a single class in it, b) dynamically creates a module object with a unique name, c) execs the generated code in that module's namespace then inserts the module into sys.modules, and d) returns a reference to the generated class. If you want to get the generated python source code instead, pass the argument returnAClass=False. It caches generated code and classes. See the descriptions of the arguments'cacheCompilationResults' and 'useCache' for details. This doesn't mean that templates will automatically recompile themselves when the source file changes. Rather, if you call Template.compile(src) or Template.compile(file=path) repeatedly it will attempt to return a cached class definition instead of recompiling. Hooks are provided template source preprocessing. See the notes on the 'preprocessors' arg. If you are an advanced user and need to customize the way Cheetah parses source code or outputs Python code, you should check out the compilerSettings argument. Arguments: You must provide either a 'source' or 'file' arg, but not both: - source (string or None) - file (string path, file-like object, or None) The rest of the arguments are strictly optional. All but the first have defaults in attributes of the Template class which can be overridden in subclasses of this class. 
Working with most of these is an advanced topic. - returnAClass=True If false, return the generated module code rather than a class. - compilerSettings (a dict) Default: Template._CHEETAH_compilerSettings=None a dictionary of settings to override those defined in DEFAULT_COMPILER_SETTINGS. These can also be overridden in your template source code with the #compiler or #compiler-settings directives. - compilerClass (a class) Default: Template._CHEETAH_compilerClass=Cheetah.Compiler.Compiler a subclass of Cheetah.Compiler.Compiler. Mucking with this is a very advanced topic. - moduleName (a string) Default: Template._CHEETAH_defaultModuleNameForTemplates ='DynamicallyCompiledCheetahTemplate' What to name the generated Python module. If the provided value is None and a file arg was given, the moduleName is created from the file path. In all cases if the moduleName provided is already in sys.modules it is passed through a filter that generates a unique variant of the name. - className (a string) Default: Template._CHEETAH_defaultClassNameForTemplates=None What to name the generated Python class. If the provided value is None, the moduleName is use as the class name. - mainMethodName (a string) Default: Template._CHEETAH_defaultMainMethodNameForTemplates =None (and thus DEFAULT_COMPILER_SETTINGS['mainMethodName']) What to name the main output generating method in the compiled template class. - baseclass (a string or a class) Default: Template._CHEETAH_defaultBaseclassForTemplates=None Specifies the baseclass for the template without manually including an #extends directive in the source. The #extends directive trumps this arg. If the provided value is a string you must make sure that a class reference by that name is available to your template, either by using an #import directive or by providing it in the arg 'moduleGlobals'. If the provided value is a class, Cheetah will handle all the details for you. - moduleGlobals (a dict) Default: Template._CHEETAH_defaultModuleGlobalsForTemplates=None A dict of vars that will be added to the global namespace of the module the generated code is executed in, prior to the execution of that code. This should be Python values, not code strings! - cacheCompilationResults (True/False) Default: Template._CHEETAH_cacheCompilationResults=True Tells Cheetah to cache the generated code and classes so that they can be reused if Template.compile() is called multiple times with the same source and options. - useCache (True/False) Default: Template._CHEETAH_useCompilationCache=True Should the compilation cache be used? If True and a previous compilation created a cached template class with the same source code, compiler settings and other options, the cached template class will be returned. 
- cacheModuleFilesForTracebacks (True/False) Default: Template._CHEETAH_cacheModuleFilesForTracebacks=False In earlier versions of Cheetah tracebacks from exceptions that were raised inside dynamically compiled Cheetah templates were opaque because Python didn't have access to a python source file to use in the traceback: File "xxxx.py", line 192, in getTextiledContent content = str(template(searchList=searchList)) File "cheetah_yyyy.py", line 202, in __str__ File "cheetah_yyyy.py", line 187, in respond File "cheetah_yyyy.py", line 139, in writeBody ZeroDivisionError: integer division or modulo by zero It is now possible to keep those files in a cache dir and allow Python to include the actual source lines in tracebacks and makes them much easier to understand: File "xxxx.py", line 192, in getTextiledContent content = str(template(searchList=searchList)) File "/tmp/CheetahCacheDir/cheetah_yyyy.py", line 202, in __str__ def __str__(self): return self.respond() File "/tmp/CheetahCacheDir/cheetah_yyyy.py", line 187, in respond self.writeBody(trans=trans) File "/tmp/CheetahCacheDir/cheetah_yyyy.py", line 139, in writeBody __v = 0/0 # $(0/0) ZeroDivisionError: integer division or modulo by zero - cacheDirForModuleFiles (a string representing a dir path) Default: Template._CHEETAH_cacheDirForModuleFiles=None See notes on cacheModuleFilesForTracebacks. - preprocessors Default: Template._CHEETAH_preprocessors=None ** THIS IS A VERY ADVANCED TOPIC ** These are used to transform the source code prior to compilation. They provide a way to use Cheetah as a code generator for Cheetah code. In other words, you use one Cheetah template to output the source code for another Cheetah template. The major expected use cases are: a) 'compile-time caching' aka 'partial template binding', wherein an intermediate Cheetah template is used to output the source for the final Cheetah template. The intermediate template is a mix of a modified Cheetah syntax (the 'preprocess syntax') and standard Cheetah syntax. The preprocessor syntax is executed at compile time and outputs Cheetah code which is then compiled in turn. This approach allows one to completely soft-code all the elements in the template which are subject to change yet have it compile to extremely efficient Python code with everything but the elements that must be variable at runtime (per browser request, etc.) compiled as static strings. Examples of this usage pattern will be added to the Cheetah Users' Guide. The'preprocess syntax' is just Cheetah's standard one with alternatives for the $ and # tokens: e.g. '@' and '%' for code like this @aPreprocessVar $aRuntimeVar %if aCompileTimeCondition then yyy else zzz %% preprocessor comment #if aRunTimeCondition then aaa else bbb ## normal comment $aRuntimeVar b) adding #import and #extends directives dynamically based on the source If preprocessors are provided, Cheetah pipes the source code through each one in the order provided. Each preprocessor should accept the args (source, file) and should return a tuple (source, file). The argument value should be a list, but a single non-list value is acceptable and will automatically be converted into a list. Each item in the list will be passed through Template._normalizePreprocessor(). The items should either match one of the following forms: - an object with a .preprocess(source, file) method - a callable with the following signature: source, file = f(source, file) or one of the forms below: - a single string denoting the 2 'tokens' for the preprocess syntax. 
The tokens should be in the order (placeholderToken, directiveToken) and should separated with a space: e.g. '@ %' klass = Template.compile(src, preprocessors='@ %') # or klass = Template.compile(src, preprocessors=['@ %']) - a dict with the following keys or an object with the following attributes (all are optional, but nothing will happen if you don't provide at least one): - tokens: same as the single string described above. You can also provide a tuple of 2 strings. - searchList: the searchList used for preprocess $placeholders - compilerSettings: used in the compilation of the intermediate template - templateAPIClass: an optional subclass of `Template` - outputTransformer: a simple hook for passing in a callable which can do further transformations of the preprocessor output, or do something else like debug logging. The default is str(). + any keyword arguments to Template.compile which you want to provide for the compilation of the intermediate template. klass = Template.compile(src, preprocessors=[ dict(tokens='@ %', searchList=[...]) ] ) """ errmsg = "arg '%s' must be %s" if not isinstance(source, (types.NoneType, basestring)): raise TypeError(errmsg % ('source', 'string or None')) if not isinstance(file, (types.NoneType, basestring, types.FileType)): raise TypeError(errmsg % ('file', 'string, file-like object, or None')) if baseclass is Unspecified: baseclass = klass._CHEETAH_defaultBaseclassForTemplates if isinstance(baseclass, Template): baseclass = baseclass.__class__ if not isinstance(baseclass, (types.NoneType, basestring, types.ClassType, types.TypeType)): raise TypeError(errmsg % ('baseclass', 'string, class or None')) if cacheCompilationResults is Unspecified: cacheCompilationResults = klass._CHEETAH_cacheCompilationResults if not isinstance(cacheCompilationResults, (int, bool)): raise TypeError(errmsg % ('cacheCompilationResults', 'boolean')) if useCache is Unspecified: useCache = klass._CHEETAH_useCompilationCache if not isinstance(useCache, (int, bool)): raise TypeError(errmsg % ('useCache', 'boolean')) if compilerSettings is Unspecified: compilerSettings = klass._getCompilerSettings(source, file) or {} if not isinstance(compilerSettings, dict): raise TypeError(errmsg % ('compilerSettings', 'dictionary')) if compilerClass is Unspecified: compilerClass = klass._getCompilerClass(source, file) if preprocessors is Unspecified: preprocessors = klass._CHEETAH_preprocessors if keepRefToGeneratedCode is Unspecified: keepRefToGeneratedCode = klass._CHEETAH_keepRefToGeneratedCode if not isinstance(keepRefToGeneratedCode, (int, bool)): raise TypeError(errmsg % ('keepReftoGeneratedCode', 'boolean')) if not isinstance(moduleName, (types.NoneType, basestring)): raise TypeError(errmsg % ('moduleName', 'string or None')) __orig_file__ = None if not moduleName: if file and isinstance(file, basestring): moduleName = convertTmplPathToModuleName(file) __orig_file__ = file else: moduleName = klass._CHEETAH_defaultModuleNameForTemplates if className is Unspecified: className = klass._CHEETAH_defaultClassNameForTemplates if not isinstance(className, (types.NoneType, basestring)): raise TypeError(errmsg % ('className', 'string or None')) className = re.sub(r'^_+','', className or moduleName) if mainMethodName is Unspecified: mainMethodName = klass._CHEETAH_defaultMainMethodNameForTemplates if not isinstance(mainMethodName, (types.NoneType, basestring)): raise TypeError(errmsg % ('mainMethodName', 'string or None')) if moduleGlobals is Unspecified: moduleGlobals = 
klass._CHEETAH_defaultModuleGlobalsForTemplates if cacheModuleFilesForTracebacks is Unspecified: cacheModuleFilesForTracebacks = klass._CHEETAH_cacheModuleFilesForTracebacks if not isinstance(cacheModuleFilesForTracebacks, (int, bool)): raise TypeError(errmsg % ('cacheModuleFilesForTracebacks', 'boolean')) if cacheDirForModuleFiles is Unspecified: cacheDirForModuleFiles = klass._CHEETAH_cacheDirForModuleFiles if not isinstance(cacheDirForModuleFiles, (types.NoneType, basestring)): raise TypeError(errmsg % ('cacheDirForModuleFiles', 'string or None')) ################################################## ## handle any preprocessors if preprocessors: origSrc = source source, file = klass._preprocessSource(source, file, preprocessors) ################################################## ## compilation, using cache if requested/possible baseclassValue = None baseclassName = None if baseclass: if isinstance(baseclass, basestring): baseclassName = baseclass elif isinstance(baseclass, (types.ClassType, types.TypeType)): # @@TR: should soft-code this baseclassName = 'CHEETAH_dynamicallyAssignedBaseClass_'+baseclass.__name__ baseclassValue = baseclass cacheHash = None cacheItem = None if source or isinstance(file, basestring): compilerSettingsHash = None if compilerSettings: compilerSettingsHash = hashDict(compilerSettings) moduleGlobalsHash = None if moduleGlobals: moduleGlobalsHash = hashDict(moduleGlobals) fileHash = None if file: fileHash = str(hash(file)) if globals()['__checkFileMtime']: fileHash += str(os.path.getmtime(file)) try: # @@TR: find some way to create a cacheHash that is consistent # between process restarts. It would allow for caching the # compiled module on disk and thereby reduce the startup time # for applications that use a lot of dynamically compiled # templates. 
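# The hash built below folds in everything that can affect the generated
# module: the template source, the file hash (plus its mtime when
# __checkFileMtime is set), the class/module/main-method names, the compiler
# class, the baseclass, the compiler settings, the module globals, and the
# module-file cache dir.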
cacheHash = ''.join([str(v) for v in [hash(source), fileHash, className, moduleName, mainMethodName, hash(compilerClass), hash(baseclass), compilerSettingsHash, moduleGlobalsHash, hash(cacheDirForModuleFiles), ]]) except: #@@TR: should add some logging to this pass outputEncoding = 'ascii' compiler = None if useCache and cacheHash and cacheHash in klass._CHEETAH_compileCache: cacheItem = klass._CHEETAH_compileCache[cacheHash] generatedModuleCode = cacheItem.code else: compiler = compilerClass(source, file, moduleName=moduleName, mainClassName=className, baseclassName=baseclassName, mainMethodName=mainMethodName, settings=(compilerSettings or {})) if commandlineopts: compiler.setShBang(commandlineopts.shbang) compiler.compile() generatedModuleCode = compiler.getModuleCode() outputEncoding = compiler.getModuleEncoding() if not returnAClass: # This is a bit of a hackish solution to make sure we're setting the proper # encoding on generated code that is destined to be written to a file if not outputEncoding == 'ascii': generatedModuleCode = generatedModuleCode.split('\n') generatedModuleCode.insert(1, '# -*- coding: %s -*-' % outputEncoding) generatedModuleCode = '\n'.join(generatedModuleCode) return generatedModuleCode.encode(outputEncoding) else: if cacheItem: cacheItem.lastCheckoutTime = time.time() return cacheItem.klass try: klass._CHEETAH_compileLock.acquire() uniqueModuleName = _genUniqueModuleName(moduleName) __file__ = uniqueModuleName+'.py' # relative file path with no dir part if cacheModuleFilesForTracebacks: if not os.path.exists(cacheDirForModuleFiles): raise Exception('%s does not exist'%cacheDirForModuleFiles) __file__ = os.path.join(cacheDirForModuleFiles, __file__) # @@TR: might want to assert that it doesn't already exist open(__file__, 'w').write(generatedModuleCode) # @@TR: should probably restrict the perms, etc. 
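# A fresh module object is created for the generated code; any caller-supplied
# moduleGlobals and the dynamically assigned baseclass are injected into it
# before the generated source is compiled and exec'd into its namespace.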
mod = new.module(str(uniqueModuleName)) if moduleGlobals: for k, v in moduleGlobals.items(): setattr(mod, k, v) mod.__file__ = __file__ if __orig_file__ and os.path.exists(__orig_file__): # this is used in the WebKit filemonitoring code mod.__orig_file__ = __orig_file__ if baseclass and baseclassValue: setattr(mod, baseclassName, baseclassValue) ## try: co = compile(generatedModuleCode, __file__, 'exec') exec(co, mod.__dict__) except SyntaxError, e: try: parseError = genParserErrorFromPythonException( source, file, generatedModuleCode, exception=e) except: updateLinecache(__file__, generatedModuleCode) e.generatedModuleCode = generatedModuleCode raise e else: raise parseError except Exception, e: updateLinecache(__file__, generatedModuleCode) e.generatedModuleCode = generatedModuleCode raise ## sys.modules[uniqueModuleName] = mod finally: klass._CHEETAH_compileLock.release() templateClass = getattr(mod, className) if (cacheCompilationResults and cacheHash and cacheHash not in klass._CHEETAH_compileCache): cacheItem = CompileCacheItem() cacheItem.cacheTime = cacheItem.lastCheckoutTime = time.time() cacheItem.code = generatedModuleCode cacheItem.klass = templateClass templateClass._CHEETAH_isInCompilationCache = True klass._CHEETAH_compileCache[cacheHash] = cacheItem else: templateClass._CHEETAH_isInCompilationCache = False if keepRefToGeneratedCode or cacheCompilationResults: templateClass._CHEETAH_generatedModuleCode = generatedModuleCode # If we have a compiler object, let's set it to the compiler class # to help the directive analyzer code if compiler: templateClass._CHEETAH_compilerInstance = compiler return templateClass @classmethod def subclass(klass, *args, **kws): """Takes the same args as the .compile() classmethod and returns a template that is a subclass of the template this method is called from. T1 = Template.compile(' foo - $meth1 - bar\n#def meth1: this is T1.meth1') T2 = T1.subclass('#implements meth1\n this is T2.meth1') """ kws['baseclass'] = klass if isinstance(klass, Template): templateAPIClass = klass else: templateAPIClass = Template return templateAPIClass.compile(*args, **kws) @classmethod def _preprocessSource(klass, source, file, preprocessors): """Iterates through the .compile() classmethod's preprocessors argument and pipes the source code through each each preprocessor. It returns the tuple (source, file) which is then used by Template.compile to finish the compilation. """ if not isinstance(preprocessors, (list, tuple)): preprocessors = [preprocessors] for preprocessor in preprocessors: preprocessor = klass._normalizePreprocessorArg(preprocessor) source, file = preprocessor.preprocess(source, file) return source, file @classmethod def _normalizePreprocessorArg(klass, arg): """Used to convert the items in the .compile() classmethod's preprocessors argument into real source preprocessors. This permits the use of several shortcut forms for defining preprocessors. 
""" if hasattr(arg, 'preprocess'): return arg elif hasattr(arg, '__call__'): class WrapperPreprocessor: def preprocess(self, source, file): return arg(source, file) return WrapperPreprocessor() else: class Settings(object): placeholderToken = None directiveToken = None settings = Settings() if isinstance(arg, str) or isinstance(arg, (list, tuple)): settings.tokens = arg elif isinstance(arg, dict): for k, v in arg.items(): setattr(settings, k, v) else: settings = arg settings = klass._normalizePreprocessorSettings(settings) return klass._CHEETAH_defaultPreprocessorClass(settings) @classmethod def _normalizePreprocessorSettings(klass, settings): settings.keepRefToGeneratedCode = True def normalizeSearchList(searchList): if not isinstance(searchList, (list, tuple)): searchList = [searchList] return searchList def normalizeTokens(tokens): if isinstance(tokens, str): return tokens.split() # space delimited string e.g.'@ %' elif isinstance(tokens, (list, tuple)): return tokens else: raise PreprocessError('invalid tokens argument: %r'%tokens) if hasattr(settings, 'tokens'): (settings.placeholderToken, settings.directiveToken) = normalizeTokens(settings.tokens) if (not getattr(settings, 'compilerSettings', None) and not getattr(settings, 'placeholderToken', None) ): raise TypeError( 'Preprocessor requires either a "tokens" or a "compilerSettings" arg.' ' Neither was provided.') if not hasattr(settings, 'templateInitArgs'): settings.templateInitArgs = {} if 'searchList' not in settings.templateInitArgs: if not hasattr(settings, 'searchList') and hasattr(settings, 'namespaces'): settings.searchList = settings.namespaces elif not hasattr(settings, 'searchList'): settings.searchList = [] settings.templateInitArgs['searchList'] = settings.searchList settings.templateInitArgs['searchList'] = ( normalizeSearchList(settings.templateInitArgs['searchList'])) if not hasattr(settings, 'outputTransformer'): settings.outputTransformer = unicode if not hasattr(settings, 'templateAPIClass'): class PreprocessTemplateAPIClass(klass): pass settings.templateAPIClass = PreprocessTemplateAPIClass if not hasattr(settings, 'compilerSettings'): settings.compilerSettings = {} klass._updateSettingsWithPreprocessTokens( compilerSettings=settings.compilerSettings, placeholderToken=settings.placeholderToken, directiveToken=settings.directiveToken ) return settings @classmethod def _updateSettingsWithPreprocessTokens( klass, compilerSettings, placeholderToken, directiveToken): if (placeholderToken and 'cheetahVarStartToken' not in compilerSettings): compilerSettings['cheetahVarStartToken'] = placeholderToken if directiveToken: if 'directiveStartToken' not in compilerSettings: compilerSettings['directiveStartToken'] = directiveToken if 'directiveEndToken' not in compilerSettings: compilerSettings['directiveEndToken'] = directiveToken if 'commentStartToken' not in compilerSettings: compilerSettings['commentStartToken'] = directiveToken*2 if 'multiLineCommentStartToken' not in compilerSettings: compilerSettings['multiLineCommentStartToken'] = ( directiveToken+'*') if 'multiLineCommentEndToken' not in compilerSettings: compilerSettings['multiLineCommentEndToken'] = ( '*'+directiveToken) if 'EOLSlurpToken' not in compilerSettings: compilerSettings['EOLSlurpToken'] = directiveToken @classmethod def _addCheetahPlumbingCodeToClass(klass, concreteTemplateClass): """If concreteTemplateClass is not a subclass of Cheetah.Template, add the required cheetah methods and attributes to it. 
This is called on each new template class after it has been compiled. If concreteTemplateClass is not a subclass of Cheetah.Template but already has method with the same name as one of the required cheetah methods, this will skip that method. """ for methodname in klass._CHEETAH_requiredCheetahMethods: if not hasattr(concreteTemplateClass, methodname): method = getattr(Template, methodname) newMethod = new.instancemethod(method.im_func, None, concreteTemplateClass) #print methodname, method setattr(concreteTemplateClass, methodname, newMethod) for classMethName in klass._CHEETAH_requiredCheetahClassMethods: if not hasattr(concreteTemplateClass, classMethName): meth = getattr(klass, classMethName) setattr(concreteTemplateClass, classMethName, classmethod(meth.im_func)) for attrname in klass._CHEETAH_requiredCheetahClassAttributes: attrname = '_CHEETAH_'+attrname if not hasattr(concreteTemplateClass, attrname): attrVal = getattr(klass, attrname) setattr(concreteTemplateClass, attrname, attrVal) if (not hasattr(concreteTemplateClass, '__str__') or concreteTemplateClass.__str__ is object.__str__): mainMethNameAttr = '_mainCheetahMethod_for_'+concreteTemplateClass.__name__ mainMethName = getattr(concreteTemplateClass, mainMethNameAttr, None) if mainMethName: def __str__(self): rc = getattr(self, mainMethName)() if isinstance(rc, unicode): return rc.encode('utf-8') return rc def __unicode__(self): return getattr(self, mainMethName)() elif (hasattr(concreteTemplateClass, 'respond') and concreteTemplateClass.respond!=Servlet.respond): def __str__(self): rc = self.respond() if isinstance(rc, unicode): return rc.encode('utf-8') return rc def __unicode__(self): return self.respond() else: def __str__(self): rc = None if hasattr(self, mainMethNameAttr): rc = getattr(self, mainMethNameAttr)() elif hasattr(self, 'respond'): rc = self.respond() else: rc = super(self.__class__, self).__str__() if isinstance(rc, unicode): return rc.encode('utf-8') return rc def __unicode__(self): if hasattr(self, mainMethNameAttr): return getattr(self, mainMethNameAttr)() elif hasattr(self, 'respond'): return self.respond() else: return super(self.__class__, self).__unicode__() __str__ = new.instancemethod(__str__, None, concreteTemplateClass) __unicode__ = new.instancemethod(__unicode__, None, concreteTemplateClass) setattr(concreteTemplateClass, '__str__', __str__) setattr(concreteTemplateClass, '__unicode__', __unicode__) def __init__(self, source=None, namespaces=None, searchList=None, # use either or. They are aliases for the same thing. file=None, filter='RawOrEncodedUnicode', # which filter from Cheetah.Filters filtersLib=Filters, errorCatcher=None, compilerSettings=Unspecified, # control the behaviour of the compiler _globalSetVars=None, # used internally for #include'd templates _preBuiltSearchList=None # used internally for #include'd templates ): """a) compiles a new template OR b) instantiates an existing template. Read this docstring carefully as there are two distinct usage patterns. You should also read this class' main docstring. 
a) to compile a new template: t = Template(source=aSourceString) # or t = Template(file='some/path') # or t = Template(file=someFileObject) # or namespaces = [{'foo':'bar'}] t = Template(source=aSourceString, namespaces=namespaces) # or t = Template(file='some/path', namespaces=namespaces) print t b) to create an instance of an existing, precompiled template class: ## i) first you need a reference to a compiled template class: tclass = Template.compile(source=src) # or just Template.compile(src) # or tclass = Template.compile(file='some/path') # or tclass = Template.compile(file=someFileObject) # or # if you used the command line compiler or have Cheetah's ImportHooks # installed your template class is also available via Python's # standard import mechanism: from ACompileTemplate import AcompiledTemplate as tclass ## ii) then you create an instance t = tclass(namespaces=namespaces) # or t = tclass(namespaces=namespaces, filter='RawOrEncodedUnicode') print t Arguments: for usage pattern a) If you are compiling a new template, you must provide either a 'source' or 'file' arg, but not both: - source (string or None) - file (string path, file-like object, or None) Optional args (see below for more) : - compilerSettings Default: Template._CHEETAH_compilerSettings=None a dictionary of settings to override those defined in DEFAULT_COMPILER_SETTINGS. See Cheetah.Template.DEFAULT_COMPILER_SETTINGS and the Users' Guide for details. You can pass the source arg in as a positional arg with this usage pattern. Use keywords for all other args. for usage pattern b) Do not use positional args with this usage pattern, unless your template subclasses something other than Cheetah.Template and you want to pass positional args to that baseclass. E.g.: dictTemplate = Template.compile('hello $name from $caller', baseclass=dict) tmplvars = dict(name='world', caller='me') print dictTemplate(tmplvars) This usage requires all Cheetah args to be passed in as keyword args. optional args for both usage patterns: - namespaces (aka 'searchList') Default: None an optional list of namespaces (dictionaries, objects, modules, etc.) which Cheetah will search through to find the variables referenced in $placeholders. If you provide a single namespace instead of a list, Cheetah will automatically convert it into a list. NOTE: Cheetah does NOT force you to use the namespaces search list and related features. It's on by default, but you can turn if off using the compiler settings useSearchList=False or useNameMapper=False. - filter Default: 'EncodeUnicode' Which filter should be used for output filtering. This should either be a string which is the name of a filter in the 'filtersLib' or a subclass of Cheetah.Filters.Filter. . See the Users' Guide for more details. - filtersLib Default: Cheetah.Filters A module containing subclasses of Cheetah.Filters.Filter. See the Users' Guide for more details. - errorCatcher Default: None This is a debugging tool. See the Users' Guide for more details. Do not use this or the #errorCatcher diretive with live production systems. Do NOT mess with the args _globalSetVars or _preBuiltSearchList! 
""" errmsg = "arg '%s' must be %s" errmsgextra = errmsg + "\n%s" if not isinstance(source, (types.NoneType, basestring)): raise TypeError(errmsg % ('source', 'string or None')) if not isinstance(source, (types.NoneType, basestring, types.FileType)): raise TypeError(errmsg % ('file', 'string, file open for reading, or None')) if not isinstance(filter, (basestring, types.TypeType)) and not \ (isinstance(filter, types.ClassType) and issubclass(filter, Filters.Filter)): raise TypeError(errmsgextra % ('filter', 'string or class', '(if class, must be subclass of Cheetah.Filters.Filter)')) if not isinstance(filtersLib, (basestring, types.ModuleType)): raise TypeError(errmsgextra % ('filtersLib', 'string or module', '(if module, must contain subclasses of Cheetah.Filters.Filter)')) if not errorCatcher is None: err = True if isinstance(errorCatcher, (basestring, types.TypeType)): err = False if isinstance(errorCatcher, types.ClassType) and \ issubclass(errorCatcher, ErrorCatchers.ErrorCatcher): err = False if err: raise TypeError(errmsgextra % ('errorCatcher', 'string, class or None', '(if class, must be subclass of Cheetah.ErrorCatchers.ErrorCatcher)')) if compilerSettings is not Unspecified: if not isinstance(compilerSettings, types.DictType): raise TypeError(errmsg % ('compilerSettings', 'dictionary')) if source is not None and file is not None: raise TypeError("you must supply either a source string or the" + " 'file' keyword argument, but not both") ################################################## ## Do superclass initialization. super(Template, self).__init__() ################################################## ## Do required version check if not hasattr(self, '_CHEETAH_versionTuple'): try: mod = sys.modules[self.__class__.__module__] compiledVersion = mod.__CHEETAH_version__ compiledVersionTuple = convertVersionStringToTuple(compiledVersion) if compiledVersionTuple < MinCompatibleVersionTuple: raise AssertionError( 'This template was compiled with Cheetah version' ' %s. Templates compiled before version %s must be recompiled.'%( compiledVersion, MinCompatibleVersion)) except AssertionError: raise except: pass ################################################## ## Setup instance state attributes used during the life of template ## post-compile if searchList: for namespace in searchList: if isinstance(namespace, dict): intersection = self.Reserved_SearchList & set(namespace.keys()) warn = False if intersection: warn = True if isinstance(compilerSettings, dict) and compilerSettings.get('prioritizeSearchListOverSelf'): warn = False if warn: logging.info(''' The following keys are members of the Template class and will result in NameMapper collisions! ''') logging.info(''' > %s ''' % ', '.join(list(intersection))) logging.info(''' Please change the key's name or use the compiler setting "prioritizeSearchListOverSelf=True" to prevent the NameMapper from using ''') logging.info(''' the Template member in place of your searchList variable ''') self._initCheetahInstance( searchList=searchList, namespaces=namespaces, filter=filter, filtersLib=filtersLib, errorCatcher=errorCatcher, _globalSetVars=_globalSetVars, compilerSettings=compilerSettings, _preBuiltSearchList=_preBuiltSearchList) ################################################## ## Now, compile if we're meant to if (source is not None) or (file is not None): self._compile(source, file, compilerSettings=compilerSettings) def generatedModuleCode(self): """Return the module code the compiler generated, or None if no compilation took place. 
""" return self._CHEETAH_generatedModuleCode def generatedClassCode(self): """Return the class code the compiler generated, or None if no compilation took place. """ return self._CHEETAH_generatedModuleCode[ self._CHEETAH_generatedModuleCode.find('\nclass '): self._CHEETAH_generatedModuleCode.find('\n## END CLASS DEFINITION')] def searchList(self): """Return a reference to the searchlist """ return self._CHEETAH__searchList def errorCatcher(self): """Return a reference to the current errorCatcher """ return self._CHEETAH__errorCatcher ## cache methods ## def _getCacheStore(self): if not self._CHEETAH__cacheStore: if self._CHEETAH_cacheStore is not None: self._CHEETAH__cacheStore = self._CHEETAH_cacheStore else: # @@TR: might want to provide a way to provide init args self._CHEETAH__cacheStore = self._CHEETAH_cacheStoreClass() return self._CHEETAH__cacheStore def _getCacheStoreIdPrefix(self): if self._CHEETAH_cacheStoreIdPrefix is not None: return self._CHEETAH_cacheStoreIdPrefix else: return str(id(self)) def _createCacheRegion(self, regionID): return self._CHEETAH_cacheRegionClass( regionID=regionID, templateCacheIdPrefix=self._getCacheStoreIdPrefix(), cacheStore=self._getCacheStore()) def getCacheRegion(self, regionID, cacheInfo=None, create=True): cacheRegion = self._CHEETAH__cacheRegions.get(regionID) if not cacheRegion and create: cacheRegion = self._createCacheRegion(regionID) self._CHEETAH__cacheRegions[regionID] = cacheRegion return cacheRegion def getCacheRegions(self): """Returns a dictionary of the 'cache regions' initialized in a template. Each #cache directive block or $*cachedPlaceholder is a separate 'cache region'. """ # returns a copy to prevent users mucking it up return self._CHEETAH__cacheRegions.copy() def refreshCache(self, cacheRegionId=None, cacheItemId=None): """Refresh a cache region or a specific cache item within a region. """ if not cacheRegionId: for key, cregion in self.getCacheRegions(): cregion.clear() else: cregion = self._CHEETAH__cacheRegions.get(cacheRegionId) if not cregion: return if not cacheItemId: # clear the desired region and all its cacheItems cregion.clear() else: # clear one specific cache of a specific region cache = cregion.getCacheItem(cacheItemId) if cache: cache.clear() ## end cache methods ## def shutdown(self): """Break reference cycles before discarding a servlet. """ try: Servlet.shutdown(self) except: pass self._CHEETAH__searchList = None self.__dict__ = {} ## utility functions ## def getVar(self, varName, default=Unspecified, autoCall=True): """Get a variable from the searchList. If the variable can't be found in the searchList, it returns the default value if one was given, or raises NameMapper.NotFound. """ try: return valueFromSearchList(self.searchList(), varName.replace('$', ''), autoCall) except NotFound: if default is not Unspecified: return default else: raise def varExists(self, varName, autoCall=True): """Test if a variable name exists in the searchList. """ try: valueFromSearchList(self.searchList(), varName.replace('$', ''), autoCall) return True except NotFound: return False hasVar = varExists def i18n(self, message, plural=None, n=None, id=None, domain=None, source=None, target=None, comment=None ): """This is just a stub at this time. 
plural = the plural form of the message n = a sized argument to distinguish between single and plural forms id = msgid in the translation catalog domain = translation domain source = source lang target = a specific target lang comment = a comment to the translation team See the following for some ideas http://www.zope.org/DevHome/Wikis/DevSite/Projects/ComponentArchitecture/ZPTInternationalizationSupport Other notes: - There is no need to replicate the i18n:name attribute from plone / PTL, as cheetah placeholders serve the same purpose """ return message def getFileContents(self, path): """A hook for getting the contents of a file. The default implementation just uses the Python open() function to load local files. This method could be reimplemented to allow reading of remote files via various protocols, as PHP allows with its 'URL fopen wrapper' """ fp = open(path, 'r') output = fp.read() fp.close() return output def runAsMainProgram(self): """Allows the Template to function as a standalone command-line program for static page generation. Type 'python yourtemplate.py --help to see what it's capabable of. """ from TemplateCmdLineIface import CmdLineIface CmdLineIface(templateObj=self).run() ################################################## ## internal methods -- not to be called by end-users def _initCheetahInstance(self, searchList=None, namespaces=None, filter='RawOrEncodedUnicode', # which filter from Cheetah.Filters filtersLib=Filters, errorCatcher=None, _globalSetVars=None, compilerSettings=None, _preBuiltSearchList=None): """Sets up the instance attributes that cheetah templates use at run-time. This is automatically called by the __init__ method of compiled templates. Note that the names of instance attributes used by Cheetah are prefixed with '_CHEETAH__' (2 underscores), where class attributes are prefixed with '_CHEETAH_' (1 underscore). 
""" if getattr(self, '_CHEETAH__instanceInitialized', False): return if namespaces is not None: assert searchList is None, ( 'Provide "namespaces" or "searchList", not both!') searchList = namespaces if searchList is not None and not isinstance(searchList, (list, tuple)): searchList = [searchList] self._CHEETAH__globalSetVars = {} if _globalSetVars is not None: # this is intended to be used internally by Nested Templates in #include's self._CHEETAH__globalSetVars = _globalSetVars if _preBuiltSearchList is not None: # happens with nested Template obj creation from #include's self._CHEETAH__searchList = list(_preBuiltSearchList) self._CHEETAH__searchList.append(self) else: # create our own searchList self._CHEETAH__searchList = [self._CHEETAH__globalSetVars, self] if searchList is not None: if isinstance(compilerSettings, dict) and compilerSettings.get('prioritizeSearchListOverSelf'): self._CHEETAH__searchList = searchList + self._CHEETAH__searchList else: self._CHEETAH__searchList.extend(list(searchList)) self._CHEETAH__cheetahIncludes = {} self._CHEETAH__cacheRegions = {} self._CHEETAH__indenter = Indenter() # @@TR: consider allowing simple callables as the filter argument self._CHEETAH__filtersLib = filtersLib self._CHEETAH__filters = {} if isinstance(filter, basestring): filterName = filter klass = getattr(self._CHEETAH__filtersLib, filterName) else: klass = filter filterName = klass.__name__ self._CHEETAH__currentFilter = self._CHEETAH__filters[filterName] = klass(self).filter self._CHEETAH__initialFilter = self._CHEETAH__currentFilter self._CHEETAH__errorCatchers = {} if errorCatcher: if isinstance(errorCatcher, basestring): errorCatcherClass = getattr(ErrorCatchers, errorCatcher) elif isinstance(errorCatcher, ClassType): errorCatcherClass = errorCatcher self._CHEETAH__errorCatcher = ec = errorCatcherClass(self) self._CHEETAH__errorCatchers[errorCatcher.__class__.__name__] = ec else: self._CHEETAH__errorCatcher = None self._CHEETAH__initErrorCatcher = self._CHEETAH__errorCatcher if not hasattr(self, 'transaction'): self.transaction = None self._CHEETAH__instanceInitialized = True self._CHEETAH__isBuffering = False self._CHEETAH__isControlledByWebKit = False self._CHEETAH__cacheStore = None if self._CHEETAH_cacheStore is not None: self._CHEETAH__cacheStore = self._CHEETAH_cacheStore def _compile(self, source=None, file=None, compilerSettings=Unspecified, moduleName=None, mainMethodName=None): """Compile the template. This method is automatically called by Template.__init__ it is provided with 'file' or 'source' args. USERS SHOULD *NEVER* CALL THIS METHOD THEMSELVES. Use Template.compile instead. 
""" if compilerSettings is Unspecified: compilerSettings = self._getCompilerSettings(source, file) or {} mainMethodName = mainMethodName or self._CHEETAH_defaultMainMethodName self._fileMtime = None self._fileDirName = None self._fileBaseName = None if file and isinstance(file, basestring): file = self.serverSidePath(file) self._fileMtime = os.path.getmtime(file) self._fileDirName, self._fileBaseName = os.path.split(file) self._filePath = file templateClass = self.compile(source, file, moduleName=moduleName, mainMethodName=mainMethodName, compilerSettings=compilerSettings, keepRefToGeneratedCode=True) self.__class__ = templateClass # must initialize it so instance attributes are accessible templateClass.__init__(self, #_globalSetVars=self._CHEETAH__globalSetVars, #_preBuiltSearchList=self._CHEETAH__searchList ) if not hasattr(self, 'transaction'): self.transaction = None def _handleCheetahInclude(self, srcArg, trans=None, includeFrom='file', raw=False): """Called at runtime to handle #include directives. """ _includeID = srcArg if _includeID not in self._CHEETAH__cheetahIncludes: if not raw: if includeFrom == 'file': source = None if type(srcArg) in StringTypes: if hasattr(self, 'serverSidePath'): file = path = self.serverSidePath(srcArg) else: file = path = os.path.normpath(srcArg) else: file = srcArg ## a file-like object else: source = srcArg file = None # @@TR: might want to provide some syntax for specifying the # Template class to be used for compilation so compilerSettings # can be changed. compiler = self._getTemplateAPIClassForIncludeDirectiveCompilation(source, file) nestedTemplateClass = compiler.compile(source=source, file=file) nestedTemplate = nestedTemplateClass(_preBuiltSearchList=self.searchList(), _globalSetVars=self._CHEETAH__globalSetVars) # Set the inner template filters to the initial filter of the # outer template: # this is the only really safe way to use # filter='WebSafe'. nestedTemplate._CHEETAH__initialFilter = self._CHEETAH__initialFilter nestedTemplate._CHEETAH__currentFilter = self._CHEETAH__initialFilter self._CHEETAH__cheetahIncludes[_includeID] = nestedTemplate else: if includeFrom == 'file': path = self.serverSidePath(srcArg) self._CHEETAH__cheetahIncludes[_includeID] = self.getFileContents(path) else: self._CHEETAH__cheetahIncludes[_includeID] = srcArg ## if not raw: self._CHEETAH__cheetahIncludes[_includeID].respond(trans) else: trans.response().write(self._CHEETAH__cheetahIncludes[_includeID]) def _getTemplateAPIClassForIncludeDirectiveCompilation(self, source, file): """Returns the subclass of Template which should be used to compile #include directives. This abstraction allows different compiler settings to be used in the included template than were used in the parent. """ if issubclass(self.__class__, Template): return self.__class__ else: return Template ## functions for using templates as CGI scripts def webInput(self, names, namesMulti=(), default='', src='f', defaultInt=0, defaultFloat=0.00, badInt=0, badFloat=0.00, debug=False): """Method for importing web transaction variables in bulk. This works for GET/POST fields both in Webware servlets and in CGI scripts, and for cookies and session variables in Webware servlets. If you try to read a cookie or session variable in a CGI script, you'll get a RuntimeError. 'In a CGI script' here means 'not running as a Webware servlet'. If the CGI environment is not properly set up, Cheetah will act like there's no input. 
The public method provided is: def webInput(self, names, namesMulti=(), default='', src='f', defaultInt=0, defaultFloat=0.00, badInt=0, badFloat=0.00, debug=False): This method places the specified GET/POST fields, cookies or session variables into a dictionary, which is both returned and put at the beginning of the searchList. It handles: * single vs multiple values * conversion to integer or float for specified names * default values/exceptions for missing or bad values * printing a snapshot of all values retrieved for debugging All the 'default*' and 'bad*' arguments have 'use or raise' behavior, meaning that if they're a subclass of Exception, they're raised. If they're anything else, that value is substituted for the missing/bad value. The simplest usage is: #silent $webInput(['choice']) $choice dic = self.webInput(['choice']) write(dic['choice']) Both these examples retrieves the GET/POST field 'choice' and print it. If you leave off the'#silent', all the values would be printed too. But a better way to preview the values is #silent $webInput(['name'], $debug=1) because this pretty-prints all the values inside HTML <PRE> tags. ** KLUDGE: 'debug' is supposed to insert into the template output, but it wasn't working so I changed it to a'print' statement. So the debugging output will appear wherever standard output is pointed, whether at the terminal, in a Webware log file, or whatever. *** Since we didn't specify any coversions, the value is a string. It's a 'single' value because we specified it in 'names' rather than 'namesMulti'. Single values work like this: * If one value is found, take it. * If several values are found, choose one arbitrarily and ignore the rest. * If no values are found, use or raise the appropriate 'default*' value. Multi values work like this: * If one value is found, put it in a list. * If several values are found, leave them in a list. * If no values are found, use the empty list ([]). The 'default*' arguments are *not* consulted in this case. Example: assume 'days' came from a set of checkboxes or a multiple combo box on a form, and the user chose'Monday', 'Tuesday' and 'Thursday'. #silent $webInput([], ['days']) The days you chose are: #slurp #for $day in $days $day #slurp #end for dic = self.webInput([], ['days']) write('The days you chose are: ') for day in dic['days']: write(day + ' ') Both these examples print: 'The days you chose are: Monday Tuesday Thursday'. By default, missing strings are replaced by '' and missing/bad numbers by zero. (A'bad number' means the converter raised an exception for it, usually because of non-numeric characters in the value.) This mimics Perl/PHP behavior, and simplifies coding for many applications where missing/bad values *should* be blank/zero. In those relatively few cases where you must distinguish between empty-string/zero on the one hand and missing/bad on the other, change the appropriate 'default*' and 'bad*' arguments to something like: * None * another constant value * $NonNumericInputError/self.NonNumericInputError * $ValueError/ValueError (NonNumericInputError is defined in this class and is useful for distinguishing between bad input vs a TypeError/ValueError thrown for some other rason.) Here's an example using multiple values to schedule newspaper deliveries. 'checkboxes' comes from a form with checkboxes for all the days of the week. The days the user previously chose are preselected. The user checks/unchecks boxes as desired and presses Submit. 
The value of 'checkboxes' is a list of checkboxes that were checked when Submit was pressed. Our task now is to turn on the days the user checked, turn off the days he unchecked, and leave on or off the days he didn't change. dic = self.webInput([], ['dayCheckboxes']) wantedDays = dic['dayCheckboxes'] # The days the user checked. for day, on in self.getAllValues(): if not on and wantedDays.has_key(day): self.TurnOn(day) # ... Set a flag or insert a database record ... elif on and not wantedDays.has_key(day): self.TurnOff(day) # ... Unset a flag or delete a database record ... 'source' allows you to look up the variables from a number of different sources: 'f' fields (CGI GET/POST parameters) 'c' cookies 's' session variables 'v' 'values', meaning fields or cookies In many forms, you're dealing only with strings, which is why the 'default' argument is third and the numeric arguments are banished to the end. But sometimes you want automatic number conversion, so that you can do numeric comparisions in your templates without having to write a bunch of conversion/exception handling code. Example: #silent $webInput(['name', 'height:int']) $name is $height cm tall. #if $height >= 300 Wow, you're tall! #else Pshaw, you're short. #end if dic = self.webInput(['name', 'height:int']) name = dic[name] height = dic[height] write('%s is %s cm tall.' % (name, height)) if height > 300: write('Wow, you're tall!') else: write('Pshaw, you're short.') To convert a value to a number, suffix ':int' or ':float' to the name. The method will search first for a 'height:int' variable and then for a 'height' variable. (It will be called 'height' in the final dictionary.) If a numeric conversion fails, use or raise 'badInt' or 'badFloat'. Missing values work the same way as for strings, except the default is 'defaultInt' or 'defaultFloat' instead of 'default'. If a name represents an uploaded file, the entire file will be read into memory. For more sophistocated file-upload handling, leave that name out of the list and do your own handling, or wait for Cheetah.Utils.UploadFileMixin. This only in a subclass that also inherits from Webware's Servlet or HTTPServlet. Otherwise you'll get an AttributeError on 'self.request'. EXCEPTIONS: ValueError if 'source' is not one of the stated characters. TypeError if a conversion suffix is not ':int' or ':float'. FUTURE EXPANSION: a future version of this method may allow source cascading; e.g., 'vs' would look first in 'values' and then in session variables. Meta-Data ================================================================================ Author: <NAME> <<EMAIL>> License: This software is released for unlimited distribution under the terms of the MIT license. See the LICENSE file. 
Version: $Revision: 1.186 $ Start Date: 2002/03/17 Last Revision Date: $Date: 2008/03/10 04:48:11 $ """ src = src.lower() isCgi = not self._CHEETAH__isControlledByWebKit if isCgi and src in ('f', 'v'): global _formUsedByWebInput if _formUsedByWebInput is None: _formUsedByWebInput = cgi.FieldStorage() source, func = 'field', _formUsedByWebInput.getvalue elif isCgi and src == 'c': raise RuntimeError("can't get cookies from a CGI script") elif isCgi and src == 's': raise RuntimeError("can't get session variables from a CGI script") elif isCgi and src == 'v': source, func = 'value', self.request().value elif isCgi and src == 's': source, func = 'session', self.request().session().value elif src == 'f': source, func = 'field', self.request().field elif src == 'c': source, func = 'cookie', self.request().cookie elif src == 'v': source, func = 'value', self.request().value elif src == 's': source, func = 'session', self.request().session().value else: raise TypeError("arg 'src' invalid") sources = source + 's' converters = { '': _Converter('string', None, default, default ), 'int': _Converter('int', int, defaultInt, badInt ), 'float': _Converter('float', float, defaultFloat, badFloat), } #pprint.pprint(locals()); return {} dic = {} # Destination. for name in names: k, v = _lookup(name, func, False, converters) dic[k] = v for name in namesMulti: k, v = _lookup(name, func, True, converters) dic[k] = v # At this point, 'dic' contains all the keys/values we want to keep. # We could split the method into a superclass # method for Webware/WebwareExperimental and a subclass for Cheetah. # The superclass would merely 'return dic'. The subclass would # 'dic = super(ThisClass, self).webInput(names, namesMulti, ...)' # and then the code below. if debug: print("<PRE>\n" + pprint.pformat(dic) + "\n</PRE>\n\n") self.searchList().insert(0, dic) return dic T = Template # Short and sweet for debugging at the >>> prompt. 
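# Snapshot of Template's own attribute names; __init__ intersects searchList
# namespace keys with this set to warn about NameMapper collisions.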
Template.Reserved_SearchList = set(dir(Template)) def genParserErrorFromPythonException(source, file, generatedPyCode, exception): #print dir(exception) filename = isinstance(file, (str, unicode)) and file or None sio = StringIO.StringIO() traceback.print_exc(1, sio) formatedExc = sio.getvalue() if hasattr(exception, 'lineno'): pyLineno = exception.lineno else: pyLineno = int(re.search('[ \t]*File.*line (\d+)', formatedExc).group(1)) lines = generatedPyCode.splitlines() prevLines = [] # (i, content) for i in range(1, 4): if pyLineno-i <=0: break prevLines.append( (pyLineno+1-i, lines[pyLineno-i]) ) nextLines = [] # (i, content) for i in range(1, 4): if not pyLineno+i < len(lines): break nextLines.append( (pyLineno+i, lines[pyLineno+i]) ) nextLines.reverse() report = 'Line|Python Code\n' report += '----|-------------------------------------------------------------\n' while prevLines: lineInfo = prevLines.pop() report += "%(row)-4d|%(line)s\n"% {'row':lineInfo[0], 'line':lineInfo[1]} if hasattr(exception, 'offset'): report += ' '*(3+(exception.offset or 0)) + '^\n' while nextLines: lineInfo = nextLines.pop() report += "%(row)-4d|%(line)s\n"% {'row':lineInfo[0], 'line':lineInfo[1]} message = [ "Error in the Python code which Cheetah generated for this template:", '='*80, '', str(exception), '', report, '='*80, ] cheetahPosMatch = re.search('line (\d+), col (\d+)', formatedExc) if cheetahPosMatch: lineno = int(cheetahPosMatch.group(1)) col = int(cheetahPosMatch.group(2)) #if hasattr(exception, 'offset'): # col = exception.offset message.append('\nHere is the corresponding Cheetah code:\n') else: lineno = None col = None cheetahPosMatch = re.search('line (\d+), col (\d+)', '\n'.join(lines[max(pyLineno-2, 0):])) if cheetahPosMatch: lineno = int(cheetahPosMatch.group(1)) col = int(cheetahPosMatch.group(2)) message.append('\nHere is the corresponding Cheetah code.') message.append('** I had to guess the line & column numbers,' ' so they are probably incorrect:\n') message = '\n'.join(message) reader = SourceReader(source, filename=filename) return ParseError(reader, message, lineno=lineno, col=col) # vim: shiftwidth=4 tabstop=4 expandtab
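A minimal usage sketch of the compile-time preprocessing described in the Template.compile docstring above. The template text, the '@ %' token pair, and the searchList values are illustrative only, and the exact rendering behavior depends on the Cheetah version; it assumes this module is importable as Cheetah.Template.

from Cheetah.Template import Template

# '@' placeholders are resolved at compile time against the preprocessor's
# searchList; '$' placeholders remain live for render time.
klass = Template.compile(
    source="@greeting, $name!",
    preprocessors=[dict(tokens='@ %', searchList=[{'greeting': 'Hello'}])],
)

# The '@greeting' was baked in as the static string 'Hello' at compile time;
# only $name is looked up per instance. Expected output: "Hello, world!"
print(klass(searchList=[{'name': 'world'}]))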
Comparison of alternative mesenchymal stem cell sources for cell banking and musculoskeletal advanced therapies With the continuous discovery of new alternative sources containing mesenchymal stem cells (MSCs), regenerative medicine therapies may find tailored applications in the clinic. Although these cells have been shown to express specific mesenchymal markers and to differentiate into mesenchymal lineages under ad hoc culture conditions, it is still critical to determine their yield and differentiation potential in comparative studies under the same standardized culture environment. Moreover, the opportunity to use MSCs from the bone marrow (BM) of multiorgan donors for cell banking is of particular importance. In an attempt to establish the relative potential of alternative MSC sources, we analyzed and compared the yield and differentiation potential of human MSCs from adipose and BM tissues of cadaveric origin, and from fetal annexes (placenta and umbilical cord) after delivery, using standardized isolation and culture protocols. BM contained a significantly higher amount of mononuclear cells (MNCs) than the other tissue sources. Nonetheless, a higher cell seeding density was needed to successfully isolate MSCs from these cells. The MNC populations were highly heterogeneous and expressed variable MSC markers, with large donor-to-donor variation. After MSC selection through adhesion to tissue culture plastic, cells displayed comparable proliferation capacity with distinct colony morphologies and were positive for a pool of typical MSC markers. In vitro differentiation assays showed a higher osteogenic differentiation capacity of adipose tissue and BM MSCs, and a higher chondrogenic differentiation capacity of BM MSCs. J. Cell. Biochem. 112: 1418–1430, 2011. © 2011 Wiley-Liss, Inc.
Invariants of complex structures on nilmanifolds Let $(N, J)$ be a simply connected $2n$-dimensional nilpotent Lie group endowed with an invariant complex structure. We define a left invariant Riemannian metric on $N$ compatible with $J$ to be minimal if it minimizes the norm of the invariant part of the Ricci tensor among all compatible metrics with the same scalar curvature. In earlier work, J. Lauret proved that minimal metrics (if any) are unique up to isometry and scaling. This uniqueness allows us to distinguish two complex structures by Riemannian data, giving rise to a great many invariants. We show how to use a Riemannian invariant (the eigenvalues of the Ricci operator), polynomial invariants, and discrete invariants to give an alternative proof of the pairwise non-isomorphism of the structures appearing in the known classification of abelian complex structures on 6-dimensional nilpotent Lie algebras. We also present some continuous families in dimension 8.
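Schematically, writing $\operatorname{Ric}^{\mathrm{inv}}_g$ for the invariant part of the Ricci tensor and $\operatorname{sc}(g)$ for the scalar curvature (notation assumed here, not fixed by the abstract), the minimality condition above reads:

\[
\|\operatorname{Ric}^{\mathrm{inv}}_{g}\| \le \|\operatorname{Ric}^{\mathrm{inv}}_{g'}\|
\quad \text{for every metric } g' \text{ compatible with } J \text{ such that } \operatorname{sc}(g') = \operatorname{sc}(g).
\]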
package com.longcoding.moon.interceptors.impl;

import com.longcoding.moon.models.enumeration.TransformType;
import com.longcoding.moon.exceptions.ExceptionType;
import com.longcoding.moon.helpers.APIExposeSpecification;
import com.longcoding.moon.helpers.Constant;
import com.longcoding.moon.interceptors.AbstractBaseInterceptor;
import com.longcoding.moon.models.RequestInfo;
import com.longcoding.moon.models.ehcache.ApiInfo;
import com.longcoding.moon.models.enumeration.RoutingType;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.util.MimeTypeUtils;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.util.Map;
import java.util.Objects;

/**
 * Interceptor that applies the request transformations declared in the API
 * specification: it moves values between headers, path parameters, query
 * parameters, and JSON body fields before the request is forwarded.
 *
 * @author longcoding
 */
public class TransformRequestInterceptor extends AbstractBaseInterceptor {

    @Autowired
    APIExposeSpecification apiExposeSpec;

    @Override
    public boolean preHandler(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {
        RequestInfo requestInfo = (RequestInfo) request.getAttribute(Constant.REQUEST_INFO_DATA);
        ApiInfo apiInfo = apiExposeSpec.getApiInfoCache().get(requestInfo.getApiId());

        if (RoutingType.API_TRANSFER == requestInfo.getRoutingType() && Objects.nonNull(apiInfo.getTransformData())) {
            apiInfo.getTransformData().forEach(element -> {
                Object data = getDataByCurrentTransformType(element.getCurrentPoint(), element.getTargetKey(), requestInfo, apiInfo);
                if (Objects.isNull(data)) generateException(ExceptionType.E_1007_INVALID_OR_MISSING_ARGUMENT);
                putDataByTargetTransformType(element.getTargetPoint(), element.getNewKeyName(), data, requestInfo);
            });
        }

        return true;
    }

    private Object getDataByCurrentTransformType(TransformType type, String targetKey, RequestInfo requestInfo, ApiInfo apiInfo) {
        Object result = null;
        switch (type) {
            case HEADER:
                Map<String, String> headers = requestInfo.getHeaders();
                result = headers.get(targetKey);
                headers.remove(targetKey);
                break;
            case PARAM_PATH:
                String[] inboundUrlsByApiSpec = apiInfo.getInboundURL().split("/");
                String[] inboundUrlsByRequest = requestInfo.getRequestPath().split("/");
                // The request path carries one extra leading service-path segment,
                // hence the inboundUrlsByApiSpec.length + 1 comparison below.
                if (inboundUrlsByApiSpec.length + 1 == inboundUrlsByRequest.length) {
                    targetKey = ":" + targetKey;
                    // Iterate strictly below the array length to avoid running past the end.
                    for (int index = 0; index < inboundUrlsByApiSpec.length; index++) {
                        if (targetKey.equals(inboundUrlsByApiSpec[index])) {
                            result = inboundUrlsByRequest[index + 1];
                            break;
                        }
                    }
                }
                break;
            case PARAM_QUERY:
                Map<String, String> queryParams = requestInfo.getQueryStringMap();
                result = queryParams.get(targetKey);
                queryParams.remove(targetKey);
                break;
            case BODY_JSON:
                if (requestInfo.getContentType().contains(MimeTypeUtils.APPLICATION_JSON_VALUE)) {
                    Map<String, Object> bodyMap = requestInfo.getRequestBodyMap();
                    for (String key : bodyMap.keySet()) {
                        if (key.equalsIgnoreCase(targetKey)) {
                            result = bodyMap.get(key);
                            // Remove the matched key, which may differ in case from targetKey.
                            bodyMap.remove(key);
                            break;
                        }
                    }
                } else {
                    generateException(ExceptionType.E_1011_NOT_SUPPORTED_CONTENT_TYPE);
                }
                break;
        }
        return result;
    }

    private void putDataByTargetTransformType(TransformType type, String newKeyName, Object data, RequestInfo requestInfo) {
        switch (type) {
            case HEADER:
                requestInfo.getHeaders().put(newKeyName, String.valueOf(data));
                break;
            case PARAM_PATH:
                newKeyName = ":" + newKeyName;
                String outboundURL = requestInfo.getOutboundURL();
                if (outboundURL.contains(newKeyName)) requestInfo.setOutboundURL(outboundURL.replace(newKeyName, String.valueOf(data)));
                else generateException(ExceptionType.E_1007_INVALID_OR_MISSING_ARGUMENT);
                break;
            case PARAM_QUERY:
                requestInfo.getQueryStringMap().put(newKeyName, String.valueOf(data));
                break;
            case BODY_JSON:
                if (requestInfo.getContentType().contains(MimeTypeUtils.APPLICATION_JSON_VALUE)) {
                    Map<String, Object> bodyMap = requestInfo.getRequestBodyMap();
                    bodyMap.put(newKeyName, data);
                } else {
                    generateException(ExceptionType.E_1011_NOT_SUPPORTED_CONTENT_TYPE);
                }
                break;
        }
    }
}
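// A standalone, self-contained sketch (hypothetical class, not part of the
// gateway above) of the ":key" path-parameter lookup performed in the
// PARAM_PATH branch; it assumes the same "one extra leading service-path
// segment" convention as the interceptor.
public class PathTokenDemo {

    // Returns the request segment matching ":key" in the inbound spec, or null.
    static String extractPathParam(String inboundSpec, String requestPath, String key) {
        String[] spec = inboundSpec.split("/");
        String[] req = requestPath.split("/");
        // The request carries one extra leading service-path segment.
        if (spec.length + 1 != req.length) return null;
        String token = ":" + key;
        for (int i = 0; i < spec.length; i++) {
            if (token.equals(spec[i])) return req[i + 1];
        }
        return null;
    }

    public static void main(String[] args) {
        // spec "v1/users/:id" vs request "svc/v1/users/42" -> prints "42"
        System.out.println(extractPathParam("v1/users/:id", "svc/v1/users/42", "id"));
    }
}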
/** * This file is part of the Iritgo/Aktario Framework. * * Copyright (C) 2005-2011 Iritgo Technologies. * Copyright (C) 2003-2005 <NAME>. * * Iritgo licenses this file to You under the Apache License, Version 2.0 * (the "License"); you may not use this file except in compliance with * the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package de.iritgo.aktario.framework.dataobject.gui; import de.iritgo.aktario.core.Engine; import de.iritgo.aktario.core.gui.IButton; import de.iritgo.aktario.core.gui.ITable; import de.iritgo.aktario.core.gui.ITableSorter; import de.iritgo.aktario.core.iobject.IObjectList; import de.iritgo.aktario.core.iobject.IObjectTableModelSorted; import de.iritgo.aktario.core.logger.Log; import de.iritgo.aktario.framework.base.DataObject; import de.iritgo.aktario.framework.client.Client; import de.iritgo.aktario.framework.dataobject.AbstractQuery; import javax.swing.ImageIcon; import javax.swing.JComboBox; import javax.swing.JComponent; import javax.swing.JLabel; import javax.swing.JPanel; import javax.swing.JScrollPane; import javax.swing.JSeparator; import javax.swing.JTable; import javax.swing.JTextField; import javax.swing.border.BevelBorder; import javax.swing.event.TableModelEvent; import javax.swing.event.TableModelListener; import javax.swing.table.DefaultTableCellRenderer; import java.awt.GridBagConstraints; import java.awt.GridBagLayout; import java.awt.Insets; import java.awt.event.ActionEvent; import java.awt.event.ActionListener; import java.awt.event.MouseAdapter; import java.awt.event.MouseEvent; import java.util.ArrayList; import java.util.Iterator; import java.util.LinkedList; import java.util.List; import java.util.Properties; /** * DefaultQueryRenderer */ public class DefaultQueryRenderer extends Renderer implements TableModelListener { public class ColumnHelper { public WidgetDescription wd; public int columnType; public ExtensionTile extensionTile; public ColumnHelper(WidgetDescription wd, ExtensionTile extensionTile, int columnType) { this.wd = wd; this.columnType = columnType; this.extensionTile = extensionTile; } } public static int DATAOBJECT_COLUMN = 0; public static int TRANSIENT_COLUMN = 1; /** The table of all users. */ public JTable queryTable; public LinkedList dataObjectButtons; /** Model for the user table. */ private IObjectTableModelSorted queryTableModel; /** ScrollPane containing the query state table. 
*/ public JScrollPane queryScrollPane; /** The table sorter helper tablemodel **/ private ITableSorter tableSorter; private ImageIcon newIcon; private ImageIcon editIcon; private ImageIcon saveIcon; private ImageIcon deleteIcon; private ImageIcon cancelIcon; private ImageIcon searchIcon; private ImageIcon searchWait; private JTextField searchConditionField; private DataObject dataObject; private QueryPane queryPane; private List<ColumnHelper> wdList; private JLabel searchLabel; /** */ private ActionListener executeSearch = new ActionListener() { public void actionPerformed(ActionEvent e) { refresh(); } }; /** * Default constructor */ public DefaultQueryRenderer() { super("DefaultQueryRenderer"); newIcon = new ImageIcon(DefaultQueryRenderer.class.getResource("/resources/new.png")); editIcon = new ImageIcon(DefaultQueryRenderer.class.getResource("/resources/edit.png")); saveIcon = new ImageIcon(DefaultQueryRenderer.class.getResource("/resources/save.png")); deleteIcon = new ImageIcon(DefaultQueryRenderer.class.getResource("/resources/delete.png")); cancelIcon = new ImageIcon(DefaultQueryRenderer.class.getResource("/resources/cancel.png")); searchIcon = new ImageIcon(DefaultQueryRenderer.class.getResource("/resources/search.png")); searchWait = new ImageIcon(DefaultQueryRenderer.class.getResource("/resources/search-rotating.gif")); wdList = new ArrayList<ColumnHelper>(8); dataObjectButtons = new LinkedList(); } /** * Will be called on a query pane to generate all needed swing components * * @param Controller * The controller for the dataobject. * @param DataObject * The data object for the display. * @param Object * The content container. In swing it is a JPanel object. * @param DataObjectGUIPane * The pane to render on. */ public void workOn(Controller controller, DataObject dataObject, Object content, DataObjectGUIPane dataObjectGUIPane) { } /** * Will be called on a query pane to generate all needed swing components * * @param Controller * The controller for the dataobject. * @param DataObject * The data object for the display. * @param Object * The content container. In swing it is a JPanel object. * @param QueryPane * The pane to render on. 
*/ public void workOn(final Controller controller, final DataObject dataObject, final Object content, final QueryPane queryPane) { this.dataObject = dataObject; this.queryPane = queryPane; try { JPanel panel = (JPanel) new JPanel(); JPanel toolbar = (JPanel) new JPanel(); panel.setLayout(new GridBagLayout()); toolbar.setLayout(new GridBagLayout()); Properties props = queryPane.getProperties(); JPanel searchPanel = (JPanel) new JPanel(); searchPanel.setLayout(new GridBagLayout()); int searchComponentPos = 0; if (Client.instance().getGUIExtensionManager().existsExtension(queryPane.getOnScreenUniqueId(), "searchPanel")) { for (Iterator i = Client.instance().getGUIExtensionManager().getExtensionIterator( queryPane.getOnScreenUniqueId(), "searchPanel"); i.hasNext();) { ExtensionTile extensionTileCommand = (ExtensionTile) i.next(); searchPanel.add(extensionTileCommand.getTile(queryPane, dataObject, null), createConstraints( searchComponentPos, 0, 1, 1, 0.1, 0.0, GridBagConstraints.HORIZONTAL, GridBagConstraints.NORTHWEST, new Insets(2, 2, 2, 2))); searchComponentPos++; } } WidgetDescription listSearchCategoryComboBox = controller.getWidgetDescription("listSearchCategory"); if (listSearchCategoryComboBox != null) { JComboBox combobox = new JComboBox(); listSearchCategoryComboBox.addControl(queryPane.getOnScreenUniqueId() + "_" + listSearchCategoryComboBox.getWidgetId(), combobox); searchPanel.add(combobox, createConstraints(searchComponentPos, 0, 1, 1, 0.1, 0.0, GridBagConstraints.HORIZONTAL, GridBagConstraints.NORTHWEST, new Insets(2, 2, 2, 2))); searchComponentPos++; } if (props.getProperty("searchpanel", "yes").equals("yes")) { searchConditionField = new JTextField(""); searchPanel.add(searchConditionField, createConstraints(searchComponentPos++, 0, 1, 1, 0.8, 0.0, GridBagConstraints.HORIZONTAL, GridBagConstraints.NORTHWEST, new Insets(2, 2, 2, 2))); searchLabel = new JLabel(searchIcon); searchPanel.add(searchLabel, createConstraints(searchComponentPos++, 0, 1, 1, 0.0, 0.0, GridBagConstraints.NONE, GridBagConstraints.NORTHWEST, new Insets(2, 2, 2, 2))); searchConditionField.addActionListener(executeSearch); if (props.getProperty("searchbutton", "yes").equals("yes")) { IButton button = new IButton("search", searchIcon); toolbar.add(button, createConstraints(0, 0, 1, 1, 1.0, 0.0, GridBagConstraints.HORIZONTAL, GridBagConstraints.NORTHWEST, new Insets(2, 2, 2, 2))); button.addActionListener(executeSearch); searchComponentPos++; } } searchPanel.revalidate(); panel.add(searchPanel, createConstraints(0, 0, 1, 1, 1.0, 0.0, GridBagConstraints.HORIZONTAL, GridBagConstraints.NORTHWEST, new Insets(2, 2, 2, 2))); int y = 1; configureTableModel(controller); queryPane.setModel(queryTableModel); configureTable(queryTableModel); int verticalScrollbarPolicy = JScrollPane.VERTICAL_SCROLLBAR_AS_NEEDED; if (queryPane.getProperties().containsKey("verticalScrollbarPolicy")) { verticalScrollbarPolicy = (Integer) queryPane.getProperties().get("verticalScrollbarPolicy"); } int horizontalScrollbarPolicy = JScrollPane.HORIZONTAL_SCROLLBAR_AS_NEEDED; if (queryPane.getProperties().containsKey("horizontalScrollbarPolicy")) { horizontalScrollbarPolicy = (Integer) queryPane.getProperties().get("horizontalScrollbarPolicy"); } queryScrollPane = new JScrollPane(queryTable, verticalScrollbarPolicy, horizontalScrollbarPolicy); // queryScrollPane.getColumnHeader ().setVisible (true); panel.add(queryScrollPane, createConstraints(0, 1, 1, 1, 1.0, 1.0, GridBagConstraints.BOTH, GridBagConstraints.NORTHWEST, new Insets(0, 0, 0, 0))); 
JSeparator sep = new JSeparator(); if (props.getProperty("toolbar", "no").equals("yes")) { if (controller.getCommandDescriptions().size() != 0) { toolbar.add(sep, createConstraints(0, 1, 1, 1, 1.0, 0.0, GridBagConstraints.HORIZONTAL, GridBagConstraints.NORTHWEST, new Insets(4, 8, 4, 8))); y = 2; } else { y = 1; } for (Iterator i = controller.getCommandDescriptions().iterator(); i.hasNext();) { CommandDescription cd = (CommandDescription) i.next(); DataObjectButton dataObjectButton = null; if (cd.getTextId().equals("new")) { dataObjectButton = new DataObjectButton(cd.getTextId(), newIcon); } else if (cd.getIconId().equals("edit")) { dataObjectButton = new DataObjectButton(cd.getTextId(), editIcon); } else if (cd.getIconId().equals("save")) { dataObjectButton = new DataObjectButton(cd.getTextId(), saveIcon); } else if (cd.getIconId().equals("delete")) { dataObjectButton = new DataObjectButton(cd.getTextId(), deleteIcon); } else if (cd.getIconId().equals("cancel")) { dataObjectButton = new DataObjectButton(cd.getTextId(), cancelIcon); } else { dataObjectButton = new DataObjectButton(cd.getTextId()); } dataObjectButton.setDataObject(dataObject); dataObjectButton.setSwingGUIPane(queryPane); dataObjectButton.setCommandDescription(cd); toolbar.add(dataObjectButton, createConstraints(0, y, 1, 1, 1.0, 0.0, GridBagConstraints.HORIZONTAL, GridBagConstraints.NORTHWEST, new Insets(4, 8, 0, 8))); dataObjectButtons.add(dataObjectButton); ++y; } if (Client.instance().getGUIExtensionManager().existsExtension(queryPane.getOnScreenUniqueId(), "toolbar")) { sep = new JSeparator(); toolbar.add(sep, createConstraints(0, y, 1, 1, 1.0, 0.0, GridBagConstraints.HORIZONTAL, GridBagConstraints.NORTHWEST, new Insets(8, 8, 4, 8))); ++y; for (Iterator i = Client.instance().getGUIExtensionManager().getExtensionIterator( queryPane.getOnScreenUniqueId(), "toolbar"); i.hasNext();) { ExtensionTile extensionTile = (ExtensionTile) i.next(); toolbar.add(extensionTile.getTile(queryPane, dataObject, null), createConstraints(0, y, 1, 1, 1.0, 0.0, GridBagConstraints.HORIZONTAL, GridBagConstraints.NORTHWEST, new Insets(4, 8, 0, 8))); ++y; } } } // Surrounding Commands if (props.getProperty("surroundingcommands", "no").equals("yes")) { if (Client.instance().getGUIExtensionManager().existsExtension(queryPane.getOnScreenUniqueId(), "surroundingcommands")) { for (Iterator l = Client.instance().getGUIExtensionManager().getExtensionIterator( queryPane.getOnScreenUniqueId(), "surroundingcommands"); l.hasNext();) { ExtensionTile extensionTile = (ExtensionTile) l.next(); JComponent component = extensionTile.getTile(queryPane, dataObject, null); Object constraints = extensionTile.getConstraints(); if (controller.getCommandDescriptions().size() != 0) { if (constraints != null) { component.add(sep, constraints); } else { component.add(sep); } } for (Iterator i = controller.getCommandDescriptions().iterator(); i.hasNext();) { CommandDescription cd = (CommandDescription) i.next(); DataObjectButton dataObjectButton = null; if (cd.getTextId().equals("new")) { dataObjectButton = new DataObjectButton(cd.getTextId(), newIcon); } else if (cd.getIconId().equals("edit")) { dataObjectButton = new DataObjectButton(cd.getTextId(), editIcon); } else if (cd.getIconId().equals("save")) { dataObjectButton = new DataObjectButton(cd.getTextId(), saveIcon); } else if (cd.getIconId().equals("delete")) { dataObjectButton = new DataObjectButton(cd.getTextId(), deleteIcon); } else if (cd.getIconId().equals("cancel")) { dataObjectButton = new 
DataObjectButton(cd.getTextId(), cancelIcon); } else { dataObjectButton = new DataObjectButton(cd.getTextId()); } if (props.getProperty("surroundingcommands.filter", "").indexOf(cd.getTextId() + ";") >= 0) { continue; } dataObjectButton.setDataObject(dataObject); dataObjectButton.setSwingGUIPane(queryPane); dataObjectButton.setCommandDescription(cd); dataObjectButtons.add(dataObjectButton); if (constraints != null) { component.add(dataObjectButton, constraints); } else { component.add(dataObjectButton); } } if (Client.instance().getGUIExtensionManager().existsExtension(queryPane.getOnScreenUniqueId(), "toolbar")) { for (Iterator i = Client.instance().getGUIExtensionManager().getExtensionIterator( queryPane.getOnScreenUniqueId(), "toolbar"); i.hasNext();) { ExtensionTile extensionTileCommand = (ExtensionTile) i.next(); if (constraints != null) { component.add(extensionTileCommand.getTile(queryPane, dataObject, null), constraints); } else { component.add(extensionTileCommand.getTile(queryPane, dataObject, null)); } } } component.revalidate(); } } } ((JPanel) content).setLayout(new GridBagLayout()); ((JPanel) content).add(panel, createConstraints(0, 0, 1, 1, 1.0, 1.0, GridBagConstraints.BOTH, GridBagConstraints.NORTHWEST, new Insets(0, 0, 4, 0))); ((JPanel) content).add(toolbar, createConstraints(1, 0, 1, 1, 0.0, 0.0, GridBagConstraints.NONE, GridBagConstraints.NORTHWEST, new Insets(0, 0, 4, 0))); } catch (Exception x) { x.printStackTrace(); } } /** * @see de.iritgo.aktario.framework.dataobject.gui.Renderer#setError(java.lang.String) */ @Override public void setError(String widgetId) { } /** * @see de.iritgo.aktario.framework.dataobject.gui.Renderer#setNoError(java.lang.String) */ @Override public void setNoError(String widgetId) { } /** * Create and configure a table model. 
* * @param Controller * The controller for the data object and model */ @SuppressWarnings("serial") private void configureTableModel(Controller controller) { GUIExtensionManager guiExtManager = Client.instance().getGUIExtensionManager(); List<ExtensionTile> tilesCopy = guiExtManager.getExtensionsCopy(queryPane.getOnScreenUniqueId(), "columns"); WidgetDescription wdGroup = (WidgetDescription) controller.getWidgetDescriptions().get(0); for (Iterator i = wdGroup.getWidgetDescriptions().iterator(); i.hasNext();) { WidgetDescription wd = (WidgetDescription) i.next(); if (wd.isVisible()) { ExtensionTile tile = guiExtManager.getExtension(queryPane.getOnScreenUniqueId(), "columns", wd .getWidgetId()); wdList.add(new ColumnHelper(wd, tile, DATAOBJECT_COLUMN)); tilesCopy.remove(tile); } } for (ExtensionTile tile : tilesCopy) { wdList.add(new ColumnHelper(null, tile, TRANSIENT_COLUMN)); } try { queryTableModel = new IObjectTableModelSorted() { public int getColumnCount() { return wdList.size(); } public String getColumnName(int col) { ColumnHelper columnHelper = (ColumnHelper) wdList.get(col); String text = ""; if (columnHelper.columnType == DATAOBJECT_COLUMN) { text = Engine.instance().getResourceService().getStringWithoutException( (String) ((WidgetDescription) columnHelper.wd).getLabelId()); } else if (columnHelper.columnType == TRANSIENT_COLUMN) { text = Engine.instance().getResourceService().getStringWithoutException( columnHelper.extensionTile.getLabel()); } return text; } public boolean isCellEditable(int row, int col) { return false; } public Object getValueAt(int row, int col) { ColumnHelper columnHelper = (ColumnHelper) wdList.get(col); Object object = new String("Unknown column"); if (columnHelper.columnType == DATAOBJECT_COLUMN) { object = ((DataObject) getObjectByRow(row)).getAttribute(((WidgetDescription) columnHelper.wd) .getWidgetId()); } else if (columnHelper.columnType == TRANSIENT_COLUMN) { object = (DataObject) getObjectByRow(row); } return object; } }; } catch (Exception x) { Log.logFatal("system", "DefaultQueryRenderer.configureTable", x.toString()); x.printStackTrace(); } } /** * Create and configure a JTable for this default query renderer * * @param IObjectTableModelSorted * The table model for this table. 
*/ private void configureTable(IObjectTableModelSorted queryTableModel) { try { queryTable = new ITable(); queryTable.setShowGrid(true); queryTable.setCellSelectionEnabled(false); queryTable.setRowSelectionAllowed(true); queryTable.setSelectionMode(0); queryTable.setRowHeight(Math.max(queryTable.getRowHeight() + 4, 24 + 4)); tableSorter = queryTableModel.getTableSorter(); queryTable.setModel(tableSorter); tableSorter.addMouseListenerToHeaderInTable(queryTable); queryTableModel.addTableModelListener(this); int column = 0; for (ColumnHelper columnHelper : wdList) { if (columnHelper.extensionTile != null) { queryTable.getColumnModel().getColumn(column).setCellRenderer( (DefaultTableCellRenderer) columnHelper.extensionTile.getTile(queryPane, dataObject, null)); } ++column; } queryTable.addMouseListener(new MouseAdapter() { public void mouseClicked(MouseEvent e) { int col = queryTable.columnAtPoint(e.getPoint()); int row = tableSorter.getRealRow(queryTable.getSelectedRow()); if ((col < 0) || (row < 0)) { return; } int realColumn = queryTable.getColumnModel().getColumn(col).getModelIndex(); ColumnHelper columnHelper = (ColumnHelper) wdList.get(realColumn); String columnId = ""; if (columnHelper.columnType == DATAOBJECT_COLUMN) { columnId = columnHelper.wd.getWidgetId(); } else if (columnHelper.columnType == TRANSIENT_COLUMN) { columnId = columnHelper.extensionTile.getTileId(); } DataObject dataObject = (DataObject) ((ITableSorter) queryTable.getModel()).getObjectByRow(row); Properties props = new Properties(); props.put("table", queryTable); props.put("mousePosition", e.getPoint()); for (Iterator i = Client.instance().getGUIExtensionManager().getExtensionIterator( queryPane.getOnScreenUniqueId(), "listcommands"); i.hasNext();) { ExtensionTile extensionTile = (ExtensionTile) i.next(); if (extensionTile.getTileId().equals(columnId)) { if (! extensionTile.isDoubleClickCommand()) { extensionTile.command(queryPane, dataObject, props); } if (extensionTile.isDoubleClickCommand() && (e.getClickCount() == 2)) { extensionTile.command(queryPane, dataObject, props); } } } } }); } catch (Exception x) { Log.logFatal("system", "DefaultQueryRenderer.configureTable", x.toString()); x.printStackTrace(); } } /** * @see de.iritgo.aktario.framework.dataobject.gui.Renderer#close() */ @Override public void close() { for (Iterator i = dataObjectButtons.iterator(); i.hasNext();) { DataObjectButton dob = (DataObjectButton) i.next(); dob.release(); } dataObjectButtons.clear(); } /** * If the tablemodel changed we must update the table sorter * * @param TableModelEvent * The event. */ public void tableChanged(TableModelEvent e) { queryTable.revalidate(); queryTable.repaint(); } /** * Helper method for creating gridbag constraints. * * @param x * The grid column. * @param y * The grid row. * @param width * The number of occupied columns. * @param height * The number of occupied rows. * @param fill * The fill method. * @param anchor * The anchor. * @param wx * The horizontal stretch factor. * @param wy * The vertical stretch factor. * @param insets * The cell insets. * @return The gridbag constraints. 
*/ protected GridBagConstraints createConstraints(int x, int y, int width, int height, double wx, double wy, int fill, int anchor, Insets insets) { GridBagConstraints gbc = new GridBagConstraints(); gbc.gridx = x; gbc.gridy = y; gbc.gridwidth = width; gbc.gridheight = height; gbc.fill = fill; gbc.weightx = wx; gbc.weighty = wy; gbc.anchor = anchor; if (insets == null) { gbc.insets = new Insets(0, 0, 0, 0); } else { gbc.insets = insets; } return gbc; } /** * Describe method setSearchIcon() here. * */ public void setSearchIcon() { if (searchLabel != null) { searchLabel.setIcon(searchIcon); } } /** * Describe method setSearchWaitIcon() here. * */ public void setSearchWaitIcon() { if (searchLabel != null) { searchLabel.setIcon(searchWait); } } /** * Get the Table component. * * @return JTable The table of this default query renderer */ public JTable getTable() { return queryTable; } @Override public void refresh() { AbstractQuery query = (AbstractQuery) dataObject; IObjectList results = (IObjectList) query.getIObjectListResults(); results.clearIObjectList(); query.setSearchCondition(searchConditionField.getText()); query.update(); } }
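For orientation, a minimal, hypothetical usage sketch of the createConstraints helper defined above (the panel and component names are invented; in the real class such calls are made from workOn or a subclass, since the helper is protected):

// Hypothetical layout snippet using the createConstraints helper above.
JPanel form = new JPanel(new GridBagLayout());
JLabel nameLabel = new JLabel("Name");
JTextField nameField = new JTextField();

// Column 0: fixed-size label, no stretching or filling.
form.add(nameLabel, createConstraints(0, 0, 1, 1, 0.0, 0.0,
    GridBagConstraints.NONE, GridBagConstraints.NORTHWEST, new Insets(2, 2, 2, 2)));

// Column 1: the text field absorbs all spare horizontal space (weightx = 1.0).
form.add(nameField, createConstraints(1, 0, 1, 1, 1.0, 0.0,
    GridBagConstraints.HORIZONTAL, GridBagConstraints.NORTHWEST, new Insets(2, 2, 2, 2)));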
/*
 *-----------------------------------------------------------------------------
 *
 * HgfsPackOpenV1Reply --
 *
 *    Pack hgfs open V1 reply payload to the HgfsReplyOpen structure.
 *
 * Results:
 *    None.
 *
 * Side effects:
 *    None
 *
 *-----------------------------------------------------------------------------
 */

static void
HgfsPackOpenV1Reply(HgfsFileOpenInfo *openInfo,  // IN: open info to copy from
                    HgfsReplyOpen *reply)        // OUT: V1 reply to fill in
{
   reply->file = openInfo->file;
}
St Patrick's Church, Belfast

First Church

Belfast's first Catholic church was St Mary's, Chapel Lane, but with the growth of the Catholic population in the early nineteenth century Bishop William Crolly, then a priest in residence in the small Georgian town, decided to construct a new church in Donegall Street. This church, dedicated to Ireland's patron saint Patrick, was opened in 1815, its construction made possible - in part - by the contributions of Belfast's educated Protestants and civic elite. In the post-famine era Belfast's Catholic population swelled considerably and, while other churches and new parishes were developed, by the early 1870s it was clear St Patrick's needed an entirely new and larger church.

Current Church

The new (current) church was designed by Timothy Hevey, Belfast's leading Catholic architect of the day. It was built by Collen Brothers of Portadown and Dublin, who constructed the new church around the old one, which was then demolished. The entire fabric of the new church, designed to seat 2,000 people, was completed for blessing on 12 August 1877 by the Primate of All Ireland, Archbishop Daniel McGettigan of Armagh. Bishop Patrick Dorrian, who had served in the parish early in his priestly ministry and who authorised the construction of the present building, is interred in the church. The splendour and scale of the church meant it was the chosen venue for the episcopal consecrations of Bishops Henry Henry in 1895, John Tohill in 1908 and, later, Daniel Mageean in 1929. One notable feature is the imposing 7 ft tall statue of St Patrick above the door, which (like the altar) was carved by the English-born James Pearse, father of Padraig Pearse. A two-ton bell, cast by Thomas Sheridan of Dublin, had already been placed in the 180-foot-high (54-metre) spire. It is a Grade B+ listed building. In the summer of 2017 it was reported that the church needed millions of pounds to complete its restoration.

Sir John Lavery

The church also houses a triptych by Sir John Lavery, a native of the parish who was baptised in the older, smaller church. He presented 'The Madonna of the Lakes', using his wife Hazel Lavery and his step-daughter as models. In 1917, Lavery contacted the then Administrator, Fr John O'Neill, with the intention of donating a piece of art to the church. The triptych, depicting three images - Our Lady flanked by St Brigid and St Patrick - was unveiled in April 1919. The artwork was the centrepiece of an historic visit to the church by the Prince of Wales and his wife, the Duchess of Cornwall, in May 2015 to mark St Patrick's bicentenary. The couple viewed the church's most treasured artwork after a short service of prayer.

Parish Clergy

As of July 2018 the parish is served by three resident clergy - Reverend Eugene O'Neill Adm VF (Administrator), Reverend Tony McAleese (curate) and Right Reverend (Dean) Brendan McGee (retired priest in residence).

Parish Mass Times

Sundays at 5.30pm Vigil; 8.00am, 10.00am, 12.00pm and 7.00pm. Holydays of Obligation at 7.00pm Vigil; 1.00pm. Mondays to Saturdays at 1.00pm.

Parish Confession Times

Saturdays from 12.00pm-1.00pm and from 4.30pm-5.20pm.

St. Patrick's School

Adjacent to the church on Donegall Street is the refurbished St Patrick's School, constructed in 1828 by the Belfast builder Timothy Hevey, father of the architect of the same name who designed the church next door. This was the first Catholic school to be built in Belfast, on land donated by the Marquess of Donegall.
For much of its history the school was operated by the Christian Brothers and functioned as a primary school until 1982. After it closed it served briefly as a parish community centre, and at one stage the parish clergy wanted to demolish the school to make way for a large car park.
Bernard Faÿ, Antoine Compagnon: the modern/anti-modern, or the delicate balance of the Entre-Deux

ABSTRACT This article examines Antoine Compagnon's Le cas Bernard Faÿ: du Collège de France à l'indignité nationale in the light of Compagnon's intellectual trajectory and in connection with his conception of modernity, in particular French modernity. A sum of contradictions, at once modern and anti-modern, modernity is for Compagnon essentially ambivalent. Its emblem is Baudelaire, whose aesthetic predilection for the modern beauty of the present was paradoxically entwined with his hatred for modernization. Compagnon sets Baudelaire's intensely nostalgic and somehow already postmodern modernity against the effusive ideology of modernism, identified with the cult of progress, the equation between aesthetics and politics, and the lyric militancy of the avant-garde. Through the Janus-like figure of Bernard Faÿ, a modernist aesthete who was Gertrude Stein's best friend and who turned into a collaborator and a persecutor of Freemasons during the Second World War, Compagnon excavates, at the crossway between aesthetics and politics, at the intersection of modernism and fascism, the contradictions of modernity and the paradoxes of the history of twentieth-century France. In the meantime, going against the linear grain of the great modernist narrative, Compagnon defines the tasks of the new literary history of modernity.
Emerging Demographic Transition in India

The demographics of India are remarkably diverse. India is the second most populous country in the world, home to more than one-sixth of the world's population. The stock of any population changes with time, and there are three components of population change: fertility, mortality and migration. Socio-economic phenomena of population development, and their impacts and differentials - such as urbanization, the infant mortality rate, migration and causes of death - are important for understanding population characteristics. The growth of India's population depends on its birth and death rates. During the first phase of the demographic transition, both the birth rate and the death rate were high; in the fourth phase, both decline. Life expectancy at birth has also gradually increased in India. There is a need to coordinate population policy with education policy, and employment generation programmes have been launched in the country to address the unemployment problem and mitigate rural unemployment.
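As a concrete illustration of the birth-rate/death-rate relationship described above, a small self-contained calculation of the rate of natural increase (all figures are hypothetical, not taken from the article):

// Natural population growth: growth ≈ crude birth rate - crude death rate
// (ignoring migration). Rates are expressed per 1,000 population per year.
public class GrowthRate {
    public static void main(String[] args) {
        double crudeBirthRate = 20.0; // hypothetical births per 1,000 per year
        double crudeDeathRate = 7.0;  // hypothetical deaths per 1,000 per year

        double naturalIncrease = crudeBirthRate - crudeDeathRate; // per 1,000
        double annualGrowthPercent = naturalIncrease / 10.0;      // convert to %

        System.out.printf("Natural increase: %.1f per 1,000 (%.2f%% per year)%n",
                naturalIncrease, annualGrowthPercent);
    }
}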
package com.lothrazar.customgamerules.mixin;

import org.spongepowered.asm.mixin.Mixin;
import org.spongepowered.asm.mixin.injection.At;
import org.spongepowered.asm.mixin.injection.Inject;
import org.spongepowered.asm.mixin.injection.callback.CallbackInfo;
import com.lothrazar.customgamerules.GameRuleMod;
import com.lothrazar.customgamerules.RuleRegistry;
import net.minecraft.block.BlockState;
import net.minecraft.block.Blocks;
import net.minecraft.block.IceBlock;
import net.minecraft.util.math.BlockPos;
import net.minecraft.world.World;

@Mixin(IceBlock.class)
public class IceAntiMeltMixin {

  @Inject(at = @At("HEAD"), method = "turnIntoWater(Lnet/minecraft/block/BlockState;Lnet/minecraft/world/World;Lnet/minecraft/util/math/BlockPos;)V", cancellable = true)
  public void tickMixin(BlockState bs, World worldIn, BlockPos pos, CallbackInfo info) {
    // When the disableLightMeltIce game rule is enabled, cancel the vanilla
    // turnIntoWater() call so plain ice blocks never melt into water.
    if (RuleRegistry.isEnabled(worldIn, RuleRegistry.disableLightMeltIce) && bs.getBlock() == Blocks.ICE) {
      info.cancel();
      GameRuleMod.info("IceAntiMeltMixin mixin success and disableLightMeltIce=true");
    }
  }
}
// to play with object references carefully!!
private void initFunctionAndObject() {
    // First-in-line: the Function constructor. It must exist before any other
    // built-in function can be wired up.
    this.builtinFunction = initConstructor("Function", ScriptFunction.class);

    // Function.prototype is itself a function ("anon"); link it to the
    // Function constructor and remove its own "prototype" property.
    final ScriptFunction anon = ScriptFunctionImpl.newAnonymousFunction();
    anon.addBoundProperties(getFunctionPrototype());
    builtinFunction.setInitialProto(anon);
    builtinFunction.setPrototype(anon);
    anon.set("constructor", builtinFunction, 0);
    anon.deleteOwnProperty(anon.getMap().findProperty("prototype"));

    // The [[ThrowTypeError]] intrinsic: a frozen, non-extensible function
    // used for poisoned accessors.
    this.typeErrorThrower = new ScriptFunctionImpl("TypeErrorThrower", Lookup.TYPE_ERROR_THROWER_GETTER, null, null, 0);
    typeErrorThrower.setPrototype(UNDEFINED);
    typeErrorThrower.deleteOwnProperty(typeErrorThrower.getMap().findProperty("prototype"));
    typeErrorThrower.preventExtensions();

    // Now the Object constructor can be created, and Function.prototype's
    // own proto can finally point at Object.prototype.
    this.builtinObject = initConstructor("Object", ScriptFunction.class);
    final ScriptObject ObjectPrototype = getObjectPrototype();
    anon.setInitialProto(ObjectPrototype);

    // Install the magic __proto__ accessor property on Object.prototype.
    final ScriptFunction getProto = ScriptFunctionImpl.makeFunction("getProto", NativeObject.GET__PROTO__);
    final ScriptFunction setProto = ScriptFunctionImpl.makeFunction("setProto", NativeObject.SET__PROTO__);
    ObjectPrototype.addOwnProperty("__proto__", Attribute.NOT_ENUMERABLE, getProto, setProto);

    // Functions created before Object.prototype existed still have dangling
    // proto chains; walk the properties reachable through the Function
    // constructor and set the initial protos properly.
    jdk.nashorn.internal.runtime.Property[] properties = getFunctionPrototype().getMap().getProperties();
    for (final jdk.nashorn.internal.runtime.Property property : properties) {
        final Object key = property.getKey();
        final Object value = builtinFunction.get(key);
        if (value instanceof ScriptFunction && value != anon) {
            final ScriptFunction func = (ScriptFunction)value;
            func.setInitialProto(getFunctionPrototype());
            final ScriptObject prototype = ScriptFunction.getPrototype(func);
            if (prototype != null) {
                prototype.setInitialProto(ObjectPrototype);
            }
        }
    }

    // Same fix-up for functions hanging off the Object constructor itself...
    for (final jdk.nashorn.internal.runtime.Property property : builtinObject.getMap().getProperties()) {
        final Object key = property.getKey();
        final Object value = builtinObject.get(key);
        if (value instanceof ScriptFunction) {
            final ScriptFunction func = (ScriptFunction)value;
            final ScriptObject prototype = ScriptFunction.getPrototype(func);
            if (prototype != null) {
                prototype.setInitialProto(ObjectPrototype);
            }
        }
    }

    // ...and for functions on Object.prototype, skipping "constructor",
    // which is already wired to the right place.
    properties = getObjectPrototype().getMap().getProperties();
    for (final jdk.nashorn.internal.runtime.Property property : properties) {
        final Object key = property.getKey();
        if (key.equals("constructor")) {
            continue;
        }
        final Object value = ObjectPrototype.get(key);
        if (value instanceof ScriptFunction) {
            final ScriptFunction func = (ScriptFunction)value;
            final ScriptObject prototype = ScriptFunction.getPrototype(func);
            if (prototype != null) {
                prototype.setInitialProto(ObjectPrototype);
            }
        }
    }

    // Tag everything created above as built-in so it can be identified later.
    tagBuiltinProperties("Object", builtinObject);
    tagBuiltinProperties("Function", builtinFunction);
    tagBuiltinProperties("Function", anon);
}
The PlayStation 4 version of Minecraft failed Sony's certification test, and as a result, developer 4J Studios will need to take some extra time to fix bugs and submit the game all over again. The studio announced the news on Twitter today.

"Sony found some issues we have to fix in their final test of Minecraft PS4," 4J Studios said with a sad face attached. "We're fixing, but we need to go through the process again."

4J Studios submitted the PS4 version of Minecraft to Sony for certification on August 12. The game is expected to launch this month, though the need to re-submit would appear to put that release date in question.

A PlayStation Vita version of Minecraft is also in development and is scheduled to launch this month. That version is undergoing bug testing right now, 4J says.

As for the Xbox One version of Minecraft, it, too, is projected to launch in August, with 4J Studios now spending time fixing bugs before sending the game off to Microsoft for final certification.

If you already own the Xbox 360 or PlayStation 3 version of Minecraft, you can get the game on Xbox One or PS4 for $5, and your saves will carry forward. When it's released, the PS Vita version of Minecraft will be available as a cross-buy game with the PS3 iteration.

Minecraft, an open-ended sandbox game, has been an incredible success since its full release on PC in 2011. The game has since sold around 54 million copies across all platforms, and it's even possible that it could come to Nintendo platforms like the Wii U or 3DS someday.
Mortality and loss to programme before antiretroviral therapy among HIV-infected children eligible for treatment in The Gambia, West Africa

Background HIV infection among children, particularly those under 24 months of age, is often rapidly progressive; as a result, guidelines recommend earlier access to combination antiretroviral therapy (cART) for HIV-infected children. Losses to follow-up (LTFU) and death in the interval between diagnosis and initiation of ART profoundly limit this strategy. This study explores correlates of LTFU and death prior to ART initiation among children.

Methods The study is based on 337 HIV-infected children enrolled into care at an urban centre in The Gambia, including those alive and in care when antiretroviral therapy became available and those who enrolled later. Children were followed until they started ART, died, transferred to another facility, or were LTFU. Cox proportional hazards regression models were used to determine the hazard of death or LTFU according to the baseline characteristics of the children.

Results Overall, 223 children were assessed as eligible for ART based on their clinical and/or immunological status, among whom 73 (32.7%) started treatment, 15 (6.7%) requested transfer to another health facility, and 105 (47.1%) and 30 (13.5%) were lost to follow-up and died respectively without starting ART. The median survival following eligibility for children who died without starting treatment was 2.8 months (IQR: 0.9 - 5.8), with over half (60%) of all deaths occurring at home. ART-eligible children less than 2 years of age and those in WHO stage 3 or 4 were significantly more likely to be LTFU when compared with their respective comparison groups. The overall pre-treatment mortality rate was 25.7 per 100 child-years of follow-up (95% CI 19.9 - 36.8) and the loss to programme rate was 115.7 per 100 child-years of follow-up (95% CI 98.8 - 137). In the multivariable Cox proportional hazards model, significant independent predictors of loss to programme were being less than 2 years of age and WHO stage 3 or 4. The adjusted hazard ratio (AHR) for loss to programme was 2.06 (95% CI 1.12 - 3.83) for being aged less than 2 years relative to being 5 years of age or older, and 1.92 (95% CI 1.05 - 3.53) for being in WHO stage 3 or 4 relative to WHO stage 1 or 2.

Conclusions Earlier enrolment into HIV care is key to achieving better outcomes for HIV-infected children in developing countries. Developing strategies to ensure early diagnosis, eliminating obstacles to prompt initiation of therapy and instituting measures to reduce losses to follow-up will improve the overall outcomes of HIV-infected children.

Background

Combination antiretroviral therapy (cART) has significantly improved the prognosis for HIV-infected children in resource-limited settings. Eligibility for ART among children in resource-limited settings is based on clinical and/or immunological criteria: treatment is started at World Health Organization (WHO) clinical stage 3 or 4 disease, or at a CD4 T-cell count/percentage below the age-appropriate immunological threshold. However, because of the more rapid disease progression and significantly higher risk of mortality in the first two years of life among HIV-infected children in sub-Saharan Africa, WHO now recommends that all infants and children aged <24 months with confirmed HIV infection start ART as soon as possible, irrespective of clinical stage or immunological threshold.
Unfortunately, the majority of HIV-infected children in sub-Saharan Africa are diagnosed late, with advanced clinical disease and immunosuppression, and are usually 5 years of age or older at initiation of therapy. This is due to, among other reasons, the fact that health systems in resource-limited settings still face considerable challenges in their efforts to scale up access to early paediatric HIV diagnosis and treatment, particularly among children aged <18 months, in whom a definitive diagnosis requires sophisticated laboratory techniques. Another challenge that treatment programmes face is ensuring that all children who test HIV-positive are successfully linked to and retained in a paediatric HIV/AIDS care programme, so that they can initiate ART as soon as they are eligible.

Retention of patients in pre-ART care is of paramount importance in ensuring the success of ART programmes. Loss to care has been defined as "discontinuation of active engagement in pre-ART care for any reason, including death". Loss to follow-up (LTFU) from HIV care programmes in particular represents missed opportunities for the timely initiation of life-saving treatment. A systematic review of adult ART programmes in sub-Saharan Africa reported that retention is as low as 60% after 2 years, which is consistent with our observation that one-third of ART-eligible adults died or were lost to follow-up prior to initiation of treatment. However, there is a paucity of data on the mortality and loss to follow-up experiences of ART-eligible HIV-infected children who fail to initiate treatment, because this information is not routinely assessed as part of programme evaluations.

In 2008, the prevalence of HIV-1 and HIV-2 in The Gambia was estimated to be 1.6% and 0.4%, respectively. The prevalence of paediatric HIV/AIDS is amongst the lowest in sub-Saharan Africa, with fewer than 1000 children below the age of 15 years known to be living with HIV/AIDS. Approximately one-third of these are receiving life-saving ART, though the proportion that are eligible but are yet to initiate ART is unknown. This study aims to investigate, and determine factors associated with, pre-ART loss to follow-up and mortality among ART-eligible HIV-infected children at the paediatric HIV clinic of the Medical Research Council (MRC) Unit in Fajara, The Gambia, West Africa.

Results

Enrolment, follow-up and ART eligibility

A total of 411 HIV-infected children attended the MRC paediatric HIV clinic at least once between June 1993 and January 2010; of these, 74 were excluded from the analysis because they had either died or were LTFU prior to June 2004, when pre-ART sensitization and screening began (Figure 1).

Characteristics of ART-eligible patients

The majority of children in the cohort were enrolled after January 2006, with eligibility based on the 4-stage 2006 revision of the WHO paediatric treatment guidelines (Table 1). 177 (80%) of the children were eligible to start cART at presentation or within 6 months of being diagnosed with HIV infection; 60% of the children were 2 years of age or older at eligibility. Information on parental vital status was available for 173 of the 223 children, of whom half were orphaned: 70 (41%) had lost one parent and 16 (9%) had lost both parents. Only 99 (44%) children had documentation of their parents' HIV status - 76 had at least one HIV-positive parent, 17 had both parents HIV-positive and 6 had both parents HIV-negative.
Gender, HIV type, CD4% and the vital status of the parents were not significantly associated with the outcomes of survival or retention, but there were significant differences by age and WHO clinical status (Table 2); children less than 2 years of age and those in WHO stage 3 or 4 at ART eligibility were significantly more likely to be LTFU when compared with their respective comparison groups.

Survival and retention in care among ART-eligible patients

Among the 73 children who commenced cART, the median time from eligibility to ART initiation and the median age at initiation were 5.1 months (IQR: 2.8 - 11.2) and 4.9 years (IQR: 2.6 - 9.9), respectively. Thirty (13.3%) ART-eligible children died without starting treatment, giving a pre-treatment mortality rate of 25.7 per 100 child-years of follow-up (95% CI 19.9 - 36.8). The median survival following ART eligibility for children who died without starting treatment was 2.8 months (IQR: 0.9 - 5.8), with over half (60%) of all deaths occurring at home (Table 3). One hundred and five (47%) treatment-eligible children, who remained in the cohort for a median of 4.2 months (IQR: 3.4 - 5.3), were eventually LTFU before starting treatment. Nineteen (18%) of these children were last seen on the date that they were diagnosed HIV-positive and/or assessed to be eligible for ART, and of these, 14 were less than 2 years of age. WHO clinical stage at eligibility was documented in 75 of those lost to follow-up, of whom 25 (33%) and 34 (45.3%) had a stage 3 and stage 4 condition, respectively. Twenty-two (21%) of the children LTFU had one or both parents in HIV care, whilst 35 (33%) were either single or double orphans.

A total of 135 ART-eligible children were lost to the programme before ART initiation, giving an incidence rate for the composite endpoint of death or LTFU of 115.7 per 100 child-years of follow-up (95% CI 98.8 - 137). In both univariate and multivariable analyses of risk factors for death, no significant risk factors were identified (Table 4); however, being less than 2 years of age at eligibility, having advanced clinical HIV disease (i.e. WHO stage 3 or 4) and having been assessed as eligible for ART between January 2006 and January 2010 were all significantly associated with loss to programme. Figure 2 shows the cumulative incidence of loss to programme overall, and by age category, eligibility period and WHO clinical stage at ART eligibility. The graphs show significant unadjusted associations between age category (log rank test, p = 0.0001), eligibility period (log rank test, p = 0.0062), WHO clinical stage (log rank test, p = 0.0001) and the risk of loss to programme from the time of being assessed as ART-eligible. The unadjusted hazard ratio for loss to programme was 2.41 (95% CI 1.61 - 3.61) for children less than 2 years of age at the time of eligibility compared with children 5 years of age or older; 2.43 (95% CI 1.51 - 3.90) for being in WHO stage 3 or 4 relative to WHO stage 1 or 2; and 1.86 (95% CI 1.18 - 2.93) for having been assessed as eligible between January 2006 and January 2010 relative to the earlier period (Table 4). In the multivariable Cox proportional hazards model, significant independent predictors of loss to programme were being less than 2 years of age and WHO stage 3 or 4. Being less than 2 years of age was the strongest independent predictor of loss to programme among ART-eligible patients - the adjusted hazard ratio (AHR) for loss to programme was 2.06 (95% CI 1.12 - 3.83) for being aged less than 2 years relative to being 5 years of age or older, and 1.92 (95% CI 1.05 - 3.53) for being in WHO stage 3 or 4 relative to WHO stage 1 or 2.
The proportional hazards assumption was met for all Cox proportional hazards models.

Outcomes among HIV-infected children ineligible for ART at enrolment

The 114 children who did not meet ART initiation criteria over the study period were followed up for a median of 1.7 months (IQR: 0.2 - 16.9) from enrolment. Twenty-three (20.2%) children were transferred to another clinic, 88 (77.2%) were LTFU and 3 (2.6%) died; all deaths occurred within three months of enrolment. All these children had been given follow-up appointments no later than 3 months after their last appointment, and were explicitly given permission to return to the clinic earlier in case of illness.

Discussion

Our study reports pre-treatment mortality and losses to follow-up among ART-eligible children from a cohort who were yet to initiate treatment at a West African clinic. The majority of deaths in our cohort occurred in the first three months after eligibility, during the period of preparation for ART. The pre-treatment mortality rate observed in our setting is relatively high compared with reports from other sites: a study in rural Zambia reported 10% mortality among ART-eligible children, with a mortality rate of 2.73 (95% CI: 1.7 - 4.18) per 100 child-years, while another study reported that 1% of treatment-eligible children receiving care at an urban ART programme in Zambia died prior to ART initiation. In a large cohort of 1766 children in Cote d'Ivoire, the reported loss-to-programme rate of 50.3 per 100 child-years of follow-up was also much lower than that reported in this study, though it was not clear if the children in that cohort were eligible for ART.

One possible reason for the relatively higher pre-treatment mortality and loss-to-programme rates among ART-eligible children in our setting is the advanced stage of clinical disease and immunosuppression at diagnosis. Many children in resource-limited settings are only tested for HIV as part of the workup for recurrent or severe ill health; we reported that 80% of the treatment-eligible children attending our clinic were assessed as eligible for ART either at, or within 6 months of, their being diagnosed with HIV infection. The inclusion of home visits as part of our patient follow-up protocol provides a more accurate and complete ascertainment of survival status, which could also have contributed to the higher mortality rates observed in our cohort.

The proportion of children who initiated ART in our clinic is comparable to that reported in a study in South Africa, where 39.5% (96/243) of ART-eligible children started treatment, but considerably lower than figures reported from Cote d'Ivoire and Zambia, where 86% and 70% of ART-eligible children respectively started therapy. The median time of 5.1 months from eligibility to ART initiation in our cohort was comparable to that reported from Cambodia, where ART was initiated after a median of 4.7 months, but more than twice as long as the 2.1 months reported from rural Zambia. Another study from Zambia reported that children eligible for ART at presentation in rural areas took a longer time to initiate treatment than children in urban areas (3.6 vs. 0.9 months). Delays in treatment initiation among eligible children are reported to be due to several factors [19,20], some of which were observed in our cohort.
Firstly, the long process of pre-treatment counselling required repeated clinic visits to ensure that caregivers were both willing and capable of taking on the responsibility of supporting the child to take life-long medications with a high level of adherence. Lack of caregiver readiness, as assessed during counselling sessions, could prolong the process further. This often proved difficult for patients from rural areas and those with working parents, though parents/caregivers who were themselves in HIV care, specifically those already on ART and with good adherence, underwent 'accelerated' counselling with emphasis on the challenges of administering drugs to children, drug storage and side effects. We did not, however, collect information on the number of parents/caregivers who completed the four pre-ART counselling sessions among children who died or were LTFU.

A second factor for treatment delay was the time required to obtain approval to start ART from the national ART eligibility committee. The fortnightly occurrence of the meeting frequently resulted in eligible children, whose caregivers had completed pre-ART counselling, waiting for days (sometimes as long as one month if the meeting date fell on a public holiday) to receive approval to start treatment. The concept of the national eligibility committee arose in the early days of ART availability in The Gambia, when ART supplies were limited, providers were inexperienced and the practical application of eligibility criteria was still being debated. Though the committee now meets weekly, the inconvenience of travel to attend the meeting, the increased uptake of HCT services nationwide and the widespread availability of ART favour decentralization of the committee to centres involved in the provision of ART, which may help shorten the time patients have to wait in order to initiate life-saving treatment.

Thirdly, caregivers failed to attend follow-up appointments; almost 20% of treatment-eligible children attended only one clinic visit before being lost to follow-up. Poor uptake of paediatric HIV services is multifactorial, and reasons include facility-related factors such as long queues, overcrowding and negative staff attitudes, as well as unemployment, lack of money for food and transport, difficulty getting time off work, absenteeism from school and fear of social rejection following disclosure to family and neighbours. While the cost of transport was not a hindrance to clinic attendance in our setting, owing to the practice of reimbursing transport costs for all patients, disclosure of HIV status to a family member or friend was a major factor in loss to follow-up among our adult cohort, which suggests that unwillingness to disclose the child's status to another caregiver may have contributed significantly to failure to keep clinic appointments, as well as to delays in the initiation of ART among eligible children. Disclosure of paediatric infection is multifaceted, being dependent on caregiver and family characteristics such as biologic relationship, caregiver permanence, caregiver beliefs and psychosocial function, as well as child-specific factors such as age, developmental stage, cognitive abilities and psychosocial function. A child's positive status usually indicates maternal infection, and the resultant anxiety, fear of blame, social and healthcare discrimination, and marital abandonment may negatively influence the mother's acceptance of the HIV status of the child.
Denial of results, despair or depression hinder health-care seeking and social support, and may lead to difficulties in reacting to the options and advice given by health workers. Maternal depression (in cases of vertical transmission) and concurrent HIV or other comorbidities may also affect the mother's ability to care for the child; the belief that the child might die at any moment may cause her to stop taking proper care of the child. In circumstances where the biologic parents have died or are too ill to care for the child, responsibility for care is shifted to one or more relatives or family friends; poor coordination amongst multiple caregivers may also compromise the quality of care, especially where secondary caregivers have not been disclosed to and therefore do not understand the importance of regular clinic visits or adherence to prescribed medications. Studies from sub-Saharan Africa suggest that whilst many caregivers appreciate shared childcare, sustained adherence and community support as benefits of disclosing a child's HIV status to others, many still believe that disclosure to others could have negative effects. Community-based support groups have been shown to play a major role in providing continuous support to caregivers of HIV-infected children, and though several support groups were linked to our clinic, we did not routinely collect data on patient/caregiver membership in such groups or assess what impact this had on their quality of life. Disclosing a diagnosis of HIV infection to a child has been suggested to have direct benefits for adherence to ART, and among adolescents it is associated with higher retention in care. The average age of disclosure reported among children in sub-Saharan Africa is 8 years; 75% of those LTFU in our cohort (data not shown) were less than 5 years of age at eligibility and therefore too young for disclosure of their HIV status.

Lastly, we report that tuberculosis was the most common cause of death among treatment-eligible children and may also have been an underlying cause of death among those who reportedly died at home. This is consistent with data from paediatric cohorts in resource-limited settings, in which tuberculosis has been cited as a major reason for delayed treatment initiation as well as the most common co-morbidity associated with early deaths among both ART-naïve and ART-experienced children. As such, treatment for concurrent tuberculosis could also have contributed to the delay in initiation of ART.

Overall, 60% of ART-eligible children in our programme died or were lost to follow-up without starting treatment, giving a loss-to-care incidence rate of 128 per 100 child-years of follow-up. We observed that the children who were lost to care were significantly more likely to be less than 2 years of age at ART eligibility or to have advanced clinical disease (predominantly tuberculosis and severe malnutrition - data not shown). Children aged less than 2 years or in WHO stage 3 or 4 had almost twice the risk of being lost to the programme before starting treatment, compared with those 5 years of age or older and those in WHO stage 1 or 2, respectively.
The observation that just over half of the children LTFU were in WHO stage 3 or 4, coupled with the relatively shorter median time from eligibility to LTFU compared with the time to initiation of ART (4.2 vs 5.1 months), suggests that many of those LTFU may have died as a result of rapid disease progression and advanced HIV disease during the period of preparation for ART, and may not even have completed the process. This observation supports the 2010 WHO recommendation that ART be initiated for all HIV-infected infants and children less than 24 months of age, irrespective of their immunological threshold or clinical stage. In our study the vital status of the parents was not related to death or loss to care, which was surprising, as HIV-infected orphans in sub-Saharan Africa have been shown to have delayed access to HIV care and treatment as well as reduced clinic attendance.

Children assessed as eligible for ART between January 2006 and January 2010 using the revised 2006 guidelines were observed to have a significantly higher risk of being lost to programme compared with those assessed as eligible between June 2004 and December 2005 based on the 2003 WHO treatment guidelines. As the paediatric cohort was a sub-cohort of a much larger adult cohort, this may be attributable to the increase in clinic enrolment over time and the reduced ability of clinic staff to effectively manage the patient load. This risk remained, albeit insignificantly, after adjusting for the effect of age, clinical stage and other variables.

As the only centre in the country performing virological tests for the diagnosis of HIV infection in children <18 months of age, and the major paediatric HIV referral clinic in The Gambia, we experienced good uptake of paediatric HIV services at our HIV clinic. This, however, is not representative of paediatric HIV care in West Africa, as patients had the advantage of reimbursement of all transport costs incurred in clinic visits, free treatment, nutritional support, and payment of school fees for all children enrolled in school up to the age of 18 years. This, coupled with the small size of our paediatric cohort, limits the generalizability of our findings. There were, however, several other limitations to this study: data were incomplete for many key variables, such as WHO stage or CD4 T-cell percentage at eligibility, and these children were classified on the basis of immunologic or clinical criteria alone. Secondly, children with moderate or severe malnutrition in our setting were initially classified as WHO stage 3 or 4 respectively, without assessing their response to nutritional support and treatment, and thus may have been misclassified as eligible for treatment. Unreported deaths among ART-eligible children LTFU in our cohort suggest that the reported mortality rate may be underestimated; in addition, a good proportion of the children who were ineligible for ART and also LTFU may have progressed to an advanced stage of HIV disease and died as they became eligible.

Conclusion

The results of our study show that HIV-infected children in The Gambia are enrolled into care at an advanced stage of disease with severe immunosuppression, and a significant number of these treatment-eligible children die before initiating therapy. Only one-third of ART-eligible children go on to initiate ART, and they face a number of out-of-programme and in-programme delays before treatment is eventually commenced.
Developing strategies to ensure early identification of HIV-infected children before they become eligible for ART, eliminating obstacles to prompt initiation of therapy, and instituting measures to reduce losses to follow-up will improve the overall outcomes of HIV-infected children.

Study setting and participants

This study was conducted at the MRC HIV research clinic, located in the urban area of Greater Banjul, The Gambia. The clinic established an adult cohort in 1986, with approval from the joint Gambian Government/MRC Ethics Committee, and started enrolment of children in 1993 on the basis of written informed consent by the parent or legal guardian, with the adoption of a family-centred model of HIV care. Paediatric patients consist of HIV-exposed or HIV-infected infants and children of adults enrolled in the MRC cohort, and children with a positive HIV serologic test referred to the clinic from the MRC clinical services or other health-care services in the country. HIV-exposed infants referred from other PMTCT programmes were also enrolled in the paediatric clinic. Provision of ART in The Gambia began in October 2004 through the support of the Global Fund. Nutritional support is provided, and school fees are paid for all children enrolled in school up to the age of 18 years. All treatments, including cART, and treatment monitoring are provided free of charge, and the parents/guardians of enrolled children are also reimbursed for transport costs incurred in clinic visits. Acquisition of clinical data in an electronic system includes capture of demographic characteristics, social history, basic anthropometric measurements, complete blood count, serum biochemistry, CD4 T-cell profile, chest radiograph, opportunistic/concomitant infections and clinical stage. Patients are required to be seen in clinic at least once every 3 months for follow-up clinical assessment, and CD4 T-cell monitoring is performed every 6 months, or earlier if clinically indicated. All children receive daily cotrimoxazole prophylaxis.

Selection for ART

Eligibility for ART was determined using immunological and/or clinical criteria based on the Gambian national ART guideline and the WHO paediatric ART guidelines in place at the time of enrolment: for children enrolled on or before 31 December 2005, eligibility was based on the 3-stage 2003 guidelines, while for those enrolled from January 2006 eligibility was based on the 4-stage 2006 revision of the guidelines. Following a further revision of the treatment guidelines in June 2008, ART was recommended for all children <12 months of age with confirmed HIV infection. In addition, all children were required to have at least one identifiable caregiver who would serve as a 'treatment supporter' and take responsibility for the administration of the child's medications. Pre-ART counselling was conducted twice a week, and caregivers underwent a minimum of four one-on-one counselling sessions over a period of three to six weeks before selection was processed and confirmed by the national eligibility committee. Patients who did not come to the clinic for at least 90 days beyond their last scheduled visit were considered LTFU and were visited at home by trained field workers to ascertain survival status or change of address. For children who died at home, deaths were verified by close family members and the cause of death was ascertained by verbal autopsy.
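A minimal sketch of the 90-day loss-to-follow-up rule described above, using java.time; the class and method names are hypothetical illustrations of the definition, not code from the study:

import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

public class LtfuRule {

    // A patient is considered lost to follow-up (LTFU) once at least 90 days
    // have elapsed beyond the last *scheduled* clinic visit.
    static boolean isLostToFollowUp(LocalDate lastScheduledVisit, LocalDate today) {
        return ChronoUnit.DAYS.between(lastScheduledVisit, today) >= 90;
    }

    public static void main(String[] args) {
        LocalDate lastVisit = LocalDate.of(2009, 3, 1); // hypothetical visit date
        // 122 days later -> LTFU
        System.out.println(isLostToFollowUp(lastVisit, LocalDate.of(2009, 7, 1))); // true
        // 30 days later -> still in care
        System.out.println(isLostToFollowUp(lastVisit, LocalDate.of(2009, 3, 31))); // false
    }
}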
Recruitment of children into the programme ended in January 2010, as part of the process of transferring patient care from the MRC to the Gambian national health care system.

Laboratory methods

HIV diagnosis and CD4 measurement

Screening for HIV-1 and HIV-2 infection was done using a protocol described in detail elsewhere. In children <18 months of age, HIV infection was diagnosed by two polymerase chain reaction (PCR) tests. Percentages and absolute counts of CD4 T lymphocytes were determined on a FACSCalibur (Becton Dickinson, US) using BD MultiTest reagents and MultiSet software (BD Immunocytometry Systems).

Statistical analysis

For the purpose of this analysis, eligibility for ART was determined from June 2004, when sensitization of patients began in preparation for the roll-out of ART, to January 2010, when patient recruitment ended. Children already in the paediatric cohort and eligible for treatment but who died or were lost to follow-up before June 2004 were excluded from the analyses. Severe anaemia was defined as haemoglobin <8 g/dl. Distributions of categorical variables were compared between the four possible outcomes - initiation of ART, death, loss to follow-up or transfer - by chi-square statistics or Fisher's exact test as appropriate. Continuous variables were compared using the Kruskal-Wallis non-parametric test. Follow-up duration in person-time was calculated from the date the children were assessed as eligible for ART; children who started ART were right-censored on the date treatment started; those who requested transfer were followed through to the date of transfer. Patients who died were followed to their date of death, if known, or the date they were last seen alive. To avoid immortal person-time bias, patients LTFU were right-censored 90 days after the date of their last clinic visit (as loss to follow-up could not be declared before this time). The probability of loss to programme following ART eligibility was described using cumulative incidence curves and summarised by the mortality rate and the loss to programme rate (the incidence rate for the composite endpoint of death and LTFU), each reported per 100 child-years of follow-up. Cox proportional hazards regression models were used to identify baseline characteristics associated with death or loss to programme. The proportional hazards assumption was assessed for all models by the global test based on the Schoenfeld residuals. All statistical analyses were performed with STATA release 11.1 software (Stata Corp., College Station, TX, USA) and statistical significance was defined as p < 0.05 (two-sided).
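To make the person-time calculations concrete, the sketch below computes an incidence rate per 100 child-years with an approximate 95% confidence interval. The counts are chosen to echo the cohort's reported mortality figures, but the log-normal approximation used here is a generic textbook method and not necessarily the one used in the paper:

public class IncidenceRate {

    // Incidence rate per 100 person-years with an approximate 95% CI,
    // using the normal approximation on the log scale:
    //   rate * exp(+/- 1.96 / sqrt(events))
    public static void main(String[] args) {
        int events = 30;            // hypothetical number of deaths
        double personYears = 116.7; // hypothetical child-years of follow-up

        double rate = events / personYears * 100.0;
        double halfWidth = 1.96 / Math.sqrt(events);
        double lower = rate * Math.exp(-halfWidth);
        double upper = rate * Math.exp(halfWidth);

        System.out.printf("Rate: %.1f per 100 child-years (95%% CI %.1f - %.1f)%n",
                rate, lower, upper);
    }
}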
// genie-web/src/main/java/com/netflix/genie/web/spring/autoconfigure/agent/services/AgentServicesAutoConfiguration.java
/*
 *
 *  Copyright 2019 Netflix, Inc.
 *
 *     Licensed under the Apache License, Version 2.0 (the "License");
 *     you may not use this file except in compliance with the License.
 *     You may obtain a copy of the License at
 *
 *         http://www.apache.org/licenses/LICENSE-2.0
 *
 *     Unless required by applicable law or agreed to in writing, software
 *     distributed under the License is distributed on an "AS IS" BASIS,
 *     WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 *     See the License for the specific language governing permissions and
 *     limitations under the License.
 *
 */
package com.netflix.genie.web.spring.autoconfigure.agent.services;

import com.netflix.genie.common.internal.util.GenieHostInfo;
import com.netflix.genie.web.agent.inspectors.AgentMetadataInspector;
import com.netflix.genie.web.agent.services.AgentConnectionTrackingService;
import com.netflix.genie.web.agent.services.AgentFilterService;
import com.netflix.genie.web.agent.services.AgentJobService;
import com.netflix.genie.web.agent.services.AgentMetricsService;
import com.netflix.genie.web.agent.services.AgentRoutingService;
import com.netflix.genie.web.agent.services.impl.AgentConnectionTrackingServiceImpl;
import com.netflix.genie.web.agent.services.impl.AgentFilterServiceImpl;
import com.netflix.genie.web.agent.services.impl.AgentJobServiceImpl;
import com.netflix.genie.web.agent.services.impl.AgentMetricsServiceImpl;
import com.netflix.genie.web.agent.services.impl.AgentRoutingServiceImpl;
import com.netflix.genie.web.data.services.AgentConnectionPersistenceService;
import com.netflix.genie.web.data.services.DataServices;
import com.netflix.genie.web.services.JobResolverService;
import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.TaskScheduler;

import java.util.List;

/**
 * Auto configuration for services needed in the {@literal agent} module.
 *
 * @author tgianos
 * @since 4.0.0
 */
@Configuration
public class AgentServicesAutoConfiguration {

    /**
     * Get a {@link AgentJobService} instance if there isn't already one.
     *
     * @param dataServices       The {@link DataServices} instance to use
     * @param jobResolverService The specification service to use
     * @param agentFilterService The agent filter service to use
     * @param meterRegistry      The metrics registry to use
     * @return An {@link AgentJobServiceImpl} instance.
     */
    @Bean
    @ConditionalOnMissingBean(AgentJobService.class)
    public AgentJobServiceImpl agentJobService(
        final DataServices dataServices,
        final JobResolverService jobResolverService,
        final AgentFilterService agentFilterService,
        final MeterRegistry meterRegistry
    ) {
        return new AgentJobServiceImpl(
            dataServices,
            jobResolverService,
            agentFilterService,
            meterRegistry
        );
    }

    /**
     * Get an implementation of {@link AgentConnectionTrackingService} if one hasn't already been defined.
* * @param agentRoutingService the agent routing service * @param taskScheduler the task scheduler * @return A {@link AgentConnectionTrackingServiceImpl} instance */ @Bean @ConditionalOnMissingBean(AgentConnectionTrackingService.class) public AgentConnectionTrackingService agentConnectionTrackingService( final AgentRoutingService agentRoutingService, @Qualifier("genieTaskScheduler") final TaskScheduler taskScheduler ) { return new AgentConnectionTrackingServiceImpl( agentRoutingService, taskScheduler ); } /** * Get an implementation of {@link AgentRoutingService} if one hasn't already been defined. * * @param agentConnectionPersistenceService The persistence service to use for agent connections * @param genieHostInfo The local genie host information * @return A {@link AgentRoutingServiceImpl} instance */ @Bean @ConditionalOnMissingBean(AgentRoutingService.class) public AgentRoutingServiceImpl agentRoutingService( final AgentConnectionPersistenceService agentConnectionPersistenceService, final GenieHostInfo genieHostInfo ) { return new AgentRoutingServiceImpl( agentConnectionPersistenceService, genieHostInfo ); } /** * A {@link AgentFilterService} implementation that federates the decision to a set of * {@link AgentMetadataInspector}s. * * @param agentMetadataInspectorsList the list of inspectors. * @return An {@link AgentFilterService} instance. */ @Bean @ConditionalOnMissingBean(AgentFilterService.class) public AgentFilterServiceImpl agentFilterService( final List<AgentMetadataInspector> agentMetadataInspectorsList ) { return new AgentFilterServiceImpl(agentMetadataInspectorsList); } /** * Provide an implementation of {@link AgentMetricsService} if one hasn't been provided. * * @param genieHostInfo The Genie host information * @param agentConnectionPersistenceService Implementation of {@link AgentConnectionPersistenceService} to get * information about running agents in the ecosystem * @param registry The metrics repository * @return An instance of {@link AgentMetricsServiceImpl} */ @Bean @ConditionalOnMissingBean(AgentMetricsService.class) public AgentMetricsServiceImpl agentMetricsService( final GenieHostInfo genieHostInfo, final AgentConnectionPersistenceService agentConnectionPersistenceService, final MeterRegistry registry ) { return new AgentMetricsServiceImpl(genieHostInfo, agentConnectionPersistenceService, registry); } }
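As the @ConditionalOnMissingBean annotations above imply, any of these defaults can be replaced simply by defining a bean of the same service type elsewhere. A hypothetical override sketch (the configuration class name is invented; it reuses only constructors visible in the file above):

import java.util.Collections;

import com.netflix.genie.web.agent.services.AgentFilterService;
import com.netflix.genie.web.agent.services.impl.AgentFilterServiceImpl;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

/**
 * Hypothetical user configuration. Because this defines a bean of type
 * AgentFilterService, the default guarded by @ConditionalOnMissingBean in
 * AgentServicesAutoConfiguration backs off and this bean is used instead.
 */
@Configuration
public class CustomAgentServicesConfig {

    @Bean
    public AgentFilterService agentFilterService() {
        // Reuse the shipped implementation, but with an explicitly empty
        // inspector list; what an empty list means in practice depends on
        // the implementation's filtering semantics.
        return new AgentFilterServiceImpl(Collections.emptyList());
    }
}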
//===-- XCoreTargetInfo.cpp - XCore Target Implementation -----------------===// // // Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions. // See https://llvm.org/LICENSE.txt for license information. // SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception // //===----------------------------------------------------------------------===// #include "TargetInfo/XCoreTargetInfo.h" #include "llvm/MC/TargetRegistry.h" using namespace llvm; Target &llvm::getTheXCoreTarget() { static Target TheXCoreTarget; return TheXCoreTarget; } extern "C" LLVM_EXTERNAL_VISIBILITY void LLVMInitializeXCoreTargetInfo() { RegisterTarget<Triple::xcore> X(getTheXCoreTarget(), "xcore", "XCore", "XCore"); }
Emotional reaction recognition from EEG

In this study we explore the application of pattern recognition models to recognizing emotional reactions elicited by videos, using electroencephalography (EEG). We show that both the presence and the magnitude of each emotion can be predicted above chance levels, with up to 88% accuracy. Furthermore, we show that there are differences in classifiability across emotions and participants, but whether a participant's data can be classified with respect to different emotions can itself be predicted from their EEG.

Index Terms - Emotion recognition, electroencephalography (EEG), pattern recognition, classification, regression, individual differences, affective computing.
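For readers who want to make "above chance" precise: one common check is an exact binomial test on the number of correct classifications against the chance level. A small self-contained sketch with hypothetical counts (the 88% figure from the abstract is reused only as an example; this is not the evaluation procedure from the study):

public class ChanceLevelTest {

    // P(X >= k) for X ~ Binomial(n, p), computed with log-space terms
    // for numerical stability.
    static double binomialTailAtLeast(int k, int n, double p) {
        double tail = 0.0;
        for (int i = k; i <= n; i++) {
            double logTerm = logChoose(n, i) + i * Math.log(p) + (n - i) * Math.log(1 - p);
            tail += Math.exp(logTerm);
        }
        return tail;
    }

    // log of the binomial coefficient C(n, k) = product_{i=1..k} (n-k+i)/i.
    static double logChoose(int n, int k) {
        double s = 0.0;
        for (int i = 1; i <= k; i++) {
            s += Math.log(n - k + i) - Math.log(i);
        }
        return s;
    }

    public static void main(String[] args) {
        int n = 100, correct = 88;   // hypothetical trials / correct predictions
        double chance = 0.5;         // chance level for a balanced binary problem
        double pValue = binomialTailAtLeast(correct, n, chance);
        System.out.printf("accuracy %.2f, one-sided p = %.3g%n",
                correct / (double) n, pValue);
    }
}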
Involvement of stakeholders during the preparedness phase of post-accident situation management.

The Steering Committee for Post-accident Management Preparedness (CODIRPA) was commissioned by the French Government in 2005 with the aim of establishing the main principles for population protection and recovery in the long term. From the beginning, one of its defining principles was the pluralistic nature of the working groups (WGs), which include scientific and technical experts, representatives from state departments, nuclear operators, and representatives of civil society (i.e. stakeholders). Stakeholders were mainly associated with the various WGs of CODIRPA. In order to foster the involvement of stakeholders from civil society in the work of CODIRPA, a new organisation was implemented with two WGs: one composed mainly of technical experts to tackle technical issues, and one to evaluate the experts' proposals from the stakeholders' point of view. This article presents the results of this new strategy.
// NewRCByName creates a replication controller that selects its pods by the given name.
func NewRCByName(c clientset.Interface, ns, name string, replicas int32, gracePeriod *int64) (*v1.ReplicationController, error) {
	By(fmt.Sprintf("creating replication controller %s", name))
	// RcByNamePort builds an RC spec serving the hostname image on port 9376/TCP.
	return c.Core().ReplicationControllers(ns).Create(framework.RcByNamePort(
		name, replicas, framework.ServeHostnameImage, 9376, v1.ProtocolTCP,
		map[string]string{}, gracePeriod))
}
NREL and Clemson University Drivetrain Test Facility Collaboration: Cooperative Research and Development Final Report, CRADA Number CRD-13-509

Abstract of CRADA Work: The National Renewable Energy Laboratory (NREL) and Clemson University have a mutual interest in collaborating in the development of wind turbine drivetrain testing facilities and grid simulators. NREL and Clemson University will work together to share resources and experiences in the development of future wind energy test facilities and capabilities. The CRADA includes sharing of facility topologies and capabilities, modeling efforts, commissioning plans, test protocols, infrastructure cost data, test plans, pro forma contracting instruments, and safe operating strategies. Further, NREL and Clemson University will exchange staff for training and development purposes, including collaborative participation in commissioning and customer testing. DOE has provided NREL with over 10 years of support in developing custom facilities and capabilities to enable testing of full-scale integrated wind turbine drivetrain systems in accordance with the needs of the US wind industry. NREL currently operates a 2.5 MW dynamometer and is in the process of building a 5 MW dynamometer and a grid simulator (referred to as a Controllable Grid Interface, or CGI). Clemson University is currently building a drivetrain testing facility with two dynamometers, 7.5 MW and 15 MW, and a 15 MW hardware-in-the-loop grid simulator.
package com.example.myapplication3;

import androidx.annotation.Nullable;
import androidx.appcompat.app.AppCompatActivity;
import androidx.recyclerview.widget.LinearLayoutManager;
import androidx.recyclerview.widget.RecyclerView;

import android.app.ProgressDialog;
import android.os.Bundle;
import android.util.Log;

import com.google.firebase.firestore.DocumentChange;
import com.google.firebase.firestore.EventListener;
import com.google.firebase.firestore.FirebaseFirestore;
import com.google.firebase.firestore.FirebaseFirestoreException;
import com.google.firebase.firestore.Query;
import com.google.firebase.firestore.QuerySnapshot;

import java.util.ArrayList;

public class MainActivity extends AppCompatActivity {

    RecyclerView recyclerView;
    ArrayList<Products> productsArrayList;
    MyAdapter myAdapter;
    FirebaseFirestore db;
    ProgressDialog progressDialog;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        progressDialog = new ProgressDialog(this);
        progressDialog.setCancelable(false);
        progressDialog.setMessage("Loading...");
        progressDialog.show();

        recyclerView = findViewById(R.id.recyclerVIew);
        recyclerView.setHasFixedSize(true);
        recyclerView.setLayoutManager(new LinearLayoutManager(this));

        db = FirebaseFirestore.getInstance();
        productsArrayList = new ArrayList<Products>();
        myAdapter = new MyAdapter(MainActivity.this, productsArrayList);
        recyclerView.setAdapter(myAdapter);

        eventChangeListener();
    }

    // Listen for changes on the "Products" collection and keep the RecyclerView in sync.
    private void eventChangeListener() {
        db.collection("Products").orderBy("addDate", Query.Direction.ASCENDING)
                .addSnapshotListener(new EventListener<QuerySnapshot>() {
                    @Override
                    public void onEvent(@Nullable QuerySnapshot value, @Nullable FirebaseFirestoreException error) {
                        if (error != null) {
                            if (progressDialog.isShowing())
                                progressDialog.dismiss();
                            Log.e("Firestore error", error.getMessage());
                            return;
                        }
                        for (DocumentChange dc : value.getDocumentChanges()) {
                            if (dc.getType() == DocumentChange.Type.ADDED) {
                                productsArrayList.add(dc.getDocument().toObject(Products.class));
                            }
                        }
                        // Notify once per snapshot, after all changes have been applied.
                        myAdapter.notifyDataSetChanged();
                        if (progressDialog.isShowing())
                            progressDialog.dismiss();
                    }
                });
    }
}
"""
Author : <NAME>
Date   : 25th February, 2018
Email  : <EMAIL>
"""
from sklearn.base import BaseEstimator
from sklearn.utils import check_array

from utils.graph import Adjacency
from utils.embeddings import graph_embedding


class LaplacianEigenmaps(BaseEstimator):
    """Scikit-Learn compatible implementation of the Laplacian Eigenmaps algorithm.

    Parameters
    ----------
    n_components : int, (default=2)
        number of coordinates for the learned embedding

    n_neighbors : int, (default=10)
        number of neighbors for constructing the adjacency matrix

    constraint : str, (default='degree')
        the constraint matrix used ['degree', 'identity']

    adjacency_kwargs : dict, (default=None)
        dictionary of kwargs for the adjacency matrix construction;
        see 'graph.py' for more details

    neighbors_kwargs : dict, (default=None)
        dictionary of kwargs for the nearest-neighbors search

    eigensolver_kwargs : dict, (default=None)
        dictionary of kwargs used to solve for the eigenvalues
    """
    def __init__(self, n_components=2, n_neighbors=10, constraint='degree',
                 adjacency_kwargs=None, neighbors_kwargs=None,
                 eigensolver_kwargs=None):
        self.n_components = n_components
        self.n_neighbors = n_neighbors
        self.constraint = constraint
        self.adjacency_kwargs = adjacency_kwargs
        self.neighbors_kwargs = neighbors_kwargs
        self.eigensolver_kwargs = eigensolver_kwargs

    def fit(self, X, y=None):
        # Check the X array
        X = check_array(X)

        if self.adjacency_kwargs is None:
            self.adjacency_kwargs = {
                'algorithm': 'brute',
                'metric': 'euclidean',
                'mode': 'distance',
                'method': 'knn',
                'weight': 'heat',
                'gamma': 1.0 / X.shape[0]
            }

        if self.eigensolver_kwargs is None:
            self.eigensolver_kwargs = {}

        GetAdjacency = Adjacency(n_neighbors=self.n_neighbors,
                                 **self.adjacency_kwargs)

        # Compute the adjacency matrix for X
        adjacency_matrix = GetAdjacency.create_adjacency(X)

        # Compute the graph embedding (eigenvalues and embedding coordinates)
        self.eigenvalues, self.embedding_ = \
            graph_embedding(
                adjacency_matrix,
                n_components=self.n_components,
                operator='lap',
                constraint=self.constraint
            )

        return self

    def fit_transform(self, X, y=None):
        X = check_array(X)
        self.fit(X)
        return self.embedding_


def swiss_roll_test():
    import matplotlib.pyplot as plt
    plt.style.use('ggplot')
    from time import time
    from sklearn import datasets
    from sklearn.manifold import SpectralEmbedding

    n_points = 1000
    X, color = datasets.samples_generator.make_s_curve(n_points, random_state=0)
    n_neighbors = 20
    n_components = 2

    # scikit-learn LE algorithm
    t0 = time()
    ml_model = SpectralEmbedding(n_neighbors=n_neighbors,
                                 n_components=n_components)
    Y = ml_model.fit_transform(X)
    t1 = time()

    # 2d projection
    fig, ax = plt.subplots(nrows=2, ncols=1, figsize=(5, 10))
    ax[0].scatter(Y[:, 0], Y[:, 1], c=color, label='scikit')
    ax[0].set_title('Sklearn-LE: {t:.2g}'.format(t=t1 - t0))

    # My Laplacian Eigenmaps algorithm
    t0 = time()
    ml_model = LaplacianEigenmaps(n_components=n_components,
                                  n_neighbors=n_neighbors,
                                  constraint='degree')
    Y = ml_model.fit_transform(X)
    t1 = time()

    ax[1].scatter(Y[:, 0], Y[:, 1], c=color, label='My Algorithm')
    ax[1].set_title('My Algorithm: {t:.2g}'.format(t=t1 - t0))
    plt.show()

    return None


if __name__ == "__main__":
    swiss_roll_test()
The impact of occupational injury on injured worker and family: outcomes of upper extremity cumulative trauma disorders in Maryland workers.

BACKGROUND: Surveys have identified a dramatically rising incidence of work-related upper extremity cumulative trauma disorders (UECTDs). Outcome studies have addressed time lost from work and the cost of compensation, omitting other significant consequences. We assess health, functional, and family outcomes.

METHODS: We identified 537 Workers' Compensation UECTD claimants. A computer-assisted telephone questionnaire was used to elicit symptom prevalence, functional impairment, depressive symptoms (CES-D scale), and employment status.

RESULTS: One to 4 years post-claim, respondents reported persistent symptoms severe enough to interfere with work (53%), home/recreation activities (64%), and sleep (44%). Only 64% of responses to the activities-of-daily-living scale items indicated "normal" function. Job loss was reported by 38% of respondents, and depressive symptoms by 31%.

CONCLUSIONS: Work-related UECTDs result in persisting symptoms and difficulty in performing simple activities of daily living, impacting home life even more than work. Job loss, symptoms of depression, and family disruption were common.
package com.example.android.sangamnagri;

import android.os.Bundle;
import android.support.design.widget.TabLayout;
import android.support.v4.view.ViewPager;
import android.support.v7.app.AppCompatActivity;
import android.support.v7.widget.Toolbar;

public class HomeActivity extends AppCompatActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_home);

        // Set custom ToolBar
        Toolbar toolbar = findViewById(R.id.toolbar);
        setSupportActionBar(toolbar);

        // Set custom adapter to the ViewPager
        HomeFragmentPagerAdapter homeFragmentPagerAdapter = new HomeFragmentPagerAdapter(getSupportFragmentManager());
        ViewPager home = findViewById(R.id.viewPager);
        home.setAdapter(homeFragmentPagerAdapter);

        // Set TabLayout and wire up listeners for tab selection and page changes
        TabLayout tabLayout = findViewById(R.id.tabs);
        home.addOnPageChangeListener(new TabLayout.TabLayoutOnPageChangeListener(tabLayout));
        tabLayout.addOnTabSelectedListener(new TabLayout.ViewPagerOnTabSelectedListener(home));
    }
}
Polarized Photocathodes Make the Grade

Authors: Jym Clendenin and Takashi Maruyama, SLAC

Future linear colliders will require high levels of performance from their electron sources. A group at SLAC has recently tested a structure that substantially exceeds current collider polarized electron source pulse-profile requirements.

A polarized electron source for future electron-positron linear colliders must have at least 80% polarization and high operational efficiency. The source must also meet the collider pulse profile requirements (charge, charge distribution and repetition rate). Recent results from the Stanford Linear Accelerator Center (SLAC) have demonstrated for the first time that the profile required for a high-polarization beam can be produced.

Since the introduction in 1978 of semiconductor photocathodes for accelerator applications, there has been significant progress in improving their performance. Currently, all polarized electron sources used for accelerated beams share several common design features: the use of negative-electron-affinity semiconductor photocathodes excited by a laser matched to the semiconductor band gap, the cathode biased at between -60 and -120 kV DC, and a carefully designed vacuum system. While the earliest polarizations achieved were much less than 50%, several accelerator centers, including Jefferson Lab, MIT Bates and SLAC in the US, along with Bonn and Mainz in Germany, now routinely achieve polarizations of around 80%.

Source efficiencies have shown similar dramatic improvement. The Stanford Linear Collider (SLC) achieved more than 95% overall availability of the polarized beam across nearly seven years of continuous operation. These achievements clearly point to the viability of polarized beams for future colliders.

Peak currents of up to 10 A were routinely produced in 1991 in the SLC Gun Test Laboratory by using the 2 ns pulse from a doubled Nd:YAG laser to fully illuminate the 14 mm diameter active area of a GaAs photocathode. However, when the photocathode gun was moved to the linac injector, where a high-peak-energy pulsed laser was available that could be tuned to the band gap energy as required for high polarization, the current extracted from the cathode was found to saturate at much less than 5 A unless the cathode quantum efficiency (QE) was very high.

The SLC required a source pulse structure of about 8 nC in each of two bunches separated by some 60 ns at a maximum repetition rate of 120 Hz. These requirements were met by doubling the cathode area and by using a vacuum load-lock to insure a high QE when installing newly activated cathodes. In contrast, designs for the Next Linear Collider and Japan Linear Collider, being pursued by SLAC and the KEK laboratory in Japan, call for a train of 190 microbunches separated by 1.4 ns, with each bunch having a 2.2 nC charge at the source, for a total of 420 nC for the 266 ns macropulse. This is about 25 times the SLC maximum charge. Both the macrobunch and microbunch current requirements for CERN's CLIC concept are somewhat higher, while the 337 ns spacing between microbunches insures that charge will not be a limitation for the TESLA collider being spearheaded by Germany's DESY laboratory.
The limitation in peak current density, which has become known as the surface charge limit (SCL), proved difficult to overcome. Simply designing a semiconductor structure with a high quantum yield was not a solution because the polarization tended to vary inversely with the maximum yield.

Gradient doping

As early as 1992, a group from KEK, Nagoya University and the NEC company designed a GaAs-AlGaAs superlattice with a thin, very-highly-doped surface layer and a lower density doping in the remaining active layer - a technique called gradient doping. The very high doping aids the recombination of the minority carriers trapped at the surface that increase the surface barrier in proportion to the arrival rate of photoexcited conduction band (CB) electrons. Because CB electrons depolarize as they diffuse to the surface of heavily doped materials, the highly doped layer must be very thin, typically no more than a few nanometers. When tested at Nagoya and SLAC, this cathode design yielded promising results in which a charge of 32 nC in a 2 ns bunch was extracted from a 14 mm diameter area, limited by the space charge limit of the 120 kV gun at SLAC.

In 1998 a group from KEK, Nagoya, NEC and Osaka University applied the gradient-doping technique to a strained InGaAs-AlGaAs superlattice structure. They retained 73% polarization while demonstrating the absence of the SCL in a string of four 12 ns microbunches, spaced 25 ns apart, up to the 20 nC space charge limit of the 70 kV gun. In a more recent experiment using a gradient-doped GaAs-GaAsP superlattice, they extracted 1 nC for each of a pair of 0.7 ns bunches separated by 2.8 ns without any sign of the SCL, before reaching the space charge limit of the 50 kV gun. The polarization and QE were 80 and 0.4% respectively. Other groups, notably at Stanford University, St. Petersburg Technical University and the Institute for Semiconductor Physics at Novosibirsk, have also made significant contributions to solving the SCL problem.

A group at SLAC has recently applied the gradient-doping technique to a single strained-layer GaAs-GaAsP structure with results that substantially exceed current collider requirements. These results both complement and extend the 1998 Japanese results. The highly doped surface layer was estimated to be 10 nm thick. To compensate for an increase in the bandgap that resulted from the increased dopant concentration, 5% phosphorus (P) was added to the active layer and the percentage of P in the base layer was increased to maintain the desired degree of lattice strain at the interface. Adding P in the active layer shifts the bandgap by about 50 meV towards the blue, reaching 1.55 eV (800 nm). In combination with the reduction of the surface barrier, this ensured a high QE of about 0.3% at the polarization peak. This is similar to the QE of the standard SLC strained GaAs-GaAsP cathodes.

Two laser systems were used to determine the peak charge. A flashlamp-pumped Ti:sapphire (flash-Ti) system provided flat pulses up to several hundred nanoseconds long with a maximum energy of about 2 mJ/ns. In addition, up to 20 mJ in a 4 ns pulse was available from a Q-switched, cavity-dumped, YAG-pumped Ti:sapphire (YAG-Ti) laser. With the flash-Ti alone, the charge increased linearly with the laser energy up to the maximum available laser energy. Because of the finite relaxation time of the SCL, a flat pulse is a much more stringent test of the SCL than if it contained a microstructure.
The peak charge per unit time (see graph) is only slightly lower than the NLC requirement for each microbunch when assuming a 0.5 ns full bunch-width. By extending the laser pulse to 370 ns, a charge of 1280 nC was extracted, far exceeding the NLC macropulse requirement. To determine if the peak charge required for a microbunch would be charge-limited, the YAG-Ti laser pulse was superimposed on the flash-Ti pulse. The resulting charge increment was consistent with the charge obtained using the YAG-Ti alone. The charge increment was independent of the relative temporal positions of the two laser pulses indicating that the massive total charge of an NLC, JLC or CLIC macropulse will not inhibit the peak charge required for each microbunch. The maximum charge produced by the YAG-Ti alone was 37 nC, which is more than 15 times the NLC requirement for a single microbunch. To increase the charge density the laser spot on the cathode was reduced to 14 mm, below which the bunch is space-charge-limited for the maximum laser energy. Again, the charge increased linearly with the laser energy. The linearity remained when the quantum yield was allowed to decrease although, of course, the maximum charge also decreased. Thus it is clear that if sufficient laser energy is available, the linearity of the charge increase will be maintained for total charge and peak charge per unit time when using the new SLAC cathode design and will exceed NLC, JLC, and CLIC requirements. The new SLAC cathode was used in the polarized source for a recent high-energy physics experiment requiring 80 nC at the source in a 300 ns pulse. The improved charge performance provided the headroom necessary for temporal shaping of the laser pulse to allow adequate compensation for energy beam loading effects in the 50 GeV linac. The polarization measured at 50 GeV confirmed the greater than 80% polarization measured in the source development laboratory at 120 keV. The international effort to improve polarized photocathodes will continue. For instance, tests for the surface charge limit at the very high current densities required by lowemittance guns have yet to be performed. On a broader front, the superlattice structurein part because of the large number of parameters that the designer can vary -appears to be the best candidate for achieving a significantly higher polarization while maintaining a QE above 0.1%. The high-gradient GaAs-GaAsP cathode structure, thickness and dopant density that was used for SLAC's polarized photoca ode experiment. th \ The charge in the electron bunch measured at the electron source as a function of laser energy using a 100 ns pulse with no microstructure: (a) QE of 0.31% and fully illuminated cathode diameter of 20 mm; (b) 0.25% and 14 mm; (c) SLC cathode shown for comparison: 300 ns pulse with QE of about 0.2% (at 10 nC) and 20 mm diameter.
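As a back-of-the-envelope cross-check on the linear charge-versus-energy behaviour described above (this relation is our addition, not taken from the article): below both the space charge and surface charge limits, extraction is simply photon counting,

    Q = e \cdot \mathrm{QE} \cdot \frac{E \, \lambda}{h \, c},

so at \lambda = 800 nm and QE = 0.3%, the 2.2 nC NLC microbunch charge corresponds to a laser pulse energy of only E = Q h c / (e \cdot \mathrm{QE} \cdot \lambda) \approx 1.1 μJ.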
Stability Enhancement using Hybrid Power System Stabilizer Auto Tuned by Breeder Genetic Algorithm

The design and implementation of a Power System Stabilizer (PSS) in a multimachine power system based on an innovative evolutionary algorithm, namely the Breeder Genetic Algorithm (BGA) with Adaptive Mutation, is described in this paper. For comparison, a Conventional Power System Stabilizer and a Conventional Genetic Algorithm based Power System Stabilizer are designed and implemented in the same system. Simulation results on a multimachine system subjected to small perturbations and a three-phase fault demonstrate the effectiveness and robustness of the proposed PSS over a wide range of operating conditions and system configurations. The results show that Adaptive Mutation BGAs are well suited for optimal tuning of PSS parameters and that they work better than the Conventional Genetic Algorithm, since they are designed to work on a continuous domain. The effectiveness and feasibility of the proposed Power System Stabilizer are demonstrated on a three-machine nine-bus WSCC system and the New England 10-machine system, showing better results than the Conventional Genetic Algorithm.
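The paper's exact BGA formulation is not given in this abstract; the following is only a generic sketch of a breeder genetic algorithm (truncation selection, intermediate recombination, and a mutation range that adapts over the run), with the PSS tuning objective left as a hypothetical stub:

import numpy as np

rng = np.random.default_rng(0)


def breeder_ga(fitness, lo, hi, pop_size=50, truncation=0.3,
               generations=100, mutation_range=0.1):
    """Minimize `fitness` over the box [lo, hi]; returns the best parameter vector.
    Breeder GA: keep the top `truncation` fraction, breed by intermediate
    recombination, then apply a mutation whose range shrinks as the run proceeds."""
    dim = lo.size
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    for gen in range(generations):
        scores = np.array([fitness(x) for x in pop])
        elite = pop[np.argsort(scores)[: max(2, int(truncation * pop_size))]]
        # Adaptive mutation: the mutation range decays as the search converges.
        mut = mutation_range * (hi - lo) * (1.0 - gen / generations)
        children = []
        while len(children) < pop_size - 1:
            a, b = elite[rng.choice(len(elite), 2, replace=False)]
            alpha = rng.uniform(-0.25, 1.25, size=dim)   # intermediate recombination
            child = a + alpha * (b - a)
            child += rng.uniform(-1, 1, size=dim) * mut  # adaptive mutation step
            children.append(np.clip(child, lo, hi))
        pop = np.vstack([elite[0], children])            # keep the incumbent best
    return pop[np.argmin([fitness(x) for x in pop])]


# Hypothetical use: tune a PSS gain and lead/lag time constants to minimize an
# ITAE-style index computed from a small-signal simulation (not shown here).
# best = breeder_ga(itae_index, lo=np.array([0.1, 0.01, 0.01]),
#                   hi=np.array([50.0, 1.0, 1.0]))

Since PSS gains and time constants are real-valued, a continuous-domain search of this kind avoids the quantization imposed by the binary encoding of a conventional GA, which appears to be the advantage the abstract alludes to.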
package io.deephaven.db.v2.by.ssmcountdistinct;

import io.deephaven.db.tables.dbarrays.DbCharArray;
import io.deephaven.db.v2.sources.AbstractColumnSource;
import io.deephaven.db.v2.sources.ColumnSourceGetDefaults;
import io.deephaven.db.v2.sources.MutableColumnSourceGetDefaults;
import io.deephaven.db.v2.sources.ObjectArraySource;
import io.deephaven.db.v2.ssms.CharSegmentedSortedMultiset;
import io.deephaven.db.v2.utils.Index;

/**
 * A {@link SsmBackedColumnSource} for Characters.
 */
public class CharSsmBackedSource extends AbstractColumnSource<DbCharArray>
        implements ColumnSourceGetDefaults.ForObject<DbCharArray>,
        MutableColumnSourceGetDefaults.ForObject<DbCharArray>,
        SsmBackedColumnSource<CharSegmentedSortedMultiset, DbCharArray> {

    private final ObjectArraySource<CharSegmentedSortedMultiset> underlying;
    private boolean trackingPrevious = false;

    //region Constructor
    public CharSsmBackedSource() {
        super(DbCharArray.class, char.class);
        underlying = new ObjectArraySource<>(CharSegmentedSortedMultiset.class, char.class);
    }
    //endregion Constructor

    //region SsmBackedColumnSource
    @Override
    public CharSegmentedSortedMultiset getOrCreate(long key) {
        CharSegmentedSortedMultiset ssm = underlying.getUnsafe(key);
        if (ssm == null) {
            //region CreateNew
            underlying.set(key, ssm = new CharSegmentedSortedMultiset(DistinctOperatorFactory.NODE_SIZE));
            //endregion CreateNew
        }
        ssm.setTrackDeltas(trackingPrevious);
        return ssm;
    }

    @Override
    public CharSegmentedSortedMultiset getCurrentSsm(long key) {
        return underlying.getUnsafe(key);
    }

    @Override
    public void clear(long key) {
        underlying.set(key, null);
    }

    @Override
    public void ensureCapacity(long capacity) {
        underlying.ensureCapacity(capacity);
    }

    @Override
    public ObjectArraySource<CharSegmentedSortedMultiset> getUnderlyingSource() {
        return underlying;
    }
    //endregion

    @Override
    public boolean isImmutable() {
        return false;
    }

    @Override
    public DbCharArray get(long index) {
        return underlying.get(index);
    }

    @Override
    public DbCharArray getPrev(long index) {
        final CharSegmentedSortedMultiset maybePrev = underlying.getPrev(index);
        return maybePrev == null ? null : maybePrev.getPrevValues();
    }

    @Override
    public void startTrackingPrevValues() {
        trackingPrevious = true;
        underlying.startTrackingPrevValues();
    }

    @Override
    public void clearDeltas(Index indices) {
        indices.iterator().forEachLong(key -> {
            final CharSegmentedSortedMultiset ssm = getCurrentSsm(key);
            if (ssm != null) {
                ssm.clearDeltas();
            }
            return true;
        });
    }
}
Diffusion of platinum ions and platinum nanoparticles during photoreduction processes using the transient grating method.

The photoreduction process of PtCl6(2-) to Pt nanoparticles in poly(N-vinyl-2-pyrrolidone) solutions upon UV light irradiation was investigated by monitoring the change in the diffusion coefficient (D). The D values of the chemical species during UV irradiation were measured by the laser-induced transient grating (TG) method. The TG signal of the PtCl6(2-) solution before UV irradiation was composed of three kinds of contributions: the thermal grating, the species grating due to the creation of PtCl4(2-), and the species grating due to the depletion of PtCl6(2-). Upon UV irradiation of the solution, the species grating signal due to PtCl6(2-) diminished and then the TG signal of Pt nanoparticles gradually appeared. This result indicates that the gradual clustering of Pt0 atoms into Pt nanoparticles occurs after all PtCl6(2-) ions are photochemically reduced to PtCl4(2-) and subsequently transformed to Pt0 atoms with a short delay. With increasing time of UV irradiation, the TG signal intensity increased, while D of the Pt nanoparticles did not change. This suggests that the number of Pt nanoparticles increases, but the size of the Pt nanoparticles with the polymer layer is unchanged, in the course of the UV irradiation.
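For context (a standard transient-grating relation, our addition rather than part of the abstract): each species grating decays exponentially at a rate D_i q^2 set by its diffusion coefficient and the grating wavenumber, which is how D is extracted from the signal:

    I_{\mathrm{TG}}(t) \propto \Big[\sum_i A_i \, e^{-D_i q^2 t}\Big]^2, \qquad q = \frac{2\pi}{\Lambda},

where \Lambda is the fringe spacing of the crossed excitation beams and A_i is the (signed) refractive-index amplitude contributed by species i.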
Superabrasive grit comprised of diamond and cubic boron nitride ("CBN") is widely used in sawing, drilling, dressing, and grinding applications. The grit is typically held in a matrix of nickel, copper, iron, cobalt, or tin, or alloys thereof, by mechanical bonds, and the matrix is connected to a tool body. The matrix can also comprise a resin, such as phenol formaldehyde. In mechanical bonding, the matrix surrounds the grit, holding them in place. While simple and practical, mechanical bonds are relatively weak and the grit can be easily lost as the surrounding matrix is abraded away during use. Grit retention can be improved by limiting the exposure of the grit by the matrix, but this decreases cutting efficiency. In a typical saw blade application, the average exposure of diamond grit is less than 20% of the total grit height. Grit loss can become a serious problem when the supporting matrix is worn down such that over one-third of the grit is exposed. The average lifetime of such cutting tools is decreased as up to two-thirds of the original diamond grit is prematurely lost.

In an attempt to improve grit retention, diamond particles have been coated with carbide-forming transition metals. For example, U.S. Pat. No. 3,650,714, to Farkas, discloses a method of coating diamond particles with a thin titanium or zirconium layer, typically up to 5% by volume of the diamond, by metal vapor phase deposition. The coating's inner surface forms a carbide with the diamond. A second layer of a less oxidizable metal, such as nickel or copper, can then be applied to protect the inner layer from oxidation. Diamond particles coated by titanium are commercially available from DeBeers and General Electric. Tensile testing of double-layer coated diamond having an inner layer such as titanium or chromium and an outer layer such as nickel shows that fracturing occurs at the interface between the inner and outer metal layers. This suggests that nickel does not alloy or otherwise bond well with the underlying carbide and that double-layer coated grits according to Farkas may not significantly improve overall grit retention. Bonding can also be weakened by oxidation of the inner titanium or chromium layers during the nickel coating process.

In U.S. Pat. No. 4,339,167, Pipkin discloses metal coating diamond or CBN particles with titanium, manganese, chromium, vanadium, tungsten, molybdenum or niobium by metal vapor deposition. It has been found, however, that the carbide formers chosen by Pipkin do not bond strongly enough to the diamond crystals to improve their grit retention for many high stress applications, or are susceptible to oxidation. As discussed above, the outer metal layers used to protect inner layers from oxidation do not adequately bond to the inner layer.

U.S. Pat. No. 4,378,975, to Tomlinson, describes a first, thin metal coating up to 10% by volume of the particle and a second, wear-resistant coating of between one and two times the radius of the particle. The inner coating is preferably chromium and the outer coating is a nickel-iron based alloy, a metal bonded carbide or boride, or silicon carbide. A metal bonded carbide is typically a mixture of a metal or alloy and a carbide. Although not elaborated upon, a metal bonded carbide can be bonded to the first metal layer by metallurgical bonds between the metal of the metal bonded carbide and the first layer. There is essentially no direct chemical bonding between the carbide itself and the first layer. In U.S. Pat. No.
3,929,432, Caveney controls the duration of heat treatment to improve the bond between an inner titanium layer and diamond in a double coated diamond particle having an outer layer of nickel, or other alloying metals. The patent does not address the problem of weak bonding between the titanium and the outer alloying metal coating. Metal coatings have also been used in resin matrices to insulate the superabrasive particle, decreasing thermal degradation, and to improve the bonding of grit to the matrix. Nickel coated grit has improved retention in resin matrices over uncoated grit due to improved adhesion with the matrix. The bond between the grit and the nickel, however, is still weak and is a cause of grit loss.
//% weight=140 color=#bd0f7d icon="\uf26c" namespace VT100 { //% weight=100 blockId="id_setdisplay" block="set mode %mode | background color %bc | foreground color %fc" export function fn_setDisplay(mode: number, bc: number, fc: number): void { serial.writeString("\x1B[" + mode + ";" + bc + ";" + fc + "m"); } //% weight=95 blockId="id_moveto" block="move to x %x | y %y" export function fn_moveTo(x: number, y: number): void { serial.writeString("\x1B[" + y + ";" + x + "H"); } //% weight=90 blockId="id_showtext" block="show text %s" export function fn_showText(s: string): void { serial.writeString(s); } //% weight=85 blockId="id_shownumber" block="show number %n" export function fn_showNumber(n: number): void { serial.writeString(n.toString()); } //% weight=80 blockId="id_showascii" block="show ascii %n" export function fn_showAscii(n: number): void { serial.writeString(String.fromCharCode(n)); } //% weight=75 blockId="id_hidecursor" block="hide cursor" export function fn_hideCursor(): void { serial.writeString("\x1B[?25l"); } //% weight=70 blockId="id_showcursor" block="show cursor" export function fn_showCursor(): void { serial.writeString("\x1B[?25h"); } //% weight=65 blockId="id_erasescreen" block="erase terminal" export function fn_eraseScreen(): void { serial.writeString("\x1B[2J"); } //% weight=60 blockId="id_eraseup" block="erase up" export function fn_eraseUp(): void { serial.writeString("\x1B[1J"); } //% weight=55 blockId="id_erasedown" block="erase down" export function fn_eraseDown(): void { serial.writeString("\x1B[J"); } //% weight=50 blockId="id_eraseline" block="erase line" export function fn_eraseLine(): void { serial.writeString("\x1B[2K"); } //% weight=45 blockId="id_eraseright" block="erase right" export function fn_eraseRight(): void { serial.writeString("\x1B[K"); } //% weight=40 blockId="id_eraseleft" block="erase left" export function fn_eraseLeft(): void { serial.writeString("\x1B[1K"); } } //% weight=130 color=#00ff11 icon="\uf009" namespace Two_digits { let off = 0 let pos = 0 let digit = 0 let x = 0 let y = 0 let scroll = 0 let br = 0 let n_scroll = 0 let t_scroll = 0 let num = 0 let arr_digits: number[] = [] let arr_leds: number[] = [] function fn_set_digit() { for (let index = 0; index <= 9; index++) { if (index == digit) { arr_digits[index] = 0 } else { arr_digits[index] = 1 } } arr_leds[0] = arr_digits[1] arr_leds[1] = arr_digits[4] arr_leds[2] = arr_digits[1] * (arr_digits[2] * (arr_digits[3] * arr_digits[7])) arr_leds[3] = arr_digits[4] * (arr_digits[5] * arr_digits[6]) arr_leds[4] = arr_digits[1] * (arr_digits[7] * arr_digits[8]) arr_leds[5] = arr_digits[8] arr_leds[6] = arr_digits[1] * (arr_digits[3] * (arr_digits[4] * (arr_digits[5] * (arr_digits[7] * arr_digits[9])))) arr_leds[7] = arr_digits[2] arr_leds[8] = arr_digits[1] * (arr_digits[4] * arr_digits[7]) arr_leds[9] = 1 } function fn_show_sign() { led.plotBrightness(1, 2, br) led.plotBrightness(2, 2, br) led.plotBrightness(3, 2, br) } function fn_show_digit() { for (let index = 0; index <= 9; index++) { y = index / 2 x = index % 2 if (arr_leds[index] == 1) { led.plotBrightness(x + pos, y, br) } } } //% blockId="fn_show_num" block="show number %_num | brightness %_br | scroll (if > 2 digits) %_n_scroll | scroll delay %_t_scroll" export function fn_show_num(_num: number, _br: number, _n_scroll: number, _t_scroll: number) { num = _num br = _br n_scroll = _n_scroll t_scroll = _t_scroll arr_digits = [] arr_leds = [] for (let index = 0; index <= 9; index++) { arr_digits.push(0) arr_leds.push(0) } if 
(Math.abs(num) < 100) { scroll = 0 } else { scroll = 1 } for (let index2 = 0; index2 <= (n_scroll - 1) * scroll; index2++) { off = 5 * scroll if (num < 0) { basic.clearScreen() fn_show_sign() basic.pause(500) } for (let index = 0; index <= 20 * scroll; index++) { basic.clearScreen() off = (5 - index) * scroll pos = off - 0 if (Math.abs(num) >= 1000) { pos = pos + 0 digit = Math.abs(num) / 1000 fn_set_digit() fn_show_digit() pos = pos + 3 digit = Math.abs(num) / 100 % 10 fn_set_digit() fn_show_digit() pos = pos + 3 digit = Math.abs(num) / 10 % 10 fn_set_digit() fn_show_digit() pos = pos + 3 digit = Math.abs(num) % 10 fn_set_digit() fn_show_digit() } else if (Math.abs(num) >= 100) { pos = pos + 0 digit = Math.abs(num) / 100 % 10 fn_set_digit() fn_show_digit() pos = pos + 3 digit = Math.abs(num) / 10 % 10 fn_set_digit() fn_show_digit() pos = pos + 3 digit = Math.abs(num) % 10 fn_set_digit() fn_show_digit() } else { pos = pos + 0 digit = Math.abs(num) / 10 % 10 fn_set_digit() fn_show_digit() pos = pos + 3 digit = Math.abs(num) % 10 fn_set_digit() fn_show_digit() } basic.pause(t_scroll) } } } } //% weight=120 color=#0fbdae icon="\uf10c" namespace Bits { enum digit_value { //% block="zero" zero, //% block="one" one, //% block="complement" com } let hex_arr = "0123456789abcdef" let dec_num = 0 //% weight=100 blockId="id_raiseto" block="%base | raised to %exp" export function fn_raiseto(base: number, exp: number): number { return Math.pow(base, exp) } //% weight=90 blockId="id_getbit" block="get bit %pos | in %num" export function fn_getbit(pos: number, num: number): number { return (num >> pos) & 1 } //% weight=80 blockId="id_setbit" block="set bit %pos | in %num | to %dv" export function fn_setbit(pos: number, num: number, dv: digit_value): number { if (dv == digit_value.zero) return num & ((1 << pos) ^ 0xffff) else if (dv == digit_value.one) return num | (1 << pos) else return num ^ (1 << pos) } //% weight=70 blockId="id_hextodec" block="convert hexadecimal %hex_num | to decimal" export function fn_hextodec(hex_num: string): number { dec_num = 0 for (let index = 0; index <= hex_num.length - 1; index++) { let char = hex_num.charAt(hex_num.length - 1 - index) for (let index2 = 0; index2 <= 15; index2++) { if (char.compare(hex_arr.charAt(index2)) == 0) { dec_num = dec_num + index2 * Math.pow(16, index) } } } return dec_num } //% weight=60 blockId="id_bintodec" block="convert binary %bin_num | to decimal" export function fn_bintodec(bin_num: string): number { dec_num = 0 for (let index = 0; index <= bin_num.length - 1; index++) { let char = bin_num.charAt(bin_num.length - 1 - index) if (char.compare("1") == 0) { dec_num = dec_num + Math.pow(2, index) } } return dec_num } }
"These threatened strikes can only damage Ryanair's business in Germany, and if they continue, will lead to base cuts and job cuts for both German pilots and cabin crew, particularly at some secondary German bases," sad the Irish no-frills airline's chief marketing officer Kenny Jacobs in a statement. Without naming the German sites under threat, Jacobs said they are already loss-making during the winter season and could suffer greater losses if strikes continued. Condemning the Cockpit pilots union's call for a 24-hour walkout that is expected to involve 400 pilots and co-pilots in Germany, Jacobs also rejected the charge that the airline is underpaying its staff. Lufthansa's subsidiary Eurowings pay pilots 30 percent less, he claimed.
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertNull;

import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.JUnit4;
import org.mockito.InjectMocks;
import org.mockito.MockitoAnnotations;

/**
 * Unit tests for AuthorAction.
 */
@RunWith(JUnit4.class)
public class AuthorActionTests {

    @InjectMocks
    private AuthorAction handle;

    @Before
    public void beforeEachTest() {
        MockitoAnnotations.initMocks(this);
        assertNotNull(handle);
    }

    @Test
    public void variables() {
        handle.setAuthor(null);
        assertNull(handle.getAuthor());

        Author author = new Author("admin");
        handle.setAuthor(author);
        assertEquals("admin", handle.getAuthor().get_Id());
    }
}
A Mathematical Approach to Analyse Factors Influencing Adoption of Solar Based Power Production in Residential Buildings in Tamilnadu State of India

The objective of this research paper is to identify the important factors influencing the adoption of solar photovoltaic systems (SPVS) by individual households, and to determine whether adoption differs among households with differing income levels and places of residence, namely rural and urban. Further, TOPSIS is applied to rank the groups of households based on the influencing factors. Data was collected from respondents in rural and urban areas, living in their own and rented houses, and representing various income strata. It is found that people with higher income levels are generally more open to the idea of adopting SPVS, and that people living in rented houses are less likely to adopt SPVS than those living in their own houses, whether in urban or rural areas. The outcome of TOPSIS shows that respondents with high income, residing in their own houses in urban areas, rank first in willingness to adopt SPVS.
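The abstract does not reproduce the decision matrix; the following is a minimal sketch of the TOPSIS ranking step it describes, with a made-up matrix of household groups versus influencing factors (all numbers and weights are illustrative only):

import numpy as np


def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) against criteria (columns).
    benefit[j] = True if a higher score is better for criterion j."""
    m = matrix / np.linalg.norm(matrix, axis=0)        # vector normalization
    v = m * weights                                    # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    closeness = d_neg / (d_pos + d_neg)                # 1 = closest to the ideal
    return np.argsort(-closeness), closeness


# Illustrative data: 4 household groups scored on 3 influencing factors.
scores = np.array([[7.0, 8.0, 6.5],    # urban, own house, high income
                   [6.0, 7.0, 6.0],    # rural, own house, high income
                   [4.0, 5.0, 4.5],    # urban, rented
                   [3.0, 4.0, 3.5]])   # rural, rented
rank, c = topsis(scores, weights=np.array([0.5, 0.3, 0.2]),
                 benefit=np.array([True, True, True]))
print(rank)  # group indices ordered from most to least willing to adopt

With equal benefit criteria and these illustrative weights, the closeness coefficient reproduces the kind of ordering the paper reports, with the urban, own-house, high-income group ranked first.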
''' Manage peering connections between Azure Virtual Networks. ''' from .... pyaz_utils import _call_az def create(name, remote_vnet, resource_group, vnet_name, allow_forwarded_traffic=None, allow_gateway_transit=None, allow_vnet_access=None, use_remote_gateways=None): ''' Create a virtual network peering connection. Required Parameters: - name -- The name of the VNet peering. - remote_vnet -- Resource ID or name of the remote VNet. - resource_group -- Name of resource group. You can configure the default group using `az configure --defaults group=<name>` - vnet_name -- The virtual network (VNet) name. Optional Parameters: - allow_forwarded_traffic -- Allows forwarded traffic from the local VNet to the remote VNet. - allow_gateway_transit -- Allows gateway link to be used in the remote VNet. - allow_vnet_access -- Allows access from the local VNet to the remote VNet. - use_remote_gateways -- Allows VNet to use the remote VNet's gateway. Remote VNet gateway must have --allow-gateway-transit enabled for remote peering. Only 1 peering can have this flag enabled. Cannot be set if the VNet already has a gateway. ''' return _call_az("az network vnet peering create", locals()) def show(name, resource_group, vnet_name): ''' Show details of a peering. Required Parameters: - name -- The name of the VNet peering. - resource_group -- Name of resource group. You can configure the default group using `az configure --defaults group=<name>` - vnet_name -- The virtual network (VNet) name. ''' return _call_az("az network vnet peering show", locals()) def list(resource_group, vnet_name): ''' List peerings. Required Parameters: - resource_group -- Name of resource group. You can configure the default group using `az configure --defaults group=<name>` - vnet_name -- The virtual network (VNet) name. ''' return _call_az("az network vnet peering list", locals()) def delete(name, resource_group, vnet_name): ''' Delete a peering. Required Parameters: - name -- The name of the VNet peering. - resource_group -- Name of resource group. You can configure the default group using `az configure --defaults group=<name>` - vnet_name -- The virtual network (VNet) name. ''' return _call_az("az network vnet peering delete", locals()) def update(name, resource_group, vnet_name, add=None, force_string=None, remove=None, set=None): ''' Update a peering. Required Parameters: - name -- The name of the VNet peering. - resource_group -- Name of resource group. You can configure the default group using `az configure --defaults group=<name>` - vnet_name -- The virtual network (VNet) name. Optional Parameters: - add -- Add an object to a list of objects by specifying a path and key value pairs. Example: --add property.listProperty <key=value, string or JSON string> - force_string -- When using 'set' or 'add', preserve string literals instead of attempting to convert to JSON. - remove -- Remove a property or an element from a list. Example: --remove property.list <indexToRemove> OR --remove propertyToRemove - set -- Update an object by specifying a property path and value to set. Example: --set property1.property2=<value> ''' return _call_az("az network vnet peering update", locals())
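Assuming this module sits on the import path implied by its relative imports (the package path below is a guess, not taken from the source), typical usage might look like:

# Hypothetical import path for the wrapper module above.
from pyaz.network.vnet import peering

# Peer hub-vnet to spoke-vnet and allow traffic in both directions.
result = peering.create(
    name="hub-to-spoke",
    remote_vnet="spoke-vnet",
    resource_group="my-rg",
    vnet_name="hub-vnet",
    allow_vnet_access=True,
    allow_forwarded_traffic=True,
)
print(peering.list(resource_group="my-rg", vnet_name="hub-vnet"))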
By early 1969, singer-guitarist Glen Campbell had conquered the worlds of both country and pop music, topping the album charts with his Wichita Lineman LP. He would also soon make his big-screen acting debut alongside film legend John Wayne in True Grit. But throughout the year, Campbell's greatest visibility was via his weekly CBS television series The Glen Campbell Goodtime Hour, which showcased not only his warm, folksy manner and numerous hit songs, but also his considerable guitar-playing skills. Campbell's guests on the series, which climbed into the Top 20 in the Nielsen ratings, naturally included such country stars as Merle Haggard, Roger Miller, Waylon Jennings and Johnny Cash (soon to get his own show on rival network ABC), as well as other fellow musicians, comedians and TV stars. Running a total of four seasons, with writers that included Rob Reiner and Steve Martin, the Goodtime Hour began to slip in the ratings in its last two years on the air, but Campbell himself remained popular and the show continued to draw major celebrities.

In October 1971, one of the biggest stars on television was 20-year-old David Cassidy, whose role as Keith Partridge on ABC's The Partridge Family had turned him into a teen idol practically overnight a year earlier. Cassidy and his co-stars, especially Partridge matriarch Shirley, played by Cassidy's real-life stepmother Shirley Jones, made the rounds on other talk and variety series as their show vaulted into the Top 20 and their records, especially the chart-topping "I Think I Love You," dominated the airwaves for the next two years.

On October 5th, Campbell's show welcomed Cassidy and his onscreen sister Susan Dey for an episode that paid tribute to the movies. Jones, an Academy Award-winning actress, was also featured. Three weeks later, Cassidy was back as musical guest, performing several Partridge Family tunes.

Cassidy, who died November 21st, just three months after Campbell's passing in early August, often acknowledged the complicated relationship he had with those hit songs, which were considered the height of bubblegum pop at the time. His own musical tastes were far more eclectic and rock-oriented, but in the above clip, a highlight from his second Goodtime Hour appearance, Cassidy joins the host for a harmony-rich medley of the Everly Brothers favorites "All I Have to Do Is Dream," "Bird Dog," "Wake Up Little Susie" and "Bye Bye Love." Although Campbell does the heavy lifting on guitar and lead vocals, Cassidy branches out from his Partridge role with great relish, even giving the classic tunes just the right amount of country twang.

That's about as close as any of the Partridge clan got to country cred, other than the very first episode of The Partridge Family in September 1970, when the fictional group was introduced on a Las Vegas stage by the very real Johnny Cash. The Goodtime Hour would last just five more months, wrapping in March 1972, with several episodes available from Shout Factory TV. The Partridge Family would take their last bus ride in the spring of 1974, although the show continues to air daily on Antenna TV. Campbell and Cassidy both continued to record and perform for the next several decades. While 2017 has sadly taken them both, the good times captured in early clips such as this are certainly worth remembering.
Teaching PSE mastery during, and after, the COVID-19 pandemic

After more than a year of online teaching resulting from the COVID-19 pandemic, it is time to take stock of the status quo in teaching practice in all things concerning process systems engineering (PSE), and to derive recommendations for the future to harness what we have experienced to improve the degree to which our students achieve mastery. This contribution presents the experiences and conclusions resulting from the first COVID-19 semester (spring 2020), and how the lessons learned were applied to the process design course taught in the second COVID-19 semester (winter 2020) to a class of 53 students. The paper concludes with general recommendations for fostering active learning by students in all PSE courses, whether taught online or face to face.

Introduction

Process Systems Engineering (PSE) aims to harness computational methods to improve the design, control, and operation of processing systems. Suppose a processing system leads to the product distribution shown in Fig. 1, where a value of 70% is the minimum quality required for saleable product. Clearly, a production line in which a third of all production is unsellable would be totally unacceptable. In the same way, if we were teaching a course for which the same plot shows the students' final exam grade distribution, we should be equally unhappy, as one third of our students would not have achieved course mastery.

Most of us have experienced over a year of "lockdown teaching" because of the COVID-19 pandemic, forcing us to move our teaching activity exclusively online. The objective of this paper is to take stock of the status quo after this period, and to derive recommendations for the future to harness what we have experienced to improve the degree to which our students achieve mastery in all things PSE.

The author has been teaching PSE courses to undergraduates at the Technion, the Israel Institute of Technology, for more than 30 years, evolving his teaching to active learning methods and, in the last seven years, to the "flipped class" model. In the spring of 2020, teaching became particularly challenging, since the course was taught online for the first time, with students having to collaborate remotely with each other also for the detailed design work.

In the rest of this introduction, we review the status of engineering education and explore the literature concerning active learning and its impact on online teaching and engineering education.

Graham conducted a study on the global state of the art in engineering undergraduate education on behalf of MIT's New Engineering Education Transformation (NEET) initiative, charged with developing and delivering a world-leading program of undergraduate engineering education at that university. The study is based on interviews with 178 individuals with in-depth knowledge and experience of world-leading engineering programs and identifies the top ten institutions that are acknowledged as "current leaders" and the top ten considered as "emerging leaders" in engineering education. Of those, only five institutions appear on both lists: Olin College, Technical University of Delft, University College London (UCL), National University of Singapore (NUS), and Chalmers University Sweden.
An important lesson from the study is the common denominator in the more successful programs, which feature chains of courses implementing student-centered active learning, rather than individual stand-alone efforts. The report also postulates future directions for the engineering education sector, identifying three potential trends: (a) the shift in the center of gravity of the world's leading engineering programs from high-income countries to the emerging 'powerhouses' in Asia and South America; (b) a move towards student-centered curricula and multidisciplinary learning; and (c) the emergence of a new generation of leaders in engineering education that deliver integrated student-centered curriculum at scale. One good example is UCL, where the first and second years of all engineering departments follow six cycles of learning, each culminating in an immersive and formative week of group project activity (Tsatse and Sorensen, 2021).

After three semesters of lockdown teaching, most of us are indeed teaching 100% online, by necessity. Lewin et al. present the results of a survey mapping PSE teaching perceptions and practices, which returned the positions of 82 academic lecturers from around the world, mostly with at least 10 years of experience in teaching PSE topics. Lewin et al. report an even split between those who teach in the traditional teacher-centered method (teacher talks - students listen) and those who apply student-centered, active learning in their classes, the latter mostly being those teaching process design rather than other PSE topics. At the author's university (the Technion, Israel Institute of Technology), most of the exercises/recitations are also delivered synchronously by teaching assistants (TAs), where they deliver additional lectures rather than use the meetings as an opportunity to activate students. Thus, for the most part, we are not using available technology to make learning more effective. Instead of moving at least some of the lecture materials online, requiring students to cover them as preparation, and then using at least part of the available staff-student contact time to foster active learning, we largely squander this opportunity. We are mostly lecturing, with our students mostly passive, and as is well established, passive students learn less. As sadly pointed out by Miller: "Lecturing is that mysterious process by means of which the contents of the notebook of the professor are transferred through the instrument of the fountain pen to the notebook of the student without passing through the mind of either."

Unfortunately, we discover how little some of our students have learned at the end of each course, when final exam distributions are often similar to that shown in Fig. 1. Of course, by then it is too late to fix the problem. And just as it was pointed out by that pioneer of active learning, Eric Mazur (Rimer, 2009): "Just as you can't become a marathon runner by watching marathons on TV, likewise for science, you have to go through the thought processes of doing science and not just watch your instructor do it." Another take on the same idea, as pointed out by Lewin and Barzilai, is: "Watching their teaching assistant demonstrating how typical exercises are solved is about as useful to students as going to the gym and watching how their gym instructor lifts weights for them."

Although Benjamin Bloom is most known for his taxonomy, he contributed much more.
For example, as postulated by Bloom, the degree to which students achieve mastery depends on four conditions:
- Clear definition of what constitutes mastery;
- Systematic, well-organized instruction, focused on student needs;
- Assistance for students when and where they experience difficulties;
- Provision of sufficient time for students to achieve mastery.

Two desirable key features follow from the spirit of Bloom's ideas: (a) One should support the acquisition of knowledge and skills mastery by creating opportunities for active learning. In this regard, consider the ICAP categorization of cognitive learning patterns proposed by Chi et al., which delineates the order of decreasing learning effectiveness, from Interactive, Constructive, Active, to Passive. (b) Learners should be encouraged to experiment, even if they make mistakes or even fail (Kapur, 2015). Learning is all about trying, failing, understanding why they have failed, trying again, and repeating these steps as necessary.

Opportunities for students to engage in active learning and experimentation require allocating sufficient staff-student contact time, time that in a conventional setting is taken up by lecturing. This reallocation can be realized by implementing a "flipped classroom." In the "flipped classroom," home and class activities are "flipped," that is:

(a) What used to be class activity, that is, lecturing by teachers, is moved to home activity to be completed by students in advance of class meetings with teachers. This home activity consists of a combination of pre-recorded lectures, readings, online quizzes, and other individual assignments.

(b) What used to be homework, that is, exercises, computational assignments, and some of the project work, is moved to class activity, to be performed individually or in groups by students, with lecturers and TAs present in mentor and guide roles.

Thus, the main justification to move to the flipped format is the desire to increase the proportion of the student-staff contact time in which students are actively learning rather than just listening to lectures (Crouch and Mazur, 2001; Felder, 1995; Felder and Brent, 2015).

In another important contribution, Bloom reports the modes of learning that improve outcomes, with the most significant obtained by 1-1 (teacher-student ratio) personal tutoring, which increases the degree of mastery as exhibited by exam grades up to two standard deviations higher than for students taught conventionally by a lecture-based approach. Clearly, personal tutoring is not a sustainable pedagogy, with a more typical teacher-student ratio being 1-30. In a course with that teacher-student ratio that is taught in a teacher-centered approach, the contact time between the teacher and the students is mostly utilized for lectures by the teacher, often with modest involvement of the students. In recitations, the assistant will often take the same approach. This means that in a teacher-centered approach, students are largely passive in most of the contact time available, with the students expected to take an active role mostly when tackling homework sets on their own. These deficiencies reduce the degree to which students acquire mastery in higher-level design and evaluation capabilities. In contrast, in a student-centered approach with the same teacher-student ratio, the contact time is focused on giving opportunities for students to become involved in class activities, with the teaching staff acting as mentors.
Amongst the activities are class quizzes leading to discussions, brainstorming, cooperative problem-solving, and student presentations. By nurturing student involvement, the teacher can better assess the degree of mastery being built up by the students. Student involvement is even more critical in the recitations, where the focus should be on giving students time to work problems for themselves. For students to learn, they need to be given opportunities to make mistakes, understand the reasons for the mistakes, and correct them. This takes time, and the more recitation time taken up by the TAs explaining their problem-solving strategies, the less time the students will have for their own efforts. Mentoring students' work should fill most of the recitation time, enabling staff to mentor and assess student capabilities. This formative assessment can only be ascertained if the teachers and assistants reduce the amount of time that they are lecturing in favor of providing time for active learning by the students. Since "flipping the classroom" inherently frees class time, it is one way to make this happen. In a recent study, Munir et al. presented results for the successful implementation of a flipped class incorporating cooperative learning in a small class of graduate students.

This paper is organized as follows. PSE mastery is best defined in terms of the instructional objectives of each course. Hence, Section 2 provides a clear statement of the learning outcomes for the three key PSE areas: numerical methods, process control, and process design. Next, in Sections 3 and 4, the flipped paradigm as applied at the Technion's Chemical Engineering Faculty is described, and then quantitative evidence is presented indicating that there is significant improvement in the outcomes obtained by students who engage with the course over those who do not. Next, Section 5 lists some of the challenges of implementing flipping online that were imposed by the pandemic, as well as the lessons learned on how to address them. Finally, Section 6 provides a road map to facilitate this change by employing the "flipped classroom," in which part or all of the materials that were previously lectured in class time are provided asynchronously as video-lectures, for the students to cover ahead of class meetings as preparation. The main message of this paper is the clear need to free class time to enable students to actively engage in their own learning.

Typical instructional objectives for PSE mastery

Most of us in the PSE community will agree about the importance of taking a systems approach in chemical engineering design and analysis instruction (Cameron and Lewin, 2009). Within the framework of PSE, this instruction would include at least courses in the central expertise areas of numerical methods, process control and process design. Curricula for these courses are best expressed as instructional objectives, which link learning objectives to learning outcomes - indeed, the course definitions are couched in terms of learning outcomes, as demonstrated for the PSE courses listed below. Because PSE largely deals with problem-solving, the most important and relevant levels of Bloom's taxonomy that students need to master are the highest: analysis, synthesis, and evaluation. A helpful way of teaching these materials is by making use of concept maps, which facilitate explaining the connection between the course components. An example of a concept map for a course on numerical methods is presented in Fig. 2.
The key PSE concepts and their instructional objectives are listed next.

Numerical methods

This course ideally instructs the students in the basic building blocks of numerical methods, before continuing to provide tools for their practical application. On completion of such a course, students are expected to select the appropriate numerical methods for a given problem, implement them, and interpret the obtained results. Typical course outcomes are as follows:

Building blocks:
- Efficient solution of linear systems
- Finite difference approximations (derivatives, interpolation, integration)
- Efficient solution of nonlinear systems
- Mastery of unconstrained (gradient methods) and constrained minimization (linear programming)

Applications:
- Linear and nonlinear regression capabilities
- Efficient solution of ordinary differential equations, initial-value partial differential equations, and boundary value problems
- Integrated problem-solving capabilities

A minimal code sketch of one of these building blocks is given at the end of this section.

Process control

This course provides the tools to develop first-principles and empirical process models, and then, using the derived models, to design simple control systems to meet desired closed-loop performance. Typical course outcomes are as follows:

Process modeling:
- First-principles modeling capability
- Ability to generate state-space and transfer function models
- Block diagram manipulation capability
- Ability to analyze the transient response of linear systems

Process control synthesis:
- Frequency domain analysis capabilities
- Stability analysis capability
- Capability to synthesize control systems to meet response specifications using the root locus method
- Knowing how to tune PID controllers effectively
- Capability to design cascade and feedforward control systems

Process design

The capstone design course represents the acid test of students' ability to apply the engineering tools they have acquired from the core courses studied previously, with typical desired outcomes being as follows:
- Capability to carry out plant costing and profitability analysis
- Separation sequence synthesis capability for both zeotropic and azeotropic systems
- Capability to perform maximum energy recovery (MER) targeting and heat exchanger network (HEN) synthesis
- Plant-wide control system configuration capability
- Capability to perform a qualitative hazard and operability study (HAZOP) and to carry out a quantitative hazard analysis (HAZAN)
- Proven cooperative design project capability, demonstrating both team and individual skills
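As a concrete illustration of one of the building blocks listed under the numerical methods outcomes above, the following is a minimal sketch of Newton's method for a nonlinear system. The example equations are assumed purely for illustration; a course-grade implementation would add safeguards such as step damping and singularity checks.

```python
# Minimal sketch of Newton's method for a nonlinear system f(x) = 0,
# illustrating the "efficient solution of nonlinear systems" outcome.
# The 2x2 example system below is assumed for illustration only.
import numpy as np

def f(x):
    return np.array([x[0]**2 + x[1]**2 - 4.0,      # a circle of radius 2
                     np.exp(x[0]) + x[1] - 1.0])   # an exponential curve

def jac(x):
    return np.array([[2.0 * x[0], 2.0 * x[1]],
                     [np.exp(x[0]), 1.0]])

x = np.array([1.0, -1.0])                  # initial guess
for it in range(50):
    dx = np.linalg.solve(jac(x), -f(x))    # one linear solve per iteration
    x += dx
    if np.linalg.norm(dx) < 1e-10:
        break
print(f"Converged in {it + 1} iterations to x = {x}")
```

The structure also makes a point the concept map conveys explicit: the nonlinear solver builds directly on the efficient solution of linear systems.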
As an example of a typical design project, consider the best team effort submitted for the 2020/21 challenge, which involved the design of a process for the manufacture of 90,000 tons/year of DME from a feedstock of methanol, presented in Fig. 3, which achieves a venture profit (VP) of $6.8 million/year. In the reaction section of the plant (Fig. 3a), methanol feed is mixed with recycled methanol from the separation section, heated in E-100 with intermediate-pressure steam to partially vaporize the methanol, and then heated using hot reactor effluent in E-101 to superheat the methanol vapor fed to the reactor to its optimal temperature. The reactor methanol conversion is below equilibrium, only 74%, a result of plant-wide optimization to maximize the VP. Note that about half of the energy required to be transferred to the methanol feed is recovered from the hot reactor effluent, leaving the rest to be used to power the reboilers of the separation system. Again, this is a consequence of plant-wide optimization.

Moving on to the separations section of the plant (Fig. 3b), we note that the methanol recycle purity is only 84%, and that heat recovered from the hot reactor effluent provides most of the reboiler duties for both columns in the separation system (87% of the reboiler duty required for the first tower and 100% of that required by the second tower, respectively). Both these features are a consequence of plant-wide optimization to maximize the VP. This example illustrates the kind of mastery expected from our students.

Fig. 3. The best student team solution for a process to manufacture DME from methanol.

The flipped class paradigm as implemented at the Technion

Our implementation of the flipped classroom involves the following sequence of activities, repeated in every week of each course (see Fig. 4):

a. Online Materials. Produced by converting lectures to pre-prepared online lessons composed of 5-15 min video clips interspersed with online activities. Students are expected to cover these materials on their own as homework in advance of each week of activity and are given course credit for doing so. Benefits: Students learn the basic materials covered each week at their own pace, and their learning is reinforced by addressing the online activities as they follow the materials. The online activities can be tailored to achieve specific objectives in each stage of the course. These can be: (a) Regular quizzes: quiz questions posed as multiple-choice, matching, or numerical computations; (b) "Your turn" extended calculations and small-scale designs: a problem for the student to tackle independently is defined at the end of a video clip, followed by a clip presenting a possible solution, which students can compare with their own; (c) Preparing for brainstorming: a video clip can present a problem that requires group effort to address, for which students are requested to collect information, write down ideas, and bring their results to class for discussion in groups. Note that all these activities increase the students' stake in their learning and prepare them to make better use of the next resource, the class meeting.

b. Class Meetings. Moving from teacher-centered lecturing to student-centered meetings in the classroom. A typical class meeting combines quizzes, class discussions, and open-ended problem solving, with the focus on keeping the students active. Benefits: Giving students the opportunity to prepare ahead increases their effective participation in class and impacts positively on the degree to which they learn and master the application of what they have learned. The specific benefits of each type of activity that could be utilized are as follows: (a) Quizzes for comprehension: these could be clicker questions, to test comprehension of concepts learned at home or to reinforce previous, related materials; the lecturer can check the level of understanding exhibited by all the students in real time. (b) Quizzes to generate discussion: when the questions raised may have more than one solution, it pays dividends to use them to generate class discussion; learning from incorrect answers is often more valuable than focusing only on correct ones. (c) Open-ended problem solving: this is one of the main reasons for having class meetings. The focus should be on getting students to participate in the development of solutions. For particularly complex problems, dividing the class into separate workgroups may be beneficial.
For online synchronous class meetings on Zoom, for example, it is recommended that classes be divided into breakout rooms.

c. Active Tutorials. For students to master course content, they need to apply themselves to independently working problem sets covering the curriculum. The job of the teaching assistant in this setting is to be an enabler of student efforts rather than a demonstrator of solutions. Benefits: In active tutorials, students working in teams solve the classwork (previously referred to as homework) in class time. This ensures that: (a) all students who participate in the sessions are actively involved in working problems; (b) assistance can be provided by staff and by students helping each other; (c) students, assistants, and the lecturer all receive feedback in a timely fashion (in real time).

The most important take-away from implementing this sequence is that at every phase of the week's activities, students optimize their time-investment in the course: at home, they use their time to build their basic knowledge, whereas in their contact time with staff they hone this knowledge to higher levels by application and practice. These improvements are difficult to achieve in a conventional lecture-based approach for several reasons: (a) if students come to a lecture unprepared, they will find it difficult to simultaneously absorb new material and participate actively in meaningful Q&A; (b) lecturers who plan to cover a given set of materials in class may be left with insufficient time to allow for more than modest Q&A. The home preparation required of the students in the flipped class paradigm releases class time for work with the students at higher levels.

For example, consider the three-week segment of the process design course covering heat exchanger network synthesis, detailed in Table 1. In the seventh week of the course, students are introduced to MER targeting using the temperature interval (TI) method and basic HEN design rules, with typical class exercises involving MER targeting and HEN designs for relatively simple systems involving two hot and two cold streams. By the eighth week, the students will have learned to use more advanced techniques, such as stream splitting, dealing with threshold problems, and reducing the number of heat exchangers to the minimum necessary, making it possible for them to tackle more complex problems involving four or more hot streams transferring heat to four or more cold streams. By the ninth week, they will have learned how to reliably extract stream data from process flowsheets, and how to integrate reactors and distillation columns into flowsheets to minimize total utility demands using the grand composite curve (GCC), thus empowering them with the ability to practically apply the HEN synthesis procedure to complete flowsheets.

Some readers may be surprised by the focus on pinch methods for HEN design instruction rather than the use of MILP/MINLP approaches. It is certainly true that HEN synthesis can be carried out efficiently for small- and medium-sized problems using MILP/MINLP. For a graduate-level course, attended by students who have completed a comprehensive undergraduate chemical engineering degree and are therefore grounded in engineering principles and insights, the use of optimization methods for HEN synthesis is indeed appropriate. The main advantage of teaching pinch design methods to undergraduates is the physical insight they gain as a consequence, insight that is lacking if one simply formulates the design problem as a linear/nonlinear program. It is therefore recommended that those teaching undergraduate process design reconsider a focus on optimization methods alone for HEN synthesis.
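To make concrete the kind of calculation that students automate once they master the week-7 material, the following is a minimal sketch of MER targeting by the TI (problem table) method. The four-stream data are assumed for illustration; for these data the cascade yields Q_H,min = 20 kW, Q_C,min = 60 kW, and a pinch at a shifted temperature of 85 °C.

```python
# Minimal sketch: MER targeting with the temperature-interval (problem table)
# method. Stream data are an assumed illustrative four-stream example.
DT_MIN = 10.0  # minimum approach temperature [C]

# (supply T, target T, CP [kW/C])
hot_streams = [(170.0, 60.0, 3.0), (150.0, 30.0, 1.5)]
cold_streams = [(20.0, 135.0, 2.0), (80.0, 140.0, 4.0)]

# Shift hot streams down and cold streams up by DT_MIN/2; cold streams carry
# a negative CP so that they appear as heat sinks in the interval balance.
streams = [(ts - DT_MIN / 2, tt - DT_MIN / 2, cp) for ts, tt, cp in hot_streams]
streams += [(ts + DT_MIN / 2, tt + DT_MIN / 2, -cp) for ts, tt, cp in cold_streams]

# Interval boundaries: all shifted temperatures, in descending order
bounds = sorted({t for ts, tt, _ in streams for t in (ts, tt)}, reverse=True)

def interval_surplus(t_hi, t_lo):
    """Net heat surplus (+) or deficit (-) of the interval [t_lo, t_hi]."""
    q = 0.0
    for ts, tt, cp in streams:
        overlap = max(0.0, min(max(ts, tt), t_hi) - max(min(ts, tt), t_lo))
        q += cp * overlap
    return q

surplus = [interval_surplus(hi, lo) for hi, lo in zip(bounds, bounds[1:])]

# Heat cascade: Q_H,min is the heating that makes all cascaded flows >= 0
cascade = [0.0]
for q in surplus:
    cascade.append(cascade[-1] + q)
q_hot = -min(cascade)               # minimum hot utility target
q_cold = cascade[-1] + q_hot        # minimum cold utility target
pinch = bounds[cascade.index(min(cascade))]
print(f"Q_H,min = {q_hot:.0f} kW, Q_C,min = {q_cold:.0f} kW, "
      f"pinch at {pinch:.0f} C (shifted)")
```

Working through such a cascade by hand first, and only then automating it, is precisely the source of the physical insight referred to above.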
All these topics were taught in the same allocated time before flipping was instigated in the process design course, but after introducing flipping, a higher level of mastery can be achieved by students because the freed lecture time is now used for practical application and practice. Whether this potential benefit results in improved summative outcomes is determined by the final examinations, and we will get to that later.

Table 1. Subjects and concepts taught and exercised in the three-week sequence between weeks 7 and 9 in the course that covers HEN design (covering, among other topics: applying the technology to real process streams; HEN design with multiple cold and hot utilities aided by the GCC; and heat integration of distillation column trains).

While moving to active learning has benefits, it is by no means a panacea, and it is subject to some negative repercussions, as pointed out by Felder, most of which can be offset if the instructor is open-minded and responsive to the concerns of students:

a. Dealing with student resistance. Flipping may be new to the students, so it is important to set the stage in the first meeting, which should not be used to cover technical material but rather to describe the teaching methodology and its benefits, making clear to students what is expected of them and how they can make the best use of the time they are willing to invest in the course. It is basically "Flipping 101," if you will.

b. Providing value-added content in class meetings. A teacher of a course driven by active learning is a mentor and coach rather than just a transmitter of information. Investing class time in coaching is an important and worthwhile activity. Introducing mini clinics in class meetings constitutes productive use of contact time.

c. Maintaining focus on student needs. One should listen to the students and be sympathetic to their perceived difficulties. This does not mean that standards need to be compromised, but rather that one should use the communication as an additional way to teach the students to take responsibility for their own learning.

d. Maintaining the right attitude as instructors. One should always be patient with one's students, particularly when some of them take longer than expected to achieve the learning objectives. Eventually, most of them will achieve them, especially if one does not relax those objectives.

e. Remaining optimistic, tempered by realism. One should aim high but not expect 100% success. There will always be a hopefully small group of die-hards who resist active methods, and a hopefully small percentage of students who, despite best efforts by the course staff, do not achieve mastery. Often the two groups share many members.

Thus, a fair question is whether all the effort entailed in implementing active methods is worth the investment. The costs involved are obvious: flipping implies the preparation of video clips based on the lecture materials, as well as formative activities, usually quiz questions, to accompany each clip, which may require a considerable one-time investment of effort on the part of the instructor. Furthermore, not all students take kindly to its implementation. Does it make that much difference to the learning outcomes? Is it worth it to flip the class?
As in all flipped courses taught by the author, students of the capstone process design course are given credit (the so-called "flipping credit," in this case amounting to 10% of the final course grade) for completing class preparation assignments in advance of the class meeting. Each week, the class preparation assignment is to watch the weekly lesson's video segments and complete the quizzes. Until the 2020/21 academic year, this grade depended only on the quiz grade and not on the time taken to watch the videos. As students are given four tries on each question, and most questions are multiple-choice with usually four possible answers, it is expected that most students should score 100% on these assignments, even if only by persistence. In fact, students can learn effectively by making errors, realizing the reason for the errors (assisted by preprogrammed responses), correcting them, and arriving at the correct answers. Since quiz completion times are also logged by the learning management system (LMS) used at the Technion (Moodle®), it was noted that some students complete the quizzes in so short a time that in some cases it would be insufficient even to read the quiz questions. After the experiences of the first lockdown semester of Spring 2020, it was decided to change the flipping credit award policy to the quiz grade conditional on "sufficient time" spent viewing the lesson video segments.

A measure of a student's viewing time for each online lesson, the Learning Engagement (LE), is defined as:

LE = (a student's viewing time) / (total viewing time of all the segments of the same lesson).

An associated measure is the Video Engagement (VE), defined as:

VE = (number of video clip accesses by a student) / (total video clips in all lesson segments).

It is of note that LE and VE are correlated: invariably, values of LE greater than unity are accompanied by values of VE over unity, meaning multiple views of portions of the same video clips. Granted, students could turn on each video clip and just leave it running unattended. However, it is unlikely that this is common practice, for several reasons: (a) students would have to be extremely uninterested in learning to click on 7-12 video clips of 5-10 min each just to get minor credit; (b) since the average number of video views per lesson is greater than the number of videos per lesson, why would the average student click the same videos twice just to get credit? Most crucially, (c) if the practice of not paying attention to the video lessons were extensive, how would the correlation between LE and exam outcomes be explained, noting that most of the exam failures were of students with low LE scores?

Table 2 summarizes LE data by course week, comparing viewing statistics for the 2019/20 academic year with those of the subsequent year, reporting values for N, the number of students who watched the lesson videos ahead of the class meeting; the percentage of the class who did so (%Eng); the LE mean and standard deviation; and finally, the percentages of the total class (N_tot) with LE < 0.9, referred to as low-engagers, and LE > 1.1, referred to as high-engagers. Fig. 5 compares the class percentages of high- and low-engagers for the two consecutive academic years, 2019/20 and 2020/21, using the data from Table 2. The data in Table 2 and Fig. 5 highlight the stark difference in student engagement in the two years under comparison.
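Before examining those differences in detail, note that LE and VE as defined above are straightforward to compute from raw LMS viewing logs. The following is a minimal sketch assuming a hypothetical log layout; a real Moodle export would need its own parsing, and the 0.9/1.1 thresholds are those used in Table 2.

```python
# Minimal sketch: computing Learning Engagement (LE) and Video Engagement (VE)
# per student for one lesson, from hypothetical LMS log rows. The field layout
# is an assumption, not Moodle's actual export schema.
from collections import defaultdict

# (student_id, clip_id, seconds_viewed) - hypothetical rows for one lesson
log = [
    ("s01", "clip1", 310), ("s01", "clip2", 295), ("s01", "clip1", 120),
    ("s02", "clip1", 45),
]
clip_lengths = {"clip1": 300, "clip2": 300, "clip3": 420}  # seconds, assumed

total_time = sum(clip_lengths.values())
n_clips = len(clip_lengths)

seconds = defaultdict(float)   # total viewing time per student
accesses = defaultdict(int)    # clip accesses per student (repeats count)
for student, clip, secs in log:
    seconds[student] += secs
    accesses[student] += 1

for student in sorted(seconds):
    le = seconds[student] / total_time   # LE > 1 implies repeated viewing
    ve = accesses[student] / n_clips
    band = "low" if le < 0.9 else ("high" if le > 1.1 else "mid")
    print(f"{student}: LE = {le:.2f}, VE = {ve:.2f} ({band}-engager)")
```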
In 2019/20, when the flipping credit did not depend on lesson viewing time, the percentage of enrolled students who viewed the pre-recorded videos in advance of class meetings varied from 41 to 88% (77% on average), with an average of 57% low-engagers. In contrast, in 2020/21, when students were aware that the flipping credit depended on viewing the online lessons, an average of 97% of the class prepared in advance of class meetings in some fashion, with the average percentage of low-engagers dropping to only 13%. It is interesting to note that in Week 2, the first week in which online viewing of lessons was required, the level of low-engagers was 30% of the class, much higher than average. All of the low-engagers were contacted to remind them of the rules, which had an immediate effect in reducing their number over the remainder of the course. The average percentage of high-engagers in 2019/20 was only 16%, peaking at 25%, whereas in 2020/21 the average was 21%, peaking at 40%. Looking at the plots for the two years shown in Fig. 5, one observes a large drop from 2019/20 to 2020/21 in the proportion of the class who were low-engagers, but only a modest increase in the proportion of high-engagers. It is clearly advantageous to require a minimum attention time for video viewing, but more should be done to encourage students to engage seriously while viewing online lessons.

But how does lesson engagement affect the final exam grades? Fig. 6 shows the final exam grade distributions for the capstone design course in 2020/21, in which the distribution of the entire class is compared with the distributions for the 50% most engaged students (i.e., those with the top 50% of average LE values) and the 50% least engaged students (the rest of the class). Several things are clear:

(a) As shown in Fig. 6(a), the class average exam grade is 69.3%, a little on the low side, explained by the fact that 26% of the class (13 out of 50 students) failed the exam.

(b) As shown in Fig. 6(b), the exam grade distribution can be analyzed using the approach of Lewin (2021a), in which the parameters of a bimodal distribution model, comprising a weighted sum of two normal distributions, are fitted to the exam grade distribution, yielding estimates for the means and standard deviations of the high- and low-performing subpopulations (μ1, σ1, μ2, and σ2), as well as the proportion of high-performers (p); a minimal computational sketch is given below. In this case, p is estimated as 76%, which is consistent with the actual failure rate of 26%.

(c) Separate distributions of the exam grades of the top 50% and bottom 50% lesson engagers are shown in Fig. 6(c) and 6(d), respectively; the average grades for the two populations are 74.6% and 63.6%, respectively. The Z-statistic for these two distributions is 2.2, indicating a statistically significant improvement of the high-engagers over the low-engagers, by approximately one standard deviation. This is in line with Bloom's prediction that active learning improves exam grades over those obtained by passive learners by the same margin. It is also noted that of the 13 students who failed the exam, nine were low-engagers. This indicates that lesson engagement significantly affects exam performance, and it is the justification for monitoring LE and continuously encouraging the low-engagers to make more effort to come prepared for class meetings. Clearly, success in the final exam does not depend on LE alone, but the two are correlated.
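The bimodal analysis of Fig. 6(b) and the Z-statistic comparison can be reproduced with standard tools. The following minimal sketch uses synthetic grades; the actual fitting procedure in Lewin (2021a) may differ in detail.

```python
# Minimal sketch: fitting a two-component normal mixture to exam grades and
# computing a Z-statistic between two engagement groups. All data synthetic.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
grades = np.concatenate([rng.normal(78, 8, 38),    # high-performing subpopulation
                         rng.normal(45, 10, 12)])  # low-performing subpopulation

gm = GaussianMixture(n_components=2, random_state=0).fit(grades.reshape(-1, 1))
p = gm.weights_.max()                        # proportion of the larger mode
mu = gm.means_.ravel()
sigma = np.sqrt(gm.covariances_.ravel())
print(f"p = {p:.2f}, means = {np.round(mu, 1)}, std devs = {np.round(sigma, 1)}")

# Z-statistic comparing two groups (here, stand-ins for top/bottom 50% LE)
top, bottom = grades[:25], grades[25:]
z = (top.mean() - bottom.mean()) / np.sqrt(
    top.var(ddof=1) / len(top) + bottom.var(ddof=1) / len(bottom))
print(f"Z = {z:.2f}")
```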
Given these outcomes, which may be typical of many PSE courses, it is appropriate to define the responsibilities of the stakeholders in the classroom. Educators have the responsibility of preparing and presenting well-organized course materials, and it is the responsibility of the students to apply themselves as adults, to learn and to master the course subject. That distinction is clear. The educator has other responsibilities, though, for example to excite, encourage, and motivate the students to better efforts. And we do have a product as teachers, whether we agree about it or not: our products are the students we graduate, who should be a credit to themselves and their alma mater. If a student does not make the grade, it is also our duty not to pass them. At the Technion, each student can sit the final exam of each course twice. The grade distribution presented in Fig. 6 was for the first exam of the design course in 2020/21. The total failure rate of the class after the second exam, which had the same degree of difficulty as the first, was much lower: just two students (4%).

Finally, a word on the effect of other factors on the course outcomes is in order. Lewin and Barzilai, using multiple linear regression, studied the effect on exam grades of in-course factors, such as lesson engagement (LE), as defined previously, and active tutorial attendance, and of out-of-course factors, namely the students' GPA, for both the process design and process control courses as taught at the Technion. For both courses, the most significant effect was that of the GPA, indicating that the students' general preparedness is the most important factor. The second most important effect was attendance at the active tutorial. In addition, for the process control course, LE also had an effect on the outcomes, although its effect was found to be statistically insignificant for the design course.

Online challenges and how to address them

The spring of 2020, with the resulting COVID-19 lockdowns, introduced additional challenges to effective teaching. Several problems surfaced, associated with the need for social distancing and online lessons (Chhetri, 2020; Ghasem and Ghannam, 2021). Here is an itemized list of problems, together with the approaches that have been found helpful in overcoming them:

a. Undesirable online behavior of students, such as students turning off cameras and microphones, or passive and/or low student attendance. Fixes: (a) Request that students turn on cameras with microphones on mute, turning on microphones to participate; a bright and positive attitude by the lecturer will go far in securing the cooperation of the students. (b) What worked outstandingly well was to invite all the students to an online "BYOB (bring your own bottle) party" before the start of classes, to get to know them and to use the informal meeting as a chance to share expectations. After that, the ice was broken, and most of the students were cooperative in the online Zoom sessions. Attendance was high (usually over 70% of the students), with many students participating in class discussion.

b. Undesirable online behavior of teachers, such as the teacher talking for most of the class time, teachers demonstrating solutions of problems with little involvement of students, or allowing a few students to dominate the in-class discussions. Fixes: (a) Pause the presentation to give students a chance to ask questions; respond to the questions and check that the response fully addresses them. (b) In-class problem solving should involve the students.
Do not provide full solutions up-front, but rather get students to contribute suggestions and partial solution steps through brainstorming. (c) Use online quizzes to promote class discussion, with all students participating; use polling software to involve the whole class, and use the class answers, especially the wrong ones, to generate discussion.

c. Too many students (15-25% or more if uncontrolled; less than 15% if monitored and feedback corrections are applied) not preparing for the synchronous meetings by studying the online lessons in advance. This rests on the assumption that all enrolled students are willing to participate and are interested in the course; in reality, many of them are enrolled in compulsory PSE courses, or even in the chemical engineering program itself, not by personal choice. Whatever the reason, for them to succeed in passing the PSE courses, engagement needs to be made clear as a requirement to all enrolled students. We should be concerned with the consequences of non-engagement rather than its reasons, and with how we as teachers can encourage engagement. Fix: You cannot afford to lose 15-25% of the class! Not taking steps to bring these non-performers back into the fold can mean a large proportion of under-performers who do not even pass the course. Efforts need to be made to track the non-collaborators, reaching out to them from the start of the course and bringing them back in. This is surprisingly easy to do if the teacher takes a supportive rather than critical stance in the outreach message. Many of the otherwise uncooperative students will respond well to a teacher's outreach, especially if the communication is positive and focused on how much the teacher cares about their success. If the percentage of students truly on board is kept high during the entire course, the whole class will benefit, and the outcomes at the end of the course will reflect this (Lewin, 2021b).

Most of these suggested fixes will also work in a regular, face-to-face (F2F) setting.

A flipped roadmap for the future

The author has had long and successful experience with the effective implementation of the flipped classroom in the teaching of both process control and process design, now for seven consecutive years. There is evidence for improved outcomes in process design instruction resulting from the implementation of active methods (Lewin and Barzilai, 2021). In the year of the pandemic, and the consequently imposed lockdowns, the flipped classroom was relatively easily adapted to online learning (Lewin, 2021b). The experiences gained in the second semester of the pandemic, with a relatively large group of students who took the process design course, have led to a clear conclusion: a correctly implemented flipped paradigm is highly effective, particularly for students who take an active role in their learning. This implementation involves the following eight key components:

1. Have a game plan. Balance the expectations of the lecturers, teaching assistants, and students, as all three stakeholder groups need to be on board. It is recommended that a lecturer with no previous experience in flipping first try the paradigm on a single week of class, selecting the week that is the most challenging to cover fully using a conventional approach. In addition to preparing the online lesson as homework, the class meeting and the active tutorial should be included in this trial.

2. Preparation of online lessons.
Define instructional objectives for each lesson. Divide the lecture into video segments of between 5 and 15 min duration, ensuring that the content is complete (e.g., cover all steps in a mathematical development, remembering that, unlike in a regular lecture, students are not able to ask questions if any step is unclear to them). Write and use a script when recording the video segments, and practice the delivery before recording. Audio quality is more critical than video quality.

3. Preparation of effective quiz questions. Follow each video segment with a quiz question or activity to test students' understanding. Write useful explanations for all answers (especially important for the wrong ones) and allow students to retry the questions they get wrong. This is not a test; it is part of their learning!

4. Lesson assembly and testing. Upload questions and videos and generate a Moodle lesson (or similar). The teacher should test the flow and system response first, and have an assistant perform an independent check.

5. Require students to complete the lessons before class meetings. Students should be given credit for this crucial preparatory step, with the credit awarded conditional on their adequate coverage of the material. Continuously follow up on students who do not prepare adequately, starting from the first week of the semester.

6. Plan for a useful class meeting. Prepare additional materials and do not repeat what the students have already learned online. The following is a partial list of activities that have been found useful: (a) short quiz questions, used to foster class discussion; (b) open-ended exam-style questions, to be solved with class participation; (c) project/design work, executed in breakout rooms; to make this activity effective, it is important to plan sufficient time for the breakout activities and to schedule a summarizing discussion in class when the students return; (d) short student presentations.

7. Schedule an active tutorial. Schedule sufficient time, as this activity largely replaces what used to be "homework." Allow time to discuss solution strategies in class. Divide the class into small work groups, using breakout rooms if online, or appropriate seating arrangements if F2F. Make sure that the question levels in each week's problem set span from easy to difficult (exam level), and make solutions available online. It is unreasonable to expect students to master exam-level questions that integrate course topics in the final exam without giving them the opportunity to practice on similar questions for themselves in the active tutorials during the semester.

8. Follow up on every component. All three steps of the flipping paradigm are critical to success, and all of them can be continuously improved. For the online lesson: were there any problematic video segments, were any quiz questions problematic or particularly useful, and should more questions be added? For the class meeting: how many students attended, were enough of them active, and were the planned activities suitable? For the active tutorial: how many students attended, and how many of them were actively engaged and completed the assignments?

The online version of our flipped classroom implementation assumes that all students have internet access of sufficient bandwidth, especially when it comes to class meetings and active tutorials, and this can indeed be an obstacle to expanding the use of the method.
Internet penetration is not equal around the world: penetration rates can be as high as 96%, as in the USA and South Korea, and as low as 65%, as in Mexico. However, the prevalence of active teaching methods such as flipping does not necessarily rely on penetration rates as high as those in the USA. NUS in Singapore, for example, is a world leader in the implementation of active teaching, even though the internet penetration rate there is only 88%.

Conclusions

This paper presents the case for better utilizing the familiarity with online technology to motivate PSE educators to make the switch to employing active learning in the classroom, as leverage for improving the degree of mastery achieved by more students. Some limitations of this presentation should be noted. This paper has focused on the teaching of PSE subjects, which is within the scope of the author's experience. While there is no reason why active methods should not be applied throughout the chemical engineering curriculum, and indeed in other disciplines too, this is beyond the scope of the paper's stated intentions. The paper has also not addressed many important social and emotional aspects of student life on campus, which were disturbed in many ways by the outcomes of the pandemic. These issues, too, are largely outside the scope of the paper's focus, given that the main objective was to suggest suitable teaching approaches for the future.

Long experience with the flipped-class approach indicates that engagement with the materials throughout the semester improves the students' level of confidence in their mastery of the subjects. These observations could explain the improved performance in the final exams of the process design course since adopting active learning and flipping (Lewin and Barzilai, 2021). The encouraging outcomes suggest that this format can be taught equally well both online and F2F, and that active learning methods achieve better results in both cases. Hopefully, these findings and recommendations will encourage others in the PSE community to move to active learning methods.
<reponame>intlandsupport/offline-client-app /** * Copyright 2020 Intland Software GmbH * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are met: * * 1. Redistributions of source code must retain the above copyright notice, this * list of conditions and the following disclaimer. * * 2. Redistributions in binary form must reproduce the above copyright notice, * this list of conditions and the following disclaimer in the documentation * and/or other materials provided with the distribution. * * 3. Neither the name of the copyright holder nor the names of its * contributors may be used to endorse or promote products derived from * this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE * DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, * OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ // This file is required by karma.conf.js and loads recursively all the .spec and framework files import 'zone.js/dist/zone-testing'; import {getTestBed} from '@angular/core/testing'; import { BrowserDynamicTestingModule, platformBrowserDynamicTesting } from '@angular/platform-browser-dynamic/testing'; declare const require: any; // First, initialize the Angular testing environment. getTestBed().initTestEnvironment( BrowserDynamicTestingModule, platformBrowserDynamicTesting() ); // Then we find all the tests. const context = require.context('./', true, /\.spec\.ts$/); // And load the modules. context.keys().map(context);
import { IconBrandFacebook, IconBrandGithub, IconBrandLinkedin, IconBrandTelegram, IconBrandTwitter, IconExternalLink, IconMail, IconUpload } from '@tabler/icons'; import { SiCss3, SiFirebase, SiHtml5, SiJavascript, SiLaravel, SiMongodb, SiNextdotjs, SiNodedotjs, SiPhp, SiReact, SiSass, SiStyledcomponents, SiTypescript, SiVite } from 'react-icons/si'; const IconMapper = ({ name, className }: { name: string; className?: string; }) => { switch (name) { case 'github': return <IconBrandGithub className={className}/>; case 'linkedin': return <IconBrandLinkedin className={className}/>; case 'twitter': return <IconBrandTwitter className={className}/>; case 'mail': return <IconMail className={className}/>; case 'telegram': return <IconBrandTelegram className={className}/>; case 'facebook': return <IconBrandFacebook className={className}/>; case 'external': return <IconExternalLink className={className}/>; case 'javascript': return <SiJavascript className={className}/>; case 'typescript': return <SiTypescript className={className}/>; case 'react': case 'reactjs': return <SiReact className={className}/>; case 'node': return <SiNodedotjs className={className}/>; case 'php': return <SiPhp className={className}/>; case 'sass': case 'scss': return <SiSass className={className}/>; case 'styled-components': case 'styled components': return <SiStyledcomponents className={className}/>; case 'vite': return <SiVite className={className}/>; case 'firebase': return <SiFirebase className={className}/>; case 'mongodb': return <SiMongodb className={className}/>; case 'next': case 'nextjs': return <SiNextdotjs className={className}/>; case 'laravel': return <SiLaravel className={className}/>; case 'css': case 'css3': return <SiCss3 className={className}/>; case 'html': case 'html5': return <SiHtml5 className={className}/>; case 'upload': return <IconUpload className={className}/>; default: throw new Error('Unknown icon name'); } }; export default IconMapper;
<gh_stars>100-1000 package cn.ztuo.bitrade; import org.junit.Test; import org.junit.runner.RunWith; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.boot.test.context.SpringBootTest; import org.springframework.test.context.junit4.SpringRunner; import cn.ztuo.bitrade.dao.AdvertiseDao; import cn.ztuo.bitrade.event.MemberEvent; import cn.ztuo.bitrade.service.AdvertiseService; import cn.ztuo.bitrade.service.OtcCoinService; @RunWith(SpringRunner.class) @SpringBootTest public class ApiApplicationTests { @Autowired private OtcCoinService otcCoinService; @Autowired private AdvertiseService advertiseService; @Autowired private AdvertiseDao advertiseDao; //@Autowired //private JavaMailSender javaMailSender; @Autowired private MemberEvent memberEvent; @Test public void testConfig() { System.out.print(advertiseService.findOne(1L).getCreateTime()); } /** * 测试锁 */ /* @Test public void testLock() throws InterruptedException { Long id = 1L; Advertise advertise = advertiseService.findOne(id); advertise.setCountry("美国"); new Thread( () -> { Advertise advertise1 = advertiseService.findOne(id); advertise1.setCountry("伊拉克"); advertise1 = advertiseDao.saveAndFlush(advertise1); System.out.println("ad1 " + advertise1); }).start(); Thread.sleep(5000); advertise = advertiseDao.saveAndFlush(advertise); System.out.println(advertise); Thread.sleep(20000); }*/ /** * 测试发送邮件 */ /* @Test public void testSendEmail() throws Exception { MimeMessage mimeMessage = javaMailSender.createMimeMessage(); MimeMessageHelper helper = new MimeMessageHelper(mimeMessage, true); //基本设置. helper.setFrom("<EMAIL>");//发送者. helper.setTo("<EMAIL>");//接收者. helper.setSubject("会员注册成功(某某网站平台)");//邮件主题. Map<String, Object> model = new HashMap<>(); model.put("username", "张金伟"); Configuration cfg = new Configuration(Configuration.VERSION_2_3_26); cfg.setClassForTemplateLoading(this.getClass(), "/templates"); Template template = cfg.getTemplate("activateEmail.ftl"); String html = FreeMarkerTemplateUtils.processTemplateIntoString(template, model); helper.setText(html, true); javaMailSender.send(mimeMessage);}*/ /*@Test public void testRegisterEvent(){ Member member = new Member(); member.setId(1L); memberEvent.onRegisterSuccess(member); }*/ /*@Test public void testFindAd() throws SQLException, DataException { String unit = "GCX"; OtcCoin otcCoin = otcCoinService.findByUnit(unit); double marketPrice = 1000.00; int pageNo = 1; int pageSize = 10; AdvertiseType advertiseType = AdvertiseType.SELL; SpecialPage<ScanAdvertise> page = advertiseService.paginationAdvertise(pageNo, pageSize, otcCoin, advertiseType, marketPrice); System.out.println(page); }*/ }
<filename>src/components/ui/atoms/Button/Button.styles.ts import styled, { css } from 'styled-components/native'; import { ButtonProps } from './Button'; import { RectButton } from 'react-native-gesture-handler'; import Icon from '../../../../../assets/svg/discord.svg'; type ButtonStylesProps = Pick<ButtonProps, 'size' | 'buttonStyle'>; const wrapperModifiers = { small: () => css` width: 160px; height: 56px; `, medium: () => css` width: 274px; height: 56px; `, large: () => css` width: 327px; height: 56px; `, overallWidth: () => css` width: 100%; height: 56px; ` }; export const Wrapper = styled.View<{ border: boolean } & Pick<ButtonProps, 'size'>>` ${({ theme, border, size }) => css` border: ${border ? `1px solid ${theme.colors.secondary}` : 'none'}; border-radius: ${theme.border.radius}; ${!!size && wrapperModifiers[size]} `} `; export const Button = styled(RectButton)<ButtonStylesProps>` ${({ theme, size, enabled, buttonStyle }) => css` background-color: ${buttonStyle === 'primary' ? theme.colors.primary : 'transparent'}; border-radius: ${theme.border.radius}; align-items: center; flex-direction: row; opacity: ${enabled ? 1 : 0.5}; ${!!size && wrapperModifiers[size]} `} `; export const Text = styled.Text` ${({ theme }) => css` color: ${theme.colors.lightGray}; text-shadow: ${`0 0 8px ${theme.colors.lightGray}`}; font-family: ${theme.font.family.secundaryRegular}; font-weight: ${theme.font.regular}; font-style: normal; font-size: ${theme.font.sizes.medium}; text-align: center; flex: 1; `} `; export const BoxIcon = styled.View` ${({ theme }) => css` width: 56px; height: 56px; justify-content: center; align-items: center; border-right-width: 1px; border-color: ${theme.colors.line}; `} `; export const IconDiscord = styled(Icon)` ${({ theme }) => css` width: 24px; height: 18px; color: ${theme.colors.white}; text-shadow: ${`0 0 8px ${theme.colors.white}`}; `} `;
CARTERET COUNTY, NC (WITN) The Federal Emergency Management Agency has approved a multi-million dollar reimbursement for debris removal in one eastern Carolina county following Hurricane Florence. Carteret County will now receive nearly $5.7 million from FEMA after hiring contractors and using county workers and equipment to remove debris. The approved funds will cover work completed through November 4th of 2018. FEMA's public assistance program provides grants to state and local governments to reimburse the cost of debris removal, emergency protective measures and permanent repair work. FEMA reimburses applicants at least 75 percent of eligible costs, and the remaining 25 percent is covered by the state of North Carolina.
Hopf Bifurcation in Genesio System with Delayed Feedback In this paper, we investigate the local Hopf bifurcation in Genesio system with delayed feedback control. We choose the delay as the parameter, and the occurrence of local Hopf bifurcations are verified. By using the normal form theory and the center manifold theorem, we obtain the explicit formulae for determining the stability and direction of bifurcated periodic solutions. Numerical simulations indicate that delayed feedback control plays an effective role in control of chaos.
Comparative Analysis of Record Linkage Decision Rules This paper provides an empirical comparison of decision rules in the Fellegi-Sunter model of record linkage. Using files for which true linkage status is known, the results of applying various parameter-estimation/decision-rule strategies for designating links and nonlinks are compared. The Expectation-Maximization Algorithm provides estimates of parameters for loglinear models of latent classes in situations where the underlying probability distributions of agreements on identifiers such has surname, house number and age satisfy a conditional independence assumption and in situations where more general interactions are allowed.
#include<bits/stdc++.h> using namespace std; const int maxN=500; const int INF=1e5; int w[maxN][maxN],dis[maxN][maxN]; int n,m; void input() { cin>>n>>m; for(int i=1;i<=n;i++){ fill(dis[i],dis[i]+n+10,INF); fill(w[i],w[i]+n+10,INF);} for(int i=1;i<=m;i++) { int k1,k2; cin>>k1>>k2; w[k1][k2]=1; w[k2][k1]=1; } for(int i=1;i<=n;i++) for(int j=1;j<=n;j++) if(w[i][j]!=1) dis[i][j]=1; for(int i=1;i<=n;i++) dis[i][i]=w[i][i]=0; } int main() { input(); for(int z=1;z<=n;z++) for(int x=1;x<=n;x++) for(int y=1;y<=n;y++) w[x][y]=min(w[x][y],w[x][z]+w[z][y]); for(int z=1;z<=n;z++) for(int x=1;x<=n;x++) for(int y=1;y<=n;y++) dis[x][y]=min(dis[x][y],dis[x][z]+dis[z][y]); if(dis[1][n]>n || w[1][n]>n) cout<<-1; else cout<<max(dis[1][n],w[1][n]); return 0; }
Global Stability for an SEIR Epidemiological Model with Varying Infectivity and Infinite Delay A recent paper (Math. Biosci. and Eng. 5:389-402) presented an SEIR model using an infinite delay to account for varying infectivity. The analysis in that paper did not resolve the global dynamics for R0 > 1. Here, we show that the endemic equilibrium is globally stable for R0 > 1. The proof uses a Lyapunov functional that includes an integral over all previous states. 1. Introduction. A recent paper presented an SEIR model for an infectious disease that included infection-age structure to allow for varying infectivity. The incidence is of mass action type, but because of the varying infectivity, has the form S(t) ∞ 0 k(a)i(t, a)da. Nevertheless, the authors gave a thorough analysis leaving out only the elusive global stability of the endemic equilibrium. That issue is resolved in this paper using a Lyapunov functional related to the type of Lyapunov function used for ordinary differential equation (ODE) ecological models in the 1980s and used more recently for ODE epidemiological models. In, an ODE model of arbitrary dimension that includes varying infectivity is studied using the same type of Lyapunov function. For each of these models, the Lyapunov function is a sum of terms of the form f (y) = y−1−ln y, where y is a variable of the system. The model studied in this paper has infinite delay, and so it is necessary to include in the Lyapunov functional a term that integrates over all previous states. We now provide a brief outline of the paper. In Section 2 we describe the equations that are to be studied. Section 3 includes results by Rst and Wu from, providing the context in which this paper is to be read. Many of these results are then used in Section 4 where the global stability of the endemic equilibrium is shown -the key result of this paper. Constant recruitment into S is given by. Incidence is of mass action type with baseline coefficient. The relative infectivity of individuals of infection-age a is k(a), where k is an integrable function taking values in the interval. The natural death rate is d, the disease-related death rate is r, the average latency period is 1/ and the average period of infectivity is 1/r. The original model equations are and with the boundary condition i(t, 0) = E(t). Solving gives This allows equation to be rewritten as where the equations for dI dt and dR dt are omitted because they decouple. In order to specify the initial conditions for, we introduce the following notation. Given a non-negative function E defined on the interval (−∞, T ], for any t ≤ T we define the function E t : R ≤0 → R ≥0 by E t () = E(t + ) for ≤ 0. For equation, the initial condition would specify S, E, R ≥ 0 and i(0, ) : R ≥0 → R ≥0. For equation, an equation with infinite delay, the initial condition must specify S ≥ 0 and E 0 : Due to the infinite delay, it is necessary to determine an appropriate phase space. For any ∆ ∈ (0, d + + r), let C ∆ = : R ≤0 → R such that ()e ∆ is bounded and uniformly continuous and Define the norm on C ∆ and Y ∆ by It follows immediately that ≤. Fixing ∆ ∈ (0, d+ +r), we take the phase space for equation to be R ≥0 Y ∆. Any initial condition (S, E 0 ) ∈ R ≥0 Y ∆ gives a solution (S(t), E t ) that remains in the phase space for all time. Furthermore, if (S(t), E(t)) is bounded for t ≥ 0, then the positive orbit Relevant developments of infinite delay equations, including determining the phase space, can be found in and references found therein. 3. 
Previous results. In their paper, the authors of give a thorough analysis of equation. They find the equilibria, calculate the basic reproduction number R 0 and show that the system is point dissipative. The disease-free equilibrium is shown to be globally stable for R 0 < 1. For R 0 > 1 the disease-free equilibrium is unstable, there is a unique endemic equilibrium, which is locally asymptotically stable, and the system is permanent. They also do a final size calculation. All that remains to complete the analysis is to determine the global behaviour for R 0 > 1. This is done in Section 4 of this paper, where it is shown that the endemic equilibrium is globally stable for R 0 > 1. In preparation for that, we now give results from. The basic reproduction number for the model is For all values of the parameters, there is a disease-free equilibrium P 0 = (S 0, 0) Note that while we write an equilibrium of as a point (S,) ∈ R 2, more formally, an equilibrium point is a point ( S, E) ∈ R ≥0 Y ∆ satisfying S =S and E() = for all ≤ 0. The equilibrium solution is given by ( Related to this is an equilibrium of for which S(t), E(t), I(t) and R(t) are constant functions and for which i(t, a) =(a) = e −(d++r)a is independent of time t. Theorem 3.2. If R 0 < 1, then all solutions converge to the disease-free equilibrium, which is locally asymptotically stable. As with many finite dimensional models, if R 0 is larger than one, then the diseasefree equilibrium attracts disease-free states and repels states for which disease is present. Let a = inf a : ∞ a k()d = 0. For a system with a truly infinite delay, we have a = ∞, whereas, for a system with a bounded distributed delay, we have 0 < a < ∞. For a state ( S, E) ∈ R ≥0 Y ∆, we say that disease is present if E(−a) > 0 for some a ∈ [0, a). Recall that elements of Y ∆ are continuous. Thus, if E is positive at some point, then E is positive on an interval about that point. If disease is present for ( S, E), then the solution of with initial condition ( S, E) will satisfy E(t) > 0 for some t > 0. If E does not satisfy the given condition (i.e. E(−a) = 0 for all a ∈ [0, a)), then the solution of will have E(t) identically zero for t ≥ 0, and will converge to P 0. For a solution for which disease is present for the initial condition, we say the disease is initially present. Theorem 3.3. Suppose R 0 > 1. Then the disease-free equilibrium is unstable and the endemic equilibrium is locally asymptotically stable. Furthermore, the system is persistent; that is, there exists > 0 such that for any solution for which the disease is initially present, we have Remark 1. In, it is implicitly understood that a = ∞ meaning that the system has a true infinite delay. However, for a bounded distributed delay, which gives a < ∞, the proofs in still hold, as do the new results of this paper. 4. Global stability for R 0 > 1. Let X(t) = (S(t), E t ) be a solution of equation for which disease is initially present. It is shown in the proof of Theorem 6.1 of that the semi-flow induced by equation has properties that imply the existence of a global compact attractor (see Theorem 3.4.6 of ). Combined with Theorem 3.1 and Theorem 3.3, it follows that the -limit set of X is non-empty, compact, and invariant. It follows that is the union of orbits of equation. That is, if ( S, E) ∈ R ≥0 Y ∆ is an omega limit point of X, then there is a solution through ( S, E) such that every point on the solution is in. Proof. Fix > 0 and T ∈ R, and let Z = Z(T ) = ((T ), T ). 
Then Z ∈ is an omega limit point of X. Thus, there exists a sequence {t n } that increases to infinity such that X(t n ) → Z. Then S(t n ) → (T ). By Theorem 3.1 and Theorem 3.3, we have − ≤ S(t n ) ≤ M for large n, and so the same inequalities apply to (T ). Also, 0 ≤ |E(t n )−(T )| ≤ E tn − T, which goes to 0 as n → ∞. Thus, since − ≤ E(t n ) ≤ M for large enough n, the same is true for (T ). Because the choice of T was arbitrary, as was the choice of > 0, the desired result follows for all t ∈ R. Proof. We begin by normalizing. Let s(t) = (t)/S *, x(t) = (t)/E * and x t = t /E *. Then The endemic equilibrium for is p * = (s *, x * ) =. Thus, by evaluating both sides of at p *, we have Let We will study the behaviour of the Lyapunov functional We note that x is positive, as is (a) for each a ∈ [0, a). The function f has domain R >0 and range R ≥0. We also note that f has only one extreme value, which is the global minimum: f = 0. Thus, U (t) ≥ 0 with equality if and only if s(t) = x(t) = 1 and x(t − a) = 1 for almost all a ∈ [0, a). Lemma 4.1 implies U is well-defined; that is, U + is finite for all t. For clarity, we calculate the derivatives of each of U s, U x and U + separately and then combine them to get dU dt. Also, instances of s(t) and x(t) will be written as s and x, respectively. Subtracting the right-hand side of the first equation of gives In calculating dUx dt, we use the second equation of to replace ( + d) with the integral, obtaining C. CONNELL MCCLUSKEY We now calculate the derivative of U + (t). Next, we show that lim t→∞ s(t) = 1. To do this, we first note that dU. Suppose that s(t) does not converge to 1. Then there exist > 0 and a sequence {t n } that increases to infinity such that g(t n ) ≥ for each n. Note that the bounds on Z given by Lemma 4.1 imply that the derivative ds dt is bounded, and so there exists > 0 such that g(t) ≥ 2 for t ∈ I n = (t n −, t n + ). Then, we have dU dt ≤ − 2 for all t ∈ ∪I n, which is a set of infinite measure. Hence, U decreases to −∞, which contradicts the fact that U is bounded below. Thus, s(t) must converge to 1. Finally, we show that lim t→∞ x(t) = 1. To do this, let y(t) = s(t)+ x x(t). Then y Since s(t) converges to 1, this is an asymptotically autonomous ordinary differential equation for which solutions of the limiting equation go to a hyperbolic equilibrium. Thus, lim t→∞ y(t) = 1 +d S * +. Using, it follows that lim t→∞ x(t) = lim t→∞ 1 x (y(t) − s(t)) = 1. Since lim t→∞ (s(t), x(t)) =, it follows that lim t→∞ ((t), (t)) = (S *, E * ), completing the proof. Proof. Let Z(t) be a solution in, the omega limit set of X. By Theorem 4.2, Z(t) converges to the endemic equilibrium P *. Since is closed, we have P * ∈ and so X gets arbitrarily close to P *. By Theorem 3.3, P * is locally asymptotically stable and therefore X converges to P *. We note that the results here include systems with bounded distributed delay.
A new high-performance sideband-separating mixer for 650GHz In the modular sideband-separating mixers that we built over the last years, we observe a clear anti-correlation between the image rejection ratio obtained with a certain block and its noise performance, as well as strong correlations between the image rejection and imbalances in the pumping of the mixer devices. We report on the mechanisms responsible for these effects, and conclude that the reduction of the image rejection is largely explained by the presence of standing waves. We demonstrate the rejection ratio to be very sensitive to those. In principle, all potential round-trip paths should be terminated in matched loads, so no standing waves can develop. In practice, the typical high reflections from the SIS mixers combined with imperfect loads and non-negligible input/output reflections of the other components give many opportunities for standing waves. Since most of the loss of image rejection can be attributed to standing waves, the anti-correlation with the noise temperature can be understood by considering any excess loss in the structure, as the waveguides start acting as distribured loads. This reduces the standing waves, and thereby improves the rejection ratio, at the expense of noise temperature. Based on these experiences, we designed a new waveguide structure, with a basic waveguide size of 400200 m and improved loads. Strong emphasis was placed on low input and output reflections of the waveguide components, in some places at the cost of phase or amplitude imbalance. For the latter there is ample margin not to impair the performance, however. Apart from further details of the design, we present the first results of the new mixers, tested in a modified production-level ALMA Band 9 receiver, and show that even in an unfinished state, it simultaneously meets requirements for image rejection and noise temperature.
// src/setraders/tradingitem/PriceSimulator.java
package setraders.tradingitem;

import java.util.Random;

public class PriceSimulator {

	public static double cryptoPrice;
	public static double forexPrice;
	public static double stockPrice;
	public static Random random = new Random();

	// Draws a price uniformly from [4000, 15000).
	public static void randomGenCrypto() {
		cryptoPrice = 4000 + (15000 - 4000) * random.nextDouble();
	}

	// Draws a rate uniformly from [1, 4).
	public static void randomGenForex() {
		forexPrice = 1 + (4 - 1) * random.nextDouble();
	}

	// Draws a price uniformly from [200, 500).
	public static void randomGenStock() {
		stockPrice = 200 + (500 - 200) * random.nextDouble();
	}
}
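A hypothetical usage sketch (not part of the original file; the demo class name is invented and assumes it lives in the same package):

package setraders.tradingitem;

public class PriceSimulatorDemo {
    public static void main(String[] args) {
        // Regenerate each simulated price, then read the static fields.
        PriceSimulator.randomGenCrypto();
        PriceSimulator.randomGenForex();
        PriceSimulator.randomGenStock();
        System.out.printf("crypto=%.2f forex=%.4f stock=%.2f%n",
                PriceSimulator.cryptoPrice,
                PriceSimulator.forexPrice,
                PriceSimulator.stockPrice);
    }
}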
/* **============================================================================== ** ** Copyright (c) 2003, 2004, 2005, 2006, <NAME>, <NAME> ** ** Permission is hereby granted, free of charge, to any person obtaining a ** copy of this software and associated documentation files (the "Software"), ** to deal in the Software without restriction, including without limitation ** the rights to use, copy, modify, merge, publish, distribute, sublicense, ** and/or sell copies of the Software, and to permit persons to whom the ** Software is furnished to do so, subject to the following conditions: ** ** The above copyright notice and this permission notice shall be included in ** all copies or substantial portions of the Software. ** ** THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR ** IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, ** FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE ** AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER ** LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, ** OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE ** SOFTWARE. ** **============================================================================== */ #ifndef _cimple_Ops_h #define _cimple_Ops_h #include "config.h" #include "linkage.h" #include "Instance_Ref.h" #include "Atomic_Counter.h" CIMPLE_NAMESPACE_BEGIN /** Base class for CIM operation implementers (e.g., CIM clients). */ class CIMPLE_CIMPLE_LINKAGE Ops { public: Ops(); Ops(const Ops& x); Ops& operator=(const Ops& x); ~Ops(); protected: class Ops_Rep* _rep; private: friend Ops_Rep* __rep(Ops& ops); friend const Ops_Rep* __rep(const Ops& ops); }; CIMPLE_CIMPLE_LINKAGE void __invoke( Ops& ops, const String& ns, const Instance_Ref& ref, const Meta_Method* mm, ...); inline Ops_Rep* __rep(Ops& ops) { return ops._rep; } inline const Ops_Rep* __rep(const Ops& ops) { return ops._rep; } /** Internal class. */ class CIMPLE_CIMPLE_LINKAGE Ops_Rep { public: Ops_Rep(); virtual ~Ops_Rep(); virtual void invoke( const String& name_space, const Instance_Ref& instance_name, Instance* meth) = 0; static void ref(const Ops_Rep* rep); static void unref(const Ops_Rep* rep); Atomic_Counter refs; }; CIMPLE_NAMESPACE_END #endif /* _cimple_Ops_h */
package main

import (
	"fmt"
	"net/http"
	"strconv"

	"github.com/gin-gonic/gin"
	controller "github.com/gravitl/netmaker/controllers"
	"github.com/gravitl/netmaker/functions"
	"github.com/gravitl/netmaker/models"
	"github.com/skip2/go-qrcode"
)

func CreateIngressClient(c *gin.Context) {
	var client models.ExtClient
	client.Network = c.Param("net")
	client.IngressGatewayID = c.Param("mac")
	node, err := functions.GetNodeByMacAddress(client.Network, client.IngressGatewayID)
	if err != nil {
		fmt.Println(err)
		ReturnError(c, http.StatusBadRequest, err, "Nodes")
		return
	}
	client.IngressGatewayEndpoint = node.Endpoint + ":" + strconv.FormatInt(int64(node.ListenPort), 10)
	err = controller.CreateExtClient(client)
	if err != nil {
		fmt.Println(err)
		ReturnError(c, http.StatusBadRequest, err, "Nodes")
		return
	}
	ReturnSuccess(c, "ExtClients", "external client has been created")
}

func DeleteIngressClient(c *gin.Context) {
	net := c.Param("net")
	id := c.Param("id")
	err := controller.DeleteExtClient(net, id)
	if err != nil {
		fmt.Println(err)
		ReturnError(c, http.StatusBadRequest, err, "Nodes")
		return
	}
	ReturnSuccess(c, "ExtClients", "external client "+id+" @ "+net+" has been deleted")
}

//EditIngressClient displays a form to update name of external client
func EditIngressClient(c *gin.Context) {
	net := c.Param("net")
	id := c.Param("id")
	client, err := controller.GetExtClient(id, net)
	if err != nil {
		fmt.Println(err)
		ReturnError(c, http.StatusBadRequest, err, "Nodes")
		return
	}
	c.HTML(http.StatusOK, "EditExtClient", client)
}

func GetQR(c *gin.Context) {
	net := c.Param("net")
	id := c.Param("id")
	config, err := GetConf(net, id)
	if err != nil {
		fmt.Println(err)
		ReturnError(c, http.StatusBadRequest, err, "ExtClient")
		return
	}
	b, err := qrcode.Encode(config, qrcode.Medium, 220)
	if err != nil {
		fmt.Println(err)
		ReturnError(c, http.StatusBadRequest, err, "ExtClient")
		return
	}
	// gin sets the Content-Type header from the argument to c.Data; use the
	// PNG type so browsers render the QR code instead of downloading it.
	c.Data(http.StatusOK, "image/png", b)
}

func GetConf(net, id string) (string, error) {
	client, err := controller.GetExtClient(id, net)
	if err != nil {
		return "", err
	}
	gwnode, err := functions.GetNodeByMacAddress(client.Network, client.IngressGatewayID)
	if err != nil {
		return "", err
	}
	network, err := functions.GetParentNetwork(client.Network)
	if err != nil {
		return "", err
	}
	keepalive := ""
	if network.DefaultKeepalive != 0 {
		keepalive = "PersistentKeepalive = " + strconv.Itoa(int(network.DefaultKeepalive))
	}
	gwendpoint := gwnode.Endpoint + ":" + strconv.Itoa(int(gwnode.ListenPort))
	newAllowedIPs := network.AddressRange
	if egressGatewayRanges, err := client.GetEgressRangesOnNetwork(); err == nil {
		for _, egressGatewayRange := range egressGatewayRanges {
			newAllowedIPs += "," + egressGatewayRange
		}
	}
	defaultDNS := ""
	if network.DefaultExtClientDNS != "" {
		defaultDNS = "DNS = " + network.DefaultExtClientDNS
	}
	config := fmt.Sprintf(`[Interface]
Address = %s
PrivateKey = %s
%s

[Peer]
PublicKey = %s
AllowedIPs = %s
Endpoint = %s
%s
`, client.Address+"/32", client.PrivateKey, defaultDNS, gwnode.PublicKey, newAllowedIPs, gwendpoint, keepalive)
	return config, nil
}

func GetClientConfig(c *gin.Context) {
	net := c.Param("net")
	id := c.Param("id")
	config, err := GetConf(net, id)
	b := []byte(config)
	if err != nil {
		fmt.Println(err)
		ReturnError(c, http.StatusBadRequest, err, "ExtClient")
		return
	}
	filename := id + ".conf"
	//c.FileAttachment(filepath, filename)
	c.Header("Content-Description", "File Transfer")
	c.Header("Content-Disposition", "attachment; filename="+filename)
	c.Data(http.StatusOK,
"application/octet-stream", b) } //UpdateClient updates name of external Client func UpdateClient(c *gin.Context) { net := c.Param("net") id := c.Param("id") newid := c.PostForm("newid") client, err := controller.GetExtClient(id, net) if err != nil { fmt.Println(err) ReturnError(c, http.StatusBadRequest, err, "ExtClient") return } _, err = controller.UpdateExtClient(newid, net, client) if err != nil { fmt.Println(err) ReturnError(c, http.StatusBadRequest, err, "ExtClient") return } ReturnSuccess(c, "ExtClients", "external client has been updated") }
// IntegerNumber.cpp
#include "IntegerNumber.h"
#include <iostream>
using namespace std;

// Default-construct with value zero.
IntegerNumber::IntegerNumber()
{
	val = 0;
}

// Construct from a plain int.
IntegerNumber::IntegerNumber(int other)
{
	val = other;
}

// Print the wrapped value to stdout.
void IntegerNumber::print()
{
	cout << val;
}

// Arithmetic operators return a new IntegerNumber holding the result.
IntegerNumber IntegerNumber::operator + (const IntegerNumber& other)
{
	IntegerNumber temp(val + other.val);
	return temp;
}

IntegerNumber IntegerNumber::operator * (const IntegerNumber& other)
{
	IntegerNumber temp(val * other.val);
	return temp;
}
package com.qiwenshare.common.util;

import java.util.Random;

public class PasswordUtil {

    // Builds a 16-character numeric salt: two draws of up to 8 digits each
    // (nextInt(99999999) yields 0..99999998), padded with trailing zeros
    // whenever the concatenated draws fall short of 16 digits.
    public static String getSaltValue() {
        Random r = new Random();
        StringBuilder sb = new StringBuilder(16);
        sb.append(r.nextInt(99999999)).append(r.nextInt(99999999));
        int len = sb.length();
        if (len < 16) {
            for (int i = 0; i < 16 - len; i++) {
                sb.append("0");
            }
        }
        String salt = sb.toString();
        return salt;
    }
}
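A hypothetical usage sketch (not part of the original class; the demo name is invented and assumes the same package):

package com.qiwenshare.common.util;

public class PasswordUtilDemo {
    public static void main(String[] args) {
        String salt = PasswordUtil.getSaltValue();
        // Two draws of up to 8 digits each, zero-padded, so the salt
        // always comes out at exactly 16 characters.
        System.out.println(salt + " (" + salt.length() + " chars)");
    }
}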
My iPhone wouldn’t power on this past weekend, to my extreme annoyance. When I rolled in at the Apple Store at 5:55 on Saturday, five minutes before my scheduled Genius Bar appointment, several employees, equipped with the cold and empty smiles they’re known for, turned me away. They had decided to close early for the weather, but failed to alert scheduled customers. I braved the storm for a coveted spot at the Genius Bar, only to turn right back into the storm, empty-handed. I was enraged in the moment, but in hindsight, caveat emptor; I should have known better. Apple products were once known to be both cutting-edge and stylish, and their customer service was considered top-notch. However, even as their advantage in quality has diminished over the years, their marketing has remained extremely successful. Most Apple Stores I’ve been to are packed, often to the detriment of my goals and productivity that day. Apple has a fantastic reputation, and it’s allowed them to continue to attract customers and prosper. Sports teams don’t have to do nearly as much to attract fans. I’m a Clevelander, so by default, I’ve always been a fan, and importantly to them, a customer of the local teams. When I got into football, the Browns were lousy. When I got into baseball, the Indians were lousy. Nonetheless, a dedicated customer I became, and they’re lucky to have me and those like me (presumably you, the reader). If I founded a hotel in Portland, it would be out of business in weeks, not months, because there are hundreds of hotels in Portland with established customer bases and with CEOs that presumably know what they’re doing, and I don’t have the first clue. On the contrary, if I founded a Major League Baseball team in Portland, I’m confident that it would thrive financially, simply because a baseball team did not exist in that large pocket of people, and then it would.1 My point is this: compared to normal businesses, sports teams have a huge advantage in attracting customers because of tribalistic sports fan tendencies and the controlled number of teams that exist. I know that it’s easier said than done, but starting from this point of advantage means that the PR necessary for most companies to attract people is not necessary for, say, the Cleveland Indians. Good PR may help bring people to the stadium, attract non-baseball fans, or broaden their base into Tigers, Pirates, or Reds territories, but for the most part, all they need to do is exist and meet expectations. And yet, morale among Indians fans is, by my completely unscientific estimation, at a ten-year low, this despite six consecutive winning seasons, three AL Central titles, and an AL Pennant three seasons ago. Clearly, the Indians are not meeting fan expectations, so I thought it may be worth going through what those expectations are, and how the Indians have fared. This is obvious, but it isn’t trivial. Winning is obviously the goal that fans hold in the highest regard—flags fly forever! Winning can take several forms (winning the division, going above .500, etc.), and it’s not linear. It doesn’t matter how many times a team wins so much as how much they win relative to outside expectations. The Braves and the Indians finished a game apart in the standings. Both teams won their division, and both teams were promptly trounced in the divisional series. Each team is stocked with exciting, dynamic young players who will hopefully dominate the game for the next decade.
And yet, the Braves were heralded as an unmitigated success, and the Indians bordered on catastrophe. Some of the smartest national baseball writers are employed by The Appleman over at Fangraphs. Those very writers selected the Braves to come in fourth place, with nary a vote to make the playoffs. And they weren’t alone: oddsshark.com put the over/under for the 2018 Braves at 74.5 wins. ESPN didn’t even mention the Braves in their 2018 predictions. Even MLB.com’s picks for the 2018 “Cinderella Team”2 had the Braves or Phillies only winning a wild card. No one would have predicted that Ozzie Albies and Ronald Acuna would play so well, or Johan Camargo would come out of nowhere, or Nick Markakis would have a career year3, but all of those things did happen, making the Braves not only playoff contenders, but also one of the feel-good stories of the MLB season. Meanwhile, pretty much every source had the Indians winning 90+ games and breezing into the postseason as AL Central champions, partially because the Indians are really good, and partially because the rest of the AL Central is essentially a Major League Baseball cover band.4 This foregone conclusion meant that the Indians would play zero meaningful games during the regular season. That doesn’t have to be a bad thing, necessarily, if the team dominated as it should have. But, 91 wins against the worst division in MLB history, coming from a supposed superteam is not a good look.5 And, when the team finally had its fanbase’s full attention, the Indians got embarrassed. This is coming after a season in which Cleveland was arguably the best team in baseball, and two seasons after an AL Pennant and near-triumph in the World Series. Not only was the 2018 season a disappointment in every conceivable way, but it continued a downward trajectory which is no doubt troubling to fans. After the Indians lost the 2016 World Series, they did something out of character for them: they made a free agent splash, signing Edwin Encarnacion to a 3 year/$60 million contract, which many considered to be under his true value.6 They didn’t need to improve to continue their success, but they found EE to be a bargain that they simply could not pass up. This was not a PR move, but rather, a savvy roster construction move that helped PR. The reason I can say this with such certainty is that their activity, or should I say inactivity this offseason has made it clear that the front office does not care about PR. Should it have to? In a vacuum, the Indians roster inarguably has holes, and that’s what the fans see. Each outfield spot projects to be well below average, with left field and right field each rating as bottom three in the MLB. The bullpen, meanwhile, was a major problem last year, and despite some interesting additions to the 2019 relief corps, none of them are close to a sure thing, and fans seem to be taking a “wait and see” approach to them. Meanwhile, they see teams like the Yankees and Brewers signing players that would look awfully nice in red. The big picture is that people see a stunning lack of effort to improve a roster with extremely disappointing 2018 results. However, we don’t exist in a vacuum. From the Indians’ front office perspective, the Indians’ peaks are as high as their valleys are low. Francisco Lindor and Jose Ramirez are each projected to be the best in the MLB at their positions, as is the Indians starting rotation. The AL Central appears to have gotten collectively dosed with a horse tranquilizer. 
And, oh by the way, the bullpen is quietly projected to be middle of the road, not nearly as terrible as some people think it will be. That’s not even considering the fact that Stars n’ Scrubs rosters (which the Indians officially have) are far easier to improve upon than deeper rosters with fewer stars. Consider the following hypothetical: there are two teams with the exact same record and position player WAR totals on July 31. Team A has an amazing infield and a terrible outfield, while Team B is solid everywhere but has no true strengths or weaknesses. If Team A wants to improve its roster through a trade, all it needs to do is target an average outfielder, and voila! The roster is significantly improved. If Team B wants to improve, they have to trade for a star, which would be much more costly. By the same token, it’s easier to imagine a replacement-level player achieving a luck- or skill-fueled breakout into “average player” territory than it is to an average player turn into a star. All in all, the Indians are projected to win more games than last year and maintain their status as the clear favorite in the AL Central, so from a roster construction standpoint, the front office is undoubtedly satisfied with its position. It’s that very satisfaction, that lack of urgency to improve the roster, that has disheartened fans this offseason, but I urge you to look at the big picture and think about things from the perspective of the front office. Despite its flaws, this is a great team, and with no challengers in the AL Central, it figures to remain a great team for a long time. PR doesn’t matter as much in sports as it does for other businesses. As it stands, the Indians have a quality product, and the only marketing they need is success. If that happens, the seats of Progressive Field should be as crowded as an Apple Store. Here’s hoping they will be.
package com.sofka.perfilprofesional.domain.colaborador; import co.com.sofka.domain.generic.AggregateEvent; import co.com.sofka.domain.generic.DomainEvent; import com.sofka.perfilprofesional.domain.colaborador.events.ColaboradorCreado; import com.sofka.perfilprofesional.domain.colaborador.values.*; import com.sofka.perfilprofesional.domain.generics.IdHojaDeVida; import com.sofka.perfilprofesional.domain.generics.NombreCompleto; import java.util.List; public class Colaborador extends AggregateEvent<IdColaborador> { public IdHojaDeVida idHojaDeVida; public FechaNacimiento fechaNacimiento; public NombreCompleto nombreCompleto; public Cedula cedula; public Genero genero; public Area area; public Colaborador(IdColaborador idColaborador, IdHojaDeVida idHojaDeVida, FechaNacimiento fechaNacimiento, NombreCompleto nombreCompleto, Cedula cedula, Genero genero, Area area) { super(idColaborador); subscribe(new ColaboradorChange(this)); appendChange(new ColaboradorCreado(idHojaDeVida,fechaNacimiento,nombreCompleto,cedula,genero,area)).apply(); } private Colaborador(IdColaborador idColaborador){ super(idColaborador); subscribe(new ColaboradorChange(this)); } public static Colaborador from(IdColaborador idColaborador, List<DomainEvent> events){ var colaborador = new Colaborador(idColaborador); events.forEach(colaborador::applyEvent); return colaborador; } public IdHojaDeVida idHojaDeVida() { return idHojaDeVida; } public FechaNacimiento fechaNacimiento() { return fechaNacimiento; } public NombreCompleto nombreCompleto() { return nombreCompleto; } public Cedula cedula() { return cedula; } public Genero genero() { return genero; } public Area area() { return area; } }
BlackBerry's Devices Get Dumped in the U.S. Senate, But Does It Matter? The former smartphone leader loses another key market, but it probably won’t hurt its core software business. BlackBerry (NYSE:BB) often highlights the use of its devices among government employees as a niche market which is defensible against Apple's (NASDAQ:AAPL) iPhones and Alphabet's (NASDAQ:GOOG) (NASDAQ:GOOGL) Android devices. In the past, BlackBerry argued that government agencies wouldn't forsake its "best in breed" end-to-end security for the convenience of using more popular iOS or Android devices. Unfortunately for BlackBerry, Apple and Android device maker Samsung got better at securing their devices, and relaxed BYOD (bring your own device) policies enabled agencies to let more government employees use their own personal devices at work. Back in 2013, the U.S. Department of Defense approved iOS and Samsung KNOX devices for unclassified communications alongside BlackBerry devices. In early July, the U.S. Senate went a step further and announced that it would stop issuing BlackBerry devices to its entire staff and replace them with Apple and Samsung devices. Does this government-level abandonment of BlackBerry devices indicate that the end is nigh for the company's struggling hardware business? An accidental glimpse into the future? The Senate memo claimed that BlackBerry told Verizon and AT&T that it was ceasing the production of all BB10 devices (the Q10, Z10, Z30, Passport and Classic), so future orders could no longer be guaranteed. That was a stunning revelation, since BlackBerry had only recently discontinued the Classic. BlackBerry responded by calling the Senate statement "incorrect," and said that it will keep updating BB10 as it supports new Android devices. BlackBerry claims that the Senate staff misunderstood the discontinuation of the Classic as the end of all BB10 devices. Nonetheless, the Senate apparently hasn't changed its mind about replacing BlackBerries with iPhones and Samsung devices. If the discontinuation of the Classic is a preview of BlackBerry's future, it isn't a surprising one. The company has been pivoting toward Android devices over the past year, and recently announced the development of three new Android devices. The company's first Android device, the Priv, received a lukewarm response due to its high price tag. BlackBerry only sold half a million phones last quarter and controlled about 0.2% of the global smartphone market. Back in 2009, it controlled nearly 20% of that market. BlackBerry's rapid decline over the subsequent years can be attributed to its early refusal to swap physical keyboards for touchscreens, its inability to create a popular app ecosystem like Apple and Google, and mismanagement by co-CEOs Mike Lazaridis and Jim Balsillie and their successor Thorsten Heins. CEO John Chen, who took over in 2013, finally convinced BlackBerry to swallow its pride and partner with Samsung in 2014 to integrate KNOX with BES (BlackBerry Enterprise Service), the core pillar of its software business. This move complemented Chen's mission of transforming BlackBerry into a company focused on cross-platform software growth instead of hardware sales. BES plays a central role in this strategy, because it's a "control panel" for businesses to monitor iOS, Android, Windows, and BlackBerry devices -- thus capitalizing on the growth of BYOD instead of stubbornly resisting it. BlackBerry Enterprise Service 12. Image source: BlackBerry.
I believe that Chen's ultimate goal is to turn BlackBerry into a software company supported by BES, mobile device management services from Good Technology, the embedded OS QNX, and its BBM messaging app for business users. Software sales rose 21% annually last quarter, but only accounted for 39% of its top line. This means that BlackBerry can't simply abandon its dying hardware business without dramatically reducing its cash flows. Therefore, Chen seems to be gradually reducing its exposure to BB10 devices, which are selling poorly, and offering more Android devices tethered to its services, which might perform better by appealing to BYOD users. Chen likely knows that BlackBerry's Android devices will never sell as well as Samsung's, but they could buy its software business more time to become the company's main source of revenue. In that regard, investors should notice the silver lining on the government's abandonment of BlackBerry devices -- it might boost demand for its enterprise mobility management (EMM) services like BES, which Chen claims already controls up to 20% of the overall market. Therefore, investors shouldn't fret too much over the Senate's abandonment of BB10 devices. It was bound to happen sooner or later, and shouldn't hurt BlackBerry's core software growth engine in the long run.
# By Dimitris_GR from forums
# Modify Problem Set 31's (Optional) Symmetric Square to return True
# if the given square is antisymmetric and False otherwise.
# An nxn square is called antisymmetric if A[i][j]=-A[j][i]
# for each i=0,1,...,n-1 and for each j=0,1,...,n-1.
def antisymmetric(list_):
    length = len(list_)
    # Reject non-square inputs: every row must have exactly n entries.
    # (Checking row lengths directly avoids the IndexError the old
    # zip-based check hit on inputs with more rows than columns.)
    for row in list_:
        if len(row) != length:
            return False
    # list() makes the transpose indexable under Python 3, where zip is lazy.
    new_list = list(zip(*list_))  # converts rows to columns
    # Compare each entry against the negated transposed entry.
    for i in range(length):
        for j in range(length):
            if list_[i][j] != -new_list[i][j]:
                return False
    return True

# Test Cases:
print(antisymmetric([[0, 1, 2], [-1, 0, 3], [-2, -3, 0]]))
#>>> True
print(antisymmetric([[0, 0, 0], [0, 0, 0], [0, 0, 0]]))
#>>> True
print(antisymmetric([[0, 1, 2], [-1, 0, -2], [2, 2, 3]]))
#>>> False
print(antisymmetric([[1, 2, 5], [0, 1, -9], [0, 0, 1]]))
#>>> False
/** * Convert a JSONArray to a JSArray * * @param array * @return * @throws JSONException */ private JSArray convertJSONArrayToJSArray(JSONArray array) throws JSONException { JSArray rArr = new JSArray(); for (int i = 0; i < array.length(); i++) { rArr.put(array.get(i)); } return rArr; }
#include<avr/io.h>
#include<util/delay.h>
#include<compat/deprecated.h>

/* Four line sensors on PD0..PD3: "clr" = sensor sees the line, "set" = off it */
#define s1_clr bit_is_clear(PIND,0)
#define s2_clr bit_is_clear(PIND,1)
#define s3_clr bit_is_clear(PIND,2)
#define s4_clr bit_is_clear(PIND,3)
#define s1_set bit_is_set(PIND,0)
#define s2_set bit_is_set(PIND,1)
#define s3_set bit_is_set(PIND,2)
#define s4_set bit_is_set(PIND,3)

/* Motor-driver command patterns written to PORTC */
#define fwd PORTC=5
#define right PORTC=6
#define left PORTC=9

void stgt(int);
void rgtturn();
void leftturn();
void zigzag(int,int);

int main()
{
	DDRD=0xF0;       /* PD0..PD3 inputs for the sensors */
	DDRC=0b11111111; /* motor driver port, all outputs */
	DDRB=0x0F;       /* PB0..PB3 display the junction count */
	int b,l;
	b=2;             /* grid breadth (junction columns) */
	l=3;             /* grid length (junction rows) */
	while(1)
	{
		zigzag(b-1,l-1);
	}
}

/* Follow the line until 'a' junctions have been crossed, then stop. */
void stgt(int a)
{
	int i=0;
	for(i=0;i<a;)
	{
		if(s1_clr && s2_clr && s3_clr && s4_clr) /* all sensors on the line: a junction */
		{
			PORTB=++i;      /* count it and show the count */
			fwd;
			while(PIND==0); /* keep driving until the junction is behind us */
		}
		if(s2_clr && s3_clr && s1_set && s4_set) /* centred on the line */
		{
			fwd;
		}
		if(s1_set && s2_set && s3_clr && s4_set) /* drifting left: steer right */
		{
			right;
		}
		if(s1_set && s2_clr && s3_set && s4_set) /* drifting right: steer left */
		{
			left;
		}
	}
	PORTC=0; /* stop the motors */
}

/* 90-degree right turn: clear the junction, then pivot until the line is reacquired. */
void rgtturn()
{
	fwd;
	_delay_ms(2000);
	right;
	while(PIND!=7); /* rotate until only the outer sensor sees the line */
	right;
	while(PIND!=9); /* keep rotating until the centre sensors are back on it */
}

/* 90-degree left turn, mirror image of rgtturn(). */
void leftturn()
{
	fwd;
	_delay_ms(2000);
	left;
	while(PIND!=14);
	left;
	while(PIND!=9);
}

/* 180-degree turn: two consecutive right pivots. */
void abtturn()
{
	fwd;
	_delay_ms(2000);
	right;
	while(PIND!=7);
	right;
	while(PIND!=9);
	right;
	while(PIND!=7);
	right;
	while(PIND!=9);
}

/* Sweep the grid in a boustrophedon (zigzag) pattern: straight runs of r
   junctions joined by alternating right/left U-turns, then drive home. */
void zigzag(int r,int c)
{
	int i;
	stgt(r);
	for(i=1;i<=c;i++)
	{
		if(i%2!=0)
		{
			rgtturn();
			stgt(1);
			rgtturn();
			stgt(r);
		}
		if(i%2==0)
		{
			leftturn();
			stgt(1);
			leftturn();
			stgt(r);
		}
	}
	if(c%2==0)
	{
		abtturn();
		stgt(r);
		rgtturn();
		stgt(c);
		rgtturn();
	}
	if(c%2!=0)
	{
		rgtturn();
		stgt(c);
		rgtturn();
	}
}
Leadership: the critical success factor in the rise or fall of useful research activity. AIM To describe how momentum towards building research capacity has developed through aligning research activity with executive responsibility via strategic planning processes that direct operational structures and processes for research activity. BACKGROUND Reflecting on the development of research capacity over many years at complex tertiary referral hospitals reveals that building nursing knowledge is too important to be left to chance or whim but needs a strategic focus, appropriate resourcing and long-term sustainability through infrastructure. KEY ISSUES A number of key approaches we uncovered as successful include: (i) articulation of questions consistent with the strategic direction of the health context that can be addressed through research evidence; (ii) engagement and dissemination through making research meaningful; and (iii) feedback that informs the executive about the contribution of research activity to guide policy and practice decisions. CONCLUSIONS Leadership teams need to ensure that the development of research knowledge is a strategic priority. The focus also needs to be more broadly on creating research capacity than focussing on small operational issues. IMPLICATIONS FOR NURSING MANAGEMENT Research capacity is developed when it is initiated, supported and monitored by leadership.
pub mod http_attacker;
pub mod tcp_attacker;

use std::cmp;

use async_trait::async_trait;

use crate::target::Target;

/// One attack wave: `connections` concurrent requests against `target`.
#[derive(Clone, Debug)]
pub struct IterativeAttack {
    pub target: Target,
    pub connections: u32,
}

/// Aggregate of a wave: the fraction of requests that found the target
/// down, and the average response time (ms) of the remaining requests.
pub struct IterativeAttackResult {
    pub target_down_rate: f32,
    pub avg_response_time: f32,
}

pub trait SingleAttackResult: Send {
    fn target_down(&self) -> bool;
    fn get_duration_ms(&self) -> u128;
}

#[async_trait]
pub trait Attacker {
    async fn attack_target(
        &self,
        target: Target,
    ) -> Box<dyn SingleAttackResult>;
}

/// Launch all connections of the wave concurrently and fold their
/// individual outcomes into one `IterativeAttackResult`.
pub async fn attack(
    attacker: Box<dyn Attacker>,
    iterative_attack: &IterativeAttack,
) -> IterativeAttackResult {
    let result = (0..iterative_attack.connections).map(|_| {
        let target = iterative_attack.target.clone();
        attacker.attack_target(target)
    });
    let awaited_fut = futures::future::join_all(result).await;
    to_iterative_result(&awaited_fut)
}

fn to_iterative_result(
    vec: &Vec<Box<dyn SingleAttackResult>>,
) -> IterativeAttackResult {
    let mut amount_target_down = 0;
    let mut total_duration_ms = 0;
    for attack_result in vec {
        if attack_result.target_down() {
            amount_target_down += 1;
        } else {
            total_duration_ms += attack_result.get_duration_ms();
        }
    }
    // cmp::max(1, ..) guards both divisions against a zero denominator
    // when every request failed or the wave was empty.
    return IterativeAttackResult {
        avg_response_time: total_duration_ms as f32
            / cmp::max(1, vec.len() - amount_target_down) as f32,
        target_down_rate: amount_target_down as f32 / cmp::max(1, vec.len()) as f32,
    };
}
U.S. Republican presidential candidate Ben Carson does not believe there is a physical place where people go and are tormented. Republican presidential candidate Dr. Ben Carson believes people live in an evil world "so bad things will happen to some people." However, he does not believe that a physical place of torment called hell exists. "I don't see any evidence for that in the Bible," he told The Washington Post. "I don't believe there is a physical place where people go and are tormented. No. I don't believe that." What he does strongly believe in, though, is the existence of God. The Seventh-day Adventist is even so in awe of God that he cannot find the right words to describe Him. "There's no man who can explain God, or he would be God. He's a force that doesn't believe in dictating and gives you a choice: Whether you want to be associated with Him or not. It can provide enormous strength and power if you do. And He has been an integral part of my life. There are many things I would have never taken on in the medical field had I not felt that He was behind me," he said. At the same time, Carson believes in the idea that heaven is a physical place because there is proof of its existence in the Bible. "The Bible says when you die, you know, there is no soul that kind of floats away. But essentially, when you die, the next thing you know is the coming of Christ because you don't know anything when you're dead. If you're dead for a second or a thousand years, it's the same. But when he comes, according to the book of First Corinthians, that the sound of the archangel will rise and that's when things happen," he said. Carson also believes that Jesus Christ is coming back, and when He finally does, there will be tribulation. "We believe that Christ is going to return to the earth again," he said. "I think [Christ] could come any time." Because of this mindset, Carson stressed the need for people to "live your life as if He's coming back today. As if He's coming back tomorrow."
Attempt missed. Ashley Barnes (Burnley) right footed shot from the centre of the box is close, but misses to the right. Assisted by Chris Wood with a headed pass following a set piece situation. Attempt missed. Philip Billing (Huddersfield Town) left footed shot from outside the box misses to the right. Assisted by Alex Pritchard. Attempt missed. Steve Mounie (Huddersfield Town) right footed shot from the centre of the box is high and wide to the right. Assisted by Elias Kachunga with a cross. Attempt saved. Steve Mounie (Huddersfield Town) left footed shot from outside the box is saved in the bottom right corner. Assisted by Florent Hadergjonaj. Corner, Huddersfield Town. Conceded by Johann Gudmundsson. Attempt blocked. Christopher Schindler (Huddersfield Town) right footed shot from the left side of the six yard box is blocked. Assisted by Terence Kongolo. Attempt missed. Dwight McNeil (Burnley) left footed shot from outside the box misses to the left. Assisted by Ashley Westwood. Offside, Burnley. Ashley Westwood tries a through ball, but Johann Gudmundsson is caught offside. Isaac Mbenza (Huddersfield Town) wins a free kick in the defensive half. Attempt saved. Ben Mee (Burnley) header from the centre of the box is saved in the bottom right corner. Assisted by Ashley Westwood with a cross. Attempt saved. Ashley Barnes (Burnley) left footed shot from the right side of the six yard box is saved in the centre of the goal. Attempt blocked. Elias Kachunga (Huddersfield Town) header from the left side of the box is blocked. Assisted by Alex Pritchard with a cross. Goal! Huddersfield Town 1, Burnley 0. Steve Mounie (Huddersfield Town) header from the centre of the box to the top right corner. Assisted by Isaac Mbenza with a cross. Hand ball by Charlie Taylor (Burnley). Goal! Huddersfield Town 1, Burnley 1. Chris Wood (Burnley) left footed shot from very close range to the high centre of the goal. Assisted by Dwight McNeil with a cross. Second yellow card to Christopher Schindler (Huddersfield Town) for a bad foul. Attempt saved. Johann Gudmundsson (Burnley) left footed shot from outside the box is saved in the centre of the goal. Substitution, Huddersfield Town. Erik Durm replaces Alex Pritchard. Attempt missed. Charlie Taylor (Burnley) right footed shot from outside the box is too high. Assisted by Chris Wood. First Half ends, Huddersfield Town 1, Burnley 1. Second Half begins Huddersfield Town 1, Burnley 1. Attempt missed. Chris Wood (Burnley) header from the right side of the six yard box is close, but misses to the right. Assisted by Ashley Westwood with a cross following a corner. Offside, Burnley. Dwight McNeil tries a through ball, but Charlie Taylor is caught offside. Attempt blocked. Philip Billing (Huddersfield Town) header from the centre of the box is blocked. Assisted by Isaac Mbenza with a cross. Attempt saved. Johann Gudmundsson (Burnley) left footed shot from the left side of the box is saved in the centre of the goal. Assisted by Ashley Westwood. Attempt blocked. Isaac Mbenza (Huddersfield Town) right footed shot from the left side of the box is blocked. Assisted by Steve Mounie. Substitution, Burnley. Matthew Lowton replaces Phil Bardsley because of an injury. Attempt blocked. Ashley Barnes (Burnley) left footed shot from the centre of the box is blocked. Assisted by Dwight McNeil. Attempt blocked. Johann Gudmundsson (Burnley) header from the centre of the box is blocked. Assisted by Charlie Taylor with a cross. Robbie Brady (Burnley) wins a free kick on the left wing. 
Attempt missed. Johann Gudmundsson (Burnley) left footed shot from outside the box is close, but misses the top left corner following a set piece situation. Attempt saved. Robbie Brady (Burnley) left footed shot from the left side of the box is saved in the bottom left corner. Attempt missed. Chris Wood (Burnley) left footed shot from the centre of the box is too high. Assisted by Robbie Brady with a cross. Substitution, Huddersfield Town. Chris Löwe replaces Florent Hadergjonaj. Goal! Huddersfield Town 1, Burnley 2. Ashley Barnes (Burnley) right footed shot from the centre of the box to the bottom right corner. Assisted by Ashley Westwood. Chris Wood (Burnley) is shown the yellow card. Attempt missed. Robbie Brady (Burnley) right footed shot from outside the box is high and wide to the right. Assisted by Johann Gudmundsson. James Tarkowski (Burnley) wins a free kick on the right wing. Substitution, Huddersfield Town. Laurent Depoitre replaces Erik Durm because of an injury. Corner, Burnley. Conceded by Terence Kongolo. Isaac Mbenza (Huddersfield Town) wins a free kick in the attacking half. Attempt saved. Ashley Barnes (Burnley) right footed shot from the centre of the box is saved in the bottom right corner. Assisted by Jack Cork with a through ball. Robbie Brady (Burnley) is shown the red card. Match ends, Huddersfield Town 1, Burnley 2. Attempt missed. Philip Billing (Huddersfield Town) left footed shot from outside the box is just a bit too high from a direct free kick. Substitution, Burnley. Jeff Hendrick replaces Ashley Barnes. Tom Heaton (Burnley) is shown the yellow card. Second Half ends, Huddersfield Town 1, Burnley 2.
/** * Called for each row in the processRow method of the spring query. Upgrades the xml and update the * krns_maint_doc_t table. * * @param docId - the document id string * @param docCntnt - the old xml string * @param encryptServ - the encryption service used to encrypt/decrypt the xml */ public void processDocumentRow(String docId, String docCntnt, EncryptionService encryptServ, String runMode) { System.out.println(docId); try { String oldXml = docCntnt; if (encryptServ.isEnabled()) { oldXml = encryptServ.decrypt(docCntnt); } if ("2".equals(runMode)) { System.out.println("------ ORIGINAL DOC XML --------"); System.out.println(oldXml); System.out.println("--------------------------------"); } MaintainableXMLConversionServiceImpl maintainableXMLConversionServiceImpl = new MaintainableXMLConversionServiceImpl(); String newXML = maintainableXMLConversionServiceImpl.transformMaintainableXML(oldXml); if ("2".equals(runMode)) { System.out.println("******* UPGRADED DOC XML ********"); System.out.println(newXML); System.out.println("*********************************\n"); } if ("1".equals(runMode)) { if (encryptServ.isEnabled()) { jdbcTemplate.update("update krns_maint_doc_t set DOC_CNTNT = ? where DOC_HDR_ID = ?", new Object[]{encryptServ.encrypt(newXML), docId}); } else { jdbcTemplate.update("update krns_maint_doc_t set DOC_CNTNT = ? where DOC_HDR_ID = ?", new Object[]{newXML, docId}); } } totalDocs++; } catch (Exception ex) { Logger.getLogger(FileConverter.class.getName()).log(Level.SEVERE, null, ex); System.exit(1); } }
<filename>presto-tests/src/main/java/com/facebook/presto/tests/statistics/MetricComparator.java /* * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package com.facebook.presto.tests.statistics; import com.facebook.presto.cost.PlanNodeStatsEstimate; import com.facebook.presto.execution.StageInfo; import com.facebook.presto.spi.statistics.Estimate; import com.facebook.presto.sql.planner.Plan; import com.facebook.presto.sql.planner.plan.PlanNode; import com.facebook.presto.sql.planner.plan.PlanNodeId; import com.facebook.presto.sql.planner.planPrinter.PlanNodeStats; import com.facebook.presto.sql.planner.planPrinter.PlanNodeStatsSummarizer; import java.util.List; import java.util.Map; import java.util.Optional; import java.util.function.BinaryOperator; import java.util.stream.Collectors; import java.util.stream.Stream; import static com.facebook.presto.execution.StageInfo.getAllStages; import static com.facebook.presto.sql.planner.optimizations.PlanNodeSearcher.searchFrom; import static com.facebook.presto.util.MoreMaps.mergeMaps; import static com.google.common.collect.Maps.transformValues; public final class MetricComparator { private MetricComparator() {} public static List<MetricComparison> createMetricComparisons(Plan queryPlan, StageInfo outputStageInfo) { return Stream.of(Metric.values()).flatMap(metric -> { Map<PlanNodeId, PlanNodeStatsEstimate> estimates = queryPlan.getPlanNodeStats(); Map<PlanNodeId, PlanNodeStatsEstimate> actuals = extractActualStats(outputStageInfo); return estimates.entrySet().stream().map(entry -> { // todo refactor to stay in PlanNodeId domain ???? 
PlanNode node = planNodeForId(queryPlan, entry.getKey()); PlanNodeStatsEstimate estimate = entry.getValue(); Optional<PlanNodeStatsEstimate> execution = Optional.ofNullable(actuals.get(node.getId())); return createMetricComparison(metric, node, estimate, execution); }); }).collect(Collectors.toList()); } private static PlanNode planNodeForId(Plan queryPlan, PlanNodeId id) { return searchFrom(queryPlan.getRoot()) .where(node -> node.getId().equals(id)) .findOnlyElement(); } private static Map<PlanNodeId, PlanNodeStatsEstimate> extractActualStats(StageInfo outputStageInfo) { Stream<Map<PlanNodeId, PlanNodeStats>> stagesStatsStream = getAllStages(Optional.of(outputStageInfo)).stream() .map(PlanNodeStatsSummarizer::aggregatePlanNodeStats); Map<PlanNodeId, PlanNodeStats> mergedStats = mergeStats(stagesStatsStream); return transformValues(mergedStats, MetricComparator::toPlanNodeStats); } private static Map<PlanNodeId, PlanNodeStats> mergeStats(Stream<Map<PlanNodeId, PlanNodeStats>> stagesStatsStream) { BinaryOperator<PlanNodeStats> allowNoDuplicates = (a, b) -> { throw new IllegalArgumentException("PlanNodeIds must be unique"); }; return mergeMaps(stagesStatsStream, allowNoDuplicates); } private static PlanNodeStatsEstimate toPlanNodeStats(PlanNodeStats operatorStats) { return PlanNodeStatsEstimate.builder() .setOutputRowCount(new Estimate(operatorStats.getPlanNodeOutputPositions())) .setOutputSizeInBytes(new Estimate(operatorStats.getPlanNodeOutputDataSize().toBytes())) .build(); } private static MetricComparison createMetricComparison(Metric metric, PlanNode node, PlanNodeStatsEstimate estimate, Optional<PlanNodeStatsEstimate> execution) { Optional<Double> estimatedStats = asOptional(metric.getValue(estimate)); Optional<Double> executionStats = execution.flatMap(e -> asOptional(metric.getValue(e))); return new MetricComparison(node, metric, estimatedStats, executionStats); } private static Optional<Double> asOptional(Estimate estimate) { return estimate.isValueUnknown() ? Optional.empty() : Optional.of(estimate.getValue()); } }
import React from "react";
import Demo from "lib/Demo/demo";
import API from "example/API/api";
import Card from "lib/Card/card";
import CheckboxExampleBasic from "./checkbox.example_basic";
import CheckboxExampleGroup from "./checkbox.example_group";

// tslint:disable-next-line: no-var-requires
const codeCheck = require("!!raw-loader!./checkbox.example_basic.tsx");
// tslint:disable-next-line: no-var-requires
const codeGroup = require("!!raw-loader!./checkbox.example_group.tsx");

const CheckboxDemo = () => {
  return (
    <div className="content">
      <Card title="Basic usage of the Checkbox component">
        <Demo code={codeCheck.default}>
          <CheckboxExampleBasic />
        </Demo>
      </Card>
      <Card title="Usage of the CheckboxGroup component">
        <Demo code={codeGroup.default}>
          <CheckboxExampleGroup />
        </Demo>
      </Card>
      <Card title="API">
        <API type="checkbox" />
      </Card>
    </div>
  );
};

export default CheckboxDemo;
Nahki Wells’s Queens Park Rangers suffered a 4-0 defeat away to league leaders Norwich City in the Sky Bet Championship on Saturday. It was the eighth straight victory for Norwich, who were two goals up inside 12 minutes at Carrow Road, Emi Buendia putting them in front from close range and Marco Stiepermann firing in from 20 yards. Teemu Pukki netted his 25th goal of the season to extend his side’s lead before the interval. Norwich were reduced to ten men when Buendia was shown a red card for a reckless challenge on Josh Scowen in the 71st minute. It mattered little, however, with Pukki adding a second late on to heap more misery on the visiting side, who sacked their manager Steve McClaren last week. John Eustace, the QPR caretaker manager, said: “We were up against a really good Norwich side who are probably going to win the league and it was always going to be a tough game for us. “Having said that I was very disappointed with the way we defended early on. When Norwich scored their first it seemed to drain the confidence out of the lads and it was a very difficult first half for us. Reggie Lambe played a full 90 minutes in Cambridge United’s 1-0 defeat away to Mansfield in League Two on Saturday. Tyler Walker was on target for Mansfield, Lambe’s former club, in the 64th minute. Kacy Milan Butterfield delivered an all-action display in midfield in Kidderminster Harriers’ 3-0 victory at home to Bradford Park Avenue in the Vanarama National League North on Saturday. Joe Ironside scored a brace while Ashley Chambers also netted. In the same division, Osagi Bascome was an unused substitute in Darlington’s 2-0 win at home to FC United of Manchester, as was goalkeeper Jahquil Hill in Hereford United’s 2-2 draw at home to Stockport County. Zeiko Lewis scored his second goal of the season in Charleston Battery’s 2-0 win over Charlotte Independence in the United Soccer League Eastern Conference yesterday. Ian Svantesson was also on target. Roger Lee picked up a yellow card in Tallinna Kalev’s 2-0 win at home to Kuressaare in the Estonian Meistriliiga yesterday. The defeat leaves Lee’s side bottom of the table with no points from their opening four league games. Danté Leverock, the Bermuda captain, played a full 90 minutes as Sligo Rovers suffered a 2-0 defeat at home to Bohemians in the SSC Airtricity League on Saturday. Bermudian goalkeeper Nathan Trott has played his first match of 2019, posting a shutout as relegation-threatened West Ham United beat Leicester City 1-0 on a 76th-minute Daniel Kemp penalty yesterday in Premier League 2. Trott, who last played on December 15 in a 4-2 home defeat by Brighton & Hove Albion and has since penned a new deal with the East London club, made four saves on his return from a nagging leg injury. The result lifts West Ham out of the relegation places in Premier League 2 and into ninth place on goal difference. They have two matches remaining — against fourth-placed Derby County and leaders Everton — while Blackburn Rovers and Tottenham Hotspur, who are also on 22 points, have three left. Swansea City are all but relegated on 16 points with two matches left.
package betterwithmods.integration; import betterwithmods.BWMod; import betterwithmods.integration.tcon.TraitMending; import betterwithmods.util.NetherSpawnWhitelist; import net.minecraft.block.Block; import net.minecraft.item.ItemStack; import net.minecraft.util.ResourceLocation; import net.minecraftforge.fluids.Fluid; import net.minecraftforge.fluids.FluidRegistry; import net.minecraftforge.fml.relauncher.Side; import net.minecraftforge.fml.relauncher.SideOnly; import net.minecraftforge.oredict.OreDictionary; import org.apache.commons.lang3.tuple.Pair; import slimeknights.mantle.util.RecipeMatch; import slimeknights.tconstruct.library.MaterialIntegration; import slimeknights.tconstruct.library.TinkerRegistry; import slimeknights.tconstruct.library.client.MaterialRenderInfo; import slimeknights.tconstruct.library.fluid.FluidMolten; import slimeknights.tconstruct.library.materials.ExtraMaterialStats; import slimeknights.tconstruct.library.materials.HandleMaterialStats; import slimeknights.tconstruct.library.materials.HeadMaterialStats; import slimeknights.tconstruct.library.materials.Material; import slimeknights.tconstruct.library.smeltery.MeltingRecipe; import slimeknights.tconstruct.library.traits.AbstractTrait; import slimeknights.tconstruct.library.utils.HarvestLevels; import slimeknights.tconstruct.tools.TinkerMaterials; import slimeknights.tconstruct.tools.TinkerTraits; import java.util.List; public class TConstruct { public static final Material soulforgedSteel = mat("soulforgedSteel", 5066061); public static final Material hellfire = mat("hellfire", 14426647); public static AbstractTrait mending; public static FluidMolten soulforgeFluid; public static FluidMolten hellfireFluid; public static void init() { mending = new TraitMending(); if(BWMod.proxy.isClientside()) registerRenderInfo(soulforgedSteel, 5066061, 0.1F, 0.3F, 0.1F); soulforgeFluid = fluidMetal("soulforged_steel", 5066061); soulforgeFluid.setTemperature(681); soulforgedSteel.addItem("ingotSoulforgedSteel", 1, Material.VALUE_Ingot); soulforgedSteel.addTrait(mending); if(BWMod.proxy.isClientside()) registerRenderInfo(hellfire, 14426647, 0.0F, 0.2F, 0.0F); hellfireFluid = fluidMetal("hellfire", 14426647); hellfireFluid.setTemperature(850); hellfire.addItem("ingotHellfire", 1, Material.VALUE_Ingot); hellfire.addTrait(TinkerTraits.autosmelt); TinkerRegistry.addMaterialStats(soulforgedSteel, new HeadMaterialStats(875, 12.0F, 6.0F, HarvestLevels.OBSIDIAN), new HandleMaterialStats(1.0F, 225), new ExtraMaterialStats(50)); TinkerRegistry.addMaterialStats(hellfire, new HeadMaterialStats(325, 8.0F, 4.0F, HarvestLevels.DIAMOND), new HandleMaterialStats(0.75F, 75), new ExtraMaterialStats(25)); registerMaterial(soulforgedSteel, soulforgeFluid, "SoulforgedSteel"); registerMaterial(hellfire, hellfireFluid, "Hellfire"); fixHellfireDust(); netherWhitelist(); } private static void netherWhitelist() { Block ore = Block.REGISTRY.getObject(new ResourceLocation("tconstruct", "ore")); NetherSpawnWhitelist.addBlock(ore, 0); NetherSpawnWhitelist.addBlock(ore, 1); NetherSpawnWhitelist.addBlock(Block.REGISTRY.getObject(new ResourceLocation("tconstruct", "slime_congealed")), 3); NetherSpawnWhitelist.addBlock(Block.REGISTRY.getObject(new ResourceLocation("tconstruct", "slime_congealed")), 4); NetherSpawnWhitelist.addBlock(Block.REGISTRY.getObject(new ResourceLocation("tconstruct", "slime_dirt")), 3); Block slimeGrass = Block.REGISTRY.getObject(new ResourceLocation("tconstruct", "slime_grass")); NetherSpawnWhitelist.addBlock(slimeGrass, 4); 
NetherSpawnWhitelist.addBlock(slimeGrass, 9); NetherSpawnWhitelist.addBlock(slimeGrass, 14); } private static void registerMaterial(Material material, Fluid fluid, String oreSuffix) { MaterialIntegration mat = new MaterialIntegration(material, fluid, oreSuffix).setRepresentativeItem("ingot" + oreSuffix); mat.integrate(); mat.integrateRecipes(); mat.registerRepresentativeItem(); } private static Material mat(String name, int color) { Material mat = new Material(name, color); TinkerMaterials.materials.add(mat); return mat; } private static FluidMolten fluidMetal(String name, int color) { FluidMolten fluid = new FluidMolten(name, color); return registerFluid(fluid); } private static <T extends Fluid> T registerFluid(T fluid) { fluid.setUnlocalizedName(BWMod.MODID + ":" + fluid.getName()); FluidRegistry.registerFluid(fluid); return fluid; } @SideOnly(Side.CLIENT) private static void registerRenderInfo(Material material, int color, float shininess, float brightness, float hueshift) { material.setRenderInfo(new MaterialRenderInfo.Metal(color, shininess, brightness, hueshift)); } private static void fixHellfireDust() { Pair<List<ItemStack>, Integer> dustOre = Pair.of(OreDictionary.getOres("powderedHellfire"), Material.VALUE_Ingot / 8); TinkerRegistry.registerMelting(new MeltingRecipe(RecipeMatch.of(dustOre.getLeft(), dustOre.getRight()), hellfireFluid)); } }
Ontario is helping more than 100 non-profit organizations expand and improve their programs and services for more than 350,000 people and build stronger communities across the province. Liz Sandals, MPP for Guelph, made the announcement today on behalf of Eleanor McMahon, Minister of Tourism, Culture and Sport, at the Lakeside HOPE House in Guelph. The province is supporting the Lakeside HOPE House to help more than 60 low-income people through its Circles® program. Program volunteers help individuals and their families become more financially independent by offering emotional and practical support and connecting them to resources and training, for example preparing for job interviews. These 114 organizations will be funded through two Ontario Trillium Foundation streams: Grow grants, which help organizations expand an existing, proven not-for-profit project, and Collective Impact grants, which help organizations work together to tackle complex issues in their own community. Supporting strong, healthy communities is part of our plan to create jobs, grow our economy and help people in their everyday lives. The Ontario Trillium Foundation, an agency of the Government of Ontario, is one of Canada’s largest granting foundations. Since 2013, the Ontario Trillium Foundation has invested over $432 million in projects to help build healthy and vibrant communities. Sixteen Grant Review Teams across Ontario, composed of active community-based volunteers, review applications and guide granting decisions for the OTF. With an investment of $35.5 million, these 114 projects will have a positive impact on the lives of more than 356,000 people across the province over the next three years. OTF publishes its granting data in a raw, machine-readable format to help drive innovation and collaboration. This aligns with Ontario’s Open Government commitment to increase transparency by making government data more publicly available.
This invention relates to a distributing valve for viscous materials, particularly twin cylinder pumps supplying concrete, said valve having a shaft in a casing which can be acted upon at least partially by the pump pressure, for moving an oscillating body with locking elements, which open a suction orifice for one pump cylinder and a pressure orifice for the other pump cylinder in one position of the oscillating body and are arranged to close the pressure orifice of the sucking cylinder and the suction orifice of the compressing cylinder. Such a valve will hereinafter be referred to as a valve of the type specified. In concrete pumps the casing is usually arranged below a preliminary feed receptacle, into which the concrete is introduced, for example, by means of a conveyor mixer. One of the pistons of the pump sucks the concrete out of the preliminary feed receptacle into the appropriate cylinder, while the other piston forces the concrete sucked in during the previous stroke out of the other cylinder into a pipe. In general the casing has two pressure orifices arranged adjacent one another for the attachment of a Y pipe, to the other end of which a conveyor pipe for conveying the concrete is attached. However, the construction of the distributing valve should be such that, in each position of the oscillating body, one cylinder has its suction orifice open and its pressure orifice closed while the reverse is true for the corresponding orifices of the other cylinder. In a known distributing valve of the type specified, this requirement is fulfilled by arranging for the axis of rotation of the shaft to be roughly in the middle of the casing and by arranging that, in one of the two control positions, which are on diametrically opposite sides of the axis of the shaft, the suction orifice of one pump cylinder and the pressure orifice of the other pump cylinder are closed while the pressure orifice of the one and the suction orifice of the other pump cylinder are simultaneously open. In the other control position the functions are reversed. The disadvantage of this known method of construction lies in the problems of sealing and the problems of the resulting wear on the rotating shaft, which must lie within the part of the casing which is under pressure.
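The two control positions described above amount to a simple two-state machine in which, at all times, each cylinder has exactly one of its two orifices open and the two cylinders are in opposite phases. A toy Java model (illustrative only; all identifiers are invented, not taken from the patent text) makes that invariant explicit:

// Toy model of the distributing valve's two control positions.
public class DistributingValve {
    enum Position { A, B }
    private Position position = Position.A;

    // Cylinder 1 sucks in position A; cylinder 2 sucks in position B.
    boolean suctionOpen(int cylinder) {
        return (cylinder == 1) == (position == Position.A);
    }
    // The invariant from the text: the pressure orifice is open exactly
    // when the suction orifice of the same cylinder is closed.
    boolean pressureOpen(int cylinder) {
        return !suctionOpen(cylinder);
    }
    // The shaft swings the oscillating body to the other control position.
    void oscillate() {
        position = (position == Position.A) ? Position.B : Position.A;
    }

    public static void main(String[] args) {
        DistributingValve v = new DistributingValve();
        // Position A: cylinder 1 sucks while cylinder 2 pushes.
        System.out.println(v.suctionOpen(1) + " " + v.pressureOpen(2)); // true true
        v.oscillate();
        // Position B: the functions are reversed.
        System.out.println(v.suctionOpen(2) + " " + v.pressureOpen(1)); // true true
    }
}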
Three-year risk of high-grade CIN for women aged 30 years or older who undergo baseline Pap cytology and HPV co-screening Papanicolaou (Pap) cytology and high-risk human papillomavirus (HPV) DNA co-testing for women aged ≥30 years is recommended for the prevention of cervical cancer. The objective of the current study was to evaluate the efficacy of this co-testing for predicting the risk of high-grade cervical intraepithelial neoplasia 3 (CIN3) during a 3-year follow-up period.
A 26-year-old Okaloosa County school bus driver has been put on administrative leave following her arrest Monday. Christina B. Russell of Crestview was arrested after sheriff’s deputies discovered equipment and supplies used to produce methamphetamine in her home while trying to serve an arrest warrant on her husband, according to a news release from the Okaloosa County Sheriff’s Office. School district administrators placed Russell on paid leave until the School Board meets to decide what action to take, according to Mike Foxworthy, chief of human resources for the district. Russell said she let her husband, Allen Rhinehart, cook the meth while children were present, according to the Sheriff’s Office. Russell was charged with conspiracy to traffic in methamphetamine and child neglect without great harm. Rhinehart was not home when deputies arrived to serve the warrant.
from .containers import *
from .events import *
from struct import unpack, pack
from .util import *
from .fileio import *

def get_subclasses(base):
    """Return the set of all direct and indirect subclasses of `base`."""
    subs = set(base.__subclasses__())
    recursive = set(e for a in subs for e in get_subclasses(a))
    subs.update(recursive)
    return subs

def populate_eventregistry():
    # Register every concrete event class; the abstract base classes are
    # excluded because only concrete events should be registered.
    events = get_subclasses(AbstractEvent).difference(
        {AbstractEvent, Event, MetaEvent, NoteEvent, MetaEventWithText}
    )
    for event in events:
        EventRegistry.register_event(event)

populate_eventregistry()
/** * This class is part of JCodec ( www.jcodec.org ) This software is distributed * under FreeBSD License. * * The class is a direct java port of libvpx's * (https://github.com/webmproject/libvpx) relevant VP8 code with significant * java oriented refactoring. * * @author The JCodec project * */ public class BestSegInfo { public MV ref_mv; public MV mvp; public long segment_rd; public BlockEnum segment_num; public int r; public int d; public int segment_yrate; public BPredictionMode[] modes = new BPredictionMode[16]; public MV[] mvs = new MV[modes.length]; public short[] eobs = new short[modes.length]; public int mvthresh; public int[] mdcounts; public MV[] sv_mvp = new MV[4]; /* save 4 mvp from 8x8 */ public FullAccessIntArrPointer sv_istep = new FullAccessIntArrPointer(2); /* save 2 initial step_param for 16x8/8x16 */ public BestSegInfo(long best_rd, MV best_ref_mv, int mvthresh, int[] mdcounts) { segment_rd = best_rd; ref_mv = best_ref_mv; mvp = best_ref_mv.copy(); this.mvthresh = mvthresh; this.mdcounts = mdcounts; segment_num = BlockEnum.BLOCK_16X8; r = d = 0; for (int i = 0; i < modes.length; ++i) { modes[i] = BPredictionMode.ZERO4X4; eobs[i] = 0; mvs[i] = new MV(); } for (int i = 0; i < sv_mvp.length; i++) { sv_mvp[i] = new MV(); } } }
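For context, a caller typically seeds this search state with an unbeatable rate-distortion cost and a reference motion vector, then lets the per-partition search overwrite the fields whenever a cheaper candidate is found. The snippet below is a minimal, hypothetical sketch of that seeding step; the import path and the surrounding search loop are assumptions for illustration, not guaranteed JCodec API.

// Hypothetical seeding of the segment search state, mirroring libvpx's
// rd_check_segment(): start from an unbeatable RD cost so that any real
// candidate wins the first comparison. Adjust imports to the actual
// package layout of the port.
public class SegSearchDemo {
    public static void main(String[] args) {
        MV bestRefMv = new MV();        // zero motion vector as the reference
        int[] mdCounts = new int[4];    // mode-decision counters, zero-initialised
        BestSegInfo bsi = new BestSegInfo(Long.MAX_VALUE, bestRefMv, /* mvthresh */ 0, mdCounts);

        // A per-partition search would compare each candidate's RD cost against
        // bsi.segment_rd and, on improvement, record its mvs/modes/eobs.
        System.out.println("initial segment_rd = " + bsi.segment_rd);
    }
}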
Clinical correlates of paliperidone palmitate and aripiprazole monohydrate prescription for subjects with schizophrenia-spectrum disorders: findings from the STAR Network Depot Study. This study, based on the Servizi Territoriali Associati per la Ricerca (STAR) Network Depot Study nationwide baseline data, explored whether the severity of individual symptoms and of symptom clusters might influence the prescription of paliperidone palmitate 1-month (PP1M) vs. aripiprazole monohydrate. The Brief Psychiatric Rating Scale (BPRS) was used to assess psychopathology and relevant symptom clusters. The 10-item Drug Attitude Inventory was used to test attitude towards medications. Adherence to treatments was rated according to the Kemp seven-point scale. We assessed 451 individuals for eligibility and, among them, included 195 subjects (n=117 who started PP1M and n=78 who started aripiprazole monohydrate). Individuals were comparable in terms of age, gender, treatment years, recent hospitalizations, previous long-acting injectable antipsychotic treatments, additional oral treatments, attitude toward drugs, medication adherence, and alcohol/substance-related comorbidities. Subjects starting PP1M presented higher BPRS overall (P=0.009), positive (P=0.015), and negative (P=0.010) symptom scores compared to subjects starting aripiprazole monohydrate. Results were confirmed by appropriate regression models and propensity score matching analysis. No differences were found comparing the other BPRS subscale scores: affect, resistance, and activation. Clinicians may be more prone to prescribe PP1M, rather than aripiprazole monohydrate, to subjects showing higher overall symptom severity, including positive and negative symptoms. No additional clinical factors influenced prescribing attitudes in our sample.
// ============================================================================
//
// Copyright (C) 2006-2021 Talend Inc. - www.talend.com
//
// This source code is available under agreement available at
// %InstallDIR%\features\org.talend.rcp.branding.%PRODUCTNAME%\%PRODUCTNAME%license.txt
//
// You should have received a copy of the agreement
// along with this program; if not, write to Talend SA
// 9 rue Pages 92150 Suresnes, France
//
// ============================================================================
package org.talend.core.model.metadata;

/**
 * nrousseau class global comment. Detailed comment
 */
public interface IHL7Constant {

    public static final String REF_TYPE = "-TYPE"; //$NON-NLS-1$

    public static final String REPOSITORY_VALUE = "HL7"; //$NON-NLS-1$

    public static final String TABLE_SCHEMAS = "SCHEMAS"; //$NON-NLS-1$

    public static final String REF_ATTR_REPOSITORY = "REPOSITORY"; //$NON-NLS-1$

    public static final String FIELD_SCHEMA = "SCHEMA"; //$NON-NLS-1$

    public static final String FIELD_MAPPING = "MAPPING"; //$NON-NLS-1$

    public static final String FIELD_CODE = "CODE"; //$NON-NLS-1$

    public static final String PATH = "PATH"; //$NON-NLS-1$

    public static final String VALUE = "VALUE"; //$NON-NLS-1$

    public static final String COLUMN = "COLUMN"; //$NON-NLS-1$

    public static final String ATTRIBUTE = "ATTRIBUTE"; //$NON-NLS-1$

    public static final String ORDER = "ORDER"; //$NON-NLS-1$

    public static final String REPEATABLE = "REPEATABLE"; //$NON-NLS-1$

    public static final String PROPERTY = "PROPERTY"; //$NON-NLS-1$

    public static final String PROPERTY_TYPE = "PROPERTY_TYPE"; //$NON-NLS-1$

    public static final String REPOSITORY_PROPERTY_TYPE = "REPOSITORY_PROPERTY_TYPE"; //$NON-NLS-1$
}
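These constants are plain string keys, so consumers tag metadata entries with them instead of hard-coding literals; keeping producers and readers of the metadata on the same constant avoids silent typos. The sketch below is purely illustrative; the Map-based store is an invented stand-in for demonstration, not part of the Talend API.

import java.util.HashMap;
import java.util.Map;

public class Hl7MetadataDemo {
    public static void main(String[] args) {
        // Hypothetical metadata record keyed by the shared constants.
        Map<String, String> field = new HashMap<>();
        field.put(IHL7Constant.FIELD_SCHEMA, "ADT_A01");
        field.put(IHL7Constant.FIELD_MAPPING, "PID-5-1");
        field.put(IHL7Constant.COLUMN, "PatientLastName");

        System.out.println("schema = " + field.get(IHL7Constant.FIELD_SCHEMA));
    }
}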
/**
 * Class is the abstract instance for all documents of type DocumentSum.
 *
 * @author The eFaps Team
 */
@EFapsUUID("e177ab08-67f0-4ce2-8eff-d3f167352bee")
@EFapsApplication("eFapsApp-Sales")
public abstract class AbstractDocumentSum_Base
    extends AbstractDocument
{

    /**
     * Key used to store the result of an access check during a request.
     */
    public static final String ACCESSREQKEY = AbstractDocumentSum.class.getName() + ".accessCheck4Rate";

    /**
     * @param _parameter Parameter as passed by the eFaps API
     * @return Return granting access or not
     * @throws EFapsException on error
     */
    public Return accessCheck4NetUnitPrice(final Parameter _parameter)
        throws EFapsException
    {
        final Return ret = new Return();
        final Field field = (Field) _parameter.get(ParameterValues.UIOBJECT);
        if (field.isEditableDisplay(TargetMode.CREATE) && !Calculator.isIncludeMinRetail(_parameter, this)
                        || field.isReadonlyDisplay(TargetMode.CREATE)
                                        && Calculator.isIncludeMinRetail(_parameter, this)) {
            ret.put(ReturnValues.TRUE, true);
        }
        return ret;
    }

    /**
     * AccessCheck that grants access only if currency and rate currency are different.
     *
     * @param _parameter Parameter as passed by the eFaps API
     * @return Return granting access or not
     * @throws EFapsException on error
     */
    public Return accessCheck4Rate(final Parameter _parameter)
        throws EFapsException
    {
        final Return ret = new Return();
        Object obj = Context.getThreadContext().getRequestAttribute(AbstractDocumentSum_Base.ACCESSREQKEY);
        if (obj == null) {
            final PrintQuery print = new PrintQuery(_parameter.getInstance());
            print.addAttribute(CISales.DocumentSumAbstract.CurrencyId, CISales.DocumentSumAbstract.RateCurrencyId);
            print.executeWithoutAccessCheck();
            final Long currencyId = print.<Long>getAttribute(CISales.DocumentSumAbstract.CurrencyId);
            final Long rateCurrencyId = print.<Long>getAttribute(CISales.DocumentSumAbstract.RateCurrencyId);
            obj = currencyId != null && !currencyId.equals(rateCurrencyId);
            Context.getThreadContext().setRequestAttribute(AbstractDocumentSum_Base.ACCESSREQKEY, obj);
        }
        if ((Boolean) obj) {
            ret.put(ReturnValues.TRUE, true);
        }
        return ret;
    }

    /**
     * Method to edit the basic Document. For every attribute of the type to be
     * edited the method checks whether a related field is present in the parameters.
     *
     * @param _parameter Parameter as passed from the eFaps API.
     * @return the edited document
     * @throws EFapsException on error.
     */
    protected EditedDoc editDoc(final Parameter _parameter)
        throws EFapsException
    {
        return editDoc(_parameter, new EditedDoc(_parameter.getInstance()));
    }

    /**
     * Method to edit the basic Document. For every attribute of the type to be
     * edited the method checks whether a related field is present in the parameters.
     *
     * @param _parameter Parameter as passed from the eFaps API.
     * @param _editDoc edited document
     * @return the edited document
     * @throws EFapsException on error.
*/ protected EditedDoc editDoc(final Parameter _parameter, final EditedDoc _editDoc) throws EFapsException { final List<Calculator> calcList = analyseTable(_parameter, null); _editDoc.addValue(AbstractDocument_Base.CALCULATORS_VALUE, calcList); final Instance baseCurrInst = Currency.getBaseCurrency(); final Instance rateCurrInst = getRateCurrencyInstance(_parameter, _editDoc); final Object[] rateObj = getRateObject(_parameter); final BigDecimal rate = ((BigDecimal) rateObj[0]).divide((BigDecimal) rateObj[1], 12, RoundingMode.HALF_UP); final Update update = new Update(_editDoc.getInstance()); final String name = getDocName4Edit(_parameter); if (name != null) { update.add(CISales.DocumentSumAbstract.Name, name); _editDoc.getValues().put(CISales.DocumentSumAbstract.Name.name, name); } final String date = _parameter.getParameterValue(getFieldName4Attribute(_parameter, CISales.DocumentSumAbstract.Date.name)); if (date != null) { update.add(CISales.DocumentSumAbstract.Date, date); _editDoc.getValues().put(CISales.DocumentSumAbstract.Date.name, date); } final String duedate = _parameter.getParameterValue(getFieldName4Attribute(_parameter, CISales.DocumentSumAbstract.DueDate.name)); if (duedate != null) { update.add(CISales.DocumentSumAbstract.DueDate, duedate); _editDoc.getValues().put(CISales.DocumentSumAbstract.DueDate.name, duedate); } final String contact = _parameter.getParameterValue(getFieldName4Attribute(_parameter, CISales.DocumentSumAbstract.Contact.name)); final Instance contactIns = Instance.get(contact); if (contactIns != null && contactIns.isValid()) { update.add(CISales.DocumentSumAbstract.Contact, contactIns.getId()); _editDoc.getValues().put(CISales.DocumentSumAbstract.Contact.name, contactIns); } final String remark = _parameter.getParameterValue(getFieldName4Attribute(_parameter, CISales.DocumentSumAbstract.Remark.name)); if (remark != null) { update.add(CISales.DocumentSumAbstract.Remark, remark); _editDoc.getValues().put(CISales.DocumentSumAbstract.Remark.name, remark); } final String revision = _parameter.getParameterValue(getFieldName4Attribute(_parameter, CISales.DocumentSumAbstract.Revision.name)); if (revision != null) { update.add(CISales.DocumentSumAbstract.Revision, revision); _editDoc.getValues().put(CISales.DocumentSumAbstract.Revision.name, revision); } final String note = _parameter.getParameterValue(getFieldName4Attribute(_parameter, CISales.DocumentSumAbstract.Note.name)); if (note != null) { update.add(CISales.DocumentSumAbstract.Note, note); _editDoc.getValues().put(CISales.DocumentSumAbstract.Note.name, note); } final String salesperson = _parameter.getParameterValue(getFieldName4Attribute(_parameter, CISales.DocumentSumAbstract.Salesperson.name)); if (salesperson != null) { update.add(CISales.DocumentSumAbstract.Salesperson, salesperson); _editDoc.getValues().put(CISales.DocumentSumAbstract.Salesperson.name, salesperson); } final String groupId = _parameter.getParameterValue(getFieldName4Attribute(_parameter, CISales.DocumentSumAbstract.Group.name)); if (groupId != null) { update.add(CISales.DocumentSumAbstract.Group, groupId); _editDoc.getValues().put(CISales.DocumentSumAbstract.Group.name, groupId); } if (_editDoc.getInstance().getType().isKindOf(CISales.DocumentSumAbstract.getType())) { final DecimalFormat frmt = NumberFormatter.get().getFrmt4Total(getType4SysConf(_parameter)); final int scale = frmt.getMaximumFractionDigits(); final BigDecimal rateCrossTotal = getCrossTotal(_parameter, calcList).setScale(scale, RoundingMode.HALF_UP); 
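            // Worked example (hypothetical figures): with rateObj = {2.80, 1.00} the
            // effective rate is 2.80 / 1.00 = 2.80. A rate-currency cross total of
            // 118.00 is then stored twice below: unchanged as RateCrossTotal (118.00)
            // and converted into the base currency as CrossTotal,
            // 118.00 / 2.80 = 42.142857... -> 42.14 at scale 2 with HALF_UP.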
update.add(CISales.DocumentSumAbstract.RateCrossTotal, rateCrossTotal);
            _editDoc.getValues().put(CISales.DocumentSumAbstract.RateCrossTotal.name, rateCrossTotal);

            final BigDecimal rateNetTotal = getNetTotal(_parameter, calcList).setScale(scale, RoundingMode.HALF_UP);
            update.add(CISales.DocumentSumAbstract.RateNetTotal, rateNetTotal);
            _editDoc.getValues().put(CISales.DocumentSumAbstract.RateNetTotal.name, rateNetTotal);

            update.add(CISales.DocumentSumAbstract.RateDiscountTotal, BigDecimal.ZERO);
            update.add(CISales.DocumentSumAbstract.RateTaxes, getRateTaxes(_parameter, calcList, rateCurrInst));
            update.add(CISales.DocumentSumAbstract.Taxes, getTaxes(_parameter, calcList, rate, baseCurrInst));

            final BigDecimal crossTotal = getCrossTotal(_parameter, calcList).divide(rate, RoundingMode.HALF_UP)
                            .setScale(scale, RoundingMode.HALF_UP);
            update.add(CISales.DocumentSumAbstract.CrossTotal, crossTotal);
            _editDoc.getValues().put(CISales.DocumentSumAbstract.CrossTotal.name, crossTotal);

            final BigDecimal netTotal = getNetTotal(_parameter, calcList).divide(rate, RoundingMode.HALF_UP)
                            .setScale(scale, RoundingMode.HALF_UP);
            update.add(CISales.DocumentSumAbstract.NetTotal, netTotal);
            _editDoc.getValues().put(CISales.DocumentSumAbstract.NetTotal.name, netTotal);

            update.add(CISales.DocumentSumAbstract.DiscountTotal, BigDecimal.ZERO);
            update.add(CISales.DocumentSumAbstract.CurrencyId, baseCurrInst);
            update.add(CISales.DocumentSumAbstract.Rate, rateObj);
            update.add(CISales.DocumentSumAbstract.RateCurrencyId, rateCurrInst);

            _editDoc.getValues().put(CISales.DocumentSumAbstract.CurrencyId.name, baseCurrInst);
            _editDoc.getValues().put(CISales.DocumentSumAbstract.RateCurrencyId.name, rateCurrInst);
            _editDoc.getValues().put(CISales.DocumentSumAbstract.Rate.name, rateObj);
        }
        addStatus2DocEdit(_parameter, update, _editDoc);
        add2DocEdit(_parameter, update, _editDoc);
        update.execute();
        return _editDoc;
    }

    /**
     * @param _parameter Parameter as passed from the eFaps API.
     * @param _createdDoc createdDoc
     * @return Instance of the rate currency.
     * @throws EFapsException on error.
     */
    protected Instance getRateCurrencyInstance(final Parameter _parameter,
                                               final CreatedDoc _createdDoc)
        throws EFapsException
    {
        return new Currency().getCurrencyFromUI(_parameter, "rateCurrencyId");
    }

    /**
     * Method to create the basic Document. For every attribute of the type to be
     * created the method checks whether a related field is present in the parameters.
     *
     * @param _parameter Parameter as passed from the eFaps API.
     * @return Instance of the document.
     * @throws EFapsException on error.
*/ protected CreatedDoc createDoc(final Parameter _parameter) throws EFapsException { final CreatedDoc createdDoc = new CreatedDoc(); final List<Calculator> calcList = analyseTable(_parameter, null); createdDoc.addValue(AbstractDocument_Base.CALCULATORS_VALUE, calcList); final Instance baseCurrInst = Currency.getBaseCurrency(); final Instance rateCurrInst = getRateCurrencyInstance(_parameter, createdDoc); final Object[] rateObj = getRateObject(_parameter); final BigDecimal rate = ((BigDecimal) rateObj[0]).divide((BigDecimal) rateObj[1], 12, RoundingMode.HALF_UP); final Insert insert = new Insert(getType4DocCreate(_parameter)); final String name = getDocName4Create(_parameter); insert.add(CISales.DocumentSumAbstract.Name, name); createdDoc.getValues().put(CISales.DocumentSumAbstract.Name.name, name); final String date = _parameter.getParameterValue(getFieldName4Attribute(_parameter, CISales.DocumentSumAbstract.Date.name)); if (date != null) { insert.add(CISales.DocumentSumAbstract.Date, date); createdDoc.getValues().put(CISales.DocumentSumAbstract.Date.name, date); } final String duedate = _parameter.getParameterValue(getFieldName4Attribute(_parameter, CISales.DocumentSumAbstract.DueDate.name)); if (duedate != null) { insert.add(CISales.DocumentSumAbstract.DueDate, duedate); createdDoc.getValues().put(CISales.DocumentSumAbstract.DueDate.name, duedate); } final String contact = _parameter.getParameterValue(getFieldName4Attribute(_parameter, CISales.DocumentSumAbstract.Contact.name)); if (contact != null) { final Instance inst = Instance.get(contact); if (inst.isValid()) { insert.add(CISales.DocumentSumAbstract.Contact, inst.getId()); createdDoc.getValues().put(CISales.DocumentSumAbstract.Contact.name, inst); } } final String note = _parameter.getParameterValue(getFieldName4Attribute(_parameter, CISales.DocumentSumAbstract.Note.name)); if (note != null) { insert.add(CISales.DocumentSumAbstract.Note, note); createdDoc.getValues().put(CISales.DocumentSumAbstract.Note.name, note); } final String revision = _parameter.getParameterValue(getFieldName4Attribute(_parameter, CISales.DocumentSumAbstract.Revision.name)); if (revision != null) { insert.add(CISales.DocumentSumAbstract.Revision, revision); createdDoc.getValues().put(CISales.DocumentSumAbstract.Revision.name, revision); } final String remark = _parameter.getParameterValue(getFieldName4Attribute(_parameter, CISales.DocumentSumAbstract.Remark.name)); if (remark != null) { insert.add(CISales.DocumentSumAbstract.Remark, remark); createdDoc.getValues().put(CISales.DocumentSumAbstract.Remark.name, remark); } final String salesperson = _parameter.getParameterValue(getFieldName4Attribute(_parameter, CISales.DocumentSumAbstract.Salesperson.name)); if (salesperson != null) { insert.add(CISales.DocumentSumAbstract.Salesperson, salesperson); createdDoc.getValues().put(CISales.DocumentSumAbstract.Salesperson.name, salesperson); } final String groupId = _parameter.getParameterValue(getFieldName4Attribute(_parameter, CISales.DocumentSumAbstract.Group.name)); if (groupId != null) { insert.add(CISales.DocumentSumAbstract.Group, groupId); createdDoc.getValues().put(CISales.DocumentSumAbstract.Group.name, groupId); } if (getType4DocCreate(_parameter).isKindOf(CISales.DocumentSumAbstract.getType())) { final DecimalFormat frmt = NumberFormatter.get().getFrmt4Total(getType4SysConf(_parameter)); final int scale = frmt.getMaximumFractionDigits(); final BigDecimal rateCrossTotal = getCrossTotal(_parameter, calcList) .setScale(scale, RoundingMode.HALF_UP); 
insert.add(CISales.DocumentSumAbstract.RateCrossTotal, rateCrossTotal);
            createdDoc.getValues().put(CISales.DocumentSumAbstract.RateCrossTotal.name, rateCrossTotal);

            final BigDecimal rateNetTotal = getNetTotal(_parameter, calcList).setScale(scale, RoundingMode.HALF_UP);
            insert.add(CISales.DocumentSumAbstract.RateNetTotal, rateNetTotal);
            createdDoc.getValues().put(CISales.DocumentSumAbstract.RateNetTotal.name, rateNetTotal);

            insert.add(CISales.DocumentSumAbstract.RateDiscountTotal, BigDecimal.ZERO);
            insert.add(CISales.DocumentSumAbstract.RateTaxes, getRateTaxes(_parameter, calcList, rateCurrInst));
            insert.add(CISales.DocumentSumAbstract.Taxes, getTaxes(_parameter, calcList, rate, baseCurrInst));

            final BigDecimal crossTotal = getCrossTotal(_parameter, calcList).divide(rate, RoundingMode.HALF_UP)
                            .setScale(scale, RoundingMode.HALF_UP);
            insert.add(CISales.DocumentSumAbstract.CrossTotal, crossTotal);
            createdDoc.getValues().put(CISales.DocumentSumAbstract.CrossTotal.name, crossTotal);

            final BigDecimal netTotal = getNetTotal(_parameter, calcList).divide(rate, RoundingMode.HALF_UP)
                            .setScale(scale, RoundingMode.HALF_UP);
            insert.add(CISales.DocumentSumAbstract.NetTotal, netTotal);
            createdDoc.getValues().put(CISales.DocumentSumAbstract.NetTotal.name, netTotal);

            insert.add(CISales.DocumentSumAbstract.DiscountTotal, BigDecimal.ZERO);
            insert.add(CISales.DocumentSumAbstract.CurrencyId, baseCurrInst);
            insert.add(CISales.DocumentSumAbstract.Rate, rateObj);
            insert.add(CISales.DocumentSumAbstract.RateCurrencyId, rateCurrInst);

            createdDoc.getValues().put(CISales.DocumentSumAbstract.CurrencyId.name, baseCurrInst);
            createdDoc.getValues().put(CISales.DocumentSumAbstract.RateCurrencyId.name, rateCurrInst);
            createdDoc.getValues().put(CISales.DocumentSumAbstract.Rate.name, rateObj);
        }
        addStatus2DocCreate(_parameter, insert, createdDoc);
        add2DocCreate(_parameter, insert, createdDoc);
        insert.execute();
        createdDoc.setInstance(insert.getInstance());

        // call possible listeners
        for (final IOnCreateDocument listener : Listener.get().<IOnCreateDocument>invoke(
                        IOnCreateDocument.class)) {
            listener.afterCreate(_parameter, createdDoc);
        }
        return createdDoc;
    }

    /**
     * Method is executed as an update event of the field containing the
     * quantity of products to calculate the new totals.
     *
     * @param _parameter Parameter as passed by the eFaps API
     * @return Return containing the list
     * @throws EFapsException on error
     */
    public Return updateFields4Quantity(final Parameter _parameter)
        throws EFapsException
    {
        final Return retVal = new Return();
        final List<Map<String, Object>> list = new ArrayList<>();
        final Map<String, Object> map = new HashMap<>();
        final int selected = getSelectedRow(_parameter);
        final List<Calculator> calcList = analyseTable(_parameter, null);
        if (calcList.size() > 0) {
            // only access the calculator for the selected row once the list is known to be filled
            final Calculator cal = calcList.get(selected);
            add2Map4UpdateField(_parameter, map, calcList, cal, true);
            list.add(map);
            retVal.put(ReturnValues.VALUES, list);
        }
        return retVal;
    }

    /**
     * Update the fields after a change of the cross price.
     *
     * @param _parameter Parameter as passed by the eFaps API
     * @return Return containing the list
     * @throws EFapsException on error
     */
    public Return updateFields4CrossPrice(final Parameter _parameter)
        throws EFapsException
    {
        final Return retVal = new Return();
        final List<Map<String, Object>> list = new ArrayList<>();
        final Map<String, Object> map = new HashMap<>();
        final int selected = getSelectedRow(_parameter);
        final List<Calculator> calcList = analyseTable(_parameter, null);
        if (calcList.size() > 0) {
            // only access the calculator for the selected row once the list is known to be filled
            final Calculator cal = calcList.get(selected);
            cal.setCrossPrice(_parameter.getParameterValues("crossPrice")[selected]);
            add2Map4UpdateField(_parameter, map, calcList, cal, true);
            list.add(map);
            retVal.put(ReturnValues.VALUES, list);
        }
        return retVal;
    }

    /**
     * Add to the map for update field.
     *
     * @param _parameter Parameter as passed by the eFaps API
     * @param _map Map the values will be added to
     * @param _calcList list of all calculators
     * @param _cal current calculator
     * @param _includeTotal the include total
     * @throws EFapsException on error
     */
    protected void add2Map4UpdateField(final Parameter _parameter,
                                       final Map<String, Object> _map,
                                       final List<Calculator> _calcList,
                                       final Calculator _cal,
                                       final boolean _includeTotal)
        throws EFapsException
    {
        // positions
        if (_cal != null) {
            _map.put("quantity", _cal.getQuantityStr());
            _map.put("netUnitPrice", _cal.getNetUnitPriceFmtStr());
            _map.put("netUnitPrice4Read", _cal.getNetUnitPriceFmtStr());
            _map.put("netPrice", _cal.getNetPriceFmtStr());
            _map.put("discountNetUnitPrice", _cal.getDiscountNetUnitPriceFmtStr());
            _map.put("discount", _cal.getDiscountFmtStr());
            _map.put("crossPrice", _cal.getCrossPriceFmtStr());
        }
        if (_includeTotal) {
            // totals
            _map.put("netTotal", getNetTotalFmtStr(_parameter, _calcList));
            _map.put("crossTotal", getCrossTotalFmtStr(_parameter, _calcList));
            final StringBuilder js = new StringBuilder();
            js.append(getTaxesScript(_parameter,
                            new TaxesAttribute().getUI4ReadOnly(getRateTaxes(_parameter, _calcList, null))));
            InterfaceUtils.appendScript4FieldUpdate(_map, js);
            if (Sales.PERCEPTIONCERTIFICATEACTIVATE.get()) {
                _map.put("perceptionTotal", getPerceptionTotalFmtStr(_parameter, _calcList));
            }
        }
    }

    /**
     * @param _parameter Parameter as passed by the eFaps API
     * @param _innerHtml innerHtml part of the taxfield
     * @return StringBuilder with Javascript
     */
    protected StringBuilder getTaxesScript(final Parameter _parameter,
                                           final String _innerHtml)
    {
        return getTaxesScript(_parameter, "taxes", _innerHtml);
    }

    /**
     * @param _parameter Parameter as passed by the eFaps API
     * @param _fieldName fieldName
     * @param _innerHtml innerHtml part of the taxfield
     * @return StringBuilder with Javascript
     */
    protected StringBuilder getTaxesScript(final Parameter _parameter,
                                           final String _fieldName,
                                           final String _innerHtml)
    {
        return new StringBuilder()
                        .append("require([\"dojo/query\", \"dojo/NodeList-manipulate\"], function(query){")
                        .append("query(\"[name='").append(_fieldName).append("']\").innerHTML(\"")
                        .append(_innerHtml)
                        .append("\");")
                        .append("});");
    }

    /**
     * Method is executed as an update event of the field containing the net
     * unit price for products to calculate the new totals.
     *
     * @param _parameter Parameter as passed by the eFaps API
     * @return Return containing the list
     * @throws EFapsException on error
     */
    public Return updateFields4NetUnitPrice(final Parameter _parameter)
        throws EFapsException
    {
        final Return retVal = new Return();
        final List<Map<String, Object>> list = new ArrayList<>();
        final Map<String, Object> map = new HashMap<>();
        final int selected = getSelectedRow(_parameter);
        final List<Calculator> calcList = analyseTable(_parameter, null);
        if (calcList.size() > 0) {
            final Calculator cal = calcList.get(selected);
            add2Map4UpdateField(_parameter, map, calcList, cal, true);
            list.add(map);
            retVal.put(ReturnValues.VALUES, list);
        }
        return retVal;
    }

    /**
     * Method is executed as an update event of the field containing the
     * discount for products to calculate the new totals.
     *
     * @param _parameter Parameter as passed by the eFaps API
     * @return Return containing the list
     * @throws EFapsException on error
     */
    public Return updateFields4Discount(final Parameter _parameter)
        throws EFapsException
    {
        final Return retVal = new Return();
        final List<Map<String, Object>> list = new ArrayList<>();
        final Map<String, Object> map = new HashMap<>();
        final int selected = getSelectedRow(_parameter);
        final List<Calculator> calcList = analyseTable(_parameter, null);
        if (calcList.size() > 0) {
            final Calculator cal = calcList.get(selected);
            add2Map4UpdateField(_parameter, map, calcList, cal, true);
            list.add(map);
            retVal.put(ReturnValues.VALUES, list);
        }
        return retVal;
    }

    /**
     * Method to update the fields on leaving the product field.
     *
     * @param _parameter Parameter as passed from the eFaps API
     * @return map list with values
     * @throws EFapsException on error
     */
    public Return updateFields4Product(final Parameter _parameter)
        throws EFapsException
    {
        final Return retVal = new Return();
        final List<Map<String, Object>> list = new ArrayList<>();
        final Map<String, Object> map = new HashMap<>();
        final int selected = getSelectedRow(_parameter);
        final Field field = (Field) _parameter.get(ParameterValues.UIOBJECT);
        final String fieldName = field.getName();
        final Instance prodInst = Instance.get(_parameter.getParameterValues(fieldName)[selected]);
        // validate that a product was selected
        if (prodInst.isValid()) {
            add2UpdateField4Product(_parameter, map, prodInst);
            final List<Calculator> calcList = analyseTable(_parameter, selected);
            if (calcList.size() > 0) {
                final Calculator cal = calcList.get(selected);
                add2Map4UpdateField(_parameter, map, calcList, cal, true);
            }
        }
        list.add(map);
        retVal.put(ReturnValues.VALUES, list);
        return retVal;
    }

    /**
     * @param _parameter Parameter as passed by the eFaps API
     * @return List map for the update event
     * @throws EFapsException on error
     */
    public Return updateFields4Uom(final Parameter _parameter)
        throws EFapsException
    {
        final Return retVal = new Return();
        final List<Map<String, Object>> list = new ArrayList<>();
        final Map<String, Object> map = new HashMap<>();
        final int selected = getSelectedRow(_parameter);
        final List<Calculator> calcList = analyseTable(_parameter, null);
        if (calcList.size() > 0) {
            final Calculator cal = calcList.get(selected);
            final Long uomID = Long.parseLong(_parameter.getParameterValues("uoM")[selected]);
            final UoM uom = Dimension.getUoM(uomID);
            // rescale the current price to the newly selected unit of measure,
            // e.g. numerator = 12, denominator = 1 turns a price of 2.50 into 30.00
            final BigDecimal up = cal.getProductPrice().getCurrentPrice()
                            .multiply(new BigDecimal(uom.getNumerator()))
                            .divide(new BigDecimal(uom.getDenominator()), RoundingMode.HALF_UP);
            cal.setUnitPrice(up);
            add2Map4UpdateField(_parameter, map, calcList, cal, true);
            list.add(map);
            retVal.put(ReturnValues.VALUES, list);
        }
        return retVal;
} @Override protected StringBuilder getJavaScript4Positions(final Parameter _parameter, final List<Instance> _instances) throws EFapsException { return super.getJavaScript4Positions(_parameter, _instances).append( InterfaceUtils.wrappInScriptTag(_parameter, "executeCalculator();\n", false, 2000)); } @Override protected StringBuilder getJavaScript4Positions(final Parameter _parameter, final Instance _instance) throws EFapsException { return super.getJavaScript4Positions(_parameter, _instance).append( InterfaceUtils.wrappInScriptTag(_parameter, "executeCalculator();\n", false, 2000)); } /** * Update the form after change of rate currency. * * @param _parameter Parameter as passed by the eFaps API for esjp * @return javascript for update * @throws EFapsException on error */ public Return updateFields4RateCurrency(final Parameter _parameter) throws EFapsException { final List<Map<String, String>> list = new ArrayList<>(); final Instance newInst = getRateCurrencyInstance(_parameter, null); final Map<String, String> map = new HashMap<>(); Instance currentInst = new Currency().getCurrencyFromUI(_parameter, "rateCurrencyId_eFapsPrevious"); final Instance baseInst = Currency.getBaseCurrency(); if (currentInst == null || currentInst != null && !currentInst.isValid()) { currentInst = baseInst; } if (!newInst.equals(currentInst)) { final Currency currency = new Currency(); final RateInfo[] rateInfos = currency.evaluateRateInfos(_parameter, _parameter.getParameterValue("date_eFapsDate"), currentInst, newInst); final List<Calculator> calculators = analyseTable(_parameter, null); final StringBuilder js = new StringBuilder(); int i = 0; final Map<Integer, Map<String, Object>> values = new TreeMap<>(); for (final Calculator calculator : calculators) { final Map<String, Object> map2 = new HashMap<>(); if (!calculator.isEmpty()) { calculator.applyRate(newInst, RateInfo.getRate(_parameter, rateInfos[2], getTypeName4SysConf(_parameter))); map2.put("netUnitPrice", calculator.getNetUnitPriceFmtStr()); map2.put("netUnitPrice4Read", calculator.getNetUnitPriceFmtStr()); map2.put("netPrice", calculator.getNetPriceFmtStr()); map2.put("discountNetUnitPrice", calculator.getDiscountNetUnitPriceFmtStr()); map2.put("crossPrice", calculator.getCrossPriceFmtStr()); values.put(i, map2); } i++; } final Set<String> noEscape = new HashSet<>(); add2SetValuesString4Postions4CurrencyUpdate(_parameter, calculators, values, noEscape); js.append(getSetFieldValuesScript(_parameter, values.values(), noEscape)); if (calculators.size() > 0) { js.append(getSetFieldValue(0, "crossTotal", getCrossTotalFmtStr(_parameter, calculators))) .append(getSetFieldValue(0, "netTotal", getNetTotalFmtStr(_parameter, calculators))) .append(addFields4RateCurrency(_parameter, calculators)); if (_parameter.getParameterValue("openAmount") != null) { js.append(getSetFieldValue(0, "openAmount", getBaseCrossTotalFmtStr(_parameter, calculators))); } if (Sales.PERCEPTIONCERTIFICATEACTIVATE.get()) { js.append(getSetFieldValue(0, "perceptionTotal", getPerceptionTotalFmtStr(_parameter, calculators))); } } js.append(getSetFieldValue(0, "rateCurrencyData", getRateUIFrmt(_parameter, rateInfos[1]))) .append(getSetFieldValue(0, "rate", getRateUIFrmt(_parameter, rateInfos[1]))) .append(getSetFieldValue(0, "rate" + RateUI.INVERTEDSUFFIX, Boolean.toString(rateInfos[1].isInvert()))) .append(getSetFieldValue(0, "rateCurrencyId_eFapsPrevious", _parameter.getParameterValue("rateCurrencyId"))) .append(addAdditionalFields4CurrencyUpdate(_parameter, calculators)); 
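            // At this point js bundles one value setter per affected field (prices per
            // position, totals, the rate fields and the rateCurrencyId_eFapsPrevious
            // tracker) together with the dojo snippet from getTaxesScript(); the whole
            // string is returned below under the FIELDUPDATE_JAVASCRIPT key so the
            // client can refresh the form in place.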
map.put(EFapsKey.FIELDUPDATE_JAVASCRIPT.getKey(), js.toString());
            list.add(map);
        }
        final Return retVal = new Return();
        retVal.put(ReturnValues.VALUES, list);
        return retVal;
    }

    /**
     * Method to set the openAmount into the session cache.
     *
     * @param _parameter Parameter as passed from the eFaps API
     * @param _calcList List of <code>Calculator</code>
     * @throws EFapsException on error
     */
    protected void setOpenAmount(final Parameter _parameter,
                                 final List<Calculator> _calcList)
        throws EFapsException
    {
        final Instance curInst = getCurrencyFromUI(_parameter);
        final OpenAmount openAmount = new Payment().new OpenAmount(new CurrencyInst(curInst),
                        getCrossTotal(_parameter, _calcList), new PriceUtil().getDateFromParameter(_parameter));
        Context.getThreadContext().setSessionAttribute(Payment_Base.OPENAMOUNT_SESSIONKEY, openAmount);
    }

    /**
     * Method to set additional fields for the currency update method.
     *
     * @param _parameter Parameter as passed by the eFaps API
     * @param _calculators list of calculators
     * @return new StringBuilder with the additional fields.
     * @throws EFapsException on error
     */
    protected StringBuilder addAdditionalFields4CurrencyUpdate(final Parameter _parameter,
                                                               final List<Calculator> _calculators)
        throws EFapsException
    {
        // to be used by implementations
        return new StringBuilder();
    }

    /**
     * @param _parameter Parameter as passed by the eFaps API
     * @param _calculators list of calculators
     * @param _values values to be added to
     * @param _noEscape no escape fields
     * @throws EFapsException on error
     */
    protected void add2SetValuesString4Postions4CurrencyUpdate(final Parameter _parameter,
                                                               final List<Calculator> _calculators,
                                                               final Map<Integer, Map<String, Object>> _values,
                                                               final Set<String> _noEscape)
        throws EFapsException
    {
        // to be used by implementations
    }

    /**
     * Update the form after change of date.
     *
     * @param _parameter Parameter as passed by the eFaps API.
     * @return JavaScript for update.
     * @throws EFapsException on error.
     */
    public Return updateFields4Date(final Parameter _parameter)
        throws EFapsException
    {
        final List<Map<String, String>> list = new ArrayList<>();
        final Map<String, String> map = new HashMap<>();
        final Instance newCurrInst = getRateCurrencyInstance(_parameter, null);
        final String date = _parameter.getParameterValue("date_eFapsDate");
        final StringBuilder js = new StringBuilder();
        final RateInfo rateInfo = new Currency().evaluateRateInfo(_parameter, date, newCurrInst);
        js.append(getSetFieldValue(0, "rateCurrencyData", RateInfo.getRateUIFrmt(_parameter, rateInfo,
                        getTypeName4SysConf(_parameter))))
                        .append(getSetFieldValue(0, "rate", RateInfo.getRateUIFrmt(_parameter, rateInfo,
                                        getTypeName4SysConf(_parameter))))
                        .append(getSetFieldValue(0, "rate" + RateUI.INVERTEDSUFFIX,
                                        Boolean.toString(rateInfo.isInvert())));
        map.put(EFapsKey.FIELDUPDATE_JAVASCRIPT.getKey(), js.toString());
        new Channel().add2FieldUpdateMap4Condition(_parameter, map);
        list.add(map);
        final Return retVal = new Return();
        retVal.put(ReturnValues.VALUES, list);
        return retVal;
    }

    /**
     * Update the form after change of date.
(Seems to be unused) * * @param _parameter Parameter as passed by the eFaps API for esjp * @return javascript for update * @throws EFapsException on error */ public Return updateFields4RateCurrencyFromDate(final Parameter _parameter) throws EFapsException { final List<Map<String, String>> list = new ArrayList<>(); final Map<String, String> map = new HashMap<>(); final Instance newInst = getRateCurrencyInstance(_parameter, null); final Currency currency = new Currency(); final RateInfo rateInfo = currency.evaluateRateInfo(_parameter, _parameter.getParameterValue("date_eFapsDate"), newInst); final List<Calculator> calculators = analyseTable(_parameter, null); final StringBuilder js = new StringBuilder(); int i = 0; final Map<Integer, Map<String, Object>> values = new TreeMap<>(); for (final Calculator calculator : calculators) { final Map<String, Object> map2 = new HashMap<>(); if (!calculator.isEmpty()) { final QueryBuilder qlb = new QueryBuilder(CISales.PositionAbstract); qlb.addWhereAttrEqValue(CISales.PositionAbstract.Product, Instance.get(calculator.getOid())); qlb.addWhereAttrEqValue(CISales.PositionAbstract.DocumentAbstractLink, _parameter.getInstance()); final InstanceQuery query = qlb.getQuery(); query.execute(); if (!query.next()) { calculator.applyRate(newInst, RateInfo.getRate(_parameter, rateInfo, getTypeName4SysConf(_parameter))); } map2.put("netUnitPrice", calculator.getNetUnitPriceFmtStr()); map2.put("netUnitPrice4Read", calculator.getNetUnitPriceFmtStr()); map2.put("netPrice", calculator.getNetPriceFmtStr()); map2.put("discountNetUnitPrice", calculator.getDiscountNetUnitPriceFmtStr()); map2.put("crossPrice", calculator.getCrossPriceFmtStr()); values.put(i, map2); } i++; } final Set<String> noEscape = new HashSet<>(); add2SetValuesString4Postions4CurrencyUpdate(_parameter, calculators, values, noEscape); js.append(getSetFieldValuesScript(_parameter, values.values(), noEscape)); if (calculators.size() > 0) { js.append(getSetFieldValue(0, "crossTotal", getCrossTotalFmtStr(_parameter, calculators))) .append(getSetFieldValue(0, "netTotal", getNetTotalFmtStr(_parameter, calculators))) .append(addFields4RateCurrency(_parameter, calculators)); if (_parameter.getParameterValue("openAmount") != null) { js.append(getSetFieldValue(0, "openAmount", getBaseCrossTotalFmtStr(_parameter, calculators))); } } js.append(getSetFieldValue(0, "rateCurrencyData", rateInfo.getRateUIFrmt(null))) .append(getSetFieldValue(0, "rate", rateInfo.getRateUIFrmt(null))) .append(getSetFieldValue(0, "rate" + RateUI.INVERTEDSUFFIX, Boolean.toString(rateInfo.isInvert()))) .append(addAdditionalFields4CurrencyUpdate(_parameter, calculators)); map.put(EFapsKey.FIELDUPDATE_JAVASCRIPT.getKey(), js.toString()); list.add(map); final Return retVal = new Return(); retVal.put(ReturnValues.VALUES, list); return retVal; } /** * Internal Method to create the positions for this Document. * * @param _parameter Parameter as passed from eFaps API. 
     * @param _createdDoc created Document
     * @throws EFapsException on error
     */
    protected void createPositions(final Parameter _parameter,
                                   final CreatedDoc _createdDoc)
        throws EFapsException
    {
        final Instance baseCurrInst = Currency.getBaseCurrency();
        final Instance rateCurrInst = getRateCurrencyInstance(_parameter, _createdDoc);
        final Object[] rateObj = getRateObject(_parameter);
        final BigDecimal rate = ((BigDecimal) rateObj[0]).divide((BigDecimal) rateObj[1], 12,
                        RoundingMode.HALF_UP);

        @SuppressWarnings("unchecked")
        final List<Calculator> calcList = (List<Calculator>) _createdDoc.getValue(
                        AbstractDocument_Base.CALCULATORS_VALUE);

        final DecimalFormat totalFrmt = NumberFormatter.get().getFrmt4Total(getType4SysConf(_parameter));
        final int scale = totalFrmt.getMaximumFractionDigits();
        final DecimalFormat unitFrmt = NumberFormatter.get().getFrmt4UnitPrice(getType4SysConf(_parameter));
        final int uScale = unitFrmt.getMaximumFractionDigits();

        Integer idx = 0;
        for (final Calculator calc : calcList) {
            if (!calc.isEmpty()) {
                final Insert posIns = new Insert(getType4PositionCreate(_parameter));
                posIns.add(CISales.PositionAbstract.PositionNumber, idx + 1);
                posIns.add(CISales.PositionAbstract.DocumentAbstractLink, _createdDoc.getInstance().getId());

                final String[] product = _parameter.getParameterValues(getFieldName4Attribute(_parameter,
                                CISales.PositionAbstract.Product.name));
                if (product != null && product.length > idx) {
                    final Instance inst = Instance.get(product[idx]);
                    if (inst.isValid()) {
                        posIns.add(CISales.PositionAbstract.Product, inst.getId());
                    }
                }
                final String[] productDesc = _parameter.getParameterValues(getFieldName4Attribute(_parameter,
                                CISales.PositionAbstract.ProductDesc.name));
                if (productDesc != null && productDesc.length > idx) {
                    posIns.add(CISales.PositionAbstract.ProductDesc, productDesc[idx]);
                }
                final String[] remarks = _parameter.getParameterValues(getFieldName4Attribute(_parameter,
                                CISales.PositionAbstract.Remark.name));
                if (remarks != null && remarks.length > idx) {
                    posIns.add(CISales.PositionAbstract.Remark, remarks[idx]);
                }
                final String[] uoM = _parameter.getParameterValues(getFieldName4Attribute(_parameter,
                                CISales.PositionAbstract.UoM.name));
                if (uoM != null && uoM.length > idx) {
                    posIns.add(CISales.PositionAbstract.UoM, uoM[idx]);
                }
                posIns.add(CISales.PositionSumAbstract.Quantity, calc.getQuantity());
                posIns.add(CISales.PositionSumAbstract.CrossUnitPrice, calc.getCrossUnitPrice()
                                .divide(rate, RoundingMode.HALF_UP).setScale(uScale, RoundingMode.HALF_UP));
                posIns.add(CISales.PositionSumAbstract.NetUnitPrice, calc.getNetUnitPrice()
                                .divide(rate, RoundingMode.HALF_UP).setScale(uScale, RoundingMode.HALF_UP));
                posIns.add(CISales.PositionSumAbstract.CrossPrice, calc.getCrossPrice()
                                .divide(rate, RoundingMode.HALF_UP).setScale(scale, RoundingMode.HALF_UP));
                posIns.add(CISales.PositionSumAbstract.NetPrice, calc.getNetPrice()
                                .divide(rate, RoundingMode.HALF_UP).setScale(scale, RoundingMode.HALF_UP));
                posIns.add(CISales.PositionSumAbstract.Tax, calc.getTaxCatId());
                final Taxes taxes = calc.getTaxes(baseCurrInst);
                taxes.getEntries().forEach(entry -> {
                    entry.setAmount(entry.getAmount().divide(rate, RoundingMode.HALF_UP));
                    entry.setBase(entry.getBase().divide(rate, RoundingMode.HALF_UP));
                });
                posIns.add(CISales.PositionSumAbstract.Taxes, taxes);
                posIns.add(CISales.PositionSumAbstract.Discount, calc.getDiscount());
                posIns.add(CISales.PositionSumAbstract.DiscountNetUnitPrice, calc.getDiscountNetUnitPrice()
                                .divide(rate, RoundingMode.HALF_UP).setScale(uScale, RoundingMode.HALF_UP));
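                // Each position persists two parallel sets of figures: the Rate*
                // columns below keep the amounts exactly as entered in the document's
                // rate currency, while the unprefixed columns above hold the same
                // amounts divided by the exchange rate, i.e. in the base currency.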
posIns.add(CISales.PositionSumAbstract.CurrencyId, baseCurrInst);
                posIns.add(CISales.PositionSumAbstract.Rate, rateObj);
                posIns.add(CISales.PositionSumAbstract.RateCurrencyId, rateCurrInst);
                posIns.add(CISales.PositionSumAbstract.RateNetUnitPrice, calc.getNetUnitPrice()
                                .setScale(uScale, RoundingMode.HALF_UP));
                posIns.add(CISales.PositionSumAbstract.RateCrossUnitPrice, calc.getCrossUnitPrice()
                                .setScale(uScale, RoundingMode.HALF_UP));
                posIns.add(CISales.PositionSumAbstract.RateDiscountNetUnitPrice, calc.getDiscountNetUnitPrice()
                                .setScale(uScale, RoundingMode.HALF_UP));
                posIns.add(CISales.PositionSumAbstract.RateNetPrice,
                                calc.getNetPrice().setScale(scale, RoundingMode.HALF_UP));
                posIns.add(CISales.PositionSumAbstract.RateCrossPrice,
                                calc.getCrossPrice().setScale(scale, RoundingMode.HALF_UP));
                posIns.add(CISales.PositionSumAbstract.RateTaxes, calc.getTaxes(rateCurrInst));
                add2PositionInsert(_parameter, calc, posIns, idx);
                posIns.execute();
                _createdDoc.addPosition(posIns.getInstance());
            }
            idx++;
        }
    }

    /**
     * @param _parameter Parameter as passed by the eFaps API
     * @param _calc Calculator
     * @param _posIns insert
     * @param _idx index
     * @throws EFapsException on error
     */
    protected void add2PositionInsert(final Parameter _parameter,
                                      final Calculator _calc,
                                      final Insert _posIns,
                                      final int _idx)
        throws EFapsException
    {
        // to be implemented by subclasses
    }

    /**
     * Update the positions of a Document.
     *
     * @param _parameter Parameter as passed by the eFaps API
     * @param _editDoc EditedDoc the positions that will be updated belong to
     * @throws EFapsException on error
     */
    protected void updatePositions(final Parameter _parameter,
                                   final EditedDoc _editDoc)
        throws EFapsException
    {
        final Instance baseCurrInst = Currency.getBaseCurrency();
        final Instance rateCurrInst = getRateCurrencyInstance(_parameter, _editDoc);
        final Object[] rateObj = getRateObject(_parameter);
        final BigDecimal rate = ((BigDecimal) rateObj[0]).divide((BigDecimal) rateObj[1], 12,
                        RoundingMode.HALF_UP);

        @SuppressWarnings("unchecked")
        final List<Calculator> calcList = (List<Calculator>) _editDoc.getValue(
                        AbstractDocument_Base.CALCULATORS_VALUE);
        @SuppressWarnings("unchecked")
        final Map<String, String> oidMap = (Map<String, String>) _parameter.get(ParameterValues.OIDMAP4UI);
        final String[] rowKeys = _parameter.getParameterValues(EFapsKey.TABLEROW_NAME.getKey());

        final DecimalFormat totalFrmt = NumberFormatter.get().getFrmt4Total(getType4SysConf(_parameter));
        final int scale = totalFrmt.getMaximumFractionDigits();
        final DecimalFormat unitFrmt = NumberFormatter.get().getFrmt4UnitPrice(getType4SysConf(_parameter));
        final int uScale = unitFrmt.getMaximumFractionDigits();

        final Iterator<Calculator> iter = calcList.iterator();
        for (int i = 0; i < rowKeys.length; i++) {
            final Calculator calc = iter.next();
            final Instance inst = Instance.get(oidMap.get(rowKeys[i]));
            if (!calc.isEmpty()) {
                final Update update;
                if (inst.isValid()) {
                    update = new Update(inst);
                } else {
                    update = new Insert(getType4PositionUpdate(_parameter));
                }
                update.add(CISales.PositionAbstract.DocumentAbstractLink, _editDoc.getInstance());

                final String[] product = _parameter.getParameterValues(getFieldName4Attribute(_parameter,
                                CISales.PositionAbstract.Product.name));
                if (product != null && product.length > i) {
                    final Instance prodInst = Instance.get(product[i]);
                    if (prodInst.isValid()) {
                        update.add(CISales.PositionAbstract.Product, prodInst);
                    }
                }
                final String[] productDesc = _parameter.getParameterValues(getFieldName4Attribute(_parameter,
                                CISales.PositionAbstract.ProductDesc.name));
                if
(productDesc != null && productDesc.length > i) { update.add(CISales.PositionAbstract.ProductDesc, productDesc[i]); } final String[] remarks = _parameter.getParameterValues(getFieldName4Attribute(_parameter, CISales.PositionAbstract.Remark.name)); if (remarks != null && remarks.length > i) { update.add(CISales.PositionAbstract.Remark, remarks[i]); } final String[] uoM = _parameter.getParameterValues(getFieldName4Attribute(_parameter, CISales.PositionAbstract.UoM.name)); if (uoM != null && uoM.length > i) { update.add(CISales.PositionAbstract.UoM, uoM[i]); } update.add(CISales.PositionSumAbstract.Quantity, calc.getQuantity()); update.add(CISales.PositionSumAbstract.CrossUnitPrice, calc.getCrossUnitPrice() .divide(rate, RoundingMode.HALF_UP).setScale(uScale, RoundingMode.HALF_UP)); update.add(CISales.PositionSumAbstract.NetUnitPrice, calc.getNetUnitPrice() .divide(rate, RoundingMode.HALF_UP).setScale(uScale, RoundingMode.HALF_UP)); update.add(CISales.PositionSumAbstract.CrossPrice, calc.getCrossPrice() .divide(rate, RoundingMode.HALF_UP).setScale(scale, RoundingMode.HALF_UP)); update.add(CISales.PositionSumAbstract.NetPrice, calc.getNetPrice() .divide(rate, RoundingMode.HALF_UP).setScale(scale, RoundingMode.HALF_UP)); update.add(CISales.PositionSumAbstract.Tax, calc.getTaxCatId()); update.add(CISales.PositionSumAbstract.Discount, calc.getDiscountStr()); update.add(CISales.PositionSumAbstract.DiscountNetUnitPrice, calc.getDiscountNetUnitPrice() .divide(rate, RoundingMode.HALF_UP).setScale(uScale, RoundingMode.HALF_UP)); update.add(CISales.PositionSumAbstract.CurrencyId, baseCurrInst); final Taxes taxes = calc.getTaxes(baseCurrInst); taxes.getEntries().forEach(entry -> { entry.setAmount(entry.getAmount().divide(rate, RoundingMode.HALF_UP)); entry.setBase(entry.getBase().divide(rate, RoundingMode.HALF_UP)); }); update.add(CISales.PositionSumAbstract.Taxes, taxes); update.add(CISales.PositionSumAbstract.Rate, rateObj); update.add(CISales.PositionSumAbstract.RateCurrencyId, rateCurrInst); update.add(CISales.PositionSumAbstract.RateNetUnitPrice, calc.getNetUnitPrice() .setScale(uScale, RoundingMode.HALF_UP)); update.add(CISales.PositionSumAbstract.RateCrossUnitPrice, calc.getCrossUnitPrice() .setScale(uScale, RoundingMode.HALF_UP)); update.add(CISales.PositionSumAbstract.RateDiscountNetUnitPrice, calc.getDiscountNetUnitPrice() .setScale(uScale, RoundingMode.HALF_UP)); update.add(CISales.PositionSumAbstract.RateNetPrice, calc.getNetPrice().setScale(scale, RoundingMode.HALF_UP)); update.add(CISales.PositionSumAbstract.RateCrossPrice, calc.getCrossPrice().setScale(scale, RoundingMode.HALF_UP)); update.add(CISales.PositionSumAbstract.RateTaxes, calc.getTaxes(rateCurrInst)); add2PositionUpdate(_parameter, calc, update, i); update.execute(); _editDoc.addPosition(update.getInstance()); } } deletePosition4Update(_parameter, _editDoc); } /** * @param _parameter Parameter as passed from the eFaps API * @return Return containing list * @throws EFapsException on error */ public Return executeCalculatorOnScript(final Parameter _parameter) throws EFapsException { final Instance derivedInst = Instance.get(_parameter.getParameterValue("derived")); Integer row4priceFromDB = null; if (derivedInst != null && derivedInst.isValid() && derivedInst.getType().isKindOf(CISales.DocumentStockAbstract)) { row4priceFromDB = -1; } final Return retVal = new Return(); final List<Map<String, Object>> list = new ArrayList<>(); final List<Calculator> calcList = analyseTable(_parameter, row4priceFromDB); int i = 0; for (final 
Calculator cal : calcList) {
            // always add the first and then only the ones visible in the user interface
            if (i == 0 || !cal.isBackground()) {
                final Map<String, Object> map = new HashMap<>();
                _parameter.getParameters().put("eFapsRowSelectedRow", new String[] { "" + i });
                add2Map4UpdateField(_parameter, map, calcList, cal, i == 0);
                list.add(map);
            }
            i++;
        }
        retVal.put(ReturnValues.VALUES, list);
        return retVal;
    }

    /**
     * Recalculate the rate values by instantiating calculators
     * and simulating the interaction with the form.
     *
     * @param _parameter Parameter as passed from the eFaps API
     * @return map list for update event
     * @throws EFapsException on error
     */
    public Return recalculateRate(final Parameter _parameter)
        throws EFapsException
    {
        final List<Instance> instances;
        final Instance instance = _parameter.getInstance();
        if (InstanceUtils.isKindOf(instance, CISales.DocumentSumAbstract)) {
            instances = new ArrayList<>();
            instances.add(instance);
        } else {
            final List<Instance> selInsts = getInstances(_parameter, "", true);
            if (containsProperty(_parameter, "Select4Instance")) {
                instances = new ArrayList<>();
                final String select = getProperty(_parameter, "Select4Instance");
                final MultiPrintQuery multi = new MultiPrintQuery(selInsts);
                multi.addSelect(select);
                multi.execute();
                while (multi.next()) {
                    instances.add(multi.getSelect(select));
                }
            } else {
                instances = selInsts;
            }
        }
        for (final Instance docInst : instances) {
            if (InstanceUtils.isKindOf(docInst, CISales.DocumentSumAbstract)) {
                final PrintQuery print = new PrintQuery(docInst);
                final SelectBuilder selRateCurInst = SelectBuilder.get().linkto(
                                CISales.DocumentSumAbstract.RateCurrencyId).instance();
                print.addSelect(selRateCurInst);
                print.addAttribute(CISales.DocumentSumAbstract.Date, CISales.DocumentSumAbstract.Rate);
                print.execute();
                final Instance baseCurrInst = Currency.getBaseCurrency();
                final Instance rateCurrInst = print.<Instance>getSelect(selRateCurInst);
                if (!baseCurrInst.equals(rateCurrInst)) {
                    final String dateStr = _parameter.getParameterValue(
                                    CIFormSales.Sales_DocumentSum_RecalculateForm.date.name + "_eFapsDate");
                    final DateTime date;
                    if (dateStr == null) {
                        date = print.getAttribute(CISales.DocumentSumAbstract.Date);
                    } else {
                        date = DateUtil.getDateFromParameter(dateStr);
                    }
                    final Currency currency = new Currency();
                    final RateInfo rateInfo = currency.evaluateRateInfo(_parameter, date, rateCurrInst);
                    final BigDecimal rate = RateInfo.getRate(_parameter, rateInfo, docInst.getType().getName());
                    final Object[] rateObj = RateInfo.getRateObject(_parameter, rateInfo,
                                    docInst.getType().getName());
                    final Object[] currentRateObj = print.getAttribute(CISales.DocumentSumAbstract.Rate);
                    if (((BigDecimal) currentRateObj[0]).compareTo((BigDecimal) rateObj[0]) != 0
                                    || ((BigDecimal) currentRateObj[1]).compareTo((BigDecimal) rateObj[1]) != 0) {
                        final DecimalFormat frmt = NumberFormatter.get().getFormatter();
                        final DecimalFormat totalFrmt = NumberFormatter.get().getFrmt4Total(getType4SysConf(
                                        _parameter));
                        final int scale = totalFrmt.getMaximumFractionDigits();
                        final DecimalFormat unitFrmt = NumberFormatter.get().getFrmt4UnitPrice(getType4SysConf(
                                        _parameter));
                        final int uScale = unitFrmt.getMaximumFractionDigits();
                        final List<Calculator> calcList = new ArrayList<>();

                        final QueryBuilder queryBldr = new QueryBuilder(CISales.PositionSumAbstract);
                        queryBldr.addWhereAttrEqValue(CISales.PositionSumAbstract.DocumentAbstractLink, docInst);
                        final MultiPrintQuery multi = queryBldr.getPrint();
                        final SelectBuilder selProdOid =
SelectBuilder.get().linkto(CISales.PositionSumAbstract.Product).oid();
                        multi.addSelect(selProdOid);
                        multi.addAttribute(CISales.PositionSumAbstract.Quantity,
                                        CISales.PositionSumAbstract.RateNetUnitPrice,
                                        CISales.PositionSumAbstract.Discount);
                        multi.execute();
                        while (multi.next()) {
                            // read the rate values
                            final BigDecimal quantity = multi.<BigDecimal>getAttribute(
                                            CISales.PositionSumAbstract.Quantity);
                            final BigDecimal unitPrice = multi.<BigDecimal>getAttribute(
                                            CISales.PositionSumAbstract.RateNetUnitPrice);
                            final BigDecimal discount = multi.<BigDecimal>getAttribute(
                                            CISales.PositionSumAbstract.Discount);
                            final String prodOid = multi.<String>getSelect(selProdOid);
                            final Calculator calc = getCalculator(_parameter, null, prodOid,
                                            frmt.format(quantity), unitFrmt.format(unitPrice),
                                            frmt.format(discount), false, 0);
                            calcList.add(calc);
                            // update the base values for the position
                            final Update update = new Update(multi.getCurrentInstance());
                            update.add(CISales.PositionSumAbstract.CrossUnitPrice, calc.getCrossUnitPrice()
                                            .divide(rate, RoundingMode.HALF_UP)
                                            .setScale(uScale, RoundingMode.HALF_UP));
                            update.add(CISales.PositionSumAbstract.NetUnitPrice, calc.getNetUnitPrice()
                                            .divide(rate, RoundingMode.HALF_UP)
                                            .setScale(uScale, RoundingMode.HALF_UP));
                            update.add(CISales.PositionSumAbstract.CrossPrice, calc.getCrossPrice()
                                            .divide(rate, RoundingMode.HALF_UP)
                                            .setScale(scale, RoundingMode.HALF_UP));
                            update.add(CISales.PositionSumAbstract.NetPrice, calc.getNetPrice()
                                            .divide(rate, RoundingMode.HALF_UP)
                                            .setScale(scale, RoundingMode.HALF_UP));
                            update.add(CISales.PositionSumAbstract.Tax, calc.getTaxCatId());
                            update.add(CISales.PositionSumAbstract.Discount, calc.getDiscount());
                            update.add(CISales.PositionSumAbstract.DiscountNetUnitPrice,
                                            calc.getDiscountNetUnitPrice()
                                                            .divide(rate, RoundingMode.HALF_UP)
                                                            .setScale(uScale, RoundingMode.HALF_UP));
                            update.add(CISales.PositionSumAbstract.CurrencyId, baseCurrInst.getId());
                            update.add(CISales.PositionSumAbstract.Rate, rateObj);
                            update.execute();
                        }
                        // update the base values for the document
                        final Update update = new Update(docInst);
                        final BigDecimal netTotal = getNetTotal(_parameter, calcList).divide(rate,
                                        RoundingMode.HALF_UP).setScale(scale, RoundingMode.HALF_UP);
                        update.add(CISales.DocumentSumAbstract.NetTotal, netTotal);
                        update.add(CISales.DocumentSumAbstract.Taxes,
                                        getTaxes(_parameter, calcList, rate, baseCurrInst));
                        final BigDecimal crossTotal = getCrossTotal(_parameter, calcList).divide(rate,
                                        RoundingMode.HALF_UP).setScale(scale, RoundingMode.HALF_UP);
                        update.add(CISales.DocumentSumAbstract.CrossTotal, crossTotal);
                        update.add(CISales.DocumentSumAbstract.Rate, rateObj);
                        update.execute();
                    }
                }
            }
        }
        return new Return();
    }

    /**
     * Used by a FieldUpdate event in the form for recalculating a
     * DocumentSum with a rate.
     *
     * @param _parameter Parameter as passed from the eFaps API
     * @return map list for update event
     * @throws EFapsException on error
     */
    public Return update4DateOnRecalculate(final Parameter _parameter)
        throws EFapsException
    {
        final Return retVal = new Return();
        final Instance docInst = _parameter.getInstance();
        if (docInst.getType().isKindOf(CISales.DocumentSumAbstract.getType())) {
            final PrintQuery print = new PrintQuery(docInst);
            print.addAttribute(CISales.DocumentSumAbstract.RateCurrencyId);
            print.execute();
            final CurrencyInst curInst = CurrencyInst
                            .get(print.<Long>getAttribute(CISales.DocumentSumAbstract.RateCurrencyId));
            final RateInfo rateInfo = new Currency().evaluateRateInfo(_parameter,
                            _parameter.getParameterValue("date_eFapsDate"), curInst.getInstance());
            final BigDecimal rate = RateInfo.getRateUI(_parameter, rateInfo, docInst.getType().getName());
            final DecimalFormat formatter = (DecimalFormat) NumberFormat.getInstance(
                            Context.getThreadContext().getLocale());
            formatter.applyPattern("#,##0.############");
            formatter.setRoundingMode(RoundingMode.HALF_UP);
            final String rateStr = formatter.format(rate);
            final List<Map<String, String>> list = new ArrayList<>();
            final Map<String, String> map = new HashMap<>();
            map.put("rate", rateStr);
            list.add(map);
            retVal.put(ReturnValues.VALUES, list);
        }
        return retVal;
    }

    /**
     * @param _docInst Instance of the document
     * @param _rateValue old Rate
     * @param _newRate new Rate
     * @return new Value
     * @throws EFapsException on error
     */
    protected BigDecimal getNewValue(final Instance _docInst,
                                     final BigDecimal _rateValue,
                                     final BigDecimal _newRate)
        throws EFapsException
    {
        BigDecimal ret = BigDecimal.ZERO;
        if (_rateValue.compareTo(BigDecimal.ZERO) != 0) {
            ret = _rateValue.divide(_newRate, RoundingMode.HALF_UP)
                            .setScale(isDecimal4Doc(_docInst), RoundingMode.HALF_UP);
        }
        return ret;
    }

    /**
     * Get the number of decimals to use for a document instance from the
     * system configuration.
     *
     * @param _docInst instance of the document.
     * @return number of decimals.
     * @throws EFapsException on error.
     */
    protected int isDecimal4Doc(final Instance _docInst)
        throws EFapsException
    {
        int ret = 2;
        final SystemConfiguration config = SystemConfiguration.get(
                        UUID.fromString("c9a1cbc3-fd35-4463-80d2-412422a3802f"));
        final Properties props = config.getAttributeValueAsProperties("ActivateLongDecimal");
        final String type = _docInst.getType().getName();
        if (props.containsKey(type) && Integer.valueOf(props.getProperty(type)) != ret) {
            ret = Integer.valueOf(props.getProperty(type));
        }
        return ret;
    }

    /**
     * @param _parameter Parameter as passed by the eFaps API
     * @param _calcList List of calculators
     * @param _currencyInst instance of the current currency
     * @return Taxes object
     * @throws EFapsException on error
     */
    public Taxes getRateTaxes(final Parameter _parameter,
                              final List<Calculator> _calcList,
                              final Instance _currencyInst)
        throws EFapsException
    {
        // aggregate the tax amounts and bases of all positions per tax
        final Map<Tax, TaxAmount> values = new HashMap<>();
        for (final Calculator calc : _calcList) {
            if (!calc.isWithoutTax()) {
                for (final TaxAmount taxAmount : calc.getTaxesAmounts()) {
                    if (!values.containsKey(taxAmount.getTax())) {
                        values.put(taxAmount.getTax(), new TaxAmount().setTax(taxAmount.getTax()));
                    }
                    values.get(taxAmount.getTax())
                                    .addAmount(taxAmount.getAmount())
                                    .addBase(taxAmount.getBase());
                }
            }
        }
        final Taxes ret = new Taxes();
        if (!_calcList.isEmpty()) {
            final Calculator calc = _calcList.iterator().next();
            UUID currencyUUID = null;
            if (_currencyInst != null) {
                final CurrencyInst curInst = CurrencyInst.get(_currencyInst);
                currencyUUID = curInst.getUUID();
            }
            for (final TaxAmount taxAmount : values.values()) {
                final TaxEntry taxentry = new TaxEntry();
                taxentry.setAmount(taxAmount.getAmount());
                taxentry.setBase(taxAmount.getBase());
                taxentry.setUUID(taxAmount.getTax().getUUID());
                taxentry.setCatUUID(taxAmount.getTax().getTaxCat().getUuid());
                taxentry.setCurrencyUUID(currencyUUID);
                taxentry.setDate(calc.getDate());
                ret.getEntries().add(taxentry);
            }
        }
        return ret;
    }

    /**
     * @param _parameter Parameter as passed by the eFaps API
     * @param _calcList List of calculators
     * @param _rate rate amount
     * @param _baseCurrInst instance of the base currency
     * @return Taxes object
     * @throws EFapsException on error
     */
    public Taxes getTaxes(final Parameter _parameter,
                          final List<Calculator> _calcList,
                          final BigDecimal _rate,
                          final Instance _baseCurrInst)
        throws EFapsException
    {
        final Taxes ret = getRateTaxes(_parameter, _calcList, _baseCurrInst);
        for (final TaxEntry entry : ret.getEntries()) {
            entry.setAmount(entry.getAmount().divide(_rate, RoundingMode.HALF_UP));
            entry.setBase(entry.getBase().divide(_rate, RoundingMode.HALF_UP));
        }
        return ret;
    }

    /**
     * Gets the calculators for a document.
 *
 * @param _parameter Parameter as passed by the eFaps API
 * @param _docInst Instance of a Document the List of Calculator is wanted for
 * @param _excludes Collection of Instances no Calculator is wanted for
 * @return List of Calculator
 * @throws EFapsException on error
 */
protected List<Calculator> getCalculators4Doc(final Parameter _parameter,
                                              final Instance _docInst,
                                              final Collection<Instance> _excludes)
    throws EFapsException
{
    final List<Calculator> ret = new ArrayList<>();
    final QueryBuilder queryBldr = new QueryBuilder(CISales.PositionSumAbstract);
    queryBldr.addWhereAttrEqValue(CISales.PositionSumAbstract.DocumentAbstractLink, _docInst);
    queryBldr.addOrderByAttributeAsc(CISales.PositionSumAbstract.PositionNumber);
    final MultiPrintQuery multi = queryBldr.getPrint();
    multi.setEnforceSorted(true);
    final SelectBuilder selProdInst = SelectBuilder.get()
                    .linkto(CISales.PositionSumAbstract.Product).instance();
    multi.addSelect(selProdInst);
    multi.addAttribute(CISales.PositionSumAbstract.Quantity,
                    CISales.PositionSumAbstract.Discount,
                    CISales.PositionSumAbstract.RateNetUnitPrice,
                    CISales.PositionSumAbstract.RateCrossUnitPrice,
                    CISales.PositionSumAbstract.PositionNumber);
    multi.execute();
    while (multi.next()) {
        if (_excludes == null || !_excludes.contains(multi.getCurrentInstance())) {
            final BigDecimal quantity = multi.<BigDecimal>getAttribute(CISales.PositionSumAbstract.Quantity);
            final BigDecimal discount = multi.<BigDecimal>getAttribute(CISales.PositionSumAbstract.Discount);
            final BigDecimal unitPrice;
            if (Calculator.priceIsNet(_parameter, this)) {
                unitPrice = multi.<BigDecimal>getAttribute(CISales.PositionSumAbstract.RateNetUnitPrice);
            } else {
                unitPrice = multi.<BigDecimal>getAttribute(CISales.PositionSumAbstract.RateCrossUnitPrice);
            }
            final Integer idx = multi.<Integer>getAttribute(CISales.PositionSumAbstract.PositionNumber);
            final Instance prodInst = multi.<Instance>getSelect(selProdInst);
            ret.add(getCalculator(_parameter, null, prodInst, quantity, unitPrice, discount, false, idx));
        }
    }
    return ret;
}

/**
 * Method to get a formatted String representation of the cross total for a
 * list of Calculators.
 *
 * @param _parameter Parameter as passed by the eFaps API
 * @param _calcList list of Calculator the cross total is wanted for
 * @return formatted String representation of the cross total
 * @throws EFapsException on error
 */
protected String getCrossTotalFmtStr(final Parameter _parameter,
                                     final List<Calculator> _calcList)
    throws EFapsException
{
    return NumberFormatter.get().getFrmt4Total(getType4SysConf(_parameter))
                    .format(getCrossTotal(_parameter, _calcList));
}

/**
 * Method to get a String representation of the cross total for a list of Calculators.
 *
 * @param _parameter Parameter as passed by the eFaps API
 * @param _calcList list of Calculator the cross total is wanted for
 * @return String representation of the cross total
 * @throws EFapsException on error
 */
protected String getCrossTotalStr(final Parameter _parameter,
                                  final List<Calculator> _calcList)
    throws EFapsException
{
    return getCrossTotal(_parameter, _calcList).toString();
}

/**
 * Method to get the cross total for a list of Calculators.
 *
 * @param _parameter Parameter as passed by the eFaps API
 * @param _calcList list of Calculator the cross total is wanted for
 * @return the cross total
 * @throws EFapsException on error
 */
protected BigDecimal getCrossTotal(final Parameter _parameter,
                                   final List<Calculator> _calcList)
    throws EFapsException
{
    return Calculator.getCrossTotal(_parameter, _calcList);
}

/**
 * Method to get a formatted String representation of the net total for a
 * list of Calculators.
 *
 * @param _parameter Parameter as passed by the eFaps API
 * @param _calcList list of Calculator the net total is wanted for
 * @return formatted String representation of the net total
 * @throws EFapsException on error
 */
protected String getNetTotalFmtStr(final Parameter _parameter,
                                   final List<Calculator> _calcList)
    throws EFapsException
{
    return NumberFormatter.get().getFrmt4Total(getType4SysConf(_parameter))
                    .format(getNetTotal(_parameter, _calcList));
}

/**
 * Method to get a String representation of the net total for a list of Calculators.
 *
 * @param _parameter Parameter as passed by the eFaps API
 * @param _calcList list of Calculator the net total is wanted for
 * @return String representation of the net total
 * @throws EFapsException on error
 */
protected String getNetTotalStr(final Parameter _parameter,
                                final List<Calculator> _calcList)
    throws EFapsException
{
    return getNetTotal(_parameter, _calcList).toString();
}

/**
 * Method to get the net total for a list of Calculators.
 *
 * @param _parameter Parameter as passed by the eFaps API
 * @param _calcList list of Calculator the net total is wanted for
 * @return the net total
 * @throws EFapsException on error
 */
protected BigDecimal getNetTotal(final Parameter _parameter,
                                 final List<Calculator> _calcList)
    throws EFapsException
{
    return Calculator.getNetTotal(_parameter, _calcList);
}

/**
 * Method to get the base cross total for a list of Calculators.
 *
 * @param _parameter Parameter as passed by the eFaps API
 * @param _calcList list of Calculator the base cross total is wanted for
 * @return the base cross total
 * @throws EFapsException on error
 */
protected BigDecimal getBaseCrossTotal(final Parameter _parameter,
                                       final List<Calculator> _calcList)
    throws EFapsException
{
    return Calculator.getBaseCrossTotal(_parameter, _calcList);
}

/**
 * Method to get a formatted String representation of the base cross total
 * for a list of Calculators.
 *
 * @param _parameter Parameter as passed by the eFaps API
 * @param _calcList list of Calculator the base cross total is wanted for
 * @return formatted String representation of the base cross total
 * @throws EFapsException on error
 */
protected String getBaseCrossTotalFmtStr(final Parameter _parameter,
                                         final List<Calculator> _calcList)
    throws EFapsException
{
    return NumberFormatter.get().getFrmt4Total(getType4SysConf(_parameter))
                    .format(getBaseCrossTotal(_parameter, _calcList));
}

/**
 * Method to get a String representation of the base cross total for a list of Calculators.
 *
 * @param _parameter Parameter as passed by the eFaps API
 * @param _calcList list of Calculator the base cross total is wanted for
 * @return String representation of the base cross total
 * @throws EFapsException on error
 */
protected String getBaseCrossTotalStr(final Parameter _parameter,
                                      final List<Calculator> _calcList)
    throws EFapsException
{
    return getBaseCrossTotal(_parameter, _calcList).toString();
}

/**
 * Method to get a formatted String representation of the perception total
 * for a list of Calculators.
 *
 * @param _parameter Parameter as passed by the eFaps API
 * @param _calcList list of Calculator the perception total is wanted for
 * @return formatted String representation of the perception total
 * @throws EFapsException on error
 */
protected String getPerceptionTotalFmtStr(final Parameter _parameter,
                                          final List<Calculator> _calcList)
    throws EFapsException
{
    return NumberFormatter.get().getFrmt4Total(getType4SysConf(_parameter))
                    .format(getPerceptionTotal(_parameter, _calcList));
}

/**
 * Method to get a String representation of the perception total for a list of Calculators.
 *
 * @param _parameter Parameter as passed by the eFaps API
 * @param _calcList list of Calculator the perception total is wanted for
 * @return String representation of the perception total
 * @throws EFapsException on error
 */
protected String getPerceptionTotalStr(final Parameter _parameter,
                                       final List<Calculator> _calcList)
    throws EFapsException
{
    return getPerceptionTotal(_parameter, _calcList).toString();
}

/**
 * Method to get the perception total for a list of Calculators.
 *
 * @param _parameter Parameter as passed by the eFaps API
 * @param _calcList list of Calculator the perception total is wanted for
 * @return the perception total
 * @throws EFapsException on error
 */
protected BigDecimal getPerceptionTotal(final Parameter _parameter,
                                        final List<Calculator> _calcList)
    throws EFapsException
{
    return Calculator.getPerceptionTotal(_parameter, _calcList);
}

/**
 * Gets the drop-down with the available document types. Can be overridden
 * if a default value is required for a document.
 *
 * @param _parameter as passed from eFaps API
 * @return drop-down field value
 * @throws EFapsException on error
 */
public Return dropDown4DocumentType(final Parameter _parameter)
    throws EFapsException
{
    Return ret = new Return();
    final IUIValue uiValue = (IUIValue) _parameter.get(ParameterValues.UIOBJECT);
    if (uiValue.getField().isEditableDisplay((TargetMode) _parameter.get(ParameterValues.ACCESSMODE))) {
        final org.efaps.esjp.common.uiform.Field field = new org.efaps.esjp.common.uiform.Field()
        {
            @Override
            protected void add2QueryBuilder4List(final Parameter _parameter,
                                                 final QueryBuilder _queryBldr)
                throws EFapsException
            {
                // filter by the "Activation" properties of the event
                final Map<Integer, String> activations = analyseProperty(_parameter, "Activation");
                final List<DocTypeActivation> pactivt = new ArrayList<>();
                for (final String activation : activations.values()) {
                    pactivt.add(ERP.DocTypeActivation.valueOf(activation));
                }
                if (!pactivt.isEmpty()) {
                    _queryBldr.addWhereAttrEqValue(CIERP.DocumentType.Activation, pactivt.toArray());
                }
                // filter by the "Configuration" properties of the event
                final Map<Integer, String> configurations = analyseProperty(_parameter, "Configuration");
                final List<DocTypeConfiguration> configs = new ArrayList<>();
                for (final String configuration : configurations.values()) {
                    configs.add(ERP.DocTypeConfiguration.valueOf(configuration));
                }
                if (!configs.isEmpty()) {
                    _queryBldr.addWhereAttrEqValue(CIERP.DocumentType.Configuration, configs.toArray());
                }
            }
        };
        ret = field.getOptionListFieldValue(_parameter);
    }
    return ret;
}

/**
 * @param _parameter as passed from eFaps API.
 * @return Return for a search
 * @throws EFapsException on error.
 */
public Return search4DocumentType(final Parameter _parameter)
    throws EFapsException
{
    return new Search()
    {
        @Override
        protected void add2QueryBuilder(final Parameter _parameter,
                                        final QueryBuilder _queryBldr)
            throws EFapsException
        {
            final Map<?, ?> properties = (Map<?, ?>) _parameter.get(ParameterValues.PROPERTIES);
            final String typeStr = (String) properties.get("SelectType");
            final Type type = Type.get(typeStr);
            final QueryBuilder query = new QueryBuilder(type);
            final AttributeQuery attQueryType = query.getAttributeQuery(CIERP.DocumentTypeAbstract.ID);
            final QueryBuilder attrQueryBldr = new QueryBuilder(CISales.Document2DocumentType);
            attrQueryBldr.addWhereAttrInQuery(CISales.Document2DocumentType.DocumentTypeLink, attQueryType);
            final AttributeQuery attrQuery = attrQueryBldr
                            .getAttributeQuery(CISales.Document2DocumentType.DocumentLink);
            // exclude documents that are already connected to one of the document types
            _queryBldr.addWhereAttrNotInQuery(CISales.DocumentAbstract.ID, attrQuery);
        }
    } .execute(_parameter);
}

/**
 * @param _parameter Parameter as passed by the eFaps API
 * @return new empty Return
 * @throws EFapsException on error
 */
public Return changeDocumentType(final Parameter _parameter)
    throws EFapsException
{
    final String value;
    if (_parameter.getParameterValue("documentType") != null) {
        value = _parameter.getParameterValue("documentType");
    } else {
        value = _parameter.getParameterValue("productDocumentType");
    }
    final Instance instDocType = Instance.get(value);
    if (InstanceUtils.isValid(instDocType)) {
        final List<Instance> documentInstances;
        if (InstanceUtils.isValid(_parameter.getInstance())) {
            documentInstances = Collections.singletonList(_parameter.getInstance());
        } else {
            documentInstances = getSelectedInstances(_parameter);
        }
        for (final Instance documentInst : documentInstances) {
            final QueryBuilder queryBldr = new QueryBuilder(getType4DocCreate(_parameter));
            queryBldr.addWhereAttrEqValue(CIERP.Document2DocumentTypeAbstract.DocumentLinkAbstract,
                            documentInst);
            final InstanceQuery query = queryBldr.getQuery();
            query.execute();
            // update the existing relation or insert a new one
            final Update update;
            if (query.next()) {
                update = new Update(query.getCurrentValue());
            } else {
                update = new Insert(getType4DocCreate(_parameter));
                update.add(CIERP.Document2DocumentTypeAbstract.DocumentLinkAbstract, documentInst);
            }
            update.add(CIERP.Document2DocumentTypeAbstract.DocumentTypeLinkAbstract, instDocType);
            update.execute();
        }
    }
    return new Return();
}

/**
 * Validate the documents selected to be connected.
 *
 * @param _parameter Parameter as passed by the eFaps API
 * @return the return
 * @throws EFapsException on error
 */
public Return validateConnectDocument(final Parameter _parameter)
    throws EFapsException
{
    final Return ret = new Return();
    final Map<?, ?> others = (HashMap<?, ?>) _parameter.get(ParameterValues.OTHERS);
    final StringBuilder html = new StringBuilder();
    final String[] childOids = (String[]) others.get("selectedRow");
    boolean validate = true;
    if (childOids != null) {
        final Instance callInstance = _parameter.getCallInstance();
        // relation types that forbid connecting a document of the given type
        final Map<Type, UUID[]> forbidden = new HashMap<>();
        forbidden.put(CISales.IncomingInvoice.getType(), new UUID[] {
                        CISales.Document2Document4Swap.uuid,
                        CISales.IncomingPerceptionCertificate2IncomingInvoice.uuid });
        forbidden.put(CISales.PaymentOrder.getType(), new UUID[] {
                        CISales.Document2Document4Swap.uuid });
        forbidden.put(CISales.Invoice.getType(), new UUID[] {
                        CISales.Document2Document4Swap.uuid,
                        CISales.IncomingRetentionCertificate2Invoice.uuid });
        forbidden.put(CISales.CollectionOrder.getType(), new UUID[] {
                        CISales.Document2Document4Swap.uuid });
        for (final String childOid : childOids) {
            final Instance child = Instance.get(childOid);
            if (callInstance.getType().isKindOf(CISales.DocumentSumAbstract.getType())
                            && forbidden.containsKey(child.getType())) {
                for (final UUID relTypeUUID : forbidden.get(child.getType())) {
                    if (check4Relation(relTypeUUID, child).next()) {
                        validate = false;
                        html.append(getString4ReturnInvalidate(child));
                        break;
                    }
                }
            }
            if (!validate) {
                break;
            }
        }
        if (validate) {
            ret.put(ReturnValues.TRUE, true);
            html.append(DBProperties.getProperty(this.getClass().getName() + ".validateConnectDoc"));
            ret.put(ReturnValues.SNIPLETT, html.toString());
        } else {
            html.insert(0, DBProperties.getProperty(this.getClass().getName() + ".invalidateConnectDoc") + "<p>");
            ret.put(ReturnValues.SNIPLETT, html.toString());
        }
    }
    return ret;
}

/**
 * Check for an existing relation of the given type pointing to the instance.
 *
 * @param _typeUUID UUID of the relation type
 * @param _instance the instance the relation must point to
 * @return the executed MultiPrintQuery
 * @throws EFapsException on error
 */
protected MultiPrintQuery check4Relation(final UUID _typeUUID,
                                         final Instance _instance)
    throws EFapsException
{
    final QueryBuilder queryBldr = new QueryBuilder(_typeUUID);
    queryBldr.addWhereAttrMatchValue(CISales.Document2DocumentAbstract.ToAbstractLink, _instance.getId());
    final MultiPrintQuery multi = queryBldr.getPrint();
    multi.addAttribute(CISales.Document2DocumentAbstract.OID);
    multi.execute();
    return multi;
}

/**
 * Builds the label shown for a document that cannot be connected.
 *
 * @param _child the child instance
 * @return StringBuilder containing type label and document name
 * @throws EFapsException on error
 */
protected StringBuilder getString4ReturnInvalidate(final Instance _child)
    throws EFapsException
{
    final StringBuilder html = new StringBuilder();
    final PrintQuery print = new PrintQuery(_child);
    print.addAttribute(CISales.DocumentAbstract.Name);
    print.execute();
    return html.append(_child.getType().getLabel()).append(" - ")
               .append(print.<String>getAttribute(CISales.DocumentAbstract.Name));
}

/**
 * Gets the currency from the UI.
 *
 * @param _parameter Parameter as passed by the eFaps API
 * @return the currency from the UI
 * @throws EFapsException on error
 */
protected Instance getCurrencyFromUI(final Parameter _parameter)
    throws EFapsException
{
    return new Currency().getCurrencyFromUI(_parameter);
}

/**
 * Gets the payment analysis snippet for the UI.
 *
 * @param _parameter Parameter as passed by the eFaps API
 * @return the payment analysis
 * @throws EFapsException on error
 */
public Return getPaymentAnalysisFieldValueUI(final Parameter _parameter)
    throws EFapsException
{
    final Return ret = new Return();
    ret.put(ReturnValues.SNIPLETT, DocPaymentInfo_Base.getInfoHtml(_parameter, _parameter.getInstance()));
    return ret;
}

/**
 * Gets the payment info field value for the UI.
 *
 * @param _parameter Parameter as passed by the eFaps API
 * @return the payment info field value
 * @throws EFapsException on error
 */
public Return getPaymentInfoFieldValueUI(final Parameter _parameter)
    throws EFapsException
{
    final Return ret = new Return();
    ret.put(ReturnValues.VALUES, DocPaymentInfo_Base.getInfoValue(_parameter, _parameter.getInstance()));
    return ret;
}
}
import ICommentsRepository from '../../../repositories/ICommentsRepository';
import IPostsRepository from '../../../repositories/IPostsRepository';
import ICreatCommentDTO from './CreateCommentDTO';
import Comment from '../../../entities/Comment';
import User from '../../../entities/User';
import RequestError from '../../../utils/RequestError';

export default class CreateCommentUseCase {
  constructor(
    private commentsRepository: ICommentsRepository,
    private postsRepository: IPostsRepository,
  ) {}

  async execute(data: ICreatCommentDTO): Promise<Comment> {
    const user = new User({ id: data.userId });
    const post = await this.postsRepository.findById(data.postId);

    if (!post) {
      throw RequestError.POST_NOT_FOUND;
    }

    const comment = new Comment({
      user,
      post,
      text: data.text,
    });

    const createdComment = await this.commentsRepository.save(comment);
    return createdComment;
  }
}
/* * This file is generated by jOOQ. */ package cn.vertxup.rbac.domain.tables.daos; import cn.vertxup.rbac.domain.tables.SUser; import cn.vertxup.rbac.domain.tables.records.SUserRecord; import io.github.jklingsporn.vertx.jooq.shared.internal.AbstractVertxDAO; import java.time.LocalDateTime; import java.util.Collection; import org.jooq.Configuration; import java.util.List; import io.vertx.core.Future; import io.github.jklingsporn.vertx.jooq.classic.jdbc.JDBCClassicQueryExecutor; /** * This class is generated by jOOQ. */ @SuppressWarnings({ "all", "unchecked", "rawtypes" }) public class SUserDao extends AbstractVertxDAO<SUserRecord, cn.vertxup.rbac.domain.tables.pojos.SUser, String, Future<List<cn.vertxup.rbac.domain.tables.pojos.SUser>>, Future<cn.vertxup.rbac.domain.tables.pojos.SUser>, Future<Integer>, Future<String>> implements io.github.jklingsporn.vertx.jooq.classic.VertxDAO<SUserRecord,cn.vertxup.rbac.domain.tables.pojos.SUser,String> { /** * @param configuration The Configuration used for rendering and query * execution. * * @param vertx the vertx instance */ public SUserDao(Configuration configuration, io.vertx.core.Vertx vertx) { super(SUser.S_USER, cn.vertxup.rbac.domain.tables.pojos.SUser.class, new JDBCClassicQueryExecutor<SUserRecord,cn.vertxup.rbac.domain.tables.pojos.SUser,String>(configuration,cn.vertxup.rbac.domain.tables.pojos.SUser.class,vertx)); } @Override protected String getId(cn.vertxup.rbac.domain.tables.pojos.SUser object) { return object.getKey(); } /** * Find records that have <code>USERNAME IN (values)</code> asynchronously */ public Future<List<cn.vertxup.rbac.domain.tables.pojos.SUser>> findManyByUsername(Collection<String> values) { return findManyByCondition(SUser.S_USER.USERNAME.in(values)); } /** * Find records that have <code>USERNAME IN (values)</code> asynchronously * limited by the given limit */ public Future<List<cn.vertxup.rbac.domain.tables.pojos.SUser>> findManyByUsername(Collection<String> values, int limit) { return findManyByCondition(SUser.S_USER.USERNAME.in(values),limit); } /** * Find records that have <code>REALNAME IN (values)</code> asynchronously */ public Future<List<cn.vertxup.rbac.domain.tables.pojos.SUser>> findManyByRealname(Collection<String> values) { return findManyByCondition(SUser.S_USER.REALNAME.in(values)); } /** * Find records that have <code>REALNAME IN (values)</code> asynchronously * limited by the given limit */ public Future<List<cn.vertxup.rbac.domain.tables.pojos.SUser>> findManyByRealname(Collection<String> values, int limit) { return findManyByCondition(SUser.S_USER.REALNAME.in(values),limit); } /** * Find records that have <code>ALIAS IN (values)</code> asynchronously */ public Future<List<cn.vertxup.rbac.domain.tables.pojos.SUser>> findManyByAlias(Collection<String> values) { return findManyByCondition(SUser.S_USER.ALIAS.in(values)); } /** * Find records that have <code>ALIAS IN (values)</code> asynchronously * limited by the given limit */ public Future<List<cn.vertxup.rbac.domain.tables.pojos.SUser>> findManyByAlias(Collection<String> values, int limit) { return findManyByCondition(SUser.S_USER.ALIAS.in(values),limit); } /** * Find records that have <code>MOBILE IN (values)</code> asynchronously */ public Future<List<cn.vertxup.rbac.domain.tables.pojos.SUser>> findManyByMobile(Collection<String> values) { return findManyByCondition(SUser.S_USER.MOBILE.in(values)); } /** * Find records that have <code>MOBILE IN (values)</code> asynchronously * limited by the given limit */ public 
Future<List<cn.vertxup.rbac.domain.tables.pojos.SUser>> findManyByMobile(Collection<String> values, int limit) { return findManyByCondition(SUser.S_USER.MOBILE.in(values),limit); } /** * Find records that have <code>EMAIL IN (values)</code> asynchronously */ public Future<List<cn.vertxup.rbac.domain.tables.pojos.SUser>> findManyByEmail(Collection<String> values) { return findManyByCondition(SUser.S_USER.EMAIL.in(values)); } /** * Find records that have <code>EMAIL IN (values)</code> asynchronously * limited by the given limit */ public Future<List<cn.vertxup.rbac.domain.tables.pojos.SUser>> findManyByEmail(Collection<String> values, int limit) { return findManyByCondition(SUser.S_USER.EMAIL.in(values),limit); } /** * Find records that have <code>PASSWORD IN (values)</code> asynchronously */ public Future<List<cn.vertxup.rbac.domain.tables.pojos.SUser>> findManyByPassword(Collection<String> values) { return findManyByCondition(SUser.S_USER.PASSWORD.in(values)); } /** * Find records that have <code>PASSWORD IN (values)</code> asynchronously * limited by the given limit */ public Future<List<cn.vertxup.rbac.domain.tables.pojos.SUser>> findManyByPassword(Collection<String> values, int limit) { return findManyByCondition(SUser.S_USER.PASSWORD.in(values),limit); } /** * Find records that have <code>MODEL_ID IN (values)</code> asynchronously */ public Future<List<cn.vertxup.rbac.domain.tables.pojos.SUser>> findManyByModelId(Collection<String> values) { return findManyByCondition(SUser.S_USER.MODEL_ID.in(values)); } /** * Find records that have <code>MODEL_ID IN (values)</code> asynchronously * limited by the given limit */ public Future<List<cn.vertxup.rbac.domain.tables.pojos.SUser>> findManyByModelId(Collection<String> values, int limit) { return findManyByCondition(SUser.S_USER.MODEL_ID.in(values),limit); } /** * Find records that have <code>MODEL_KEY IN (values)</code> asynchronously */ public Future<List<cn.vertxup.rbac.domain.tables.pojos.SUser>> findManyByModelKey(Collection<String> values) { return findManyByCondition(SUser.S_USER.MODEL_KEY.in(values)); } /** * Find records that have <code>MODEL_KEY IN (values)</code> asynchronously * limited by the given limit */ public Future<List<cn.vertxup.rbac.domain.tables.pojos.SUser>> findManyByModelKey(Collection<String> values, int limit) { return findManyByCondition(SUser.S_USER.MODEL_KEY.in(values),limit); } /** * Find records that have <code>CATEGORY IN (values)</code> asynchronously */ public Future<List<cn.vertxup.rbac.domain.tables.pojos.SUser>> findManyByCategory(Collection<String> values) { return findManyByCondition(SUser.S_USER.CATEGORY.in(values)); } /** * Find records that have <code>CATEGORY IN (values)</code> asynchronously * limited by the given limit */ public Future<List<cn.vertxup.rbac.domain.tables.pojos.SUser>> findManyByCategory(Collection<String> values, int limit) { return findManyByCondition(SUser.S_USER.CATEGORY.in(values),limit); } /** * Find records that have <code>SIGMA IN (values)</code> asynchronously */ public Future<List<cn.vertxup.rbac.domain.tables.pojos.SUser>> findManyBySigma(Collection<String> values) { return findManyByCondition(SUser.S_USER.SIGMA.in(values)); } /** * Find records that have <code>SIGMA IN (values)</code> asynchronously * limited by the given limit */ public Future<List<cn.vertxup.rbac.domain.tables.pojos.SUser>> findManyBySigma(Collection<String> values, int limit) { return findManyByCondition(SUser.S_USER.SIGMA.in(values),limit); } /** * Find records that have <code>LANGUAGE IN 
(values)</code> asynchronously */ public Future<List<cn.vertxup.rbac.domain.tables.pojos.SUser>> findManyByLanguage(Collection<String> values) { return findManyByCondition(SUser.S_USER.LANGUAGE.in(values)); } /** * Find records that have <code>LANGUAGE IN (values)</code> asynchronously * limited by the given limit */ public Future<List<cn.vertxup.rbac.domain.tables.pojos.SUser>> findManyByLanguage(Collection<String> values, int limit) { return findManyByCondition(SUser.S_USER.LANGUAGE.in(values),limit); } /** * Find records that have <code>ACTIVE IN (values)</code> asynchronously */ public Future<List<cn.vertxup.rbac.domain.tables.pojos.SUser>> findManyByActive(Collection<Boolean> values) { return findManyByCondition(SUser.S_USER.ACTIVE.in(values)); } /** * Find records that have <code>ACTIVE IN (values)</code> asynchronously * limited by the given limit */ public Future<List<cn.vertxup.rbac.domain.tables.pojos.SUser>> findManyByActive(Collection<Boolean> values, int limit) { return findManyByCondition(SUser.S_USER.ACTIVE.in(values),limit); } /** * Find records that have <code>METADATA IN (values)</code> asynchronously */ public Future<List<cn.vertxup.rbac.domain.tables.pojos.SUser>> findManyByMetadata(Collection<String> values) { return findManyByCondition(SUser.S_USER.METADATA.in(values)); } /** * Find records that have <code>METADATA IN (values)</code> asynchronously * limited by the given limit */ public Future<List<cn.vertxup.rbac.domain.tables.pojos.SUser>> findManyByMetadata(Collection<String> values, int limit) { return findManyByCondition(SUser.S_USER.METADATA.in(values),limit); } /** * Find records that have <code>CREATED_AT IN (values)</code> asynchronously */ public Future<List<cn.vertxup.rbac.domain.tables.pojos.SUser>> findManyByCreatedAt(Collection<LocalDateTime> values) { return findManyByCondition(SUser.S_USER.CREATED_AT.in(values)); } /** * Find records that have <code>CREATED_AT IN (values)</code> asynchronously * limited by the given limit */ public Future<List<cn.vertxup.rbac.domain.tables.pojos.SUser>> findManyByCreatedAt(Collection<LocalDateTime> values, int limit) { return findManyByCondition(SUser.S_USER.CREATED_AT.in(values),limit); } /** * Find records that have <code>CREATED_BY IN (values)</code> asynchronously */ public Future<List<cn.vertxup.rbac.domain.tables.pojos.SUser>> findManyByCreatedBy(Collection<String> values) { return findManyByCondition(SUser.S_USER.CREATED_BY.in(values)); } /** * Find records that have <code>CREATED_BY IN (values)</code> asynchronously * limited by the given limit */ public Future<List<cn.vertxup.rbac.domain.tables.pojos.SUser>> findManyByCreatedBy(Collection<String> values, int limit) { return findManyByCondition(SUser.S_USER.CREATED_BY.in(values),limit); } /** * Find records that have <code>UPDATED_AT IN (values)</code> asynchronously */ public Future<List<cn.vertxup.rbac.domain.tables.pojos.SUser>> findManyByUpdatedAt(Collection<LocalDateTime> values) { return findManyByCondition(SUser.S_USER.UPDATED_AT.in(values)); } /** * Find records that have <code>UPDATED_AT IN (values)</code> asynchronously * limited by the given limit */ public Future<List<cn.vertxup.rbac.domain.tables.pojos.SUser>> findManyByUpdatedAt(Collection<LocalDateTime> values, int limit) { return findManyByCondition(SUser.S_USER.UPDATED_AT.in(values),limit); } /** * Find records that have <code>UPDATED_BY IN (values)</code> asynchronously */ public Future<List<cn.vertxup.rbac.domain.tables.pojos.SUser>> findManyByUpdatedBy(Collection<String> values) { return 
findManyByCondition(SUser.S_USER.UPDATED_BY.in(values)); } /** * Find records that have <code>UPDATED_BY IN (values)</code> asynchronously * limited by the given limit */ public Future<List<cn.vertxup.rbac.domain.tables.pojos.SUser>> findManyByUpdatedBy(Collection<String> values, int limit) { return findManyByCondition(SUser.S_USER.UPDATED_BY.in(values),limit); } @Override public JDBCClassicQueryExecutor<SUserRecord,cn.vertxup.rbac.domain.tables.pojos.SUser,String> queryExecutor(){ return (JDBCClassicQueryExecutor<SUserRecord,cn.vertxup.rbac.domain.tables.pojos.SUser,String>) super.queryExecutor(); } }
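
Since SUserDao is generated, a short usage sketch may help. Everything beyond the class itself is an assumption: the SQL dialect, the connection wiring for the query executor, and the Vert.x 4 Future callbacks all have to be adapted to the actual deployment.

// Hedged usage sketch for the generated SUserDao, not part of the generated file.
import io.vertx.core.Vertx;
import org.jooq.SQLDialect;
import org.jooq.impl.DefaultConfiguration;

import java.util.Collections;

public class SUserDaoExample {

    public static void main(final String[] args) {
        final Vertx vertx = Vertx.vertx();
        // assumption: a DataSource/ConnectionProvider must be attached to this
        // configuration for the JDBCClassicQueryExecutor to reach the database
        final DefaultConfiguration configuration = new DefaultConfiguration();
        configuration.set(SQLDialect.MYSQL); // assumption: MySQL schema

        final SUserDao dao = new SUserDao(configuration, vertx);
        // asynchronous lookup by user name (Vert.x 4 Future API)
        dao.findManyByUsername(Collections.singletonList("admin"))
            .onSuccess(users -> users.forEach(user -> System.out.println(user.getKey())))
            .onFailure(Throwable::printStackTrace);
    }
}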
def assemble(*msg: str, arr=None) -> str:
    """Join the given messages with SEP, taking them either from the
    positional arguments or, if given, from ``arr``.

    ``SEP`` is the protocol separator defined elsewhere in this module.
    """
    parts = msg if arr is None else arr
    return ''.join('{}{}'.format(part, SEP) for part in parts)
// GifGenerator returns a command handler that replies with a gif for the
// given theme.
func (bot *Bot) GifGenerator(theme string) func(s *discordgo.Session, m *discordgo.MessageCreate, locale string) {
	return func(s *discordgo.Session, m *discordgo.MessageCreate, locale string) {
		url, err := bot.GetGif(theme)
		if err != nil {
			bot.SendErrorMessage(s, err)
			// do not send an embed with an empty URL
			return
		}
		if _, err := s.ChannelMessageSendEmbed(m.ChannelID, &discordgo.MessageEmbed{
			Image: &discordgo.MessageEmbedImage{
				URL: url,
			},
		}); err != nil {
			bot.SendErrorMessage(s, err)
		}
	}
}
Government says police break up two massive child-trafficking gangs, arresting 802 suspects and saving 181 children.

Chinese police have arrested 802 people on suspicion of child trafficking and rescued 181 children in a major operation spanning 15 provinces, the Chinese Ministry of Public Security has said.

The recent operation broke up two trafficking rings and led to the arrests of the ring leaders, the ministry said in a statement posted Friday on its website.

Child trafficking is a big problem in China. Its strict one-child policy, which limits most urban couples to one child and rural couples to two if their first-born is a girl, has driven a thriving market in babies, especially boys. Many trafficked babies are abducted, but some are sold by families who are too poor to care for a baby or do not want a baby girl. State media report that a baby girl can fetch $4,800 to $8,000 and that a baby boy sells for $11,200 to $12,800.

The national operation was set up earlier this year after local police spotted signs of trafficking, including frequent appearances of out-of-town pregnant women at a clinic in north China's Hebei province, the ministry said. State media reported that parents wishing to sell their babies could find potential buyers through the clinic. A doctor at the clinic was arrested, state media said.

In central China's Henan province, an inspection of a long-distance bus turned up four suspects who tried to sell four infants, the ministry said.

Last year, China rescued more than 8,000 children who were abducted or willingly sold by their parents. Chinese courts often hand down harsh punishments, including death sentences, to child traffickers.
Racial and ethnic disparities among individuals with Alzheimer's disease in the United States: A literature review

This study reviews the published literature on racial and ethnic disparities among people with Alzheimer's disease (AD) and related dementias in the United States. To identify relevant studies, we searched electronic sources for peer-reviewed journal articles and unpublished research reports that were published through July 2014; related to the AD population and their caregivers; and provided evidence of racial and ethnic disparities, discussed reasons for disparities, or described interventions to address disparities. The literature shows consistent and adverse disparities among blacks and Hispanics compared with non-Hispanic whites concerning AD, including the disease's prevalence and incidence, mortality, participation in clinical trials, use of medications and other interventions, use of long-term services and supports, health care expenditures, quality of care, and caregiving. The literature suggests numerous underlying causes, including factors related to measurement of the disease, genetics, socioeconomic factors, cultural differences, lack of culturally competent clinicians, and discrimination. Although these disparities are well known, little is known about the effectiveness of various strategies, such as cultural competence training, to address these differences, and very few studies evaluate possible interventions.
/*
 *
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *   http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied.  See the License for the
 * specific language governing permissions and limitations
 * under the License.
 *
 */

#include "qpid/InlineVector.h"
#include "unit_test.h"

namespace qpid {
namespace tests {

QPID_AUTO_TEST_SUITE(InlineVectorTestSuite)

using namespace qpid;
using namespace std;

typedef InlineVector<int, 3> Vec;

bool isInline(const Vec& v) {
    // If nothing, give it the benefit of the doubt;
    // can't take address of nothing.
    if (v.size() <= 0) return true;
    return (const char*)&v <= (const char*)(&v[0]) &&
        (const char*)(&v[0]) < (const char*)&v + sizeof(v);
}

QPID_AUTO_TEST_CASE(testCtor) {
    {
        Vec v;
        BOOST_CHECK(isInline(v));
        BOOST_CHECK(v.empty());
    }
    {
        Vec v(3, 42);
        BOOST_CHECK(isInline(v));
        BOOST_CHECK_EQUAL(3u, v.size());
        BOOST_CHECK_EQUAL(v[0], 42);
        BOOST_CHECK_EQUAL(v[2], 42);

        Vec u(v);
        BOOST_CHECK(isInline(u));
        BOOST_CHECK_EQUAL(3u, u.size());
        BOOST_CHECK_EQUAL(u[0], 42);
        BOOST_CHECK_EQUAL(u[2], 42);
    }
    {
        Vec v(4, 42);
        BOOST_CHECK_EQUAL(v.size(), 4u);
        BOOST_CHECK(!isInline(v));
        Vec u(v);
        BOOST_CHECK_EQUAL(u.size(), 4u);
        BOOST_CHECK(!isInline(u));
    }
}

QPID_AUTO_TEST_CASE(testInsert) {
    {
        Vec v;
        v.push_back(1);
        BOOST_CHECK_EQUAL(v.size(), 1u);
        BOOST_CHECK_EQUAL(v.back(), 1);
        BOOST_CHECK(isInline(v));

        v.insert(v.begin(), 2);
        BOOST_CHECK_EQUAL(v.size(), 2u);
        BOOST_CHECK_EQUAL(v.back(), 1);
        BOOST_CHECK(isInline(v));

        v.push_back(3);
        BOOST_CHECK(isInline(v));

        v.push_back(4);
        BOOST_CHECK(!isInline(v));
    }
    {
        Vec v(3, 42);
        v.insert(v.begin(), 9);
        BOOST_CHECK_EQUAL(v.size(), 4u);
        BOOST_CHECK(!isInline(v));
    }
    {
        Vec v(3, 42);
        v.insert(v.begin() + 1, 9);
        BOOST_CHECK(!isInline(v));
        BOOST_CHECK_EQUAL(v.size(), 4u);
    }
}

QPID_AUTO_TEST_CASE(testAssign) {
    Vec v(3, 42);
    Vec u;
    u = v;
    BOOST_CHECK(isInline(u));
    u.push_back(4);
    BOOST_CHECK(!isInline(u));
    v = u;
    BOOST_CHECK(!isInline(v));
}

QPID_AUTO_TEST_CASE(testResize) {
    Vec v;
    v.resize(5);
    BOOST_CHECK(!isInline(v));
}

QPID_AUTO_TEST_SUITE_END()

}} // namespace qpid::tests
A Method for Quantifying the Importance of Facts, Rules and Hypotheses

A labelled digraph is used as a model of a simple database, nodes representing facts (or classes of facts) and arcs the relationships between these facts. An expression for the number of microstates in which such a data structure may exist is derived and used to calculate a measure of intrinsic entropy. This measure is fundamentally related to the information content of the structure, and its change, on adding or subtracting an item of information from the database, may be used to associate with the item a value called Importance. An algorithm called the "Database Monitor Program (DMP)" is introduced. Its function is to guarantee that the digraph database exists in a 'least complex state' by replacing relationships between nodes by relationships pertaining to classes of nodes, but with the constraint that the information content of the structure remains unchanged. Once in this minimal condition, the Importance of an item is defined in terms of the change it induces on the minimum entropy state. Like measures of entropy, the Importance of an item is a relative concept in that its value depends on the context of the database. It is argued that this is an intuitively appealing measure in that a system is only able to judge the importance of an item in the light of existing knowledge. Two additional concepts, termed confidence and significance, are introduced and used to assess the formation of 'class concepts' within the database. The use of these three measures for conflict resolution is also discussed. Finally, an example system developed within the Poplog environment is presented and extensions of the work are discussed.
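
The abstract does not spell out the formulas; as an illustrative reconstruction (an assumption, not quoted from the paper), the quantities involved can be written as

\[ S(D) = k \ln \Omega(D), \qquad \mathrm{Importance}(i \mid D) = S_{\min}(D \cup \{i\}) - S_{\min}(D), \]

where \Omega(D) is the number of microstates of the digraph database D, S_{\min} denotes the entropy of the 'least complex state' produced by the Database Monitor Program, and the Importance of an item i is its induced change of that minimum entropy, relative to the existing database D.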
The trigeminal nerve nuclei are examined light- and electron-microscopically in the adult domestic fowl. The nucleus sensibilis principalis nervi trigemini is formed by scarce, medium-sized, round-to-ovoid polygonal neurons. The Nissl bodies are concentrated around the nucleus and consist of short cisterns of the rough endoplasmic reticulum densely studded with ribosomes. The nucleus tractus spinalis nervi trigemini extends to the first segments of the cervical cord. The rostral part of the nucleus is characterized by medium-sized polygonal neurons. Their cell bodies are densely packed with coarse Nissl bodies. Small multiform cell types with large nuclei, frequently showing two nucleoli, predominate in the caudal part. The main motor portion, the nucleus motorius nervi trigemini, consists of medium-sized as well as large polygonal neurons. The accessory portion, the nucleus motorius dorsalis nervi trigemini, consists of medium-sized polygonal neurons. Both nuclei show the typical motoneuron cytomorphology. In the neuropil, the axodendritic synapses can be differentiated into five types. Occasionally, densely packed glial lamellae and giant mitochondria occur.
The Federal Trade Commission has reached a settlement with Warner Bros. over claims that the publisher failed to disclose that it had paid prominent YouTubers for positive coverage of one of its video games. The FTC charge stated that Warner Bros. deceived customers by paying thousands of dollars to social media "influencers," including YouTube megastar PewDiePie, to cover Middle-earth: Shadow of Mordor without announcing that money had changed hands. Under the terms of the agreement, Warner Bros. is banned from failing to disclose similar deals in the future, and cannot pretend that sponsored videos and articles are actually the work of independent producers.

"Consumers have the right to know if reviewers are providing their own opinions or paid sales pitches," director of the FTC's Bureau of Consumer Protection Jessica Rich said in a statement. "Companies like Warner Brothers need to be straight with consumers in their online ad campaigns."

The influencers could not express negative opinions about the game

Warner Bros.' deal with the influencers stated that they had to make at least one tweet or Facebook post about the game, as well as produce videos with a string of caveats to avoid showing it in a negative light. Those videos could not express negative opinions about the game or Warner Bros. itself, could not show any glitches or bugs, and had to include "a strong verbal call-to-action to click the link in the description box for the viewer to go to the [game's] website to learn more about the [game], to learn how they can register, and to learn how to play the game," according to Ars Technica.

The FTC says disclaimers in the YouTube description were not enough

The videos earned more than 5.5 million views for Warner Bros., with PewDiePie's monster subscriber numbers accounting for 3.7 million views on his own. Influencers were advised to disclose the video's sponsored status under YouTube's "Show More" section, and while PewDiePie included a line, others did not. But that doesn't matter: the FTC says this would not have been enough to skirt the rules anyway, as the disclaimer would not have been visible on videos watched through Twitter, Facebook, or other social media sources.

YouTube has increasingly been seen as a haven for independent video game purchasing advice over recent years, as Let's Plays have taken off as a watch-along format, and movements like GamerGate have cast aspersions against the mainstream media and specialist video games press. But while YouTube allows for a direct connection between these new game-playing celebrities and their fans, the legally murky format and young age of some of the biggest stars mean that people are open to exploitation by companies and YouTubers alike who hide their affiliations.

Other YouTubers have taken money to produce content and presented it as independent opinion

In 2014, Gamasutra found that of more than 40 YouTubers questioned with more than 5,000 subscribers, a quarter had taken money to produce sponsored content. Earlier this month, two big names in the Counter-Strike YouTube community were criticized after it was revealed they actually owned a video game item betting site that they had advertised in several videos. Trevor 'TmarTn' Martin and Tom 'ProSyndicate' Cassell produced videos of themselves using — and repeatedly winning on — CSGOLotto, without divulging that they owned the site, and could conceivably tweak the odds at will.
This was a particular problem because the items being bet on — skins for Counter-Strike's weapons — can be sold for real money. Currently there are two lawsuits pending against both CSGOLotto and Valve, the creators of Counter-Strike, arguing that both are complicit in operating and maintaining ersatz online casinos.

Update: Added a line and pullquote to make it clearer that the FTC does not consider disclosures in YouTube descriptions adequate, since they do not appear alongside videos in embeds or on other platforms.
/**
 * @file Snake.hpp
 * @author <NAME> (<EMAIL>)
 * @brief Snake texture abstraction class
 * @version 0.1
 * @date 2021-12-05
 *
 * @copyright Copyright (c) 2021. <NAME>
 * Permission is hereby granted, free of charge, to any person obtaining a copy of
 * this software and associated documentation files (the "Software"), to deal in
 * the Software without restriction, including without limitation the rights to
 * use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
 * the Software, and to permit persons to whom the Software is furnished to do so,
 * subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in all
 * copies or substantial portions of the Software.
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 * IN THE SOFTWARE.
 */

#pragma once

#include "Texture.hpp"

class Snake : public Texture
{
    /* Custom datatypes */
public:
    enum SnakePart {
        HEAD,
        TURN,
        BODY,
        TAIL
    };

    /* Constructor */
public:
    /**
     * @brief Construct a new Snake object
     *
     * @param renderer Renderer reference
     * @param filename Filename of the texture file
     * @param logs Log reference or NULL
     */
    Snake(
        SDL_Renderer *renderer,
        std::string filename,
        Logging *logs) noexcept(false);

    ~Snake();
};
// Bootstrap bootstraps the RDPG database and associated services.
func Bootstrap() (err error) {
	r := newRDPG()
	log.Info(fmt.Sprintf(`rdpg.RDPG<%s>#Bootstrap() Bootstrapping Cluster Node...`, ClusterID))
	err = r.initialBootstrap()
	if err != nil {
		log.Error(fmt.Sprintf(`rdpg.RDPG<%s>#Bootstrap() r.initialBootstrap() ! %s`, ClusterID, err))
		return
	}

	// Record the cluster service in the Consul key-value store.
	kv := r.ConsulClient.KV()
	key := fmt.Sprintf(`rdpg/%s/cluster/service`, ClusterID)
	kvp := &consulapi.KVPair{Key: key, Value: []byte(globals.ClusterService)}
	_, err = kv.Put(kvp, &consulapi.WriteOptions{})
	if err != nil {
		log.Error(fmt.Sprintf(`rdpg.RDPG<%s>#Bootstrap() key=%s globals.ClusterService=%s ! %s`, ClusterID, key, globals.ClusterService, err))
	}

	s, err := services.NewService(globals.ClusterService)
	if err != nil {
		log.Error(fmt.Sprintf(`rdpg.RDPG<%s>#Bootstrap() services.NewService(%s) ! %s`, ClusterID, globals.ClusterService, err))
		return
	}
	if err = s.Configure(); err != nil {
		log.Error(fmt.Sprintf(`rdpg.RDPG<%s>#Bootstrap() s.Configure(%s) ! %s`, ClusterID, globals.ClusterService, err))
	}

	if globals.ClusterService == "pgbdr" {
		r.bdrBootstrap()
	} else {
		err = r.serviceClusterCapacityStore()
		if err != nil {
			log.Error(fmt.Sprintf(`rdpg.RDPG<%s>#Bootstrap() store service cluster instance capacity in Consul KV ! %s`, ClusterID, err))
			return
		}
		err = r.bootstrapSystem()
		if err != nil {
			log.Error(fmt.Sprintf(`rdpg.RDPG<%s>#Bootstrap() r.bootstrapSystem(%s,%s) ! %s`, ClusterID, globals.ServiceRole, globals.ClusterService, err))
			return
		}
	}

	// Configure the supporting proxy services.
	for _, name := range []string{`pgbouncer`, `haproxy`} {
		s, err := services.NewService(name)
		if err != nil {
			log.Error(fmt.Sprintf(`rdpg.RDPG<%s>#Bootstrap() services.NewService(%s) ! %s`, ClusterID, name, err))
			continue
		}
		if err = s.Configure(); err != nil {
			log.Error(fmt.Sprintf(`rdpg.RDPG<%s>#Bootstrap() s.Configure(%s) ! %s`, ClusterID, name, err))
		}
	}

	err = r.registerConsulServices()
	if err != nil {
		log.Error(fmt.Sprintf(`rdpg.RDPG<%s>#Bootstrap() r.registerConsulServices() ! %s`, ClusterID, err))
	}
	err = r.registerConsulWatches()
	if err != nil {
		log.Error(fmt.Sprintf(`rdpg.RDPG<%s>#Bootstrap() r.registerConsulWatches() ! %s`, ClusterID, err))
	}
	log.Trace(fmt.Sprintf(`rdpg.RDPG<%s>#Bootstrap() Bootstrapping Cluster Node Completed.`, ClusterID))
	return
}
import { Component, OnInit } from '@angular/core';

import { CardService } from './cards.service';
import { Card } from './card';

@Component({
  moduleId: module.id,
  selector: 'my-cards',
  templateUrl: 'cards.component.html',
  styleUrls: ['cards.component.css'],
})
export class CardsComponent implements OnInit {
  cards: Card[];
  shuffledList: Card[];

  constructor(private cardService: CardService) { }

  ngOnInit(): void {
    this.cards = this.generateCards();
  }

  generateCards(): Card[] {
    return this.cardService.generateCards();
  }

  shuffle(cardList: Card[]): void {
    this.cards = this.cardService.shuffle(cardList);
  }

  sort(cardList: Card[]): void {
    this.cards = this.cardService.sort(cardList);
  }
}
Dark Matter, Baryogenesis and Neutrino Oscillations from Right Handed Neutrinos

We show that, leaving aside accelerated cosmic expansion, all experimental data in high energy physics that are commonly agreed to require physics beyond the Standard Model can be explained when completing it by three right handed neutrinos that can be searched for using current day experimental techniques. The model that realises this scenario is known as the Neutrino Minimal Standard Model (νMSM). In this article we give a comprehensive summary of all known constraints in the νMSM, along with a pedagogical introduction to the model. We present the first complete quantitative study of the parameter space of the model where no physics beyond the νMSM is needed to simultaneously explain neutrino oscillations, dark matter and the baryon asymmetry of the universe. This requires tracking the time evolution of left and right handed neutrino abundances from hot big bang initial conditions down to temperatures below the QCD scale. We find that the interplay of resonant amplifications, CP-violating flavour oscillations, scatterings and decays leads to a number of previously unknown constraints on the sterile neutrino properties. We furthermore re-analyse bounds from past collider experiments and big bang nucleosynthesis in the face of recent evidence for a non-zero neutrino mixing angle θ_13. We combine all our results with existing constraints on dark matter properties from astrophysics and cosmology. Our results provide a guideline for future experimental searches for sterile neutrinos. A summary of the constraints on sterile neutrino masses and mixings has appeared in arXiv:1204.3902. In this article we provide all details of our calculations and give constraints on other model parameters.

1 Introduction

Although these phenomena have been studied individually in the framework of the νMSM, to date it has not been verified that there is a range of right handed neutrino parameters for which they can be explained simultaneously, in particular for experimentally accessible sterile neutrinos. In this article we present detailed results of the first complete quantitative study to identify the range of parameters that allows us to simultaneously explain neutrino oscillations, the observed DM density Ω_DM and the observed BAU, responsible for today's remnant baryonic density Ω_B. We in the following refer to this situation, in which no physics beyond the νMSM is required to explain these phenomena, as scenario I. In this scenario DM is made of one of the right handed neutrinos, while the other two are responsible for baryogenesis and the generation of active neutrino masses. We also study systematically how the constraints relax if one allows the sterile neutrinos that compose DM to be produced by some mechanism beyond the νMSM (scenario II). Finally, we briefly comment on a scenario III, in which the νMSM is a theory of baryogenesis and neutrino oscillations only, with no relation to DM. A more precise definition of these scenarios is given in section 2.2. Only scenarios I and II are studied in this article, which is devoted to the νMSM as the common origin of DM, neutrino masses and the BAU. While scenario II has previously been studied in earlier work, the constraints coming from the requirement to thermally produce the observed DM in scenario I are calculated for the first time in this work.
We combine our results with bounds coming from big bang nucleosynthesis (BBN) and direct searches for sterile neutrinos, which we re-derived in the face of recent data from neutrino experiments (in particular θ_13 ≠ 0). The centerpiece of our analysis is the study of all lepton numbers throughout the evolution of the early universe. As will be explained below, in the νMSM lepton asymmetries are crucial for both baryogenesis and DM production. We determine the time evolution of left and right handed neutrino abundances for a wide range of sterile neutrino parameters from hot big bang initial conditions at temperatures T ≫ T_EW ∼ 200 GeV down to temperatures below the QCD scale by means of effective kinetic equations. They incorporate various effects, including thermal production of sterile neutrinos from the primordial plasma, coherent oscillations, back reaction, washouts, resonant amplifications, decoherence, finite temperature corrections to the neutrino properties and the change in the effective number of degrees of freedom in the SM background. Many of these were only roughly estimated or completely neglected in previous studies. The various different time scales appearing in the problem make an analytic treatment or the use of a single CP-violating parameter impossible in most of the parameter space. Most of our results are obtained numerically. However, the parametric dependence on the experimentally relevant parameters (sterile neutrino masses and mixings) can be understood in a simple way. Furthermore, we discover a number of tuning conditions that can be understood analytically and allow us to reduce the dimensionality of the parameter space.

We find that there exists a considerable fraction of the νMSM parameter space in which the model can simultaneously explain neutrino oscillations, dark matter and the baryon asymmetry of the universe. This includes a range of masses and couplings for which the right handed neutrinos can be found in laboratory experiments. The main results of our study, constraints on sterile neutrino masses and mixings, have previously been presented in arXiv:1204.3902. In this article we give details of our calculation and constraints on other model parameters, which are not discussed there.

The remainder of this article is organized as follows. In Section 2 we give an overview of the νMSM, its parametrization, and the history of the universe in its framework, including baryogenesis and dark matter production. In Section 3 we discuss different experimental and cosmological bounds on the properties of right handed neutrinos in the νMSM. In Section 4 we formulate the kinetic equations which are used to follow the time evolution of sterile neutrinos and active neutrino flavors in the early universe. In Section 5 we present our results on baryogenesis in scenario II. In Section 6 we study the generation of lepton asymmetries at late times, essential for thermal dark matter production in the νMSM. In Section 7 we combine the constraints of the two previous Sections and define the region of parameters where scenario I can be realized, i.e. where the νMSM simultaneously explains neutrino masses and oscillations, dark matter, and the baryon asymmetry of the universe. In Section 8 we present our conclusions. In a number of appendices we give technical details on the kinetic equations (A), on the parametrization of the νMSM Lagrangian (B), on different notations used to describe lepton asymmetries (C) and on the decay rates of sterile neutrinos (D).
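To make the role of the effective kinetic equations mentioned above concrete, a schematic form is sketched here; this is an illustration of the commutator/anticommutator structure commonly used for such systems, not necessarily the precise equations of Section 4:

\[ i\,\frac{d\rho_N}{dt} = [H(T), \rho_N] - \frac{i}{2}\,\{\Gamma(T),\, \rho_N - \rho_N^{\rm eq}\}, \]

where ρ_N is the density matrix of the sterile neutrinos, H(T) an effective Hamiltonian including thermal corrections, Γ(T) the temperature dependent production and destruction rate, and ρ_N^eq the equilibrium distribution; analogous equations track the asymmetries in the active flavors.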
The νMSM

The νMSM is described by the Lagrangian

\[ \mathcal{L} = \mathcal{L}_{\rm SM} + i\,\bar\nu_R \gamma^\mu \partial_\mu \nu_R - \bar L_L F \nu_R \tilde\Phi - \tfrac{1}{2}\big(\bar\nu_R^c M_M \nu_R + {\rm h.c.}\big). \]

Here we have suppressed flavor and isospin indices. \mathcal{L}_{\rm SM} is the Lagrangian of the SM. F is a matrix of Yukawa couplings and M_M a Majorana mass term for the right handed neutrinos ν_R. L_L = (ν_L, e_L)^T are the left handed lepton doublets in the SM and Φ is the Higgs doublet (Φ̃ its conjugate). We chose a basis where the charged lepton Yukawa couplings and M_M are diagonal. The Lagrangian is well-known in the context of the seesaw mechanism for neutrino masses and leptogenesis. While the eigenvalues of M_M in most models are related to an energy scale far above the electroweak scale, it is a defining assumption of the νMSM that the observational data can be explained without involvement of any new scale above the Fermi one.

Mass and Flavor Eigenstates

After electroweak symmetry breaking the neutrino masses are governed by the Majorana mass matrix M_M and the Dirac mass matrix m_D = F v, with v the Higgs expectation value. When the eigenvalues of M_M are much larger than those of m_D, the seesaw mechanism naturally leads to light active and heavy sterile neutrinos. This hierarchy is realized in the νMSM.

In vacuum there are two sets of mass eigenstates: on one hand active neutrinos ν_i with masses m_i, which are mainly mixings of the SU(2) charged fields ν_L, with active-sterile mixing θ = m_D M_M^{-1}; and on the other hand sterile neutrinos N_I with masses M_I, which are mainly mixings of the singlet fields ν_R (in parts of the literature the notation is slightly different and the letter N_I does not denote mass eigenstates). The mass eigenstates are related to ν_L and ν_R by the unitary matrices U_ν and U_N together with the mixing θ; N_I (ν_i) are Majorana spinors, the left chiral (right chiral) part of which is fixed by the Majorana relations N_I^c = N_I and ν_i = ν_i^c, with P_{R,L} the corresponding chiral projectors. The matrix U_N diagonalises the sterile neutrino mass matrix M_N defined below. The entries of the matrix θ determine the active-sterile mixing angles.

The neutrino mass matrix can be block diagonalized. At leading order in the Yukawa couplings F one obtains the mass matrices

\[ m_\nu = - m_D M_M^{-1} m_D^T, \qquad M_N = M_M + \tfrac{1}{2}\big(\theta^\dagger \theta\, M_M + M_M^T \theta^T \theta^*\big) \]

for the active and the sterile sector, respectively. The mass matrices m_ν and M_N are not diagonal and lead to neutrino oscillations. While there is very little mixing between active and sterile flavors at all temperatures of interest, the oscillations between sterile neutrinos can be essential for the generation of a lepton asymmetry. m_ν can be parameterized in the usual way by active neutrino masses, mixing angles and phases, m_ν = U_ν diag(m_1, m_2, m_3) U_ν^T. In the basis where the charged lepton Yukawas are diagonal, U_ν is identical to the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) lepton mixing matrix. The physical sterile neutrino masses M_I are given by the square roots of the eigenvalues of M_N^† M_N. In the seesaw limit M_N is almost diagonal and they are very close to the entries of M_M. We nevertheless need to keep terms of O(θ²) because the masses M_2 and M_3 are degenerate in the νMSM, see section 2.6, and the mixing of the sterile neutrinos N_2,3 amongst each other may be large despite the seesaw hierarchy (it turns out that the region where U_N is close to the identity is phenomenologically the most interesting, see section 2.6). This mixing is given by the matrix U_N, which can be seen as the analogue of U_ν. It is worth noting that U_N is real at this order in F. The experimentally relevant coupling between active and sterile species is given by the matrix Θ = θ U_N^* (the reason that Θ involves U_N^* rather than U_N is that the N_I couple to L_L via θ, but overlap with ν_L^c). In practice, experiments to date cannot distinguish the sterile flavors and are only sensitive to the quantities

\[ U_\alpha^2 \equiv \sum_I \theta_{\alpha I}\,\theta_{\alpha I}^* = \sum_I \Theta_{\alpha I}\,\Theta_{\alpha I}^*. \]

Therefore U_N, and hence the sterile-sterile mixing and the coupling of individual sterile flavors to the SM, cannot be probed in direct searches.
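For orientation, a rough seesaw estimate of the size of the active-sterile mixing (an order-of-magnitude exercise added here for illustration, not a number quoted from the text): if a sterile neutrino with mass M ∼ 1 GeV dominates an active neutrino mass m_ν ∼ 0.05 eV, then

\[ U^2 \sim \frac{m_\nu}{M} \simeq \frac{5 \times 10^{-11}\,{\rm GeV}}{1\,{\rm GeV}} = 5 \times 10^{-11}, \]

which illustrates why direct searches for such seesaw partners are challenging.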
Benchmark Scenarios

The notation introduced above allows us to define the scenarios I-III from the introduction more precisely.

In scenario I no physics beyond the νMSM is needed to explain the observed DM, neutrino masses and η_B. DM is composed of thermally produced sterile neutrinos N_1. N_2 and N_3 generate active neutrino masses via the seesaw mechanism, and their CP-violating oscillations produce lepton asymmetries in the early universe. The effect of N_1 on neutrino masses and lepton asymmetry generation is negligible, because its Yukawa couplings F_α1 are constrained to be tiny by the requirement that it be a viable DM candidate, cf. section 3.1.2. The lepton asymmetries produced by N_2,3 are crucial on two occasions in the history of the universe: on one hand the asymmetries generated at early times (T ≳ 140 GeV) are responsible for the generation of a BAU via flavored leptogenesis, on the other hand the late time asymmetries (T ∼ 100 MeV) strongly affect the rate of thermal N_1 production. Due to the latter, the requirement to produce the observed DM imposes indirect constraints on the particles N_2,3. These are determined in sections 6 and 7 and form the main result of our study.

In scenario II the roles of N_2,3 and N_1 are the same as in scenario I, but we assume that DM was produced by some unknown mechanism beyond the νMSM. The astrophysical constraints on the N_1 mass and coupling equal those in scenario I. N_2,3 are again required to generate the active neutrino masses via the seesaw mechanism and to produce sufficient flavored lepton asymmetries at T ∼ 140 GeV to explain the BAU. However, there is no need for a large late time asymmetry. This considerably relaxes the bounds on N_2,3. Scenario II is studied in detail in section 5.

In scenario III the νMSM is not required to explain DM, i.e. it is considered a theory of neutrino masses and low energy leptogenesis only. Then all three N_I can participate in the generation of lepton asymmetries. This makes the parameter space for baryogenesis considerably bigger than in scenarios I and II, including new sources of CP violation. We do not study scenario III in this work; some aspects have been discussed in the literature.

Effective Theory of Lepton Number Generation

In scenarios I and II the lightest sterile neutrino N_1 is a DM candidate. In this article we focus on those two scenarios. If N_1 is required to compose all observed DM, its mass M_1 and mixing are constrained by observational data, see section 3. Its mixing is so small that its effect on the active neutrino masses is negligible. Note that this implies that one active neutrino is much lighter than the others (with a mass smaller than O(10^-5) eV). Finding three massive active neutrinos with a degenerate spectrum would therefore exclude the νMSM with three sterile neutrinos as the common and only origin of active neutrino oscillations, dark matter and baryogenesis. N_1 also does not contribute significantly to the production of a lepton asymmetry at any time. This process can therefore be described in an effective theory with only two sterile flavors N_2,3. In the following we will almost exclusively work in this framework. To simplify the notation, we will use the symbols M_N and U_N for both the full (3x3) mass and mixing matrices defined above and the (2x2 and 3x2) sub-matrices that only involve the sterile flavors I = 2, 3, which appear in the effective theory.
The mixing between N_1 and N_2,3 is negligible due to the smallness of F_α1, which is enforced by the seesaw relation and the observational bounds on M_1 summarized in section 3.1.2. The effective N_2,3 mass matrix can be written in terms of the Majorana mass parameter M and the splitting ΔM; for all parameter choices we are interested in, the average physical mass satisfies M̄ ≃ M to very good approximation. The masses M_2,3 are too big to be sensitive to loop corrections. In contrast, the splitting ΔM can be considerably smaller than the size of radiative corrections to M_2,3. The above expressions have a different shape than those given in earlier work because we use a different basis in flavor space, see appendix B.

These formulae hold for the (zero temperature) masses in the microscopic theory. At finite temperature the system is described by a thermodynamical ensemble, the properties of which can usually be described in terms of quasiparticles with temperature dependent dispersion relations. We approximate these by temperature dependent "thermal masses".

Thermal History of the Universe in the νMSM

Apart from the very weakly coupled sterile neutrinos, the matter content of the νMSM is the same as that of the SM. Therefore the thermal history of the universe during the radiation dominated era is similar in both models. Here we only point out the differences that arise due to the presence of the fields ν_R, see figure 1. They couple to the SM only via the Yukawa matrices F, which are constrained by the seesaw relation. For sterile neutrino masses below the electroweak scale, the abundances are too small to affect the entropy during the radiation dominated era significantly. However, the additional sources of CP-violation contained in them have a huge effect on the lepton chemical potentials in the plasma.

Baryogenesis

The νMSM adds no new degrees of freedom to the SM above the electroweak scale. As a consequence of the smallness of the Yukawa couplings F, the N_I are produced only in negligible amounts during reheating. Therefore the thermal history for T ≫ T_EW closely resembles that in the SM. The sterile neutrinos have to be produced thermally from the primordial plasma in the radiation dominated epoch. During this non-equilibrium process all Sakharov conditions are fulfilled: baryon number is violated by SM sphalerons, and the oscillations amongst the sterile neutrinos violate CP. The source of this CP-violation are the complex phases in the Yukawa couplings F_αI. Due to the Majorana mass M_M, neither the individual (active) leptonic currents nor the total lepton number are strictly conserved. However, for T ≫ M the effect of the Majorana masses is negligible. Though the neutrinos are Majorana particles, one can define neutrinos and antineutrinos as the two helicity states, transitions between which are suppressed at T ≫ M. In the following we always use the terms "neutrinos" and "antineutrinos" in this sense.

In scenarios I and II the abundance of N_1 remains negligible until T ∼ 100 MeV because of the smallness of its coupling, which is required to be in accord with astrophysical bounds on DM, see section 3.1.2. N_2,3, on the other hand, are produced efficiently in the early universe. During this process flavored "lepton asymmetries" can be generated. N_2,3 reach equilibrium at a temperature T_+. Though the total lepton number at T_+ ≫ M is very small, there are asymmetries in the above helicity sense in the individual active and sterile flavors. Sphalerons, which only couple to the left chiral fields, can convert them into a baryon asymmetry.
The washout of lepton asymmetries becomes efficient at T ≲ T_+. It is a necessary condition for baryogenesis that this washout has not erased all asymmetries at T_EW, which is fulfilled for T_+ ≲ T_EW. The BAU at T ∼ T_EW can be estimated from today's baryon to photon ratio η_B. A precise value, η_B ≃ 6 × 10^-10, can be obtained by combining data from the cosmic microwave background and large scale structure. The parameter η_B is related to the remnant density of baryons Ω_B, in units of the critical density, by Ω_B ≃ η_B/(2.739 × 10^-8 h²), where h parameterizes today's Hubble rate H_0 = 100 h (km/s)/Mpc. In order to generate this asymmetry, the effective (thermal) masses M_2(T) and M_3(T) of the sterile neutrinos in the plasma need to be quasi-degenerate at T ≳ T_EW, see section 2.6.

After N_2 and N_3 reach equilibrium, the lepton asymmetries are washed out. This washout takes longer than the kinetic equilibration, but it has been estimated that no asymmetries survive until the N_2,3 freezeout at T = T_-. It has been suggested that some asymmetry may be protected from this washout by the chiral anomaly, which transfers it into magnetic fields. Here we take the most conservative approach and assume that no asymmetry survives between T_+ and T_-. Around T = T_-, the interactions that keep N_2,3 in equilibrium become inefficient. During the resulting freezeout the Sakharov conditions are again fulfilled and new asymmetries are generated. Even later, a final contribution to the lepton asymmetries is added when the unstable particles N_2,3 decay at a temperature T_d.

DM production

The abundance of the third sterile neutrino N_1 in scenario I remains below equilibrium at all times due to its small Yukawa coupling. In the absence of chemical potentials, the thermal production of these particles (Dodelson-Widrow mechanism) is not sufficient to explain all dark matter as relic N_1 abundance once the observational bounds summarized in section 3 are taken into consideration. However, in the presence of a lepton asymmetry in the primordial plasma, the dispersion relations of active and sterile neutrinos are modified by the Mikheyev-Smirnov-Wolfenstein (MSW) effect. The thermal mass of the active neutrinos can be large enough to cause a level crossing between the dispersion relations for active and sterile flavors at T_DM, resulting in a resonantly enhanced production of N_1 (resonant or Shi-Fuller mechanism). This mechanism requires a lepton asymmetry |μ_α| ≳ 8 × 10^-6 to be efficient enough to explain the entire observed dark matter density Ω_DM in terms of N_1 relic neutrinos. Here we have characterized the asymmetry by μ_α ≡ n_α/s, where s is the entropy density of the universe and n_α the total number density (particles minus antiparticles) of active (SM) leptons of flavor α. The relations between μ_α defined this way and other ways to characterize the asymmetry (e.g. the chemical potential) are given in appendix C.

Cosmological constraints

Thus, in scenario I there are two cosmological requirements related to the lepton asymmetry that have to be fulfilled to produce the correct η_B and Ω_DM within the νMSM: i) μ_α ∼ 10^-10 at T_EW ∼ 200 GeV for successful baryogenesis, and ii) |μ_α| > 8 × 10^-6 at T_DM for dark matter production. In scenarios I and II the asymmetry generation relies in both cases on a resonant amplification and the quasi-degeneracy of M_2 and M_3, which we discuss in section 2.6. This may be considered as fine tuning. On the other hand, the fact that the BAU (and thus the baryonic matter density Ω_B) and DM production in the νMSM both rely on essentially the same mechanism may be considered a hint towards an explanation of the apparent coincidence Ω_B ∼ Ω_DM, though the connection is not obvious, as Ω_B and Ω_DM also depend on other parameters.
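As a quick arithmetic check of the η_B-Ω_B relation quoted above, the following short sketch evaluates Ω_B for representative (illustrative, not fitted) values of η_B and h:

```python
# Baryon density from the baryon-to-photon ratio:
# Omega_B ~ eta_B / (2.739e-8 * h^2), as quoted in the text.

eta_B = 6.1e-10  # baryon-to-photon ratio (illustrative value)
h = 0.70         # H_0 = 100 h (km/s)/Mpc (illustrative value)

Omega_B = eta_B / (2.739e-8 * h**2)
print(f"Omega_B ~ {Omega_B:.3f}")  # ~ 0.045, the familiar few-percent fraction
```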
In scenario II only condition i) applies. The resulting constraints on the N_2,3 properties have been studied in detail in earlier work. In section 5 we update this analysis in the light of recent data from neutrino experiments, in particular the evidence for an active neutrino mixing angle θ_13 ≠ 0. In section 6 we include the second condition and study which additional constraints come from the requirement |μ_α| > 8 × 10^-6 at T_DM. Previous estimates suggest T_DM ∼ 100 MeV ≲ T_QCD and T_- < M_W, where M_W is the mass of the W boson and T_QCD the temperature at which quarks form hadrons. Though we are concerned with the conditions under which N_1 can explain all observed dark matter, N_1 will not directly enter our analysis, because the lepton asymmetry that is necessary for resonant N_1 production in scenario I is created by N_2,3. Instead, we derive constraints on the properties of N_2,3, which can be searched for in particle colliders. N_1, in contrast, cannot be detected directly in the laboratory due to its small coupling. However, the N_1 parameter space is constrained from all sides by indirect observations including structure formation, the Lyman-α forest, X-rays and phase space analysis, see section 3.

Parametrization

Adding k flavors of right handed neutrinos to the SM with three active neutrinos extends the parameter space of the model by 7k − 3 parameters. In the νMSM k = 3, thus there are 18 parameters in addition to those of the SM. These can be chosen as the masses m_i and M_I of the three active and three sterile neutrinos, respectively, and three mixing angles as well as three phases in each of the mixing matrices U_ν and U_N that diagonalize m_ν and M_N, respectively.

In the following we consider an effective theory with only two right handed neutrinos, which is appropriate to describe the generation of lepton asymmetries in scenarios I and II. After dropping N_1 from the Lagrangian, the effective Lagrangian contains 11 new parameters in addition to the SM. Seven of them are related to the active neutrinos. In the standard parametrization they are two masses m_i (one active neutrino has a negligible mass), three mixing angles θ_ij, a Dirac phase δ and a Majorana phase. They can, at least in principle, be measured in active neutrino experiments. The remaining four are related to sterile neutrino properties. In the common Casas-Ibarra parametrization, two of them are chosen as M_2 and M_3. The last two are the real and imaginary part of a complex angle ω (note that F, viewed as a polynomial in z = e^{iω}, only contains terms of the powers z and 1/z). The Yukawa coupling is written as

F = (i/v) U_ν √(m_diag) R √(M_diag),

where m_diag = diag(m_1, m_2, m_3) and M_diag contains the sterile masses. For normal hierarchy of active neutrino masses (m_1 ≃ 0), R is given by

R = ( 0, 0 ; cos ω, sin ω ; −ξ sin ω, ξ cos ω ),

while for inverted hierarchy (m_3 ≃ 0) it reads

R = ( cos ω, sin ω ; −ξ sin ω, ξ cos ω ; 0, 0 ),

where ξ = ±1. The matrix U_ν can be parameterized as

U_ν = V^(23) U_δ V^(13) U_{−δ} V^(12) diag(e^{iα_1/2}, e^{iα_2/2}, 1),

Table 1: Neutrino masses and mixings as found in the global fit used in this work. We parameterize the masses m_i according to m_1 = 0, m_2² = Δm²_sol, m_3² = Δm²_atm + Δm²_sol/2 for normal hierarchy and m_1² = Δm²_atm − Δm²_sol/2, m_2² = Δm²_atm + Δm²_sol/2, m_3 = 0 for inverted hierarchy. Using the values for θ_13 found more recently has no visible effect on our results.
with U_±δ = diag(e^{∓iδ/2}, 1, e^{±iδ/2}), where the V^(ij) are rotation matrices whose entries c_ij and s_ij stand for cos θ_ij and sin θ_ij, respectively, and α_1, α_2 and δ are the CP-violating phases. For normal hierarchy the Yukawa matrix F only depends on the phases α_2 and δ; for inverted hierarchy it depends on δ and the difference α_1 − α_2. This is because N_1 has no measurable effect on neutrino masses due to M_1 ≪ M_2,3.

In practice we will use the following parameters: two active neutrino masses m_i, five parameters in the active mixing matrix (three angles, one Dirac phase, one Majorana phase), the average physical sterile neutrino mass M̄ = (M_1 + M_2)/2 ≃ M, and the mass splitting ΔM. The masses and mixing angles of the active neutrinos have been measured (the absolute mass scale is fixed because the lightest active neutrino is almost massless in scenarios I and II). We use the experimental values obtained from the global fit in all calculations; they are summarized in table 1. Shortly after we finished our numerical studies, the mixing angle θ_13 was measured by the Daya Bay and RENO collaborations. The values found there differ slightly from the one we use. We checked that the effect of using one or the other value on the generated asymmetries is negligible, which justifies using the self-consistent set of parameters given in table 1. The remaining parameters can be constrained in decays of sterile neutrinos in the laboratory. It is one of the main goals of this article to impose bounds on them, to provide a guideline for experimental searches.

In order to identify the interesting regions in parameter space we proceed as follows. We neglect ΔM in the parametrization of F, but of course keep it in the effective Hamiltonian introduced in section 4. This is allowed in the region ΔM ≪ M, which we consider in this work. Unless stated differently, we always allow the CP-violating Majorana and Dirac phases to vary. We then numerically determine the values that maximize the asymmetry and fix them to those. In section 5, where we study condition i) for baryogenesis, we apply the same procedure to Re ω. On the other hand, requirement ii), necessary to explain DM in scenario I, almost fixes the parameter Re ω to a multiple of π/2. In section 6 we therefore fix Re ω = π/2. The remaining parameters M, ΔM and Im ω contain a redundancy: for ΔM ≪ M, simultaneously changing the signs of ξ, ΔM and Im ω along with the transformation Re ω ↔ −Re ω corresponds to swapping the names of N_2 and N_3. To be definite, we always choose ξ = 1 and consider both signs of Im ω.

Our main results consist of bounds on the parameters M, Im ω and ΔM. For experimental searches the most relevant properties of the sterile neutrinos are the mass M̄ ≃ M and their mixing with active neutrinos. We therefore also present our results in terms of M, the physical mass splitting δM and the total mixing U² = Σ_α U_α², with U_α² defined above. U² measures the mixing between active and sterile species. δM and U² can, however, not be mapped onto parameters in the Lagrangian in a unique way; there exists more than one choice of ω leading to the same U².
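To illustrate how the Casas-Ibarra parametrization works in practice, here is a minimal numerical sketch for the normal hierarchy case. The input numbers (mixing angles, masses, ω) and conventions (e.g. v = 174 GeV, the placement of the Majorana phase) are assumptions of this sketch rather than the fit values of table 1; the code checks that the resulting F reproduces the input active masses through the seesaw formula:

```python
import numpy as np

# Illustrative inputs (placeholders, not the table 1 fit values)
th12, th13, th23 = 0.59, 0.15, 0.785   # mixing angles in radians
delta, a2 = 0.0, 0.0                   # Dirac and Majorana phases
m2, m3 = 8.7e-12, 4.9e-11              # active masses in GeV (m1 = 0)
M2, M3 = 2.0, 2.0                      # sterile masses in GeV
omega = 1.0 + 2.0j                     # complex Casas-Ibarra angle
xi, v = 1.0, 174.0                     # sign parameter and Higgs vev in GeV

def rot(i, j, th, phase=0.0):
    """3x3 complex rotation in the (i, j) plane."""
    V = np.eye(3, dtype=complex)
    V[i, i] = V[j, j] = np.cos(th)
    V[i, j] = np.sin(th) * np.exp(-1j * phase)
    V[j, i] = -np.conj(V[i, j])
    return V

# PMNS-like matrix (one common convention; details are an assumption here)
U = rot(1, 2, th23) @ rot(0, 2, th13, delta) @ rot(0, 1, th12)
U = U @ np.diag([1.0, np.exp(1j * a2 / 2), 1.0])

# 3x2 Casas-Ibarra matrix R for normal hierarchy (m1 ~ 0)
cw, sw = np.cos(omega), np.sin(omega)
R = np.array([[0, 0], [cw, sw], [-xi * sw, xi * cw]], dtype=complex)

F = 1j / v * U @ np.diag(np.sqrt([0.0, m2, m3])) @ R @ np.diag(np.sqrt([M2, M3]))

# Seesaw check: m_nu = -v^2 F M^-1 F^T must have singular values (m3, m2, 0)
m_nu = -v**2 * F @ np.diag([1 / M2, 1 / M3]) @ F.T
print(np.linalg.svd(m_nu, compute_uv=False))  # ~ [m3, m2, 0]
```

Because R R^T = diag(0, 1, 1) for any complex ω, the seesaw check succeeds for every ω, which is exactly the redundancy noted above: different ω (and hence different U²) reproduce the same active neutrino data.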
"Fine Tunings" and the Constrained νMSM

In most models that incorporate the seesaw mechanism, the eigenvalues of M_M are much larger than the scale of electroweak symmetry breaking. It is a defining feature of the νMSM that all experimental data can be explained without the introduction of such a new scale. In order to keep the sterile neutrino masses below the electroweak scale and the active neutrino masses in agreement with experimental constraints, the Yukawa couplings F have to be very small. As a consequence, the thermal production rates for lepton asymmetries are also very small unless they are resonantly amplified. In scenarios I and II this requires a small mass splitting between M_2 and M_3. This can either be viewed as "fine tuning" or be related to a new symmetry. In the following we focus on these two scenarios, I and II. We do not discuss the origin of the small mass splitting here, but only list its implications. (As far as this work is concerned, the sterile neutrino mass spectrum in the νMSM follows from the requirement to simultaneously explain η_B and Ω_DM. It is in accord with the principle of minimality and the idea to explain new physics without the introduction of a new scale above the electroweak one. We do not discuss a possible origin of the mass spectrum and flavor structure in the SM and νMSM, which to date is purely speculative; ideas on the origin of a low seesaw scale and speculations on the small mass splitting can be found in the literature, and a similar spectrum has been considered in a supersymmetric theory.)

Fermionic dispersion relations in a medium can have a complicated momentum dependence. In the following we make the simplifying assumption that all neutrinos have hard spatial momenta p ∼ T and parameterize the effect of the medium by a temperature dependent quasiparticle mass matrix M_N(T), which we define via M_N(T)² = H_N² − p̄² at |p| = p̄ ∼ T. Here H_N is the dispersive part of the temperature dependent effective Hamiltonian given in the appendix. The general structure of M_N(T) is rather complicated, but we are only interested in the regimes T ≲ M (DM production) and T > T_EW (baryogenesis). In analogy to the vacuum notation, we refer to the temperature dependent eigenvalues of M_N(T) as M_1(T) and M_2(T), to their average as M̄(T) and to their splitting as δM(T). Though the N_I are the fields whose excitations correspond to mass eigenstates in the microscopic theory, the mass matrix M_N(T) in the effective quasiparticle description is not necessarily diagonal in the N_I basis for T ≠ 0. The effective physical mass splitting δM(T) depends on T in a non-trivial way. This dependence is essential in the regime M̄(T) ≫ δM(T), which we are mainly interested in. In principle M̄(T) also depends on temperature, but this dependence is practically irrelevant, and replacing M̄(T) by M at all temperatures of consideration does not cause a significant error.

There are three contributions to the temperature dependent physical mass splitting: the splitting ΔM that appears in the Lagrangian, the Dirac mass m_D(T) = F v(T) that is generated by the coupling to the Higgs condensate, and thermal masses due to forward scattering in the plasma, including Higgs particle exchange. The interplay between the different contributions leads to non-trivial effects as the temperature changes.

Baryogenesis

For successful baryogenesis it is necessary to produce a lepton asymmetry of ∼ 10^-10 at T ≳ T_+ that survives until T_EW and is partly converted into a baryon asymmetry by sphalerons, see condition i). In this work we focus on scenarios I and II, in which only two sterile neutrinos N_2,3 are involved in baryogenesis.
In these scenarios baryogenesis is only possible if the physical mass splitting is sufficiently small (δM(T) ≪ M̄) and leads to a resonant amplification. On the other hand, it should be large enough for the sterile neutrinos to perform at least one oscillation. Thus, baryogenesis is most efficient if the splitting is of the same order of magnitude as the relaxation rate (or thermal damping rate) Γ_N at T ∼ T_+. Here Γ_N is the temperature dependent dissipative part of the effective Hamiltonian that appears in the kinetic equations given in section 4; it is defined in appendix A.3.2 and calculated in section 4.2, and is essentially given by the sterile neutrino thermal width. However, this condition only provides a rule of thumb to identify the region where baryogenesis is most efficient. Numerical studies in section 5 show that the observed BAU can be explained even far away from this point, for M ≫ δM(T) ≫ Γ_N(T). Thus, the mass degeneracy δM(T) ≪ M̄ is the only serious tuning required in scenario II. It has been found that no such mass degeneracy is required in scenario III.

Dark Matter Production

In scenario I the N_1 dark matter has to be produced thermally from the primordial plasma. In the absence of chemical potentials, the resulting spectrum of N_1 momenta has been determined previously. State of the art X-ray observations, structure formation and Lyman-α forest observations suggest that this production mechanism is not sufficient to explain Ω_DM, because the required N_1 mass and mixing are astrophysically excluded. However, in the presence of a lepton chemical potential, the dispersion relation for active neutrinos is modified due to the MSW effect. If the chemical potential is large enough, this can lead to a level crossing between active and sterile neutrinos, resulting in a resonant amplification of the N_1 production rate. The full dark matter spectrum is a superposition of a smooth distribution from the non-resonant production and a non-thermal spectrum with distinct peaks at low momenta from the resonant mechanism. In order to explain all observed dark matter by N_1 neutrinos, lepton asymmetries |μ_α| ∼ 8 × 10^-6 are required at T_DM ∼ 100 MeV. This is the origin of condition ii) already formulated in section 2.4.

Again the resonance condition indicates the region where the asymmetry production is most efficient. For T_d, T_- ≪ T_EW it imposes a much stronger constraint on the mass splitting than during baryogenesis, because the thermal rates Γ_N are much smaller. The asymmetries can be created in two different ways, either during the freezeout of N_2,3 around T ∼ T_- or in their decay at T ∼ T_d. During these processes we can use the vacuum value for v. As discussed in appendix A.3.1, the temperature dependence of δM(T) is weak for T < T_-. The rates, on the other hand, still depend rather strongly on temperature; thus it is usually not possible to fulfill the resonance condition at T = T_- and T = T_d simultaneously. Therefore one can distinguish two scenarios: the asymmetry generation is efficient either during freezeout (freezeout scenario) or during decay (decay scenario). On the other hand, the resonance condition can be fulfilled simultaneously at T = T_+ and T = T_d, or at T = T_+ and T = T_-, because at T = T_+ also the mass splitting depends on temperature. The strongest "fine tuning" requirement in the νMSM is therefore the resonance condition at low temperatures. During the decay, δM(T_d) ≈ δM(T = 0), and fulfilling the resonance condition at low temperature requires a precise cancellation among the parameters that enter the physical mass splitting; the conditions involved have to be fulfilled individually.
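The rule of thumb above (splitting of the order of the damping rate) can be visualized with a toy sketch. Both the power-law forms and all numbers below are illustrative assumptions, not the rates computed in section 4.2; the sketch merely locates a temperature at which a given splitting δM(T) crosses Γ_N(T):

```python
import numpy as np

# Toy models (assumptions for illustration only): a vacuum splitting plus
# a thermal piece, and a damping rate growing linearly in T (arbitrary units).
def delta_M(T, dM0=1e-9, a=1e-12):
    return dM0 + a * T**2   # toy thermal contribution to the splitting

def gamma_N(T, c=1e-10):
    return c * T            # toy damping rate

T = np.logspace(-1, 3, 2000)   # temperature grid in GeV (toy range)
ratio = delta_M(T) / gamma_N(T)

# The "maximal resonance" temperature in this toy model:
i = np.argmin(np.abs(np.log(ratio)))
print(f"delta_M ~ Gamma_N at T ~ {T[i]:.2f} GeV (toy numbers)")
```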
This condition imposes a strong constraint on the active neutrino mass matrix. It can be fulfilled when the real part of the off-diagonal elements is small. Note that this implies that U_N is close to unity. This is certainly the case when the real part of the complex angle ω in R is a multiple of π/2. In sections 6 and 7 we will focus on this region and always choose Re ω = π/2. It should be clear that this is a conservative approach, since the production of lepton asymmetries can also be efficient away from the maximally resonant regions. The lower bound on the splitting can always be made consistent with the resonance condition by adjusting the otherwise unconstrained parameter ΔM. At tree level this parameter is effectively fixed by the resonance condition, in which the dependence of the right hand side on ΔM is weak. The range of values for ΔM dictated by this condition is extremely narrow; it requires a tuning of order ∼ 10^-11 (in units of M). Quantum corrections are of order ∼ m_i, i.e. much bigger than δM(T_-). The high degree of tuning necessary to explain the observed DM is not understood theoretically; some speculations can be found in the literature. However, the origin of this fine-tuning plays no role for the present work. In the following we will refer to the νMSM with the condition Re ω = π/2 and the fixing of ΔM as the constrained νMSM. Since the first term in the square root of the mass splitting formula also depends on Re ω, fixing this parameter exactly to a multiple of π/2 usually does not exactly give the minimal δM. However, it considerably simplifies the analysis, and deviations from such a value can in any case only be small due to the above considerations.

Experimental Searches and Astrophysical Bounds

The experimental, astrophysical and BBN bounds presented in this section and in the figures in sections 5-7 are derived under the premise that the mass and mixing of N_1 qualify it as a DM candidate, while N_2,3 are responsible for baryogenesis (scenarios I and II). Some of them loosen if one drops the DM requirement and considers the νMSM as a theory of baryogenesis and neutrino oscillations only, as in scenario III.

Existing Bounds

A detailed discussion of the existing experimental and observational bounds on the νMSM can be found in the literature, along with updates that incorporate the effect of recent measurements of the active neutrino mixing matrix U_ν, in particular θ_13 ≠ 0. In the following we re-analyze all relevant constraints on the seesaw partners N_2,3 from direct search experiments and BBN in the light of these experimental results. We also briefly review existing constraints on the dark matter candidate N_1. As far as the known (active) neutrinos are concerned, the main prediction of the νMSM is that one of them is (almost) massless. This fixes the absolute mass scale of the remaining two neutrinos. Currently there is neither a clear prediction for the phases in U_ν within the νMSM nor an experimental determination, though the experimental value for θ_13 suggests that a measurement might in principle be possible. Regarding the sterile neutrinos, one has to distinguish between N_2,3 and N_1.

Seesaw Partners N_2 and N_3

LHC

The small values of M_I ≪ v in principle make it possible to produce the sterile neutrinos in the laboratory. However, the smallness of the Yukawa couplings F implies that the branching ratios are very small. Therefore the number of collisions (rather than the required collision energy) is the main obstacle in direct searches for the sterile neutrinos. In particular, they cannot be seen in high energy experiments such as ATLAS or CMS.
It is therefore a prediction of the νMSM that these experiments see nothing but the Higgs boson. Vice versa, the lack of findings of new physics beyond the SM at the LHC can be viewed as indirect support for the model (though this prediction is of course relaxed if nature happens to be described by the νMSM plus something else).

Direct Searches

The sterile neutrinos participate in all processes that involve active neutrinos, but with a probability that is suppressed by the small mixings U_α². The mixing of N_2,3 with the SM is large enough that they can be found experimentally. A number of experiments that allow one to constrain the sterile neutrino properties have been carried out in the past, in particular CERN PS191, NuTeV, CHARM, NOMAD and BEBC. These can be grouped into beam dump experiments and peak searches.

Peak search experiments look for the decay of charged mesons into charged leptons (e± or μ±) and neutrinos. Due to the mixing of the active neutrino flavor eigenstates with the sterile neutrinos, the final state in a fraction of decays suppressed by U_e² (or U_μ²) is e± + N_I (or μ± + N_I). The kinematics of the two body decay can be reconstructed from the measured charged lepton, but the sterile flavor cannot be determined, because of the N_I mass degeneracy. Therefore these experiments are only sensitive to the inclusive mixing U_α² defined above, where α is the flavor of the charged lepton.

In beam dump experiments, sterile neutrinos are also created in the decay of mesons, which have been produced by sending a proton beam onto a fixed target. A second detector is placed near the beamline to detect the decay of the sterile neutrinos into charged particles. Also in beam dump experiments the sterile flavors cannot be distinguished. In this case the expected signal is of the order U_α² U_β², because the creation and the decay of the N_I each involve one active-sterile mixing. For instance, the CERN PS191 experiment constrains combinations of this type, a set that differs from the quantities considered by the experimental group. It has been pointed out that the original interpretation of the PS191 (and also CHARM) data cannot be directly applied to the seesaw Lagrangian. The authors of that re-analysis translate the bounds on active-sterile neutrino mixing published by the PS191 and CHARM collaborations into bounds that apply to the νMSM, and kindly provided us with their data. We use these bounds, along with the NuTeV constraints, as an input to constrain the region in the νMSM parameter space that is compatible with experiments.

Our results are displayed as green lines of different shade in the summary plots in figures 7, 13 and 14 in sections 5 and 7. The different lines have to be interpreted as follows. Each shade of green corresponds to one experiment. For each experiment there is a solid and a dashed line. The solid line is an exclusion bound: there exists no choice of νMSM parameters that leads to a combination of U² and M above this line and is consistent with table 1 and the experiment in question. In order to obtain the exclusion bound from an experiment for a particular choice of M, we varied the CP-violating phases and Im ω (the mixings U_α² do not depend on Re ω, and their dependence on ΔM is negligible). We checked for each choice whether the resulting U_α² are compatible with the experiment in question. The exclusion bound in the M − U² plane is obtained from the set of parameters that leads to the maximal U² for given M amongst all choices that are in accord with experiment. The exclusion bounds are independent of the other lines in the summary plots.
The dashed lines (in the same shade as the exclusion bounds) represent the bounds imposed by each experiment if the CP-violating phases are instead fixed, self-consistently, to the values that we used to produce the red and blue lines in the summary plots, which encircle the regions in which enough asymmetry is created to explain the BAU and DM. The NuTeV experiment puts bounds only on the mixing angle U_μ². This induces a much weaker constraint in the M − U² plane for inverted mass hierarchy than the other experiments. Our results differ from earlier ones, in which the experimental constraints on the individual U_α² were directly reported in the M − U² plane, and in which only the PS191 exclusion bound was computed by distinguishing between mass hierarchies.

Active Neutrino Oscillation Experiments

The region below the "seesaw" line in figures 7, 13, 11 and 14 is excluded because, for the experimental values listed in table 1, there exists no choice of νMSM parameters that would lead to this combination of M and U². (There are also constraints on U_τ², which are, however, too weak to be of practical relevance.)

Big Bang Nucleosynthesis

It is a necessary requirement that N_2,3 have decayed sufficiently long before BBN that their decay products do not affect the predicted abundances of light elements, which are in good agreement with observation. The total increase of entropy due to the N_2,3 decay is small, but the decay products have energies in the GeV range, and even a small number of them can dissociate enough nuclei to modify the light element abundances. Since the sterile neutrinos are created as flavor eigenstates, they oscillate rapidly around the time of BBN. On average they spend roughly half the time in each flavor state, so it is not the individual lifetime of each flavor but their average that determines the relaxation time. This allows us to estimate the inverse N_2,3 lifetime as τ⁻¹ ≃ (1/2) tr Γ_N at T = 1 MeV. For τ < 0.1 s the decay products and all secondary particles have lost their excess energy to the plasma in collisions and reached equilibrium by the time of BBN. We impose the condition τ < 0.1 s and vary all free parameters to identify the region in the M − U² plane consistent with this condition. The BBN exclusion bounds in figures 7, 13 and 14 represent the region in which no choice of νMSM parameters exists that is consistent with table 1 and the above condition. Note that τ⁻¹ ≃ (1/2) tr Γ_N and the condition τ < 0.1 s are both rough estimates; the BBN bound we plot may change by a factor of order one when a detailed computation is performed.
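The lifetime criterion can be turned into a one-line numerical check. The damping matrix below is a placeholder (in a real analysis Γ_N at T = 1 MeV would come from the rates of section 4.2); the only substantive content is the conversion of τ ≃ 2/tr Γ_N from natural units to seconds and the comparison with 0.1 s:

```python
import numpy as np

HBAR_GEV_S = 6.582e-25  # hbar in GeV*s, converts a rate in GeV to 1/s

# Placeholder damping matrix at T = 1 MeV, in GeV (illustrative numbers only)
Gamma_N = np.diag([3.0e-24, 2.0e-24])

tau = 2.0 / np.trace(Gamma_N).real * HBAR_GEV_S  # tau ~ (tr Gamma_N / 2)^-1
print(f"tau ~ {tau:.3f} s ->",
      "passes" if tau < 0.1 else "fails", "the BBN cut")
```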
Dark Matter Candidate N_1

The coupling of the DM candidate N_1 is too weak to be constrained by any past laboratory experiment. However, different indirect methods have been used to identify the allowed region in the plane spanned by M_1 and the mixing θ_1² ≡ Σ_α |θ_α1|². The possibility of sterile neutrino DM has been studied by many authors in the past; in the following we summarize the most important constraints. As a decaying dark matter candidate, N_1 particles produce a distinct X-ray line in the sky that can be searched for.

There are two different mechanisms for DM production in the νMSM. The first one, common thermal (non-resonant) production, leads to a smooth distribution of momenta. The second one, which relies on a resonance produced by a level crossing in the active and sterile neutrino dispersion relations (see below), creates a highly non-thermal spectrum. Observations of the matter distribution in the universe constrain the DM free streaming length. Without resonant production, the X-ray searches pose an upper bound M_1 ≲ 3-4 keV, while the distribution reconstructed from Lyman-α forest observations suggests a lower bound on the mass M_1 ≳ 8 keV. If non-resonant production were the only mechanism, the tension between these bounds would rule out the νMSM as the common source of BAU, DM and neutrino oscillations; in combination with the X-ray bound, this makes resonant production necessary. In a realistic scenario involving both production mechanisms (asymmetries |μ_α| of order 10^-5) the Lyman-α bound relaxes and has been estimated as M_1 > 2 keV. In our analysis we take these results for granted, though some uncertainties remain to be clarified, see section 3.2.1.

The DM production rate can be resonantly amplified by the presence of a lepton chemical potential in the plasma. The resonance occurs due to a level crossing between active and sterile neutrino dispersion relations, caused by the MSW effect. This mechanism enhances the production rate for particular momenta as they pass through the resonance, resulting in a non-thermal DM momentum distribution that is dominated by low momenta and is thus "colder". Effectively, this mechanism "converts" lepton asymmetries into DM abundance, as the asymmetries are erased while DM is produced. The full DM spectrum in the νMSM is a superposition of the two components. The dependence on μ_α is, however, rather complicated. In particular, the naive expectation that the largest |μ_α|, which maximizes the efficiency of the resonant production mechanism, leads to the lowest average momentum ("coldest" DM) is not true, because μ_α does not only affect the efficiency of the resonance but also the momentum distribution of the produced particles. The N_1 abundance must correctly reproduce the observed DM density Ω_DM. This requirement defines a line in the M_1 − θ_1² plane, the production curve. All combinations of M_1 and θ_1² along the production curve lead to the observed DM abundance. Due to the resonant contribution, the production curve depends on μ_α. This dependence has been studied under the assumption μ_e = μ_μ = μ_τ. Finally, DM sterile neutrinos may have interesting effects for supernova explosions.

(Figure 2: Different constraints on the N_1 mass and mixing. The blue region is excluded by X-ray observations, the dark gray region M_1 < 1 keV by the Tremaine-Gunn bound from phase space analysis. The points on the upper solid black line correspond to the observed DM produced in scenario I in the absence of lepton asymmetries (μ = 0); points on the lower solid black line give the correct Ω_DM for |μ| = 1.24 × 10^-4, the maximal asymmetry we found. The region between these lines is accessible for 0 ≤ |μ| ≤ 1.24 × 10^-4. We do not display bounds derived from Lyman-α forest observations, because they depend on μ in a complicated way and the calculation currently involves considerable uncertainties.)

Figure 2 summarizes these bounds on the properties of N_1. The two thick black lines are the production curves for μ = 0 and for |μ| = 1.24 × 10^-4, the maximal asymmetry we found at T = 100 MeV in our analysis, see figure 12.
The allowed region lies between these lines: above the μ = 0 line the non-resonant production alone would already overproduce DM, while below the production curve for maximal asymmetry N_2,3 fail to produce the required asymmetry for all choices of parameters. The maximal asymmetry had previously been estimated as ∼ 7 × 10^-4, which agrees with our estimate shown in figure 12 up to a factor ∼ 5. The corresponding production curve is shown as a dotted line in figure 2. Our result is smaller and imposes a stronger lower bound on the N_1 mixing, which makes it easier to find this particle (or exclude it as the only constituent of the observed DM) in X-ray observations. However, though our calculation is considerably more precise than the previous estimate, the exclusion bound displayed in figure 12 still suffers from uncertainties of order one, due to the issues discussed in appendix A.4 and the strong assumption μ_e = μ_μ = μ_τ that was made to find the dependence of the production curve on the asymmetry. In order to determine the precise exclusion bound, the dependence of the production curve on the individual flavor asymmetries has to be determined.

Future Searches

Indirect Detection

The DM candidate N_1 can be searched for astrophysically, using high resolution X-ray spectrometers to look for the emission line from its decay in DM dense regions. For details and references see the proposal submitted to the European Strategy Preparatory Group by Boyarsky et al.

Structure Formation

Model-independent constraints on N_1 can be derived from the dynamics of dwarf galaxies. The existing small scale structures in the universe, such as galaxy subhalos, provide another probe that is sensitive to the N_1 properties, because such structures would be erased if the mean free path of the DM particles were too long. It can be exploited by comparing numerical simulations of structure formation to the distribution of matter in the universe that is reconstructed from Lyman-α forest observations. However, the momentum distribution of resonantly produced N_1 particles can be complicated, leading to a complicated dependence of the allowed mixing angle on the N_1 mass and the lepton asymmetries in the plasma. A reliable quantitative analysis would involve numerical simulations that use the non-thermal N_1 momentum distribution predicted in scenario I as input. While for Cold Dark Matter extensive studies have been performed, simulations for other spectra have only been done for certain benchmark scenarios, e.g. for a model of Warm Dark Matter.

Direct Detection

As the solar system passes through the interstellar medium, the DM particles N_1 can interact with atomic nuclei in the laboratory via the θ_1θ_1*-suppressed weak interaction. This in principle opens the possibility of direct detection. Such detection is extremely challenging due to the small mixing angle and the background from solar and stellar active neutrinos. It has, however, been argued that it may be possible.

BBN

The primordial abundances of light elements are sensitive to the number of relativistic particle species in the primordial plasma during BBN, because these affect the energy budget, which determines the expansion rate and temperature evolution. Any deviation from the SM prediction is usually parameterized in terms of the effective number of neutrino species N_eff. At temperatures around 2 MeV most N_1 particles are relativistic. However, the occupation numbers are far below their equilibrium value, and the effect of the N_1 on N_eff is very small.
Given the error bars in current measurements, the νMSM predicts a value for N_eff that is practically indistinguishable from N_eff = 3. In principle the late time asymmetry in active neutrinos predicted by the νMSM also affects BBN, because the chemical potential modifies the momentum distribution of the neutrinos in the plasma. However, the predicted asymmetry is several orders of magnitude smaller than existing bounds, and it is extremely unlikely that this effect can be observed in the foreseeable future.

Seesaw Partners N_2,3

The singlet fermions participate in all the reactions the ordinary neutrinos do, with a probability suppressed roughly by a factor U². However, due to their masses, the kinematics change when an ordinary neutrino is replaced by N_I. The N_2,3 particles can be found in the laboratory using the strategies outlined in section 3.1.1, which have been applied in past searches. One strategy, used in peak searches, is the study of the kinematics of rare K, D and B meson decays, which can constrain the N_I masses and mixings. This includes two body decays (e.g. K± → μ±N, K± → e±N) and three-body decays (e.g. K_{L,S} → π± + e∓ + N_{2,3}). The precise study of the kinematics is possible at φ (like KLOE), charm and B factories, or in experiments with kaons where the initial 4-momentum is well known. For 3 MeV < M_I < 414 MeV this possibility has recently been discussed in the literature. The second strategy aims at observing the decay of the N_I themselves ("nothing" → leptons and hadrons) in proton beam dump experiments. The N_I are created in the decays of K, D or B mesons produced in a fixed target into which the proton beam is dumped. The detector must be placed at some distance along the beamline. Several existing or planned neutrino facilities (related, e.g., to the CERN SPS, MiniBooNE, MINOS or J-PARC) could be complemented by a dedicated near detector for these searches. Finally, the two strategies can be unified, so that production and decay occur inside the same detector.

For the mass interval M < m_K both strategies can be used. An upgrade of the NA62 experiment at CERN would allow a search in the mass region below the kaon mass m_K. For m_K < M < m_D it is unlikely that a peak search for missing energy at beauty, charm and τ factories will gain the necessary statistics. Thus, in this region the search for N_2,3 decays is the most promising strategy. Dedicated experiments using the SPS proton beam at CERN could completely explore the very interesting parameter range M < 2 GeV. This has been outlined in detail in the proposal submitted to the European Strategy Preparatory Group by Gorbunov et al. An upgrade of the LHCb experiment could allow one to combine both strategies and to constrain the cosmologically interesting region in the M − U² plane. With existing or planned proton beams and B factories, the mass region between the D and B meson thresholds is in principle accessible, but such experiments would be extremely challenging: a search in the cosmologically interesting parameter space would require an increase of the present intensity of the CERN SPS beam by two orders of magnitude, or the production and kinematic study of more than 10^10 B mesons.

Kinetic Equations

Production, freezeout and decay of the sterile neutrinos are non-equilibrium processes in the hot primordial plasma. We describe them by effective kinetic equations of the type used and further elaborated in previous work.
These equations are similar to those commonly used to describe the propagation of active neutrinos in a medium. They rely on a number of assumptions and may require corrections when memory effects or off-shell contributions are relevant. These assumptions are discussed in appendix A. We postpone a more refined study to the time when such precision is required from the experimental side. In the following we briefly sketch the derivation of the kinetic equations we use; more details are given in appendix A.

Short Derivation of the Kinetic Equations

We describe the early universe as a thermodynamical ensemble. In quantum field theory any such ensemble, be it in equilibrium or not, can be described by a density matrix ρ. The expectation value of any operator A at any time can be computed as ⟨A⟩ = tr(ρA). As there are infinitely many states in which the world can be, infinitely many numbers are necessary to characterize ρ exactly. These can either be given by all matrix elements of ρ or, equivalently, by all n-point correlation functions of all quantum fields. Either way, any practically computable description requires truncation. The leptonic charges can be expressed in terms of field bilinears, so it is sufficient to concentrate on the two-point functions.

Instead of bilinears in the field operators themselves, we consider bilinears in the ladder operators a_I, a_I† for sterile and a_α, a_α† for active neutrinos. In principle there is a large number of bilinears, but only few of them are relevant for our purpose. For each momentum mode of the sterile neutrinos we consider two 2×2 matrices formed by products of ladder operators a_I† a_J, one for positive and one for negative helicity. Since M_N is diagonal in the N_I basis, a_I† a_I can be interpreted as a number operator for physical sterile neutrinos, while a_I† a_J with I ≠ J corresponds to coherences. The N_I are Majorana fields, but we can define a notion of "particle" and "antiparticle" by their helicity states. In the limit T ≫ M, i.e. for a negligibly small Majorana mass term, the total lepton number (summed over α and I) defined this way is conserved. All other bilinears in the ladder operators for sterile neutrinos are either of higher order in F or quickly oscillating and can be neglected. Practically we are not interested in the time evolution of individual modes, but only in the total asymmetries. We therefore describe the sterile neutrinos by momentum integrated abundance matrices ρ_N for "particles" and ρ̄_N for "antiparticles". The precise definitions are given in appendix A.1.

The active leptons are close to thermal equilibrium at all times of consideration. This is because kinetic equilibration is driven by fast gauge interactions, while the relaxation rates for the asymmetries are of second order in the small Yukawa couplings F. We thus describe the active sector by four numbers: the temperature and three asymmetries (one for each flavor, integrated over momentum). More precisely, the asymmetry in the SM leptons of flavor α is given by the difference between the lepton and antilepton abundances, which we denote by μ_α. We study the time evolution of each flavor separately and find that they can differ significantly from each other.
Following the steps sketched in appendix A, one can find effective kinetic "rate equations" for ρ_N, ρ̄_N and the asymmetries μ_α. Here X = M/T, ρ^eq is the common equilibrium value of the matrices ρ_N and ρ̄_N, H_N is the dispersive part of the effective Hamiltonian for sterile neutrinos that is responsible for oscillations, and the rates Γ_N, Γ̃_N and Γ_L form the dissipative part of the effective Hamiltonian. It is convenient to describe the sterile sector by ρ_+ and ρ_−, the CP-even and CP-odd deviations from equilibrium, rather than by ρ_N and ρ̄_N. The resulting equations, written in terms of ρ_+ and ρ_−, are the basis of our numerical studies.
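The schematic structure of these equations (oscillation through a Hermitian effective Hamiltonian plus damping towards equilibrium) can be illustrated with a toy two-flavor integration. Everything below, the values of H, Γ and ρ^eq and the overall normalization, is an illustrative assumption and not the equations of appendix A:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy 2x2 effective Hamiltonian and damping matrix (illustrative units)
H = np.array([[0.0, 0.1], [0.1, 1.0]], dtype=complex)   # dispersive part
G = 0.05 * np.eye(2, dtype=complex)                      # dissipative part
rho_eq = np.eye(2, dtype=complex)                        # equilibrium value

def rhs(t, y):
    """d(rho)/dt = -i [H, rho] - 1/2 {Gamma, rho - rho_eq}."""
    rho = (y[:4] + 1j * y[4:]).reshape(2, 2)
    d = rho - rho_eq
    drho = -1j * (H @ rho - rho @ H) - 0.5 * (G @ d + d @ G)
    return np.concatenate([drho.real.ravel(), drho.imag.ravel()])

rho0 = np.zeros((2, 2), dtype=complex)  # start with no sterile abundance
y0 = np.concatenate([rho0.real.ravel(), rho0.imag.ravel()])
sol = solve_ivp(rhs, (0.0, 200.0), y0, rtol=1e-8, atol=1e-10)

rho_end = (sol.y[:4, -1] + 1j * sol.y[4:, -1]).reshape(2, 2)
print("diagonal (abundances):", rho_end.diagonal().real)   # -> close to 1
print("off-diagonal (coherence):", abs(rho_end[0, 1]))     # -> damped away
```

In the full problem ρ_N and ρ̄_N evolve with slightly different effective Hamiltonians, and it is the resulting difference ρ_− that feeds the flavored asymmetries μ_α.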
Computation of the Rates

The rates appearing in the kinetic equations can be expressed in terms of the flavor matrices R and R_M defined in appendix A.3 (with no sum over α implied in these expressions). Here F̃ = F U_N, T²/M_0 is the Hubble rate, and M_0 = M_P (45/(4π³ g_*))^{1/2}, with the effective number of relativistic degrees of freedom g_* shown in figure 3. The flavor matrices R(T, M) and R_M(T, M) are almost diagonal, since their off-diagonal elements involve active neutrino oscillations, which are suppressed by at least a factor m_i/T. We will always neglect the off-diagonal elements. R(T, M) and R_M(T, M) contain contributions from decays and scatterings. In finite temperature field theory these can be associated with different cuts through the N_I self-energy shown in figure 4. The scatterings keep the N_I in thermal equilibrium for T > T_−. At T ≃ T_− they become inefficient and the sterile neutrinos freeze out. Due to their small coupling they are long-lived, but unstable, and decay at a temperature T_d. For T_d ≪ T_−, decay and freezeout are two separate processes and can be treated independently. This is the case in the interesting part of the νMSM parameter space.

(Figure 4: the N_I self-energy for T > v. Γ_N is obtained from the discontinuity of the diagrams, which can be computed by cutting them in various ways. The gray self-energy blobs indicate that dressed lepton and Higgs propagators have to be used; cuts through them reveal a large number of processes.)

Recently it has been pointed out that current estimates suffer from an error of order one due to infrared and collinear enhancements at high temperature. Systematic approaches to include these effects exist for T > M (relevant for baryogenesis) and for M > T (relevant for late time asymmetries). We ignore this effect in our current study, as it is comparable to other uncertainties in the kinetic equations and would only slightly change the results in the relevant regions of the νMSM parameter space.

Dark Matter Production

For T_d ≪ T_−, which is the case in the interesting part of the νMSM parameter space, freezeout and decay happen in different temperature regimes. At temperatures T ≳ T_− the processes that keep the plasma in equilibrium are scatterings mediated by the weak interaction. Furthermore, the lepton masses are in first approximation negligible (for some parameter choices this assumption can be violated for the τ mass, introducing a small error). To be specific, in the high temperature limit, when all lepton masses are negligible, R simplifies considerably; in the low temperature regime, where R ≃ R_M, one finds rates dominated by decays. The indices (S) and (D) indicate that the dominant contribution to a rate comes from scatterings or decays, respectively. The functions R^(S)(T, M) and R^(D)(T, M) are displayed in figure 5. There are leptonic and semi-leptonic channels, depending on the temperature either with quarks (before hadronization) or mesons (after hadronization) in the final state. Let Γ_{N_I → α} be the rate at which N_I decays into a final state of flavor α. The total rate is then obtained by summing over all possible final states that have flavor α, with a factor 2 that accounts for the equal probabilities for decay into particles and antiparticles at tree level. This simple form is a result of the fact that the Yukawa couplings can be factored out of the corresponding amplitudes and that the kinematics of N_2 and N_3 are the same due to their degenerate mass. Most of the rates required for our study have been computed elsewhere; the remaining ones are given in appendix D. For T ∼ T_d ≠ 0 with T_d ≪ M the sterile neutrinos are non-relativistic, and one can estimate the rates by their vacuum values for T ≲ T_d ≪ M. In practice we can simply add the scattering and decay contributions at all temperatures, though in principle we do not know the scattering contribution outside the range plotted in figure 5, and the decay contribution is obtained from vacuum rates. This is justified because for T ≳ T_− our expressions for the decay rates are incorrect, but there the decay contribution is subdominant, cf. figure 5. For T_d < T < T_− our expressions for both the decay and the scattering rates are incorrect, but both are smaller than the rate of Hubble expansion and have a negligible effect.

Baryogenesis from Sterile Neutrino Oscillations

The BAU in the νMSM is produced during the thermal production of the sterile neutrinos N_I. This is in contrast to most other (thermal) leptogenesis scenarios, where decays and inverse decays play the central role. The violation of total fermion number by the Majorana mass term M_M is negligible at T_EW ≫ M, but asymmetries in the helicity states of the individual flavors can be created. The sum over these vanishes up to terms suppressed by M/T_EW, but because sphaleron processes only act on the left chiral fields, the generated BAU can be much bigger. In this sense baryogenesis in the νMSM can be regarded as a version of "flavored leptogenesis".

In this section we explore the part of the νMSM parameter space where a BAU consistent with observation, i.e. η_B ∼ 10^-10, can be generated. We assume that two sterile neutrinos N_2,3 participate in baryogenesis, as required in scenarios I and II. This assumption is motivated by the premise that N_1 should be a viable DM candidate, with mass and mixing consistent with astrophysical bounds. These require its Yukawa interaction to be too small to be relevant for baryogenesis, see section 3. In this sense we consider the νMSM as a model of both baryogenesis and DM production, but are not concerned here with the DM production mechanism itself, which is discussed in the following section 6. This corresponds to scenario II. The requirement to explain DM only enters implicitly, as we demand the N_1 mass and mixing to be consistent with astrophysical observations. If one completely drops the requirement to explain the observed DM and studies the νMSM as a theory of baryogenesis and neutrino oscillations only (scenario III), the resulting bounds on the parameters weaken considerably; in particular, it has been found that no mass degeneracy between the sterile neutrino masses is then needed.

We extend the previous analysis, taking into account two additional aspects. First, we use the non-zero value for the active neutrino mixing angle θ_13 given in table 1, which brings in a new source of CP-violation through the phase δ. Second, we include the contribution of the temperature dependent Higgs expectation value v(T) to the effective Hamiltonian, coming from the real part of the diagram in figure 4a). It is relevant for temperatures close to the electroweak scale.
We solve numerically the system of kinetic equations to find the lepton asymmetries at T ∼ T_EW, assuming that there is no initial asymmetry. The effective Hamiltonian is given by the expressions of section 4. We fix the active neutrino masses and mixing angles according to table 1 and choose the phases δ, α_1 and α_2 as well as Re ω so as to maximize the asymmetry. Interestingly, for normal hierarchy of the neutrino masses, the value of Re ω that maximizes the asymmetry is close to π/2, as required in the constrained νMSM. This allows us to identify the region in the remaining three-dimensional parameter space, consisting of M, ΔM and Im ω, where an asymmetry ≳ 10^-10 can be created. Deep inside this region the asymmetry generated for this choice of phases can be much too large, but it can always be reduced by choosing different phases. Thus, any choice of M, ΔM and Im ω inside this region can reproduce the observed BAU.

In practice it is difficult to find the phases that maximize the asymmetry at each single point, as we are dealing with a seven-dimensional parameter space. However, the analysis can be simplified. First, the choice of phases that maximizes the asymmetry practically does not depend on ΔM, because the dependence of the Yukawa coupling on ΔM is very weak. Second, our numerical studies reveal that in most of the parameter space Im ω is the main source of CP-violation. The other phases have comparably little effect on the final asymmetry, except in the region around Im ω = 0. Surprisingly, the values for δ, α_1 and α_2 that maximize it vary only very little and are always close to zero. One possible interpretation is that Im ω provides the main source for the asymmetry generation, while δ, α_1 and α_2 contribute more strongly to the washout. However, due to the various different time scales involved, we cannot extract a single CP-violating parameter at this point, as is commonly done in thermal leptogenesis scenarios to study such connections analytically. The above seems to be valid everywhere except in the region Im ω ∼ 0, where δ, α_1 and α_2 are the only sources of CP-violation.

We present our results in figure 6, which shows the allowed region in the ΔM − Im ω plane for several masses M. The lines correspond to the exactly observed asymmetry; inside them more asymmetry is generated. As pointed out above, any point inside the lines is consistent with observation, because the asymmetry can be reduced by choosing different phases. Figure 6 shows that even for small masses around 10 MeV enough asymmetry can be created. However, for small masses the CP-violation contained in δ, α_1 and α_2 is not sufficient, and the allowed region consists of two disjoint parts that are separated by the Im ω ≃ 0 region. The area of these parts increases with M. For masses of a few GeV, the CP-violation from δ, α_1 and α_2 alone is sufficient and the regions join. Interestingly, there appear to be mass-independent diagonal lines in the ΔM − e^{Im ω} plane that confine the region where enough asymmetry can be generated; we have currently not understood the origin of these lines parametrically. The inverted hierarchy generally allows one to produce more asymmetry than the normal hierarchy. There is an approximate symmetry between the regions with positive and negative Im ω. It would be exact if the remaining parameters were transformed accordingly, and it is related to the symmetry of the Lagrangian under the exchange of N_2 and N_3. As expected, these results are close to those obtained previously, which provides a good consistency check. The slightly bigger asymmetry is due to the additional source of CP-violation for θ_13 ≠ 0.
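The maximization over phases described above can be organized as a simple grid scan. In the sketch below, lepton_asymmetry is a hypothetical stand-in for the full kinetic-equation solver (replaced here by a toy expression so that the code runs); only the scan structure reflects the procedure in the text:

```python
import itertools
import numpy as np

def lepton_asymmetry(delta, a1, a2, re_w, im_w=2.0):
    """Hypothetical stand-in for the full solver of the kinetic equations.
    The toy expression below only mimics a phase-dependent output."""
    return abs(np.sin(re_w) * np.sinh(im_w)
               * 1e-11 * (1 + 0.3 * np.cos(delta + a2 - a1)))

grid = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)
best = max(
    ((d, a1, a2, rw, lepton_asymmetry(d, a1, a2, rw))
     for d, a1, a2, rw in itertools.product(grid, grid, grid, grid)),
    key=lambda t: t[-1],
)
print("phases maximizing the (toy) asymmetry:", best[:4])
print("maximal (toy) asymmetry:", best[-1])
```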
For experimental searches, the most relevant parameters are the mass M and the mixing between active and sterile neutrinos. In figure 7 we translate our results into bounds on the flavor-independent mixing parameter U² defined earlier. Using the results displayed in figure 6, we chose ΔM to maximize the asymmetry and find the region in the U² − M plane within which baryogenesis is possible. The plot has to be read as follows: for each point in the region between the blue lines there exists at least one choice of νMSM parameters that allows for successful baryogenesis. The plots in figure 7 are similar to earlier results, but the allowed region is slightly bigger due to the effect of the new source of CP-violation for θ_13 ≠ 0. The constraints on the mixing angle U² shown in figure 7 can be translated into constraints on the sterile neutrino lifetime τ_N^{-1} ≃ (1/2) tr Γ_N.

Figure 7: … GeV. The phases that maximize the asymmetry differ significantly for Im ω ≈ 0 and away from that region. In the region 0.5 < e^{Im ω} < 1.5 we chose α_2 = π, δ = 0, Re ω = 7π/10 for normal hierarchy and α_2 − α_1 = π, δ = π, Re ω = 3π/4 for inverted hierarchy. Everywhere else we chose δ = 3π/20, Re ω = π/2 for normal hierarchy and α_2 − α_1 = 11π/10, δ = 11π/20, Re ω = 4π/5 for inverted hierarchy. The upper panel shows the results for normal hierarchy, the lower panel for inverted hierarchy.

Late Time Lepton Asymmetry and Dark Matter Production

The lepton asymmetry at temperatures of a few hundred MeV is of crucial importance for the dark matter production in scenario I. Resonant dark matter production requires a lepton asymmetry |μ_α| ∼ 8·10^{-6} in the plasma, much larger than the baryon asymmetry. The details of this process have been outlined in previous work. Here we are not concerned with the dark matter production itself, but with the mechanisms that generate the required lepton asymmetry. This asymmetry must come from a source that is different from that of the baryon asymmetry, because N_{2,3} reach chemical equilibrium at some temperature T_+ < T_EW and the asymmetry in the leptonic sector is washed out (while the baryon asymmetry remains, as sphalerons are inefficient at T < T_EW). There are two distinct mechanisms that contribute to the late time asymmetry: the freezeout of N_{2,3} at T ∼ T_− and their decay at T ∼ T_d. The requirement that these two mechanisms produce enough asymmetry puts severe constraints on the parameters of the model, described in section 2.6. The value of Re ω is fixed to values near π/2. The mass splitting ΔM is limited to a very narrow range. Therefore we will use the physical mass splitting in vacuum δM instead of ΔM as a free parameter in the following. All experimentally known parameters are fixed to the values given in table 1. The phases δ, α_1 and α_2 are chosen to maximize the asymmetry. As in section 5, we observe that in most of the parameter space Im ω is the main source of CP-violation. We again find that it is convenient to split the parameter space into the region 0.5 < e^{Im ω} < 1.5 and its complement. For normal hierarchy we chose the phases α_2 = π/2, δ = 3π/2 in the region 0.5 < e^{Im ω} < 1.5 and α_2 = π/5, δ = 0 everywhere else. For inverted hierarchy we chose α_2 − α_1 = 7π/5 and δ = 3π/5 in the region 0.5 < e^{Im ω} < 1.5 and α_2 − α_1 = 0, δ = 9π/10 everywhere else. Note that for normal hierarchy F only depends on α_2 and δ, while for inverted hierarchy it depends on α_2 − α_1 and δ, because one neutrino is massless. We then study the parameter space spanned by M, δM and Im ω. As in section 5, we use the kinetic equations to calculate the lepton asymmetries as a function of T.
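As an illustration of this type of computation, here is a minimal toy integration of density-matrix equations of the above form. All matrices are placeholder numbers chosen only to make the relaxation visible; the real calculation uses the temperature-dependent rates of appendix A and includes the back-reaction terms in μ_α omitted here:

    import numpy as np
    from scipy.integrate import solve_ivp

    # Toy dispersive and dissipative parts (placeholder values, not nuMSM rates)
    H = np.array([[1.0, 0.0], [0.0, 1.001]])            # small "mass splitting" drives oscillations
    Gamma = 1e-3 * np.array([[2.0, 0.5], [0.5, 1.0]])   # Hermitian damping matrix
    rho_eq = 0.5 * np.eye(2)                            # equilibrium abundance

    def pack(rN, rNb):
        return np.concatenate([rN.real.ravel(), rN.imag.ravel(),
                               rNb.real.ravel(), rNb.imag.ravel()])

    def unpack(y):
        rN = (y[0:4] + 1j * y[4:8]).reshape(2, 2)
        rNb = (y[8:12] + 1j * y[12:16]).reshape(2, 2)
        return rN, rNb

    def rhs(t, y):
        rN, rNb = unpack(y)
        drN = -1j * (H @ rN - rN @ H) \
              - 0.5 * (Gamma @ (rN - rho_eq) + (rN - rho_eq) @ Gamma)
        drNb = -1j * (H.conj() @ rNb - rNb @ H.conj()) \
               - 0.5 * (Gamma.conj() @ (rNb - rho_eq) + (rNb - rho_eq) @ Gamma.conj())
        return pack(drN, drNb)

    y0 = pack(np.zeros((2, 2)), np.zeros((2, 2)))       # vanishing initial abundances
    sol = solve_ivp(rhs, (0.0, 5000.0), y0, rtol=1e-8, atol=1e-10)
    rN_end, _ = unpack(sol.y[:, -1])
    print(np.trace(rN_end).real)                        # approaches tr(rho_eq) = 1 as the system equilibrates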
The effective Hamiltonian is calculated from the expressions collected in appendix A. We impose thermal equilibrium with vanishing chemical potentials as the initial condition at a temperature T > T_− and look for the parameter region where |μ_α| > 8·10^{-6} at T = 100 MeV. The results are shown in figures 9 and 10. The required asymmetry can be created when the sterile neutrinos have masses in the GeV range. For small masses of M = 2 − 4 GeV the CP violation contained in α_1, α_2 and δ alone is not sufficient for normal hierarchy and barely sufficient for inverted hierarchy; a non-zero Im ω is required, and the allowed region consists of two disjoint parts along the Im ω axis which are separated by the Im ω ≃ 0 region. For larger masses (M ≳ 7 GeV for normal hierarchy, M ≳ 4 GeV for inverted hierarchy), the regions merge, but Im ω continues to be the most relevant source of CP violation in most of the parameter space. In addition, one can also observe disjoint regions along the δM axis. These can be identified with the decay scenario and the freezeout scenario. In the upper part of the figures, the asymmetry is mainly created during the freezeout of N_{2,3}, in the lower part during the decay. At T_−, Γ_N has considerably larger entries than at T_d. Thus, the resonance condition requires a smaller mass splitting in the decay scenario. For larger masses, both regions merge. However, freezeout and decay are always two separated processes, i.e. T_− ≫ T_d. As in figure 6, there is an approximate symmetry under a change of sign of Im ω, which is related to the symmetry of the Lagrangian under exchange of N_2 and N_3 and becomes exact when the phases are changed accordingly.

Figure 9: … GeV. The phases that maximize the asymmetry differ significantly for Im ω ≈ 0 and away from that region. We chose the phases α_2 = π/2, δ = 3π/2 in the region 0.5 < e^{Im ω} < 1.5 and α_2 = π/5, δ = 0 everywhere else.

For experimental searches for sterile neutrinos, the most relevant parameters are the mass M and the mixing between active and sterile species. As in section 5, we translate our results for the parameters in the Lagrangian into bounds on the mass and mixing. For each mass, we chose δM in a way that maximizes the allowed region in the U² − M plane. The results are shown in figure 11. Finally, we estimate the maximal asymmetry that can be generated at T ∼ 100 MeV as a function of M by its largest value within the data files we used to create figures 9 and 10. The maximal asymmetry allows us to impose a lower bound on the N_1 mixing; bigger lepton asymmetries make the resonant DM production more efficient and allow for smaller N_1 mixing, as displayed in figure 2. Furthermore, the maximal |μ_α| is of interest because it has been pointed out that a large lepton asymmetry may lead to a first order phase transition during hadronisation. The maximal asymmetries we found are shown in figure 12. For both hierarchies they remain well below cosmological bounds at all masses under consideration and are about a factor 5 smaller than the value 7·10^{-4} estimated previously. However, given the uncertainties summarized in appendix A.4, they can easily change by a factor O(1).

Figure 10: … GeV. The phases that maximize the asymmetry differ significantly for Im ω ≈ 0 and away from that region. We chose α_2 − α_1 = 7π/5 and δ = 3π/5 in the region 0.5 < e^{Im ω} < 1.5 and α_2 − α_1 = 0, δ = 9π/10 everywhere else.

DM, BAU and Neutrino Oscillations in the νMSM

In the previous sections 5 and 6 we have studied independently the conditions for successful baryogenesis on the one hand and sufficient dark matter production on the other.
The most interesting question is of course in which part of the νMSM parameter space scenario I can be realized, i.e. both can be achieved simultaneously. This region cannot be found by simply superposing the figures from the previous sections, because the phases that maximize the asymmetry are different at T ∼ T_EW and T ∼ T_−. The requirement to produce enough DM imposes the stronger constraint. We therefore fix the CP-violating phases in a way that is consistent with |μ_α| > 8·10^{-6} at T = 100 MeV in some significant region in the M − U² plane. We then check for which combinations of M and U² the correct BAU is created by these phases. We start with the phases used in figure 11, which were chosen to maximize the area in the M − U² plane where |μ_α| > 8·10^{-6} at T = 100 MeV. The result is shown in figure 13. The blue line corresponds to the points where the asymmetry at T_EW corresponds to the observed BAU. While the requirement to produce enough DM only imposes a lower bound on the asymmetry at 100 MeV, the value of the BAU is fixed by observation. Thus, only the points on the blue line that lie within the region encircled by the red line (DM region) form the allowed parameter space. The shape of the blue BAU line can be modified by changing the phases, see figure 14, but this also changes the shape of the red line (DM region). Solving the kinetic equations for different phases reveals that the BAU line can be brought to most points within the DM region. This region therefore gives a good estimate of the allowed parameter space. The constraints derived on the mixing angle U² are translated into constraints on the sterile neutrino lifetime τ_N^{-1} ≃ (1/2) tr Γ_N (at T = 1 MeV) in figure 15. In the plots of figure 14, there are two regions where the "BAU" and "DM" lines are close, leading to successful baryogenesis and dark matter production. One is near the seesaw line and the other is at higher mixing. These regions are easier to identify in the Im ω − M plane shown in figure 16. The baryon asymmetry almost vanishes for Im ω very close to 0, but this is not the case for dark matter production. Therefore, there is a region near Im ω = 0 that produces the right amount of baryon asymmetry and enough dark matter. This is the region where the blue "BAU" line is inside the red "DM" line in figure 16. For large values of |Im ω|, there are also regions where the two constraints are close.

[Figure caption: In the region between the blue "BAU" lines, the observed BAU can be generated. The lepton asymmetry at T = 100 MeV can be large enough that the resonant enhancement of N_1 production is sufficient to explain the observed DM inside the red "DM" line. The CP-violating phases were chosen to maximize the asymmetry at T = 100 MeV. Solid lines: normal hierarchy, dotted lines: inverted hierarchy.]

Conclusions and Discussion

We tested the hypothesis that three right handed neutrinos with masses below the electroweak scale can be the common origin of the observed dark matter, the baryon asymmetry of the universe and neutrino flavor oscillations. This possibility can be realized in the νMSM, an extension of the SM that is based on the type-I seesaw mechanism with three right handed neutrinos N_I.

Figure 16: Constraints on the N_{2,3} masses M_{2,3} ≃ M and the parameter Im ω in the constrained νMSM (scenario I); upper panel: normal hierarchy, lower panel: inverted hierarchy. In the region between the solid blue "BAU" lines, the observed BAU can be generated.
The lepton asymmetry at T = 100 MeV can be large enough that the resonant enhancement of N_1 production is sufficient to explain the observed DM inside the solid red "DM" line. The CP-violating phases were chosen to maximize the asymmetry at T = 100 MeV.

Centerpiece of our analysis is the study of sterile and active neutrino abundances in the early universe, which allows us to determine the range of sterile neutrino parameters in which DM, baryogenesis and all known data from active neutrino experiments can be explained simultaneously within the νMSM. We combined our results with astrophysical constraints and re-analyzed bounds from past experiments in the light of recent data from neutrino oscillation experiments. We found that all these requirements can be fulfilled for a wide range of sterile neutrino masses and mixings, see figures 13, 14 in section 7. In some part of this parameter space, all three new particles may be found in experiment or observation, using upgrades to existing facilities. This is the first complete quantitative study of the above scenario (scenario I), in which no physics beyond the νMSM is required. We found that the νMSM can explain all experimental data if one sterile neutrino (N_1), which composes the dark matter, has a mass in the keV range, while the other two (N_{2,3}) have quasi-degenerate masses in the GeV range. The heavier particles N_{2,3} generate neutrino masses via the seesaw mechanism and create flavored lepton asymmetries from CP-violating oscillations in the early universe. These lepton asymmetries are crucial on two occasions in the early universe. On the one hand, they create the BAU via flavored leptogenesis. On the other hand, they affect the rate of thermal DM production via the MSW effect. The second point allows us to derive strong constraints on the N_{2,3} properties from the requirement to explain the observed DM by thermal N_1 production, see section 6. This can be achieved by resonant production, caused by the presence of lepton asymmetries in the primordial plasma at T ∼ 100 MeV. The required asymmetries can be created when N_{2,3} are heavier than 1 − 2 GeV and the physical mass splitting between the N_2 and N_3 masses is comparable to the active neutrino mass differences. This can be achieved in a subspace of the νMSM parameter space that is defined by fixing two of the unknown parameters (the Majorana mass splitting ΔM and the mixing angle Re ω in the sterile sector). This choice, in which scenario I can be realized, is dubbed the "constrained νMSM". We also studied systematically how the parameter constraints relax if one allows N_1 DM to be produced by some unspecified mechanism beyond the νMSM (scenario II), see section 5. In this case the strongest constraints come from baryogenesis, and the required mass degeneracy is much weaker, ΔM/M ≲ 10^{-3}. We found that successful baryogenesis is possible for N_{2,3} masses as low as 10 MeV. These results are based on an extension of an earlier analysis that accounts for a non-zero value of the neutrino mixing angle θ_13 and a temperature dependent Higgs expectation value. While the low mass region is severely constrained by BBN and experiments, the allowed parameter space becomes considerably bigger for masses in the GeV range. Detailed results for the allowed sterile neutrino masses and mixings are shown in figures 6–7. If one completely drops the requirement that DM is composed of N_1 and considers the νMSM as a theory of baryogenesis and neutrino oscillations only (scenario III), no degeneracy in masses is required.
Note that this also implies that no degeneracy is required in scenario II if more than three right handed neutrinos are added to the SM. For masses below 5 GeV, the heavier sterile neutrinos can be searched for in experiments using present day technology. This makes the νMSM one of the few truly testable theories of baryogenesis. The parameter space for the DM candidate N_1 is bounded in all directions, see figure 2, and can be tested using observations of cosmic X-rays and the large scale structure of the universe. Since the model does not require new particle physics up to the Planck scale to be consistent with experiment, the hierarchy problem is absent in the νMSM. We conclude that neutrino physics can explain all confirmed detections of physics beyond the standard model except accelerated cosmic expansion.

A Kinetic Equations

In the following we sketch the derivation of the kinetic equations used in the main text. Our basic assumptions can be summarized as follows:

1) Coherent states containing more than one N_I quantum are not relevant. Their contributions are suppressed by additional powers of the small mixing θ_I. Processes involving one sterile neutrino include decays of N_I particles, their scatterings with SM particles, and flavor oscillations.

2) Screened one-particle states are the only relevant propagating neutrino degrees of freedom. In particular, we do not consider any collective excitations, which are infrared effects and only give a small contribution when the typical neutrino momenta are hard, ∼ T.

3) The interactions that keep the SM fields in equilibrium act much faster than interactions involving N_I at all times, due to the smallness of F.

4) T_− ≫ T_d, i.e. the lifetime of the N_I is sufficiently long that freezeout and decay are two well-separated events. This is the case in the parameter space we study.

5) The typical momentum of N_I particles is p̄ ∼ T even when they are out of equilibrium. This is justified because they are produced from a thermal bath and freeze out from a thermal state; hence their distribution functions should mimic those of kinetic equilibrium even when out of equilibrium.

6) We neglect the effect of the N_{2,3} on the time evolution of the entropy (or temperature). This is justified as their contribution to the total entropy and energy densities is always small.

7) We neglect the effect of the lepton asymmetry on hadronization. This aspect has been discussed elsewhere.

A.1 How to characterize the Asymmetries

The leptonic charges that we are interested in can be expressed in terms of field bilinears. Together with assumption 1), this implies that it is sufficient to deal with a reduced density matrix, in which all states containing more than one sterile neutrino have been removed by partial tracing. Instead of bilinears in the field operators themselves, we consider expectation values of bilinears in the ladder operators a_I, a†_I for sterile and a_α, a†_α for active neutrinos. To be explicit, we decompose N_I into plane-wave modes. Here p is the momentum, s the spin index, and u, v are the usual plane wave solutions to the Dirac equation. We will in the following always assume that the spatial momentum is directed along the z-axis, which is also the axis of angular momentum quantization. We choose the convention that h u^s_p = (−1)^{s+1} u^s_p while h v^s_p = (−1)^s v^s_p, where h is the helicity matrix. All relevant matrix elements of the density matrix can be identified with expectation values of bilinears in the ladder operators.
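For concreteness, a standard decomposition consistent with these conventions (shown here as an illustration of the mode expansion; the original expression is not reproduced verbatim) is

  N_I(x) = \sum_{s=1,2} \int \frac{d^3p}{(2\pi)^3\, 2E_p} \left[ a_{I,s}(p)\, u^s_I(p)\, e^{-ipx} + a^{\dagger}_{I,s}(p)\, v^s_I(p)\, e^{ipx} \right],
  \qquad h = \hat{p} \cdot \begin{pmatrix} \vec{\sigma} & 0 \\ 0 & \vec{\sigma} \end{pmatrix},

so that for momentum along the z-axis the spinors u^s_p and v^s_p are helicity eigenstates with the eigenvalues quoted above.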
Because of this, the matrix of bilinears defined below is often referred to as the "density matrix" (rather than ρ itself). In principle there is a large number of such bilinears. A complete characterization of the system requires knowledge of all their expectation values at all times. However, it can be simplified dramatically, and for our purpose it will be sufficient to follow the time evolution of two 2×2 matrices ρ_N and ρ_N̄ and three chemical potentials. The only term in the Lagrangian that violates lepton number is the Majorana mass M_M. For T ≫ M it is negligible and lepton number is approximately conserved. There is no total lepton asymmetry at T_EW in the νMSM, but there can be asymmetries of opposite sign for fermions with different chirality. Baryogenesis occurs because sphalerons only couple to left handed fermions. As far as the (Majorana) neutrinos are concerned, the two helicity states act as "particle" and "antiparticle". Terms containing two creation or two annihilation operators, such as a_I a_J or a_α a_β, can be related to processes that violate lepton number and are suppressed at T > M. For T ≲ M they could in principle contribute, but the leading order contribution in the Yukawa coupling F to the corresponding rates d⟨a_I a_J⟩/dt etc. oscillates fast. We therefore only consider terms that contain exactly one creation and one annihilation operator. Since only two of the sterile neutrinos are relevant here, these form a 10×10 matrix, where we have suppressed time and momentum indices (all momenta are p and all times t). V is the overall spatial volume, which will always drop out of the computations in the end.

A.2 Effective Kinetic Equations

The time evolution of ρ is governed by an effective Hamiltonian. In the absence of Hubble expansion, which we will add later, it follows a kinetic equation in which H can be viewed as the dispersive part of the effective Hamiltonian. The absorptive part, given by the matrices Γ^≷, arises because the system is coupled to the background plasma formed by all other degrees of freedom of the SM. Note that the equation is valid for each momentum mode separately. The different modes are coupled by H and Γ^≷, which in principle depend on ρ and the lepton chemical potentials. The smallness of the sterile neutrino couplings F allows us to simplify the problem due to a separation of time scales: the time scale associated with the N_I dynamics and the time scale on which chemical equilibration of the lepton asymmetries occurs are much longer than the typical relaxation time to kinetic equilibrium in the SM plasma. This allows us to employ a relaxation time approximation and relate Γ^> and Γ^< by a detailed balance (or Kubo–Martin–Schwinger) relation, with Γ = Γ^> − Γ^<. ρ_eq is evaluated with an equilibrium density matrix ρ = Z/tr Z, Z = exp(−Ĥ/T), where Ĥ is the corresponding Hamiltonian. The matrices H and Γ are Hermitian. The effective masses of active and sterile neutrinos are very different, and fast oscillations between them play no role. We thus set to zero the mixed blocks ρ_{NL}, ρ_{LN}, ρ_{NL̄}, ρ_{L̄N}, ρ_{N̄L}, ρ_{LN̄}, ρ_{N̄L̄} and ρ_{L̄N̄}. The time evolution of the asymmetries is related to the relaxation time scales of the N_I. Since interactions of the active neutrinos amongst themselves and with other SM fields are much faster, coherent effects in the active sector are negligible on this time scale. This allows us to furthermore neglect ρ_{LL̄} and ρ_{L̄L}. ρ_{LL} and ρ_{L̄L̄} are taken diagonal with equilibrium occupation numbers and are thus characterized by the temperature T and three slowly varying chemical potentials. Thus, we can entirely describe the active sector by four numbers.
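For a single momentum mode, a kinetic equation consistent with this description (a schematic form; the precise equation is derived in the references of this appendix) reads

  \frac{d\rho}{dt} = -i\,[H,\rho] \;-\; \frac{1}{2}\{\Gamma^{>},\rho\} \;+\; \frac{1}{2}\{\Gamma^{<},\,1-\rho\},
  \qquad \Gamma^{<}(\omega) = e^{-(\omega-\mu)/T}\,\Gamma^{>}(\omega) \quad (\mathrm{KMS}),

so that in equilibrium the gain and loss terms cancel and ρ relaxes to Fermi–Dirac occupation numbers.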
Instead of the chemical potentials, we will in the following use n_Δα = (ρ_{LL})_{αα} − (ρ_{L̄L̄})_{αα}, i.e. the number of particles minus the number of antiparticles, to characterize the asymmetries. The relation between the two can be found in the literature, cf. also appendix C. In the sterile sector we have to keep track of coherences. The system can then be described by the set of kinetic equations quoted above. Here ρ_N, ρ_N̄ are the appropriate block-diagonal submatrices of ρ; for the corresponding submatrix of H we use the same symbol as for the full matrix to simplify the notation. These equations do not take into account the expansion of the universe. As usual, it can be included by using abundances (or "yields") instead of number densities. It is also convenient to introduce the variable X = M/T rather than the time t. All quantities appearing in the above equations depend on momentum. The different momentum modes are coupled by the scattering and decay processes. We have suppressed this momentum dependence. We define the abundances ρ̄_N = ∫ d³p/(2π)³ ρ_{NN}/s, ρ̄_N̄ = ∫ d³p/(2π)³ ρ_{N̄N̄}/s, ρ̄_eq = ∫ d³p/(2π)³ ρ^eq_{NN}/s ≈ ∫ d³p/(2π)³ ρ^eq_{N̄N̄}/s, and μ_α = ∫ d³p/(2π)³ n_Δα/s. Assumption 5) is justified if the common kinetic equilibrium assumption holds. We can use it to rewrite the anticommutator accordingly. We again emphasize that ρ_{NN}, Γ_N etc. depend on momentum, while ρ̄_N, ρ̄_N̄, ρ̄_eq etc. do not. For T ≪ M, almost all particles have the momentum p̄ ∼ T, and Γ̄_N is essentially obtained by evaluating Γ_N at p = p̄. Practically, we compute the rates as described in section 4.2.2. Similarly, we can use H̄ = H|_{p=p̄} for the Hermitian part of the effective Hamiltonian. For |p| ∼ T ≫ M, n_F can be approximated by n_F ≈ 3ζ(3)T³/(2π²) ≈ 0.18 T³, but Γ̄_N has to be computed numerically. Using the above considerations, we can write down effective kinetic equations equivalent to the ones used in earlier work. Their interpretation is straightforward. In the mass base, the diagonal elements of ρ_N and ρ_N̄ are the abundances of sterile neutrinos and antineutrinos, respectively. The off-diagonal elements are flavor coherences. ρ_N thus gives the abundances for "particles" and ρ_N̄ those for "antiparticles", defined as the helicity states of the Majorana fields N_I. This interpretation holds in vacuum, while at finite temperature the effective mass matrix rotates due to the interplay between the (temperature dependent) Higgs expectation value, the Majorana mass M_M and thermal masses in the plasma. The first two terms in the equations for ρ_N and ρ_N̄ are due to sterile neutrino oscillations and dissipative effects, respectively, the latter either by scatterings or by decays and inverse decays of sterile neutrinos. More precisely, the Hermitian 2×2 matrix H is the dispersive part of the effective Hamiltonian for ρ_N and ρ_N̄. The matrix Γ_N is the dissipative part of the effective Hamiltonian for ρ_N and ρ_N̄ that arises because the sterile neutrinos are coupled to the SM. ρ̄_eq is the common equilibrium value of ρ̄_N and ρ̄_N̄ in the absence of an asymmetry. All these terms appeared already in earlier studies. The terms containing Γ^L are their counterparts in the active sector. The last term is due to backreaction and has been discussed previously.

A.3 The Effective Hamiltonian

We follow the approach used in earlier work and split the Hamiltonian in the Heisenberg picture into a free part Ĥ_0 and an interaction part Ĥ_int. We perform the computation in Minkowski spacetime and for the moment omit the factor ∂t/∂X included in the definition of the rates.
The same rates, multiplied by this factor, can be used in the early universe when abundances are considered instead of number densities. The starting point of the computation is the von Neumann equation in the interaction picture, where ρ_I ≡ exp(iĤ_0 t) ρ exp(−iĤ_0 t) is the density matrix in the interaction picture and Ĥ_I = exp(iĤ_0 t) Ĥ_int exp(−iĤ_0 t), with ρ the (time independent) density matrix in the Heisenberg picture. This equation can be solved perturbatively, with ρ_0 ≡ ρ(t = 0) = ρ_I(t = 0). We use the perturbative solution to compute the expectation values ⟨a†_{I,r}(p,t) a_{J,s}(p,t)⟩/V. For ρ_0 we choose a product density matrix ρ_0 = ρ_N ⊗ ρ^eq_SM, where ρ^eq_SM is an equilibrium density matrix for the SM fields and ρ_N = Σ_{I,s} P_{I,s} a†_{I,s}|0⟩⟨0|a_{I,s}. This is not the most general density matrix that can be built from one-particle N_I states, but it is sufficient to derive the effective Hamiltonian. The perturbative formula formally gives expressions for the bilinears at all times. These are strictly valid only at times much shorter than the sterile neutrino relaxation time, because the perturbative expansion at some point breaks down due to secular terms. In the relaxation time approximation we can use a trick to deal with this problem. We differentiate ⟨a†_{I,r}(p,t) a_{J,s}(p,t)⟩/V with respect to time to obtain a "rate". We then send t to infinity to eliminate its explicit appearance from the rate. This last step is allowed because all correlation functions of SM fields are damped, by thermal damping rates due to the gauge interactions, on time scales much shorter than the sterile neutrino relaxation time. Thus, the late time part of the integrand does not contribute significantly to d⟨a†_{I,r}(p,t) a_{J,s}(p,t)⟩/(V dt). This way we obtain the rate of change of the matrices ρ_{NN} and ρ_{N̄N̄} at the initial time. In the relaxation time approximation, these rates can also be used at later times because backreaction is accounted for in the ρ_N − ρ_eq term. Repeating all steps in section 2.2 of the reference for the two flavor case and the initial density matrix above, we obtain the rates of change of the bilinears. In the limit M → 0 the projectors are independent of the sterile flavor indices and reduce to expressions involving only the helicity matrix h. We have used u^c = C ū^T = v and introduced the self energies with F̃ = F U_N. The thermal Wightman functions appearing therein are defined in the standard way; here i, j are spinor indices, which we suppress in the following. Transitions with r ≠ s do not contribute at leading order in θ_I due to the projectors. This justifies our description of the sterile sector by two 2×2 matrices ρ_N and ρ_N̄ rather than a 4×4 matrix including elements ∝ ρ_{NN̄} etc. Transitions with α ≠ β are suppressed by the small active neutrino masses, m_i/T ≪ 1. The above expressions are written in the F̃-base (vacuum mass base). They can be translated into the F-base used previously by the replacements F̃ → F and ΔM̃_M → ΔM σ_3. Note that M̃ is defined at T = 0. With this, the initial value for ρ_N can be written as ρ_N ∝ diag(P_1, P_2). The RHS has a real and an imaginary part; these allow us to extract the dispersive and dissipative parts H and Γ_N of the effective Hamiltonian.

A.3.1 Dispersive Part H

Comparison with the kinetic equation in the absence of active lepton asymmetries (since we chose ρ^eq_SM without chemical potentials) allows us to define the dispersive part of the effective Hamiltonian, where we introduce the short notation P^u = P_R (P^{11}_u)_{IJ} P_L and its analogues. The additional factor f_F(p)/n_F and the momentum integral come from the momentum averaging. One can distinguish between three contributions.
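Schematically, and with all numerical coefficients suppressed, the three contributions have the structure

  H_{IJ} \;\sim\; \underbrace{(\Delta \tilde M_M)_{IJ}}_{\text{Majorana mass splitting}}
  \;+\; \underbrace{v(T)^2\,(\tilde F^{\dagger}\tilde F)_{IJ}\; b(p)}_{\text{Higgs condensate}}
  \;+\; \underbrace{\big[\text{scatterings with Higgs particles via } \Delta^{\gtrless}\big]_{IJ}}_{\text{thermal Higgs exchange}} ,

an illustrative decomposition only (the exact coefficients follow from the expressions above); b(p) is the potential contribution to the active neutrino propagator introduced below.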
The term involving ΔM̃_M comes from the splitting of the Majorana masses and remains present in vacuum. The term involving the self energy Σ_{IJ} is due to the Yukawa interactions. It contains two contributions, which are related to the Feynman diagrams shown in figure 4. The part ∝ v(T)² is due to the interaction with the Higgs condensate and produces the Dirac mass at T < T_EW. The part involving Δ^≷ comes from scatterings with Higgs particles. The Higgs expectation value as a function of temperature can be calculated for a given Higgs mass. We used m_H = 126 GeV, as suggested by recent LHC data, to obtain the dependence shown in figure 17. However, we checked that varying m_H within the allowed window 115 − 130 GeV does not have a big effect on the results. The evaluation requires knowledge of the dressed active neutrino and Higgs propagators S^≷(p) and Δ^≷(p), respectively. These are in principle complicated functions of p and T. However, we are mostly interested in very high or low temperatures: T ∼ T_EW ≫ M during baryogenesis and T ≲ M in the context of DM production. This allows us to simplify the expressions. It is convenient to dissect the self energy Σ^− into Lorentz components, where u = (1, 0, 0, 0) is the four-velocity of the primordial plasma, and write it in terms of the coefficient functions A_{IJ}(q_0, p) and B_{IJ}(q_0, p); here and below p has to be read as |p|. At temperatures T ≫ M the integral is dominated by hard momenta ∼ T, and the term involving B_{IJ} dominates H_{IJ}. Once v(T) is non-zero, the interaction with the Higgs condensate dominates the N_I self energy, and B_{IJ} can be estimated in terms of b, the so-called "potential contribution" to the active neutrino propagator, obtained by decomposing the retarded active neutrino self energy into potential and damping parts. Since active neutrinos mainly scatter via weak gauge interactions, the coefficients are in good approximation flavor independent in the primordial plasma, and we can define b ≡ b_α(p), where b_α(p) is to be evaluated on-shell. We use the resulting hard-momentum expression for b during the calculations in section 5. In practice it is more convenient to work in the F-base, where the mass-splitting term in the Hamiltonian is proportional to σ_3. At temperatures T ≪ T_EW there are no Higgs particles in the plasma and the Higgs expectation value is constant, thus B_{IJ} = v² F̃*_{αI} F̃_{αJ} b and A_{IJ} = v² F̃*_{αI} F̃_{αJ} a. It has been estimated that thermal corrections to the active neutrino propagator are small below a temperature T_pot = 13 (M/GeV)^{1/3} GeV. For the masses under consideration in this work, we can approximately use free active neutrino propagators in section 6 because T_+ < T_pot. Furthermore, due to the considerations in section 2.6, we are mainly interested in the case U_N ≃ 1 for DM production, thus F̃ ≃ F. Then b_α(p_0, |p|) ≃ 0 and a_α(p_0, |p|) ≃ π Σ_i (U_ν)_{αi}(U_ν)*_{αi} [δ(p_0 − ω_i) − δ(p_0 + ω_i)], with ω_i = (p² + m_i²)^{1/2}, where m_i are the active neutrino masses. This recovers the vacuum result for the mass matrix at |p| = 0. In the basis of vacuum mass eigenstates, H has the form H = diag((p² + M_2²)^{1/2}, (p² + M_3²)^{1/2}). Since δM ≪ T, M we can expand in δM and obtain, for p̄ = T, H ≃ (1/2) X δM σ_3, with X = M/T and the third Pauli matrix σ_3. The part of H that is proportional to the identity matrix has been dropped, as it always cancels out of the commutators in the kinetic equations.

A.3.2 Dissipative Part Γ_N

Again comparing real and imaginary parts, we define Γ_N. The rate for the "antiparticles" ρ_N̄ can be found by using projectors analogous to the ones above, but with helicity index 2. It is given by (Γ^<_N)* as expected; this can be seen by noticing that the traces are real and that P_R P^{22}_u P_L = P_R P^{11}_v P_L and P_R P^{22}_v P_L = P_R P^{11}_u P_L under the trace.
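The relations used here between the Wightman, retarded and spectral components are the standard thermal field theory ones; in one common convention (quoted for orientation, with f_F the Fermi–Dirac distribution):

  \Sigma^{-}(p) = \Sigma^{>}(p) - \Sigma^{<}(p) = 2i\,\mathrm{Im}\,\Sigma_R(p), \qquad
  \Sigma^{<}(p) = -f_F(p_0)\,\Sigma^{-}(p), \qquad
  \Sigma^{>}(p) = \big(1 - f_F(p_0)\big)\,\Sigma^{-}(p),

and the dispersive and dissipative parts are tied together by the Kramers–Kronig relation

  \mathrm{Re}\,\Sigma_R(\omega) = \mathcal{P}\!\int \frac{d\omega'}{\pi}\, \frac{\mathrm{Im}\,\Sigma_R(\omega')}{\omega' - \omega}\, .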
For what follows, it is useful to pull the Yukawa matrices out of the self energies; they factorize as Σ^≷ ∝ F̃*_{αI} F̃_{αJ}. For the computation of Γ_N we can then define the corresponding rate matrices. They in general have to be computed numerically; we discuss their properties in section 4.2. As usual in thermal field theory, the sterile neutrino self energies Σ^< and Σ^> can be associated with the gain and loss rates. Their difference Σ^− = Σ^> − Σ^< gives the total relaxation rate Γ_N for the sterile neutrinos. It acts as a thermal production rate when their occupation numbers are below their equilibrium values and as a dissipation rate in the opposite case. In configuration space, the self energy Σ^−(x) is related to the retarded self energy by Σ_R(x) = θ(x_0) Σ^−(x). This implies Σ^−(p) = 2i Im Σ_R(p) in momentum space. As usual in field theory, the imaginary part of Σ_R can be related to the total scattering cross section by the optical theorem (or its finite temperature generalization), while the real part is responsible for the mass shift (or modified dispersion relation in the plasma). Both are related by the Kramers–Kronig relations. The appearance of Σ^− is in accord with the optical theorem, and the contributions to the dispersive and dissipative parts of the effective Hamiltonian are indeed related by a Kramers–Kronig relation. This provides a good cross-check for our result.

A.3.3 The remaining Rates

The remaining rates Γ^L_α and Γ̃^L_α appearing in the kinetic equations in principle have to be calculated independently. The precise computation is considerably more involved than in the case of Γ_N. Γ_N is related to the discontinuities of the N_I self energies, which to leading order in the tiny Yukawa couplings F only contain propagators of SM fields as internal lines. Due to the fast gauge interactions these are in equilibrium in the relaxation time approximation, and the rates can be computed by means of thermal (equilibrium) field theory. This is not possible in the computation of the damping rates for the SM lepton asymmetries, which are related to self energies in which the out-of-equilibrium fields N_I appear as internal lines. For simplicity we follow the approach taken in earlier work (section 6 therein) and use the symmetries of the νMSM in certain limits to fix the structure of the rates. To leading order in the small mixing θ_I, this implies relations between the rates that express the conservation of total lepton number. This situation is in good approximation realized for T ≫ M, when baryogenesis takes place — the total lepton number is not violated during this process, and a non-zero baryon number is only produced because sphalerons couple exclusively to left handed fields. Other interesting limits considered there include F_{αI} → 0 with α fixed for all I (leading to conservation of the current J_α) and F_{αI} → 0 with I fixed for all α (leading to individual conservation of the combination J_I + J_α and of the remaining current J_{J≠I}). These limits allow us to fix the basic structure of the equations. For a general choice of parameters some corrections of O(1) to these relations may be necessary, the determination of which we postpone until the precision of experimental data on the νMSM requires it.

A.4 Uncertainties

Our study is the most complete quantitative study of bounds on the νMSM parameter space from cosmology to date. However, the various assumptions made in the derivation of the kinetic equations lead to uncertainties that may be of order one.
These can be grouped into three categories:

1) We only consider momentum averaged quantities. Since the sterile neutrinos can be far from thermal equilibrium, one in principle has to study the time evolution of each mode separately. Our treatment is a reasonable approximation as long as the kinetic equilibrium assumption holds. A study of this aspect suggests that deviations from kinetic equilibrium are indeed only of order one.

2) The rates Γ^L_α and Γ̃^L_α have been calculated in a rather crude way in section A.3.3, leading to another source of uncertainties of order one. In addition, a precise calculation of the BAU requires knowledge of the sphaleron rate throughout the electroweak transition. Including this is expected to yield a slightly bigger value for the BAU.

3) Though they are matrix valued and allow us to study flavor oscillations, the kinetic equations are of the Boltzmann type. They assume that the system can be described as a collection of (possibly entangled) individual particles that move freely between isolated scatterings and carry essentially no knowledge about previous interactions ("molecular chaos").

The first two issues can be fixed by more precise computations. However, with the current experimental data, order one uncertainties are small compared to the experimental and observational bounds on the model parameters. The corrections will only slightly change the boundaries of the allowed regions in parameter space found in this work. We therefore postpone more precise calculations to the time when such precision is required from the experimental side. In contrast, the third point is more conceptual. In a dense plasma, multiple scatterings, off-shell and memory effects may affect the dynamics. The effect of these cannot be estimated within the framework of Boltzmann type equations; it requires a derivation from first principles that either confirms the equations used here and allows one to estimate the size of the corrections, or replaces them by a modified set of equations. In the past years, much progress has been made in the derivation of effective kinetic equations from first principles. Recent studies suggest that kinetic equations of the Boltzmann type are in principle applicable to the study of leptogenesis, but the resonant amplification may be weaker than found in the standard Boltzmann approach. It remains to be seen which effects possible corrections have in the νMSM, where baryogenesis and dark matter production both crucially rely on the resonant amplification. A first principles study is difficult in the νMSM due to the various different time dependent scales related to production, oscillations, freezeout and decay of the sterile neutrinos, and the vast range of relevant temperatures. However, at this stage it seems likely that a first principles treatment is, if at all, only of phenomenological interest in the region around Γ_N ∼ ΔM, which makes up only a small fraction of the relevant parameter space, cf. figure 6.

B Connection to Pseudo-Dirac Base

In our notation, the elements of ρ_N and ρ_N̄ are bilinears in ladder operators that create quanta of the fields N_I, i.e. mass eigenstates in vacuum. This has the advantage that the diagonal elements can be interpreted as abundances of physical particles. The rates R and R_M have been introduced in the work to which we regularly refer in this article. The basis in the field space of right handed neutrinos used there differs from the one we use and is related to it by a unitary rotation U_R. In that basis M_M is not diagonal, and the Yukawa couplings should be rewritten as h U_R = F.
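The explicit form of U_R is not reproduced here. For orientation, one convention that realizes the pseudo-Dirac pairing of a quasi-degenerate Majorana pair (an illustrative choice, not necessarily the matrix used in the cited work) is

  U_R = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & i \\ 1 & -i \end{pmatrix},
  \qquad \Psi \;\propto\; \nu_{R,2} + i\,\nu_{R,3}\, ,

for which the two degenerate Majorana fields combine into a single Dirac spinor, up to corrections of order ΔM/M.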
The computation of the rates is then performed by defining a Dirac spinor Ψ = U_{2I} ν_{R,I} + (U_{3I} ν_{R,I})^c. This is possible when the small mass splitting between the sterile neutrinos is neglected (or viewed as a perturbation and placed in the interacting part of L). The fields ν_{R,I} can be recovered from this as ν_{R,I} = U*_{2I} P_R Ψ + U*_{3I} P_R Ψ^c. In terms of Ψ, the νMSM Lagrangian reads L = L_SM + L_0 + L_int. The analogue of our matrix ρ_N (which is also called ρ_N there) is defined in terms of c_s (c†_s) and d_s (d†_s), the annihilation (creation) operators for particles (antiparticles) with momentum p and helicity s. The corresponding rate Γ_N in the kinetic equations is then given in terms of h = F U_R†, where σ_1 is the first Pauli matrix, and flavor off-diagonal elements have been neglected. In the high temperature regime considered there this simplifies: σ_1 h†h σ_1 is (h†h)* with the diagonal elements swapped. This defines the quantities R(T, M) and R_M(T, M). Ignoring the small mixing between active and sterile neutrinos, our ρ_N is related to the one used there by this rotation for T ≫ m_i. Finally, the Yukawa matrix is expressed there in terms of parameters that differ from those we use here. In the appropriate limit, these can be related to our parameters through e^{−Im ω}, 2 Re ω and the phases α_2 and δ. We here prefer to use the parameterization fixed in section 2.5, because the expressions in terms of those parameters are only approximate.

C How to characterize the lepton asymmetries

In the νMSM, neither the individual lepton numbers, related to the currents J_α, nor their sum are conserved. However, since the rates of all processes that violate them are suppressed by the small Yukawa couplings F, they evolve on a much slower time scale than other processes in the primordial plasma and are well-defined. For practical purposes the magnitude of flavored lepton asymmetries in the primordial plasma can be characterized in different ways. In this article we describe them by the ratio between the number densities (particles minus antiparticles) and the entropy density s ≡ 2π²T³g*/45, i.e. μ_α = n_Δα/s. This quantity is convenient because it is not affected by the expansion of the universe as long as the expansion is adiabatic. In the following we relate μ_α to other quantities that are commonly used in the literature. In quantum field theory calculations it is common to parameterize the asymmetries by chemical potentials μ̂_α, which can be extracted from the distribution functions that appear in the free propagators at finite temperature. In the massless limit T ≫ m_i these are related to n_Δα (per left handed flavor) by n_Δα = μ̂_α T²/6 + O(μ̂³_α), leading to μ_α = (15/(4π²g*)) (μ̂_α/T). Alternatively, one can normalize the lepton numbers n_Δα ("particles minus antiparticles") by the total density of "particles plus antiparticles" in the plasma, n_eq ≡ 2 ∫ d³q/(2π)³ (e^{|q|/T} + 1)^{-1} = 3ζ(3)T³/(2π²). Finally, one can normalize with respect to the photon density n_γ ≡ 2 ∫ d³q/(2π)³ (e^{|q|/T} − 1)^{-1} = 2ζ(3)T³/π², which yields n_Δα/n_γ = (s/n_γ) μ_α = (π⁴g*/(45ζ(3))) μ_α.

D Low temperature Decay Rates for sterile Neutrinos

Most of the rates relevant for this work have been computed elsewhere, where they are listed in the appendix. Here we only list those rates that are needed in addition or require refinement. This was necessary for the decay rates into leptons, where the masses of the final state particles had been neglected in the original computation.

D.1 Semileptonic decay

Decay into up-type quarks through the neutral current, where x_q = m_q/M.
Decay into down-type quarks through the neutral current. Decay into quarks through the charged current, where min(m_ℓα, m_{u_n}, m_{d_m}) is neglected, and x and y are the two heavier masses divided by M.
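For orientation, such charged current three-body rates have the same structure as the muon decay rate. Schematically (an illustrative form, with Θ_α the active–sterile mixing, N_c = 3 colors, V_{nm} the CKM element, and I(x, y) a phase-space suppression factor normalized to I(0, 0) = 1; the exact expressions, with all coefficients kept, are the ones referred to above):

  \Gamma(N \to \ell_\alpha\, u_n\, \bar d_m) \;\simeq\; N_c\, |V_{nm}|^2\, |\Theta_{\alpha}|^2\, \frac{G_F^2\, M^5}{192\,\pi^3}\; I(x, y)\, .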