In silico analysis of interactions of flucloxacillin and its metabolites with HLA-B*57:01 The antibiotic flucloxacillin (FX), which is widely used for the treatment of staphylococcal infections, is known to cause liver injury. A genome-wide association study has shown that FX-induced idiosyncratic drug toxicity (IDT) is associated with HLA-B*57:01. FX is processed in the human body to produce several metabolites. Molecular interactions of FX or its metabolites with HLA-B*57:01 should play a crucial role in the occurrence of this adverse drug reaction. In this study, we have undertaken docking simulations of the interactions of FX and its metabolites with HLA-B*57:01 to understand the molecular mechanisms leading to the onset of IDT. Introduction Flucloxacillin (FX) is an antibiotic belonging to the penicillin class. FX has a broad range of uses in the treatment of Gram-positive bacterial infections and is used widely for the treatment of staphylococcal infection. Several works have shown FX to be associated with liver injury at a frequency of ca. 8 per 100,000 people. Drug-induced liver injury (DILI) is a leading cause of attrition of compounds in drug development and also a major cause of drug withdrawals, restrictions and project terminations. Therefore, it is important to understand the molecular mechanisms which cause DILI in order to minimize this attrition. Although the molecular mechanisms causing DILI are complex, a genome-wide association study has unequivocally shown the association of FX-induced DILI with HLA-B*57:01 (OR = 80.6, P = 9.0 × 10⁻¹⁹). There are several possible modes of interaction between drugs and HLA molecules leading to an adaptive immune response. One possible mode is that drugs or their metabolites bind directly to the peptide-binding groove of HLA, which triggers the subsequent activation of T cells. Direct interaction between abacavir, which causes an idiosyncratic adverse drug reaction, and a particular HLA allele, HLA-B*57:01, has been confirmed by X-ray analysis. This indicates that the direct binding of drugs or their metabolites to the corresponding HLA molecule should be an important step leading to an immune response. In the present study, we have undertaken in silico docking studies of FX and its metabolites at the peptide-binding groove of HLA-B*57:01 in order to identify the chemical species responsible for FX-induced DILI and the binding mode. A previous investigation of FX metabolism has shown that FX can be biotransformed to produce four major metabolites. Methods A crystal structure of HLA-B*57:01 (PDB ID: 3VH8) deposited in the Protein Data Bank was used in this study. The software system MOE (Molecular Operating Environment) was used throughout this study. Possible binding sites of FX and its metabolites were identified by the 'alpha finder' function implemented in MOE, and all possible binding sites at the peptide-binding groove of the HLA molecule were taken into account in the docking simulations. Docking simulations between HLA-B*57:01 and the molecules shown in Figure 1 were performed with the docking software ASEDock. The complex structures were optimized, and the binding affinity was judged by the GBVI/WSA_dG scoring function, which is considered to estimate the protein-ligand binding free energy. Only the backbone heavy atoms of HLA-B*57:01 were fixed during optimization. The binding modes of these molecules at the antigenic-peptide binding groove of HLA-B*57:01 are shown in Figure 2(a). 
The six molecules bound at the same site in the groove; they are not deeply buried in the peptide-binding groove and are largely exposed on the surface of HLA-B*57:01. This indicates that recruiting novel antigenic peptides on top of the bound FX and its metabolites, as in the case of abacavir, should not be possible. As the carboxy groups of the bound FX and its metabolites stick out from the binding groove, it is highly possible that the bound FX and its metabolites would directly interact with T-cell receptors, leading to an immune response. The complexes of FX and its metabolites with HLA-B*57:01 were compared by their lowest GBVI/WSA_dG values (kcal/mol). Maier-Salamon et al. have reported that the biliary concentration of 5-OH-FX is high and that the liver toxicity of FX might be due to continuously elevated 5-OH-FX. The present docking simulations have shown that the binding affinity of 5-OH-FX to HLA-B*57:01 is significantly high. The predicted strong affinity to HLA-B*57:01 and the observed high concentration in the body unequivocally indicate that 5-OH-FX should be the main risk compound for DILI. The binding mode of 5-OH-FX is shown in Figure 2(b). The exposed carboxy group is depicted in the upper right. Maier-Salamon et al. have also monitored the plasma concentrations of FX and its metabolites, and found that the plasma concentrations of 5-OH-FX and 5-OH-PA, but not of FX and FX-PA, increased steadily before reaching steady state after 40 hours. This indicates that FX-PA would not cause liver toxicity in spite of having the highest binding affinity to HLA-B*57:01. However, Maier-Salamon et al. observed that the formation of FX-PA is predominant in some patients, which indicates that FX-PA could be a major cause of liver toxicity in these particular patients. (S)-5-OH-PA has not been experimentally detected as a metabolite so far. However, the present docking simulations have shown that (S)-5-OH-PA is a strong binder to HLA-B*57:01 and it could be a metabolite that causes DILI in certain patients. |
export default {
  // Sets every attribute in `objectSeq` (attribute name -> value) on the target element.
  setMultipleAttributes(target: HTMLElement, objectSeq: Record<string, string>) {
    Object.keys(objectSeq).forEach((attrName: string) => {
      target?.setAttribute(attrName, objectSeq[attrName]);
    });
  },
};
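// Usage sketch (not part of the original module): assuming this file is imported as
// `attributeUtil` from a hypothetical path, several attributes can be applied to a
// freshly created element in one call. Element and attribute values are illustrative.
//
// import attributeUtil from './setMultipleAttributes'; // hypothetical path
//
// const img = document.createElement('img');
// attributeUtil.setMultipleAttributes(img, {
//   src: '/images/logo.png',
//   alt: 'Company logo',
//   loading: 'lazy',
// });
// document.body.appendChild(img);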
|
// PrivateFrameworks/Notes/NFPersistenceManager.h
//
// Generated by class-dump 3.5 (64 bit).
//
// class-dump is Copyright (C) 1997-1998, 2000-2001, 2004-2013 by <NAME>.
//
#import "NSObject.h"
@interface NFPersistenceManager : NSObject
{
}
+ (id)managedObjectContext;
+ (id)notesContainerLibraryURL;
+ (void)setNotesContainerLibraryURL:(id)arg1;
+ (BOOL)_backupExistingStore:(id)arg1 withCoordinator:(id)arg2 error:(id *)arg3;
+ (id)_storeURLForVersion:(unsigned long long)arg1 inDataDirectory:(id)arg2;
+ (id)_validStoreURLInDataDirectory:(id)arg1 movingOldStoreIfNeeded:(BOOL)arg2 withCoordinator:(id)arg3 error:(id *)arg4;
+ (id)persistentStoreCoordinatorAddPersistentStoreIfNecessary:(BOOL)arg1;
+ (id)persistentStoreCoordinator;
+ (void)addPersistentStoreIfNeeded;
+ (id)managedObjectModel;
+ (BOOL)isAppSandboxed;
+ (BOOL)isRunningInNotes;
+ (void)setStoreCoordinatorIsReadOnly:(BOOL)arg1;
+ (BOOL)storeCoordinatorIsReadOnly;
@end
|
Imagining the smart city through smart grids? Urban energy futures between technological experimentation and the imagined low-carbon city Current imaginaries of urban smart grid technologies are painting attractive pictures of the kinds of energy futures that are desirable and attainable in cities. Making claims about the future city, the socio-technical imaginaries related to smart grid developments unfold the power to guide urban energy policymaking and implementation practices. This paper analyses how urban smart grid futures are being imagined and co-produced in the city of Berlin, Germany. It explores these imaginaries to show how the politics of Berlin's urban energy transition are being driven by techno-optimistic visions of the city's digital modernisation and its ambitions to become a smart city. The analysis is based on a discourse analysis of relevant urban policy and other documents, as well as interviews with key stakeholders from Berlin's energy, ICT and urban development sectors, including key experts from three urban laboratories for smart grid development and implementation in the city. It identifies three dominant imaginaries that depict urban smart grid technologies as (a) an environmental solution, (b) an economic imperative and (c) an exciting experimental challenge. The paper concludes that the dominant imaginaries of smart grid technologies in the city are grounded in a techno-optimistic approach to urban development that forecloses more subtle alternatives or perhaps more radical change towards low-carbon energy systems. Introduction Smart grid technologies play an increasingly important role in imaginations of urban low-carbon transitions. Particularly in the context of Germany's Energiewende, smart grids are being hailed as environmental innovations and an indispensable means to achieve the mass integration of renewable energies in cities. Although only vaguely defined, smart grids integrate information and communication technologies (ICT) into electricity networks. The use of ICT in electricity networks is seen as a means to achieve low-carbon energy production through the integration of more (fluctuating) renewable energy sources, higher energy efficiency through the real-time coordination of resource flows, greater supply security through automatic grid reconfiguration and more active consumer participation in energy markets. Moreover, the digital enhancement of urban electricity grids is seen as an opportunity for increasing economic competitiveness through high-tech infrastructural modernisation and the attraction of high-skilled, well-paying jobs. The imaginaries associated with urban smart grid infrastructures are inspiring unlikely alliances across different expert domains and stimulating visions of environmentally sustainable and economically thriving urban futures. This is happening at the height of the global smart city paradigm. Cities across the world are increasingly relying on high-tech innovation to solve a variety of urban problems, from transport congestion to citizen participation and environmental degradation. Urban administrations are instituting smart city strategies and opening urban laboratories, innovation spaces or other sites of technological experimentation to attract ICT companies and compete in the race for digital modernisation and progress. 
Urban studies researchers have amply criticised the smart cities paradigm as a corporate-driven strategy for promoting neoliberal agendas (Hollands, 2008; Sadowski and Bendor, 2019; Söderström et al., 2014; Vanolo, 2014) and as techno-reductionist in its claims to solve complex social and environmental problems (Luque-Ayala and Marvin, 2015; Viitanen and Kingston, 2014; Wiig and Wyly, 2016). Nevertheless, smart urbanisation is rapidly being put into practice in a myriad of projects across the world. Against this backdrop, it is worth asking how local imaginaries of the smart grid fit into the logics of 'low-carbon' on the one hand and the global logics of 'smart cities' on the other. The question, therefore, is whether and how visions of smart grids are opening pathways for the achievement of urban low-carbon transitions and how this relates to the logics of 'smart' that might simultaneously be at work. To answer this question, it is important to understand how and by whom smart grid futures are being imagined at the local level. Guiding questions for this research, therefore, were: How are smart grids being locally imagined? Who is promoting these imaginaries? How does this relate to the global smart city paradigm? We conceive of smart grids as sociotechnical infrastructure systems that are deeply entangled with the social, political and cultural shaping of cities, and whose development is driven by visions and imaginaries that nurture certain assumptions about desirable and attainable urban futures. Although the environmental promises associated with smart grids attract many (non-corporate) experts who are intrinsically motivated to make urban energy transitions work, we argue that the dominant imaginaries accompanying the development of smart grid infrastructures at the local level are currently reinforcing the largely uncritical, techno-positivist logics of the global smart city paradigm. Our analysis of three smart grid pilot sites in Germany's capital Berlin reveals that - just as with smart cities - the imaginaries associated with smart grids have become quasi-hegemonic and thus irresistible to urban administrations, businesses and researchers alike. Because smart grids are still at an early stage of development, these emerging imaginaries are currently being advanced by a small community of experts, mostly through involvement in three of Berlin's so-called 'future sites' (Zukunftsorte) - or urban laboratories for developing, testing and showcasing smart grids in the city. By disentangling the imaginaries that are associated with smart grids in the city of Berlin, this article discusses which urban problems smart grids seek to address, critically engages with the solutions that urban smart grids promise to provide and asks questions about who is currently involved in producing and reinforcing these imaginaries in the city. It starts by briefly contextualising our research within existing social and urban studies scholarship on smart grids, followed by an illustration of the conceptual framework of our research approach, including methods of data collection and analysis. It then goes on to discuss the research findings along the lines of the three dominant sociotechnical imaginaries we identified, which link smart grid futures with urban futures. Finally, we conclude and discuss our research results. Background: Smart grid imaginaries and the city Smart grids are challenging the sociotechnical systems that comprise urban electricity grids as we know them. 
Traditionally, urban electricity networks distribute stable loads uni-directionally from a small number of centralised (mostly fossil fuel based) power plants to many local consumers, and are centrally managed and controlled by a few large network operators. By contrast, smart grids are conceived to accommodate fluctuating (renewable) electricity loads, enable flows to and from various decentralised sources, and respond flexibly to customer-specific demand. These features are enabled by an 'energy information system' that coordinates a complex web of producers, consumers and storage units. Visions of the smart grid also involve the integration of infrastructural sectors other than electricity, such as water, gas, heating, cooling, waste management and electric mobility. Together, smart grids therefore offer a cleaner energy system based on more renewable energy sources, more efficient energy use through novel forms of storage and increased user participation through the integration of small-scale units of production. These visions have major implications for the configuration of urban electricity systems. Not least, the ubiquitous dissemination of energy sensors and automatic control mechanisms across urban infrastructures and into urban homes raises questions about the privacy and controllability of urban movement and urban energy flows (Luque-Ayala and Marvin, 2020). Moreover, their dependence on high-speed internet connections could create differences in the quality of energy access and result in new forms of urban fragmentation. In addition, new actors and forms of market participation are challenging traditional governance arrangements, giving rise to novel forms of socio-spatial collaboration, for example in smart urban energy districts. Yet, while a growing body of especially science, technology and society (STS) research has engaged with smart grids as social endeavours (Kumar, 2019) and socio-technical imaginaries (Ballo, 2015; Köktürk and Tokuç, 2017; Skjølsvold and Lindkvist, 2015; Tricoire, 2015), there is still relatively little urban studies literature on the topic (for exceptions see Levenda, 2018; Luque-Ayala, 2014). Social scientific research has found that the production of smart-grid-related imaginaries is often confined to relatively small communities of experts, mostly in the context of bounded sites of experimentation (Engels and Münch, 2015; McLean, 2013). Recent studies have voiced criticism that imaginaries of emerging smart grid infrastructures are depicting largely positivist notions of sustainability, reliability, efficiency, transparency and security (Ballo, 2015; Palensky and Kupzog, 2013; Wentland, 2016), while impeding more comprehensive, critical public debates (Lösch and Schneider, 2017; Luque-Ayala, 2014). Moreover, smart grid experts have been found to communicate mostly positive views of energy system automation, consumer engagement and security of supply to the general public, while hiding their concerns about risks and uncertainties (Luque-Ayala, 2014). Selected empirical case studies have also pointed to the co-constitutive relationship of smart grids and materialised 'politics of urbanism', yet a broad empirically grounded discussion on this relationship is lacking. Urban studies research has focused more generally on the increasing convergence of smart and low-carbon urban imaginaries (Caprotti, 2014; Haarstad, 2017; Haarstad and Wathne, 2019). 
While some of these studies find that the so-called 'smart-sustainability fix' is amplifying ecological modernisation agendas and forms of entrepreneurial urban governance, others have found more nuanced, two-way relations (Haarstad and Wathne, 2019). This scholarship forms part of a broader effort to engage with the situated practices and material realities of the 'actually existing smart city' and how these are playing out in specific contexts, places and ways. This article aims to expand on this literature by exploring what kinds of urban futures are being imagined and implemented through the development of smart grid infrastructures in the city of Berlin, Germany, and how they relate to questions of 'smart cities' on the one hand, and questions of 'sustainable' and 'low-carbon cities' on the other. We argue that in Berlin, imaginaries of a future smart grid city are being co-produced through policies and implementation practices that are mutually reinforcing and which are being nurtured as much by environmental ideals as by the technical solutionism of the smart city. Conceptual framework: Imagined futures and the shaping of urban realities Our analysis is based on the concept of socio-technical imaginaries and the notion that they exert a strong influence on processes of political, social and spatial development in the present. Often these imaginaries are built around conceptions of technological and societal progress, for example of network-induced hygiene in the sanitary city or car-enabled mobility in the modern functionalist city. Urban infrastructures and the imaginaries they inspire anticipate future states, serve as collective visions of a good, desirable future (Böhle and Bopp, 2014; Ferrari and Lösch, 2017; Jasanoff and Kim, 2009; Sand and Schneider, 2017) and thus configure urban reality. Recent scholarship underlines this by showing how science fiction and storytelling are entangled in the making of urban (energy) realities. This work resonates with long-standing debates about visions as goals and methods of urban planning (Shipley and Michela, 2006; Shipley and Newkirk, 1999). In their work on socio-technical imaginaries, Jasanoff and Kim argue that once certain claims about the future are sufficiently widespread, they develop into 'collectively held, institutionally stabilised, and publicly performed visions of desirable futures, animated by shared understandings of forms of social life and social order attainable through, and supportive of, advances in science and technology' (Jasanoff and Kim, 2015: 4). These imaginaries can mask the political interests and power constellations that drive the development of infrastructural systems and act as somewhat fuzzy, implicit, broadly accepted and culturally embedded understandings of the 'good life' or the 'good future' that promote mostly positivist, seemingly value-neutral, apolitical notions of modernity and progress (Jasanoff and Kim, 2015). Whose visions take root in the collective imagination and how this influences what people consider to be 'modern', 'progressive' and 'up-to-date' as opposed to 'backwards' or 'forgotten' then becomes a highly political issue. As McFarlane and Rutherford put it: 'what is often at stake here is not simply the provision of infrastructure, but the conceptualisation of the city' (McFarlane and Rutherford, 2008: 366). 
As Jasanoff and others have shown, future imaginaries only develop this kind of normative force if they are communicated and reinforced through (policy) narratives, images, material manifestations or representations and (public) performances (see Figure 1) that make them 'stick' until they are shared collectively (Hajer and Pelzer, 2018; Jasanoff and Kim, 2015). Visions, therefore, depend on continuous repetition and real-life enactments as a means of perpetuation and diffusion. In the case of smart grids, urban laboratories play an important role in fulfilling this purpose by providing a space for articulating and negotiating socio-technical futures, as well as implementing and showcasing them to a broader public. By means of technology trials, they facilitate new policies, actor coalitions, institutional arrangements and cultures around issues such as energy, mobility and the like, and should therefore be understood as spaces not only for envisioning but also for governing and actively creating the city (Caprotti and Cowley, 2017). In a similar vein, van Lente argues that a cycle of continuous reinforcement can also result in a paradoxical dynamic, producing 'a compelling constellation of promising claims that enforces action in a way that perhaps none of the companies or researchers themselves would have chosen. Participants will reason in terms of "not missing the boat", but the "boat" only exists due to the collective decision not to miss it' (van Lente, 2012: 773). The irrationality and contingency of this process resonates with what the social studies of infrastructural development have called technological 'fetishism' (Kaika and Swyngedouw, 2000; Larkin, 2013). As Brian Larkin argues, technological infrastructures are far from purely rational in an economic or even a technical sense, but 'emerge out of and store within them forms of desire and fantasy and can take on fetish-like aspects that sometimes can be wholly autonomous from their technical function' (Larkin, 2013: 329). Imagined socio-technical futures, therefore, carry much more than the relatively mundane promise of solving an engineering problem; they are intermingled with emotions of awe and hope that can be highly seductive. Methodological approach This article investigates the future imaginaries promoted through smart grids in urban development and implementation circles. These imagined futures manifest themselves in discourse. We base our analysis on the sociology of knowledge approach to discourse (SKAD), which understands discourses as narrative and material processes of sense-making that create social reality. SKAD emphasises the importance of practices, materialities and infrastructures as integral parts of these sense-making processes and thus as objects of analysis. Most importantly, however, it recognises that discourse is the place where 'creativity, interpretation, fantasy, imagination and desire come to the fore' (Keller and Truschkat, 2013: 35). To understand how smart-grid futures are being imagined in Berlin, we analysed the smart-grid-related narratives as well as their public performance and material representation at three spatial levels in the city: three smart grid implementation projects, including selected institutions, companies and/or individuals involved; three so-called future sites (Zukunftsorte) or urban laboratories, which host these smart grid projects; and Berlin's political administration as well as relevant institutions and companies working in the field of smart grids in Berlin. 
We traced these imagined smart grid futures in documents and through interviews with key actors involved with smart grids at all three levels. We analysed a total of 42 publicly available policy documents and grey literature sources such as laws, strategy papers, reports, policy briefs, company websites, advertisements and informational brochures (see overview in the Appendix). We complemented our document analysis with a total of 16 in-depth, semi-structured interviews that lasted approximately one hour each (see overview in the Appendix) and were conducted with experts from Berlin's energy, ICT and urban development sectors. Overall, our data cover material from the city government and administration, the electric grid operator, the newly founded public services company, two civil society organisations, the local energy agency, two electronics companies, two project development companies and various research institutions. Based on SKAD's analytical framework, we then systematically coded all documents and interviews in MAXQDA and identified common frames, classifications and phenomenal structures, which resulted in three dominant storylines relating smart grids to the city. We call these storylines Berlin's imagined smart grid futures. Berlin as case study city The city of Berlin has set ambitious goals for becoming a leading 'smart' and 'green' European metropolis. In doing so, the city is attempting to position itself as a frontrunner in the advancement of Germany's Energiewende and a global competitor in the field of digital industries. These aspirations are based, among others, on the city's growing self-confidence as Germany's start-up capital. After a long phase of economic stagnation following the city's reunification, the prospect of developing leadership in a growing industrial field is being embraced by the city government as an opportunity to secure competitive, well-paying jobs. In 2015, the government passed a Smart City Strategy (Berlin Senate, 2015b) that details how it aims to support the equipping of numerous areas of urban life with digitised technologies over the coming years. This strategy has since been complemented by a less formalised digital agenda, which outlines the city's approach to confronting the so-called digitisation challenge. In 2014 and 2015 the city administration also commissioned two studies called Climate-Neutral Berlin 2050 and New Energy for Berlin (Enquete-Kommission, 2015), which were translated between 2016 and 2018 into a binding local Energy Transitions Law (Berlin Senate, 2016b) and the related Energy and Climate Protection Programme 2030 (Berlin Senate, 2016c). These programmes and strategies all emphasise the necessity of digitising the city's electric grid infrastructure. In the past few years, civil society organisations have also gained influence in the politics of Berlin's electricity grid. Since 2014, they have effectively campaigned to reinstate public ownership of the grid. In doing so, these citizen-led initiatives have put Berlin's electric grid back on the political agenda, turning electricity infrastructure into a highly politicised, highly disputed issue. Yet, while struggles over grid ownership have gained significant public and political attention, questions of digitising the grid or 'making it smart' are not among the top priorities of these initiatives and have remained largely under the popular radar. 
Meanwhile, Berlin's urban administration has designated ten so-called 'future sites' (Zukunftsorte) for pioneering and showcasing different kinds of novel digital technologies, at least three of which are dedicated - among other things - to the development of smart grids. These are the EUREF Campus, the Technology Park Adlershof and the TXL Urban Tech Republic (see Figure 2). At these sites, different stakeholders collaborate to develop, test and practically implement pilot versions of smart grid technologies under 'real-life' conditions. These expert coalitions include researchers, ICT companies, project developers, utilities, energy start-ups and consumers. Along with the city's policies and strategies, Berlin's future sites have thus become important spaces for negotiation and exchange, providing those involved with an opportunity for envisioning and making the 'smart grid city'. While the projects at EUREF Campus and Technology Park Adlershof are well underway, implementation activities at TXL Urban Tech Republic have been stalled because of problems with the project site - the city's current airport. Instead of being replaced in 2012 as originally planned, the airport remains in use and TXL Urban Tech Republic continues in a state of seemingly never-ending expectation: always on the brink of realisation but never implemented. The material gathered in relation to this site is therefore informed by plans and aspirations rather than the details of actualisation. The smart grid projects on the three sites focus on different technologies and processes (see Figure 2). Results: Imagining and making smart grids in Berlin Our findings reveal three dominant imaginaries that relate smart grid technologies to the city, promoting them as (a) an environmental necessity for advancing Berlin's local Energiewende, (b) an economic imperative to secure Berlin's future as a thriving metropolis and (c) an exciting experimental challenge to modernise the city's infrastructure. Overall, smart grid technologies evoke a fuzzy but enticing urban imaginary that merges technological optimism with fantasies of economic achievement and environmental health. Among others, this fuzzy imaginary of a future smart grid city promotes a modern, eco-progressive 'Zeitgeist' that blurs the lines between the means and ends of 'smart': does Berlin need to advance the smart city to advance its smart grid? Or does it need a smart grid to become a smart city? Our findings show that Berlin's modern, eco-progressive smart grid imaginary is being mutually reinforced by urban policy narratives and corporate marketing strategies on the one hand and by research and implementation practices on the other. This co-constitutive process of imagining and making the smart grid city is driven by a relatively small circle of experts. While urban policy experts and corporate professionals are primarily using smart grids as a marketing tool to attract businesses and professionals, researchers at the implementation level are mostly committed to smart grids in a genuine effort to contribute technological solutions to Germany's Energiewende. Together, they are imagining and enacting an urban future that is driven by techno-optimism, built on a few people's perspectives, lacks critical negotiation and is strongly embedded in the economic opportunities associated with the smart city. 
Smart grids as environmental necessity for advancing Berlin's Energiewende Berlin's urban and energy policies primarily depict smart grid technologies as a necessary prerequisite for achieving Berlin's local Energiewende. This expectation goes hand in hand with an increasing overall reliance on technological development to solve urban environmental problems. In Berlin, imaginaries of low-carbon urban futures are becoming increasingly interwoven with imaginaries of 'smart' technological progress, merging notions of environmental consciousness with notions of high-tech development and digital sophistication. Among others, the current city government's energy policies aim to help advance the city's Smart City Strategy and turn Berlin into a 'Smart Energy City' (Berlin Senate, 2016a). The Smart City Strategy, in turn, describes the development of 'intelligent' supply infrastructures as its 'backbone' (Berlin Senate, 2015b). Similarly, a report commissioned by the urban administration in 2015 entitled 'New Energy for Berlin' states that Berlin should introduce smart grids 'so it can become a "Smart City" that contributes to the Energiewende' (Enquete-Kommission, 2015). The 'smartification' of electricity grids is therefore not only being justified with energy-related goals, but also with the vague and overarching aim of digitising urban life in general. The Masterplan Energy Technology Berlin-Brandenburg further underlines this by stating that 'energy is part of an interconnected smart city and region' (Clustermanagement Energietechnik Berlin-Brandenburg, 2017). This shows how closely Berlin's urban policies and programmes link imaginaries of resource-efficiency and sustainability with notions of digitisation and vice versa. They portray the interface between energy and ICTs as a natural and inevitable process that goes hand in hand with the increasing digitisation of everyday life. By linking the smart city to local energy transitions, smart technological solutions are being depicted not only as healthy and clean but also as part of a response to the pressing global challenge of climate change and thus as a seeming moral imperative. Concomitantly, these urban development narratives are systematically linking imaginaries of the smart city to notions of climate-friendliness and sustainability, describing the smart city of Berlin as 'resource-efficient' (Erbstöer and Müller, 2017), 'post-fossil' (Berlin Senate, 2015a), 'ecologically modernised' and 'green' (Berlin Senate, 2016a). In Berlin's local policies, low-carbon transitions are therefore imagined to be inherently 'smart' and smart cities are imagined to be 'low-carbon'. The seemingly inevitable connection between technology and environmental protection is being strengthened by smart grid imaginaries at the city's future sites. TXL Urban Tech Republic, for example, advertises that 'we need new solutions for mobility, for energy and for resources. And we need new materials and intelligent systems to make these solutions possible. We need Urban Technologies. Technologies for the cities of tomorrow' (Tegel Projekt GmbH, 2015). According to this advertisement, there seem to be no alternative 'solutions' to technological advancement. Moreover, these technologies are claimed to be 'what will keep alive the growing metropolitan centres of the 21st century' (Tegel Projekt GmbH, 2018), and are thus depicted as a fundamental prerequisite for sheer survival. 
The same is true for the EUREF Campus, which claims to provide solutions not only for the 'intelligent transformation of the energy sector' but also for the intelligent city: We are discussing the global context, how to design the future intelligent city? And a smart grid is part of that. (Personal interview, researcher at EUREF Campus, 2017) Here, too, smart grids are depicted as an 'intelligent' and necessary means of urban environmental protection. Only one interview partner in Berlin, notably from an environmental NGO, actually looked into alternatives, asking: What is the goal of smart grids? If the goal of smart grids is, let's say, climate protection, which is actually our overarching goal; and climate protection in terms of energy use means avoidance, efficiency, and the rest renewable; then I think there are a lot of good alternatives. You don't need the intelligent house; it's a question of habits and how to address habits. (Personal interview, 2018) There is a growing debate over how the smart grid should finally look, what it should do and how it should be understood. Although smart grid technologies are (to some extent) necessary for integrating renewables at scale, contrary to dominant smart and low-carbon imaginaries the growing reliance on digitised technologies is significantly increasing overall electricity consumption and resource use and is therefore counteracting long-term environmental objectives (Lange and Santarius, 2018: 146). Smart grids as an economic imperative to secure Berlin's future as a thriving metropolis Berlin's city administration also depicts smart grids as an attractive opportunity for boosting the low-carbon economy, evoking visions of a thriving and industrialised, yet post-fossil urban future (Berlin Senate, 2015a). The current government underlines this by stating that 'a smart city, an intelligent city, is able to increase growth while decreasing resource use' (Berlin Senate, 2016a: 51). Among others, smart grids are envisaged to 'increase industrial value generation, expand technological expertise, create new jobs and increase urban quality of life' (Berlin Senate, 2015b: 28). These promises are built to a large degree on Berlin's existing strengths in the fields of research and digital industries. As well as hosting numerous renowned research institutions, Berlin has become Germany's leading hub for the (digital) start-up scene. The urban administration therefore views smart grid technologies as a way to combine the city's socio-economic capital with its energy transformation goals and to lead it into a 'green' economy: The Energiewende offers Berlin's businesses unique opportunities on the future markets of a resource-efficient economy based on renewable energies. The extension and advancement of an intelligent electricity grid, smart grid, are important technological challenges that Berlin is especially suited for due to its combination of scientific research and industry. (Berlin Senate, 2015b: 26) The city's future sites advertise the same combination. At EUREF, the project development company states that 'we all benefit from this topic; we benefit, the companies benefit, and the idea behind it does too' (personal interview, project development company, 2016). And then adds: I want to prove that what we are doing here is not more expensive than what we have now. The Energiewende will only succeed if customers don't end up paying more. Maybe even pay less. 
I think that this is a commercial project that we are doing here. (Personal interview, project development company, 2016) This corporate actor therefore depicts smart grids as an economic opportunity that will help the Energiewende, not the other way around. Similarly, large businesses involved in Berlin's future sites are primarily driven by the opportunity to expand into an emerging market: Suddenly the grid becomes a huge data project, and that makes it interesting for us. Wherever data packages are transmitted based on internet protocols, independent of whether it's video live streams or stock market data or private emails, we don't really care what it is, as long as it's a lot. That pretty much sums up our interests. (Personal interview, ICT/electronics company, 2017) Not surprisingly, large ICT companies are participating in Berlin's future sites primarily because they see a chance to increase their specialised knowledge and turn it into standardised products that can be transferred to multiple systems and situations. They are especially interested in devising 'cookie-cutter' solutions and developing them into mass products (personal interviews, ICT/electronics companies, 2016, 2017). At the same time, these optimistic, forward-looking narratives are also built around a number of fears. They convey a strong sense of urgency and inevitability that depicts smart grids as progressive technologies that are not only necessary but also without alternative. Berlin's digital agenda, for example, describes digital technologies as Berlin's 'only chance' at securing its economic competitiveness. There is a sense that Berlin needs to 'catch up' both in environmental and in technological terms (personal interviews, project development company at TXL and public energy agency). This is echoed by experts from Berlin's future sites: New York is ahead; Amsterdam and Copenhagen are also ahead of Berlin in many points. They have a more flexible administration that isn't so stuck in the 80s and 90s as it is here, isn't as ideological, more pragmatic. (Interview, TXL Urban Tech Republic, 2017) Urban policy makers, researchers and businesses alike are conveying a sense that digitisation is coming and that Berlin can either keep up with the pace of technological development or lose out in the race for global competitiveness. Asked about possible alternatives, an expert from the city's network operator responds: 'Adobe huts. Then we won't need electricity, we won't need hot water; it'll be one cold shower a week. Of course, then we'll use much less energy per person, but I don't know if that's really the path Germany wants to take' (personal interview, network operator, 2018). Smart grids, in this expert's view, are needed to avoid regression, underdevelopment and cold. The city of Berlin, in this reading, has to make a choice between being a pioneer or a loser, a world-class competitor or a poorhouse. There seems to be no middle ground and no time for considering possible risks or alternatives. The smart grid as an exciting experimental challenge These visions are met with positive notions of smart grids as an exciting collaborative challenge and an interesting opportunity for techno-scientific experimentation. Researchers, engineers and businesses are all highly motivated to 'make the Energiewende work' (personal interviews with researchers at Adlershof, EUREF and TXL), while their efforts are largely removed from broader social or urban development considerations. 
Instead, most engineers are driven by a sense of being at the cutting edge of research and development and by an interest in advancing and exploiting the full potential of existing technological possibilities (personal interviews with researchers at Adlershof, EUREF and TXL). They are motivated by a strong belief in the necessity of integrating more renewables into the city's energy system and by the prospect of contributing to global climate protection. Moreover, they view their work as an exciting possibility to build an attractive, interesting, modern and highly functional technology, thinking only marginally about risks or social consequences (personal interviews, researchers at Adlershof and EUREF). Among other things, they view smart grid technologies as 'stylish' (personal interview, public service provider, 2018), 'sexy' (personal interview, project development company at TXL, 2017), 'progressive' (personal interview, researcher at EUREF, 2017) and 'cool' (personal interview, researcher at Adlershof, 2017). These attributes stand in stark contrast, for example, to questions of costs, which they perceive as mundane and reactionary (ewig gestrig) (personal interview, ICT entrepreneur at EUREF, 2016). While the city government is aware of costs, it too regards smart grids as a 'sexy' technology that small and medium-sized enterprises need to be convinced of (personal interview, Berlin Senate Department for Economics, Energy and Public Enterprises, 2018). Most engineers and researchers involved in Berlin's future sites view smart grids as a personal opportunity for creating something new, and the Energiewende thus takes on the quality of being 'the next big thing' in technological advancement. As the city government designates more and more spaces as experimental urban laboratories, these spaces are becoming important sites of urban (energy) governance, where Berlin's urban futures are not only imagined but materialised (Castán Broto and Bulkeley, 2013; Engels and Münch, 2015; Hoffman, 2011). In Berlin, these laboratories are explicitly envisioned as places for advancing 'urban Energiewende innovations' (Berlin Senate, 2016c: 32), such as virtual power plants, heating and cooling networks, vehicle-to-grid technologies or other (micro-)smart grid technologies. The city government is marketing them as spaces for pioneering technological advancement and offering cutting-edge research and development opportunities. These sites are supposed to 'make Berlin future-proof, shape its economic profile, and increase its international visibility' (Berlin Senate, 2015a: 54). They are depicted as 'hot spots' and 'innovation spaces' (Berlin Senate, 2018) for showcasing urban energy technologies to the world and increasing Berlin's global competitiveness (Berlin Senate, 2015a). Adlershof even boasts of being Berlin's Silicon Valley. Beyond their function as local testbeds, these sites are conceived as 'lighthouses' and shining examples with an outreach and impact far beyond the region (TSB Technologiestiftung Berlin, 2012: 26). In other words, they are explicitly designed to provide development impulses for the broader city and region. A brochure advertising TXL Urban Tech Republic underlines this by saying that 'energy transformation policy is not only decided here; it is made here' (Tegel Projekt GmbH, 2015: 13). However, Berlin's urban laboratories are designed for an exclusive urban business and research establishment, catering to the young, creative, intelligent, cosmopolitan elite. 
They invite 'students, entrepreneurs, industrialists and researchers' to 'learn from one another and come up with new ideas together' in a joint 'democratic ambition' for making 'the cities of the future' (Tegel Projekt GmbH, 2015). Urban scholarship has shown that urban laboratories are often designed as privileged sites of formalised knowledge production that favour certain actors and interests over others (Evans and Karvonen, 2014). More often than not, 'the social aspects of urban development and issues that do not fit into the nexus of economic development and environmental protection are largely ignored' (Evans and Karvonen, 2014: 425). In Berlin, experimentation with smart grids has likewise been confined to a relatively small community of experts, mostly from the business and research domains. Interaction with the public is limited to showrooms that explain certain energy technologies and visualise flows, but regular citizens are not part of the projects. This raises important questions about who gets to develop the city of the future and whose imaginaries are part of the process. In Berlin, this is currently a mix of researchers, engineers and business people - but hardly any citizens. Discussion and conclusion This article has attempted to disentangle and critically discuss dominant imaginaries of the future smart grid city and how they are being (co-)produced in Berlin's policy and implementation circles. We identify three dominant imaginaries that depict the smart grid city as a progressive, eco-friendly, economically thriving, attractive and liveable city of the future that is largely without alternatives and also without risks. We have shown that these dominant urban imaginaries merge notions of techno-scientific progress (most notably digitalisation) with the achievement of Berlin's urban energy transition, thus latching onto the techno-positivist gravitational pull of Berlin's smart city paradigm. Put differently, these imaginaries depict urban smart grid technologies as a necessary prerequisite for developing Berlin into a low-carbon city on the one hand and a smart city on the other, making ICT implementation seem like a natural and inevitable process (i.e. 'the smart city will have smart grids' (Erbstöer and Müller, 2017: 11)). Moreover, we have shown that these imaginaries are in part driven by a sincere interest in making Berlin's energy transition work but also in part by economic concerns and the pure thrill of spearheading technological development. They thus emphasise promises of economic competitiveness and (global) leadership over risks and vulnerabilities. Moreover, we have shown that in Berlin, dominant imaginaries of the smart grid city remain largely uncontested. Instead, the combined promises of the smart grid city are being pursued and marketed by Berlin's urban policymakers, researchers and businesses alike, be they from the energy, ICT or urban development sectors. We argue that the imaginaries that are created, reproduced and publicly promoted through urban laboratories are thus reinforcing what the city government is promoting in its policies and vice versa, and that a broader, more inclusive and possibly controversial debate is lacking. We draw three main conclusions from these findings. First, imaginaries of the future smart grid city are not only fuelled by urban (energy) policy but also gain traction through material manifestations in urban laboratories. 
In Berlin, this co-productive process of mutual reinforcement has created a spiral of reciprocal encouragement and affirmation rather than controversial debate or critical scrutiny. Smart grids have arguably taken on the fetish-like qualities of a technological fix or a 'boat' that is not to be missed, rather than being weighed as one of various possible means to an end. We are critical of the way these imaginaries foreclose debate about other pathways towards low-carbon urban development, such as digitally sufficient alternatives (Lange and Santarius, 2018), and of the way techno-scientific and economic rationalities conceal the transformative potential of challenging incumbent infrastructural arrangements, for example through commoning or citizen participation (Parks and Rohracher, 2019). Berlin's smart grid development is therefore an example of how positivist imaginaries can serve as catalysts for technological change, but largely without reflecting on the complex, interconnected, imperfect and very human realities of urban existence. Second, current smart grid imaginaries are emphasising (possible) technological benefits instead of weighing them against the environmental costs of technological expansion or the risks of digitally born vulnerabilities. They also convey a sense of fear and urgency that barely tolerates opposition. With the rising use of ICT devices, data traffic and data centres are responsible for increasing energy consumption. In policies, implementation projects and the minds of local stakeholders, risks are rarely mentioned, and only in a vague and unspecific way. Only a few critical voices or alternative futures are making themselves heard in the city of Berlin. Issues such as supply security, data security and cyber security are mentioned as necessary prerequisites for smart grid implementation, yet they do not feature as part of the project design. Instead, possible costs are perceived as the most important 'risk' or obstacle to smart grid implementation. Urban policies should engage more in discussions about the risks, environmental impacts and implications for inclusive urban development when it comes to smart grid implementation projects, instead of advocating material-intensive smart grid futures as the unalterable solution that will solve all urban energy challenges we are currently facing. And third, Berlin's smart grid city imaginaries are being promoted by a relatively small community of experts, not least because urban laboratories are limiting - instead of encouraging - necessary public debate. Currently, Berlin's future sites are being marketed as showcases for new technological developments, and urban space is painted as an experimental playground for engineers and tech-enthusiasts to pursue these inspiring high-tech innovations. Instead, urban laboratories could be designed to include a broad cross-section of urban actors, notably also citizens, civil society organisations and planners. On a more general level, our study shows how the interplay of smart grid narratives and implementation practices at urban laboratories (i.e. policy narratives, corporate marketing strategies, research and development initiatives) can mutually reinforce each other to produce certain dominant imaginaries of urban smart grid futures at the expense of more nuanced, comprehensive, possibly controversial discussions. 
We hope that these lessons might inform the design of experimental sites and smart grid projects in other cities, so that they may become places for inclusive, controversial and democratic discussion and thus potential catalysts for urban change. Declaration of conflicting interests The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. Funding The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The contributions of the second author, Friederike Rohde, were funded by the Federal Ministry of Education and Research (BMBF) under Grant Agreement No. 01UU1607B within the framework of socio-ecological research (SÖF). |
The Westwood Centre is a school based in Margate. It is part of the Enterprise Learning Alliance Pupil Referral Unit. Pupils are referred to the Westwood Centre by mainstream schools from the Thanet area.
Pupils are referred to the school to help them re-engage with education and gain appropriate GCSE and vocational qualifications in a broad range of subjects.
The Westwood Centre has a strong teaching and learning team with a record of providing a high standard of education for vulnerable pupils. The school has a proven track record of securing good academic and pastoral outcomes for pupils in the locality.
The Westwood Centre Manager will be responsible for subject delivery and for managing the teaching and support staff in the school. The Centre Manager will lead by example in effectively managing behaviour, pupil progress, and teaching and learning, and will build positive relationships with local schools and support agencies, as well as contributing to the development of the school's vocational and academic curriculum.
The role also involves leading and modelling good practice in teaching and learning and planning, maintaining quality in recruitment, training and performance management, and ensuring continuous professional development and improvement in service delivery.
For more information please visit the school website via the button below. |
// repo: danielc92/robust-react-ui, file: src/components/Icon/Mail/Mail.tsx
// this file was auto generated by python script
// this icon was sourced from feather icons 4.2.8
import React from 'react';
const Mail = () => (
<>
<path d="M4 4h16c1.1 0 2 .9 2 2v12c0 1.1-.9 2-2 2H4c-1.1 0-2-.9-2-2V6c0-1.1.9-2 2-2z"></path><polyline points="22,6 12,13 2,6"></polyline>
</>
);
export default Mail;
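// Usage sketch (not part of the original file): the component above renders only the
// <path>/<polyline> shapes, so it presumably needs to be placed inside a parent <svg>
// element that supplies the viewBox and stroke settings. The wrapper below is an
// assumption for illustration; the attribute values mirror feather's usual defaults.
//
// import React from 'react';
// import Mail from './Mail';
//
// const MailIcon = () => (
//   <svg
//     xmlns="http://www.w3.org/2000/svg"
//     width={24}
//     height={24}
//     viewBox="0 0 24 24"
//     fill="none"
//     stroke="currentColor"
//     strokeWidth={2}
//     strokeLinecap="round"
//     strokeLinejoin="round"
//   >
//     <Mail />
//   </svg>
// );
//
// export default MailIcon;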
|
package au.com.nicta.ssrg.pod;
/**
 * Creates {@link AssertionLogger} instances. The active creator can be swapped out
 * (for example with a subclass or test double) via {@link #setCurrentCreator}.
 */
public class AssertionLoggerCreator {
    /** Creates an AssertionLogger for the given assertion using the current creator. */
    public static AssertionLogger create(Assertion assertion) {
        return currentCreator.createInstance(assertion);
    }

    /** Replaces the creator used by {@link #create}. */
    public static void setCurrentCreator(AssertionLoggerCreator creator) {
        currentCreator = creator;
    }

    /** Subclasses may override this to customise how loggers are instantiated. */
    protected AssertionLogger createInstance(Assertion assertion) {
        return new AssertionLogger(assertion);
    }

    private static AssertionLoggerCreator currentCreator = new AssertionLoggerCreator();
}
|
Factors contributing to decreased protein stability when aspartic acid residues are in sheet regions Asp residues are significantly under represented in sheet regions of proteins, especially in the middle of strands, as found by a number of studies using statistical, modeling, or experimental methods. To further understand the reasons for this under representation of Asp, we prepared and analyzed mutants of a domain. Two Gln residues of the immunoglobulin lightchain variable domain (VL) of protein Len were replaced with Asp, and then the effects of these changes on protein stability and protein structure were studied. The replacement of Q38D, located at the end of a strand, and that of Q89D, located in the middle of a strand, reduced the stability of the parent immunoglobulin VL domain by 2.0 kcal/mol and 5.3 kcal/mol, respectively. Because the Q89D mutant of the wildtype VLLen domain was too unstable to be expressed as a soluble protein, we prepared the Q89D mutant in a triple mutant background, VLLen M4L/Y27dD/T94H, which was 4.2 kcal/mol more stable than the wildtype VLLen domain. The structures of mutants VLLen Q38D and VLLen Q89D/M4L/Y27dD/T94H were determined by Xray diffraction at 1.6 resolution. We found no major perturbances in the structures of these Q→D mutant proteins relative to structures of the parent proteins. The observed stability changes have to be accounted for by cumulative effects of the following several factors: by changes in mainchain dihedral angles and in sidechain rotomers, by close contacts between some atoms, and, most significantly, by the unfavorable electrostatic interactions between the Asp side chain and the carbonyls of the main chain. We show that the Asn side chain, which is of similar size but neutral, is less destabilizing. The detrimental effect of Asp within a sheet of an immunoglobulintype domain can have very serious consequences. A somatic mutation of a strand residue to Asp could prevent the expression of the domain both in vitro and in vivo, or it could contribute to the pathogenic potential of the protein in vivo. |
import io.jsonwebtoken.Jwts;

import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Mock;
import org.mockito.junit.MockitoJUnitRunner;

import java.util.Random;

import static org.assertj.core.api.Assertions.assertThat;
import static org.junit.Assert.assertArrayEquals;
import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertSame;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.ArgumentMatchers.argThat;
import static org.mockito.Mockito.doAnswer;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

// SecureKey, SecureKeyResolver, ConfigurationEntryStore and ConfigurationEntryStoreFactory
// are project-specific classes; their imports are not shown in this excerpt.

/**
 * Unit tests for SecureKeyResolver.
 *
 * @author Sebastian Sdorra
 */
@RunWith(MockitoJUnitRunner.class)
public class SecureKeyResolverTest
{
/** getSecureKey should create a key when none is stored and return the cached key on subsequent calls. */
@Test
public void testGetSecureKey()
{
SecureKey key = resolver.getSecureKey("test");
assertNotNull(key);
when(store.get("test")).thenReturn(key);
SecureKey sameKey = resolver.getSecureKey("test");
assertSame(key, sameKey);
}
/**
 * Signing key bytes are resolved from the stored key for the claims subject.
 */
@Test
public void testResolveSigningKeyBytes()
{
SecureKey key = resolver.getSecureKey("test");
when(store.get("test")).thenReturn(key);
byte[] bytes = resolver.resolveSigningKeyBytes(null,
Jwts.claims().setSubject("test"));
assertArrayEquals(key.getBytes(), bytes);
}
/**
 * A new key is generated when none is stored; the mocked Random writes 42 as its first byte.
 */
@Test
public void testResolveSigningKeyBytesWithoutKey()
{
byte[] bytes = resolver.resolveSigningKeyBytes(null, Jwts.claims().setSubject("test"));
assertThat(bytes[0]).isEqualTo((byte) 42);
}
/**
 * Resolving signing key bytes without a claims subject throws an IllegalArgumentException.
 */
@Test(expected = IllegalArgumentException.class)
public void testResolveSigningKeyBytesWithoutSubject()
{
resolver.resolveSigningKeyBytes(null, Jwts.claims());
}
//~--- set methods ----------------------------------------------------------
/**
 * Wires the resolver with a mocked configuration entry store and a deterministic Random.
 */
@Before
public void setUp()
{
ConfigurationEntryStoreFactory factory = mock(ConfigurationEntryStoreFactory.class);
when(factory.withType(any())).thenCallRealMethod();
when(factory.<SecureKey>getStore(argThat(storeParameters -> {
assertThat(storeParameters.getName()).isEqualTo(SecureKeyResolver.STORE_NAME);
assertThat(storeParameters.getType()).isEqualTo(SecureKey.class);
return true;
}))).thenReturn(store);
Random random = mock(Random.class);
doAnswer(invocation -> ((byte[]) invocation.getArguments()[0])[0] = 42).when(random).nextBytes(any());
resolver = new SecureKeyResolver(factory, random);
}
//~--- fields ---------------------------------------------------------------
/** Resolver under test. */
private SecureKeyResolver resolver;
/** Mocked key store. */
@Mock
private ConfigurationEntryStore<SecureKey> store;
} |
import {getInput, error, setFailed} from '@actions/core';
import {context, getOctokit} from '@actions/github';
import {uniq} from './util'
type Octokit = ReturnType<typeof getOctokit>;
export enum FileStatus {
added = 'added',
modified = 'modified',
removed = 'removed',
renamed = 'renamed',
}
export enum LabelType {
community = 'community',
documentation = 'documentation',
massChanges = 'mass changes',
newCommand = 'new command',
pageEdit = 'page edit',
tooling = 'tooling',
translation = 'translation',
waiting = 'waiting',
}
export interface PrFile {
filename: string;
/**
* The previous filename of the file exists only if status is renamed.
*/
previous_filename?: string;
status: FileStatus;
}
export interface PrLabel {
name: string
}
export interface PrMetadata {
labels: PrLabel[]
}
const communityRegex = /^MAINTAINERS\.md$/;
const documentationRegex = /\.md$/i;
const mainPageRegex = /^pages\//;
const toolingRegex = /\.([jt]s|py|sh|yml)$/;
const translationPageRegex = /^pages\.[a-z_]+\//i;
const getChangedFiles = async (octokit: Octokit, prNumber: number) => {
const listFilesOptions = octokit.rest.pulls.listFiles.endpoint.merge({
owner: context.repo.owner,
repo: context.repo.repo,
pull_number: prNumber,
});
return octokit.paginate<PrFile>(listFilesOptions);
};
const getPrLabels = async (octokit: Octokit, prNumber: number): Promise<string[]> => {
const getPrOptions = octokit.rest.pulls.get.endpoint.merge({
owner: context.repo.owner,
repo: context.repo.repo,
pull_number: prNumber,
});
const prResponse = await octokit.request<PrMetadata>(getPrOptions);
return uniq(prResponse.data.labels.map((label) => label.name));
};
const addLabels = async (
octokit: Octokit,
prNumber: number,
labels: string[],
): Promise<void> => {
await octokit.rest.issues.addLabels({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: prNumber,
labels,
});
};
const removeLabels = async (
octokit: Octokit,
prNumber: number,
labels: string[],
): Promise<void> => {
await Promise.all(
labels.map((name) =>
octokit.rest.issues.removeLabel({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: prNumber,
name,
})
)
);
};
export const getFileLabel = (file: PrFile): string|null => {
if (mainPageRegex.test(file.filename) || (file.previous_filename && mainPageRegex.test(file.previous_filename))) {
if (file.status === FileStatus.added) {
return LabelType.newCommand;
}
if ([FileStatus.modified, FileStatus.removed, FileStatus.renamed].includes(file.status)) {
return LabelType.pageEdit;
}
}
if (translationPageRegex.test(file.filename) || (file.previous_filename && translationPageRegex.test(file.previous_filename))) {
return LabelType.translation;
}
if (communityRegex.test(file.filename)) {
return LabelType.community;
}
if (documentationRegex.test(file.filename)) {
return LabelType.documentation;
}
if (toolingRegex.test(file.filename)) {
return LabelType.tooling;
}
return null;
};
export const main = async (): Promise<void> => {
const token = getInput('token', { required: true });
const prNumber = context.payload.pull_request?.number;
if (!prNumber) {
console.log('Could not determine PR number, skipping');
return;
}
const octokit: Octokit = getOctokit(token);
const changedFiles = await getChangedFiles(octokit, prNumber);
const labels = uniq(
changedFiles.map(file => getFileLabel(file)).filter((label) => label !== null) as string[]
);
const prLabels = await getPrLabels(octokit, prNumber);
const labelsToAdd = labels.filter((label) => !prLabels.includes(label));
const extraPrLabels = prLabels.filter((label) => !labels.includes(label));
if (labelsToAdd.length) {
console.log(`Labels to add: ${labelsToAdd.join(', ')}`)
await addLabels(octokit, prNumber, labelsToAdd);
}
if (extraPrLabels.includes(LabelType.waiting)) {
console.log(`Labels to remove: ${LabelType.waiting}`)
await removeLabels(octokit, prNumber, [LabelType.waiting]);
}
};
export const run = async (): Promise<void> => {
try {
await main();
} catch (err) {
error(err as Error);
setFailed((err as Error).message);
}
};
|
#pragma once
enum class Directions { UP = 0, DOWN = 1, LEFT = 2, RIGHT = 3 };
enum class InputKeys { UP = 0, DOWN = 1, LEFT = 2, RIGHT = 3, MOUSELEFT = 4, MOUSERIGHT = 5, E = 6, TAB = 7, Q = 8, X = 9 };
enum class Others { RESET = -1 };
enum class Weapons { MELEE = 0, DISTANCE = 1 };
const int MAP_WIDTH = 50;
const int MAP_HEIGHT = 40;
const int ITEMS_AMOUNT = 49;
const int MAX_ENEMIES_NUMBER = 15;
enum class Mode { MENU = 0, GAME = 1, EXIT = 2 };
|
Last week, science journalists reported on the discovery of a "nearby alien planet" that might be "capable of supporting life." Ross 128b lies approximately 11 light-years from earth and, according to a new study, "is likely a rocky and temperate world" that "could potentially have liquid water on its surface." Ross 128b appears to lie within the "habitable zone" of its star and boasts a surface "equilibrium temperature" of about 70 degrees Fahrenheit. Reports involving planets like Ross 128b typically ignite the imagination of ET enthusiasts and science fiction writers, but should this latest discovery provide real hope that we aren't alone in the universe?
There are very good reasons to believe no other intelligent life exists in the universe, despite the vast number of stars and planetary systems. In 1961, astronomer Frank Drake established an equation which may explain why we haven't yet detected intelligent alien life in the universe. His equation (N=R*•fp•ne•fl•fi•fc•L) multiplies the rate at which stars are formed in our galaxy, by the smaller number that are orbited by planets, by the tiny fraction of planets that could sustain life, by the few that might potentially support intelligent life, by the even fewer number of civilizations that could exist long enough to build technology, by the length of time such a civilization would spend sending signals (or traveling) into space.
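To make the arithmetic concrete, here is a minimal Python sketch of the Drake equation; every parameter value in it is an illustrative assumption, not a figure taken from the studies discussed in this article.

# Minimal sketch of the Drake equation N = R* * fp * ne * fl * fi * fc * L.
# All parameter values are illustrative assumptions, not figures from the
# studies cited in the surrounding text.

def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Expected number of detectable civilizations in our galaxy."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

n = drake(
    r_star=1.5,     # stars formed per year in the galaxy (assumed)
    f_p=1.0,        # fraction of stars with planets (assumed)
    n_e=0.2,        # habitable planets per planetary system (assumed)
    f_l=0.1,        # fraction of habitable planets where life arises (assumed)
    f_i=0.01,       # fraction of those that develop intelligence (assumed)
    f_c=0.1,        # fraction that emit detectable signals (assumed)
    lifetime=1000,  # years a civilization stays detectable (assumed)
)
print(f"Expected communicating civilizations: {n:.5f}")  # 0.03000
# If any single factor is effectively zero, N collapses to zero as well.

Pushing even one of these factors toward zero drives the whole product toward zero, which is the point the next paragraph makes.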
You don't have to be a scientist to recognize what this equation says about the likelihood of finding intelligent life in the universe. If any one of these variables is zero, there is no chance there is any intelligent life in the universe other than our own. Unsurprisingly, the slim odds of finding extraterrestrial intelligence were affirmed when a new study was published this month examining Drake's equation in light of the latest data available for each variable.
According to the researchers, there's a 53% to 99.6% chance that we are the only intelligent life in the galaxy and a 39% to 85% chance we're the only intelligent beings in the observable universe. Anders Sandberg, one of the scientists behind the new research, has been quoted as saying, "There is a pretty decent chance we are alone, given what we know, even if we are very optimistic about alien intelligence."
Why are the odds so small? Because the requirements for life on any planet in the universe are extraordinarily high.
As I describe in God's Crime Scene: A Cold-Case Detective Examines the Evidence for a Divinely Created Universe, our planet happens to be positioned at just the right distance from the sun, tilted at just the right angle, and rotating at just the right speed. We have an atmosphere that is also just right: favorable to life and held in place by a gravitational force strong enough to maintain its composition. Our planet also happens to have a terrestrial crust that is just thin enough to allow the right amount of oxygen while thick enough to prevent pervasive earthquakes. This crust contains all the right "life-permitting" elements – including something surprising: phosphate.
In a study published in April, two Cardiff university astronomers acknowledged the presence of phosphate on our planet may be the reason we are the only planet capable of supporting intelligent life. According to these researchers, phosphate is one of the rarest inorganic chemicals in the universe. That's important, because without adequate phosphate, life cannot emerge on a planet, even if the planet in question, like Ross 128b, is similar to earth in other ways. The Cardiff scientists were astonished to find that phosphate is largely absent in the universe and even more surprised to find it is so abundant here on our planet.
Earth appears to be unique. One might even say uniquely designed.
One thing is certain: while the number of planets in the universe may be large, the odds against any of them possessing the vitally fine-tuned characteristics I describe in God's Crime Scene (including the presence of phosphate) is larger. That's why we're probably alone in the universe, despite the latest planet discovery.
J. Warner Wallace is a Cold-Case Detective, Senior Fellow at the Colson Center for Christian Worldview, Adj. Professor of Apologetics at Biola University, and the author of Cold-Case Christianity, God's Crime Scene, and Forensic Faith.
Are More Americans Turning to the Stars Instead of to the One Who Made the Stars? |
/**
* Determines whether a type can be converted to another without losing any
* precision. As a special case, void is considered convertible only to void
* and {@link Object} (either as {@code null} or as a custom value set in
* {@link DynamicLinkerFactory#setAutoConversionStrategy(MethodTypeConversionStrategy)}).
* Somewhat unintuitively, we consider anything to be convertible to void
* even though converting to void causes the ultimate loss of data. On the
* other hand, conversion to void essentially means that the value is of no
* interest and should be discarded, thus there's no expectation of
* preserving any precision.
*
* @param sourceType the source type
* @param targetType the target type
* @return true if lossless conversion is possible
*/
public static boolean isConvertibleWithoutLoss(final Class<?> sourceType, final Class<?> targetType) {
if(targetType.isAssignableFrom(sourceType) || targetType == void.class) {
return true;
}
if(sourceType.isPrimitive()) {
if(sourceType == void.class) {
return targetType == Object.class;
}
if(targetType.isPrimitive()) {
return isProperPrimitiveLosslessSubtype(sourceType, targetType);
}
return isBoxingAndWideningReferenceConversion(sourceType, targetType);
}
return false;
} |
Beckwith-Wiedemann Syndrome: Partnership in the Diagnostic Journey of a Rare Disorder Abbreviation: BWS, Beckwith-Wiedemann syndrome. Conditions like Beckwith-Wiedemann syndrome (BWS) carry a risk of an associated aggressive malignancy, and thus timely diagnosis is critical. Without a clear diagnosis and timely, appropriate medical care, complications of BWS-associated malignant tumors can be life-threatening or require organ transplant that otherwise could be avoided. Diagnosing rare pediatric syndromes remains challenging. Often the diagnosis may be aided by an astute pediatrician or a parent recognizing a subtle feature related to the syndrome. This establishes a valuable partnership between pediatrician, parent, and geneticist that can lead to a diagnosis. Without this partnership, families may embark on a diagnostic odyssey for years while their child remains at risk. We share the perspectives of 2 parents and a geneticist in an effort to raise awareness and promote early diagnosis of 1 of many rare diseases. Classically, BWS is an overgrowth and cancer predisposition disorder for which several clinical diagnostic algorithms have been developed.1 Diagnosis may be difficult when a child has only 1 feature of the syndrome (eg, macroglossia) or 1 or more less commonly known features. A pediatrician who is unfamiliar with the variability in the presentation of BWS may dismiss the diagnosis. Parents have access to a wealth of information on the Internet, and this access may enable them to identify subtle features. Pediatricians must be open to the parents' considerations. The cases described below highlight the importance of access to information for both parents and physicians and the role that parents can take in contributing to the partnership with health care professionals. There are many pediatricians, both generalists and specialists, who create a partnership such as we describe. |
The wheel that lost its chair or how they came to bomb Palestine This paper reflects on what it takes and what it means to be interpellated as a threat. More specifically, it describes the European response to the success of Hamas in the 2006 Palestinian legislative elections. While you might recognise elements of securitisation in this paper, here the performative utterance of threat is a compressed history that relies on material and discursive histories of racism, sexism and colonialism to be successful. Securitisation is successful because it reiterates existing symbolic and material histories of permissible violence against racialised and sexualised subjects, normalised because the securitised other has always been cast as threatening. This paper stresses the asymmetries of power that mark the encounter between the European colonialist and Hamas, and the consequences that being marked as threatening entails. The securitisation of Hamas, delivered through boycott, sanction and siege, meant the collective punishment and death of the Palestinians and a coordinated imperial effort to dismantle local resistance. This paper interweaves excerpts from different performative texts: my interviews with European Union and Hamas representatives as they give accounts of their expectations of each other around the 2006 performance of democracy; theories of performativity and the questions of race and sex that shape contemporary developments of this theory, namely the works of Judith Butler and Sara Ahmed; and finally anti-colonial and anti-racist literatures and theories, notably Toni Morrison and Frantz Fanon, that describe experiences of being marked as less than human and the resistances against this white supremacist fantasy. Here security and securitisation are described as delusional fantasies that emerge from and reiterate white fantasies of black and Arab as threatening and in need of European intervention. This delusion is obvious in the confusing European Union utterances that try to fashion a justification for sanctioning the Hamas government and the Palestinian people after the 2006 Palestinian elections. |
// Compare Logical Pointer Sized Register vs Memory
void TurboAssembler::CmpU64(Register dst, const MemOperand& opnd) {
DCHECK(is_int20(opnd.offset()));
#if V8_TARGET_ARCH_S390X
clg(dst, opnd);
#else
CmpU32(dst, opnd);
#endif
} |
Vegetable is a culinary term.
Its definition has no scientific value and is somewhat arbitrary and subjective.
All parts of herbaceous plants eaten as food by humans, whole or in part, are generally considered vegetables.
Mushrooms, though belonging to the biological kingdom Fungi, are also commonly considered vegetables.
Since "vegetable" is not a botanical term, there is no contradiction in referring to a plant part as a fruit while also being considered a vegetable.
Given this general rule of thumb, vegetables can include leaves (lettuce), stems (asparagus), roots (carrots), flowers (broccoli), bulbs (garlic), seeds (peas and beans) and of course the botanical fruits like cucumbers, squash, pumpkins, and capsicums.
Vegetables contain water soluble vitamins like vitamin B and vitamin C, fat-soluble vitamins including vitamin A and vitamin D, and also contain carbohydrates and minerals. |
// Generated by the protocol buffer compiler. DO NOT EDIT!
// source: whare_map_stats.proto
package firmament;
public final class WhareMapStatsOuterClass {
private WhareMapStatsOuterClass() {}
public static void registerAllExtensions(
com.google.protobuf.ExtensionRegistryLite registry) {
}
public static void registerAllExtensions(
com.google.protobuf.ExtensionRegistry registry) {
registerAllExtensions(
(com.google.protobuf.ExtensionRegistryLite) registry);
}
public interface WhareMapStatsOrBuilder extends
// @@protoc_insertion_point(interface_extends:firmament.WhareMapStats)
com.google.protobuf.MessageOrBuilder {
/**
* <code>optional uint64 num_idle = 1;</code>
*/
long getNumIdle();
/**
* <code>optional uint64 num_devils = 2;</code>
*/
long getNumDevils();
/**
* <code>optional uint64 num_rabbits = 3;</code>
*/
long getNumRabbits();
/**
* <code>optional uint64 num_sheep = 4;</code>
*/
long getNumSheep();
/**
* <code>optional uint64 num_turtles = 5;</code>
*/
long getNumTurtles();
}
/**
* Protobuf type {@code firmament.WhareMapStats}
*/
public static final class WhareMapStats extends
com.google.protobuf.GeneratedMessageV3 implements
// @@protoc_insertion_point(message_implements:firmament.WhareMapStats)
WhareMapStatsOrBuilder {
// Use WhareMapStats.newBuilder() to construct.
private WhareMapStats(com.google.protobuf.GeneratedMessageV3.Builder<?> builder) {
super(builder);
}
private WhareMapStats() {
numIdle_ = 0L;
numDevils_ = 0L;
numRabbits_ = 0L;
numSheep_ = 0L;
numTurtles_ = 0L;
}
@java.lang.Override
public final com.google.protobuf.UnknownFieldSet
getUnknownFields() {
return com.google.protobuf.UnknownFieldSet.getDefaultInstance();
}
private WhareMapStats(
com.google.protobuf.CodedInputStream input,
com.google.protobuf.ExtensionRegistryLite extensionRegistry)
throws com.google.protobuf.InvalidProtocolBufferException {
this();
int mutable_bitField0_ = 0;
try {
boolean done = false;
while (!done) {
int tag = input.readTag();
switch (tag) {
case 0:
done = true;
break;
default: {
if (!input.skipField(tag)) {
done = true;
}
break;
}
case 8: {
numIdle_ = input.readUInt64();
break;
}
case 16: {
numDevils_ = input.readUInt64();
break;
}
case 24: {
numRabbits_ = input.readUInt64();
break;
}
case 32: {
numSheep_ = input.readUInt64();
break;
}
case 40: {
numTurtles_ = input.readUInt64();
break;
}
}
}
} catch (com.google.protobuf.InvalidProtocolBufferException e) {
throw e.setUnfinishedMessage(this);
} catch (java.io.IOException e) {
throw new com.google.protobuf.InvalidProtocolBufferException(
e).setUnfinishedMessage(this);
} finally {
makeExtensionsImmutable();
}
}
public static final com.google.protobuf.Descriptors.Descriptor
getDescriptor() {
return firmament.WhareMapStatsOuterClass.internal_static_firmament_WhareMapStats_descriptor;
}
protected com.google.protobuf.GeneratedMessageV3.FieldAccessorTable
internalGetFieldAccessorTable() {
return firmament.WhareMapStatsOuterClass.internal_static_firmament_WhareMapStats_fieldAccessorTable
.ensureFieldAccessorsInitialized(
firmament.WhareMapStatsOuterClass.WhareMapStats.class, firmament.WhareMapStatsOuterClass.WhareMapStats.Builder.class);
}
public static final int NUM_IDLE_FIELD_NUMBER = 1;
private long numIdle_;
/**
* <code>optional uint64 num_idle = 1;</code>
*/
public long getNumIdle() {
return numIdle_;
}
public static final int NUM_DEVILS_FIELD_NUMBER = 2;
private long numDevils_;
/**
* <code>optional uint64 num_devils = 2;</code>
*/
public long getNumDevils() {
return numDevils_;
}
public static final int NUM_RABBITS_FIELD_NUMBER = 3;
private long numRabbits_;
/**
* <code>optional uint64 num_rabbits = 3;</code>
*/
public long getNumRabbits() {
return numRabbits_;
}
public static final int NUM_SHEEP_FIELD_NUMBER = 4;
private long numSheep_;
/**
* <code>optional uint64 num_sheep = 4;</code>
*/
public long getNumSheep() {
return numSheep_;
}
public static final int NUM_TURTLES_FIELD_NUMBER = 5;
private long numTurtles_;
/**
* <code>optional uint64 num_turtles = 5;</code>
*/
public long getNumTurtles() {
return numTurtles_;
}
private byte memoizedIsInitialized = -1;
public final boolean isInitialized() {
byte isInitialized = memoizedIsInitialized;
if (isInitialized == 1) return true;
if (isInitialized == 0) return false;
memoizedIsInitialized = 1;
return true;
}
public void writeTo(com.google.protobuf.CodedOutputStream output)
throws java.io.IOException {
if (numIdle_ != 0L) {
output.writeUInt64(1, numIdle_);
}
if (numDevils_ != 0L) {
output.writeUInt64(2, numDevils_);
}
if (numRabbits_ != 0L) {
output.writeUInt64(3, numRabbits_);
}
if (numSheep_ != 0L) {
output.writeUInt64(4, numSheep_);
}
if (numTurtles_ != 0L) {
output.writeUInt64(5, numTurtles_);
}
}
public int getSerializedSize() {
int size = memoizedSize;
if (size != -1) return size;
size = 0;
if (numIdle_ != 0L) {
size += com.google.protobuf.CodedOutputStream
.computeUInt64Size(1, numIdle_);
}
if (numDevils_ != 0L) {
size += com.google.protobuf.CodedOutputStream
.computeUInt64Size(2, numDevils_);
}
if (numRabbits_ != 0L) {
size += com.google.protobuf.CodedOutputStream
.computeUInt64Size(3, numRabbits_);
}
if (numSheep_ != 0L) {
size += com.google.protobuf.CodedOutputStream
.computeUInt64Size(4, numSheep_);
}
if (numTurtles_ != 0L) {
size += com.google.protobuf.CodedOutputStream
.computeUInt64Size(5, numTurtles_);
}
memoizedSize = size;
return size;
}
private static final long serialVersionUID = 0L;
@java.lang.Override
public boolean equals(final java.lang.Object obj) {
if (obj == this) {
return true;
}
if (!(obj instanceof firmament.WhareMapStatsOuterClass.WhareMapStats)) {
return super.equals(obj);
}
firmament.WhareMapStatsOuterClass.WhareMapStats other = (firmament.WhareMapStatsOuterClass.WhareMapStats) obj;
boolean result = true;
result = result && (getNumIdle()
== other.getNumIdle());
result = result && (getNumDevils()
== other.getNumDevils());
result = result && (getNumRabbits()
== other.getNumRabbits());
result = result && (getNumSheep()
== other.getNumSheep());
result = result && (getNumTurtles()
== other.getNumTurtles());
return result;
}
@java.lang.Override
public int hashCode() {
if (memoizedHashCode != 0) {
return memoizedHashCode;
}
int hash = 41;
hash = (19 * hash) + getDescriptorForType().hashCode();
hash = (37 * hash) + NUM_IDLE_FIELD_NUMBER;
hash = (53 * hash) + com.google.protobuf.Internal.hashLong(
getNumIdle());
hash = (37 * hash) + NUM_DEVILS_FIELD_NUMBER;
hash = (53 * hash) + com.google.protobuf.Internal.hashLong(
getNumDevils());
hash = (37 * hash) + NUM_RABBITS_FIELD_NUMBER;
hash = (53 * hash) + com.google.protobuf.Internal.hashLong(
getNumRabbits());
hash = (37 * hash) + NUM_SHEEP_FIELD_NUMBER;
hash = (53 * hash) + com.google.protobuf.Internal.hashLong(
getNumSheep());
hash = (37 * hash) + NUM_TURTLES_FIELD_NUMBER;
hash = (53 * hash) + com.google.protobuf.Internal.hashLong(
getNumTurtles());
hash = (29 * hash) + unknownFields.hashCode();
memoizedHashCode = hash;
return hash;
}
public static firmament.WhareMapStatsOuterClass.WhareMapStats parseFrom(
com.google.protobuf.ByteString data)
throws com.google.protobuf.InvalidProtocolBufferException {
return PARSER.parseFrom(data);
}
public static firmament.WhareMapStatsOuterClass.WhareMapStats parseFrom(
com.google.protobuf.ByteString data,
com.google.protobuf.ExtensionRegistryLite extensionRegistry)
throws com.google.protobuf.InvalidProtocolBufferException {
return PARSER.parseFrom(data, extensionRegistry);
}
public static firmament.WhareMapStatsOuterClass.WhareMapStats parseFrom(byte[] data)
throws com.google.protobuf.InvalidProtocolBufferException {
return PARSER.parseFrom(data);
}
public static firmament.WhareMapStatsOuterClass.WhareMapStats parseFrom(
byte[] data,
com.google.protobuf.ExtensionRegistryLite extensionRegistry)
throws com.google.protobuf.InvalidProtocolBufferException {
return PARSER.parseFrom(data, extensionRegistry);
}
public static firmament.WhareMapStatsOuterClass.WhareMapStats parseFrom(java.io.InputStream input)
throws java.io.IOException {
return com.google.protobuf.GeneratedMessageV3
.parseWithIOException(PARSER, input);
}
public static firmament.WhareMapStatsOuterClass.WhareMapStats parseFrom(
java.io.InputStream input,
com.google.protobuf.ExtensionRegistryLite extensionRegistry)
throws java.io.IOException {
return com.google.protobuf.GeneratedMessageV3
.parseWithIOException(PARSER, input, extensionRegistry);
}
public static firmament.WhareMapStatsOuterClass.WhareMapStats parseDelimitedFrom(java.io.InputStream input)
throws java.io.IOException {
return com.google.protobuf.GeneratedMessageV3
.parseDelimitedWithIOException(PARSER, input);
}
public static firmament.WhareMapStatsOuterClass.WhareMapStats parseDelimitedFrom(
java.io.InputStream input,
com.google.protobuf.ExtensionRegistryLite extensionRegistry)
throws java.io.IOException {
return com.google.protobuf.GeneratedMessageV3
.parseDelimitedWithIOException(PARSER, input, extensionRegistry);
}
public static firmament.WhareMapStatsOuterClass.WhareMapStats parseFrom(
com.google.protobuf.CodedInputStream input)
throws java.io.IOException {
return com.google.protobuf.GeneratedMessageV3
.parseWithIOException(PARSER, input);
}
public static firmament.WhareMapStatsOuterClass.WhareMapStats parseFrom(
com.google.protobuf.CodedInputStream input,
com.google.protobuf.ExtensionRegistryLite extensionRegistry)
throws java.io.IOException {
return com.google.protobuf.GeneratedMessageV3
.parseWithIOException(PARSER, input, extensionRegistry);
}
public Builder newBuilderForType() { return newBuilder(); }
public static Builder newBuilder() {
return DEFAULT_INSTANCE.toBuilder();
}
public static Builder newBuilder(firmament.WhareMapStatsOuterClass.WhareMapStats prototype) {
return DEFAULT_INSTANCE.toBuilder().mergeFrom(prototype);
}
public Builder toBuilder() {
return this == DEFAULT_INSTANCE
? new Builder() : new Builder().mergeFrom(this);
}
@java.lang.Override
protected Builder newBuilderForType(
com.google.protobuf.GeneratedMessageV3.BuilderParent parent) {
Builder builder = new Builder(parent);
return builder;
}
/**
* Protobuf type {@code firmament.WhareMapStats}
*/
public static final class Builder extends
com.google.protobuf.GeneratedMessageV3.Builder<Builder> implements
// @@protoc_insertion_point(builder_implements:firmament.WhareMapStats)
firmament.WhareMapStatsOuterClass.WhareMapStatsOrBuilder {
public static final com.google.protobuf.Descriptors.Descriptor
getDescriptor() {
return firmament.WhareMapStatsOuterClass.internal_static_firmament_WhareMapStats_descriptor;
}
protected com.google.protobuf.GeneratedMessageV3.FieldAccessorTable
internalGetFieldAccessorTable() {
return firmament.WhareMapStatsOuterClass.internal_static_firmament_WhareMapStats_fieldAccessorTable
.ensureFieldAccessorsInitialized(
firmament.WhareMapStatsOuterClass.WhareMapStats.class, firmament.WhareMapStatsOuterClass.WhareMapStats.Builder.class);
}
// Construct using firmament.WhareMapStatsOuterClass.WhareMapStats.newBuilder()
private Builder() {
maybeForceBuilderInitialization();
}
private Builder(
com.google.protobuf.GeneratedMessageV3.BuilderParent parent) {
super(parent);
maybeForceBuilderInitialization();
}
private void maybeForceBuilderInitialization() {
if (com.google.protobuf.GeneratedMessageV3
.alwaysUseFieldBuilders) {
}
}
public Builder clear() {
super.clear();
numIdle_ = 0L;
numDevils_ = 0L;
numRabbits_ = 0L;
numSheep_ = 0L;
numTurtles_ = 0L;
return this;
}
public com.google.protobuf.Descriptors.Descriptor
getDescriptorForType() {
return firmament.WhareMapStatsOuterClass.internal_static_firmament_WhareMapStats_descriptor;
}
public firmament.WhareMapStatsOuterClass.WhareMapStats getDefaultInstanceForType() {
return firmament.WhareMapStatsOuterClass.WhareMapStats.getDefaultInstance();
}
public firmament.WhareMapStatsOuterClass.WhareMapStats build() {
firmament.WhareMapStatsOuterClass.WhareMapStats result = buildPartial();
if (!result.isInitialized()) {
throw newUninitializedMessageException(result);
}
return result;
}
public firmament.WhareMapStatsOuterClass.WhareMapStats buildPartial() {
firmament.WhareMapStatsOuterClass.WhareMapStats result = new firmament.WhareMapStatsOuterClass.WhareMapStats(this);
result.numIdle_ = numIdle_;
result.numDevils_ = numDevils_;
result.numRabbits_ = numRabbits_;
result.numSheep_ = numSheep_;
result.numTurtles_ = numTurtles_;
onBuilt();
return result;
}
public Builder clone() {
return (Builder) super.clone();
}
public Builder setField(
com.google.protobuf.Descriptors.FieldDescriptor field,
Object value) {
return (Builder) super.setField(field, value);
}
public Builder clearField(
com.google.protobuf.Descriptors.FieldDescriptor field) {
return (Builder) super.clearField(field);
}
public Builder clearOneof(
com.google.protobuf.Descriptors.OneofDescriptor oneof) {
return (Builder) super.clearOneof(oneof);
}
public Builder setRepeatedField(
com.google.protobuf.Descriptors.FieldDescriptor field,
int index, Object value) {
return (Builder) super.setRepeatedField(field, index, value);
}
public Builder addRepeatedField(
com.google.protobuf.Descriptors.FieldDescriptor field,
Object value) {
return (Builder) super.addRepeatedField(field, value);
}
public Builder mergeFrom(com.google.protobuf.Message other) {
if (other instanceof firmament.WhareMapStatsOuterClass.WhareMapStats) {
return mergeFrom((firmament.WhareMapStatsOuterClass.WhareMapStats)other);
} else {
super.mergeFrom(other);
return this;
}
}
public Builder mergeFrom(firmament.WhareMapStatsOuterClass.WhareMapStats other) {
if (other == firmament.WhareMapStatsOuterClass.WhareMapStats.getDefaultInstance()) return this;
if (other.getNumIdle() != 0L) {
setNumIdle(other.getNumIdle());
}
if (other.getNumDevils() != 0L) {
setNumDevils(other.getNumDevils());
}
if (other.getNumRabbits() != 0L) {
setNumRabbits(other.getNumRabbits());
}
if (other.getNumSheep() != 0L) {
setNumSheep(other.getNumSheep());
}
if (other.getNumTurtles() != 0L) {
setNumTurtles(other.getNumTurtles());
}
onChanged();
return this;
}
public final boolean isInitialized() {
return true;
}
public Builder mergeFrom(
com.google.protobuf.CodedInputStream input,
com.google.protobuf.ExtensionRegistryLite extensionRegistry)
throws java.io.IOException {
firmament.WhareMapStatsOuterClass.WhareMapStats parsedMessage = null;
try {
parsedMessage = PARSER.parsePartialFrom(input, extensionRegistry);
} catch (com.google.protobuf.InvalidProtocolBufferException e) {
parsedMessage = (firmament.WhareMapStatsOuterClass.WhareMapStats) e.getUnfinishedMessage();
throw e.unwrapIOException();
} finally {
if (parsedMessage != null) {
mergeFrom(parsedMessage);
}
}
return this;
}
private long numIdle_ ;
/**
* <code>optional uint64 num_idle = 1;</code>
*/
public long getNumIdle() {
return numIdle_;
}
/**
* <code>optional uint64 num_idle = 1;</code>
*/
public Builder setNumIdle(long value) {
numIdle_ = value;
onChanged();
return this;
}
/**
* <code>optional uint64 num_idle = 1;</code>
*/
public Builder clearNumIdle() {
numIdle_ = 0L;
onChanged();
return this;
}
private long numDevils_ ;
/**
* <code>optional uint64 num_devils = 2;</code>
*/
public long getNumDevils() {
return numDevils_;
}
/**
* <code>optional uint64 num_devils = 2;</code>
*/
public Builder setNumDevils(long value) {
numDevils_ = value;
onChanged();
return this;
}
/**
* <code>optional uint64 num_devils = 2;</code>
*/
public Builder clearNumDevils() {
numDevils_ = 0L;
onChanged();
return this;
}
private long numRabbits_ ;
/**
* <code>optional uint64 num_rabbits = 3;</code>
*/
public long getNumRabbits() {
return numRabbits_;
}
/**
* <code>optional uint64 num_rabbits = 3;</code>
*/
public Builder setNumRabbits(long value) {
numRabbits_ = value;
onChanged();
return this;
}
/**
* <code>optional uint64 num_rabbits = 3;</code>
*/
public Builder clearNumRabbits() {
numRabbits_ = 0L;
onChanged();
return this;
}
private long numSheep_ ;
/**
* <code>optional uint64 num_sheep = 4;</code>
*/
public long getNumSheep() {
return numSheep_;
}
/**
* <code>optional uint64 num_sheep = 4;</code>
*/
public Builder setNumSheep(long value) {
numSheep_ = value;
onChanged();
return this;
}
/**
* <code>optional uint64 num_sheep = 4;</code>
*/
public Builder clearNumSheep() {
numSheep_ = 0L;
onChanged();
return this;
}
private long numTurtles_ ;
/**
* <code>optional uint64 num_turtles = 5;</code>
*/
public long getNumTurtles() {
return numTurtles_;
}
/**
* <code>optional uint64 num_turtles = 5;</code>
*/
public Builder setNumTurtles(long value) {
numTurtles_ = value;
onChanged();
return this;
}
/**
* <code>optional uint64 num_turtles = 5;</code>
*/
public Builder clearNumTurtles() {
numTurtles_ = 0L;
onChanged();
return this;
}
public final Builder setUnknownFields(
final com.google.protobuf.UnknownFieldSet unknownFields) {
return this;
}
public final Builder mergeUnknownFields(
final com.google.protobuf.UnknownFieldSet unknownFields) {
return this;
}
// @@protoc_insertion_point(builder_scope:firmament.WhareMapStats)
}
// @@protoc_insertion_point(class_scope:firmament.WhareMapStats)
private static final firmament.WhareMapStatsOuterClass.WhareMapStats DEFAULT_INSTANCE;
static {
DEFAULT_INSTANCE = new firmament.WhareMapStatsOuterClass.WhareMapStats();
}
public static firmament.WhareMapStatsOuterClass.WhareMapStats getDefaultInstance() {
return DEFAULT_INSTANCE;
}
private static final com.google.protobuf.Parser<WhareMapStats>
PARSER = new com.google.protobuf.AbstractParser<WhareMapStats>() {
public WhareMapStats parsePartialFrom(
com.google.protobuf.CodedInputStream input,
com.google.protobuf.ExtensionRegistryLite extensionRegistry)
throws com.google.protobuf.InvalidProtocolBufferException {
return new WhareMapStats(input, extensionRegistry);
}
};
public static com.google.protobuf.Parser<WhareMapStats> parser() {
return PARSER;
}
@java.lang.Override
public com.google.protobuf.Parser<WhareMapStats> getParserForType() {
return PARSER;
}
public firmament.WhareMapStatsOuterClass.WhareMapStats getDefaultInstanceForType() {
return DEFAULT_INSTANCE;
}
}
private static final com.google.protobuf.Descriptors.Descriptor
internal_static_firmament_WhareMapStats_descriptor;
private static final
com.google.protobuf.GeneratedMessageV3.FieldAccessorTable
internal_static_firmament_WhareMapStats_fieldAccessorTable;
public static com.google.protobuf.Descriptors.FileDescriptor
getDescriptor() {
return descriptor;
}
private static com.google.protobuf.Descriptors.FileDescriptor
descriptor;
static {
java.lang.String[] descriptorData = {
"\n\025whare_map_stats.proto\022\tfirmament\"r\n\rWh" +
"areMapStats\022\020\n\010num_idle\030\001 \001(\004\022\022\n\nnum_dev" +
"ils\030\002 \001(\004\022\023\n\013num_rabbits\030\003 \001(\004\022\021\n\tnum_sh" +
"eep\030\004 \001(\004\022\023\n\013num_turtles\030\005 \001(\004b\006proto3"
};
com.google.protobuf.Descriptors.FileDescriptor.InternalDescriptorAssigner assigner =
new com.google.protobuf.Descriptors.FileDescriptor. InternalDescriptorAssigner() {
public com.google.protobuf.ExtensionRegistry assignDescriptors(
com.google.protobuf.Descriptors.FileDescriptor root) {
descriptor = root;
return null;
}
};
com.google.protobuf.Descriptors.FileDescriptor
.internalBuildGeneratedFileFrom(descriptorData,
new com.google.protobuf.Descriptors.FileDescriptor[] {
}, assigner);
internal_static_firmament_WhareMapStats_descriptor =
getDescriptor().getMessageTypes().get(0);
internal_static_firmament_WhareMapStats_fieldAccessorTable = new
com.google.protobuf.GeneratedMessageV3.FieldAccessorTable(
internal_static_firmament_WhareMapStats_descriptor,
new java.lang.String[] { "NumIdle", "NumDevils", "NumRabbits", "NumSheep", "NumTurtles", });
}
// @@protoc_insertion_point(outer_class_scope)
}
|
DUBAI/RIYADH (Reuters) - The participation of tens of thousands of young Saudis in a social media debate over plans to reform the kingdom’s oil-reliant economy last month marked a shift in how Riyadh’s conservative rulers interact with their subjects.
Saudi Arabia’s dynastic leaders, who rule by fiat and strictly limit public dissent, have historically courted public opinion only via informal councils with tribal, religious and business leaders or citizens seeking to petition them.
But in one of the most active countries on social media in the Arab world, the ruling Al Saud have started trying to shape the online debate with carefully managed media campaigns and senior officials have been sacked after social media criticism.
One recent showcase for this was the launch of 31-year-old Deputy Crown Prince Mohammed bin Salman’s Vision 2030 reform plans, which used Twitter alongside traditional media to build anticipation and introduce hash tags - key discussion phrases.
“A strong and determined country with a connection between the government and the citizen,” one of the slogans read.
The level of participation means even ministers without social media accounts invest time and money monitoring what people say about them online, said Diya Murra, a Riyadh-based account director for social media agency The Online Project.
“People are holding them accountable for things that are being done or not,” he said.
Social media use among the 21 million Saudis and roughly 10 million foreign residents of the kingdom cuts across political and religious lines: keenly followed social media users include both strict Muslim clerics and self-described liberals.
In a country in which debate has traditionally been strictly regulated by state decree and cultural tradition, and in which gender mixing is often illegal, social media has allowed many young Saudis to interact in ways that were impossible before.
Twitter is most popular among 18 to 24-year-olds in Saudi Arabia, followed closely by users in their late 20s to early 40s and its usage is split roughly between men and women, according to iMENA Digital, which serves clients in Saudi Arabia. It said photo-sharing site Instagram has become the leading channel among young Saudis, around three-quarters of them women.
Speaking at a packed discussion about Twitter in an expensive Riyadh hotel last month, Saudi Foreign Minister Adel al-Jubeir said the platform was not always an accurate barometer of public opinion, but that it could help track trends.
“It is direct. There are no barriers,” he told the largely young audience, who were segregated by gender.
However, he and other Gulf Arab politicians speaking at the forum also said they were in favor of controls to prevent anonymous posting and of punishing users who broke taboos by criticizing religion or calling to end monarchical rule.
Rights groups have criticized Saudi Arabia and its neighbors for jailing some who voiced dissent online, including Saudi blogger Raif Badawi, who was sentenced to 1,000 lashes and 10 years in prison for a “cyber crime” of insulting Islam.
He remains in prison 18 months after sentencing, but no more than 50 lashes were carried out. When asked about Badawi in May, Jubeir told a news conference the case was complicated and involved civil lawsuits that did not involve the government.
On Monday, a Riyadh court sentenced a man to 80 lashes for Tweets that carried “insults to the country”, as well as for drinking alcohol, Okaz daily reported.
Diplomats in Riyadh say while the judiciary has given harsh sentences to online dissenters who drew the anger of hard liners, the police routinely ignore on social media far more severe criticism of senior people than was ever allowed before.
The growing influence of social media became apparent in 2012 when the late King Abdullah sacked the religious police chief and replaced him with a relative progressive after a viral video showed members of the body harassing a family in a mall.
In April 2014, as a deadly outbreak of Middle East Respiratory Syndrome (MERS) swept Jeddah, anger over a perceived cover-up surged on social media and Abdullah sacked the health minister.
Since King Salman came to power in January 2015, such sensitivity seems to have only amplified. Another health minister, Ahmed al-Khateeb, widely regarded as a protege of the king, was dismissed after footage of him shouting at a Saudi citizen during a heated argument was captured on a smartphone.
Weeks later, Salman replaced his own head of royal protocol after he was caught on camera slapping a news cameraman covering the arrival of the Moroccan king in Riyadh.
It is a far cry from the days before widespread internet use in Saudi Arabia, when discussion was limited to informal meetings or to newspapers and television channels that rarely held officials to account or criticized government policies.
Still, a culture of public expressions of respect for government endures. More than a third of reactions to Vision 2030 on Twitter were positive, Semiocast said, adding the debate generated “patriotic pride” and expectations of progress.
The debate had been closely coordinated over various media and driven by influential Saudi personalities young people were already connected to, The Online Project’s Murra said.
One was Omar Hussein, a young comedian popular on YouTube and with 1.5 million followers on Twitter. He promises in a video to explain the vision in three minutes. Filmed as a piece to camera it has been viewed more than a quarter of a million times.
He is careful to explain the plan as a vision, with a more concrete blueprint coming later, something observers say is important for managing expectations about the ambitious goals.
“The vision as it stands has very few concrete measurable outcomes to hold anyone accountable for,” analyst Alyahya said. However, as programs were developed to implement it, there would be performance indicators and ministers held responsible for meeting targets. |
/* Copyright 2019 Axel Huebl, Benjamin Worpitz, Matthias Werner, René Widera
*
* This file is part of alpaka.
*
* This Source Code Form is subject to the terms of the Mozilla Public
* License, v. 2.0. If a copy of the MPL was not distributed with this
* file, You can obtain one at http://mozilla.org/MPL/2.0/.
*/
#include <alpaka/rand/Traits.hpp>
#include <alpaka/test/KernelExecutionFixture.hpp>
#include <alpaka/test/acc/TestAccs.hpp>
#include <catch2/catch.hpp>
//#############################################################################
class RandTestKernel
{
ALPAKA_NO_HOST_ACC_WARNING
template<typename TAcc, typename T_Generator>
ALPAKA_FN_ACC void genNumbers(TAcc const& acc, bool* success, T_Generator& gen) const
{
{
auto dist(alpaka::rand::distribution::createNormalReal<float>(acc));
auto const r = dist(gen);
#if !BOOST_ARCH_PTX
ALPAKA_CHECK(*success, std::isfinite(r));
#else
alpaka::ignore_unused(r);
#endif
}
{
auto dist(alpaka::rand::distribution::createNormalReal<double>(acc));
auto const r = dist(gen);
#if !BOOST_ARCH_PTX
ALPAKA_CHECK(*success, std::isfinite(r));
#else
alpaka::ignore_unused(r);
#endif
}
{
auto dist(alpaka::rand::distribution::createUniformReal<float>(acc));
auto const r = dist(gen);
ALPAKA_CHECK(*success, 0.0f <= r);
ALPAKA_CHECK(*success, 1.0f > r);
}
{
auto dist(alpaka::rand::distribution::createUniformReal<double>(acc));
auto const r = dist(gen);
ALPAKA_CHECK(*success, 0.0 <= r);
ALPAKA_CHECK(*success, 1.0 > r);
}
{
auto dist(alpaka::rand::distribution::createUniformUint<std::uint32_t>(acc));
auto const r = dist(gen);
alpaka::ignore_unused(r);
}
}
public:
//-----------------------------------------------------------------------------
ALPAKA_NO_HOST_ACC_WARNING
template<typename TAcc>
ALPAKA_FN_ACC auto operator()(TAcc const& acc, bool* success) const -> void
{
// default generator for accelerator
auto genDefault = alpaka::rand::generator::createDefault(acc, 12345u, 6789u);
genNumbers(acc, success, genDefault);
#if !defined(ALPAKA_ACC_GPU_CUDA_ENABLED) && !defined(ALPAKA_ACC_GPU_HIP_ENABLED)
# ifndef ALPAKA_ACC_ANY_BT_OMP5_ENABLED
// TODO: These ifdefs are wrong: They will reduce the test to the
// smallest common denominator from all enabled backends
// std::random_device
auto genRandomDevice = alpaka::rand::generator::createDefault(alpaka::rand::RandomDevice{}, 12345u, 6789u);
genNumbers(acc, success, genRandomDevice);
// MersenneTwister
auto genMersenneTwister
= alpaka::rand::generator::createDefault(alpaka::rand::MersenneTwister{}, 12345u, 6789u);
genNumbers(acc, success, genMersenneTwister);
# endif
// TinyMersenneTwister
auto genTinyMersenneTwister
= alpaka::rand::generator::createDefault(alpaka::rand::TinyMersenneTwister{}, 12345u, 6789u);
genNumbers(acc, success, genTinyMersenneTwister);
#endif
}
};
//-----------------------------------------------------------------------------
TEMPLATE_LIST_TEST_CASE("defaultRandomGeneratorIsWorking", "[rand]", alpaka::test::TestAccs)
{
using Acc = TestType;
using Dim = alpaka::Dim<Acc>;
using Idx = alpaka::Idx<Acc>;
alpaka::test::KernelExecutionFixture<Acc> fixture(alpaka::Vec<Dim, Idx>::ones());
RandTestKernel kernel;
REQUIRE(fixture(kernel));
}
|
Roman Kabachiy is a Ukrainian journalist based in Kiev. He is an expert at the Ukrainian Institute of Mass Information.
Young people from the civic resistance movement are trying to get elected to parliament. If they succeed, it could be the beginning of the end for the Yanukovych regime, thinks Roman Kabachiy.
Ukrainian nationalism: are Russian strategies at work?
President Yanukovych has awakened the spirit of nationalism in Ukraine, writes Roman Kabachiy. In all probability, this was a deliberate operation, executed to allow the “guarantor of the nation” to triumphantly rid his countrymen of radical nationalism.
The recent arrest of Ukrainian museum director Ruslan Zabily provoked an outcry. Did he actually leak state secrets or is the Yanukovych regime just trying to undo all Orange achievements, including the revival of Ukrainian historical memory?
During Viktor Yushchenko’s five years in power, Ukraine did not start facing up to its totalitarian history. Since President Yanukovych came to power, that task has become almost impossible. |
Known and hidden sources of caffeine in drug, food, and natural products. OBJECTIVE To review the pharmacology, pharmacokinetics, adverse effects, and drug interactions associated with caffeine and to raise awareness of the caffeine content of many prescription, nonprescription, and herbal drugs, beverages, and foods. DATA SOURCES Articles in English retrieved through a MEDLINE search (1966-August 2000) using the terms caffeine, human, and systemic. Additional product information was obtained from manufacturers' Web sites, through direct communications with manufacturers or distributors, and via Internet searches using the AltaVista search engine and the term caffeine. Only the first 10 Web sites selling caffeine products identified were visited. STUDY SELECTION All articles and correspondence items from data sources were evaluated, and all information deemed relevant was included. Priority was given to information provided by manufacturers. CONCLUSION With the increase in consumers' use of over-the-counter products for health maintenance and self-care, it is imperative for pharmacists to be knowledgeable about the caffeine content of all drug and herbal products. It is important to be familiar with patients' self-treatment habits in order to identify potential caffeine-drug interactions or caffeine-laboratory interactions resulting in false laboratory values. These interactions increase health care costs by creating adverse effects and causing misdiagnoses. |
1. Field of the Invention
The present invention relates to a steam reflow apparatus and a steam reflow method for soldering an electronic component mounted on a substrate by high-temperature superheated steam.
2. Description of the Related Art
A heated body such as a substrate on which an electronic component is mounted is fed to a reflow apparatus, and soldering is performed on the heated body. Such heated body may be collectively called a “substrate”. A heating furnace of the conventional reflow apparatus includes a preheating zone, a uniform heating zone, a melting zone, and a cooling zone. While the substrate is conveyed by a conveyor, the substrate is heated from ordinary temperature to about 150° C. or higher in the preheating zone, and is fed to the uniform heating zone. After the substrate is heated for a while at about 150° C. or higher in the uniform heating zone, the substrate is fed to the melting zone. Then, the substrate is rapidly heated to about 230° C. higher than or equal to a solder melting point (about 219° C. though the melting point varies depending on a kind of solder) in the melting zone, and the solder is melted. Then, after the substrate is fed to the cooling zone and is cooled by a fan or the like and the melted solder is solidified, the substrate is carried out to the outside of the heating furnace.
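As a rough illustration of the zone sequence just described, the sketch below encodes the temperature profile as a simple configuration; the dwell times and the uniform-heating and cooling set-points are assumptions for illustration, while the roughly 150° C. preheat, 230° C. peak and 219° C. melting point follow the figures quoted above.

# Illustrative sketch of the reflow zone sequence described in the text.
# Dwell times and the cooling/uniform-heating set-points are assumptions;
# only the ~150 C preheat, ~230 C peak and ~219 C melting point follow the text.

SOLDER_MELTING_POINT_C = 219  # approximate; varies with the solder alloy

REFLOW_ZONES = [
    {"zone": "preheating",      "target_c": 150, "dwell_s": 90},
    {"zone": "uniform heating", "target_c": 160, "dwell_s": 60},
    {"zone": "melting",         "target_c": 230, "dwell_s": 40},
    {"zone": "cooling",         "target_c": 50,  "dwell_s": 60},
]

def zones_above_melting(zones, melting_point_c=SOLDER_MELTING_POINT_C):
    """Return the zone names whose target temperature reaches the solder melting point."""
    return [z["zone"] for z in zones if z["target_c"] >= melting_point_c]

print(zones_above_melting(REFLOW_ZONES))  # ['melting'] -- only the melting zone melts the solder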
When the substrate is heated in the heating furnace as described above in air reflow, a surface of solder is oxidized by oxygen in air activated at a high temperature, and wettability of the solder is decreased. Hence, nitrogen reflow for filling the inside of the heating furnace with nitrogen gas which is inert gas and performing soldering is known. The nitrogen reflow can prevent oxidation of the surface of the solder even when the substrate is heated at the high temperature. However, the nitrogen reflow has a problem of increasing a cost since a large amount of nitrogen gas is consumed.
Hence, instead of the nitrogen reflow, steam reflow for performing soldering by superheated steam (hereinafter simply called “steam”) with a high temperature of 100° C. or higher is proposed (JP-A-2008-270499 and JP-A-2011-82282 as Patent References 1 and 2).
Patent Reference 1: JP-A-2008-270499
Patent Reference 2: JP-A-2011-82282 |
A Qualitative Study of Time Overrun of Completed Road Projects Awarded by the Niger Delta Development Commission in the Niger Delta Region of Nigeria Time overrun of completed road projects awarded by the Niger Delta Development Commission (NDDC) in the Niger Delta Region of Nigeria from its inception in 2000 up to 2015 was studied. Out of 3315 roads awarded, only 1081 roads, representing 31.65 percent, were completed within the review period. The qualitative study was carried out on 162 randomly selected completed road projects for analysis, and a conceptual time-series model was developed. In developing the regression model, both dependent and independent variables were subjected to normality tests assessed by the skewness coefficient, kurtosis value, Jarque-Bera test, residual probability plot, heteroscedasticity test and the variance inflation factor. Also, with knowledge of the total road projects awarded by the Commission, it is now possible to predict the proportion of roads experiencing schedule overruns. |
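For readers unfamiliar with the diagnostics named in this abstract, the sketch below runs them on synthetic data with Python's statsmodels and scipy; the predictors, coefficients and the choice of Breusch-Pagan as the heteroscedasticity test are assumptions for illustration and do not reproduce the NDDC data set or the authors' model.

# Illustrative run of the diagnostics named in the abstract on synthetic data.
# Predictors, coefficients and the Breusch-Pagan choice are assumptions; this
# is not the NDDC data set or the authors' model.
import numpy as np
import statsmodels.api as sm
from scipy import stats
from statsmodels.stats.diagnostic import het_breuschpagan
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
n = 162                                            # sampled road projects
contract_sum = rng.lognormal(3.0, 0.5, n)          # hypothetical predictor
road_length = rng.lognormal(1.0, 0.4, n)           # hypothetical predictor
overrun_months = 2 + 0.8 * road_length + 0.1 * contract_sum + rng.normal(0, 1, n)

X = sm.add_constant(np.column_stack([contract_sum, road_length]))
model = sm.OLS(overrun_months, X).fit()
resid = model.resid

print("skewness:", stats.skew(resid))
print("kurtosis:", stats.kurtosis(resid, fisher=False))
print("Jarque-Bera (stat, p):", stats.jarque_bera(resid))
print("Breusch-Pagan p-value:", het_breuschpagan(resid, X)[1])
print("VIFs:", [variance_inflation_factor(X, i) for i in range(1, X.shape[1])])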
Murine Pregnancy-Specific Glycoprotein 23 Induces the Proangiogenic Factors Transforming-Growth Factor Beta 1 and Vascular Endothelial Growth Factor A in Cell Types Involved in Vascular Remodeling in Pregnancy1 Abstract Haemochorial placentation is a unique physiological process in which the fetal trophoblast cells remodel the maternal decidual spiral arteries to establish the fetoplacental blood supply. Pregnancy-specific glycoproteins (PSGs) are members of the carcinoembryonic antigen family. PSGs are produced by the placenta of rodents and primates and are secreted into the bloodstream. PSG23 is one of 17 members of the murine PSG family (designated PSG16 to PSG32). Previous studies determined that PSGs have immunoregulatory functions due to their ability to modulate macrophage cytokine secretion. Here we show that recombinant PSG23 induces transforming growth factor (TGF) beta1, TGFB1, and vascular endothelial growth factor A (VEGFA) in primary murine macrophages and the macrophage cell line RAW 264.7 cells. In addition, we identified new cell types that responded to PSG23 treatment. Dendritic cells, endothelial cells, and trophoblasts, which are involved in maternal vasculature remodeling during pregnancy, secreted TGFB1 and VEGFA in response to PSG23. PSG23 showed cross-reactivity with human cells, including human monocytes and the trophoblast cell line, HTR-8/SVneo cells. We analyzed the binding of PSG23 to the tetraspanin CD9, the receptor for PSG17, and found that CD9 is not essential for PSG23 binding and activity in macrophages. Overall these studies show that PSGs can modulate the secretion of important proangiogenic factors, TGFB1 and VEGFA, by different cell types involved in the development of the placenta. |
def reset(self, *args, **kwargs) -> Any:
    # Select reset parameters either at random or by cycling through the list in order.
    if self._mode == 'random':
        self._param = np.random.choice(self._reset_param_list)
    elif self._mode == 'order':
        self._param = self._reset_param_list[self._reset_param_index]
        # Advance to the next parameter set, wrapping around at the end of the list.
        self._reset_param_index += 1
        if self._reset_param_index >= len(self._reset_param_list):
            self._reset_param_index = 0
    return super().reset(**self._param) |
Chronic relapsing inflammatory optic neuropathy
Signs and Symptoms
Pain, visual loss, relapse, and steroid response are typical of CRION. Ocular pain is typical, although there are some cases with no reported pain. Bilateral severe visual loss (simultaneous or sequential) usually occurs, but there are reports of unilateral visual loss. Patients can have an associated relative afferent pupillary defect. CRION is associated with at least one relapse, and up to 18 relapses have been reported in an individual. Interval between episodes can range from days to over a decade. Symptoms will improve with corticosteroids, and recurrence characteristically occurs after reducing or stopping steroids.
Pathogenesis
As of 2013, the etiology remained unknown. Given that CRION is responsive to immunosuppressive treatment, it may be immune-mediated. CRION has been classified as an autoimmune process but this description is not established with certainty and there is no known associated autoimmune antibody.
Since 2015, some research has pointed to CRION belonging to the anti-MOG-associated encephalomyelitis spectrum.
As of 2019, the correlation between CRION and anti-MOG-associated encephalomyelitis is so strong that CRION is now considered the most common phenotype related to myelin oligodendrocyte glycoprotein antibodies.
Prognosis
Recurrence is essentially inevitable in patients without treatment and patients ultimately will require lifelong immunosuppression to prevent relapse.
Epidemiology
CRION was first described in 2003. The disease is rare, with only 122 cases published from 2003 to 2013. There is female predominance with 59 females (48%), 25 males (20%), and no gender designation for the rest of the 122 reported cases (32%). Age ranges from 14 to 69 years of age, and the mean age is 35.6. The disease is noted to occur worldwide and across many ethnicities, with reported cases in all continents except Africa and Australia. |
package com.netcracker_study_autumn_2020.domain.interactor.usecases.workspace;
import com.netcracker_study_autumn_2020.domain.dto.WorkspaceDto;
public interface CreateWorkspaceUseCase {
interface Callback{
void onWorkspaceCreated();
void onError(Exception e);
}
void execute(WorkspaceDto workspaceDto, Callback callback);
}
|
Doing digital transformation: theorising the practitioner voice ABSTRACT The objective of this theory-building research is to explore the defining characteristics of doing Digital Transformation (DT) and present a holistic account of the practitioner practices that characterise doing DT. For the purposes of this research doing DT is defined as leveraging digital technologies to significantly alter an organisational design in order to enhance customer engagement. To fulfil this objective, we select 16 key informants (digital transformation leaders) based on their organisational perspective (Business or IT) and role (Strategic or Operational), which facilitates hearing 4 types of practitioner voices. Following an inductive open coding approach, 348 excerpts were coded, leading to the emergence of 95 concepts, which were further grouped into 14 categories. In this paper, we focus our write-up on the six most frequently occurring categories that are shaped by all four key informant groups (practitioner voices). This paper is unique in providing a holistic categorisation of the defining characteristics of doing DT, while also providing 24 Practitioner Priorities. These Practitioner Priorities sharpens the focus of academia and practice, highlighting the role of people, role of data and role of technology when doing DT. |
Background-Source Separation in astronomical images with Bayesian probability theory (I): the method A probabilistic technique for the joint estimation of background and sources with the aim of detecting faint and extended celestial objects is described. Bayesian probability theory is applied to gain insight into the coexistence of background and sources through a probabilistic two-component mixture model, which provides consistent uncertainties of background and sources. A multi-resolution analysis is used for revealing faint and extended objects in the frame of the Bayesian mixture model. All the revealed sources are parameterized automatically providing source position, net counts, morphological parameters and their errors. INTRODUCTION Background estimation is an omnipresent problem for source detection methods in astrophysics, especially when the source signal is weak and difficult to discriminate against the background. An inaccurate estimation of the background may produce large errors in object photometry and the loss of faint and/or extended objects. Deep observations are commonly obtained by combining several individual exposures to generate a final astronomical image. Often large exposure variations characterize these data and the background may vary significantly within the field. Hence, the background modelling has to incorporate the knowledge provided by the observatory's exposure time without compromising the statistical properties. In addition, instrumental structures, such as detector ribs or CCD gaps, produce lack of data. The missing data must be handled consistently in the background estimation to prevent undesired artificial effects. Celestial objects exhibit a large variety of morphologies and apparent sizes. Thus, the search for sources should not be driven by any predefined morphology, allowing proper estimate of their structural parameters. The instrumental point ⋆ E-mail: Fabrizia.Guglielmetti@ipp.mpg.de spread function (PSF) is often not known exactly for the whole field of view. So a source detection algorithm should be able to operate effectively without the knowledge of the instrument characteristics 1. SExtractor is one of the most widely used source detection procedure in astronomy. It has a simple interface and very fast execution. It produces reliable aperture photometry catalogues (). It is applied in X-ray regime on filtered images (). The sliding window technique is a fast and robust source detection method. This technique may fail while detecting extended sources, sources near the detection limit and nearby sources (). This source detection method has been refined with more elaborated techniques, such as matched filters (e.g. ;Stewart 2006) and recently the Cash method (Stewart 2009). The Cash method is a maximum likelihood technique. For source detection, the method employs a Cash likelihoodratio statistic, that is an extended 2 statistic for Poisson data (Cash 1979). Both the matched filters and Cash methods are at least by a factor of 1.2 more sensitive than the sliding-window technique (Stewart 2009). Though, both methods are designed for the detection of point sources. The candidate sources are characterized in a further step using maximum likelihood PSF fitting. The maximum likelihood PSF fitting procedure performs better than other conventional techniques for flux measurements of point-like sources although accurate photometry is achieved when the PSF model used is close to the image PSF (). 
In Pierre et al., the maximum likelihood profile fit on photon images is extended taking into account a spherically symmetric -model (King profile, see refs. King 1962, Cavaliere & Fusco-Femiano 1978 convolved with the instrumental PSF for improving the photometry of extended objects. Wavelet transform (WT) techniques improve the detection of faint and extended sources with respect to other conventional methods (see Starck & Pierre 1998 for more details). In fact, WT techniques are able to discriminate structures as a function of scale. Within larger scales, faint and extended sources are revealed. WTs are therefore valuable tools for the detection of both point-like and extended sources (). Nonetheless, these techniques favor the detection of circularly symmetric sources (). In addition, artefacts may appear around the detected structures in the reconstructed image, and the flux is not preserved (Starck & Pierre 1998). In order to overcome these problems, some refinements have been applied to the WT techniques. Starck & Pierre, for instance, employ a multi-resolution support filtering to preserve the flux and the adjoint WT operator to suppress artefacts which may appear around the objects. An advance on this method is presented in Starck et al.. A spacevariant PSF is incorporated in their WT technique. Object by object reconstruction is performed. For point sources the flux measurements are close to that obtained by PSF fitting. The detection of faint extended and point-like sources is a non-trivial task for source detection methods. This task becomes more complicated when the detection rate and photometric reconstruction for the sources are taken into account (). A self-consistent statistical approach for background estimation and source detection is given by Bayesian probability theory (BPT), which provides a general and consistent frame for logical inference. The achievement of Bayesian techniques on signal detections in astrophysics has already been shown, for example in the works of Gregory & Loredo, Loredo & Wasserman and Scargle, for instance. In modern observational astrophysics, BPT techniques for image analysis have been extensively applied: e.g. Hobson & McLachlan, Carvalho, Rocha & Hobson, Savage & Oliver, Strong. For the detection of discrete objects embedded in Gaussian noise (microwave regime), Hobson & McLachlan utilizes a model-fitting methodology, where a parameterized form for the objects of interest is assumed. Markov-chain Monte Carlo (MCMC) techniques are used to explore the parameter space. An advance on this work is provided by Carvalho et al.. For speeding up the method of Hobson & McLachlan, Carvalho et al. proposes to use Gaussian approximation to the posterior probability density function (pdf) peak when performing a Bayesian model selection for source detection. The work of Savage & Oliver is developed within Gaussian statistics (infrared data). At each pixel position in an image, their method estimates the probability of the data being described by point source or empty sky under the assumptions that the background is uniform and the sources have circular shapes. The Bayesian information criterion is used for the selection of the two models. Source parameters are estimated in a second step employing Bayesian model selection. Strong developed a technique for image analysis within Poisson statistics. The technique is instrument specific and is applied to -ray data. 
The first objective of this technique is to reconstruct the intensity in each image pixel given a set of data. The Maximum Entropy method is used for selecting from the data an image between all the available ones from a multi-dimensional space. The dimension of the space is proportional to the number of image pixels. We propose a new source detection method based on BPT combined with the mixture-model technique. The algorithm allows one to estimate the background and its uncertainties and to detect celestial sources jointly. The new approach deals directly with the statistical nature of the data. Each pixel in an astronomical image is probabilistically assessed to contain background only or with additional source signal. The results are given by probability distributions quantifying our state of knowledge. The developed Background-Source separation (BSS) method encounters: background estimation, source detection and characterization. The background estimation incorporates the knowledge of the exposure time map. The estimation of the background and its uncertainties is performed on the full astronomical image employing a two-dimensional spline. The spline models the background rate. The spline amplitudes and the position of the spline supporting points provides flexibility in the background model. This procedure can describe both smoothly and highly varying backgrounds. Hence, no cut out of regions or employment of meshes are needed for the background estimation. The BSS technique does not need a threshold level for separating the sources from the background as conventional methods do. The threshold level is replaced by a measure of probability. In conventional methods, the threshold level is described in terms of the noise standard deviation, then translated into a probability. The classification assigned to each pixel of an astronomical image with the BSS method allows one to detect sources without employing any predefined morphologies. Only, for parametric characterization of the sources predefined source shapes are applied. The estimation of source parameters and their uncertainties includes the estimated background into a forward model. Only the statis-tics of the original data are taken into account. The BSS method provides simultaneously the advantages of a multiresolution analysis and a multi-color detection. Objects in astronomical images are organized in hierarchical structures (Starck & Murtagh 2006). In order to quantify the multiscale structure in the data, a multi-resolution analysis is required (see ;Kolaczyk & Dixon 2000). In the BSS approach the multi-resolution analysis is incorporated in combination with the source detection and background estimation technique with the aim to analyse statistically source structures at multiple scales. When multiband images are available, the information contained in each image can be statistically combined in order to extend the detection limit of the data (see Szalay, Connolly & Szokoly 1999;Murtagh, Raftery & Starck 2005). The capabilities of this method are best shown with the detections of faint sources independently of their shape and with the detections of sources embedded in a highly varying background. The technique for the joint estimation of background and sources in digital images is applicable to digital images collected by a wide variety of sensors at any wavelength. The X-ray environment is particularly suitable to our Bayesian approach for different reasons. 
First, Xray astronomy is characterized by low photon counts even for relatively large exposures and the observational data are unique, i.e. the experiment is rarely reproduced. Second, the X-ray astronomical images provided by new generation instruments are usually a combination of several individual CCD imaging pointings. Last, there are few source detection algorithms developed so far for an automated search of faint and extended sources. In this paper, our aim is to describe the developed BSS technique. The outline of this paper is as follows. In Section 2, we briefly review the basic aspects of BPT. In Section 3, we introduce the background estimation and source detection technique. In Section 4, the BSS algorithm is extended in order to obtain an automated algorithm for source characterization. In Section 5, the issue of false positives in source detection is addressed. In Section 6, the BSS method is applied to simulated data. We show results for two different choices of prior pdf for the source signal. In Section 7, our results on the three simulated datasets are compared with the outcome from wavdetect algorithm (). In Section 8, we test the BSS method on astronomical images coming from ROSAT all-sky survey (RASS) data. Our conclusions on the BSS technique are provided in Section 9. In Section 10, a list of acronyms used throughout the paper is reported. The flexibility of our Bayesian technique allows the investigation of data coming from past and modern instruments without changes in our algorithm. An extensive application of the BSS technique to real data is addressed in a forthcoming paper (Guglielmetti et al., in preparation). In the latter, the BSS algorithm is used for the analysis of a specific scientific problem, i.e. the search of galaxy clusters in the CDF-S data. BAYESIAN PROBABILITY THEORY In order to analyse the heterogeneous data present in astronomical images, we employ BPT (see Jeffreys 1961;Bernardo & Smith 1994;;Sivia 1996;Jaynes 2003;Dose 2003;O'Hagan & Forster 2004;Gregory 2005). BPT gives a calculus how to resolve an inference problem based on uncertain information. The outcome of the BPT analysis is the pdf of the quantity of interest, which encodes the knowledge to be drawn from the information available (a posteriori). The posterior pdf comprises the complete information which can be inferred from the data and the statistical model, and supplementary information such as first-principle physics knowledge. This statistical approach is based on comparisons among alternative hypotheses (or models) using the single observed data set. Each information entering the models that describe the data set is combined systematically. The combination includes all data sets coming from different diagnostics, physical model parameters and measurement nuisance parameters. Each data set and parameters entering the models are subject to uncertainties which have to be estimated and encoded in probability distributions. Within BPT the so-called statistical and systematic uncertainties are not distinguished. Both kinds of uncertainties are treated as a lack of knowledge. Bayes' theorem states: where the number of alternative hypotheses to be compared is larger than or equal to 2. Bayes' theorem is a consequence of the sum and product rules of probability theory. The vertical bars in denote conditionality property, based on either empirical or theoretical information. 
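The displayed equations in this passage were lost in extraction. Reconstructed from the surrounding definitions, Bayes' theorem (with sigma standing for the experimental uncertainties discussed below) and the marginalization rule invoked in the next paragraph read:

P(H_i \mid D, \sigma, I) \;=\; \frac{P(D \mid H_i, \sigma, I)\, P(H_i \mid I)}{P(D \mid I)}, \qquad i = 1, \dots, N_H, \quad N_H \ge 2 ,

P(x) \;=\; \int P(x, y)\, \mathrm{d}y \;=\; \int P(x \mid y)\, P(y)\, \mathrm{d}y .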
Equation relates the posterior pdf P (Hi|D,, I) to known quantities, namely, the likelihood pdf P (D|Hi,, I) and the prior pdf P (Hi|I). P (D|I) is the evidence of the data which constitutes the normalization and will not affect the conclusions within the context of a given model. The posterior pdf is the quantity to be inferred. It depends on the full data set D, on the errors entering the experiment and on all relevant information concerning the nature of the physical situation and knowledge of the experiment (I). The likelihood pdf represents the probability of finding the data D for given quantities of interest, uncertainties and additional information I. It reveals the error statistics of the experiment. The prior pdf is independent from the experimental errors. It represents physical constraints or additional information from other diagnostics. The terms 'posterior' and 'prior' have a logical rather than temporal meaning. The posterior and prior pdfs can be regarded as the knowledge 'with' and 'without' the new data taken into account, respectively. In order to arrive at the pdf of any quantity x in the model, marginalization of the multi-dimensional pdf can be regarded as a projection of the complete pdf on to that quantity. Marginalization is performed by integration over the quantity y one wants to get rid of: P (x, y)dy = P (x|y)P (y)dy. Marginalization of a quantity y thus takes into account the uncertainty of y which is quantified by the pdf P(y). The uncertainty of y propagates into the pdf P(x). Marginalization provides a way to eliminate variables which are necessary to formulate the likelihood but otherwise uninteresting. Another important property of BPT is the capability of modelling the data by mixture distributions in the parameter space (Everitt & Hand 1981;Neal 1992). Mixture distributions are an appropriate tool for modelling processes whose output is thought to be generated by several different underlying mechanisms, or to come from several different populations. Our aim is to identify and characterize these underlying 'latent classes'. Therefore, we follow the standard Bayesian approach to this problem, which is to define a prior distribution over the parameter space of the mixture model and combine this with the observed data to give a posterior distribution over the parameter space. Essential for model comparison or object classification is the marginal likelihood (evidence, prior predicted value). Marginalization (integration) of the likelihood over parameter space provides a measure for the credibility of a model for given data. Ratios of marginal likelihoods (Bayes factors) are frequently used for comparing two models (Kass & Raftery 1995). Details of mixture modelling in the framework of BPT can be found in von der Linden et al. (1997Linden et al. (, 1999 and Fischer et al. (2000Fischer et al. (, 2001Fischer et al. (, 2002. In particular, Fischer & Dose have demonstrated the capability of the Bayesian mixture model technique even with an unknown number of components for background separation from a measured spectrum. The present approach follows these previous works. THE JOINT ESTIMATION OF BACKGROUND AND SOURCES WITH BPT The aim of the BSS method is the joint estimation of background and sources in two-dimensional image data. We can identify two basic steps of our algorithm: (A) background estimation and source detection, (B) calculation of source probability maps in a multi-resolution analysis. 
The input information of the developed algorithm is the experimental data, i.e. the detected photon counts, and the observatory's exposure time. The background rate is assumed to be smooth, e.g. spatially slowly varying compared to source dimensions. To allow for smoothness the background rate is modelled with a two-dimensional thin-plate spline (TPS), (Section 3.2). The number and the positions of the pivots, i.e. the spline's supporting points, decide what data structures are assigned to be background. All structures which can not be described by the background model will be assigned to be sources. The number of pivots required to model the background depends on the characteristics of the background itself. Though the minimum number of pivots is four (since the minimum expansion of the selected spline has four terms), their number increases with increasing background variation. The coexistence of background and sources is described with a probabilistic two-component mixture model (Section 3.1) where one component describes background contribution only and the other component describes background plus source contributions. Each pixel is characterized by the probability of belonging to one of the two mixture components. For the background estimation the photons contained in all pixels are considered including those containing additional source contributions. No data censoring by cutting out source areas is employed. For background estimation the source intensity is con-sidered to be a nuisance parameter. According to the rules of BPT, the source signal distribution is described probabilistically in terms of a prior pdf. The prior pdf of the source signal is an approximation to the true distribution of the source signal on the field. We studied two prior pdfs of the source signal: the exponential and the inverse-Gamma function. The background and its uncertainties (Section 3.3) are estimated from its posterior pdf. Therefore, for each pixel of an astronomical image an estimate of its background and its uncertainties are provided. Moreover, the Bayesian approach introduces hyperparameters, that are fundamental for the estimation of the posterior pdfs for the background and source intensities. Specifically, in Section 3.4 we show that the hyperparameters are estimated exclusively from the data. The source probability is evaluated with the mixture model technique for pixels and pixel cells 2 in order to enhance the detection of faint and extended sources in a multiresolution analysis. Pixels and pixel cells are treated identically within the Bayesian formalism. For the correlation of neighbouring pixels, the following methods have been investigated: box filter with a square, box filter with a circle, Gaussian weighting filter (see Section 3.1 for more details). The BSS technique is morphology free, i.e. there are no restrictions on the object size and shape for being detected. An analysed digital astronomical image is converted into the following: I) the background rate image, or 'TPS map', is an array specifying the estimated background rate at each image pixel for a given observation. The effects of exposure variations are consistently included in the spline model; II) the background image, or 'background map', is an array specifying the estimated background amplitude at each image pixel for a given observation. 
It is provided by the TPS map multiplied with the telescope's exposure; III) the source probability images, or 'source probability maps' (SPMs), display the probability that source counts are present in pixels and pixel cells for a given observation in a multi-resolution analysis. Movies are produced with the SPMs obtained at different resolutions. The moving images allow one to discern interactively the presence of faint extended sources in digital astronomical images. The size of faint extended sources is correlated with the scale of the resolution, used for their detection. SPMs coming from other energy bands can be combined statistically to produce conclusive SPMs at different resolutions with the advantage to provide likelihoods for the detected sources from the combined energy bands (Section 3.5.1). Two-component mixture model The general idea of the described Bayesian model is that a digital astronomical image consists of a smooth background with additive source signal, which can be characterized by any shape, size and brightness. The background signal is the diffuse cosmic emission added to the instrumental noise. The source signal is the response of the imaging system to a celestial object. A surface b(x) describes the background under the source signal, where x = (x, y) corresponds to the position on the grid in the image. Therefore, given the observed data set D = {dij} ∈ N0, where dij is photon counts in pixel (or pixel cell) {ij}, two complementary hypotheses arise: Hypothesis Bij specifies that the data dij consists only of background counts bij spoiled with noise ij, i.e. the (statistical) uncertainty associated with the measurement process. Hypothesis Bij specifies the case where additional source intensity sij contributes to the background. Additional assumptions are that no negative values for source and background amplitudes are allowed and that the background is smoother than the source signal. The smoothness of the background is achieved by modelling the background count rate with a bivariate TPS where the supporting points are chosen sparsely to ensure that sources can not be fitted. The spline fits the background component whereas count enhancements classify pixels and pixel cells with source contributions. In the following, pixel cells are subsumed by pixels. Pixel cells are collections of pixels where dij is the total photon count in the cell {ij}. The photon counts of neighbouring pixels are added up and the formed pixel cell is treated as a pixel. In principle, any cell shape can be chosen. In practice, two methods have been developed when pixels have weight of one within the cell (box filtering with cell shape of a square or of a circle) and one method when pixels have different weights within the cell (Gaussian weighting). The box filter with cells of squared shape consists of taking information of neighbouring pixels within a box. The cell size is the box size. The box filter with cells of circular shape considers pixels with a weight of one if inside the cell size, otherwise zero. Pixels have a weight of one when the cell size touches them at least at the centre. This method allows the pixel cells to have varying shapes. The Gaussian weighting method provides Gaussian weights around a centre: weights are given at decreasing values according to the distance from the centre. As expressed in, we are interested in estimating the probabilities of the hypotheses: Bij and Bij. 
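The overbar that distinguishes the two hypotheses was lost in extraction. Written out from the definitions just given (background counts b_ij, source counts s_ij, measurement noise epsilon_ij), they read:

B_{ij}:\; d_{ij} = b_{ij} + \epsilon_{ij}, \qquad \bar{B}_{ij}:\; d_{ij} = b_{ij} + s_{ij} + \epsilon_{ij}, \qquad b_{ij} \ge 0, \; s_{ij} \ge 0 .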
In this paper we address the problem when the photon counts are small and Poisson statistics have to be used. The likelihood probabilities for the two hypotheses within Poisson statistics are: This technique is easily adaptable to other likelihoods, for instance employing Gaussian statistics as given in Fischer et al.. The prior pdfs for the two complementary hypotheses are chosen to be p(Bij) = and p(Bij) = 1 −, independent of i and j. Specifically, the parameter is the prior probability that a pixel contains only background. Since it is not known if a certain pixel (or pixel cell) contains purely background or additional source signal, the likelihood for the mixture model is employed. The likelihood for the mixture model effectively combines the probability distributions for the two hypotheses, Bij and Bij: where b = {bij}, s = {sij} and {ij} corresponds to the pixels of the complete field. The probability of having source contribution in pixels and pixel cells is according to Bayes' theorem (see ): This equation enables the data contained in an astronomical image to be classified in two groups: with and without source signal contribution. Equation is also used in the multi-resolution analysis. The SPM with the largest resolution is characterized by the probability of uncorrelated pixels. At decreasing resolutions a correlation length is defined. Starting from a value of 0.5, the correlation length increases in steps of 0.5 pixel for decreasing resolution. The SPMs at decreasing resolutions are, therefore, characterized by the information provided by background and photon counts in pixel cells. Specifically, photon counts and background counts are given by a weighted integration over pixel cells. The integrated photon and background counts enter the likelihood for the mixture model. Then, the source probability is estimated for each image pixel in the multi-resolution analysis. The multiresolution algorithm preserves Poisson statistics. Source signal as a nuisance parameter Following Fischer et al. (2000Fischer et al. (, 2001, the source signal in is a nuisance parameter, which is removed by integrating it out (marginalization rule, eq. ): A nuisance parameter is a parameter which is important for the model (describing the data), but it is not of interest at the moment. Following BPT, a prior pdf of the source signal has to be chosen. The final result depends crucially on the prior pdf set on the source signal. In fact, in addition to the choice of the TPS pivots, the prior pdf of the source signal provides a description of what is source. All that is not described as a source is identified as background and vice versa. Two approaches are presented: the first method accounts for the knowledge of the mean value of the source intensity over the complete field (exponential prior), the second approach interprets the source signal distribution according to a power-law (inverse-Gamma function prior). 3.1.1.1 Exponential prior Following the works of Fischer et al. (2000Fischer et al. (, 2001Fischer et al. (, 2002, we choose a prior pdf of the source signal that is as weakly informative as possible. The idea follows a practical argument on the difficulty of providing sensible information. We describe the prior pdf on the source intensity by an exponential distribution, This is the most uninformative Maximum Entropy distribution for known mean value of the source intensity over the complete field. In Fig. 1, equation is drawn for two values of the mean source intensity: = 1 count and = 10 counts. 
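Before continuing with the parameter study around Figs. 1 and 2, the machinery just described (the Poisson likelihood for hypothesis B, the exponential prior on the source signal, the marginal Poisson likelihood for hypothesis B-bar, and the mixture-model source probability) can be made concrete with a short numerical sketch. This is an illustrative Python sketch rather than the paper's implementation: the quadrature replaces the closed-form marginal likelihood, and the example values of d, b, lambda and beta are made up.

import numpy as np
from scipy.integrate import quad
from scipy.stats import poisson


def likelihood_background(d, b):
    # p(d | B, b): Poisson probability of d counts from the background alone.
    return poisson.pmf(d, b)


def likelihood_source(d, b, lam):
    # p(d | B-bar, b, lambda): the source signal s is integrated out against
    # the exponential prior p(s | lambda) = exp(-s / lambda) / lambda.
    integrand = lambda s: poisson.pmf(d, b + s) * np.exp(-s / lam) / lam
    value, _ = quad(integrand, 0.0, np.inf)
    return value


def source_probability(d, b, lam, beta):
    # P(B-bar | d): posterior probability that the pixel (or pixel cell)
    # contains source signal in addition to the background, from the
    # two-component mixture likelihood with mixing weight beta.
    p_bkg = beta * likelihood_background(d, b)
    p_src = (1.0 - beta) * likelihood_source(d, b, lam)
    return p_src / (p_bkg + p_src)


# Illustrative pixel: 5 counts on a background of 1 count, a mean source
# intensity of 10 counts, and 99 per cent of pixels assumed background-only.
print(source_probability(d=5, b=1.0, lam=10.0, beta=0.99))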
No bright sources are expected to appear in fields with ∼ 1. In the case of values of ≫ 1, bright and faint sources are present in these fields. The marginal Poisson likelihood for the hypothesis Bij has the form: Fischer et al., 2001. The behaviour of the Poisson and the marginal Poisson distributions is depicted with a parameter study. For the parameter study three background amplitudes b are chosen: 0.1, 1 and 10 counts. In Fig. 2 the Poisson distribution and the marginal Poisson distribution are drawn on a logarithmic scale. These plots are indicated with (a), (b) and (c), respectively. The parameter, which describes the mean intensity in a field, has values of: 1, 10 and 100 counts. The selected values for the background and for the parameter are chosen such that the large variety of cases one encounters analysing digital astronomical images are covered. For instance, b = 10 counts and = 1 count (plot (c)) corresponds to a field when the source signal is much smaller than the background signal. On the other side, b = 0.1 counts and = 100 counts (plot (a)) corresponds to a field characterized by bright sources and small background amplitude. The Poisson distribution is larger than the marginal Poisson distribution for photon counts lower than or equal to the background intensity. Hence, hypothesis Bij is more likely than the complementary hypothesis Bij. This situation changes when the photon counts are larger than the background amplitude. The decay length of the marginal Poisson distribution is determined by the expected source intensity. The probability to detect pixels satisfying hypothesis Bij is sensitive to the decay length of the marginal Poisson distribution and to the background amplitude, that is recognizable in the distance between the Poisson and the marginal Poisson distributions. Hence, the BSS method allows probabilities to be sensitive to the parameters characterizing the analysed digital astronomical image. Let's consider plot (b) in Fig. 2 for photon counts in the range (0 − 10). The background amplitude has a value of 1 count. If the expected mean source intensity on the complete field has a value of 1 count, i.e. = 1 count, 3 photon counts or more in a pixel are classified as a source. The probability of detecting a source increases with increasing counts in a pixel. This is due to the increasing distance of the marginal Poisson likelihood from the Poisson likelihood. If an analyst allows for many bright sources distributed in the field, then the relative number of faint sources is reduced. In fact, when a mean source signal 100 times larger than the background is expected, then 5 photon counts or more in a pixel are needed to identify the event due to the presence of a source. When = 100 counts, 5 photon counts in a pixel reveal a source probability lower than the one obtained when = 1 count. This situation changes for 7 or more photon counts in a pixel. 3.1.1.2 Inverse-Gamma function prior Alternatively to the exponential prior, a power-law distribution can be chosen to describe the prior knowledge on the source signal distribution. We are tempted to claim that any physical situation contains sensible information for a proper (normalizable) prior. We choose a prior pdf that is inspired by the cumulative signal number counts distribution used often in astrophysics for describing the integral flux distribution of cosmological sources, i.e. a power-law function (see refs. Rosati, Borgani & Norman 2002;Brandt & Hasinger 2005 and references therein). 
The power law can not be employed as a prior pdf, because the power law is not normalized. We use instead a normalized inverse-Gamma function. It behaves at large counts as the power law, because it is described by a power law with an exponential cutoff. The prior pdf of the source signal, described by an inverse-Gamma function, is: with slope and cutoff parameter a. When a has a small positive value, the inverse-Gamma function is dominated by a power-law behaviour. The parameter a gives rise to a cutoff of faint sources. This parameter has important implications in the estima- tion of the background. If a is smaller than the background amplitude, the BSS algorithm detects sources of the order of the background. If a is larger than the background amplitude, the BSS algorithm assigns faint sources with intensities lower than a to be background only. In Fig. 1, equation is drawn for two values of the parameter. For this example the cutoff parameter a has a value of ∼ 0.1 count. The distributions peak around a. The decay of each distribution depends on the value of. When is large, i.e. for values 2, the distribution drops quickly to zero. Instead the distribution drops slowly to zero, when approaches 1. Hence, small values of indicate bright sources distributed on the field. The marginal Poisson likelihood for the hypothesis Bij is now described by: where > 1, a > 0 and K k−+1 (2 √ a) is the Modified Bessel function of order k − + 1. In order to avoid numerical problems with the Bessel function, the following upward recurrence relation was derived: is the Bessel function of imaginary argument and it has the property: shows the corresponding parameter study for the inverse-Gamma function prior. The parameter is chosen to be 1.3, 2.0 and 3.0 and the cutoff parameter a ∼ 0.1 counts. The decay length of the marginal Poisson distributions are now given by the value of. The decay length decreases with increasing values. The likelihood for the mixture model The marginal Poisson likelihood pdf will be indicated with p(dij|Bij, bij, ), where indicates or. In the case of the inverse-Gamma function prior pdf, the cutoff parameter a does not appear since the value of this parameter is chosen such that the inverse-Gamma function is dominated by a power-law behaviour. The likelihood for the mixture model, as written in, now reads: In Fig. 3 we show the effect of the likelihood for the mixture model on the data (semi-log plot). The likelihood pdf for the mixture model is drawn for each prior pdf of the source signal employing the background value b and the prior pdf. The value chosen for the parameter indicates that 99 per cent of the pixels distributed in the astronomical image are containing only background. The likelihood pdfs are composed by a central peak plus a long tail. The central peak is primarily due to the presence of the Poisson distribution. The long tail is caused by the marginal Poisson distribution. The presence of the long tail is essential in order to reduce the influence of the source signal for background estimation (see for more details). Thin-plate spline The TPS has been selected for modelling the structures arising in the background rate of a digital astronomical image. It is a type of radial basis function. The TPS is indicated with t(x), where x = (x, y) corresponds to the position on the grid in the detector field. The shape of the interpolating TPS surface fulfills a minimum curvature condition on infinite support. 
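The spline construction is spelled out in the next paragraph: a weighted sum of radially symmetric basis functions plus a plane, fitted by solving a bordered linear system through the pivots. As a concrete preview, a minimal sketch is given below. The r^2 log r basis is the classical thin-plate form and is assumed here, since the explicit expressions were lost in extraction, and the four corner pivots and their amplitudes are made-up values.

import numpy as np


def tps_basis(r):
    # Thin-plate spline radial basis f(r) = r^2 log r, defined as 0 at r = 0.
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(r > 0.0, r ** 2 * np.log(r), 0.0)


def fit_tps(pivots, z):
    # Solve for the radial weights and affine coefficients so that the surface
    # interpolates the amplitudes z at the pivot positions exactly.
    n = len(pivots)
    r = np.linalg.norm(pivots[:, None, :] - pivots[None, :, :], axis=-1)
    K = tps_basis(r)                              # radial part, n x n
    P = np.hstack([np.ones((n, 1)), pivots])      # affine part [1, x, y]
    A = np.block([[K, P], [P.T, np.zeros((3, 3))]])
    rhs = np.concatenate([z, np.zeros(3)])
    sol = np.linalg.solve(A, rhs)
    return sol[:n], sol[n:]                       # weights, (c0, cx, cy)


def evaluate_tps(weights, coeffs, pivots, x):
    # Evaluate the fitted spline (the background rate) at positions x (m x 2).
    r = np.linalg.norm(x[:, None, :] - pivots[None, :, :], axis=-1)
    return tps_basis(r) @ weights + coeffs[0] + x @ coeffs[1:]


# Four pivots at the corners of a 500 x 500 field with slightly different
# amplitudes (illustrative values), evaluated at the field centre.
pivots = np.array([[0.0, 0.0], [0.0, 499.0], [499.0, 0.0], [499.0, 499.0]])
z = np.array([0.10, 0.11, 0.09, 0.10])
w, c = fit_tps(pivots, z)
print(evaluate_tps(w, c, pivots, np.array([[249.5, 249.5]])))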
More specifically, the TPS is a weighted sum of translations of radially symmetric basis functions augmented by a linear term (see Meinguet 1979 andWahba 2006), of the form Nr is the number of support points (pivots). The weight is characterized by l. f (x − x l ) is a basis function, a function of real values depending on the distance between the grid points x and the support points x l, such that |x − x l | > 0. Given the pivots xi and the amplitude zi = z(xi), the TPS satisfies the interpolation conditions: t 2 is a measure of energy in the second derivatives of t. In other words, given a set of data points, a weighted combination of TPSs centered about each pivot gives the interpolation function that passes through the pivots exactly while minimizing the so-called 'bending energy'. The TPS satisfies the Euler-Lagrange equation and its solution has the form: This is a smooth function of two variables defined via Euclidean space distance. In Fig. 4 an example of TPS with one pivot is pictured. In order to fit the TPS to the data, it is necessary to solve for the weights and the plane's coefficients so that it is possible to interpolate the local TPS's amplitude: which is the background rate. bij will indicate the local background amplitude, i.e. the multiplication of tij and the local value of the telescope's exposure time ( ): The TPS interpolant is defined by the coefficients, ci of the plane E(x) and the weights l of the basis functions. The solution for the TPS has been evaluated on an infinite support, since no solutions exist on a finite support, where the requirements for this function to be fulfilled are: Given the interpolation values z = (z1,..., zN r ), the weights l and ci are searched so that the TPS satisfies: t(x l ) = z l, l = 1,..., Nr and in order to have a converging integral, the following conditions need to be satisfied: Nr l=1 l y l = 0. This means that we have (Nr − 3) conditions on t(x l ) = z l. The coefficients of the TPS, l, and the plane, ci, can be found by solving the linear system, that may be written in matrix form as: where the matrix components are: After having solved (, c) T, the TPS can be evaluated at any point. The pivots can be equally spaced or can be located in structures arising in the astronomical image. Following the works of Fischer et al. and von Toussaint & Gori, the present work can be extended employing adaptive splines, i.e. allowing the number of pivots and their positions to vary in accordance with the requirements of the data. Estimation of the background and its uncertainties The posterior pdf of the background is, according to Bayes' theorem, proportional to the product of the mixture likelihood pdf, eq., and the prior pdf p(b), that is chosen constant for positive values of b and null elsewhere. Its maximum with respect to b, b *, gives an estimate of the background map which consists of the TPS combined with the observatory's exposure map. The estimation of the background considers all pixels, i.e. on the complete field, because we tackle the source signal implicitly as outlier. The posterior pdf for b is given by: This integral is complicated due to the presence of the delta function. This is, however, of minor importance since our interest focuses on the expectation values of some functionals of b, say g(b). 
Therefore: Assuming the maximum of p(D | z)p(z) is well determined we can apply the Laplace approximation: that means we approximate the integral function by a Gaussian at the maximum z * and we compute the volume under that Gaussian. The covariance of the fitted Gaussian is determined by the Hessian matrix, as given by, ∂z i ∂z j is element {ij} of the Hessian matrix. This approximation is the 2 nd order Taylor expansion of the posterior pdf around the optimized pivots amplitude's values. For more details see O' Hagan & Forster. Then equation becomes: Therefore, the posterior mean of b is: ij z, and the variance is: The 1 error on the estimated background function is therefore calculated by the square root of equation. Determining the hyper-parameters The two hyper-parameters and have so far been assumed to be fixed. However, these parameters can be appropriately estimated from the data. Within the framework of BPT the hyper-parameters (nuisance parameters) and have to be marginalized. Alternatively, and not quite rigorous in the Bayesian sense, the hyper-parameters can be estimated from the marginal posterior pdf, where the background and source parameters are integrated out, Hence, the estimate of the hyper-parameters is the maximum of their joint posterior. The basic idea is to use BPT to determine the hyperparameters explicitly, i.e. from the data. This requires the posterior pdf of and. Bayes' theorem gives: The prior pdfs of the hyper-parameters are independent because the hyper-parameters are logically independent. These prior pdfs are chosen uninformative, because of our lack of knowledge on these parameters. The prior pdf for is chosen to be constant in. Since is a scale parameter, the appropriate prior distribution is Jeffrey's prior: p() ∼ 1/. Equation can be written as follows: Assuming the maximum of p(D | z,, )p(z) is well determined, we can apply the Laplace approximation can be written as follows: p(z * ) is chosen to be constant. The last term corresponds to the volume of the posterior pdf of and for each, estimates. Probability of hypothesis B In principle, the probability of detecting source signal in addition to the background should be derived marginalizing over the background coefficients and the hyper-parameters. Following the works of von der Linden et al. and Fischer et al., the multidimensional integral, arising from the marginalization, can be approximated at the optimal values found of the background coefficients and the hyper-parameters. Equation is approximated with: where b * = {b * ij } is the estimated background amplitude, as explained in Section 3.3. SPMs are estimated employing this formula. The BSS method does not incorporate the shape of the PSF. When the PSF FWHM is smaller than the image pixel size, then one pixel contains all the photons coming from a point-like source. Otherwise point-like sources are detected on pixel cells as large as the PSF FWHM. Extended objects are detected in pixel cells large enough that the source size is completely included. The pixel cell must be larger than the PSF FWHM and it can exhibit any shape. Equation shows that the source probability strictly depends on the ratio between the Poisson likelihood, p(dij | Bij, bij), and the marginal Poisson likelihood, p(dij | Bij, bij, ) (Bayes factor). Bayes factors offer a way of evaluating evidence in favor of competing models. Fig. 
5 shows the effect of the mixture model technique on the probability of having source contribution in pixels and pixel cells for the exponential and the inverse-Gamma function prior pdfs. For the parametric studies, the parameter is chosen to be 0.5. This noncommittal value of arises if each pixel (or pixel cell) is equally likely to contain source signal contribution or background only. Psource depends on the likelihood for the mixture model, which is the weighted sum of the Poisson distribution and the marginal Poisson distribution. For photon counts of about the mean background intensity, Psource is small. Psource increases with increasing photon counts due to the presence of the long tail in the marginal likelihood. This allows efficient separation of the source signal from the background. In panels (a)-(c), the distribution function of Psource, the Poisson pdf (P D) and the marginal Poisson pdf (M P D) are drawn using the exponential prior (see also Figs. 1 and 2). In the case of fields with bright sources ( > 10 times the background intensity), Psource is nearly zero for photon counts less than or equal to the mean background intensity. Psource increases rapidly with increasing photon counts. In the case of fields where the mean source intensity is similar to the mean background intensity, pixels containing photon counts close to the mean background intensity have probabilities about 50 per cent. In these cases, Psource increases slowly with increasing photon counts, because the two Poisson distributions are similar. In the case of fields dominated by large background signal ( < mean background amplitude), Psource increases very slowly with increasing photon counts. In this case, the decay of the marginal Poisson distribution follows closely the decay of the Poisson distribution (e.g. for b = 10 counts and = 1 count). In Fig. 5 panels (d)-(f ), the distribution functions are shown using the inverse-Gamma function prior (see also Figs. 1 and 2). The steepness of the slope depends on the parameter (Fig. 1). The source probability curve increases faster at decreasing values, because small values indicate bright sources distributed in the field. In panels (e) and (f ), Psource provides values close to 50 per cent at low numbers of photon counts. This effect addresses the cutoff parameter a. In fact, in these plots the cutoff parameter a is smaller than the background amplitude. The situation is different in the simulations with small background amplitude (panel (d)), where the source probabilities decrease below 50 per cent at low numbers of photon counts. In these simulations the cutoff parameter a is chosen larger than the mean background amplitude. Faint sources with intensities lower than a are described to be background. The interpretations of the probability of having source contributions in pixels and pixel cells are shown in Table 1. Source probabilities <50 per cent indicates the detection of background only. Psource is indifferent at values of 50 per cent. In both cases, Psource might contain sources but they can not be distinguished from the background due to statistical fluctuations. Source probability values ≫ 50 per cent indicate source detections. False sources due to statistical fluctuations may occur especially for values <99 per cent. For more details about the interpretations provided in Table 1 see Jeffreys and Kass & Raftery. Statistical combination of SPMs at different energy bands An astronomical image is usually given in various energy bands. 
SPMs, obtained with, acquired at different energy bands can be combined statistically. The probability of detecting source signal in addition to the background for the combined energy bands {k} is: where n corresponds to the effective energy band., and for the exponential prior pdf, panels (a)-(c); eqs., and for the inverse-Gamma function prior pdf, panels (d)-(f). Equation allows one to provide conclusive posterior pdfs for the detected sources from combined energy bands. It results, as expected, that if an object is detected in multiple bands, the resulting p(Bij | dij) comb is larger than the source probability obtained analysing a single band. An application of this technique is shown in Section 8.1.1. This analysis can be extended to study crowded fields or blends by investigating source colours. SOURCE CHARACTERIZATION Following the estimation of the background and the identification of sources, the sources can be parameterized. SPMs at different resolutions are investigated first. Sources are catalogued with the largest source probability reached in one of the SPMs. Local regions are chosen around the detected sources. The region size is determined by the correlation length where the maximum of the source probability is reached. Although any suitable source shape can be used, we model the sources by a two-dimensional Gaussian. The data belonging to a source detection area 'k' are given by: Dij are the modelled photon counts in a pixel {ij} spoiled with the background counts bij. Gij is the function which describes the photon counts distribution of the detected source: I is the intensity of the source, i.e. the net source counts. x, y and provide the source shape. xs, ys is the source pixel position on the image. Position, intensity and source shape are provided maximizing the likelihood function assuming constant priors for all parameters: p(xs, ys, I, x, y, |b, d) where dij are the photon counts detected on the image. According to and the source fitting is executed on the sources for given background. This is reasonable since the uncertainty of the estimated background is small. No explicit background subtraction from the photon counts is needed for estimating the source parameters. Source position and extensions are converted from detector space to sky space. Source fluxes are provided straightforwardly. The rms uncertainties of the source parameters are estimated from the Hessian matrix, where Hij := − ∂ 2 ln ∂ i ∂ j is element {ij} of the Hessian matrix and indicates the source parameters. The square root of the diagonal elements of the inverse of the Hessian matrix provides the 1 errors on the estimated parameters. The output catalogue includes source positions, source counts, source count rates, background counts, source shapes and their errors. The source probability and the source detection's resolution are incorporated. Source characterization can be extended with model selection. With the Bayesian model selection technique, the most suitable model describing the photon count profile of the detected sources can be found. The models to be employed are, for instance, Gaussian profile, King profile (Cavaliere & Fusco-Femiano 1978), de Vaucouleurs model, Hubble model. Such an extention to the actual method would allow an improvement in the estimation of the shape parameters of faint and extended sources. RELIABILITY OF THE DETECTIONS There are several methods for reducing the number of false positives, one is to use p-values from statistical significance tests (Linnemann 2003). 
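Before continuing with the p-value discussion, the source-characterisation step described above (maximising the Poisson likelihood of a two-dimensional Gaussian model on top of the fixed, estimated background) can be sketched as follows. This is a simplified illustration: a circular Gaussian replaces the general source shape, the optimiser and the cut-out values are arbitrary choices, and the 1-sigma errors would come from the inverse Hessian at the optimum as described above.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson


def gaussian_source(params, xx, yy):
    # Circular two-dimensional Gaussian: net counts I centred at (xs, ys)
    # with width sigma.
    I, xs, ys, sigma = params
    norm = I / (2.0 * np.pi * sigma ** 2)
    return norm * np.exp(-((xx - xs) ** 2 + (yy - ys) ** 2) / (2.0 * sigma ** 2))


def neg_log_likelihood(params, data, background, xx, yy):
    # Negative Poisson log-likelihood for model = background + source; the
    # background is held fixed at its estimated value, as in the text.
    model = np.clip(background + gaussian_source(params, xx, yy), 1e-12, None)
    return -np.sum(poisson.logpmf(data, model))


# Illustrative 21 x 21 pixel cut-out around a detected source (made-up values).
rng = np.random.default_rng(1)
xx, yy = np.meshgrid(np.arange(21.0), np.arange(21.0))
truth = (200.0, 10.0, 10.0, 2.0)          # net counts, xs, ys, sigma
background = np.full(xx.shape, 1.0)       # estimated background map b*
data = rng.poisson(background + gaussian_source(truth, xx, yy))

fit = minimize(neg_log_likelihood, x0=(100.0, 9.0, 11.0, 3.0),
               args=(data, background, xx, yy), method="Nelder-Mead")
print(fit.x)  # fitted net counts, position and width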
P -values are used to a great extent in many fields of science. Unfortunately, they are commonly misinterpreted. Researches on p-values (e.g. Berger & Sellke 1987) showed that p-values can be a highly misleading measure of evidence. In Section 5.1 we provide the definition of p-values. In Section 5.2, we express our general view on how this problem can be tackled with BPT. The commonly used measure of statistical significance with Poisson p-values is introduced in Section 5.3. In Section 5.4 simulations are used for comparing the Poisson p-values with the Bayesian probabilities. P -values In hypothesis testing one is interested in making inferences about the truth of some hypothesis H0, given a set of random variables X: X ∼ f (x), where f (x) is a continuous density and x is the actual observed values. A statistic T (X) is chosen to investigate compatibility of the model with the observed data x, with large values of T indicating less compatibility (Sellke, Bayarri & Berger 2001). The p-value is then defined as: The significance level of a test is the maximum allowed probability, assuming H0, that the statistic would be observed. The p-value is compared to the significance level. If the p-value is smaller than or equal to the significance level then H0 is rejected. The significance level is an arbitrary number between 0 and 1, depending on the scientific field one is working in. However, classically a significance level of 0.05 is accepted. Unfortunately, the significance level of 0.05 does not indicate a particular strong evidence against H0, since it just claims an even chance. The Bayesian viewpoint Since the state of knowledge is always incomplete, a hypothesis can never be proved false or true. One can only compute the probability of two or more competing hypotheses (or models) on the basis of the only data available, see e.g. Gregory. The Bayesian approach to hypothesis testing is conceptually straightforward. Prior probabilities are assigned to all unknown hypotheses. Probability theory is then used to compute the posterior probabilities of the hypotheses given the observed data (Berger 1997). This is in contrast to standard significance testing which does not provide such interpretation. In fact, in the classic approach the truth of a hypothesis can be only inferred indirectly. Finally, it is important to underline that the observed data and parameters describing the hypotheses are subject to uncertainties which have to be estimated and encoded in probability distributions. With BPT there is no need to distinguish between statistical (or random) and systematic uncertainties. Both kinds of uncertainties are treated as lack of knowledge. For more on the subject see Fischer, Dinklage & Pasch. Significance testing with p-values Several measures of statistical significance with p-values have been developed in astrophysics. A critical comparison and discussion about methods for measuring the significance of a particular observation can be found in Linnemann. Following Linnemann, our attention is focused on the Poisson probability p-value: pP is the probability of finding d or more (random) events under a Poisson distribution with an expected number of events given by b. Linnemann remarks that Poisson probability p-value estimates lead to overestimates of the significance since the uncertainties on the background are ignored. 
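For reference, the Poisson probability p-value just defined amounts to a single survival-function call; the example numbers below are arbitrary.

from scipy.stats import poisson


def poisson_p_value(d, b):
    # p_P = P(X >= d) for X ~ Poisson(b): the chance that background
    # fluctuations alone produce d or more counts.
    return poisson.sf(d - 1, b)


# Example: 5 observed counts on an expected background of 1 count.
print(poisson_p_value(5, 1.0))  # approximately 3.7e-3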
Comparing threshold settings for source reliability In order to restrain the rates of false source detections per field, a threshold on probabilities is commonly set according to the goal of a specific research. For instance, Freeman et al. L is given by −ln(1 − P ), where P is the probability of source detection obtained by a maximum likelihood method (Cruddace, Hasinger & Schmitt 1988). The selected likelihood threshold corresponds to the detection of < 1 spurious source per field. In any systematic investigation to source detection, the threshold level is a tradeoff between the detection power and false detections rate. Following this idea, the Poisson probability p-value is compared to the Bayesian source probability. Fig. 6 compares the two statistics. Panel (a) shows the relation between pP and (1 − Psource) for varying background amplitudes and source intensities. Panel (b) displays the same data but with fixed background value and varying source intensities. These results are obtained employing the exponential prior. A more detailed study, including the inverse-Gamma function prior, can be found in Guglielmetti. The. On the abscissa, the background probability is calculated as the complementary source probability provided by the Bayesian method. The value close to unity corresponds to a source probability, Psource, which goes to zero, instead a value of 0.1 corresponds to 90 per cent source probability and 0.01 to 99 per cent source probability. Each plot shows a general tendency. For a given count number d, Psource and (1-pP ) increase with decreasing background intensity. However, Psource is more conservative. This is due to the dependency of Psource not only on the mean background intensity but also on the source intensity distri- bution. This dependency is expressed by the likelihood for the mixture model, that plays a central role for the estimation of the source probability. An example of the different interpretations of source detection employing the two statistics is provided in Table 2. In Fig. 7, Psource for a given number of photon counts in a pixel versus /b, the ratio between the mean source intensity and the background intensity, is drawn for a fixed background value. The abscissa is drawn on a logarithmic scale. For a given number of photon counts, the value of Psource varies with the source intensities expected in the astronomical image and with the background amplitude. For 2 photon counts, Psource reaches a maximum where the mean source intensity in the field has values in the range (1 − 5) counts. In this part of the curve, 2 photon counts in a pixel are discriminated best from the background. Away from this range, the source probability decreases. For small /b values, Psource approaches 0.5 because source and background can not be distinguished. For large /b values, Psource decreases since more sources with large intensities are expected relative to small intensities. Therefore, a signal with 2 counts is assigned to be background photons only. (1 − pP ) is calculated for the same values of background and photon counts as for Psource. (1 − pP ) is constant, since it does not depend on the source intensities expected in the field. Its value for 2 photon counts, as seen in Table 2, is larger than the maximum of Psource. In general, (1 − pP ) is larger than Psource for values of photon counts larger than the mean background intensity. 
If the values of the photon counts are lower than or equal to the mean background intensity, (1 − pP ) is lower than the maximum of Psource. The comparison shows that it is not possible to calibrate pP with Psource because of the intrinsic difference in the nature of the two statistics. In fact, Poisson p-values do not provide an interpretation about background or sources and do not include uncertainties on the background. The Bayesian method, instead, gives information about background and sources and their uncertainties. The comparison between the two statistics reveals that slightly different answers are arising for the two priors of the source signal. When the exponential prior is employed, fields with large intensities are less penalized by false positives caused by random Poisson noise than fields with source signal very close to the background amplitude. When the inverse-Gamma function prior is used, false positives detec-tions depend on the cutoff parameter a. This is because the cutoff parameter has an effect on faint sources. The same behaviour is expected on false positives in source detection. The exponential prior, instead, does not exclude small source intensities. Note that the choice of the source signal prior pdf is crucial for source detection. For a reliable analysis the source signal prior chosen has to be as close as possible to the true one. SIMULATED DATA Artificial data are used for performance assessment of the BSS technique. Three simulations are analysed utilizing the exponential and the inverse-Gamma function prior pdfs of the source signal. The datasets, described in Section 6.1, are meant to test the capabilities of the BSS method at varying background values. The idea is to cover different cases one encounters while surveying different sky regions or employing instruments of new and old generations. In Section 6.2 we review the outcome of our analysis on the three simulated datasets. Comments are given for each feature of the developed technique. In Section 6.3 we provide a summary on the outcome of our analysis. Simulations setup Three sets of simulated fields composed of 100 sources modelled on a constant background with added Poisson noise are generated. Groups of ten sources are characterized by the same number of photon counts but with different sizes. A logarithmic increment in photon counts per group is chosen ranging from 1 to 512. The shape of each source is characterized by a two-dimensional circular Gaussian. The source extensions, given by the Gaussian standard deviation, increase from 0.5 to 5.0 pixels in steps of 0.5. Sources are located equidistantly on a grid of 500 500 pixels. Initially, the simulated sources are located on the grid such that the source intensities increase on the abscissa, while the source extensions increase on the ordinate. Subsequently, the 100 sources are randomly permuted on the field. A background is added on each simulated field with values of 0.1, 1 and 10 counts respectively. We assume a constant exposure. In Fig. 8, we show one out of three datasets: the simulated data with small background. Image (a) represents the simulated data with added Poisson noise. The image indicated with (b) is the simulated data without Poisson noise. It is placed for comparison. The simulated datasets (a) and (b) are scaled in the range (0 − 3) photon countspixel −1 in order to enhance faint sources. The original scale of the simulated data with Poisson noise is (0 − 317) photon countspixel −1. 
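Before returning to Fig. 8, the simulation setup just described can be reproduced with a short script. This is a sketch under stated assumptions: the equidistant 10 x 10 grid spacing, the Gaussian normalisation (each source's group intensity spread over its profile) and the random seed are illustrative choices not specified in the text.

import numpy as np

rng = np.random.default_rng(0)

# Ten intensity groups with a logarithmic increment from 1 to 512 counts and
# ten source widths (Gaussian sigma) from 0.5 to 5.0 pixels in steps of 0.5.
intensities = 2.0 ** np.arange(10)        # 1, 2, 4, ..., 512 counts
sigmas = np.arange(0.5, 5.5, 0.5)         # 0.5, 1.0, ..., 5.0 pixels

ny = nx = 500
yy, xx = np.mgrid[0:ny, 0:nx]

# 100 sources on an equidistant 10 x 10 grid, then randomly permuted.
grid = np.linspace(25.0, 475.0, 10)
positions = [(x0, y0) for y0 in grid for x0 in grid]
pairs = [(I, s) for s in sigmas for I in intensities]
rng.shuffle(pairs)


def make_field(background):
    # Constant background plus 100 circular Gaussian sources, with Poisson noise.
    image = np.full((ny, nx), float(background))
    for (I, sigma), (x0, y0) in zip(pairs, positions):
        r2 = (xx - x0) ** 2 + (yy - y0) ** 2
        image += I / (2.0 * np.pi * sigma ** 2) * np.exp(-r2 / (2.0 * sigma ** 2))
    return rng.poisson(image)


fields = {b: make_field(b) for b in (0.1, 1.0, 10.0)}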
The simulated data without Poisson noise span the range (0.1 − 326.0) counts pixel−1. The images representing the other two datasets, for b = 1 and 10 counts, are similar to the one shown in Fig. 8. In these datasets, the number of sources to be separated from the background decreases with increasing background intensity. The cutoff parameter a is chosen to be 0.14 counts in the three simulated datasets. This is to show the effect of a when the background is smaller or larger than a. Background estimation For the background modelling, only four pivots located at the field's corners are used. This choice is driven by the presence of a constant background. An optimization routine is used for maximizing the likelihood for the mixture model. The solutions of the optimization routine are the pivot amplitude estimates, from which the background is calculated. The three setups are designed such that half of the 100 simulated sources are characterized by ≤ 16 photon counts. Some of these simulated sources are too faint to be detected. These sources may contribute to the background model. In Fig. 8, the estimated background maps are displayed when employing the exponential prior pdf (image (d)) and the inverse-Gamma function prior pdf (image (f)) for the simulated data with small background. The two images show that the background intensity decreases slightly, by about 5 per cent, toward the upper left and lower right corners. The same trend is seen also in the estimated backgrounds with intermediate and large values. Evidently, this effect is not introduced by the prior over the signal. Also, it is not introduced by the selected pivot positions. If that were the case, the same magnitude would be expected at each image corner. Instead, this is an overall effect induced by the simulated sources. All simulated sources are randomly permuted. In the upper left and lower right corners numerous faint sources are located. In the lower left corner many bright sources are clustered. This variation in the background intensity is due to the statistical distribution of the sources. This explains why the same trend in background intensities is seen in all background models. When employing the exponential prior pdf, the estimated background intensities are in agreement with the simulated background amplitudes. In the case of the inverse-Gamma function prior pdf, the estimated backgrounds are sensitive to the cutoff parameter a. When a is set larger than the mean background (i.e. simulated data with small background), the background is overestimated. The overestimated background is due to the presence of source signal below the cutoff parameter. Hence, no source intensities below 0.14 counts are allowed. As a result, the estimated background is 40 per cent larger than the simulated one. For simulated data with intermediate background, the cutoff parameter a is fixed to a value lower than the simulated background. The background is underestimated by only ∼ 1 per cent with respect to the simulated one. For simulated data with large background, the cutoff parameter a is much lower than the simulated mean background value. The estimated background is in agreement with the simulated one. The background uncertainties are quite small compared with the background itself, on the order of a few per cent. This effect holds because the background is estimated on the full field. However, the errors increase where the estimates deviate from the simulated background.
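As an illustration of the optimization step described above, the sketch below fits a single constant background amplitude together with the mixture hyper-parameters by maximizing the marginal likelihood summed over pixels. Restricting the model to one global amplitude rather than the four TPS corner pivots is a simplification made for this example; the starting values and bounds are likewise arbitrary choices.

    # Sketch: estimate a constant background b and hyper-parameters (beta, lam)
    # by maximizing the marginal mixture likelihood over all pixels.
    import numpy as np
    from scipy.stats import poisson
    from scipy.integrate import quad
    from scipy.optimize import minimize

    def marginal_source_like(d, b, lam):
        """Likelihood of d counts marginalized over an exponential source prior."""
        f = lambda s: poisson.pmf(d, b + s) * np.exp(-s / lam) / lam
        return quad(f, 0.0, 50.0 * lam)[0]

    def neg_log_like(theta, counts, freq):
        b, beta, lam = theta
        ll = 0.0
        for d, n in zip(counts, freq):        # loop over the distinct count values only
            like = beta * poisson.pmf(d, b) + (1.0 - beta) * marginal_source_like(d, b, lam)
            ll += n * np.log(like + 1e-300)
        return -ll

    def fit_mixture(data):
        counts, freq = np.unique(data, return_counts=True)
        x0 = (data.mean(), 0.95, 5.0)         # crude starting point
        bounds = [(1e-3, None), (0.5, 1.0 - 1e-6), (0.1, None)]
        res = minimize(neg_log_like, x0, args=(counts, freq), bounds=bounds, method="L-BFGS-B")
        return res.x                          # (b, beta, lam)

    # Example (using the `data` array from the simulation sketch above):
    # b_hat, beta_hat, lam_hat = fit_mixture(data)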
In addition, when applying the inverse-Gamma function prior pdf, the errors are larger than those found utilizing the exponential prior pdf. Hyper-parameter estimation In Fig. 9 the contour plot in the (β, λ) parameter space for the joint probability distribution is shown for the hyper-parameters evaluated from the simulated data with small background. The contour levels indicate the credible regions. The values of the estimated hyper-parameters are: β = (99.2 ± 0.03) per cent, λ = (3.68 ± 0.1) counts. The estimated β value provides the information that only 0.8 per cent of the pixels in the field contain sources. A similar answer is found with the other simulated data: the β value increases slightly with increasing background amplitude. λ, instead, provides the mean source intensity in the field. The estimated value of λ increases with increasing background amplitude because small intensities are assigned to be background. When employing the inverse-Gamma function prior pdf, the hyper-parameter α takes its smallest value in the simulated data with small background. The largest value of α is found in the simulated data with intermediate background. Large values of α indicate that more faint sources and fewer bright sources are expected in the field (Fig. 1). These results do not contradict our expectations on the hyper-parameter estimates, since the cutoff parameter selects the source signal distribution at the faint end. The exponential prior pdf is plotted over the histogram with a continuous line. The inverse-Gamma function prior pdf, instead, is plotted with a dashed line. The simulated data are neither distributed exponentially nor as an inverse-Gamma function. Hence, the prior pdfs of the source signal are not expected to fit the data exactly. The marginal Poisson distribution weighted with (1 − β) is drawn with a dash-dot line when employing the exponential prior and with a long-dashed line in the case of the inverse-Gamma function prior. The Poisson distribution (dotted line) is weighted with β for both the exponential and the inverse-Gamma function prior pdfs. The intersection between the Poisson pdf and the marginal Poisson pdf indicates the source detection sensitivity. In the simulated data with small background (panel (a)), the exponential prior enables the detection of fainter sources than the inverse-Gamma function prior. This is expected, since the cutoff parameter occurs at a value larger than the simulated mean background. Considering the simulated data with intermediate background (panel (b)), the detection is more sensitive to faint sources when employing the inverse-Gamma function prior compared to the exponential prior. In fact, the cutoff parameter allows one to describe part of the simulated background amplitude as source signal. For the simulated data with large background (panel (c)), the same sensitivity in source detection is expected when employing the two priors over the signal distribution. Source probability maps The box filter method with the cell shape of a circle is used in the three simulations for the multi-resolution analysis. Examples of SPMs are shown in Fig. 8 for the simulated data with small background. Images (c) and (e) are obtained employing the exponential and the inverse-Gamma function prior pdfs, respectively. These images represent the probability of having source contributions in pixel cells with a resolution of 1.5 pixels. At this resolution a pixel cell is composed of 9 pixels.
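A minimal version of this box-filter step is sketched below: counts and the background model are summed over circular cells of the chosen correlation radius, and the mixture source probability is evaluated cell by cell. The hyper-parameters are assumed to have been estimated beforehand (e.g. as in the previous sketch), and the pixel-by-pixel numerical integration is kept deliberately simple and slow for clarity; it is an illustration rather than the BSS implementation.

    # Sketch: source probability map (SPM) with the box filter method, circular cells.
    import numpy as np
    from scipy.ndimage import convolve
    from scipy.stats import poisson
    from scipy.integrate import quad

    def disk_kernel(radius):
        r = int(np.ceil(radius))
        y, x = np.mgrid[-r:r + 1, -r:r + 1]
        return (x ** 2 + y ** 2 <= radius ** 2).astype(float)   # radius 1.5 -> 9 pixels

    def spm(data, bkg_map, radius, beta, lam):
        """Source probability for each circular cell of the given correlation radius."""
        k = disk_kernel(radius)
        cell_counts = convolve(data.astype(float), k, mode="nearest")
        cell_bkg = convolve(bkg_map, k, mode="nearest")
        prob = np.zeros_like(cell_bkg)
        for idx in np.ndindex(prob.shape):                       # slow but transparent
            d, b = int(round(cell_counts[idx])), cell_bkg[idx]
            f = lambda s: poisson.pmf(d, b + s) * np.exp(-s / lam) / lam
            m = quad(f, 0.0, 50.0 * lam)[0]
            p_bg = beta * poisson.pmf(d, b)
            prob[idx] = (1.0 - beta) * m / (p_bg + (1.0 - beta) * m)
        return prob

    # e.g. spm(data, np.full(data.shape, b_hat), radius=1.5, beta=beta_hat, lam=lam_hat)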
A pixel cell with a correlation radius of 1.5 pixels is drawn in the lower right corner of image (c) (Fig. 8). It is indicated with an arrow. The multi-resolution technique provides an analysis of the variation of the source probabilities and of the features of the detected sources. In Fig. 11, we display the photon count image and the SPMs zoomed in on a bright extended source. This source is detected with the largest source probability (∼ 1) at 2.5 pixels resolution. At this resolution the source is detected as one unique object, as given by the simulation. At larger resolutions, instead, the source is dissociated into small parts, as seen in the photon count image. This indicates that the source counts are not distributed uniformly. In an astronomical observation, more information is required in order to understand the nature of such sources. Secondly, the maximum in source probability is reached at a correlation length that is smaller than the source size. This is due to the source brightness relative to the small background value. Within the range of resolutions studied, the source probability is constant at correlation lengths larger than 2.5 pixels. This example shows that the multi-resolution technique combined with the BSS method is particularly appropriate for the search for extended and non-symmetrical sources. Comparison between estimated and simulated source parameters Source parameters and their uncertainties are derived as described in Section 4. Sources are catalogued when a probability larger than 50 per cent is reached in at least one of the SPMs. This threshold is chosen within these simulated datasets in order to clarify the interpretations provided in Table 1. The parameters of bright sources are precisely estimated. In Fig. 11 we give an example of the detection of a bright extended source employing SPMs in the multi-resolution technique. The true parameters of this source are: 128 photon counts, σ(x,y) = (…), (x,y) = (…). The estimated parameters of this source are: (129.79 ± 23.70) net source counts, σ(x,y) = (4.92 ± 0.67, 4.96 ± 0.71) pixels, (x,y) = (359.57 ± 0.92, 269.29 ± 0.98) pixels. Instead, the effect of background fluctuations on faint source estimates can be quite pronounced. In Fig. 12, we provide an example of the estimated source positions and extensions on the simulated data with small background, utilizing the exponential prior pdf. The errors on the estimated parameters are not considered in this plot. Some of the detected faint sources look uncentered and distorted. Four false positives in source detection are found. [Caption fragments: the background values of the simulated data are reported in counts pixel−1; Epr and IGpr have the same meaning as in the previous tables. The simulated data without Poisson noise, with the simulated source shapes superimposed, are shown for comparison.] Table 3 reports the number of detected sources for each simulation. Different columns are used for counting true detections and false positives separately. The number of detected sources employing the inverse-Gamma function prior is larger with respect to the exponential prior case only when the cutoff parameter is set lower than the mean background amplitude. In Fig. 13, the estimated source counts are related to their resolution of source detection (panels (a) and (c)) and their source probability (panels (b) and (d)) for the simulated data with small background. These are log-linear plots. The results utilizing the exponential and the inverse-Gamma function prior pdfs are shown in panels (a) and (b) and in panels (c) and (d), respectively.
The asterisks indicate true sources, while the squares show spurious detections. The effect of the cutoff parameter a on source detection is visible in panel (c). In this example, the value of a is chosen larger than the simulated background amplitude. Hence, the inverse-Gamma function prior pdf does not allow one to detect sources as faint as the exponential prior does. The plots in panels (a) and (c) illustrate the source detection technique at several correlation lengths. In both plots, a line is drawn only to guide the eye. Left of the line, sources can not be detected. Right of the line, sources can be found. Very faint sources with few photon counts (≲ 10) are resolved by small correlation lengths. In the range (10 − 100) net source counts, sources are detected at decreasing resolutions. Bright sources do not require large correlation lengths to be detected. The plots in panels (b) and (d) of Fig. 13 highlight that most of the detections occur at probabilities larger than 99 per cent. At this probability value and above, sources matched with the simulated ones are strongly separated from false positives. In Fig. 14 we show the relation between the simulated and the estimated source parameters. Good estimates of the source parameters are achieved. The estimated net source count errors and extension errors can be large for faint sources. In Fig. 15 a summary of the analysis of all the detected sources employing the exponential prior on the three simulated datasets is presented. The plot in panel (a) shows the difference between estimated and simulated net source counts, normalized with the estimated errors, versus the source probability of the merged data. This plot does not contain the information provided by false positives in source detection. 87 per cent of all detections occurred with a probability larger than 99 per cent. The image in panel (b) is a semi-log plot of the normalized difference between measured and simulated net source counts versus the simulated net source counts for the 98.5 per cent of true sources detected in the three simulations. The values of two sources, detected in the simulated data with large background, are outside the selected y range. These two detections are included in the analysis of verification with existing algorithms (Section 7). The residuals are normally distributed, as expected. They are located symmetrically around zero. At the faint end, the results are only limited by the small number of simulated faint detectable sources. Faint and bright sources are equally well detected. False positives Until now we have discussed the detections that have counterparts in the simulated data. We consider now the detection of false positives. In Table 3 the numbers of detected false positives are listed for each simulation. At the 50 per cent probability threshold more false positives are found with the inverse-Gamma function prior compared to the exponential one. At a 90 per cent source probability threshold, the analyses with the two prior pdfs provide similar results. True detections are strongly separated from statistical fluctuations for source probability values larger than 99 per cent. When employing the inverse-Gamma function prior pdf, the number of false positives is sensitive to the cutoff parameter. Fewer false positives are found when a is set larger than the background, because this reduces the number of detectable faint sources.
It may be worth noting that even when the cutoff parameter is set larger than the background, a probability threshold of at least 90 per cent has to be considered (see Fig. 13 for more details). False positives in source detection show large errors in their estimated parameters. The source probability variation and the source feature analyses in the multi-resolution technique provide hints of ambiguous detections. However, as all methods, the BSS approach is limited by statistics. Spurious detections can not be ruled out. Figure 15. Merged information from three simulated datasets of the detected sources employing the exponential prior pdf. Panel (a), difference between estimated and simulated net source counts normalized by the errors on the estimated net source counts versus source probability. Panel (b), difference between estimated and simulated net source counts normalized by the errors on the estimated net source counts versus the simulated counts. Choice of the prior pdf of the source signal The big difference between the two prior pdfs of the source signal follows from one prior pdf having one parameter and the other pdf having two. The parameter λ, indicating the mean source intensity in an astronomical image, introduced with the exponential prior pdf, is estimated from the data. The parameter α, that is, the shape parameter of the power-law given by the inverse-Gamma function prior pdf, is estimated from the data. Instead, the cutoff parameter a is set to a small value such that the inverse-Gamma function prior pdf behaves as a power-law. Astronomical images can be characterized by a small background. As a result, a can be chosen from a number of alternatives, ranging from values that are above or below the background amplitude. The choice of a implies a selection on the detectable sources: sources whose intensity is lower than a are not detected; sources close to the background amplitude are detected when a is set below the background amplitude. On real data much more prior information for the cutoff parameter is needed. The inverse-Gamma function prior pdf can be employed if a mean value of the background amplitude is already known from previous analyses. The exponential prior pdf is preferable over the inverse-Gamma function prior pdf, since no predefined values are incorporated. This is also supported by the results obtained with the simulated data. However, the inverse-Gamma function prior is a more suitable model to fit the data and it has potential for improving the detection of faint objects. One way to improve the knowledge acquired with the inverse-Gamma function prior is by the estimation of the cutoff parameter from the data. This change in our algorithm is not straightforward and MCMC algorithms have to be employed. Summary Simulated data are employed to assess the properties of the BSS technique. The estimated background and source probabilities depend on the prior information chosen. A successful separation of background and sources must depend on the criteria which define the background. Structures beyond the defined properties of the background model are, therefore, assigned to be sources. There is no sensible background-source separation without defining a model for the object background. Additionally, prior information on source intensity distributions helps to sort data which are marginally consistent with the background model into background or source.
Therefore, prior knowledge of the background model as well as of the source intensity distribution function is crucial for successful background-source separation. For the background model a two-dimensional TPS representation was chosen. It is flexible enough to reconstruct any spatial structure in the background rate distribution. The parameters are the number, positions and amplitudes of the spline supporting points. Any other background model capable of quantifying structures which should be assigned to the background can be used as well. For the prior distribution of the source intensities the exponential and the inverse-Gamma function are used as illustrations. For both distributions the source probability can be given analytically. The hyper-parameters of both distributions can either be chosen in advance to describe known source intensity properties or can be estimated from the data. If they are estimated from the data simultaneously with the background parameters, properties of the source intensity distribution can be derived, but at the expense of larger estimation uncertainties. It is important to note that the performance of the BSS method increases with the quality of the prior information employed for the source intensity distribution. The prior distribution of the source intensities determines the general behaviour of the sources in the field of view and the hyper-parameters are useful for fine-tuning. The aim of detecting faint sources competes with the omnipresent detection of false positives. The suppression of false positives depends both on the expedient choice of prior information and on the level of detection probability accepted for source identification. Compared to, e.g., p-values the BSS technique is rather conservative in estimating source probabilities. Therefore, a probability threshold of 99 per cent is mostly effective to suppress false positives. The estimated background rates are consistent with the simulated ones. Crowded areas with regions of marginally detectable sources might increase the background rate accordingly. The SPMs at different correlation lengths are an important feature of the technique. The multi-resolution analysis allows one to detect fine structures of the sources. The source parameters are well determined. Their residuals are normally distributed. We will show in Section 7 that the BSS technique performs better than frequently used techniques. Naturally, the estimation uncertainties of parameters for faint sources are large due to the propagation of the background uncertainty. VERIFICATION WITH EXISTING ALGORITHMS In the X-ray regime, the sliding window technique and the WT techniques are widely used. However, the WT has been shown to perform better than the sliding window technique for source detection. The WT improvement in source detection with respect to the sliding window technique is inversely proportional to the background amplitude. The WT has also other favourable aspects for being compared with the BSS method developed in our work. The WT allows for the search of faint extended sources. The BSS method with the multi-resolution technique is close to the WT method. Among all the available software packages employing the WT, we choose wavdetect, part of the freely available CIAO software package. We use version 3.4. wavdetect is a powerful and flexible software package. It has been developed for a generic detector. It is applicable to data with small background.
The algorithm includes the satellite's exposure variations. It estimates the local background count amplitude in each image pixel and it provides a background map. We utilize wavdetect on our simulated data described in Section 6. The threshold setting for the significance ('sigthresh') is chosen equal to 4.0 × 10⁻⁶, in order to detect on average 1 spurious source per image. The 'scale' sizes are chosen with a logarithmic increment from 2 to 64. Tests have been made changing the levels of these parameters, assuring us that the selected values provide good performance of this WT technique. In Table 4, we report the number of detected sources per simulated field, separating the sources matched with the simulated ones (true detections) from the false positives in source detection found employing the above mentioned threshold setting. The three simulated datasets are distinguished by their background values (counts). The simulated background values are reported in the column 'simulated data'. These results are compared with the ones obtained with the BSS algorithm when employing the exponential prior pdf, as shown in Table 3. The BSS technique finds all sources detected by wavdetect. In the simulated data characterized by small background, the two algorithms find the same number of false positives. The BSS algorithm reveals 8 per cent more true detections than wavdetect. These sources are characterized by counts in the range (4 − 8). Hence, the BSS technique performs better than wavdetect in the low number counts regime. The performance of the two techniques on the datasets with intermediate and large background values is similar. The explanation for these results lies in the background estimate. The wavdetect estimates of the background values are similar to the results obtained with the BSS technique in the intermediate and large background datasets. However, the backgrounds provided by wavdetect present rings due to the Mexican Hat function employed as a filter on the image data. In the simulated data with small background, the wavdetect background model is on average larger than the one estimated with the BSS method. The plots in Fig. 16 support these conclusions (semi-log plots). The image in panel (a) shows that wavdetect fluxes are underestimated for ∼ 20 per cent of all detected sources. In addition, wavdetect sensitivity for source detection is limited to 16 counts per source within these simulated datasets. In Fig. 15, panel (b) presents the sensitivity achieved by the BSS method. The plot in panel (b) of Fig. 16 displays a relation between the sources detected by wavdetect (ordinate) and by BSS (abscissa), both matched with the simulated data. Most of the wavdetect underestimated sources come from the simulated data with small background. BSS presents only two underestimated sources, detected in the simulated data with large background. By chance, the triangle located at (−8, −2) indicates the detection of two sources. Both sources were simulated with 256 source counts and a circular extension of 4 pixels for one and 5 pixels for the other. The estimated source positions are also improved with BSS (Fig. 17, semi-log plots). The residuals provided by the BSS technique are a factor of 10 smaller than the ones from wavdetect, whose estimates have many outliers. Our estimates are normally distributed. Though the comparison between the two detection methods has not yet been carried out on real data, these results are encouraging. The BSS method detects at least as many sources as wavdetect.
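The bookkeeping behind Tables 3 and 4, i.e. separating detections matched with the simulated sources from false positives, amounts to a positional cross-match. A simple version is sketched below; the matching radius (a multiple of the simulated source extension with a floor of 1.5 pixels) is an assumption of this example, not a documented criterion of either pipeline.

    # Sketch: classify detections as true matches or false positives by positional matching.
    import numpy as np

    def cross_match(det_xy, sim_xy, sim_sigma, n_sigma=3.0, min_radius=1.5):
        """Return (n_true, n_false, matched_flags) for a list of detections."""
        det_xy = np.atleast_2d(det_xy)
        sim_xy = np.atleast_2d(sim_xy)
        used = np.zeros(len(sim_xy), dtype=bool)       # each simulated source matched once
        matched = np.zeros(len(det_xy), dtype=bool)
        for i, (x, y) in enumerate(det_xy):
            d = np.hypot(sim_xy[:, 0] - x, sim_xy[:, 1] - y)
            radius = np.maximum(n_sigma * np.asarray(sim_sigma), min_radius)
            ok = (d <= radius) & (~used)
            if ok.any():
                j = np.argmin(np.where(ok, d, np.inf))
                used[j] = True
                matched[i] = True
        return matched.sum(), (~matched).sum(), matched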
The simulations prove that the developed Bayesian technique ameliorates the detections in the low count regime. The BSS estimated positions and counts are improved. Finally, we expect that the BSS technique will refine wavdetect sensitivity on real data, because the BSS technique is designed for modelling highly and slowly varying backgrounds taking into account instrumental structures. APPLICATION TO OBSERVATIONAL DATA In this Section we employ RASS data to show the effectiveness of our method at largely varying background and exposure values, and where exposure non-uniformities are consistently taken into account. We present results obtained combining SPMs at different energy bands. We demonstrate that our algorithm can detect sources independently of their shape, sources at the field's edge and sources overlapping a diffuse emission. Last, the BSS technique provides evidence for celestial sources not previously catalogued by any detection technique in the X-ray regime. ROSAT PSPC in Survey Mode data We apply the BSS algorithm to a data sample coming from the Position Sensitive Proportional Counter (PSPC) on board the ROSAT satellite in survey mode (0.1 − 2.4 keV). The ROSAT satellite provided the only all-sky survey realized using an imaging X-ray telescope (Voges et al. 2000). RASS data supply a unique map of the sky in the X-ray regime. The map of the sky was divided into 1,378 fields, each of 6.4° × 6.4°, corresponding to 512 × 512 pixels. In addition, the satellite's exposure time can vary between about 0.4 and 40 ks and some parts of the sky are without observations due to the satellite's crossing of the auroral zones and of the South Atlantic Anomaly. Therefore, RASS data provide a wide range of possibilities for testing the BSS algorithm. We present the results obtained analysing three ROSAT fields, whose IDs are: RS930625n00, RS932209n00 and RS932518n00. RS930625n00 This ROSAT field is located at α = 17h 49m 5s, δ = +61° 52′ 30″. It is characterized by large variations of the satellite's exposure, ranging from 1.7 to 13.5 ks. The count rate image of RS930625n00 in the broad energy band (0.1 − 2.4 keV) is displayed in panel (a) of Fig. 18. The count rates range over (0.−0.11) photon counts s−1 pixel−1. The image is here scaled in the range (0.−0.005) photon counts s−1 pixel−1 in order to enhance the sources. Our results employing the BSS algorithm are shown in Fig. 18 with an SPM, the TPS map and the background map. In panel (b), the SPM is obtained combining statistically the soft (0.1 − 0.4 keV) and the hard (0.5 − 2.4 keV) energy bands. The SPM displayed here is obtained employing the exponential prior pdf and accounts for the width of the ROSAT instrumental PSF. The information of neighbouring pixels is combined with the box filter method with the cell shape of a circle. This image corresponds to a correlation length of 1.5 arcmin. Sources are identified in terms of probabilities. The image is in linear scale. In panel (c), the TPS map is modelled from the RS930625n00 field, broad energy band. The TPS models the background rate. Only 25 support points are used. The background rate ranges over (0.0005 − 0.001) photon counts s−1 pixel−1. The contours are superposed to enhance the features of the modelled background rate. The innermost and the outermost contours indicate levels of 0.0005 and 0.001 photon counts s−1 pixel−1, respectively. The corresponding background map estimated from the selected ROSAT field is displayed in panel (d). Its values are in the range (1.17 − 8.53) photon counts pixel−1.
The image is scaled in the range (1.17 − 6.68) photon counts pixel−1. The contour levels close to black and light gray delineate 6.0 and 2.0 photon counts pixel−1, respectively. The background map shows the prominent variation due to the heterogeneous satellite exposure time. The lower row of Fig. 18 shows the background rate (panel (e)) and the background amplitude (panel (f)) as obtained analysing the broad energy band with the Standard Analysis Software System (SASS). SASS is the detection method utilized for the realization of the RASS. SASS combines the sliding window technique with the maximum likelihood PSF fitting method for source detection and characterization, respectively. The background rate image is in the range (0.0005 − 0.00097) photon counts s−1 pixel−1. The innermost and the outermost contour levels delineate 0.0006 and 0.0009 photon counts s−1 pixel−1, respectively. [Figure 19 caption: POSS-II red plate with overlaid ROSAT contours corresponding to 3, 4, 4.5 and 5σ above the local background. In the image centre is located SDSS J172459.31+642424.0, a low-redshift QSO.] The background amplitude image estimated by SASS has values in the range (1.11 − 8.27) photon counts pixel−1. The image is scaled as the one obtained from the BSS technique. The contour levels indicate from 2.0 to 6.0 photon counts pixel−1. The BSS technique allows for more flexibility in the background model and the edges are more stable than the ones obtained with SASS. Hence, celestial objects located at the edges are not lost during source detection. In Fig. 19 we show an example of a QSO detection with the BSS algorithm analysing the field RS930625n00 in the hard (0.5 − 2.4 keV) band image. No counterparts have been found in the ROSAT Bright and Faint source catalogues (Voges et al. 1999, 2000). A counterpart is found with the Sloan Digital Sky Survey. This QSO is catalogued as SDSS J172459.31+642424.0. It is located at the image centre. The image covers a field of view of ∼ 5 arcmin on a side. The ROSAT image, as shown in Fig. 18, has a 45 arcsec pixel−1 resolution. The QSO optical position as given by the SDSS is 12 arcsec away from the BSS position, i.e. it is within the pixel resolution of the ROSAT data. The estimated source count rate with the BSS algorithm for this source is (0.0068 ± 0.0023) photon counts s−1. Its source detection probability is 0.999. This object is located close to the north-west corner of the field RS930625n00 (∼ 45 arcmin). In this region the SASS background intensity is 7 per cent lower than the BSS background estimate. RS932209n00 The detection capabilities of our Bayesian approach on images with exposure non-uniformities are presented in Figs. 20 and 21. The analysed ROSAT field is located at α = 3h 31m, δ = −28° 07′ 08″. In Fig. 20, panel (a), the soft band image (0.1 − 0.4 keV) is shown. The image contains photon counts pixel−1 in the range (0 − 9). The image is scaled in the range (0 − 2) photon counts pixel−1. In panel (b), an SPM as output from the BSS method is displayed. The inverse-Gamma function prior pdf is used for source detection and background estimation. This SPM is obtained with a correlation length of 270 arcsec. The box filter method with the cell shape of a circle is employed. The SPM shows the source detections. In Fig. 21, panel (a), the satellite's exposure is shown. The exposure values are in the range (0. − 0.393) ks. In panel (b) we show the background map estimated with the developed BSS technique.
Only 16 pivots equidistantly distributed along the field are employed. The background amplitude ranges from 0 to 0.318 photon counts pixel−1. The background is estimated with a null value where no satellite exposure information is provided. The estimated background map is similar to the exposure map because the contribution from the cosmic background is very small. No artefacts due to exposure non-uniformities occur in the background map or in the SPMs. In Fig. 22, panel (a), we display the photon count image of the RS932518n00 field in the hard energy band. The image presents values in the range (0 − 136) photon counts pixel−1. It is scaled in the range (0 − 15) photon counts pixel−1 in order to enhance the X-ray emission. The satellite's exposure map is displayed in Fig. 23, panel (a). The exposure map shows variations in the range (0.5 − 0.8) ks. The analysis of RS932518n00 in the hard energy band with the BSS algorithm is shown in Figs. 22, panels (b)-(d), and 23, panel (b). For background estimation and source detection the exponential prior pdf is employed. In Fig. 22, panels (b)-(d), three SPMs are displayed. The correlation lengths used for their realization are written on each image. The information of neighbouring pixels is combined with the Gaussian weighting method. These SPMs are in linear scale. The background map (Fig. 23, panel (b)) is estimated utilizing 36 pivots equidistantly spaced along the field. The estimated background values range from ∼ 0 to 4.1 photon counts pixel−1. The heterogeneous background is recovered. The BSS method combined with the multi-resolution analysis allows one to identify the detection of point-like sources on top of the diffuse emission and of extended X-ray features, such as the Pencil Nebula located at α = 9h 0.2m, δ = −45° 57′ and the SNR RX J0852.0−4622. The BSS algorithm detects about 50 objects in this ROSAT field. In Table 5 we present part of this catalogue. Columns RA, Dec, sctr, σx and σy give the estimated positions, source count rates and source shape parameters as output from the developed BSS technique. Each listed object is detected with a probability larger than 0.999. The column indicated with 'matched ID' corresponds to an ID given by catalogues created analysing ROSAT data and whose position matches the one obtained with the BSS algorithm. During this selection, we give priority to IDs provided by RASS catalogues when available. Catalogue IDs highlighted with an asterisk come from a point source catalogue generated from all ROSAT PSPC pointed observations. CONCLUSIONS We have presented a new statistical method for source detection, background estimation and source characterization in the Poisson regime. With this technique we have elaborated a very general and powerful Bayesian method applicable to images coming from any count detector. It is particularly suitable for the search of faint and extended sources in images coming from instruments of a new generation. We apply our technique to X-ray data. The developed technique does not lose information from the data and preserves the statistics. The BSS algorithm provides a comprehensive error analysis where uncertainties are properly included in the physical model. The background and its errors are estimated on the complete field with a two-dimensional spline and the knowledge of the experimental data and the exposure map.
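As an illustration of this background representation, the sketch below evaluates a thin-plate-spline surface from a set of pivot positions and amplitudes and scales it with the exposure map. scipy's RBFInterpolator with a thin-plate-spline kernel is used here as a stand-in for the TPS implementation of the method, and the pivot rates are assumed to come from a likelihood optimization such as the one sketched earlier.

    # Sketch: evaluate a thin-plate-spline background model from pivot amplitudes,
    # then scale it with the exposure map (stand-in for the TPS used by the method).
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    def tps_background(shape, pivots_xy, pivot_rates, exposure):
        """Background counts map from pivot rates (counts/s/pixel) and an exposure map."""
        tps = RBFInterpolator(np.asarray(pivots_xy, float), np.asarray(pivot_rates, float),
                              kernel="thin_plate_spline")
        yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
        grid = np.column_stack([xx.ravel(), yy.ravel()]).astype(float)
        rate = tps(grid).reshape(shape)
        return np.clip(rate, 0.0, None) * exposure            # counts/pixel

    # e.g. for a 512x512 field with 36 pivots on a regular 6x6 grid:
    # bkg = tps_background((512, 512), pivots_xy, pivot_rates, exposure_map)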
The background model is well-defined over the complete field, so that field edges and instrumental effects are correctly handled and sources located at the edges or in gaps are not penalized. The BSS method does not assume anything about the source size or shape for an object to be detected. Sources are separated from the background employing BPT combined with the mixture model technique. Point-like and extended sources are detected independently of their morphology and of the kind of background. Consequently, the BSS technique can be applied to large data volumes, e.g. surveys from X-ray missions. (Table 5 references: Voges et al.; asterisked IDs from White, Giommi & Angelini.) We developed the BSS technique utilizing two prior pdfs of the source signal. We proved that the prior pdfs of the source signal allow one to select what has to be described as source or as background. The SPMs are a robust feature of our technique. They allow for the analysis of faint and extended objects. SPMs obtained at different energy bands can be combined probabilistically, providing conclusive likelihoods for each detected source. The BSS photometry is robust. The errors on the estimated source parameters are normally distributed. We demonstrated that our technique is capable of coping with spatial exposure non-uniformities and large background variations, and of detecting sources embedded in diffuse emission. We expect to handle consistently the heterogeneities present in astronomical images with CCD patterns, superposed images and mosaics of images. Finally, the verification procedure with existing algorithms through simulations demonstrates that the BSS technique improves the detection of sources with extended low surface brightness. This is supported by the applications to real data. The BSS method has good potential for improving, first, the count rate of detected sources and, second, the sensitivity reached by other techniques also on real data.
Relation of peer and media influences to the development of purging behaviors among preadolescent and adolescent girls. OBJECTIVE To assess prospectively the relation of peer and media influences to the risk of development of purging behaviors. DESIGN Prospective cohort study. SETTING One-year follow-up of 6982 girls aged 9 to 14 years in 1996 who completed questionnaires in 1996 and 1997 and reported in 1996 that they did not use vomiting or laxatives to control weight. MAIN OUTCOME MEASURE Self-report of using vomiting or laxatives at least monthly to control weight. RESULTS During 1 year of follow-up, 74 girls began using vomiting or laxatives at least monthly to control weight. Tanner stage of pubic hair development was predictive of beginning to purge (odds ratio [OR] = 1.8; 95% confidence interval [CI], 1.3-2.4). Independent of age and Tanner stage of pubic hair development, importance of thinness to peers (OR = 2.3; 95% CI, 1.8-3.0) and trying to look like females on television, in movies, or in magazines (OR = 1.9; 95% CI, 1.6-2.3) were predictive of beginning to purge at least monthly. Regardless of the covariates included in the logistic regression model, the risk of beginning to purge increased approximately 30% to 40% per 1-category increase in frequency of trying to look like females on television, in movies, or in magazines. CONCLUSIONS Both peers and popular culture, independent of each other, exert influence on girls' weight control beliefs and behaviors. Therefore, to make eating disorder prevention programs more effective, efforts should be made to persuade the television, movie, and magazine industries to employ more models and actresses whose weight could be described as healthy, not underweight.
Implementation of a piezoelectrically actuated self-contained quadruped robot In this paper we present the development of a mesoscale self-contained quadruped mobile robot that employs two piezoelectric actuators for bounding gait locomotion, i.e., the two rear legs share the same movement and so do the two front legs. The actuator, named LIPCA (LIghtweight Piezoceramic Composite curved Actuator), is a piezocomposite actuator that uses a PZT layer sandwiched between carbon/epoxy and glass/epoxy composite layers to amplify the displacement. A biomimetic concept is applied to the design of the robot in a simplified way, such that each leg of the robot has only one degree of freedom. Considering that LIPCA requires a high input voltage and possesses capacitive characteristics, a small power supply circuit using PICO chips is designed for the implementation of the self-contained mobile robot. The prototype, with a weight of 125 g and a length of 120 mm, can locomote with the bounding gait. Experiments showed that the robot can locomote at about 50 mm/s with the circuit on board and that the operation time is about 5 minutes, which can be considered meaningful progress toward the goal of building an autonomous legged robot actuated by piezoelectric actuators.
BALI IS MAGICAL! - beautiful, quiet, relaxed, joyful and inspiring…
A Quick Guide…
That is, if you get away from the tourist crowds in the South and begin to explore the "real" Bali and her wonders…. This smash hit #1 bestseller that beat out "Lonely Planet" and "Eat, Pray, Love" and is continuously listed in the "Most wished for" lists for Bali books will take you on a tour around the island to explore the quiet, magical parts of Bali, far away from the tourist crowds. It is a quick guide, not an in-depth 500-page travel guide book a la Lonely Planet, Fodors and Frommers. It does not contain extensive lists of tour companies or accommodations for each area, though a few are mentioned throughout the book based on the personal experiences of the author. Instead, this Bali guide from the series by bestselling travel writer and Top 100 Business Author Gundi Gabrielle is a charming, fun 2-hour read that will show you what's where and how to best plan your trip. It will give you a good overview of the different regions, with things to do along the way, practical logistical information and tips on where to get away from it all....
Exploring the “Real” Bali
Bali Indonesia will Enchant you
Are you ready?
around … and the diving paradise of …. The sleepy fishing villages along the … with lush palm tree vegetation. You become relaxed just looking at it.... … with many little gems to visit and a gorgeous countryside, incl. a pearl farm that employs only women, an award-winning coral restoration project recognized by the United Nations, a winery, turtle and bee conservation projects and so much more.... … inviting to hiking, kayaking, and the remote diving/snorkeling island of …. A bird watcher's paradise, lush with wildlife and unusual fauna. …, a surfer's paradise, with quiet, chill villages like … - the way Bali used to be before tourism overran the South. And, of course, the spectacular volcanic mountain ranges in the … with massive lakes, luscious rain forest and gorgeous rice terraces - absolutely breathtaking! Wherever you go, Bali will enchant and inspire you with stunning vistas, soaring heights, and a vibe just quiet and relaxed - to get away from it all.... Sadly most visitors never get to see that side of Bali and instead spend all their time on overcrowded beaches, parties and shopping malls in the South - Kuta, Seminyak or Sanur. If you want to see the "real" Bali and all her magnificent wonders, this book will be for you. … covers …, so you know what to expect on your first visit to Bali. Communication, visa, currency/banking, accommodation, transportation, wifi/mobile usage and much more will be covered in Chapter 1. Next follows …, the cultural centre of Bali, and still charming and lovely, despite the heavy tourism influx. … - or where you can find an organic restaurant in the midst of rice fields with beautiful views and healthy, delicious food? The book will tell you. From Ubud we travel around the East and North coast and inland into the mountains - even into Java for the great volcanoes. Then scroll back to the top and get your copy now!
/*
* To change this license header, choose License Headers in Project Properties.
* To change this template file, choose Tools | Templates
* and open the template in the editor.
*/
package com.os.operate;
import com.os.disk.DiskCodeInterpreter;
/**
*
* @author Administrator
*/
public class Filter {
public Filter() {
}
public String initeFilte(String absoluteRoute) {
String tempAbsoulteRoute = "";
int index = 0;
while (index < absoluteRoute.length()) { // normalize path separators
if (absoluteRoute.charAt(index) == '\\') {
tempAbsoulteRoute += '/';
index++;
} else if (index < absoluteRoute.length() - 1
&& absoluteRoute.charAt(index) == '/'
&& absoluteRoute.charAt(index + 1) == '/') {
tempAbsoulteRoute += '/';
index += 2;
} else {
tempAbsoulteRoute += absoluteRoute.charAt(index);
index++;
}
}
absoluteRoute = tempAbsoulteRoute; // keep the normalized path (char-by-char copy is unnecessary)
if (absoluteRoute.lastIndexOf("/") == absoluteRoute.length() - 1) {
return absoluteRoute.substring(0, absoluteRoute.length() - 1);
} else {
return absoluteRoute;
}
}
public String filteDirectoryName(String absoluteRoute) {
if (absoluteRoute.lastIndexOf("/") == -1) {
if (absoluteRoute.length() == 0) {
return "";
} else {
switch (absoluteRoute.toUpperCase()) {
case "C:":
return "";
case "D:":
return "";
case "E:":
return "";
case "F:":
return "";
case "G:":
return "";
default:
return "The root directory you entered is invalid!";
}
}
}
String[] s = absoluteRoute.split("/"); // split the path into its components
switch (s[0].toUpperCase()) {
case "C:":
break;
case "D:":
break;
case "E:":
break;
case "F:":
break;
case "G:":
break;
default:
return "The root directory you entered is invalid!";
}
for (int i = 1; i < s.length; i++) {
String subDirectory = s[i];
if (subDirectory.isEmpty()) {
return "The directory name at level " + (i + 1) + " is empty!";
}
if (subDirectory.length() > 3) {
return "The directory name at level " + (i + 1) + " is longer than 3 characters!";
}
if (!subDirectory.matches("[^$^\\.]+")) {
return "The directory name at level " + (i + 1) + " contains the illegal character \"$\" or \".\"";
}
}
return "";
}
public String filteFileName(String absoluteRoute) {
DiskCodeInterpreter diskCodeInterpreter = new DiskCodeInterpreter();
if (absoluteRoute.lastIndexOf("/") == -1) {
return "The absolute path of the file you entered is invalid!";
}
String[] s = absoluteRoute.split("/"); // split the path into its components
switch (s[0].toUpperCase()) {
case "C:":
break;
case "D:":
break;
case "E:":
break;
case "F:":
break;
case "G:":
break;
default:
return "The root directory you entered is invalid!";
}
for (int i = 1; i < s.length - 1; i++) { // check each parent directory of the new file
String subDirectory = s[i];
if (subDirectory.isEmpty()) {
return "The directory name at level " + (i + 1) + " is empty!";
}
if (subDirectory.length() > 3) {
return "The directory name at level " + (i + 1) + " is longer than 3 characters!";
}
if (!subDirectory.matches("[^$^\\.]+")) {
return "The directory name at level " + (i + 1) + " contains the illegal character \"$\" or \".\"";
}
}
if (s[s.length - 1].lastIndexOf(".") == -1) {
return "The new file is missing the required file type!";
}
String fileName = s[s.length - 1].substring(0, s[s.length - 1].lastIndexOf("."));
String fileType = s[s.length - 1].substring(s[s.length - 1].lastIndexOf(".") + 1);
if (!fileName.matches("[^$^\\.]+")) {
return "The file name contains the illegal character \"$\" or \".\"!";
}
if (fileName.length() > 3) {
return "The file name is longer than 3 characters!";
}
if (diskCodeInterpreter.turnTheFileTypeToCode(fileType) == 0) {
return "The new file is missing the required file type!";
}
if (diskCodeInterpreter.turnTheFileTypeToCode(fileType) == -1) {
return "The file type is invalid!";
}
return "";
}
}
|
The black hole binary V404 Cygni: an obscured AGN analogue Typical black hole binaries in outburst show spectral states and transitions, characterized by a clear connection between the inflow onto the black hole and outflow from its vicinity. The transient stellar mass black hole binary V404 Cyg apparently does not fit in this picture. Its outbursts are characterized by intense flares and intermittent low-flux states, with a dynamical range of several orders of magnitude on timescales of hours. During the 2015 June-July X-ray outburst a joint Swift and INTEGRAL observing campaign captured V404 Cyg in one of these low-flux states. The simultaneous Swift/XRT and INTEGRAL/JEM-X/ISGRI spectrum is reminiscent of that of obscured/absorbed AGN. It can be modeled as a Comptonization spectrum, heavily absorbed by a partial covering, high-column density material ($N_\textrm{H} \approx 1.4\times10^{24}\,\textrm{cm}^{-2}$), and a dominant reflection component, including a narrow Iron-K$\alpha$ line. Such a spectral distribution can be produced by a geometrically thick accretion flow able to launch a clumpy mass outflow, likely responsible for both the high intrinsic absorption and the intense reflection emission observed. Similarly to what happens in certain obscured AGN, the low-flux states might not be solely related to a decrease in the intrinsic luminosity, but could instead be caused by an almost complete obscuration of the inner accretion flow. INTRODUCTION Black hole (BH) X-ray binaries (BHBs) are typically transient systems that alternate between long periods of (X-ray) quiescence and relatively short outbursts. During the outbursts their luminosity increases by several orders of magnitude (from ∼10^32-34 erg/s in quiescence to ∼10^38-39 erg/s or more in outburst), due to an increase in the mass transfer rate to the BH. When active, most BHBs show a "hysteresis" behaviour that becomes apparent as cyclic loops in a so-called Hardness-Intensity diagram (HID). These cyclic patterns have a clear and repeatable association with mechanical feedback in the form of different kinds of outflows (relativistic jets and winds). In a typical BHB different spectral-timing states can be identified with different areas of the q-shaped track visible in the HID. In the hard state the X-ray energy spectrum is dominated by strong hard emission, peaking between ∼50-150 keV (e.g., Sunyaev & Truemper 1979). The likely radiative mechanism involved is Compton up-scattering of soft seed photons either produced in a cool geometrically thin accretion disk truncated at large radii, or by synchrotron-self-Compton emission from hot electrons located close to the central black hole (e.g., Poutanen & Veledina 2014). In the soft state, instead, the spectrum is dominated by thermal emission from a geometrically thin accretion disk that is thought to extend down or close to the innermost stable circular orbit. It is in this state that the peak X-ray luminosity is normally reached. In between these two states are the so-called intermediate states, where the energy spectra typically show both the hard Comptonized component and the soft thermal emission from the accretion disk. In these states the most dramatic changes in the emission -reflecting changes in the accretion flow -can be revealed through the study of the fast-time variability (e.g., Belloni & Motta 2016). While most BHBs that emit below the Eddington limit fit into this picture, systems accreting at the most extreme rates do not.
A typical example is the BHB GRS 1915+105, which has been accreting close to Eddington during most of an on-going 23-year-long outburst. Another example is the enigmatic high-mass X-ray binary V4641 Sgr, which in 1999 showed a giant outburst, associated with a super-Eddington accretion phase, followed by a lower accretion rate phase during which its X-ray spectrum closely resembled the spectrum of the well-known BHB Cyg X-1 in the hard state. While GRS 1915+105 displays relatively soft spectra when reaching extreme luminosities, V4641 Sgr did not, showing instead significant reflection and heavy and variable absorption, due to an extended optically thick envelope/outflow ejected by the source itself. When the accretion rate approaches or exceeds the Eddington accretion rate, the radiative cooling time scale needed to radiate all the dissipated energy locally (a key requirement for thin disks) becomes longer than the accretion time scale. Therefore, radiation is trapped and advected inward with the accretion flow, and consequently both the radiative efficiency and the observed luminosity decrease. This configuration is known as a slim disk (Begelman 1979). The slim disk model has been successfully applied to stellar mass black holes, such as the obscured BHB candidate SS 433 (Fabrika 2004), to ultraluminous X-ray sources, and to supermassive BHs (narrow-line Seyfert galaxies). High-accretion-rate induced slim disks have recently been associated with high obscuration (high absorption) in a sample of weak emission-line AGN. In those sources, which are likely seen close to edge on, a geometrically thick accretion flow found close to the central supermassive BH is thought to screen the emission from the central part of the system, dramatically reducing the X-ray luminosity. Flared disks are also the most commonly used explanation for obscuration in X-ray binaries seen at high inclinations (see White & Holt 1982 and, in particular, Fabrika 2004 for SS 433; similar arguments have been made for V4641 Sgr and Swift J1357.2-0933). In both the AGN and BH X-ray binary populations, a large fraction of faint (obscured), high-inclination sources seems to be missed by current X-ray surveys. Even considering the entire population of accreting sources as a whole -encompassing stellar mass objects (compact and not), Ultra-luminous X-ray sources (ULXs; Feng & Soria 2011) and active galactic nuclei (AGN) -only a small fraction of the known systems seems to be accreting close to Eddington rates, one of them being V404 Cyg. V404 Cyg is an intermediate- to high-inclination, intrinsically luminous confirmed BHB, likely often super-Eddington during outbursts: studying this system opens the opportunity to probe a regime where high accretion rates, heavy and non-homogeneous absorption and reflection are interlaced and all play a key role in the emission from the source. Hence, understanding the physics of V404 Cyg's emission could shed light on the accretion-related processes occurring not only in stellar mass BHs, but also in ULX sources and, most importantly, in AGN. V404 CYG, A.K.A. GS 2023+338 V404 Cyg was first identified as an optical nova in 1938 and later associated with the X-ray transient GS 2023+338, discovered by Ginga at the beginning of its X-ray outburst in 1989. The 1989 outburst displayed extreme variability, reaching flux levels several times above that of the Crab. During this outburst, V404 Cyg became temporarily one of the brightest sources ever observed in X-rays. Casares et al.
determined the orbital period of the system (∼6.5 days) and Miller-Jones et al. the distance to the source through radio parallax (d = 2.39 ± 0.14 kpc). Casares et al. also obtained the first determination of the system's mass function (f(M) = 6.26 ± 0.31 M⊙), confirming the black hole nature of the compact object in V404 Cyg and allowing it to be classified as a low-mass X-ray binary. Shahbaz et al. later determined a BH mass of about 12 M⊙. More recently, near-infrared spectroscopy allowed a more precise determination of the compact object mass, MBH = 9.0 +0.2 −0.6 M⊙. On 2015 June 15 18:32 UT (MJD 57188.772), the Swift/BAT triggered on a bright hard X-ray flare from a source that was soon recognized to be the black hole low mass X-ray binary V404 Cyg, back in outburst after 26 years of quiescence. V404 Cyg reached the outburst peak on June 26 and then began a rapid fading towards X-ray quiescence, which was reached between 2015 August 5 and August 21. All along this outburst the source displayed highly variable multi-wavelength activity, which was monitored by the astronomical community through one of the most extensive observing campaigns ever performed on an X-ray binary outburst. Already during the 1989 outburst (e.g. Oosterbroek et al.), V404 Cyg seemed to break the typical BHB pattern. Since we now know the distance of V404 Cyg with high precision, we can say that in 1989 it showed luminosities exceeding the Eddington limit, but without showing a canonical disk-dominated state (although a short-lived disk-dominated state has been reported). Furthermore, the outburst was characterized by extreme variability, partly due to mere accretion events (somewhat similar to those seen in GRS 1915+105, see Belloni & Hasinger 1990), but also ascribed to a heavy and strongly variable photo-electric local absorption (Tanaka & Lewin 1995). DATA REDUCTION AND ANALYSIS After the initial Swift/BAT trigger (on 2015 June 15 18:32 UT, MJD 57188.772), INTEGRAL observed V404 Cyg almost continuously during its entire outburst, providing the best hard X-ray coverage ever obtained for this source (Kuulkers 2015a,b). Swift provided several short observations (often more than one per day) from the start of the outburst all the way down to quiescence. Analysis of the INTEGRAL/ISGRI spectra -where absorption has little effect -showed that the source was sometimes seen in a plateau state, where the spectra could be described solely by a pure reflection spectrum from neutral material. One Swift pointing (OBSID: 00031403048) happened to take place exactly during one of these states and, differently from what has been seen during the rest of the outburst, both the flux and the spectral shape of V404 Cyg were remarkably stable, allowing us to obtain a high-quality average broad-band X-ray spectrum of the source. INTEGRAL INTEGRAL data were processed using the Off-line Scientific Analysis software (OSA), v10.2, using the latest calibration files at the time of the analysis. We selected only those INTEGRAL data which were strictly simultaneous with the two Swift snapshots described below, by using the appropriate good time interval (GTI) files. IBIS/ISGRI and JEM-X data were processed from the COR step to the SPE step, using standard reduction procedures. The IBIS/ISGRI spectra were extracted using the OSA default energy binning, which samples the energy range 20-500 keV using 13 channels with logarithmically variable energy bins.
We fit the ISGRI spectrum between 20 and 250 keV (above 250 keV the emission is background dominated). The JEM-X spectra were extracted using 23 user-defined energy bins, adjusted to allow a better sampling of the energy region around the Iron-K line. We modelled the final JEM-X spectrum between 5 and 25 keV, using 16 energy channels, given the uncertainties in calibration outside this band. The IBIS/ISGRI and JEM-X spectra extracted to coincide with the two Swift snapshots were subsequently combined into a single spectrum per instrument. The net (dead-time corrected) exposure times for the combined spectra were 678 s for IBIS/ISGRI and 866 s for JEM-X. To account for calibration uncertainties, a 5 per cent systematic error was added to both spectra. Swift Observation 00031403048 was taken in Window Timing (WT) mode on 2015-06-21 at 03:55:18 UTC and had a total exposure of 994 s, split into 2 snapshots. We extracted events in a circular region centred at the source position with a fixed outer radius (30 pixels, the source region from now on). To produce the energy spectrum we considered only grade 0 events and ignored data below 0.6 keV in order to minimize the effects of high absorption and possible effects of residual pile-up. However, the average count rate of this observation in the source region was just above 10 counts per second in the 0.6-10 keV band, therefore pile-up is unlikely (see http://www.swift.ac.uk/analysis/index.php). Since the spectra extracted from the first and second snapshots of observation 00031403048 did not show significant differences, we produced one single spectrum from the entire observation to improve the signal to noise ratio. We fitted the combined XRT spectrum between 0.6 and 10 keV. Treatment of the dust scattering X-ray halo As reported by Vasilopoulos & Petropoulou, Heinz et al. and Beardmore et al., in some of the Swift images an X-ray halo caused by interstellar dust scattering is seen around the source. This halo emission may strongly contaminate the background region used for spectral extraction. For this reason, we used in our fits an alternative background file extracted from a routine WT mode calibration observation of RX J1856.4-3754 in March 2015 (exposure 17.8 ks). The background was extracted from an annular region centred at (RA, Dec) = (284.17, -37.91) degrees with inner and outer radii of 80 and 120 pixels. The X-ray halo may also contaminate the spectrum of V404 Cyg, producing an excess in soft X-rays. To evaluate this possible contamination, we also extracted the spectrum of the dust-scattering halo using an annular region centred at the source position with inner and outer radii of 30 and 60 pixels away from the source position, respectively (the halo region from now on). We fitted the halo spectrum over the energy range 0.8-4 keV. This energy range was selected to avoid the possible distortion in the spectrum at energies below ∼1 keV and to avoid the energy band where the X-ray background might start to dominate over the halo spectrum. Both the source and halo spectral channels were grouped in order to have a minimum number of 20 counts per bin. A 3 per cent systematic error was added to both spectra. The X-ray spectrum of the dust scattering halo can be well described by a soft power law (photon index ∼ 3), affected by interstellar absorption (the same that affects the source).
The halo is variable on a time-scale significantly longer than the characteristic source variability time-scales, and can be considered constant during the XRT observation analysed here. In order to properly disentangle the halo emission and the source emission, we simultaneously fit the source spectrum and the dust scattering halo spectrum (see Fig. 1), using an absorbed power law to describe the X-ray halo emission (absorption tied to the interstellar value). Since the contribution of the halo emission to the source region can in principle be different from the contribution of the halo emission to the background, we left the normalization of the power law component describing the halo free to vary, while the photon index was set by the halo spectrum alone. The models described below will therefore have the form CONS*TBNEW1*[SOURCE MODEL] + TBNEW2*POWERLAW in XSPEC, where CONS is a calibration constant (fixed at 1 for the Swift/XRT spectra and left free for the ISGRI and JEM-X spectra) and where the second term of the expression is aimed at fitting only the dust scattering halo emission. The absorption applied to the halo (TBNEW2) is tied to the absorption applied to the source spectrum (TBNEW1), which is fixed at 8.3 × 10²¹ cm⁻² (Valencic & Smith 2015). Different SOURCE MODEL options were tested on our dataset. These are described in detail in Secs. 4.1 to 4.4 (Model 1) and in Sec. 5 (Model 2). All the free parameters derived from these fits are summarized in Tab. 1 for Model 1 and in Tab. 3 for Model 2. We used χ² statistics in the model selection and for parameter error determination. In the following, we quote statistical errors at the 1σ confidence level (∆χ² = 1 for one parameter of interest).

Case 1: Partially absorbed Comptonization

We initially fitted our data using the Comptonization model COMPPS in XSPEC (see Poutanen & Svensson 1996) modified by the interstellar absorption (TBNEW in XSPEC, ) with a fixed column density of NH = 8.3 × 10²¹ cm⁻². We left the Thomson optical depth τ, the electron temperature kTe and the normalization in COMPPS free to vary, while we fixed the seed photon temperature to kTbb = −0.1 keV (i.e. the seed photons are produced by a multi-color disk-blackbody with an inner disk temperature of 0.1 keV) and the inclination to 67° (see ). We also fixed the ionization parameter to zero, since it only affects the reflection component, which is switched off in this case. We left all the remaining COMPPS parameters fixed to their default values. Large residuals indicated that a more complex spectral model was required. Therefore, we added a neutral absorber partially covering the source (TBNEW PCF). The addition of a narrow Gaussian line to describe the Iron-K line (see ) was required to account for some residuals around 6.4 keV. The line is unresolved in both the Swift/XRT and INTEGRAL/JEM-X spectra, therefore we fixed its width to 0.2 keV. This model did not provide satisfactory fits, having χ² = 620.12 with 461 d.o.f. and a null hypothesis probability of 9.82 × 10⁻⁸. Large residuals, especially above 10 keV, indicate that the spectrum likely shows a significant reflection component.
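The joint source-plus-halo fitting strategy introduced at the beginning of this section (halo photon index tied between the two spectra, normalisations left free) can be illustrated outside XSPEC with a toy simultaneous fit. The sketch below uses synthetic, unabsorbed power-law data and scipy; the numbers and the absence of absorption and responses are simplifications, so it only demonstrates the parameter-tying idea, not the actual spectral model used in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
E = np.geomspace(0.8, 4.0, 40)                       # keV grid

def powerlaw(E, norm, gamma):
    return norm * E**(-gamma)

# synthetic "halo region" spectrum and "halo contamination in the source
# region", sharing the same photon index but with different normalisations
halo = powerlaw(E, 1.0, 3.0) * rng.normal(1, 0.05, E.size)
cont = powerlaw(E, 0.4, 3.0) * rng.normal(1, 0.05, E.size)

def joint_model(E_twice, norm_halo, norm_cont, gamma):
    # first half of the stacked array = halo region, second half = source region
    n = E_twice.size // 2
    return np.concatenate([powerlaw(E_twice[:n], norm_halo, gamma),
                           powerlaw(E_twice[n:], norm_cont, gamma)])

popt, _ = curve_fit(joint_model, np.concatenate([E, E]),
                    np.concatenate([halo, cont]), p0=[1, 1, 2])
print("tied photon index:", popt[2])
```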
Case 2: Partially absorbed pure reflection

Since the emission from the dust scattering halo was at times significant in the sky region around the source, sometimes outshining the source itself (see Vasilopoulos & Petropoulou 2015), we also fitted our broadband spectrum with a pure-reflection spectrum (COMPPS with the reflection scaling fraction parameter, defined as Refl = Ω/2π, frozen at −1, combined with a Gaussian line), superimposed on a steep power law used to model the X-ray halo spectrum, as described in Sec. 3.2. Again, we fitted simultaneously the dust scattering halo and the source spectrum to constrain the slope of the power law associated with the halo, while leaving its normalization free to vary. We did not find any evidence of a soft excess that required the addition of a soft component (e.g. a disk-blackbody) to our model. The best fit parameters are reported in Tab. 1 (Case 2). It is worth noticing that this best fit corresponds to an unlikely halo flux in the source region equal to 3 × 10⁻¹⁰ erg cm⁻² s⁻¹, i.e. more than a factor of 2 higher than the flux in the halo region (1.4 × 10⁻¹⁰ erg cm⁻² s⁻¹). Furthermore, when fitting our spectra with this model, the power-law photon index (associated with the dust halo) is significantly smaller than the photon index derived in Case 1, possibly as a consequence of the presence of negative residuals at very low energies (below 1 keV) in the source spectrum. This causes the development of more residuals in the dust halo spectrum around 4 keV, which suggests that the photon index is likely forced to assume lower values by the source spectrum. The photon index derived using this model is probably too low to properly describe the halo spectrum, which further points to an inaccurate spectral modelling.

Table 1. Best fitting parameters for the four cases of Model 1 described in the text. Case 1: partially absorbed Comptonization; Case 2: partially absorbed pure reflection spectrum; Case 3: partially absorbed Comptonization spectrum with reflection; Case 4: partially absorbed Comptonization with reflection and variable local absorber. Parameters are: intrinsic column density associated with the direct (Compton) spectrum and relative covering fraction (NH1 and pcfNH1), intrinsic column density associated with the reflection spectrum and relative covering fraction (NH2 and pcfNH2), electron temperature, Compton parameter (see text), relative reflection factor, ionization parameter (see text), COMPPS normalization, Gaussian line energy, Gaussian line equivalent width, power law photon index (for the dust scattering halo), power law normalization (for the dust scattering halo), JEM-X calibration constant, ISGRI calibration constant. All the quoted errors are at the 1σ level. Notice that all models include an additional absorber to account for the interstellar absorption (NH = 8.3 × 10²¹ cm⁻²). The Fe Kα line flux has been expressed in terms of erg cm⁻² s⁻¹ instead of equivalent width in order to allow a direct comparison with the line fluxes from Model 2 (see Sec. 5), for which measuring the equivalent width is problematic. (Only the Gaussian line energy row of the table is preserved here: 6.40 ± 0.05, 6.40 ± 0.01, 6.40 ± 0.05 and 6.40 ± 0.01 keV for Cases 1-4.)

Case 3: Partially absorbed Comptonization and significant reflection

Finally, we added a reflected component to the model described in Sec. 4.1, by allowing the reflection scaling factor parameter in COMPPS to vary freely. As in the previous models, we fitted simultaneously the source and the halo spectra.
Also with this model there is no signature of a soft excess requiring the use of an additional soft component, and if we let the seed photon temperature vary, it remains consistent with 0.1 keV. We also note that the normalization of the COMPPS component corresponds to an apparent inner disk radius of about 10 Rg. According to the best fit, the reflected component of the spectrum provides ∼90% of the total unabsorbed source emission. The best fit gives χ² = 474.23, 459 d.o.f. and a null hypothesis probability of 0.302. The best fit parameters are reported in Tab. 1 (Case 3). We list in Table 2 the fluxes measured in the 0.6-200 keV energy range. From our best fit we find that the halo contributes 0.7% of the total flux (source + halo) in the source region. This model is strongly statistically favoured with respect to the model described in Case 1 (see Sec. 4.1): an F-test returns an F-statistic value of 70.60 and a null hypothesis probability of 1.85 × 10⁻²⁷. On the other hand, while the model used in Case 2 (see Sec. 4.2) gives an acceptable fit to the data, with χ² = 499.77, 459 d.o.f. and a null hypothesis probability of 0.0973, it is still statistically disfavoured with respect to the model described in this section. An F-test returns an F-statistic value of 24.71 and a probability of 9.39 × 10⁻⁷ in favour of the model including both a direct Comptonization spectrum and a reflected component.

Case 4: Partially absorbed Comptonization and significant reflection, with variable local absorber

Since it is entirely possible that the illuminating spectrum and the reflected emission are produced in different regions of the accreting system, it is also possible that the two components are affected in different ways by the heavy absorption that partially covers the source, which has so far been treated as an average quantity. Therefore, we attempted to separate the average local column density into two components, applied to the illuminating spectrum (NH1) and to the reflected one (NH2), respectively. In order to avoid spectral degeneracy, we fixed the partial covering fraction associated with the reflected spectrum, pcfNH2, to 1 (i.e. uniform covering) while leaving the partial covering fraction associated with the direct spectrum, pcfNH1, free to vary. Since the ionization parameter pegged at zero during the spectral fitting, we fixed it to zero. The best fit gives χ² = 482.98, 459 d.o.f. and a null hypothesis probability of 0.219. The best fit parameters are reported in Tab. 1 (Case 4). While this model gives an acceptable fit to the data, it is statistically disfavoured with respect to the model described in Sec. 4.3, which remains our best fit so far. We note, however, that the constants taking into account the instrumental cross-calibrations are not as well constrained as in Cases 2 and 3, which could indicate the presence of mild spectral degeneracy. An obvious extension to this model would be to leave the partial covering fraction parameter of the absorber associated with the reflected spectrum (pcfNH2) free to vary. However, this results in pcfNH2 being pegged at 1, while the remaining parameters are consistent with those reported in Tab. 1, Case 4.

Iron-K line

The small FWHM measured for the Iron-K line in our spectrum indicates that the line, as previously suggested by, e.g., Oosterbroek et al. and King et al., is likely produced far away from the central BH.
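The F-tests quoted in this section compare nested models through the χ² improvement per additional degree of freedom. The short function below reproduces that arithmetic for the Case 1 versus Case 3 comparison, using the statistics quoted in the text; it mirrors the standard F-test recipe, and the exact implementation in XSPEC may differ in detail.

```python
from scipy.stats import f

def ftest(chi2_simple, dof_simple, chi2_complex, dof_complex):
    """F-test for nested spectral models: is the chi^2 improvement
    worth the extra free parameters?"""
    dchi2 = chi2_simple - chi2_complex
    ddof = dof_simple - dof_complex
    F = (dchi2 / ddof) / (chi2_complex / dof_complex)
    p = f.sf(F, ddof, dof_complex)          # null hypothesis probability
    return F, p

# Case 1 (no reflection) vs Case 3 (free reflection), values quoted in the text
print(ftest(620.12, 461, 474.23, 459))      # ~ (70.6, ~2e-27)
```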
While a Gaussian line still provides the best fit to our data around 6.4 keV, both the DISKLINE and LAOR models provide an inner disk radius (where the line is produced) of Rin ∼ 300 Rg (the value is unconstrained with DISKLINE). RELXILL returns an inner radius of the emitting region of Rin ∼ 200 Rg, leaving structured residuals in the iron line region. These fits indicate that the iron line is not produced near the BH. These residuals are probably related to an edge that is likely due to absorption rather than to reflection (but see ).

SPECTRAL MODELLING - MODEL 2: REPROCESSED COMPTON SPECTRUM: THE MYTORUS MODEL

From the results obtained in the previous sections, it appears that: (i) the reprocessed/reflected emission dominates the X-ray broad band spectrum of V404 Cyg; (ii) heavy absorption affects the spectral shape of the source; (iii) partial covering of the central X-ray source is necessary to obtain good fits to the data. We note that both the absorption model TBNEW and the reflection model we adopted (as basically all other reflection and absorption models available in XSPEC) do not take into account the scattering associated with both absorption and reflection, which becomes relevant already for column densities above a few × 10²³ atoms cm⁻² and is thus significant for columns like those measured here, ∼10²⁴ atoms cm⁻² (e.g. Rybicki & Lightman 1979). The effects of scattering in Compton-thick material affect the entire energy spectrum due to the Klein-Nishina effect and should be carefully modelled in order to obtain reliable luminosities.

Model set-up

The observational facts listed above suggest that the properties of V404 Cyg closely resemble those of obscured AGN. Hence, we fitted our data with the MYTORUS model (Murphy & Yaqoob 2009; Yaqoob 2012), a spectral model describing a toroidal reprocessor that is valid from the Compton-thin to the Compton-thick regime. Even though MYTORUS was designed specifically for modelling AGN X-ray spectra, its use is not restricted to any system size scale and it can therefore be applied to any axis-symmetric distribution of matter centrally illuminated by X-rays. An extensive description of the basic properties of MYTORUS and its components is given in Appendix A. The model expression that we used in this work is the following in XSPEC:

MODEL = CONSTANT1 * TBNEW1 * (CONSTANT2 * COMPTT1 + COMPTT2 * MYTORUS EZERO + CONSTANT3 * MYTORUS SCATTERED1 + CONSTANT4 * MYTORUS SCATTERED2 + (GSMOOTH * (CONSTANT5 * MYTL1 + CONSTANT6 * MYTL2)) + ZGAUSS) + TBNEW2 * POWERLAW

This expression depends on a relatively small number of free parameters (listed in Tab. 3): the interstellar column density (TBNEW1 and TBNEW2), the optical depth τ, the constant factors weighting the contribution of the different components of the model (namely CONSTANT2, CONSTANT3 and CONSTANT4), the average column density NH,Z and the line-of-sight column density NH,S, the centroid energy, FWHM and normalization parameters related to the ZGAUSS component, and the power law parameters. CONSTANT1 accounts for the relative normalizations of the spectra from the different instruments, and it is equal to 1 for Swift/XRT and reported in Tab. 3 for INTEGRAL/JEM-X and INTEGRAL/ISGRI. In our fits, the optical depth τ is tied across all the components to the same (variable) value. The different components of the source spectrum are allowed to vary thanks to the constant factor preceding each of them (CONSTANT2, CONSTANT3 and CONSTANT4).
The constants associated with the fluorescent line spectra (CONSTANT5 and CONSTANT6) are tied to the corresponding constants of the scattered spectra (CONSTANT3 and CONSTANT4), as the line flux must be consistent with the scattered flux. The line-of-sight column density NH,S is tied across all the scattered components (continuum and line spectra), and can be either tied to or independent of the average column density NH,Z (related to the transmitted spectrum, i.e. the zeroth-order continuum, see App. A). MYTORUS must be used with the same abundances and cross-sections that were used to produce the MYTORUS model tables. Therefore, we used the cross-sections by Verner et al. and the abundances by Anders & Grevesse instead of those by Wilms et al. that we used in Sec. 4. This implies a change in the ISM column density that we need to use with MYTORUS. Since the fit seems to be stable against small fluctuations of the ISM column density, we left this parameter free to vary, making sure that it did not drift to values inconsistent with those reported by Kalberla et al.

Fitting strategy

Following Yaqoob, we initially fitted our data set only above 10 keV. This allows us to establish whether the high-energy emission is dominated by the scattered emission or by the transmitted spectrum through the reprocessor (i.e., the zeroth-order continuum), which can never be zero. The best fit thus obtained shows that the transmitted spectrum dominates the emission above 10 keV. Since the zeroth-order continuum depends on the electron temperature Te of the illuminating continuum, on the optical depth τ and on the average column density NH,Z, this best fit provides initial constraints on these parameters: τ = 0.9 ± 0.2, Te = 27 +3 −2 keV and NH,Z = (2.4 +3 −2) × 10²⁴ cm⁻². We kept the seed photon temperature fixed at 0.1 keV during the fit; however, it stays consistent with 0.1 keV (while drifting to even lower values) even when left free to vary. Since the electron temperature must be fixed when fitting MYTORUS to a particular data set, we fixed Te to 28 keV for all the following steps. This implies using the correct Monte-Carlo table, produced for a Compton illuminating spectrum with an electron temperature of 28 keV (see Yaqoob 2012). Being a rather complex model, MYTORUS can cause a spectral fitting degeneracy: two completely different model configurations (e.g. either scattered emission or transmitted continuum dominating the spectrum) could describe equally well the same energy spectrum. However, the fact that the zeroth-order continuum dominates the high-energy emission above 10 keV in our case provides useful constraints to select the most appropriate model. In particular, any configuration where the scattered emission dominates over the transmitted one above 10 keV is to be discarded. Furthermore, when the zeroth-order continuum dominates the high-energy emission above 10 keV, the Compton-scattered continuum and Iron-K line emission must be dominated by photons originating from back-illumination of the reprocessor and then reaching the observer along paths that do not intercept the Compton-thick structure. In other words, the structure must be clumpy, allowing a reflection continuum to reach the observer either from the far inner side of a toroidal structure or from an extended and dispersed distribution of matter.
In particular, with the zeroth-order continuum dominating the emission above 10 keV, the radiation from back-illumination of the reprocessor will dominate over the emission from reflection on the far inner side of the scattering torus (Yaqoob 2012). This configuration can be modelled through MYTORUS used in the decoupled configuration (corresponding to the expression given above), in which the Compton-scattered continuum is composed of a face-on and an edge-on component (MYTORUS SCATTERED1 and MYTL1, and MYTORUS SCATTERED2 and MYTL2, respectively), each of which can be varied independently of the zeroth-order continuum. This setup can mimic a clumpy, patchy structure, axis-symmetric but not necessarily with a toroidal geometry. The inclination angle parameters in the MYTORUS SCATTERED1 and MYTORUS SCATTERED2 components of the decoupled model are fixed at 0° and 90°, respectively, and are not related to the actual orbital inclination of the system. In this configuration, the inclination angle determines only whether the scattered emission intercepts (θ = 90°) or does not intercept (θ = 0°) the reprocessor before reaching the observer. Consequently, for the reasons given above, the MYTORUS SCATTERED2 and MYTL2 components must dominate over the MYTORUS SCATTERED1 and MYTL1 ones. We decided to leave the average column density NH,Z and the line-of-sight column density NH,S independent of each other, since in the presence of non-uniform, high column density absorbing material local to the source one should expect differences between the column density intercepted along the line of sight and the overall column density. We fitted the MYTORUS model in the decoupled configuration, as described above, to our full-band spectrum, following the same procedure we used previously, i.e. fitting the source data together with the dust scattering halo spectrum, in order to better constrain the halo spectrum slope (see Sec. 4).

Table 3. Best fitting parameters from Model 2 (MYTORUS) described in Sec. 5. The parameters marked with a * were initially allowed to vary, then they were fixed to their best-fit values for the sake of stability of the spectral fit. These parameters are then allowed to be free, one parameter at a time, in order to derive statistical errors. This is a procedure sometimes required when fitting MYTORUS to a data set, aimed at keeping control of the spectral parameters of a model that is rather complex compared to the majority of the models included in XSPEC (Yaqoob 2012). The line fluxes are measured in the 0.6-10 keV energy band and have been expressed in terms of erg cm⁻² s⁻¹ instead of equivalent width, since it is not possible to unambiguously measure the equivalent width of the Fe K lines given the complexity of the continuum. In other words, it is difficult to determine which continuum the Fe K lines should be referred to, therefore we decided to report the observed line fluxes.

The best fit is shown in Fig. 2, the best fit parameters are given in Tab. 3, and the fluxes from each spectral component as well as the total source intrinsic flux are reported in Tab. 4. The main difference between the results of this spectral modelling and those described in Sec. 4 (Model 1, Case 3) is the large discrepancy between the observed and the intrinsic source luminosity, which approaches a factor of 40. From the spectral modelling point of view, the main difference between Model 1 (see Sec.
4) and Model 2 is that (weak) residuals to the XRT data require the addition of a line at ∼7.5 keV, consistent with the Ni Kα line, expected in both AGN and binaries, especially in the Compton-thick regime (Yaqoob & Murphy 2011). From this best fit we find that, similarly to Model 1, Case 3 (see Sec. 4.3), the halo contributes only a little (0.3%) to the total flux (source + halo) in the source region. The resulting best fit is statistically favoured with respect to all the models described in the previous sections, with χ² = 469.58, 464 d.o.f. and a null hypothesis probability equal to 0.419.

DISCUSSION

We analysed a simultaneous INTEGRAL and Swift spectrum of the black hole binary V404 Cyg obtained during the 2015 Summer outburst, when the source was in a plateau, reflection-dominated state. This is the first time that an X-ray spectrum of V404 Cyg is available simultaneously over such a wide energy range (0.6-250 keV). The broad-band X-ray energy spectrum of V404 Cyg is remarkably similar to the typical spectra of obscured AGN, where the primary emission is absorbed and reprocessed by a high column density of gas. The spectrum analysed in this work can be well described by a combination of direct Compton emission, produced by hot electrons (∼65 keV, see Tab. 1) in an optically translucent material (τ ∼ 1.2) up-scattering low-temperature photons, and reflected emission including a narrow line centred at 6.4 keV. The presence of a high column density neutral absorber (equivalent NH ≈ 1.4 × 10²⁴ cm⁻²) covering about 85 per cent of the central source is necessary to describe the soft X-ray emission (below 10 keV). The prominent reflection hump, especially evident in the INTEGRAL IBIS/ISGRI and JEM-X spectra, is indicative of Compton back-scattering of photons by Compton-thick material around the source. The very low FWHM of the Iron-K line suggests that the reflection takes place far away from the central black hole, as previously observed during the 1989 X-ray outburst of the source (). The heavy absorption we derive (NH ∼ 1.4 × 10²⁴ cm⁻²) is consistent with the values measured during the 1989 outburst. A more sophisticated modelling of the broad-band X-ray spectrum that takes into account both the complex geometry of the absorber and the presence of heavily reprocessed emission shows that the best description of the spectrum is given by a bright point source whose emission is scattered and reflected by a patchy toroidal reprocessor surrounding it. In this case the illuminating spectrum can still be well described by a Compton spectrum with optical depth τ ∼ 0.9, smaller than in the case where scattering is not taken into account. The electron temperature is also smaller with respect to the previous case, but consistent with the average electron temperature measured from INTEGRAL/IBIS/ISGRI data (). This difference can be ascribed to the fact that, in the presence of heavy absorption, the effect of scattering significantly affects the source high-energy curvature by producing a strong transmitted component. The result is a blue-shift of the high-energy spectral roll-over, which can be erroneously ascribed to a higher electron temperature when the effects of scattering are not properly modelled. The average column density of the toroidal reprocessor is nearly in the Compton-thick regime (NH ≈ 3 × 10²⁴ cm⁻²), while the column density in the direction of the line of sight is about a factor of four smaller (NH ≈ 0.8 × 10²⁴ cm⁻²).
This is expected in a scenario where a local, non-uniform (patchy) and highly variable reprocessor heavily affects the emission from the central source, as the Comptonized emission, the transmitted emission and the scattered emission (which includes what is typically referred to as reflected emission) do not necessarily experience the same absorption. In particular, while the transmitted emission (i.e., the zeroth-order continuum, which has essentially lost any direction information, being the result of multiple scattering events) carries the effects of the overall average column density, the scattered emission is mostly affected by the line-of-sight column density, which can be significantly larger than the overall one. This can happen, for instance, in the situation where a Compton-thick clump of cold material is intercepting the line of sight. As expected in the presence of heavy absorption, the high-energy spectrum (i.e. above 10 keV) is dominated by the reprocessed emission. A large fraction of such emission comes from photons scattered multiple times in the local absorber, while the remaining fraction is due to photons scattered through back-illuminated matter in the reprocessor, and to photons reflected off the far inner side of the reprocessor and then reaching the observer without being further scattered. The line emission associated with the Iron-Kα and Iron-Kβ transitions is not resolved by Swift/XRT and hence is seen as a single line. The Ni Kα line is also unresolved by XRT. The relative observed flux of the Iron lines and of the Nickel line is in agreement with the predictions by Yaqoob & Murphy, with the Nickel line flux being a factor of > 7 fainter than the Iron line flux. The fact that the Nickel line is not significantly detected in the Swift/XRT spectrum when fitting it using Model 1 (Case 3) can be a consequence of a cruder modelling of the region around 7 keV, where the iron edges most affect the spectrum. The most relevant difference that we found between the two models we considered is the inferred intrinsic luminosity of the central X-ray source. On the one hand, a simple reflected Comptonized spectrum (Model 1, Case 3, see Sec. 4.3), which does not consider scattering, but only pure reflection (i.e. the reflecting material is assumed to be characterized by infinite optical depth), returns a flux of 4.46 × 10⁻⁸ erg cm⁻² s⁻¹, corresponding to ∼3% of the Eddington luminosity for a 9 M⊙ black hole. On the other hand, a more complex modelling of the reprocessed emission that takes into account the scattering processes (Model 2, see Sec. 5) gives an intrinsic flux of 1.31 × 10⁻⁶ erg cm⁻² s⁻¹, corresponding to approximately the Eddington luminosity for V404 Cyg. As mentioned earlier, the data considered in this work correspond to a plateau phase in the light curve of V404 Cyg (see ) that occurred in between two of the major flares seen in the 2015 Summer outburst (flare peaks observed on MJD 57194.11 and MJD 57194.31, respectively). We estimated the ratio Rfl/pl of the average flare-peak flux to the average plateau flux from the INTEGRAL/ISGRI light curve in the 20-200 keV energy band (Kuulkers 2015b), obtaining Rfl/pl = 10 ± 2. We then compared this ratio with the ratio RInt/Obs of the intrinsic to the observed flux in the 20-200 keV energy band using the best fit to the data based on MYTORUS, which gave RInt/Obs = 8.6 ± 0.9.
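The Eddington fractions quoted above can be cross-checked with a short back-of-the-envelope calculation: convert the fluxes into isotropic luminosities using the 2.39 kpc radio-parallax distance and compare them with the Eddington luminosity of a 9 M⊙ black hole (L_Edd ≈ 1.26 × 10³⁸ (M/M⊙) erg s⁻¹). This ignores bolometric corrections and is only an order-of-magnitude check.

```python
import numpy as np

KPC_CM = 3.086e21                    # cm per kiloparsec
d = 2.39 * KPC_CM                    # distance to V404 Cyg [cm]
L_edd = 1.26e38 * 9.0                # Eddington luminosity for 9 Msun [erg/s]

def eddington_fraction(flux_cgs):
    """Isotropic luminosity / Eddington luminosity for a flux in erg/cm^2/s."""
    L = 4.0 * np.pi * d**2 * flux_cgs
    return L / L_edd

print(eddington_fraction(4.46e-8))   # Model 1, Case 3  -> ~0.03 (a few per cent)
print(eddington_fraction(1.31e-6))   # Model 2 intrinsic -> ~0.8 (near Eddington)
```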
The fact that the two ratios are consistent indicates that the plateau our data set comes from could be the result of a short-lived, almost complete obscuration of the central source in a very bright phase, rather than an actual decrease in the emitted flux. This is in agreement with the results of the spectral modelling, which required heavy absorption and, consequently, a significant amount of reprocessed emission, regardless of the details of the model used. Życki et al. and Oosterbroek et al. reported on the high variability of an intrinsic absorption component, based both on the spectral analysis of the source and on the time scales over which the absorption was changing, which suggests fast movement and/or fast changes in the physical properties (e.g. optical depth) of material within the system. Życki et al. suggested that the presence of heavy (and variable) intrinsic absorption is likely the main reason why the energy spectrum of V404 Cyg almost never resembles the spectrum typical of any of the spectral-timing canonical states of BHBs in outburst. Our results are in good agreement with those obtained with Ginga. The Swift/XRT sensitivity at low energies, together with the INTEGRAL broad-band coverage, allowed us to study in detail the combined effects of heavy absorption and strong reflection in the X-ray spectrum of V404 Cyg. The X-ray central engine is probably hidden beneath a layer of complex, heavily absorbing material that substantially suppresses the source intrinsic spectrum, making it hard to recover. The inhomogeneity of the absorber is such that, occasionally, the observer can get a glimpse of the unobscured source, together with the reflected spectrum. A similar scenario was proposed to explain the properties of V4641 Sgr during the 1999 outburst (). After a super-Eddington phase, the system ejected a significant amount of matter that was responsible for heavy and non-homogeneous absorption and intense reflection. The V4641 Sgr X-ray spectrum after the outburst peak was remarkably similar to that of a Type-2 AGN (). The derived electron temperature of the Comptonizing medium (for both Model 1 and Model 2) is consistent with the results from other authors for V404 Cyg during the 2015 outburst (), and with what has been observed in other, more canonical transient BHBs (e.g., Cyg X-1, Sunyaev & Truemper 1979; GX 339-4; and GRO J1655-40). Low temperature seed photons required to produce the direct Comptonized spectrum are normally found in black hole binaries at low luminosities (see, e.g., ). Such seed photons would come, in our case, either from a (cool) heavily absorbed accretion disk truncated at large radii (e.g., ) or from synchrotron self-Compton emission by non-thermal electrons in the hot Comptonizing medium (see, e.g., Poutanen & Veledina 2014). Since we do not find evidence of a soft component, such as a disk blackbody, our data alone do not allow us to unambiguously determine the origin of the Compton seed photons. Życki et al. reported the detection of a short-lived disk-dominated state during the 1989 outburst of V404 Cyg. However, given the extreme luminosities reached during the 1989 outburst (comparable to those observed in 2015), it is reasonable to assume that a dust scattering halo (Vasilopoulos & Petropoulou 2015) also formed back in 1989. Our results show that the presence of this halo does not contaminate the overall emission of the source to a significant level in the Swift/XRT observation.
However, given the large field of view (1.1 × 2.0 deg² FWHM) of the Ginga collimated proportional counter array (LAC, ), Ginga would not have been able to disentangle the source emission from the halo emission. Therefore, it is possible that the soft emission ascribed to an accretion disk in the Ginga data by Życki et al. is in reality soft emission from the halo.

6.1 V404 Cyg: an obscured super-Eddington AGN-analogue

Both significant reflected emission (see, e.g., NGC 7582, ) and the effects of a patchy, neutral absorber (see, e.g., the cases of NGC 4151 and NGC 1365) are sometimes seen in obscured AGN, where the variability of the absorber is thought to be responsible for most of the variability from the source. High values of the reflection fraction in AGN are normally ascribed to the fact that the source of the illuminating continuum is no longer visible/active, i.e. because of intervening partially covering absorption or because the source switched off, and the only radiation seen is the reflected one (but see for a different scenario). In both cases, the reflection amplitude is bound to increase significantly. V404 Cyg also showed substantial reflected emission; however, in this system the scenario is probably slightly different from that of an obscured and/or reflected AGN, since the illuminating continuum can be directly observed, though largely absorbed, together with the (dominating) reflected emission. This suggests that the spectrum is a combination of Comptonized continuum and reprocessed emission, likely produced in different areas of the system (i.e., close to the central black hole and further out, respectively). According to the unified model of AGN (Antonucci 1993; Urry & Padovani 1995), the central BH is always surrounded by an axis-symmetric parsec-scale torus. Furthermore, a large fraction of AGN show clear evidence of absorption in the soft X-ray band, interpreted as material, either neutral or ionized, on the line of sight (Turner & Miller 2009). Recent findings have indicated that this absorber is most likely non-homogeneous and located close to the central black hole (e.g., ), at a smaller distance than the dust torus. In addition, it has been found that in several AGN the reflection components in the X-ray spectra are significantly stronger than expected for reflection off gas with the same column density measured from the absorption features (e.g., ). This is only possible if a thick reflector close to the BH and well within the parsec-scale torus, with column densities exceeding NH = 10²⁴ cm⁻², covers a large fraction of the solid angle around the source (). In the spectrum of V404 Cyg we detected, together with a (weak) direct Comptonized spectrum, both high reflection and the signatures of heavy, non-homogeneous absorption, all effects pointing to the presence of non-uniform shielding material local to the source on the line of sight. The similarities between the properties of V404 Cyg and those of some AGN suggest that the accretion configuration in the former might be very close to that expected in obscured but intrinsically luminous AGN accreting at high accretion rates, where the inner accretion flow is well described by the slim disk model () and the central engine is thought to be partially or completely obscured by an absorber (the flared inner accretion disk) located close to the central black hole and internal with respect to the dust torus; in this case both reflection and absorption play a key role in shaping the broadband energy spectrum.
Simulations show that both in stellar-mass accreting BHs and in AGN, high (super-Eddington) accretion rates can develop strong radiation forces able to sustain a thick accretion flow that might at times form a non-homogeneous (i.e., clumpy) mass outflow, launched within a few hundred Rg from the black hole (see, e.g., ). In addition, a similar scenario has also been suggested for the ultra-luminous X-ray sources (e.g., ). In other words, the geometrically thin, optically thick accretion disk (the launching site of the winds seen in the optical band at thousands of Rg from the BH, ) puffs up in its inner tens to hundreds of Rg, becoming a geometrically thick accretion flow, sustained by the radiative forces that develop as a consequence of the high accretion rates. This thick accretion flow then fragments at a certain distance from the disk plane, forming high-density Compton-thick clumps of material, which could be responsible for the high intrinsic, non-homogeneous absorption seen in V404 Cyg, as well as for the intense reflected emission. When the inclination is high enough, as in the case of V404 Cyg, this inner slim disk is able to shield the innermost region of the accretion flow, preventing the radiation from directly reaching the observer most of the time. The observed emission from these objects is therefore expected to be dominated by scattered/reflected radiation. Such a high accretion rate regime is rarely observed in BHBs, but it is inferred to be present in about 1 per cent of high-redshift optically selected AGN (). The first example of a high-quality X-ray spectrum of a super-Eddington AGN has been presented by Lanzuisi et al., where one of the plausible scenarios that can explain the data is an intrinsic emission strongly reprocessed through absorption and reflection in partially covering Compton-thick material. In this case the obscuration of the AGN is not due to a distant parsec-scale torus, but rather to the inner accretion flow itself, which under the strong radiation pressure puffs up into the slim-disk configuration. The alternating phases of high and low luminosities observed during the 2015 outburst of V404 Cyg () suggest that V404 Cyg might have been accreting erratically or even continuously at super-Eddington rates, while being partly or completely obscured by an inhomogeneous, high-density layer of neutral material local to the source (similarly to what happened to V4641 Sgr, ). In this context, the fact that the reprocessed emission dominates almost the entire spectrum implies that the emitted luminosity can be orders of magnitude higher than what is directly measured (see, e.g., Murphy & Yaqoob 2009), as our results suggest. This has strong implications in the context of X-ray/radio correlations (e.g., ). The large difference between the observed and the intrinsic X-ray flux should be taken into account carefully: while the X-ray emitting region could be almost completely obscured, the radio emitting region is most likely always visible, as the radio emission probably originates from a few to tens of Rg away from the accretion disk mid-plane.

SUMMARY AND CONCLUSIONS

We have analysed unique simultaneous INTEGRAL and Swift observations of the black hole candidate V404 Cyg (GS 2023+338) during the 2015 summer outburst. We observed the source in a rare, long, plateau, reflection-dominated state, where the energy spectrum was stable enough to allow time-averaged spectral analysis.
Fits to the source X-ray spectrum in the 0.6-200 keV energy range revealed heavily absorbed, Comptonized emission as well as significant reprocessed emission, dominating at high energies (above ∼10 keV). The measured average high column density (NH ≈ 1-3 × 10²⁴ cm⁻²) is likely due to absorption by matter expelled from the central part of the system. The overall X-ray spectrum is consistent with the X-ray emission produced by a thick accretion flow, or slim disk, similar to that expected in obscured AGN accreting at high accretion rates (i.e. close to the Eddington rate), where the emission from the very centre of the system is shielded by a geometrically thick accretion flow. We therefore suggest that in some of the low-flux/plateau states detected between large X-ray flares during the 2015 outburst, the spectrum of V404 Cyg is similar to the spectrum of an obscured AGN. Given the analogy and the extreme absorption measured, we argue that occasionally the observed X-ray flux might be very different from the system intrinsic flux, which is almost completely reprocessed before reaching the observer. This may be particularly important when comparing the X-ray and radio fluxes, since the latter is likely always emitted sufficiently far away from the disk mid-plane and therefore never obscured. Given the fact that accretion should work on the same principles in BHBs and AGN, once a suitable scale in mass is applied, detailed studies of V404 Cyg and of stellar mass black holes with similar characteristics could help in shedding light on some of the inflow/outflow dynamics at play in some, still poorly understood, classes of obscured AGN.

ACKNOWLEDGEMENTS

SEM acknowledges the anonymous referee, whose useful comments largely contributed to improve this work. SEM acknowledges the University of Oxford and the Violette and Samuel Glasstone Research Fellowship program and ESA for hospitality. SEM also acknowledges Rob Fender, Andy Beardmore and Robert Antonucci for useful discussion. JJEK acknowledges support from the Academy of Finland grant 268740 and the ESA research fellowship programme. SEM and JJEK acknowledge support from the Faculty of the European Space Astronomy Centre (ESAC). EK acknowledges the University of Oxford for hospitality. MG acknowledges SRON, which is supported financially by NWO, the Netherlands Organisation for Scientific Research. This work is based on observations with INTEGRAL, an ESA project with instruments and science data centre funded by ESA member states (especially the PI countries: Denmark, France, Germany, Italy, Switzerland, Spain), and with the participation of Russia and the USA.

APPENDIX A: THE MYTORUS MODEL COMPONENTS

TBNEW1 is applied to the source overall spectrum, while TBNEW2, tied to TBNEW1, is applied to the POWERLAW component aimed at describing the halo emission. COMPTT1 and COMPTT2 are both associated with the source illuminating spectrum, but while the former is intended to model the unobscured illuminating spectrum (the emission that reaches the observer through the 'holes' in the patchy absorber), the latter is attenuated by the effects of the toroidal reprocessor (MYTORUS EZERO), giving rise to the zeroth-order continuum. The main parameters of COMPTT are the seed photon temperature T0, the electron temperature Te, the optical depth τ and the normalization. As we did in Sec. 4, we fixed T0 to 0.1 keV. The main parameter of MYTORUS EZERO is the average column density NH,Z.
The reason why we used COMPTT and not COMPPS is that MYTORUS is currently designed to allow either a power law intrinsic spectrum or a Compton spectrum as described by COMPTT. Even if small differences between COMPTT and COMPPS are to be expected, we can reasonably assume that our results will not be affected by them, since the reprocessed emission, which dominates the spectrum, is not too sensitive to the details of the illuminating spectrum. MYTORUS SCATTERED1 and MYTORUS SCATTERED2 describe two different parts of the scattered spectrum: MYTORUS SCATTERED1 accounts for the emission scattered on the far inner side of the toroidal reprocessor, which then reaches the observer without being further scattered; MYTORUS SCATTERED2 accounts for the emission from material back-illuminated by the central source (for details, see Yaqoob 2012 and in particular their fig. 2). The main parameters of MYTORUS SCATTERED1 and MYTORUS SCATTERED2 are the column density along the line of sight NH,S, the inclination angle parameter (which denotes the angle between the line of sight and the axis of the reprocessing torus, and determines whether or not the line of sight intercepts the reprocessor) and the optical depth τ. MYTL1 and MYTL2 are the fluorescent line spectra associated with MYTORUS SCATTERED1 and MYTORUS SCATTERED2, respectively. The main parameters of these components are the column density along the line of sight NH,S, the optical depth τ and the inclination angle parameter, which in each fluorescent line spectrum component is tied to the corresponding scattered spectrum component. GSMOOTH is typically applied to the fluorescent line spectra and takes into account the possible velocity broadening of the lines; however, our best fit did not require any significant broadening, so the broadening parameter was fixed to zero. ZGAUSS is an additional line added to the final model in order to describe residuals found in the XRT source spectrum around the Ni Kα line (∼7.5 keV).
Optimizing Secure Collaboration Transactions for Modern Information Systems

The issue of securing a collaboration within an information system is often addressed from the viewpoint of establishing a security policy that grants access to data based on the role an individual holds or on the permissions attached to the underlying data. These methods are well suited for transactions where the concern is to secure the data during and after a collaboration. In situations where the concern is to release the data without any security control after the release, these methods are unproductive. In such an environment, the initial disclosure of data is crucial. In this paper we present the Optimal Data Security Module, which addresses the need for an individual to select a minimal data set for a single collaboration transaction that results in the greatest return when collaborating with other entities within an information system. Therefore, an individual can minimize the risk of sharing his or her data and maximize the reward when collaborating with other parties.
Does the 2008 global financial crisis matter for the determinants of conventional and Islamic banking performances in Indonesia?

Originality

The study examined a larger number of conventional and Islamic banks over more extended and updated study periods, namely six years (i.e., 2003-2008) before the 2008 GFC and ten years (i.e., 2009-2018) after the 2008 GFC. The study is among the first attempts to comparatively analyze the determinants of Indonesia's Islamic and conventional banking performances between the pre- and post-2008 GFC periods using panel multiple regression analysis to arrive at more comprehensive and robust empirical evidence.

Introduction

The 2007-2008 financial crisis, also known as the 2008 Global Financial Crisis (GFC), has been labeled the worst global recession of the 21st century and the most severe financial crisis since the 1930s' Great Depression (Tong & Wei, 2008; Kassim & Majid, 2010; Kassim, Majid, & Hamid, 2011). The crisis, which began with the deterioration of the sub-prime mortgage market in the US, caused an international banking crisis and a near collapse of the world financial system. According to Davies, the 2008 GFC is among the 20 major crises in the world economy during the 21st century. This shows that, on average, a major financial crisis has occurred roughly once every five years. The impact of the crisis has not only hit the financial and banking institutions in the developed economies, but has also adversely affected financial and banking institutions in the Asian emerging economies, including Indonesia (Kassim, Majid, & Yusof, 2009; Majid, 2018). The 2008 GFC devastated Indonesia's national economy and its financial and banking institutions, although the country was claimed to have robust economic fundamentals. The Indonesian stock price index fell by 48.41% from 2,627.3 points during 2008, and the value of market capitalization and trading volume declined sharply (Bank Indonesia, 2009). The crisis also hit the banking industry due to the withdrawal of investment funds in some companies by foreign investors. Consequently, the value of banking assets, whether in the form of loans or securities, decreased. Likewise, the banks' capital adequacy also dropped sharply due to losses from the decline in the value of productive assets and an increase in non-performing loans. As a result, three state-owned banks in the country requested liquidity assistance of IDR 5 trillion from the government of Indonesia in October 2008 (Bank Indonesia, 2010). In the Indonesian context, Islamic banks operate in tandem with conventional banks in a dual banking system. Unlike the conventional interest-based banking system, Islamic banks carry out their intermediary activities of collecting and channeling funds on an interest-free basis following Islamic tenets. Although Islamic banks, which have existed for only about three decades, are relatively new compared to conventional banks, they have experienced promising development since the launch of the first Islamic bank in the country, Bank Muamalat Indonesia, in 1991. Since then, the number of Islamic banks has continuously increased. In 2015, there were 12 full-fledged Islamic banks, and the number increased to 14 in 2017 due to the conversion of the Bank of Aceh and the Bank of West Nusa Tenggara (NTB) into full-fledged Islamic banks (Otoritas Jasa Keuangan, 2018).
Although Islamic banks proved able to survive the 1997 East Asian economic crisis, they were also hit by the 2008 GFC. However, the effect of the 2008 GFC on Islamic banks was far smaller than on their conventional counterparts. Kassim and Majid documented that, although Islamic banks were vulnerable to financial crises, they were more stable and resilient during the 2008 GFC. This is contrary to the popular belief that the Islamic economic system is fully protected from crises because of its interest-free nature. The occurrence of the 2008 GFC placed banks in a more difficult situation, given the fierce competition resulting from the decline in banking performance. As intermediary financial institutions that collect and distribute funds from and to the public, banks' sustainability is highly dependent on their ability to cope with the impacts of the crisis and maintain good performance. The degree of impact of the crisis and the kinds of strategies adopted to preserve performance are among the critical factors determining banks' competitive success. The competition between Islamic banks and conventional banks depends on each bank's financial performance. The financial performance of a bank reflects the bank's health. With excellent performance, banks can readily provide better services and benefits to internal and external parties. For this reason, research on measuring banking financial performance and its determinants during episodes of financial crises becomes increasingly crucial for banks to identify the critical factors for enhancing their achievements so that, in turn, they can fulfill all their functions, roles, and objectives. If banks can shield themselves from the impact of the crisis, they can more easily maintain excellent financial performance in the midst of the monetary crisis. Studies on measuring Islamic banking performance have been conducted by many researchers focusing on many countries, including Indonesia. For example, Rosly and Bakar and Wasiuzzaman and Nair Gunasegavan measured Islamic banking performances using average values in Malaysia. Meanwhile, Jaffar and Manarvi and Khan, Khan, and Tahir used the CAMEL (Capital, Asset Quality, Management, Earning, and Liquidity) approach in Pakistan, and Erol, Baklaci, Aydoğan, and Tunç did so in Turkey. However, all these studies only measured the performances of Islamic banks and failed to compare them with their conventional counterparts. In the context of Indonesia, the comparative financial performance of conventional banks and Islamic banks has been studied by Subaweh, Ardiyana, Setyaningsih and Utami, Nugraha, and Betharino, but their samples consisted of only one conventional and one Islamic bank over shorter data periods. Meanwhile, studies on the determinants of financial performance have also been conducted by Sukarno and Syaichu, Aryati and Balafif, Dewi, Sabir and Habbe, and Margaretha and Zai, but these studies focused on a smaller number of banks, used shorter data periods, and generally utilized time-series multiple linear regression analysis. Utilizing a smaller number of Islamic banks over shorter data periods and applying time-series regression analysis to panel data, the previous studies failed to provide comprehensive and robust empirical evidence on the determinants of the financial performances of conventional and Islamic banks.
This is also the main weakness of the study by Majid, Musnadi, and Putra, which explored the effect of profitability, loan risk, and debt management on the quality of asset management of Islamic and conventional banks in Indonesia: although it utilized panel data, the analysis relied on time-series multiple linear regression. Motivated to fill the existing gaps in the previous studies, this study aims to empirically and comparatively measure and analyze the determinants of the financial performances of conventional and Islamic banks between the pre- and post-2008 GFC periods. Thus, this study's major novelty is its examination of a larger number of conventional and Islamic banks over more extended and updated study periods, namely six years (i.e., 2003-2008) before the 2008 GFC and ten years (i.e., 2009-2018) after the 2008 GFC. Additionally, the novelty of the study lies in its comparative analyses between the pre-2008 GFC and post-2008 GFC periods using panel multiple regression analysis to arrive at more comprehensive and robust empirical evidence. The findings of this study are expected to provide additional useful references for academicians and researchers about the comparative financial performances of Islamic and conventional banks and their determinants over the pre- and post-2008 GFC periods. The study results are also expected to be beneficial to the banks' management as a reference for policymaking to improve financial performance amid increasing competition in the banking industry under uncertain economic conditions.

Data

The population of this study comprises 115 conventional commercial banks and 13 full-fledged Islamic commercial banks in Indonesia. Of these banks, the three largest state-owned banks from each group were selected as the study sample using the purposive sampling technique. Only the banks that met the criterion of publishing monthly financial statements from 2003 to 2017 were investigated. Besides, the selected banks were the top three largest state-owned commercial banks from each category based on their total assets. Thus, the three conventional banks investigated in the study include Bank Rakyat Indonesia (BRI), Bank Negara Indonesia (BNI), and Bank Mandiri (BM). In comparison, the three Islamic banks explored in the study comprise Bank Muamalat Indonesia (BMI), Bank Syariah Mandiri (BSM), and Bank Mega Syariah (BMS). Specifically, this study empirically compares the determinants of conventional and Islamic banking performances between the pre-2008 GFC period (i.e., 2003-2008) and the post-2008 GFC period (i.e., 2009-2017). The data from the banks' financial statements, in the form of financial ratios, were gathered and utilized for the analysis. These secondary data were collected from several sources, namely the reports from the Otoritas Jasa Keuangan (Financial Services Authority, FSA) and the websites of each sampled bank.

Measurement of the Variables

In this study, four determinants of the financial performances of conventional and Islamic banks are investigated, namely capital adequacy, liquidity, non-performing loans or financing, and operating expenses. All the variables are measured on a ratio scale. For more details, the variables, their operational definitions, and their measurements are delineated in Table 1.

Table 1. Variables, operational definitions, and measurements
Dependent variable: Financial performance (ROA) - the ability of a bank to produce net profit using its available assets (Rose & Hudgins, 2005), measured as
ROA = (Profit after Tax / Total Assets) x 100%.
Independent variables:
- Capital Adequacy (CAR) - a comparison of the amount of capital owned by banks with risk-weighted assets. CAR = (Own Capital / Risk-Weighted Assets) x 100%.
- Liquidity (LDR/FDR) - the ability of a bank to meet its obligations, repay all of its depositors, and provide the loans or financing proposed by customers without delay (Hazzi & Kilani, 2013). LDR (FDR) = (Total Loans (Financing) / Total Third-Party Funds) x 100%.
- Non-Performing Loans (Financing) (NPL/NPF) - the inability of customers to repay loans received from banks within a predetermined period (Aryati & Balafif, 2007).
- Operating Expenses (CIR) - a comparison of operating expenses with operating income, which aims to measure the efficiency of bank operations (Ongore & Kusa, 2013). CIR = (Operating Expenses / Operating Income) x 100%.
Note: ROA is the Return on Assets, CAR is the Capital Adequacy Ratio, LDR is the Loan to Deposit Ratio, FDR is the Financing to Deposit Ratio, NPL is the Non-Performing Loan ratio, NPF is the Non-Performing Financing ratio, and CIR is the Cost to Income Ratio. The term "loan" is used for conventional banks, while the term "financing" is used for Islamic banks.

Estimated Research Model

A comparative analysis of the determinants of conventional and Islamic financial performances between the pre-2008 GFC and post-2008 GFC periods is estimated by regressing the conventional and Islamic financial performances, as the dependent variables, on capital adequacy, liquidity, non-performing loans (financing), and operating expenses as the independent variables. In analyzing the panel data, three estimated models are usually used, namely the common effect model, the fixed-effect model, and the random-effect model (Hamid, Majid, & Khairunnisah, 2017; Majid & Maulana, 2012; Yani, Arfan, & Majid, 2020). The general forms of the panel multiple regression equations for the conventional banks (Equation 1.1) and the Islamic banks (Equation 1.2) could be written as follows:

ROA_it = β0 + β1 CAR_it + β2 LDR_it + β3 NPL_it + β4 CIR_it + ε_it    (1.1)
ROA_it = β0 + β1 CAR_it + β2 FDR_it + β3 NPF_it + β4 CIR_it + ε_it    (1.2)

where ROA is the ratio of return on assets to measure financial performance, β0 is an intercept, β1-β4 are the estimated coefficients, CAR is the capital adequacy ratio to measure capital adequacy, LDR is the loan to deposit ratio to measure liquidity for the conventional banks, FDR is the financing to deposit ratio to measure liquidity for the Islamic banks, NPL is the non-performing loans ratio to measure credit risk management for the conventional banks, NPF is the non-performing financing ratio to measure financing risk management for the Islamic banks, CIR is the cost to income ratio to estimate the operating expenses, i and t indicate a particular bank in a specific year, and ε is an error term. Within the framework of the fixed-effect model, referring to Equations (1.1) and (1.2), the panel regression models could be rewritten as follows:

ROA_it = β0i + β1 CAR_it + β2 LDR_it + β3 NPL_it + β4 CIR_it + ε_it    (2.1)
ROA_it = β0i + β1 CAR_it + β2 FDR_it + β3 NPF_it + β4 CIR_it + ε_it    (2.2)

The subscript i on the intercept (β0i) in Equations (2.1) and (2.2) shows the likelihood of the data having varying intercepts due to distinctive features of the different investigated banks, such as management styles and philosophy (Majid and Maulana, 2012). Commonly, dummy variables are introduced to capture the divergent intercepts. In this case, the fixed-effect model should be the most appropriate model to be adopted to anticipate the correlation between the individual-specific intercepts and the regressors. Nevertheless, the fixed-effect model tends to reduce the number of degrees of freedom and consequently lowers the efficiency of the parameter estimates.
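A quick way to see the degrees-of-freedom cost just mentioned is to write the fixed-effect model in its least-squares dummy-variable (LSDV) form and count parameters. The sketch below uses the sample dimensions of this study (three banks per group; six pre-crisis years) purely for illustration; the names are hypothetical and this is not the authors' estimation code.

```python
import numpy as np
import pandas as pd

# Illustrative degrees-of-freedom bookkeeping for the LSDV (dummy-variable)
# form of the fixed-effect model in Equations (2.1)-(2.2).
n_banks, n_years, k_regressors = 3, 6, 4          # CAR, LDR/FDR, NPL/NPF, CIR
panel = pd.DataFrame({"bank": np.repeat([f"B{i}" for i in range(n_banks)], n_years)})

dummies = pd.get_dummies(panel["bank"], drop_first=True)   # N-1 bank dummies
n_obs = n_banks * n_years

df_common = n_obs - (k_regressors + 1)                     # common-effect model
df_fixed = n_obs - (k_regressors + 1) - dummies.shape[1]   # fixed-effect (LSDV) model
print(df_common, df_fixed)                                 # 13 vs 11 residual d.o.f.
```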
To overcome this problem, the random-effect model treats the bank-specific effects as random and includes them in a composite error term that varies across both banks and time, assuming a stochastic intercept in the estimated panel regression model. If the individual effects are found to be random, the random-effect model is the most appropriate to adopt. Thus, referring to Equations (2.1) and (2.2), the random-effect panel regression models could be rewritten as follows:

ROA_it = β_0 + β_1 CAR_it + β_2 LDR_it + β_3 NPL_it + β_4 CIR_it + (μ_i + ε_it)
ROA_it = β_0 + β_1 CAR_it + β_2 FDR_it + β_3 NPF_it + β_4 CIR_it + (μ_i + ε_it)

where μ_i is the random bank-specific component of the error term. This study selects the best-suited model out of the above-mentioned panel regression models using the following tests: 1) the Chow test, to choose between the common- and fixed-effect models; 2) the Lagrange Multiplier test, to choose between the common- and random-effect models; and 3) the Hausman test, to choose between the fixed- and random-effect models. From the results of these tests, an appropriate model is selected to estimate the determinants of financial performances for both Islamic and conventional banks. The above-proposed panel regression models are estimated four times: twice to measure and analyze the determinants of the performance of conventional banks for the pre- and post-2008 GFC periods, and twice to measure and analyze the determinants of the performance of Islamic banks for the pre- and post-2008 GFC periods.

Results and Discussion

This section reports and discusses the findings of the study, comprising descriptive statistics, correlation coefficients, the estimated determinants of banks' financial performances, and their implications.

Descriptive Statistics

The descriptive statistics reported in Table 2 describe the maximum, minimum, mean, and standard deviation values of each variable. As reported in Table 2, on average, the financial performance of conventional banks declined by 39.75%, from 5.56 points in the pre-2008 GFC period to 3.35 points in the post-2008 GFC period, as shown by the ratio of Return on Assets (ROA). Similarly, the financial performance of Islamic banks also declined by 39.67%, from 2.42 points in the pre-2008 GFC period to 1.46 points in the post-2008 GFC period. These findings show that the 2008 GFC deteriorated the performances of both conventional and Islamic banks. However, the decline in the financial performance of conventional banks was larger than that of Islamic banks. These findings imply that, to some extent, the Islamic banks were more stable and resilient to the 2008 GFC. The practices of Islamic banks, which are interest-free and asset-based, are believed to contribute to the smaller changes in their performances in the post-2008 GFC period. These findings are in line with previous studies by Kassim and Majid, who documented evidence of the superiority of Islamic banks over their conventional counterparts during the 1997 East Asian economic crisis and the 2008 GFC. Furthermore, on average, the determinants of conventional banking performance, namely capital adequacy (CAR), liquidity (LDR), non-performing loans (NPL), and operating expenses (CIR), declined by more than 50% from the pre-2008 GFC to the post-2008 GFC period. On the other hand, on average, the capital adequacy and non-performing financing of Islamic banks declined by only about 2% from the pre-2008 GFC to the post-2008 GFC period. Surprisingly, on average, Islamic banking liquidity and operating expenses increased by about 10% and 5%, respectively, from the pre-2008 GFC to the post-2008 GFC period.
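As an illustration of the fixed-effect (within) estimation and the Chow-type test for bank-specific intercepts described in the methodology above, a minimal Python sketch follows. It is not part of the original study: the data file, column names, and the hand-rolled F test are assumptions for illustration only, and in practice packaged panel-regression routines (including the Lagrange Multiplier and Hausman tests) would be used.

# Illustrative sketch only: the CSV file and column names below are hypothetical.
import numpy as np
import pandas as pd
from scipy import stats


def within_transform(df, cols, entity):
    # Demean each column by its entity (bank) mean: the fixed-effects "within" transform
    return df[cols] - df.groupby(entity)[cols].transform("mean")


def fixed_effects_with_chow(df, y, xs, entity):
    # Pooled OLS (common-effect) residual sum of squares
    X_pooled = np.column_stack([np.ones(len(df)), df[xs].to_numpy()])
    beta_pooled, *_ = np.linalg.lstsq(X_pooled, df[y].to_numpy(), rcond=None)
    rss_pooled = np.sum((df[y].to_numpy() - X_pooled @ beta_pooled) ** 2)

    # Fixed-effects (within) estimator on demeaned data
    demeaned = within_transform(df, [y] + xs, entity)
    X_within = demeaned[xs].to_numpy()
    beta_within, *_ = np.linalg.lstsq(X_within, demeaned[y].to_numpy(), rcond=None)
    rss_fe = np.sum((demeaned[y].to_numpy() - X_within @ beta_within) ** 2)

    # Chow-type F test: do the bank-specific intercepts jointly matter?
    n_banks, n_obs, k = df[entity].nunique(), len(df), len(xs)
    df1, df2 = n_banks - 1, n_obs - n_banks - k
    f_stat = ((rss_pooled - rss_fe) / df1) / (rss_fe / df2)
    p_value = stats.f.sf(f_stat, df1, df2)
    return dict(zip(xs, beta_within)), f_stat, p_value


# Hypothetical usage: monthly ratios for the three conventional banks, pre-2008 GFC
# data = pd.read_csv("conventional_pre_gfc.csv")  # columns: bank, ROA, CAR, LDR, NPL, CIR
# coefs, f_stat, p_value = fixed_effects_with_chow(data, "ROA", ["CAR", "LDR", "NPL", "CIR"], "bank")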
The descriptive statistics above provide further evidence of the superiority of Islamic banks over their conventional counterparts. During the period 2009-2017 (post-2008 GFC), the Islamic banks provided more financing to their customers, consequently expediting the economic recovery. The Islamic banks' ability to channel more financing to the real economic sector using equity-based contracts such as mudharabah and musharakah could further help the economy to exit from the financial crisis. In terms of financing risk management, Islamic banks are found to be superior to conventional banks. This is reflected by the lower average value of the non-performing financing of Islamic banks compared to the non-performing loans of conventional banks. This finding shows that Islamic banks have implemented better financing risk management by selectively channeling their financing based on profit-loss sharing principles, thereby minimizing financing defaults (Hassan Al-Tamimi, Miniaoui, & Elkelish, 2015). Finally, Islamic banks recorded higher operating expenses than their conventional counterparts, as shown by their higher cost to income ratio. This indicates that Islamic banks have been less efficient in their operational activities. This finding could be partially due to the banks' smaller size, causing them to experience diseconomies of scale. Thus, this finding suggests the importance of Islamic banks expanding their capacities. For this purpose, support from the government to invest more in Islamic banks is highly needed.

Correlation Coefficients

Table 3 reports the findings of Pearson's correlation coefficients. The coefficient shows the strength of the association between the investigated variables. As shown in Table 3, except for capital adequacy (CAR), the performance of conventional banks (ROA) was recorded to have a significant correlation with liquidity (LDR), non-performing loans (NPL), and operating expenses (CIR) in both the pre- and post-2008 GFC periods, at least at the 5% level of significance. However, liquidity showed a positive correlation, while non-performing loans and operating expenses showed negative correlations. These findings indicate that liquidity might contribute positively to the enhancement of conventional banking performance, while non-performing loans and operating expenses did not. Similarly, the performance of Islamic banks was also documented to have a significant correlation with liquidity, non-performing financing, and operating expenses, at least at the 5% level of significance. However, their associations were found to be negative. These findings provide preliminary signals that these determinants adversely affected the performances of Islamic banks in both the pre- and post-2008 GFC periods. Nonetheless, to ascertain the direction of each determinant's effect on banking performance, we should refer to the findings of the estimated panel multiple regression, which are reported and discussed in the next sub-section.

Table 3. Pearson's coefficients of correlation. Note: *** and ** indicate significance at the 1% and 5% levels, respectively.

Conventional and Islamic Banking Performances between the Pre- and Post-2008 GFC Periods

A panel multiple regression analysis is estimated to measure and analyze the effects of capital adequacy, liquidity, non-performing loans/financing, and operating expenses on conventional and Islamic banking performances between the pre- and post-2008 GFC periods.
The estimation of the panel regression model is conducted four times: twice to measure and analyze the determinants of the performance of conventional banks for the pre- and post-2008 GFC periods, and twice to measure and analyze the determinants of the performance of Islamic banks for the pre- and post-2008 GFC periods. Of the three types of panel estimation models, only the common- and fixed-effect models were found to be suitable for estimating the data in this study; the random-effect model could not be estimated because the number of cross-sections is smaller than the number of researched variables. Therefore, the selection of the best model between the common-effect model and the fixed-effect model was tested using the Chow test. The test showed that the fixed-effect model is the most suitable model for further data analysis. Table 4 reports the findings of the estimated fixed-effect model.

As observed from Table 4, the study found significant simultaneous effects of capital adequacy, liquidity, non-performing loans/financing, and operating expenses on the performances of both conventional banks and Islamic banks at the 1% level of significance during the pre- and post-2008 GFC periods. The estimated coefficients of determination (Adjusted-R2) of 0.7789 and 0.6449 for the conventional banks in the pre- and post-2008 GFC periods, respectively, signify that 77.89% and 64.49% of the variations in conventional banking performance were explained by the investigated determinants during the pre- and post-2008 GFC periods, respectively. Meanwhile, the remaining 22.11% and 35.51% of the changes in conventional banking performance during the pre- and post-2008 GFC periods were explained by other variables beyond our estimated model, such as other bank characteristics and macroeconomic variables (Chowdhury, Haque, & Masih, 2017).

Diagnostic tests for the Islamic banks: pre-2008 GFC: F-stat = 85.310***, Prob. = 0.000, D-W = 0.640, Adj-R2 = 0.2493; post-2008 GFC: F-stat = 43.190***, Prob. = 0.000, D-W = 0.540, Adj-R2 = 0.7029. Note: *** and ** indicate significance at the 1% and 5% levels, respectively.

Meanwhile, the estimated coefficients of determination for Islamic banks of 0.2493 and 0.7029 indicate that 24.93% and 70.29% of the variations in Islamic banking performance were explained by the investigated determinants during the pre- and post-2008 GFC periods, respectively. In comparison, the remaining 75.07% and 29.71% were explained by other internal and external factors affecting banking performance that were not included in our estimated models. These findings further show that the estimated value of Adjusted-R2 for Islamic banks during the post-2008 GFC period is higher than that of their conventional counterparts, implying a greater ability of the determinants to predict the changes in Islamic banking performance than in conventional banking performance. Furthermore, Table 4 also illustrates that capital adequacy had a significant positive effect on conventional banks' performance during the pre-2008 GFC period, but the effect turned negative in the post-2008 GFC period, both at the 1% level of significance. These estimated values signify that, ceteris paribus, every 100% increase in capital adequacy contributed to a rise in conventional banking performance of 126.2% in the pre-2008 GFC period, but caused a decline in performance of 118.9% in the post-2008 GFC period.
The decline in the capital adequacy of the conventional banks from the pre-2008 GFC period to the post-2008 GFC period contributed to the change in the direction of the effect of capital on banking performance from positive to negative. These findings show that the presence of the 2008 GFC reduced the adequacy of banking capital, which in turn caused conventional banking performance to decline. This finding is in line with the previous studies conducted by Sukarno and Syaichu and Ongore and Kusa, who found a significant effect of capital adequacy on banking performance. On the other hand, capital adequacy was found to have a significant positive influence on Islamic banking performance during the pre-2008 GFC period at the 1% level, but the effect became insignificant in the post-2008 GFC period. This finding indicates that the capital adequacy of Islamic banks made a real contribution to the enhancement of their performance, by 1.296 units, during the pre-2008 GFC period, while the small decline in the capital adequacy of the Islamic banks during the post-2008 GFC period caused insignificant changes in their performance. These findings show a better ability of Islamic banks to manage capital adequacy compared to their conventional counterparts during the 2008 GFC. In other words, a relatively small decline in capital adequacy did not change Islamic banking performance. This finding is in harmony with previous studies conducted by Dewi and Sabir and Habbe, who documented a positive contribution of prudent capital management to Islamic banks' performance.

Moreover, the study recorded a significant negative influence of liquidity on conventional banks' performance at the 1% significance level during the pre-2008 GFC period, but the effect became insignificant after the 2008 GFC. When the banks accumulated more funds from third parties (depositors) but failed to channel them to borrowers, their accumulated profits from loan interest ultimately declined as well. However, in the post-2008 GFC period, the conventional banks reduced their credit to customers, which prevented their performances from weakening further. Meanwhile, the study found a significant effect of financing on Islamic banking performance in the pre-2008 GFC period, with an estimated value of 1.305. This finding shows that an increase in financing by 100% caused banking performance to increase by 130.5%. Compared to the conventional banks, this finding indicates that the better ability of Islamic banks to channel the funds collected from depositors to their financing customers caused their performance to increase in the pre-2008 GFC period. Thus, the higher the profits generated by the banks, the better their financial performances. However, the small increase in their financing in the post-2008 GFC period made an insignificant contribution to promoting their performance. These findings also show that the presence of the crisis has changed the landscape of the banking industry in Indonesia. This finding is consistent with the previous studies conducted by Hassan Al-Tamimi, Sabir and Habbe, and Margaretha and Zai, who found the importance of liquidity in improving banking performance. The study also documented insignificant effects of non-performing loans and non-performing financing on the performances of conventional and Islamic banks, respectively, during the pre-2008 GFC period.
This finding shows that the banks' credit or financing risk management was neutral, thereby having an insignificant impact on banking performances (Chamberlain, Hidayat, & Khokhar, 2020). This finding is consistent with previous studies that found a negligible effect of non-performing loans or non-performing financing on banking performance (Sukarno & Syaichu, 2006; Banik & Das, 2013). In the post-2008 GFC period, non-performing loans significantly caused a decline in conventional banks' performances, with an estimated value of -0.096. This finding shows that an increase in non-performing loans by 100% caused the performances of conventional banks to decline by 9.6%. This further indicates that the higher level of conventional banks' non-performing loans worsened their performances. The inability of conventional banks to impose prudent credit risk management is believed to have partially caused the decline in banking performances. On the other hand, non-performing financing is, surprisingly, found to have a significant positive effect on the performance of Islamic banks, with an estimated value of 0.166 at a significance level of 1%. This finding shows that an increase in non-performing financing by 100% contributed to a rise in Islamic banking performance of 16.6%. This is mainly due to the manageable amount of non-performing financing carried by the Islamic banks in the post-2008 GFC period. This finding shows that although the non-performing financing of Islamic banks slightly increased from the pre-2008 GFC period to the post-2008 GFC period (Table 2), it could still be consistent with an improvement in banking performance. This could happen simply because Islamic banking products are based on profit-loss sharing principles and because of the prudent financing risk management adopted by Islamic banks. Our finding of a positive effect of non-performing financing on Islamic banking performance in the post-2008 GFC period is in harmony with the findings of the previous study by Sabir and Habbe.

Finally, as illustrated in Table 4, a similar significant negative effect of operating expenses on both conventional and Islamic banking performances was found for the pre- and post-2008 GFC periods at a significance level of 1%. These findings show that a more substantial amount of operating expenses harmed banking performances, and vice versa. When the operational costs borne by the banks are high, these costs reduce net profits and, consequently, lower banking performances. Conversely, if the banks operate efficiently, they gain a higher advantage since they work at the lowest level of expenses. The inability of the management of both conventional and Islamic banks to keep their operational activities at an efficient level deteriorated the performances of the banks in both the pre- and post-2008 GFC periods. These findings are in accordance with previous studies that documented a negative effect of operating expenses on the performance of conventional banks (Sukarno & Syaichu, 2006; Margaretha & Zai, 2013) and of Islamic banks (Wibowo & Syaichu, 2013). Overall, our findings show that Islamic banks were more stable and resilient in responding to the 2008 GFC.
Our results also show the importance of bank management overseeing capital adequacy, liquidity, non-performing loans or financing, and operating expenses if they intend to manage and improve their financial performances, as these factors are documented to simultaneously affect the performances of both conventional and Islamic banks. Meanwhile, each determinant has a different effect on conventional and Islamic banking performances between the pre- and post-2008 GFC periods. These findings show that the occurrence of the 2008 GFC has changed the landscape of the banking industry (Rachdi & Mokni, 2014). The conventional and Islamic banks responded differently to the crisis to maintain and improve their performances due to the changing effects of the determinants of banking performance from the pre-2008 GFC period to the post-2008 GFC period. Ensuring an adequate amount of capital and liquidity of funds would promote banking performance. Imposing prudent credit or financing risk management by selectively channeling funds to borrowers would also help ensure the improvement of banking performances. Finally, to improve their performances, the banks should impose efficiency measures on their operational activities.

Conclusion

This study measured and comparatively analyzed the determinants of the performances of conventional and Islamic banks in Indonesia between the pre- and post-2008 GFC periods. The study documented that capital adequacy positively affected both conventional and Islamic banking performances in the pre-2008 GFC period. However, in the post-2008 GFC period, capital adequacy negatively affected conventional banking performances and insignificantly affected Islamic banking performances. As for liquidity, the study documented negative and positive effects on the performances of conventional and Islamic banks, respectively, in the pre-2008 GFC period, while in the post-2008 GFC period liquidity had an insignificant impact on both conventional and Islamic banking performances. Furthermore, the study recorded a significant adverse effect of non-performing loans on conventional banking performances in the post-2008 GFC period. On the other hand, non-performing financing was found to have an insignificant effect on Islamic banking performances in the pre-2008 GFC period, but the effect turned significantly positive in the post-2008 GFC period. Finally, operating expenses had similar negative effects on conventional and Islamic banking performances during both the pre- and post-2008 GFC periods. Overall, our findings show that the occurrence of the 2008 GFC changed the direction of the effects of capital adequacy, liquidity, and non-performing loans/financing on the performances of conventional and Islamic banks in Indonesia. In other words, the crisis has, to some extent, changed the landscape of the banking industry in Indonesia. However, the Islamic banks were documented to be in a better position when hit by the financial crisis, implying greater stability and resilience of Islamic banks over their conventional counterparts during the financial crisis. Our findings suggest the importance of expanding the interest-free Islamic banking industry to create more stability in the national economy, due to the just and fair practices of the Islamic banking system operated based on Islamic tenets. This could be done by ensuring that banking institutions abide by the regulations stipulated by the government to maintain the stability of their performances.
The government is advised to support Islamic banks by placing more funds in them to ensure their capital adequacy. Future studies should include more banks and consider incorporating both internal and external factors determining banking performances in their analyses to enrich the findings on this topic. Comparing the effects of more episodes of economic crises would also provide a better picture of the impacts of crises on both conventional and Islamic banking performances. |
Whether renting a house or apartment, tenants have local, state and federal rights.
Renters are bound by the terms and responsibilities in rental agreements. However, tenants are protected by local, state and federal laws against unlawful practices on the part of landlords. The California Department of Consumer Affairs defines a landlord as either an individual or a business entity that owns rental property, and a tenant as the individual renting the property. As a tenant, you can expect to have certain rights that govern the relationship with your landlord.
Tenants have the right to privacy in their homes, and the landlord may enter only for specific reasons. Under state law, landlords can enter a tenant’s home if the property has been abandoned by the tenant. In addition, landlords are allowed to enter to inspect the property at the end of the rental agreement or carry out repairs and improvements. Landlords can also enter a property to show the rental to potential tenants, contractors or buyers.
Tenants belonging to legally protected groups or who have protected characteristics may not be excluded from renting a dwelling by landlords or their representatives. California law specifies that it is against the law to discriminate against tenants on the basis of gender, religion, marital status, race, medical conditions, sexual orientation, national origins, source of income and disability.
By law, landlords are required to disclose the presence of harmful agents that have contaminated the rental property. For example, tenants have the right to know if lead-based paint or any connected hazards are present in the rental property if the property was built before 1978. However, landlords are not under the obligation to test for or remove lead-based paint. Landlords must also disclose if the property has been contaminated by asbestos and the production of methamphetamines. Further disclosures must indicate if there was a death in the property and whether the property is within a mile of an unused military base where explosives were used.
Tenants have the right to live in a habitable rental dwelling. Under California law, habitability means that the property can be occupied by humans and meets building, health and safety codes. To this end, both landlords and tenants bear a certain amount of responsibility. For example, landlords are required to prepare the rental property for occupation. Landlords are also responsible for ongoing repairs necessary to maintain the rental in habitable condition. In turn, tenants are legally required to take care of their homes and liable for damages resulting from personal abuse or neglect.
Limitations on tenant rights exist in special situations where there is no traditional relationship between tenant and landlord. For example, residents of hotels and motels do not have tenant rights if occupancy is under 30 days and the lodging is liable for the hotel occupancy tax. If occupancy is longer than 30 days and the resident has not paid lodging charges by the 30th day, the resident is not protected by tenant rights.
CA Department of Consumer Affairs: Who is a "Landlord" and Who is a "Tenant"
Engagement on Digital Platforms: A Theoretical Perspective In this short paper, we develop a comprehensive definition of digital platform engagement as a user's degree of voluntary allocation of personal cognitive, emotional, and behavioral resources to a platform-related interaction. We define each of these aspects of engagement (cognitive, emotional, and behavioral) as they relate to the different objects with which users can interact: other users, the content or product that the service offers, and the platform itself. We provide distinct conceptualizations of each of these dimensions of engagement as it relates to the objects that users interact with, with the goal of resolving inconsistencies, disentangling concepts that are confounded with one another, and ultimately informing further academic research and practice. |
Comparison of the Accuracy of Fit of Metal, Zirconia, and Lithium Disilicate Crowns Made from Different Manufacturing Techniques. PURPOSE To evaluate the accuracy of fit of metal, lithium disilicate, and zirconia crowns, which were produced using different manufacturing techniques. MATERIALS AND METHODS Ten patients in need of a molar crown were recruited. Eight crowns were fabricated for each patient: 2 zirconia, 3 lithium disilicate (e.max), and 3 metal-ceramic crowns using conventional, conventional/digital, and digital techniques. Marginal, axial, and occlusal gaps were measured using a replica technique. Replicas were sectioned mesiodistally and buccolingually and were observed under a stereomicroscope. A total of 32 measurements for each crown replica at 3 different points (12 marginal, 12 axial, and 8 occlusal) were performed. Statistical analysis was performed using two-way ANOVA and Tukey HSD tests. RESULTS Marginal means ranged from 116.39 ± 32.76 μm for the conventional metal-ceramic group to 147.56 ± 31.56 μm for the digital e.max group. The smallest axial gap was recorded for the digital zirconia group (76.19 ± 23.94 μm), while the largest axial gap was recorded for the conventional e.max (101.80 ± 19.81 μm) and conventional/digital metal-ceramic groups (101.80 ± 35.31 μm). The conventional e.max crowns had the smallest occlusal mean gap (185.59 ± 59.09 μm), while the digital e.max group had the largest occlusal mean gap (295.38 ± 67.80 μm). Type of crown had no significant effect on marginal (p = 0.07, f = 2.71), axial (p = 0.75, f = 0.29), or occlusal fit (p = 0.099, f = 2.4), while fabrication method had a significant effect on axial gap only (p = 0.169, f = 1.82, p = 0.003, f = 6.21, and p = 0.144, f = 2 for marginal, axial, and occlusal fit, respectively). Digital fabrication produced significantly smaller axial gaps than the conventional method (p = 0.02) and the conventional/digital method (p = 0.005). CONCLUSIONS The type of crown and the method of manufacturing had no effect on the marginal and occlusal gap of single posterior crowns, while the method of manufacturing had a significant effect on the axial gap. The digital method produced the smallest axial gap in comparison with the other methods, while the type of crown had no effect on the axial gap. |
//
// Generated by class-dump 3.5 (64 bit).
//
// class-dump is Copyright (C) 1997-1998, 2000-2001, 2004-2013 by <NAME>.
//
#import <OfficeImport/PDAnimateBehavior.h>
@class OADColor;
__attribute__((visibility("hidden")))
@interface PDAnimateColorBehavior : PDAnimateBehavior
{
BOOL mHasBy;
double mBy[3];
BOOL mHasFrom;
OADColor *mFrom;
BOOL mHasTo;
OADColor *mTo;
BOOL mHasColorSpace;
int mColorSpace;
BOOL mHasColorDirection;
int mDirection;
}
- (void).cxx_destruct;
- (void)setDirection:(int)arg1;
- (int)direction;
- (BOOL)hasColorDirection;
- (void)setColorSpace:(int)arg1;
- (int)colorSpace;
- (BOOL)hasColorSpace;
- (void)setTo:(id)arg1;
- (id)to;
- (BOOL)hasTo;
- (void)setFrom:(id)arg1;
- (id)from;
- (BOOL)hasFrom;
- (void)setBy:(double [3])arg1;
- (double (*)[3])by;
- (BOOL)hasBy;
@end
|
Cytomegalovirus infections in heart and heart and lung transplant recipients. Of the first 166 heart and 15 heart and lung transplant recipients at Papworth Hospital, Cambridge, who survived for more than one month after transplantation, 162 were investigated for cytomegalovirus (CMV) infection by serological methods. Altogether, 73 (45%) developed CMV infection after transplantation: 30 (18.5%) had acquired primary infection and 43 (26.5%) reactivation or reinfection. Six patients died of primary infection, probably acquired from the donor organ. Recipients negative for CMV antibody who received an organ from an antibody-positive donor had the most severe disease. Heart and lung transplant recipients experienced more severe primary CMV infection than those in whom the heart alone was transplanted. The most sensitive and rapid serological method was a mu-capture enzyme-linked immunosorbent assay (ELISA) for detecting CMV-specific IgM, the amount of which was often of prognostic value and influenced the management of patients. |
Legal Briefing: Medical Futility and Assisted Suicide Since 2009, Professor Pope has authored a quarterly "Legal Briefing" column for the Journal of Clinical Ethics. Each briefing comprehensively reviews legal developments concerning a particular issue in clinical bioethics. The Journal of Clinical Ethics owns the exclusive copyright to distribute the full-text content. This column is the successor of the Legal Trends column published in this journal since its first issue. But as indicated by the title, this column has adopted a new format. The old format consisted of a comprehensive citation-rich survey of all legal developments across a wide range of bioethics topics. While this nearly bibliographic collection of authorities was surely useful to many readers, staking out such a broad field left little room for analysis or explanation of the briefly-mentioned legal developments. This new Legal Briefing column will cover legal developments pertaining to just one or two topics in clinical ethics. Thereby, this column aims to provide readers with a deeper, more thorough understanding of evolving legal themes and issues. The primary objective is to synthesize and analyze recent changes in and applications of the law, not to assess the ethical or jurisprudential justifiability of such changes and applications. This issue's Legal Briefing column covers legal developments pertaining to medical futility and assisted suicide. Not only have both these topics been the subject of recent articles in this Journal, but they have also been the subject of significant legislative and judicial activity over the past several months. While seemingly unrelated, medical futility and assisted suicide are endpoints on the same continuum. While the futility debate exemplifies limits on autonomy to pursue life-prolonging procedures, the assisted suicide debate exemplifies limits on autonomy to pursue life-ending procedures. |
The Right Reverend Paul McAleenan and the Right Reverend John Wilson were ordained as Auxiliary Bishops for the diocese today, the Feast of the Conversion of St Paul, 25 January 2016, at the Metropolitan Cathedral of the Most Precious Blood in Westminster. Bishop McAleenan has been assigned the titular see of Mercia and Bishop Wilson has been assigned the titular see of Lindisfarne.
His Eminence Cardinal Vincent Nichols was the main celebrant and principal ordaining officer. He was assisted by Bishops John Sherrington and Nicholas Hudson as the principal co-consecrating bishops. The ordination was attended by His Eminence, Cardinal Cormac Murphy-O'Connor, Emeritus Archbishop of Westminster; the Most Reverend Antonio Mennini, the Apostolic Nuncio; the Most Reverend Arthur Roche, Secretary of the Congregation for Divine Worship and the Discipline of the Sacraments; as well as archbishops, bishops and nearly 200 priests from across England and Wales, particularly from the Dioceses of Westminster and Leeds; ecumenical guests representing several churches and ecclesial communities; the Lord Mayor of Westminster; and Mayors and Members of Parliament from Hertfordshire and the western London Boroughs, representing the areas of pastoral responsibility of the newly-ordained bishops.
In his homily, Cardinal Nichols focused on the lessons from the life of St Paul, on the day the Church celebrates the feast of his conversion.
He explained that 'every bishop is chosen by the Father and given to his Son to be his companion in a special way;' and that this is the 'deepest identity of the bishop: to be a 'companion, with the apostles, of the Lord Jesus'.
It is this identity that 'gives shape to the daily life of the bishop' and is 'the rock of his life'. It is this same 'bond that remains at the heart of all that St Paul is and does', in the service of the Lord.
The Cardinal also encouraged the bishops to have a 'renewed sense of mission', encouraging and developing participation with 'the missionary aspiration of reaching everyone'.
Bishop Paul McAleenan will have pastoral responsibility for the deaneries of Hertfordshire and Bishop John Wilson will have pastoral responsibility for the deaneries in the western area of the diocese.
Prior to his episcopal ordination, Bishop McAleenan was a priest of the Diocese of Westminster for over 30 years. Born in Belfast, he trained at St Patrick's College, Thurles. He was ordained to the priesthood on 8 June 1985 and has served in various capacities throughout this time, beginning as Assistant Priest at Our Lady of Grace and St Edward in Chiswick. In 1987, he was appointed Assistant Priest at St Aidan's in Acton and Chaplain at Hammersmith Hospital. In 1990, he was appointed Assistant Priest in the Stevenage Team Ministry.
He was appointed Parish Priest of St Scholastica's in Clapton in 1994, where he remained until 2001. He was then appointed Parish Priest of Holy Rood Watford. He was appointed a member of the Cathedral Chapter in 2010.
On 24 November 2015, he was appointed Auxiliary Bishop of Westminster by Pope Francis. Bishop McAleenan will have pastoral care of the deaneries of Hertfordshire.
Prior to his episcopal ordination, Bishop Wilson was a priest of the Diocese of Leeds. He trained at the Venerable English College and was ordained to the priesthood on 29 July 1995 by Bishop David Konstant. He was appointed Assistant Priest at St Joseph's, Pontefract in 1995, as well as hospital, hospice and school chaplain. He was appointed Assistant Priest at St Joseph's, Bradford in 1998, and also served as school chaplain.
In 1999, he was appointed Lecturer in moral theology at St Cuthbert's Seminary, Ushaw College, Durham. He also completed a PhD at Durham University and latterly served as Vice Rector. In 2005, he was appointed Episcopal Vicar for Evangelisation in the Diocese of Leeds, a role he held until 2012. From 2008 to 2014, he was sessional chaplain at HMP Leeds.
He was named a Chaplain to His Holiness by Pope Benedict XVI in May 2011. He was elected Administrator of the Diocese of Leeds by the College of Consultors during the vacancy of the see, an office which he held from September 2012 to November 2014. Most recently he served as Parish Priest of St Martin de Porres, Wakefield.
On 24 November 2015, he was appointed Auxiliary Bishop of Westminster by Pope Francis. Bishop Wilson will have pastoral care of the deaneries of the western area of the diocese.
Today we warmly welcome two new bishops into our diocesan church. What a great moment and what a blessing we are receiving, a blessing to strengthen us in so many ways.
As you so well know, the Church's storehouse is full of splendid statements about the dignity, role and vocation of the bishop. Here are just a few nuggets. The bishop as successor of the Apostles: apostle, one sent out to teach; the bishop as overseer of the diocesan Church, the original meaning of the word episcope; the bishop as a member of the College of Bishops, a sign and source of the universal nature and unity of the Church; the bishop's office rooted in the Trinitarian understanding of our entire Christian life, called by the Father, bound to the Son, filled with the Holy Spirit; the bishop with his triple tasks of prophet, priest and king, readily symbolised in the book of the Gospels held over his head, his ring of dedication and service, and his pastoral staff. There are so many rich themes for our reflection today.
But I would like to focus on just two, both of which come into sharp focus in the person of St Paul, whose conversion we celebrate today.
The first of these is that every bishop is chosen by the Father and given to his Son to be his companion in a special way. It is then the Father's will that we are here today, about to call on the Holy Spirit to transform Paul and John into chosen and loving companions of his Son in the company of the apostles. We read in the Gospel of Mark how Jesus 'summoned those he wanted.' We read, 'So they came to him and he appointed twelve; they were to be his companions' (Mk 3.14).
Here we have the deepest identity of the bishop: companions, with the apostles, of the Lord Jesus. It is this that gives shape to the daily life of the bishop, striving above all to stay close to the Lord in the midst of all the demands made upon him. This relationship is the rock of his life. Without it we bishops lose our focus and our true sense of purpose. We can so easily become functionaries of a demanding service provider. But that is not the life of the Church, even if at times it might seem so! Indeed, that particular roadblock is only avoided when every one of us, parishioners, priests, religious, bishops, is rooted in the life of Christ and sees our Church as a sharing in that life, held together by him and serving him alone.
The conversion of St Paul, with its drama, starts with this same point. Saul heard the powerful voice, rebuking him. Then he asked the fateful question: 'Who are you?' Once the answer was given, 'l am Jesus of Nazareth and you are persecuting me', a new world opened up for Paul. Immediately he replies 'What am I to do?' His bond with the Lord is sealed and his vocation begins to unfold. (Acts 22: 3-16).
'The most important thing of all to him, however, was that he knew himself to be loved by Christ. Enjoying this love, he considered himself happier than anyone else; were he without it, it would be no satisfaction to be the friend of principalities and powers. He preferred to be thus loved and be the least of all, or even to be among the damned, than to be without that love and be with the great and honoured.' (Homily 2 on St Paul).
I pray that we bishops, and indeed every one of us, can be the same. As we strive to imitate St Paul, especially as bishops, co-apostles with him, let us remember that in our life there can be no place for high horses (Paul had to come off his), for prestige, or for love of the footlights. With Paul, who faced the criticism of being ambitious, the only thing we are to boast about is the Lord and the joy and consolation of knowing him and of preaching his Gospel (1 Cor 1.31).
Now the second facet of the life of a bishop, on which you, Paul and John, both embark today, is also contained in that line from St Mark's Gospel and in the life of St Paul. Jesus chose the twelve 'to be his companions and to be sent out to preach.' Today's Gospel passage gives this same command: 'Go out to the whole world; proclaim the Good News to all creation' (Mk 16.15). St Paul lived that mission with every fibre of his being. He sets the standard.
A renewed sense of mission is central to the Church today, and indeed central to the life of our diocese. Pope Francis spells out this task repeatedly. He asks us 'to put all things into this missionary key' (Evangelii Gaudium 34) and seeing the parish not 'as an outdated institution' but as 'the presence of Christ in a given territory', as 'the Church living in the midst of the homes of her sons and daughters.' He describes the parish as 'a community of communities, a sanctuary where the thirsty come to drink in the midst of their journey and a centre of constant missionary outreach' (EG28).
'The bishop must always foster this missionary communion in his diocesan Church... to do so he will sometimes go before his people, pointing the way and keeping their hopes vibrant. At other times, he will simply be in their midst with his unassuming and merciful presence. At yet other times, he will have to walk after them, helping those who lag behind and, above all, allowing the flock to strike out on new paths. In his mission of fostering a dynamic, open and missionary communion, he will have to encourage and develop the means of participation... with the principal aim not of ecclesiastical organisation but rather the missionary aspiration of reaching everyone.' (EG 31). And today we add, reaching everyone with the message of God's mercy.
This then is our mandate and our pathway. This is the shape of our episcopal ministry, or at least the shape to which we aspire: two Johns, Nicholas, Paul and Vincent. To this journey you, Paul and John are most welcome. You can be sure of that. You can be sure of the loving prayers and support of all the priests and people of this great diocese. We thank God for you, even as we now beseech him to grant to you a full measure of his Holy Spirit, marking you forever as his apostles, his bishops, whose lives will now be to his praise in this new and blessed ministry. |
Go Set a Watchman-Dive Right In The literary world has been delirious with excitement since the announcement in February that a second novel by Harper Lee had been found and was to be published. To Kill a Mockingbird was Harper Lee's first (and until now only) novel published in 1960, winner of the 1961 Pulitzer Prize, and certainly one of the most beloved novels in American history. Having recently read Go Set a Watchman which, at this writing, is #1 on the New York Times Bestsellers List, and, judging by our holds, it's a very popular book (but don't worry we have plenty of copies!), I will attempt to answer a few questions you may have. |
# Code.py -- a simple Tkinter calculator
from tkinter import *
root = Tk()
root.title("Calculator")
e = Entry(root, width = 40, borderwidth = 8)
e.grid(row = 0, column = 0, columnspan = 3, padx = 10, pady = 10)
def button_click(number):
    # Append the pressed digit to whatever is already shown in the entry box
    #e.delete(0, END)
    current = e.get()
    e.delete(0, END)
    e.insert(0, str(current) + str(number))
def button_clear():
    # Clear the entry box
    e.delete(0, END)
def button_add():
    # Remember the first operand and the chosen operation in globals, then clear
    # the entry so the second operand can be typed
    first_number = e.get()
    global f_num
    global math
    math = "addition"
    f_num = int(first_number)
    e.delete(0, END)
def button_equal():
    # Apply the stored operation to the stored first operand and, for binary
    # operations, the number currently in the entry box, then show the result
    second_number = e.get()
    e.delete(0, END)
    if math == "addition":
        e.insert(0, f_num + int(second_number))
    if math == "subtraction":
        e.insert(0, f_num - int(second_number))
    if math == "multiplication":
        e.insert(0, f_num * int(second_number))
    if math == "division":
        e.insert(0, f_num / int(second_number))
    if math == "power2":
        e.insert(0, f_num ** int(2))
    if math == "power":
        e.insert(0, f_num ** int(second_number))
    if math == "power3":
        e.insert(0, f_num ** int(3))
# The remaining operator buttons mirror button_add with a different stored operation
def button_subtract():
    first_number = e.get()
    global f_num
    global math
    math = "subtraction"
    f_num = int(first_number)
    e.delete(0, END)
def button_multiply():
    first_number = e.get()
    global f_num
    global math
    math = "multiplication"
    f_num = int(first_number)
    e.delete(0, END)
def button_divide():
    first_number = e.get()
    global f_num
    global math
    math = "division"
    f_num = int(first_number)
    e.delete(0, END)
def button_power2():
    first_number = e.get()
    global f_num
    global math
    math = "power2"
    f_num = int(first_number)
    e.delete(0, END)
def button_power():
    first_number = e.get()
    global f_num
    global math
    math = "power"
    f_num = int(first_number)
    e.delete(0, END)
def button_power3():
    first_number = e.get()
    global f_num
    global math
    math = "power3"
    f_num = int(first_number)
    e.delete(0, END)
#Define buttons
button1 = Button(root,text = "1", padx = 40, pady = 20, command = lambda: button_click(1),bg = "black",fg = "white")
button2 = Button(root,text = "2", padx = 40, pady = 20, command = lambda: button_click(2), bg = "black",fg = "white")
button3 = Button(root,text = "3", padx = 40, pady = 20, command = lambda: button_click(3), bg = "black",fg = "white")
button4 = Button(root,text = "4", padx = 40, pady = 20, command = lambda: button_click(4), bg = "black",fg = "white")
button5 = Button(root,text = "5", padx = 40, pady = 20, command = lambda: button_click(5), bg = "black",fg = "white")
button6 = Button(root,text = "6", padx = 40, pady = 20, command = lambda: button_click(6), bg = "black",fg = "white")
button7 = Button(root,text = "7", padx = 40, pady = 20, command = lambda: button_click(7), bg = "black",fg = "white")
button8 = Button(root,text = "8", padx = 40, pady = 20, command = lambda: button_click(8),bg = "black",fg = "white")
button9 = Button(root,text = "9", padx = 40, pady = 20, command = lambda: button_click(9), bg = "black",fg = "white")
button0 = Button(root,text = "0", padx = 40, pady = 20, command = lambda: button_click(0), bg = "black",fg = "white")
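# Note: the widget variables below reuse the operator handler function names.
# Each command= reference captures the function object before the name is
# reassigned to the Button widget, so the callbacks still work as intended.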
button_add = Button(root,text = "+", padx = 39, pady = 20, command = button_add, bg = "red", fg = "white")
button_equal = Button(root,text = "=", padx = 89, pady = 20, command = button_equal, bg = "yellow", fg = "black")
button_clear = Button(root,text = "Clear", padx = 80, pady = 20, command = button_clear)
button_subtract = Button(root,text = "-", padx = 41, pady = 20, command = button_subtract, bg = "green", fg = "white")
button_multiply = Button(root,text = "x", padx = 40, pady = 20, command = button_multiply,bg = "green", fg = "white")
button_divide = Button(root,text = "/", padx = 41, pady = 20, command = button_divide,bg = "green", fg = "white")
button_power = Button(root,text = "X*", padx = 38, pady = 20, command = button_power, bg = "pink", fg = "black")
button_power2 = Button(root,text = "x2", padx = 37, pady = 20, command = button_power2, bg = "pink", fg = "black")
button_power3 = Button(root,text = "x3", padx = 38, pady = 20, command = button_power3, bg = "pink", fg = "black")
#Put the buttons on the screen
button1.grid(row =3 , column =0)
button2.grid(row =3 , column =1)
button3.grid(row =3 , column =2)
button4.grid(row =2 , column =0)
button5.grid(row =2 , column =1)
button6.grid(row =2 , column =2)
button7.grid(row =1 , column =0)
button8.grid(row =1 , column =1)
button9.grid(row =1 , column =2)
button0.grid(row =4 , column =0)
button_add.grid(row = 5 ,column = 0)
button_equal.grid(row = 5 ,column = 1, columnspan = 2)
button_clear.grid(row = 4 ,column = 1, columnspan = 2)
button_subtract.grid(row = 6, column = 0)
button_multiply.grid(row = 6, column = 1)
button_divide.grid(row = 6, column = 2)
button_power.grid(row = 7, column = 0)
button_power2.grid(row = 7, column = 1)
button_power3.grid(row = 7, column = 2)
root.mainloop()
|
At first, I was repulsed by the liquid meal replacement. Then I tried it. Then I loved it.
There’s a scene in Austin Powers: The Spy Who Shagged Me where Austin is telling Felicity Shagwell about the year 1999. Jokingly, he says all the food in the future is in pill form and the planet is ruled by apes, only for her to be horrified. That was me when I first heard about Soylent, Silicon Valley’s meal-replacement drink of choice. I was disgusted by the future.
When I first heard of Soylent (on Reddit, of course), users hailed it as an “amazing alternative to food” or a great staple for when you’re too lazy to eat. Soylent isn’t a pill but an off-white 400-calorie drink with a consistency somewhere between a milkshake and pancake batter. But not unlike the futuristic concept of food in pill form, its only purpose is to make you feel like you just consumed a meal, effectively relieving you of all the enjoyment food can provide. It’s Ensure, but for nerds who prioritize efficiency over pleasure.
My disgust stemmed from many places: the fact that anyone could be “too busy” to eat; that somehow, after hundreds of thousands of years, humans decided they needed an alternative to regular food. Humankind didn’t go through the horror of ’70s food trends only for us to drink pasty, flavorless sludge.
As Soylent gained mainstream popularity, I felt validated for having such a deep hatred. Soon everyone was making jokes about the company’s eccentric CEO, Rob Rhinehart, a man who once literally stopped taking shits to save water. Soylent had its place as the butt of jokes about clueless tech bros. But just as it solidified its place on the Internet as one of the Worst Things Ever, I began a sinful relationship with the product.
One day this past summer, my best friend sent me a text message that said, “Someone at work gave me a Soylent, and it’s actually pretty good.” I cracked my fingers and began roasting her. “Congratulations,” I said to her, “you’re officially on the wrong side of humanity.” She told me it was so nice to not think about lunch for work. I assured her she was becoming lazy and needed to check herself. She ended up liking it so much she bought a case.
The month of Ramadan was soon approaching, and in Canada that means almost a full day of not eating. Before a day of fasting begins, Muslims usually eat a pre-dawn meal—something I strategically plan for. The first couple of days are always a little difficult, so my friend suggested I take two bottles of Soylent, “just to see.” Hunger can make anyone do crazy things, so I took up her offer and drank it the next morning.
I’m ashamed to admit it worked. I was less hungry than I’d ever been during a summer Ramadan. It was as efficient as all the nerds said, but beyond that, I genuinely fell in love. The thick yet smooth texture and its bland cereal-milk taste were comforting, for some reason. It made me feel like I ate enough food, but it didn’t weigh anything in my stomach. While I was repulsed by the website’s image of a hip Soylent drink, I purchased a case and began looking forward to drinking one every morning before dawn.
Despite becoming a true believer, I felt shame. Mostly because as I quietly immersed myself in the culture surrounding Soylent by obsessively reading Reddit threads, I hated myself for who I’d become. No doubt I was correct about the kind of people who enjoyed Soylent; the diehard fans are still weirdos. A quick look at the Soylent hashtag on Instagram gave me a glimpse into the saddest fridge in the world. I didn’t (and still don’t) want to be associated with the drink, and it was difficult not to declare my love for it daily on Twitter. Soon I began halfheartedly telling people when they’d mention it—testing the waters to see if they would judge me or not. Like a cool teen with a crush on a band kid, I would find myself trying to defend it. “I mean, yeah, it’s awful, but it seems practical!”
The thing is, even though Soylent lovers are the fervent Hamilton fans of food innovation—they’re onto something. It’s worth buying if you’re someone who sometimes skips meals out of laziness or if Seamless orders frequently make you go over budget. Drinking Soylent before dawn saved me the mental and physical energy of waking up earlier to fry an egg. |
// Package numstr -- the line number of a word
package numstr
import (
"fmt"
"github.com/prospero78/goOC/internal/types"
)
// TNumStr -- operations on a line number
type TNumStr struct {
val types.ANumStr
}
// New -- returns a new INumStr
func New(val types.ANumStr) (types.INumStr, error) {
if val < 1 {
return nil, fmt.Errorf("numstr.go/New(): val(%v)<1", val)
}
ns := &TNumStr{
val: val,
}
return ns, nil
}
// Get -- returns the stored line number value
func (sf *TNumStr) Get() types.ANumStr {
return sf.val
}
// Set -- sets the stored line number value
func (sf *TNumStr) Set(val types.ANumStr) error {
if val < 1 {
return fmt.Errorf("TNumStr.Set(): val(%v)<1", val)
}
sf.val = val
return nil
}
|
Relocation, Climate Change and Finding a Place of Belonging for Rohingya Refugees An estimated 745,000 Rohingyas were forced to flee to Cox's Bazar, Bangladesh, after a deadly crackdown in Rakhine state, Myanmar in August 2017. Responding to this crisis, the Bangladesh government launched the relocation of Rohingyas from the dense camps in Cox's Bazar to Bhasan Char island in the Bay of Bengal in December 2020. This article argues that the refugees' perceptions of their idealized home (their place of belonging), composed of complex needs with security tied to environmental stability, have not adequately been considered in their relocation to Bhasan Char island. Further, the physical threats of climate change on the island combine with a denial of the spatial and cultural dimensions of home, creating the threat of Rohingyas becoming recycled refugees. The findings are based on qualitative case study research conducted with Rohingya refugees residing in Cox's Bazar and with those recently relocated to Bhasan Char. |
// src/main/java/com/example/main/ThriftJsonRebuilder.java
package com.example.main;
import com.google.gson.stream.JsonReader;
import com.google.gson.stream.JsonToken;
import com.google.gson.stream.JsonWriter;
import lombok.extern.slf4j.Slf4j;
import org.apache.thrift.*;
import org.apache.thrift.protocol.TJSONProtocol;
import org.apache.thrift.protocol.TProtocolFactory;
import org.springframework.util.StringUtils;
import sun.reflect.generics.reflectiveObjects.ParameterizedTypeImpl;
import java.io.*;
import java.lang.reflect.*;
import java.nio.ByteBuffer;
import java.util.*;
import java.util.stream.Collectors;
import static com.example.main.ThriftJsonType.*;
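/**
 * Rebuilds a plain JSON payload into the field-id / type-name layout expected by
 * Thrift's TJSONProtocol and then deserializes it into the supplied TBase request object.
 */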
@Slf4j
public class ThriftJsonRebuilder {
private final static Map<Class<?>, String> thriftTypeNameMap = new HashMap<>();
private final static Set<Class<?>> simpleClass = new HashSet<>();
static {
thriftTypeNameMap.put(boolean.class, NAME_BOOL);
thriftTypeNameMap.put(Boolean.class, NAME_BOOL);
thriftTypeNameMap.put(byte.class, NAME_BYTE);
thriftTypeNameMap.put(ByteBuffer.class, NAME_STRING);
thriftTypeNameMap.put(short.class, NAME_I16);
thriftTypeNameMap.put(Short.class, NAME_I16);
thriftTypeNameMap.put(int.class, NAME_I32);
thriftTypeNameMap.put(Integer.class, NAME_I32);
thriftTypeNameMap.put(long.class, NAME_I64);
thriftTypeNameMap.put(Long.class, NAME_I64);
thriftTypeNameMap.put(double.class, NAME_DOUBLE);
thriftTypeNameMap.put(Double.class, NAME_DOUBLE);
thriftTypeNameMap.put(String.class, NAME_STRING);
thriftTypeNameMap.put(Map.class, NAME_MAP);
thriftTypeNameMap.put(List.class, NAME_LIST);
thriftTypeNameMap.put(Set.class, NAME_SET);
thriftTypeNameMap.put(TBase.class, NAME_STRUCT);
thriftTypeNameMap.put(TEnum.class, NAME_ENUM);
simpleClass.addAll(Arrays.asList(
boolean.class, Boolean.class, byte.class, ByteBuffer.class, short.class, Short.class, int.class, Integer.class,
long.class, Long.class, double.class, Double.class, String.class));
}
public static <T> T jsonRebuild(String json, T request) throws TException, IOException {
String reformatJson = jsonReformat(json, request);
TProtocolFactory tProtocolFactory = new TJSONProtocol.Factory(false);
new TDeserializer(tProtocolFactory).deserialize((TBase) request, reformatJson, "UTF-8");
return (T) request;
}
public static <T> String jsonReformat(String json, T request) throws IOException {
JsonReader jsonReader = new JsonReader(new StringReader(json));
StringWriter resultWriter = new StringWriter();
JsonWriter jsonWriter = new JsonWriter(resultWriter);
jsonReader.beginObject();
jsonWriter.beginObject();
recursiveIterStruct(jsonReader, request.getClass(), jsonWriter);
return resultWriter.toString();
}
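// Walks a JSON object that corresponds to a Thrift struct, rewriting each field name to its
// Thrift field id and wrapping each value in an object keyed by its Thrift type name.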
private static void recursiveIterStruct(JsonReader jsonReader, Class<?> base, JsonWriter jsonWriter) throws IOException {
String lastTypeName = "";
try {
while (true) {
JsonToken nextToken = jsonReader.peek();
if (JsonToken.BEGIN_OBJECT.equals(nextToken)) {
jsonReader.beginObject();
jsonWriter.beginObject();
} else if (JsonToken.END_OBJECT.equals(nextToken)) {
jsonReader.endObject();
jsonWriter.endObject();
break;
} else if (JsonToken.NAME.equals(nextToken)) {
String name = jsonReader.nextName();
ThriftMeta thriftMeta = getThriftMeta(name, base);
jsonWriter.name(String.valueOf(thriftMeta.getThriftFieldId()));
jsonWriter.beginObject();
String typeName = getTypeName(thriftMeta.getThriftFieldType());
jsonWriter.name(typeName);
lastTypeName = typeName;
if (NAME_LIST.equals(typeName) || NAME_SET.equals(typeName)) {
jsonWriter.beginArray();
jsonWriter.value(getTypeName(thriftMeta.getThriftFieldSubTypeFirst()));
} else if (NAME_MAP.equals(typeName)) {
jsonWriter.beginArray();
jsonWriter.value(getTypeName(thriftMeta.getThriftFieldSubTypeFirst()));
jsonWriter.value(getTypeName(thriftMeta.getThriftFieldSubTypeSecond()));
}
if (isSimpleClass(thriftMeta.getThriftFieldType())) {
writeSimpleData(jsonWriter, jsonReader, lastTypeName);
} else if (NAME_LIST.equals(typeName) || NAME_SET.equals(typeName)) {
recursiveIterList(jsonReader,
jsonWriter, thriftMeta.getThriftFieldSubTypeFirst());
} else if (NAME_MAP.equals(typeName)) {
StringWriter subResultWriter = new StringWriter();
JsonWriter subWriter = new JsonWriter(subResultWriter);
Integer count =
recursiveIterMap(jsonReader, subWriter, thriftMeta.getThriftFieldSubTypeFirst(), thriftMeta.getThriftFieldSubTypeSecond());
jsonWriter.value(count);
jsonWriter.jsonValue(subResultWriter.toString());
jsonWriter.endArray();
} else if (NAME_ENUM.equals(typeName)) {
writeSimpleData(jsonWriter, jsonReader, lastTypeName);
} else {
recursiveIterStruct(jsonReader, thriftMeta.getThriftFieldType(), jsonWriter);
}
jsonWriter.endObject();
} else if (JsonToken.STRING.equals(nextToken)) {
String value = jsonReader.nextString();
jsonWriter.value(value);
} else if (JsonToken.NUMBER.equals(nextToken)) {
try {
Number value = jsonReader.nextLong();
jsonWriter.value(value);
} catch (NumberFormatException e) {
Number value = jsonReader.nextDouble();
jsonWriter.value(value);
}
} else if (JsonToken.BOOLEAN.equals(nextToken)) {
boolean value = jsonReader.nextBoolean();
jsonWriter.value(value);
} else if (JsonToken.NULL.equals(nextToken)) {
jsonReader.nextNull();
jsonWriter.nullValue();
} else if (JsonToken.END_DOCUMENT.equals(nextToken)) {
break;
}
}
} catch (IOException | NoSuchMethodException | IllegalAccessException
        | InvocationTargetException | NoSuchFieldException e) {
    log.error("Failed to rewrite thrift JSON structure", e);
}
}
private static Integer recursiveIterMap(JsonReader jsonReader, JsonWriter jsonWriter, Type keyParam, Type valueParam) throws IOException {
String lastFieldName = "";
Integer count = 0;
try {
while (true) {
JsonToken nextToken = jsonReader.peek();
if (JsonToken.BEGIN_OBJECT.equals(nextToken)) {
jsonReader.beginObject();
if (StringUtils.hasText(lastFieldName) && isTBaseClass((Class<?>) valueParam)) {
StringWriter subResultWriter = new StringWriter();
JsonWriter subWriter = new JsonWriter(subResultWriter);
subWriter.beginObject();
recursiveIterStruct(jsonReader, (Class<?>) valueParam, subWriter);
jsonWriter.jsonValue(subResultWriter.toString());
} else {
jsonWriter.beginObject();
}
} else if (JsonToken.END_OBJECT.equals(nextToken)) {
jsonReader.endObject();
jsonWriter.endObject();
break;
} else if (JsonToken.BEGIN_ARRAY.equals(nextToken)) {
jsonReader.beginArray();
jsonWriter.beginArray();
if (valueParam instanceof ParameterizedTypeImpl
&& List.class.equals(((ParameterizedTypeImpl) valueParam).getRawType())) {
jsonWriter.value(getTypeName(((ParameterizedTypeImpl) valueParam).getActualTypeArguments()[0]));
recursiveIterList(jsonReader, jsonWriter, ((ParameterizedTypeImpl) valueParam).getActualTypeArguments()[0]);
}
} else if (JsonToken.END_ARRAY.equals(nextToken)) {
jsonReader.endArray();
jsonWriter.endArray();
} else if (JsonToken.NAME.equals(nextToken)) {
String name = jsonReader.nextName();
jsonWriter.name(name);
count++;
lastFieldName = name;
} else if (JsonToken.STRING.equals(nextToken)) {
String value = jsonReader.nextString();
jsonWriter.value(value);
} else if (JsonToken.NUMBER.equals(nextToken)) {
try {
Number value = jsonReader.nextLong();
jsonWriter.value(value);
} catch (NumberFormatException e) {
Number value = jsonReader.nextDouble();
jsonWriter.value(value);
}
} else if (JsonToken.BOOLEAN.equals(nextToken)) {
boolean value = jsonReader.nextBoolean();
jsonWriter.value(value);
} else if (JsonToken.NULL.equals(nextToken)) {
jsonReader.nextNull();
jsonWriter.nullValue();
} else if (JsonToken.END_DOCUMENT.equals(nextToken)) {
break;
}
}
} catch (IOException e) {
e.printStackTrace();
}
return count;
}
private static void recursiveIterList(JsonReader jsonReader, JsonWriter jsonWriter, Type param)
throws IOException {
Queue<Object> queue = new LinkedList<>();
try {
while (true) {
JsonToken nextToken = jsonReader.peek();
if (JsonToken.BEGIN_OBJECT.equals(nextToken)) {
jsonReader.beginObject();
StringWriter subResultWriter = new StringWriter();
JsonWriter subWriter = new JsonWriter(subResultWriter);
subWriter.beginObject();
recursiveIterStruct(jsonReader, (Class<?>) param, subWriter); // TODO: nested Lists are not supported yet
queue.add(subResultWriter);
} else if (JsonToken.END_OBJECT.equals(nextToken)) {
jsonReader.endObject();
jsonWriter.endObject();
break;
} else if (JsonToken.BEGIN_ARRAY.equals(nextToken)) {
jsonReader.beginArray();
queue.clear();
} else if (JsonToken.END_ARRAY.equals(nextToken)) {
jsonReader.endArray();
queueOut(queue, jsonWriter);
jsonWriter.endArray();
return;
} else if (JsonToken.STRING.equals(nextToken)) {
String value = jsonReader.nextString();
queue.add(value);
} else if (JsonToken.NUMBER.equals(nextToken)) {
try {
Number value = jsonReader.nextLong();
queue.add(value);
} catch (NumberFormatException e) {
Number value = jsonReader.nextDouble();
queue.add(value);
}
} else if (JsonToken.BOOLEAN.equals(nextToken)) {
boolean value = jsonReader.nextBoolean();
queue.add(value);
} else if (JsonToken.NULL.equals(nextToken)) {
jsonReader.nextNull();
jsonWriter.nullValue();
} else if (JsonToken.END_DOCUMENT.equals(nextToken)) {
break;
}
}
} catch (IOException e) {
e.printStackTrace();
}
}
private static String getTypeName(Type type) {
if (ParameterizedTypeImpl.class.equals(type.getClass())) {
return getParameterizedTypeName((ParameterizedTypeImpl) type);
}
String typeName = thriftTypeNameMap.get((Class<?>) type);
if (typeName == null) {
if (isTBaseClass((Class<?>) type)) {
typeName = thriftTypeNameMap.get(TBase.class);
} else if (isTEnumClass((Class<?>) type)) {
typeName = thriftTypeNameMap.get(TEnum.class);
} else {
log.error(String.format("Can't find a thrift type with type %s", type.getTypeName()));
throw new RuntimeException(String.format("Can't find a thrift type with type %s", type.getTypeName()));
}
}
return typeName;
}
private static String getParameterizedTypeName(ParameterizedTypeImpl parameterizedType) {
Class<?> rawType = (Class<?>) parameterizedType.getRawType();
String typeName = thriftTypeNameMap.get(rawType);
if (typeName == null && isTBaseClass(rawType)) {
typeName = thriftTypeNameMap.get(TBase.class);
}
return typeName;
}
private static boolean isSimpleClass(Class<?> type) {
return simpleClass.contains(type);
}
private static void writeSimpleData(JsonWriter jsonWriter, JsonReader jsonReader, String lastTypeName) throws IOException {
JsonToken nextToken = jsonReader.peek();
if (JsonToken.STRING.equals(nextToken)) {
String value = jsonReader.nextString();
jsonWriter.value(value);
} else if (JsonToken.NUMBER.equals(nextToken)) {
try {
Number value = jsonReader.nextLong();
jsonWriter.value(value);
} catch (NumberFormatException e) {
Number value = jsonReader.nextDouble();
jsonWriter.value(value);
}
} else if (JsonToken.BOOLEAN.equals(nextToken)) {
boolean value = jsonReader.nextBoolean();
jsonWriter.value(value);
} else if (JsonToken.NULL.equals(nextToken)) {
jsonReader.nextNull();
jsonWriter.nullValue();
}
}
private static void queueOut(Queue<Object> queue, JsonWriter jsonWriter) throws IOException {
jsonWriter.value(queue.size());
while (!queue.isEmpty()) {
Object obj = queue.poll();
if (obj instanceof String) {
jsonWriter.value((String) obj);
} else if (obj instanceof Number) {
jsonWriter.value((Number) obj);
} else if (obj instanceof Boolean) {
jsonWriter.value((Boolean) obj);
} else if (obj == null) {
jsonWriter.nullValue();
} else if (obj instanceof StringWriter) {
jsonWriter.jsonValue(obj.toString());
} else {
throw new RuntimeException(
String.format("Type error with object %s", obj.getClass().getTypeName()));
}
}
}
private static boolean isTBaseClass(Class<?> base) {
if (base.getInterfaces().length == 0) {
return false;
}
// always contains one element
List<Class<?>> fieldInterfaces =
Arrays.stream(base.getInterfaces())
.filter(interfaceClz -> (interfaceClz.equals(TBase.class)))
.collect(Collectors.toList());
return !fieldInterfaces.isEmpty();
}
private static boolean isTEnumClass(Class<?> base) {
if (base.getInterfaces().length == 0) {
return false;
}
// always contains one element
List<Class<?>> fieldInterfaces =
Arrays.stream(base.getInterfaces())
.filter(interfaceClz -> (interfaceClz.equals(TEnum.class)))
.collect(Collectors.toList());
return !fieldInterfaces.isEmpty();
}
private static ThriftMeta getThriftMeta(String fieldName, Class<?> base)
throws NoSuchMethodException, InvocationTargetException, IllegalAccessException, NoSuchFieldException {
Class<?>[] innerCLz = base.getDeclaredClasses();
// findAny() already returns an Optional; avoid get(), which would throw if no enum inner class exists
Optional<Class<?>> optionalClz = Arrays.stream(innerCLz).filter(Class::isEnum).findAny();
Class<Enum> enumClz = (Class<Enum>) optionalClz.orElse(null);
Method method = enumClz.getMethod("findByName", String.class);
TFieldIdEnum tFieldIdEnum = (TFieldIdEnum) method.invoke(null, fieldName);
Field field = base.getField(tFieldIdEnum.getFieldName());
ThriftMeta meta = ThriftMeta.builder()
.thriftFieldId(tFieldIdEnum.getThriftFieldId())
.thriftFieldName(tFieldIdEnum.getFieldName())
.thriftFieldType(field.getType())
.thriftFieldTypeName(field.getType().getTypeName())
.build();
if (List.class.equals(field.getType()) || Set.class.equals(field.getType()) || Map.class.equals(field.getType())) {
ParameterizedType parameterizedType = (ParameterizedType) field.getGenericType();
Type[] actualTypeArguments = parameterizedType.getActualTypeArguments();
if (actualTypeArguments.length > 0) {
meta = meta.toBuilder().thriftFieldSubTypeFirst((actualTypeArguments[0])).build();
}
if (actualTypeArguments.length > 1) {
meta = meta.toBuilder().thriftFieldSubTypeSecond(actualTypeArguments[1]).build();
}
}
log.debug(tFieldIdEnum.getThriftFieldId() + " + " + tFieldIdEnum.getFieldName() + " + " + field.getType().getTypeName());
return meta;
}
}
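// Usage sketch (MyThriftRequest is a hypothetical thrift-generated type; any TBase works):
//
//     String plainJson = "{\"userId\": 42, \"tags\": [\"a\", \"b\"]}";
//     MyThriftRequest request = ThriftJsonRebuilder.jsonRebuild(plainJson, new MyThriftRequest());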
|
Contractile roles of the M2 and M3 muscarinic receptors in the guinea pig colon. The contractile roles of the M2 and M3 muscarinic receptors were investigated in guinea pig longitudinal colonic smooth muscle. Prior treatment of the colon with N-(2-chloroethyl)-4-piperidinyl diphenylacetate (4-DAMP mustard) (40 nM) in combination with [[2--1-piperidinyl]acetyl]-5,11- dihydro-6H-pyridobenzodiazepine-6-one (AF-DX 116) (1.0 microM) caused a subsequent, irreversible inhibition of oxotremorine-M-induced contractions when measured after extensive washing. The estimate of the degree of receptor inactivation after 2 hr (97%) was not much greater than that measured after 1 hr (95%), which suggests that both 4-DAMP mustard-sensitive and -insensitive muscarinic subtypes contribute to the contractile response. Pertussis toxin treatment had no significant inhibitory effect on the control contractile response to oxotremorine-M, but caused an 8.8-fold increase in the EC50 value measured after a 2-hr treatment with 4-DAMP mustard. These results suggest that, after elimination of most of the M3 receptors with 4-DAMP mustard, the contractile response can be mediated by the pertussis toxin-sensitive M2 receptor. After pertussis toxin treatment, the kinetics of alkylation of muscarinic receptors in the colon were consistent with a single, 4-DAMP mustard-sensitive, M3 receptor subtype mediating the contractile response. When measured after a 2-hr treatment with 4-DAMP mustard and in the presence of histamine (0.30 microM) and either forskolin (10 microM) or isoproterenol (0.60 microM), the contractile responses to oxotremorine-M were pertussis toxin-sensitive and potently antagonized by the M2 selective antagonist, AF-DX 116. Collectively, our results indicate that the M2 receptor elicits contraction through two mechanisms, a direct contraction and an indirect contraction by preventing the relaxant effects of cAMP-generating agents. |
package org.januslabs.consul;
import java.util.List;
import java.util.Map;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.actuate.health.AbstractHealthIndicator;
import org.springframework.boot.actuate.health.Health;
import com.orbitz.consul.Consul;
import com.orbitz.consul.model.ConsulResponse;
import com.orbitz.consul.model.agent.Agent;
import com.orbitz.consul.option.QueryOptions;
import lombok.extern.slf4j.Slf4j;
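/**
 * Spring Boot health indicator backed by the local Consul agent: reports agent configuration
 * and the service catalog as details, and marks the application DOWN when Consul is unreachable.
 */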
@Slf4j
public class ConsulHealthIndicator extends AbstractHealthIndicator {
@Autowired
public Consul consul;
@Override
protected void doHealthCheck(Health.Builder builder) throws Exception {
try {
log.info("doHealthCheck.....");
Agent agentSelf = consul.agentClient().getAgent();
log.info(agentSelf.toString());
log.info(agentSelf.getConfig().toString());
ConsulResponse<Map<String, List<String>>> services =
consul.catalogClient().getServices(QueryOptions.BLANK);
builder.up().withDetail("services", services.getResponse())
.withDetail("advertiseAddress", agentSelf.getConfig().getAdvertiseAddr())
.withDetail("datacenter", agentSelf.getConfig().getDatacenter())
.withDetail("domain", agentSelf.getConfig().getDomain())
.withDetail("nodeName", agentSelf.getConfig().getNodeName())
.withDetail("bindAddress", agentSelf.getConfig().getBindAddr())
.withDetail("clientAddress", agentSelf.getConfig().getClientAddr());
} catch (Exception e) {
builder.down(e);
}
}
}
|
Dimensionality effects on the luminescence properties of hBN. Cathodoluminescence (CL) experiments at low temperature have been undertaken on various bulk and exfoliated hexagonal boron nitride (hBN) samples. Different bulk crystals grown from different synthesis methods have been studied. All of them present the same so-called S series in the 5.6-6 eV range, proving its intrinsic character. Luminescence spectra of flakes containing 100 down to 6 layers have been recorded. Strong modifications in the same UV range are observed and discussed within the general framework of 2D exciton properties in lamellar crystals. |
#!/usr/bin/env python
# encoding: UTF-8
"""Test demo 14.1 for chapter 14."""
class C(object):
"""docstring for C."""
def __call__(self, *args):
u"""实现了该方法后, 意味着实例对象可被调用."""
print "I'm callable! Called with args:\n", args
c = C()
print c
print callable(c)
c()
c(3)
c(3, 'no more, no less')
|
EXPERIENCE IN APPLYING THE TECHNOLOGY OF CYTOREDUCTIVE SURGERY WITH HYPERTHERMIC INTRAOPERATIVE INTRAPERITONEAL CHEMOTHERAPY IN THE TREATMENT OF PATIENTS WITH PERITONEAL CARCINOMATOSIS We present our experience in using cytoreductive surgery and hyperthermic intraperitoneal chemotherapy (HIPEC) for ovarian cancer patients treated at Irkutsk Regional Cancer Center. All patients were divided into 2 groups. Group I consisted of 15 patients, who underwent cytoreductive surgery only. Group II comprised 17 patients, who underwent surgery and HIPEC. The main eligibility criteria for this study were verified peritoneal carcinomatosis and resectable ovarian cancer. The primary analysis of these groups covered the preoperative period, length of operation, postoperative length of stay, and postoperative complications. The technique of performing HIPEC using the Performer HT® system (RAND, Medolla (MO), Italy) is described in detail. Further study is required to estimate the difference in overall and disease-free survival between study groups. |
Labor is looking to find an edge in the polls ahead of upcoming elections this year by positioning itself on the side of the customer, as the Hayne report presents those in power with a series of choices. As the government prepares its response to the findings unearthed by the Royal Commission, it has several aspects to bear in mind.
As well as securing the confidence of voters, the government also needs to keep the markets moving well, or the banks could suffer a slowdown and its best-laid intentions for reform could prove fruitless. However, the Scott Morrison administration has already indicated it is unlikely to choose the more radical avenues, which leaves the door open for Bill Shorten and the opposition.
With the aim of scrapping “grandfathered” fees paid to advisers (ongoing fees that remain payable under the old rules while new ones are brought in), Labor are trying to steal a march on the Liberals.
The election race is now becoming clearer, as both sides set out their stall on who is best placed to reform the banks while keeping everything in working order. Given that most of this has happened under the watch of a Liberal government, the argument cuts both ways: that it could have done more to prevent the misconduct, and that it has kept the banks from going into freefall.
One of the biggest clashes between political sides at present is that Labor want to bring in new changes faster, while the Liberals are concerned that implementing too much now could put them in a difficult situation at the polls if the changes begin to affect people’s savings and livelihoods.
Labor are now demanding grandfathered fees be scrapped a full year before the Liberal Party say they would do anything about it, but Shorten is likely to come under plenty of fire from the business sectors which could be affected by any changes.
Although many analysts have dismissed the relevance of the claim, many mortgage brokers have railed against Labor’s intentions to get rid of commission fees as it could send up the rates of home interest loans. For each change any party considers from the Hayne report, there are sure to be those in the sector who will warn against it.
Shares in major banks have reacted well to the developments so far, as there is a growing sentiment that the Hayne report did not go so far as to suggest anything too radical which would affect the bottom line. This means none of the banks will be obligated to sell off their wealth management arms, while there are not expected to be any new lending rules in place.
The lines between the two major parties widened after Shorten used a window to call for parliament to find time to debate new consumer protection laws, and also lambasted Morrison for having voted against setting up the Royal Commission on several occasions, saying this showed he was not on the side of the people.
Labor have shown their intent to adopt as many of the Hayne recommendations as possible, while Treasurer Josh Frydenberg has been quick to call for caution and the need to not rush into everything without considering the permutations. Which way the voters prefer later this year will probably decide which path Australia goes down. |
package ingress
import (
"context"
"crypto/sha256"
"encoding/hex"
"fmt"
"regexp"
"strconv"
"github.com/pkg/errors"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/intstr"
elbv2api "sigs.k8s.io/aws-load-balancer-controller/apis/elbv2/v1beta1"
"sigs.k8s.io/aws-load-balancer-controller/pkg/algorithm"
"sigs.k8s.io/aws-load-balancer-controller/pkg/annotations"
"sigs.k8s.io/aws-load-balancer-controller/pkg/k8s"
elbv2model "sigs.k8s.io/aws-load-balancer-controller/pkg/model/elbv2"
)
const (
healthCheckPortTrafficPort = "traffic-port"
)
func (t *defaultModelBuildTask) buildTargetGroup(ctx context.Context,
ing ClassifiedIngress, svc *corev1.Service, port intstr.IntOrString) (*elbv2model.TargetGroup, error) {
tgResID := t.buildTargetGroupResourceID(k8s.NamespacedName(ing.Ing), k8s.NamespacedName(svc), port)
if tg, exists := t.tgByResID[tgResID]; exists {
return tg, nil
}
tgSpec, err := t.buildTargetGroupSpec(ctx, ing, svc, port)
if err != nil {
return nil, err
}
nodeSelector, err := t.buildTargetGroupBindingNodeSelector(ctx, ing, svc, tgSpec.TargetType)
if err != nil {
return nil, err
}
tg := elbv2model.NewTargetGroup(t.stack, tgResID, tgSpec)
t.tgByResID[tgResID] = tg
_ = t.buildTargetGroupBinding(ctx, tg, svc, port, nodeSelector)
return tg, nil
}
func (t *defaultModelBuildTask) buildTargetGroupBinding(ctx context.Context, tg *elbv2model.TargetGroup, svc *corev1.Service, port intstr.IntOrString, nodeSelector *metav1.LabelSelector) *elbv2model.TargetGroupBindingResource {
tgbSpec := t.buildTargetGroupBindingSpec(ctx, tg, svc, port, nodeSelector)
tgb := elbv2model.NewTargetGroupBindingResource(t.stack, tg.ID(), tgbSpec)
return tgb
}
func (t *defaultModelBuildTask) buildTargetGroupBindingSpec(ctx context.Context, tg *elbv2model.TargetGroup, svc *corev1.Service, port intstr.IntOrString, nodeSelector *metav1.LabelSelector) elbv2model.TargetGroupBindingResourceSpec {
targetType := elbv2api.TargetType(tg.Spec.TargetType)
tgbNetworking := t.buildTargetGroupBindingNetworking(ctx, tg.Spec.Port, *tg.Spec.HealthCheckConfig.Port)
return elbv2model.TargetGroupBindingResourceSpec{
Template: elbv2model.TargetGroupBindingTemplate{
ObjectMeta: metav1.ObjectMeta{
Namespace: svc.Namespace,
Name: tg.Spec.Name,
},
Spec: elbv2model.TargetGroupBindingSpec{
TargetGroupARN: tg.TargetGroupARN(),
TargetType: &targetType,
ServiceRef: elbv2api.ServiceReference{
Name: svc.Name,
Port: port,
},
Networking: tgbNetworking,
NodeSelector: nodeSelector,
IPAddressType: (*elbv2api.TargetGroupIPAddressType)(tg.Spec.IPAddressType),
},
},
}
}
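// buildTargetGroupBindingNetworking builds the networking rules for the TargetGroupBinding.
// It returns nil when no backend security group token is configured, and a single permissive
// TCP rule (no port restriction) when restricted SG rules are disabled; otherwise it allows
// the target port plus the health-check port when that differs from "traffic-port".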
func (t *defaultModelBuildTask) buildTargetGroupBindingNetworking(ctx context.Context, targetGroupPort int64, healthCheckPort intstr.IntOrString) *elbv2model.TargetGroupBindingNetworking {
if t.backendSGIDToken == nil {
return nil
}
protocolTCP := elbv2api.NetworkingProtocolTCP
if t.disableRestrictedSGRules {
return &elbv2model.TargetGroupBindingNetworking{
Ingress: []elbv2model.NetworkingIngressRule{
{
From: []elbv2model.NetworkingPeer{
{
SecurityGroup: &elbv2model.SecurityGroup{
GroupID: t.backendSGIDToken,
},
},
},
Ports: []elbv2api.NetworkingPort{
{
Protocol: &protocolTCP,
Port: nil,
},
},
},
},
}
}
var networkingPorts []elbv2api.NetworkingPort
var networkingRules []elbv2model.NetworkingIngressRule
tgPort := intstr.FromInt(int(targetGroupPort))
networkingPorts = append(networkingPorts, elbv2api.NetworkingPort{
Protocol: &protocolTCP,
Port: &tgPort,
})
if healthCheckPort.String() != healthCheckPortTrafficPort {
networkingPorts = append(networkingPorts, elbv2api.NetworkingPort{
Protocol: &protocolTCP,
Port: &healthCheckPort,
})
}
for _, port := range networkingPorts {
networkingRules = append(networkingRules, elbv2model.NetworkingIngressRule{
From: []elbv2model.NetworkingPeer{
{
SecurityGroup: &elbv2model.SecurityGroup{
GroupID: t.backendSGIDToken,
},
},
},
Ports: []elbv2api.NetworkingPort{port},
})
}
return &elbv2model.TargetGroupBindingNetworking{
Ingress: networkingRules,
}
}
func (t *defaultModelBuildTask) buildTargetGroupSpec(ctx context.Context,
ing ClassifiedIngress, svc *corev1.Service, port intstr.IntOrString) (elbv2model.TargetGroupSpec, error) {
svcAndIngAnnotations := algorithm.MergeStringMap(svc.Annotations, ing.Ing.Annotations)
targetType, err := t.buildTargetGroupTargetType(ctx, svcAndIngAnnotations)
if err != nil {
return elbv2model.TargetGroupSpec{}, err
}
tgProtocol, err := t.buildTargetGroupProtocol(ctx, svcAndIngAnnotations)
if err != nil {
return elbv2model.TargetGroupSpec{}, err
}
tgProtocolVersion, err := t.buildTargetGroupProtocolVersion(ctx, svcAndIngAnnotations)
if err != nil {
return elbv2model.TargetGroupSpec{}, err
}
healthCheckConfig, err := t.buildTargetGroupHealthCheckConfig(ctx, svc, svcAndIngAnnotations, targetType, tgProtocol, tgProtocolVersion)
if err != nil {
return elbv2model.TargetGroupSpec{}, err
}
tgAttributes, err := t.buildTargetGroupAttributes(ctx, svcAndIngAnnotations)
if err != nil {
return elbv2model.TargetGroupSpec{}, err
}
tags, err := t.buildTargetGroupTags(ctx, ing, svc)
if err != nil {
return elbv2model.TargetGroupSpec{}, err
}
svcPort, err := k8s.LookupServicePort(svc, port)
if err != nil {
return elbv2model.TargetGroupSpec{}, err
}
ipAddressType, err := t.buildTargetGroupIPAddressType(ctx, svc)
if err != nil {
return elbv2model.TargetGroupSpec{}, err
}
tgPort := t.buildTargetGroupPort(ctx, targetType, svcPort)
name := t.buildTargetGroupName(ctx, k8s.NamespacedName(ing.Ing), svc, port, tgPort, targetType, tgProtocol, tgProtocolVersion)
return elbv2model.TargetGroupSpec{
Name: name,
TargetType: targetType,
Port: tgPort,
Protocol: tgProtocol,
ProtocolVersion: &tgProtocolVersion,
IPAddressType: &ipAddressType,
HealthCheckConfig: &healthCheckConfig,
TargetGroupAttributes: tgAttributes,
Tags: tags,
}, nil
}
var invalidTargetGroupNamePattern = regexp.MustCompile("[[:^alnum:]]")
// buildTargetGroupName will calculate the targetGroup's name.
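// The name is derived from a SHA-256 hash of the cluster, ingress group, ingress, service, and
// port/protocol inputs, and the "k8s-%.8s-%.8s-%.10s" format keeps it within ELBv2's
// 32-character target group name limit.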
func (t *defaultModelBuildTask) buildTargetGroupName(_ context.Context,
ingKey types.NamespacedName, svc *corev1.Service, port intstr.IntOrString, tgPort int64,
targetType elbv2model.TargetType, tgProtocol elbv2model.Protocol, tgProtocolVersion elbv2model.ProtocolVersion) string {
uuidHash := sha256.New()
_, _ = uuidHash.Write([]byte(t.clusterName))
_, _ = uuidHash.Write([]byte(t.ingGroup.ID.String()))
_, _ = uuidHash.Write([]byte(ingKey.Namespace))
_, _ = uuidHash.Write([]byte(ingKey.Name))
_, _ = uuidHash.Write([]byte(svc.UID))
_, _ = uuidHash.Write([]byte(port.String()))
_, _ = uuidHash.Write([]byte(strconv.Itoa(int(tgPort))))
_, _ = uuidHash.Write([]byte(targetType))
_, _ = uuidHash.Write([]byte(tgProtocol))
_, _ = uuidHash.Write([]byte(tgProtocolVersion))
uuid := hex.EncodeToString(uuidHash.Sum(nil))
sanitizedNamespace := invalidTargetGroupNamePattern.ReplaceAllString(svc.Namespace, "")
sanitizedName := invalidTargetGroupNamePattern.ReplaceAllString(svc.Name, "")
return fmt.Sprintf("k8s-%.8s-%.8s-%.10s", sanitizedNamespace, sanitizedName, uuid)
}
func (t *defaultModelBuildTask) buildTargetGroupTargetType(_ context.Context, svcAndIngAnnotations map[string]string) (elbv2model.TargetType, error) {
rawTargetType := string(t.defaultTargetType)
_ = t.annotationParser.ParseStringAnnotation(annotations.IngressSuffixTargetType, &rawTargetType, svcAndIngAnnotations)
switch rawTargetType {
case string(elbv2model.TargetTypeInstance):
return elbv2model.TargetTypeInstance, nil
case string(elbv2model.TargetTypeIP):
return elbv2model.TargetTypeIP, nil
default:
return "", errors.Errorf("unknown targetType: %v", rawTargetType)
}
}
func (t *defaultModelBuildTask) buildTargetGroupIPAddressType(_ context.Context, svc *corev1.Service) (elbv2model.TargetGroupIPAddressType, error) {
var ipv6Configured bool
for _, ipFamily := range svc.Spec.IPFamilies {
if ipFamily == corev1.IPv6Protocol {
ipv6Configured = true
break
}
}
if ipv6Configured {
if *t.loadBalancer.Spec.IPAddressType != elbv2model.IPAddressTypeDualStack {
return "", errors.New("unsupported IPv6 configuration, lb not dual-stack")
}
return elbv2model.TargetGroupIPAddressTypeIPv6, nil
}
return elbv2model.TargetGroupIPAddressTypeIPv4, nil
}
// buildTargetGroupPort constructs the TargetGroup's port.
// Note: TargetGroup's port is not in the data path as we always register targets with port specified.
// so this setting doesn't really matter to our controller; we do our best to use the most appropriate port as the targetGroup's port to avoid UX confusion.
func (t *defaultModelBuildTask) buildTargetGroupPort(_ context.Context, targetType elbv2model.TargetType, svcPort corev1.ServicePort) int64 {
if targetType == elbv2model.TargetTypeInstance {
return int64(svcPort.NodePort)
}
if svcPort.TargetPort.Type == intstr.Int {
return int64(svcPort.TargetPort.IntValue())
}
// when a named (string) targetPort is used, we just use a fixed 1 here as this setting is not in the data path.
// also, under extreme edge case, it can actually be different ports for different pods.
return 1
}
func (t *defaultModelBuildTask) buildTargetGroupProtocol(_ context.Context, svcAndIngAnnotations map[string]string) (elbv2model.Protocol, error) {
rawBackendProtocol := string(t.defaultBackendProtocol)
_ = t.annotationParser.ParseStringAnnotation(annotations.IngressSuffixBackendProtocol, &rawBackendProtocol, svcAndIngAnnotations)
switch rawBackendProtocol {
case string(elbv2model.ProtocolHTTP):
return elbv2model.ProtocolHTTP, nil
case string(elbv2model.ProtocolHTTPS):
return elbv2model.ProtocolHTTPS, nil
default:
return "", errors.Errorf("backend protocol must be within [%v, %v]: %v", elbv2model.ProtocolHTTP, elbv2model.ProtocolHTTPS, rawBackendProtocol)
}
}
func (t *defaultModelBuildTask) buildTargetGroupProtocolVersion(_ context.Context, svcAndIngAnnotations map[string]string) (elbv2model.ProtocolVersion, error) {
rawBackendProtocolVersion := string(t.defaultBackendProtocolVersion)
_ = t.annotationParser.ParseStringAnnotation(annotations.IngressSuffixBackendProtocolVersion, &rawBackendProtocolVersion, svcAndIngAnnotations)
switch rawBackendProtocolVersion {
case string(elbv2model.ProtocolVersionHTTP1):
return elbv2model.ProtocolVersionHTTP1, nil
case string(elbv2model.ProtocolVersionHTTP2):
return elbv2model.ProtocolVersionHTTP2, nil
case string(elbv2model.ProtocolVersionGRPC):
return elbv2model.ProtocolVersionGRPC, nil
default:
return "", errors.Errorf("backend protocol version must be within [%v, %v, %v]: %v", elbv2model.ProtocolVersionHTTP1, elbv2model.ProtocolVersionHTTP2, elbv2model.ProtocolVersionGRPC, rawBackendProtocolVersion)
}
}
func (t *defaultModelBuildTask) buildTargetGroupHealthCheckConfig(ctx context.Context, svc *corev1.Service, svcAndIngAnnotations map[string]string, targetType elbv2model.TargetType, tgProtocol elbv2model.Protocol, tgProtocolVersion elbv2model.ProtocolVersion) (elbv2model.TargetGroupHealthCheckConfig, error) {
healthCheckPort, err := t.buildTargetGroupHealthCheckPort(ctx, svc, svcAndIngAnnotations, targetType)
if err != nil {
return elbv2model.TargetGroupHealthCheckConfig{}, err
}
healthCheckProtocol, err := t.buildTargetGroupHealthCheckProtocol(ctx, svcAndIngAnnotations, tgProtocol)
if err != nil {
return elbv2model.TargetGroupHealthCheckConfig{}, err
}
healthCheckPath := t.buildTargetGroupHealthCheckPath(ctx, svcAndIngAnnotations, tgProtocolVersion)
healthCheckMatcher := t.buildTargetGroupHealthCheckMatcher(ctx, svcAndIngAnnotations, tgProtocolVersion)
healthCheckIntervalSeconds, err := t.buildTargetGroupHealthCheckIntervalSeconds(ctx, svcAndIngAnnotations)
if err != nil {
return elbv2model.TargetGroupHealthCheckConfig{}, err
}
healthCheckTimeoutSeconds, err := t.buildTargetGroupHealthCheckTimeoutSeconds(ctx, svcAndIngAnnotations)
if err != nil {
return elbv2model.TargetGroupHealthCheckConfig{}, err
}
healthCheckHealthyThresholdCount, err := t.buildTargetGroupHealthCheckHealthyThresholdCount(ctx, svcAndIngAnnotations)
if err != nil {
return elbv2model.TargetGroupHealthCheckConfig{}, err
}
healthCheckUnhealthyThresholdCount, err := t.buildTargetGroupHealthCheckUnhealthyThresholdCount(ctx, svcAndIngAnnotations)
if err != nil {
return elbv2model.TargetGroupHealthCheckConfig{}, err
}
return elbv2model.TargetGroupHealthCheckConfig{
Port: &healthCheckPort,
Protocol: &healthCheckProtocol,
Path: &healthCheckPath,
Matcher: &healthCheckMatcher,
IntervalSeconds: &healthCheckIntervalSeconds,
TimeoutSeconds: &healthCheckTimeoutSeconds,
HealthyThresholdCount: &healthCheckHealthyThresholdCount,
UnhealthyThresholdCount: &healthCheckUnhealthyThresholdCount,
}, nil
}
func (t *defaultModelBuildTask) buildTargetGroupHealthCheckPort(_ context.Context, svc *corev1.Service, svcAndIngAnnotations map[string]string, targetType elbv2model.TargetType) (intstr.IntOrString, error) {
rawHealthCheckPort := ""
if exist := t.annotationParser.ParseStringAnnotation(annotations.IngressSuffixHealthCheckPort, &rawHealthCheckPort, svcAndIngAnnotations); !exist {
return intstr.FromString(healthCheckPortTrafficPort), nil
}
if rawHealthCheckPort == healthCheckPortTrafficPort {
return intstr.FromString(healthCheckPortTrafficPort), nil
}
healthCheckPort := intstr.Parse(rawHealthCheckPort)
if healthCheckPort.Type == intstr.Int {
return healthCheckPort, nil
}
svcPort, err := k8s.LookupServicePort(svc, healthCheckPort)
if err != nil {
return intstr.IntOrString{}, errors.Wrap(err, "failed to resolve healthCheckPort")
}
if targetType == elbv2model.TargetTypeInstance {
return intstr.FromInt(int(svcPort.NodePort)), nil
}
if svcPort.TargetPort.Type == intstr.Int {
return svcPort.TargetPort, nil
}
return intstr.IntOrString{}, errors.New("cannot use named healthCheckPort for IP TargetType when service's targetPort is a named port")
}
func (t *defaultModelBuildTask) buildTargetGroupHealthCheckProtocol(_ context.Context, svcAndIngAnnotations map[string]string, tgProtocol elbv2model.Protocol) (elbv2model.Protocol, error) {
rawHealthCheckProtocol := string(tgProtocol)
_ = t.annotationParser.ParseStringAnnotation(annotations.IngressSuffixHealthCheckProtocol, &rawHealthCheckProtocol, svcAndIngAnnotations)
switch rawHealthCheckProtocol {
case string(elbv2model.ProtocolHTTP):
return elbv2model.ProtocolHTTP, nil
case string(elbv2model.ProtocolHTTPS):
return elbv2model.ProtocolHTTPS, nil
default:
return "", errors.Errorf("healthCheckProtocol must be within [%v, %v]", elbv2model.ProtocolHTTP, elbv2model.ProtocolHTTPS)
}
}
func (t *defaultModelBuildTask) buildTargetGroupHealthCheckPath(_ context.Context, svcAndIngAnnotations map[string]string, tgProtocolVersion elbv2model.ProtocolVersion) string {
var rawHealthCheckPath string
switch tgProtocolVersion {
case elbv2model.ProtocolVersionHTTP1, elbv2model.ProtocolVersionHTTP2:
rawHealthCheckPath = t.defaultHealthCheckPathHTTP
case elbv2model.ProtocolVersionGRPC:
rawHealthCheckPath = t.defaultHealthCheckPathGRPC
}
_ = t.annotationParser.ParseStringAnnotation(annotations.IngressSuffixHealthCheckPath, &rawHealthCheckPath, svcAndIngAnnotations)
return rawHealthCheckPath
}
func (t *defaultModelBuildTask) buildTargetGroupHealthCheckMatcher(_ context.Context, svcAndIngAnnotations map[string]string, tgProtocolVersion elbv2model.ProtocolVersion) elbv2model.HealthCheckMatcher {
var rawHealthCheckMatcherHTTPCode string
switch tgProtocolVersion {
case elbv2model.ProtocolVersionHTTP1, elbv2model.ProtocolVersionHTTP2:
rawHealthCheckMatcherHTTPCode = t.defaultHealthCheckMatcherHTTPCode
case elbv2model.ProtocolVersionGRPC:
rawHealthCheckMatcherHTTPCode = t.defaultHealthCheckMatcherGRPCCode
}
_ = t.annotationParser.ParseStringAnnotation(annotations.IngressSuffixSuccessCodes, &rawHealthCheckMatcherHTTPCode, svcAndIngAnnotations)
if tgProtocolVersion == elbv2model.ProtocolVersionGRPC {
return elbv2model.HealthCheckMatcher{
GRPCCode: &rawHealthCheckMatcherHTTPCode,
}
}
return elbv2model.HealthCheckMatcher{
HTTPCode: &rawHealthCheckMatcherHTTPCode,
}
}
func (t *defaultModelBuildTask) buildTargetGroupHealthCheckIntervalSeconds(_ context.Context, svcAndIngAnnotations map[string]string) (int64, error) {
rawHealthCheckIntervalSeconds := t.defaultHealthCheckIntervalSeconds
if _, err := t.annotationParser.ParseInt64Annotation(annotations.IngressSuffixHealthCheckIntervalSeconds,
&rawHealthCheckIntervalSeconds, svcAndIngAnnotations); err != nil {
return 0, err
}
return rawHealthCheckIntervalSeconds, nil
}
func (t *defaultModelBuildTask) buildTargetGroupHealthCheckTimeoutSeconds(_ context.Context, svcAndIngAnnotations map[string]string) (int64, error) {
rawHealthCheckTimeoutSeconds := t.defaultHealthCheckTimeoutSeconds
if _, err := t.annotationParser.ParseInt64Annotation(annotations.IngressSuffixHealthCheckTimeoutSeconds,
&rawHealthCheckTimeoutSeconds, svcAndIngAnnotations); err != nil {
return 0, err
}
return rawHealthCheckTimeoutSeconds, nil
}
func (t *defaultModelBuildTask) buildTargetGroupHealthCheckHealthyThresholdCount(_ context.Context, svcAndIngAnnotations map[string]string) (int64, error) {
rawHealthCheckHealthyThresholdCount := t.defaultHealthCheckHealthyThresholdCount
if _, err := t.annotationParser.ParseInt64Annotation(annotations.IngressSuffixHealthyThresholdCount,
&rawHealthCheckHealthyThresholdCount, svcAndIngAnnotations); err != nil {
return 0, err
}
return rawHealthCheckHealthyThresholdCount, nil
}
func (t *defaultModelBuildTask) buildTargetGroupHealthCheckUnhealthyThresholdCount(_ context.Context, svcAndIngAnnotations map[string]string) (int64, error) {
rawHealthCheckUnhealthyThresholdCount := t.defaultHealthCheckUnhealthyThresholdCount
if _, err := t.annotationParser.ParseInt64Annotation(annotations.IngressSuffixUnhealthyThresholdCount,
&rawHealthCheckUnhealthyThresholdCount, svcAndIngAnnotations); err != nil {
return 0, err
}
return rawHealthCheckUnhealthyThresholdCount, nil
}
func (t *defaultModelBuildTask) buildTargetGroupAttributes(_ context.Context, svcAndIngAnnotations map[string]string) ([]elbv2model.TargetGroupAttribute, error) {
var rawAttributes map[string]string
if _, err := t.annotationParser.ParseStringMapAnnotation(annotations.IngressSuffixTargetGroupAttributes, &rawAttributes, svcAndIngAnnotations); err != nil {
return nil, err
}
attributes := make([]elbv2model.TargetGroupAttribute, 0, len(rawAttributes))
for attrKey, attrValue := range rawAttributes {
attributes = append(attributes, elbv2model.TargetGroupAttribute{
Key: attrKey,
Value: attrValue,
})
}
return attributes, nil
}
func (t *defaultModelBuildTask) buildTargetGroupTags(_ context.Context, ing ClassifiedIngress, svc *corev1.Service) (map[string]string, error) {
ingSvcTags, err := t.buildIngressBackendResourceTags(ing, svc)
if err != nil {
return nil, err
}
return algorithm.MergeStringMap(t.defaultTags, ingSvcTags), nil
}
func (t *defaultModelBuildTask) buildTargetGroupResourceID(ingKey types.NamespacedName, svcKey types.NamespacedName, port intstr.IntOrString) string {
return fmt.Sprintf("%s/%s-%s:%s", ingKey.Namespace, ingKey.Name, svcKey.Name, port.String())
}
func (t *defaultModelBuildTask) buildTargetGroupBindingNodeSelector(_ context.Context, ing ClassifiedIngress, svc *corev1.Service, targetType elbv2model.TargetType) (*metav1.LabelSelector, error) {
if targetType != elbv2model.TargetTypeInstance {
return nil, nil
}
var targetNodeLabels map[string]string
svcAndIngAnnotations := algorithm.MergeStringMap(svc.Annotations, ing.Ing.Annotations)
if _, err := t.annotationParser.ParseStringMapAnnotation(annotations.IngressSuffixTargetNodeLabels, &targetNodeLabels, svcAndIngAnnotations); err != nil {
return nil, err
}
if len(targetNodeLabels) == 0 {
return nil, nil
}
return &metav1.LabelSelector{
MatchLabels: targetNodeLabels,
}, nil
}
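// Configuration for the builders above is read from ingress/service annotations via the
// annotation parser (the IngressSuffix* keys such as target-type, backend-protocol,
// healthcheck-port, target-group-attributes and target-node-labels, typically under the
// alb.ingress.kubernetes.io/ prefix); the exact prefix depends on how the annotation parser
// is configured elsewhere in the controller.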
|
/*
* Copyright 2019 Project OpenUBL, Inc. and/or its affiliates
* and other contributors as indicated by the @author tags.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package io.github.project.openubl.searchpe.utils;
import io.github.project.openubl.searchpe.models.TipoPersona;
import io.github.project.openubl.searchpe.models.jpa.entity.ContribuyenteEntity;
import io.github.project.openubl.searchpe.models.jpa.entity.ContribuyenteId;
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;
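/**
 * Helpers for parsing pipe-delimited taxpayer ("contribuyente") records, presumably from the
 * SUNAT padrón file consumed by Searchpe, into {@link ContribuyenteEntity} instances.
 */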
public class DataHelper {
public static String[] readLine(String line, int size) {
String[] result = new String[size];
String[] split = line.split("\\|");
for (int i = 0; i < result.length; i++) {
if (i < split.length) {
String value = split[i].trim();
if (value.equals("-") || value.isEmpty()) {
result[i] = null;
} else {
result[i] = value
        // repair characters that commonly appear in place of Ñ when the source file was mis-encoded
        .replaceAll("�", "Ñ")
        .replaceAll("\\?", "Ñ");
}
} else {
result[i] = null;
}
}
return result;
}
public static Optional<List<ContribuyenteEntity>> buildContribuyenteEntity(Long versionId, String[] columns) {
if (columns[0] == null || columns[1] == null) {
return Optional.empty();
}
List<ContribuyenteEntity> result = new ArrayList<>();
ContribuyenteEntity personaJuridica = ContribuyenteEntity
.Builder.aContribuyenteEntity()
.withId(new ContribuyenteId(versionId, columns[0]))
.withTipoPersona(TipoPersona.JURIDICA)
.withNombre(columns[1])
.withEstado(columns[2])
.withCondicionDomicilio(columns[3])
.withUbigeo(columns[4])
.withTipoVia(columns[5])
.withNombreVia(columns[6])
.withCodigoZona(columns[7])
.withTipoZona(columns[8])
.withNumero(columns[9])
.withInterior(columns[10])
.withLote(columns[11])
.withDepartamento(columns[12])
.withManzana(columns[13])
.withKilometro(columns[14])
.build();
result.add(personaJuridica);
if (personaJuridica.id.numeroDocumento.startsWith("10")) {
ContribuyenteEntity personaNatural = ContribuyenteEntity.fullClone(personaJuridica);
personaNatural.id.numeroDocumento = personaNatural.id.numeroDocumento.substring(2, personaNatural.id.numeroDocumento.length() - 1); // Remove first 2 characters and also last character
personaNatural.tipoPersona = TipoPersona.NATURAL;
result.add(personaNatural);
}
return Optional.of(result);
}
}
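// Usage sketch (the sample line below is made up; real padrón rows carry many more columns):
//
//     String[] columns = DataHelper.readLine("20123456789|ACME S.A.C.|ACTIVO|HABIDO|-", 15);
//     Optional<List<ContribuyenteEntity>> entities = DataHelper.buildContribuyenteEntity(1L, columns);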
|
Individual Differences in Accurately Judging Personality From Text. This research examines correlates of accuracy in judging Big Five traits from first-person text excerpts. Participants in six studies were recruited from psychology courses or online. In each study, participants performed a task of judging personality from text and performed other ability tasks and/or filled out questionnaires. Participants who were more accurate in judging personality from text were more likely to be female; had personalities that were more agreeable, conscientious, and feminine, and less neurotic and dominant (all controlling for participant gender); scored higher on empathic concern; self-reported more interest in, and attentiveness to, people's personalities in their daily lives; and reported reading more for pleasure, especially fiction. Accuracy was not associated with SAT scores but had a significant relation to vocabulary knowledge. Accuracy did not correlate with tests of judging personality and emotion based on audiovisual cues. This research is the first to address individual differences in accurate judgment of personality from text, thus adding to the literature on correlates of the good judge of personality. |
"South Park": Laughs on a deadline
"Rude, crude, and blasphemous" is how "60 Minutes" correspondent Steve Kroft describes the Broadway hit musical "The Book of Mormon." This week on "60 Minutes," Kroft profiles the creators of the Tony Award-winning musical, Trey Parker and Matt Stone. Parker and Stone are both famous and infamous for their animated cable TV series "South Park," which could best be described as, well, rude, crude and blasphemous.
As you'll see in Steve Kroft's report, produced by Graham Messick, Parker and Stone have developed a unique chemistry over 20 years of collaboration. They can complete each other's sentences, and can speak in such shorthand that it often sounds like code.
On "60 Minutes Overtime," we dig into that partnership a bit more, showing how Parker and Stone accomplish the feat of creating a new South Park episode every six days.
The show airs on Wednesday, which means that on Thursday morning, the team starts with a clean slate, and through the sheer power of brainstorming and bathroom humor, they are ready for air less than a week later.
Parker and Stone tell Kroft that they make changes right up to the final moment in order to make it better, a work habit that Kroft says he can identify with. "We work exactly the same way" at "60 Minutes," Kroft tells them. |
Downregulation of microRNA-214 improves therapeutic potential of allogeneic bone marrow-derived mesenchymal stem cells by targeting PIM1 in rats with acute liver failure Acute liver failure (ALF) is a disease resulting from diverse etiologies, which generally leads to rapidly deteriorating hepatic function. However, bone marrow-derived mesenchymal stem cell (BMSC) transplantation has been suggested to relieve ALF. Interestingly, microRNA-214 (miR-214) could potentially regulate differentiation and migration of BMSCs. The present study aims to determine whether miR-214 affects the therapeutic potential of BMSC transplantation by targeting PIM1 in ALF. 120 male Wistar rats were induced as ALF model rats and transplanted with BMSCs after alteration of miR-214 or PIM1 expression. Further experiments were performed to detect biochemical indices (alanine aminotransferase [ALT], aspartate transaminase [AST], total bilirubin [TBiL]), and expression of miR-214, PIM1, hepatocyte growth factor (HGF), caspase 3, tumor necrosis factor (TNF), and interleukin-10 (IL-10) in rat serum. In addition, apoptosis of hepatocytes and Ki67 protein expression in hepatic tissues of rats were assessed. After BMSC transplantation with miR-214 inhibition, a decreased expression of ALT, AST, and TBiL yet an increased expression of HGF was shown, coupled with a decline in the expression of caspase 3, TNF, and IL-10. Meanwhile, alleviated hepatic injury and a decreased apoptotic index of hepatic cells were observed, and the positive rate of Ki67 protein expression was significantly increased. Moreover, miR-214, caspase 3, TNF, and IL-10 decreased notably, while PIM1 was upregulated in response to miR-214 inhibition. Strikingly, the inhibition of PIM1 reversed the effects triggered by miR-214 inhibition. These findings indicate that downregulation of miR-214 improves the therapeutic potential of BMSC transplantation in ALF by upregulating PIM1. |
And after unpaid internships? Then what?
Whatever you think of a suggestion this week by Bank of Canada Governor Stephen Poloz that young people discouraged by unemployment should work for free to bolster their resumes, it’s clear that just having experience on one’s resume, from unpaid or paid work, won’t solve our country’s problem with youth unemployment.
The Bank of Canada estimates about 200,000 young people want to work, or, something most commentators have glossed over, work more.
“Having something unpaid on your CV is very worth it, because that’s the one thing you can do to counteract this scarring effect. Get some real-life experience even though you’re discouraged, even if it’s for free,” Mr. Poloz said Monday in Toronto.
Besides the well-known issues associated with unpaid labour — only the privileged can work for free, it undercuts the labour market, it’s often illegal — the more pressing problem is what comes after.
If working an unpaid internship for six weeks to a year was enough to successfully land a full-time permanent job in one’s chosen field, streams of youth would scurry out of the dark cocoons of their parent’s basement and take him up on his suggestion.
Unpaid internships offer only the potential of a base from which to begin a slow climb up the shaky jungle gym of a labour market still recovering from the 2008 recession.
And I would know something about them — I did three.
My CV, full of “real-life experience,” helped me land a two-year contract in communications at the Ontario Public Service after graduating with my masters of journalism degree. Near the end of that contract there was talk of offering me another six-month contract.
I didn’t necessarily want the next six months to resemble the way Caitlin, who asked to withhold her name to protect her employment, spent her last few years.
The 27-year-old holds two part-time, contract administrative assistant positions in the public sector: three days a week at a Toronto college, and two days a week at a Toronto hospital.
If my adult life was going to continue to echo a polar bear’s during global warming — jumping from one receding arctic ice sheet to another — I was determined for it to be an insecurity of my own making, which is why I took the plunge into freelance journalism.
Now, when I’m not writing for the National Post and other newspapers, I work at a bar, drawing clovers on Guinnesses and lining up Jager shots to make ends meet.
Of course, anybody foolish enough to want to work in a field like print journalism knows what she’s getting into — but what about Caitlin? And what about my fellow bar workers, among which include a teacher, a paralegal and an assignment editor in television news, none able to get more than part-time work in their chosen fields?
In 1997, the first year Statistics Canada began tracking this data, about 7% of Canadians under 30 worked temporary positions.
By 2011, about 11.5% did, with no indication that this growth has slowed in the last few years.
Moreover, it is only young people who have experienced such an expansion of non-permanent positions: just 5.7% of Canadians over 30 worked such a job in 2011, from 4% in 1997, according to a report from the Canadian Centre for Policy Alternatives.
Of course, there are those for whom temporary work is enjoyable and well-suited.
“The people they don’t work so well for are people who — young workers are a good example of this — who would ideally like and need the stability, and in particular the salary and the benefits that go along with full-time permanent work,” says Kendra Strauss, assistant professor in the Labour Studies Program at Simon Fraser University.
Caitlin doesn’t get an extra cent of goodwill beyond her $17 hourly wage — no parental leave, no sick days, no vacation days, no vacation pay and the college has structured her year-long contract into three-month renewable portions.
Danielle Kubes is a freelance journalist living in Toronto. She wrote her graduate thesis on grade inflation in Ontario universities. |
// src/popups/TaxLotPopup.tsx
import esri = __esri;
import { whenOnce } from '@arcgis/core/core/watchUtils';
import { property, subclass } from '@arcgis/core/core/accessorSupport/decorators';
import { renderable, tsx } from '@arcgis/core/widgets/support/widget';
import Widget from '@arcgis/core/widgets/Widget';
import PopupTemplate from '@arcgis/core/PopupTemplate';
import CustomContent from '@arcgis/core/popup/content/CustomContent';
interface ContentProperties extends esri.WidgetProperties {
graphic: esri.Graphic;
}
let KEY = 0;
const CSS = {
table: 'esri-widget__table',
th: 'esri-feature__field-header',
td: 'esri-feature__field-data',
};
@subclass('cov.popups.TaxLotPopup.Content')
class Content extends Widget {
@property()
graphic!: esri.Graphic;
@property()
@renderable()
accessorValues: tsx.JSX.Element[] = [];
constructor(properties: ContentProperties) {
super(properties);
}
postInitialize() {
whenOnce(this, 'graphic', this.getAccessorValues.bind(this));
}
getAccessorValues(): void {
const { graphic, accessorValues } = this;
const { layer, attributes } = graphic;
const objectId = attributes[(layer as esri.FeatureLayer).objectIdField] as number;
(layer as esri.FeatureLayer)
.queryRelatedFeatures({
outFields: ['*'],
relationshipId: 0,
objectIds: [objectId],
})
.then((result: any) => {
const features = result[objectId].features;
if (features.length) {
features.forEach((feature: any): void => {
const { attributes } = feature;
accessorValues.push(
<tr key={KEY++}>
<td class={CSS.td}>
<strong>Tax Account {attributes.ACCOUNT_ID}</strong>
</td>
<td>Land / Improvement Values</td>
</tr>,
);
accessorValues.push(
<tr key={KEY++}>
<th class={CSS.th}>Assessed Value</th>
<td class={CSS.td}>
${attributes.AV_LAND.toLocaleString('en')} / ${attributes.AV_IMPR.toLocaleString('en')}
</td>
</tr>,
);
accessorValues.push(
<tr key={KEY++}>
<th class={CSS.th}>Real Market Value</th>
<td class={CSS.td}>
${attributes.RMV_LAND.toLocaleString('en')} / ${attributes.RMV_IMPR.toLocaleString('en')}
</td>
</tr>,
);
});
}
})
.catch((error: any) => {
console.log(error);
});
}
render(): tsx.JSX.Element {
const attributes = this.graphic.attributes;
if (attributes.BNDY_CLIPPED) {
return (
<p>
Note: This tax lot is clipped to the City of Vernonia area spatial extent. No tax lot data is provided here.
Please visit the{' '}
<a href="https://www.columbiacountyor.gov/departments/Assessor" target="_blank" rel="noopener">
Columbia County Assessor's
</a>{' '}
web site for tax lot information.
</p>
);
}
return (
<table class={CSS.table}>
{/* tax lot id */}
<tr>
<th class={CSS.th}>Tax Lot</th>
{attributes.VERNONIA === 1 ? (
<td class={CSS.td}>
<a href={`https://www.vernonia-or.gov/tax-lot/${attributes.TAXLOT_ID}/`} target="_blank">
{attributes.TAXLOT_ID}
</a>
</td>
) : (
<td class={CSS.td}>{attributes.TAXLOT_ID}</td>
)}
</tr>
{/* tax map */}
<tr>
<th class={CSS.th}>Tax Map</th>
<td class={CSS.td}>
<a
href={`http://172.16.17.32/geomoose2/taxlots_map_images/${attributes.TAXMAP}`}
target="_blank"
rel="noopener"
>
{`${attributes.TOWN}${attributes.TOWN_DIR}${attributes.RANGE}${attributes.RANGE_DIR} ${attributes.SECTION} ${attributes.QTR}${attributes.QTR_QTR}`}
</a>
</td>
</tr>
{/* owner */}
<tr>
<th class={CSS.th}>Owner</th>
<td class={CSS.td}>{attributes.OWNER}</td>
</tr>
{/* address */}
{attributes.ADDRESS ? (
<tr>
<th class={CSS.th}>Address (Primary Situs)</th>
<td class={CSS.td}>
<a
href={`https://www.google.com/maps/place/${attributes.ADDRESS.split(' ').join('+')}+${
attributes.CITY
}+${attributes.STATE}+${attributes.ZIP}/data=!3m1!1e3`}
target="_blank"
rel="noopener"
>
{attributes.ADDRESS}
</a>
</td>
</tr>
) : null}
{/* area */}
<tr>
<th class={CSS.th}>Area</th>
<td class={CSS.td}>
<span style="margin-right:0.75rem;">{`${attributes.ACRES} acres`}</span>
<span>{`${attributes.SQ_FEET.toLocaleString()} sq ft`}</span>
</td>
</tr>
{/* tax accounts */}
<tr>
<th class={CSS.th}>Tax Account(s)</th>
<td class={CSS.td}>
{attributes.ACCOUNT_IDS.split(',').map((accountId: string) => {
return (
<a
style="margin-right:0.75rem;"
href={`http://www.helioncentral.com/columbiaat/MainQueryDetails.aspx?AccountID=${accountId}&QueryYear=2021&Roll=R`}
target="_blank"
rel="noopener"
>
{accountId}
</a>
);
})}
</td>
</tr>
{/* assessor values */}
{this.accessorValues}
</table>
);
}
}
@subclass('cov.popups.TaxLotPopup')
export default class TaxLotPopup extends PopupTemplate {
@property()
title = `{TAXLOT_ID}`;
@property()
outFields = ['*'];
@property()
customContent = new CustomContent({
outFields: ['*'],
creator: (evt: any): Widget => {
return new Content({
graphic: evt.graphic,
});
},
});
@property()
content = [this.customContent];
}
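// Usage sketch (taxLotLayer is a hypothetical FeatureLayer exposing the attributes referenced above):
//
//   import TaxLotPopup from './popups/TaxLotPopup';
//   taxLotLayer.popupTemplate = new TaxLotPopup();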
|
before. Outside the compound, Cuban security men kept an eye on several hundred locals, who had gathered to cheer and to wave little Cuban and American flags. A few weeks later, an unmarked U.S. government plane landed at an airstrip in Havana, carrying the last person in the world the Castros might be expected to welcome: John Brennan, the director of the C.I.A. Brennan was there to meet with Alejandro Castro and discuss increasing intelligence coöperation between the two countries. Brennan considered Cuba's spy agencies the most capable in Latin America, and hoped to work with them against drug cartels and terrorist networks. Brennan's enthusiasm wasn't universally shared in the U.S. intelligence community. Some officials feared that Cuba could exploit any openings to expand its operations against the United States. Others, though, saw the idea of greater coöperation as an embodiment of the old adage "If you can't beat 'em, join 'em." The C.I.A., which prides itself on being the world's best intelligence service, doesn't advertise the fact that it has repeatedly been outplayed by the spy networks of an impoverished Caribbean state. But, over the years, Cuba's intelligence officers have been remarkably successful at recruiting Americans. "They've penetrated just about anybody that the agency has ever tried to run against them," James Cason, who was the head of the U.S. Interests Section in the two-thousands, said. "They basically beat us." After the Cold War ended and Russia more or less abandoned Havana as a military outpost, the C.I.A. concentrated less on Cuba. But Cuban intelligence agencies never took their eyes off the U.S. "Everything that they did focussed on us," Cason said. At one point, Cuban security services assigned a battalion of intelligence officers (estimates range from hundreds to thousands) to monitor the U.S. Interests Section. John Caulfield, a former head of the Interests Section, used to tell his counterparts, "Frankly, I think you have vastly overestimated my capability of destabilizing your society." Brennan's talks with Alejandro Castro took place at a discreet government guesthouse, where a day of formal negotiations was followed by a banquet featuring a spit-roasted pig. The Cuban government has long cast the C.I.A. as the ultimate enemy, dedicating large portions of a museum, the Denouncement Memorial, to railing against the agency's purported offenses ("conspiracies to assassinate the commander in chief"). Nevertheless, U.S. officials said that, during the talks, Cuban leaders made it clear that they respected the C.I.A., and, in fact, found it more reliable than the State Department, which, during George W. Bush's Administration, had aided programs intended to undermine the Cuban government. Rhodes sometimes joked with Alejandro Castro, "Who thought that the C.I.A. would be the agency which the Cubans would trust!" Brennan and Alejandro Castro agreed on a series of steps to build confidence. One called for the Cubans to post an officer in Washington to act as a formal liaison between the two countries' intelligence agencies. In the end, the Cubans didn't send a liaison officer. American officials speculated that Alejandro Castro had been undermined by hard-liners in his system who opposed improving relations. Alejandro, in turn, complained that the C.I.A. didn't follow through with its commitments, and said that he believed Brennan was impeded by Cuba hawks at the agency.
"The American and Cuban publics overwhelmingly support more engagement," Rhodes said in an interview. "But there are antibodies em- bedded in both governments that don't want to let go of the conflict." As Obama prepared for his visit, in March, , U.S. diplomats started to brief the Cubans on the army of security men, transport aircraft, and armored limousines that would descend on the island. To Cuban hard-liners, "it probably looked like their long- feared invasion," John Caulfield said. The Americans were thrilled with the pageantry. On March nd, Obama gave a speech about democracy and human rights, which was televised un- censored in Cuba. "I have come here to extend the hand of friendship to the Cuban people," he said. During a base- ball game attended by the two coun- tries' Presidents and thousands of Cu- bans, Rhodes introduced Alejandro Castro and his young daughter to Obama, a public gesture of good will. The détente brought some rapid changes to the island, including a surge in American tourists---from ninety thousand in to six hundred thou- sand last year. Companies from Europe and the U.S. rushed to invest, and Miami-style bars and restaurants opened in Havana. Rihanna went for a photo shoot. The makers of the "Fast "And, should you ever lose the key to the city, I hid another one here." |
#include "undirected/planar_dual_graph_maker.hpp"
#include <map>
#ifdef UNIT_TESTS
#include "gtest/gtest_prod.h"
#endif
/**
* @file undirected/planar_dual_graph_maker.cpp
*
 * @brief Builds the dual graph of a planar embedding: the faces of the
 *        embedding become the dual nodes, and adjacent faces are connected
 *        by dual edges.
*/
namespace Wailea {
namespace Undirected {
using namespace std;
void PlanarDualGraphMaker::makeDualGraph(
Graph& src,
EmbeddedGraph& emb,
DualGraph& dual
) {
// Step 1. Make an EmbeddedGraph as a copy of the input graph.
copyInputGraph(src, emb);
// Step 2. Connect half edges to form face cycles.
findFaces(emb, dual);
    // Step 3. Make dual edges to connect adjacent faces.
findDualEdges(emb, dual);
// Step 4. Make inter-graph forward links to src.
makeForwardLinks(emb, dual);
}
void PlanarDualGraphMaker::copyInputGraph(
Graph& src,
EmbeddedGraph& emb
) {
vector<pair<node_list_it_t,node_ptr_t>> nodePairs;
vector<pair<edge_list_it_t,edge_ptr_t>> edgePairs;
auto nitPair = src.nodes();
for (auto nit = nitPair.first; nit != nitPair.second; nit++) {
auto np = make_unique<EmbeddedNode>();
np->pushIGBackwardLink(nit);
nodePairs.push_back(make_pair(nit,std::move(np)));
}
auto eitPair = src.edges();
for (auto eit = eitPair.first; eit != eitPair.second; eit++) {
auto ep = make_unique<EmbeddedEdge>();
ep->pushIGBackwardLink(eit);
edgePairs.push_back(make_pair(eit,std::move(ep)));
}
src.copySubgraph(nodePairs, edgePairs, emb);
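    // Initialize the two half edges of each copied edge: each half edge points
    // back to its embedded edge and records its own source and destination
    // nodes (mHalfEdge1 runs N1 -> N2, mHalfEdge2 runs N2 -> N1).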
for (auto eit = emb.edges().first; eit != emb.edges().second; eit++) {
auto& E = dynamic_cast<EmbeddedEdge&>(*(*eit));
auto& N1 = E.incidentNode1();
auto& N2 = E.incidentNode2();
auto& HE1 = E.mHalfEdge1;
auto& HE2 = E.mHalfEdge2;
HE1.mEmbeddedEdge = E.backIt();
HE2.mEmbeddedEdge = E.backIt();
HE1.mTheOtherHalfOn1 = false;
HE2.mTheOtherHalfOn1 = true;
HE1.mSrcNode = N1.backIt();
HE1.mDstNode = N2.backIt();
HE2.mSrcNode = N2.backIt();
HE2.mDstNode = N1.backIt();
}
}
list<node_list_it_t> PlanarDualGraphMaker::initializeUnprocessedQueues(
EmbeddedGraph& emb
) {
list<node_list_it_t> nodesPending;
for (auto nit = emb.nodes().first; nit != emb.nodes().second; nit++) {
auto& N = dynamic_cast<EmbeddedNode&>(*(*nit));
/** Place the node into 'pending' queue.
* During the processing loop far below, if a node has no more half
* edge to explore, it will be removed from the middle of the queue.
*
* mItIntoNodesPending is used to remember the location in the queue
* to remove it.
*/
if (N.degree() > 0) {
N.mItIntoNodesPending
= nodesPending.insert(nodesPending.end(), N.backIt());
for (auto eit = N.incidentEdges().first;
eit != N.incidentEdges().second; eit++) {
auto& E = dynamic_cast<EmbeddedEdge&>(*(*(*eit)));
auto& he1 = E.mHalfEdge1;
auto& he2 = E.mHalfEdge2;
auto pos = N.mEdgesPending.insert(N.mEdgesPending.end(), *eit);
if ( he1.mSrcNode == N.backIt()) {
he1.mItIntoEdgesPending = pos;
}
else {
he2.mItIntoEdgesPending = pos;
}
}
}
}
return nodesPending; //rvo
}
void PlanarDualGraphMaker::findFaces(
EmbeddedGraph& emb,
DualGraph& dual
) {
    // Handle K1: a single isolated node has no edges and exactly one face.
if (emb.numNodes()==1 && emb.numEdges()==0) {
dual.addNode(make_unique<EmbeddedFace>());
}
list<node_list_it_t> nodesPending = initializeUnprocessedQueues(emb);
while (nodesPending.size() > 0) {
// Find an unprocessed node.
auto& N = dynamic_cast<EmbeddedNode&>(*(*(*(nodesPending.begin()))));
while (N.mEdgesPending.size() > 0 ) {
// Find an unprocessed halfedge
auto& E = dynamic_cast<EmbeddedEdge&>(
*(*(*(N.mEdgesPending.begin()))));
auto& AN = dynamic_cast<EmbeddedNode&>(E.adjacentNode(N));
/** The initial edge with adjacent nodes look like the following:
*
*
* <-- mSrcNode:he:mDstNode -->
*
* N <---> E <---> AN
* sit eit/HEOn1 dit
*/
node_list_it_t sit = N.backIt(); // source node
edge_list_it_t eit = E.backIt(); // edge
bool HEOn1 = E.mHalfEdge1.mSrcNode == N.backIt();
node_list_it_t dit = AN.backIt(); // dest node
/* The half edges around a face will be
         * accumulated in the following lists.
*/
list<edge_list_it_t> cycleEdges;
list<bool> cycleHalfEdgesOn1;
cycleEdges.push_back(eit);
cycleHalfEdgesOn1.push_back(HEOn1);
if (HEOn1) {
N.mEdgesPending.erase(E.mHalfEdge1.mItIntoEdgesPending);
}
else {
N.mEdgesPending.erase(E.mHalfEdge2.mItIntoEdgesPending);
}
findNextHalfEdge(sit, eit, HEOn1, dit);
#ifdef UNIT_TESTS
map<Node*,long> nodeMap;
nodeMap[&N] = 1;
#endif
// Explore the half edges and form a face cycle
while (sit != N.backIt()) {
auto& S = dynamic_cast<EmbeddedNode&>(*(*sit));
auto& E = dynamic_cast<EmbeddedEdge&>(*(*eit));
#ifdef UNIT_TESTS
if (nodeMap.find(&S)!=nodeMap.end()) {
auto& N = dynamic_cast<NumNode&>(S.IGBackwardLinkRef());
cerr << "!!! ERROR: Duplicate node [" << N.num()
<< "] found during dual graph generation. !!!\n";
// mDupFound = true;
}
else {
nodeMap[&S] = 1;
}
#endif
cycleEdges.push_back(eit);
cycleHalfEdgesOn1.push_back(HEOn1);
if (HEOn1) {
S.mEdgesPending.erase(E.mHalfEdge1.mItIntoEdgesPending);
}
else {
S.mEdgesPending.erase(E.mHalfEdge2.mItIntoEdgesPending);
}
if (S.mEdgesPending.size()==0) {
nodesPending.erase(S.mItIntoNodesPending);
}
findNextHalfEdge(sit, eit, HEOn1, dit);
}
if (N.mEdgesPending.size()==0) {
nodesPending.erase(N.mItIntoNodesPending);
}
// Now we have a face cycle. Create an EmbeddedFace.
makeOneFace(
dual, std::move(cycleEdges), std::move(cycleHalfEdgesOn1));
}
}
}
void PlanarDualGraphMaker::makeOneFace(
DualGraph& dual,
list<edge_list_it_t>&& cycleEdges,
list<bool>&& cycleHalfEdgesOn1
) {
auto& F = dynamic_cast<EmbeddedFace&>(
dual.addNode(make_unique<EmbeddedFace>()));
auto ceIt = cycleEdges.begin();
auto cheIt = cycleHalfEdgesOn1.begin();
auto ceItPrev = cycleEdges.begin();
auto cheItPrev = cycleHalfEdgesOn1.begin();
for (; ceIt != cycleEdges.end(); ceIt++,cheIt++) {
auto& E = dynamic_cast<EmbeddedEdge&>(*(*(*ceIt)));
auto& HE = (*cheIt)?E.mHalfEdge1:E.mHalfEdge2;
HE.mEmbeddedFace = F.backIt();
if (ceIt != cycleEdges.begin()) {
auto& Eprev = dynamic_cast<EmbeddedEdge&>(*(*(*ceItPrev)));
auto& HEprev = (*cheItPrev)?Eprev.mHalfEdge1:Eprev.mHalfEdge2;
HE.mPrevEdge = Eprev.backIt();
HE.mPrevHalfEdgeOn1 = (*cheItPrev);
HEprev.mNextEdge = E.backIt();
HEprev.mNextHalfEdgeOn1 = (*cheIt);
ceItPrev = ceIt;
cheItPrev = cheIt;
}
}
ceIt = cycleEdges.begin();
cheIt = cycleHalfEdgesOn1.begin();
auto& E = dynamic_cast<EmbeddedEdge&>(*(*(*ceIt)));
auto& HE = (*cheIt)?E.mHalfEdge1:E.mHalfEdge2;
auto& Eprev = dynamic_cast<EmbeddedEdge&>(*(*(*ceItPrev)));
auto& HEprev = (*cheItPrev)?Eprev.mHalfEdge1:Eprev.mHalfEdge2;
HE.mPrevEdge = Eprev.backIt();
HE.mPrevHalfEdgeOn1 = (*cheItPrev);
HEprev.mNextEdge = E.backIt();
HEprev.mNextHalfEdgeOn1 = (*cheIt);
F.mCycleEdges = std::move(cycleEdges);
F.mCycleHalfEdgesOn1 = std::move(cycleHalfEdgesOn1);
}
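/** Advances (sit, eit, HEOn1, dit) to the next half edge of the current face:
 *  at the destination node, the traversal continues with the predecessor of
 *  the current edge in that node's cyclic incidence list (wrapping around at
 *  the front), which traces the face boundary in the embedding's orientation.
 */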
void PlanarDualGraphMaker::findNextHalfEdge(
node_list_it_t& sit, // (io): source node pointer
edge_list_it_t& eit, // (io): edge pointer
    bool& HEOn1,          // (io): true if the half edge is mHalfEdge1
node_list_it_t& dit // (io): destination node pointer
) {
auto& E = dynamic_cast<EmbeddedEdge&>(*(*eit));
auto& D = dynamic_cast<EmbeddedNode&>(*(*dit));
node_incidence_it_t iit;
if (E.incidentNode1().backIt()==D.backIt()) {
iit = E.incidentBackItNode1();
}
else {
iit = E.incidentBackItNode2();
}
if (iit == D.incidentEdges().first) {
iit = D.incidentEdges().second;
}
iit--;
auto& Snext = D;
auto& Enext = dynamic_cast<EmbeddedEdge&>(*(*(*(iit))));
auto& Dnext = dynamic_cast<EmbeddedNode&>(Enext.adjacentNode(Snext));
HEOn1 = Enext.mHalfEdge1.mSrcNode == Snext.backIt();
sit = Snext.backIt();
eit = Enext.backIt();
dit = Dnext.backIt();
}
void PlanarDualGraphMaker::findDualEdges(
EmbeddedGraph& emb,
DualGraph& dual
) {
/** For each EmbeddedEdge, create a dual edge and connect two
* Embedded faces.
*/
for (auto eIt = emb.edges().first; eIt != emb.edges().second; eIt++) {
auto& E = dynamic_cast<EmbeddedEdge&>(*(*eIt));
auto& HE1 = E.mHalfEdge1;
auto& HE2 = E.mHalfEdge2;
auto& F1 = dynamic_cast<EmbeddedFace&>(*(*HE1.mEmbeddedFace));
auto& F2 = dynamic_cast<EmbeddedFace&>(*(*HE2.mEmbeddedFace));
auto& DE = dynamic_cast<DualEdge&>(
dual.addEdge(make_unique<DualEdge>(), F1, F2));
E.mDualEdge = DE.backIt();
DE.mEmbeddedEdge = E.backIt();
DE.pushIGBackwardLink(E.IGBackwardLink());
}
/** Reorder incident dual edges of each face according
* to the surrounding half edges.
*/
for (auto fIt = dual.nodes().first; fIt != dual.nodes().second; fIt++) {
auto& F = dynamic_cast<EmbeddedFace&>(*(*fIt));
list<edge_list_it_t> orderedDualEdges;
for (auto eIt : F.mCycleEdges) {
auto& E = dynamic_cast<EmbeddedEdge&>(*(*eIt));
orderedDualEdges.push_back(E.mDualEdge);
}
F.reorderIncidence(std::move(orderedDualEdges));
}
}
void PlanarDualGraphMaker::makeForwardLinks(
EmbeddedGraph& emb,
DualGraph& dual
) {
for (auto nit = emb.nodes().first; nit != emb.nodes().second; nit++) {
auto& N = dynamic_cast<EmbeddedNode&>(*(*nit));
N.IGBackwardLinkRef().pushIGForwardLink(nit);
}
for (auto eit = emb.edges().first; eit != emb.edges().second; eit++) {
auto& EE = dynamic_cast<EmbeddedEdge&>(*(*eit));
EE.IGBackwardLinkRef().pushIGForwardLink(eit);
}
}
}// namespace Undirected
}// namespace Wailea
// repository: swantescholz/cpptdd
#include "Util.h"
#include "Hacks.h"
#include "tdd.h"
#include "ignoreTests.h"
#include <cmath>
namespace tdd {
Util::Util() {}
Util::~Util() {}
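// Relative comparison: a and b are considered equal when their difference is
// within epsilon times the larger of the two magnitudes, so the tolerance
// scales with the size of the operands.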
bool Util::almostEqual(double a, double b, double epsilon) {
return std::abs(a - b) <= ( (std::abs(a) < std::abs(b) ? std::abs(b) : std::abs(a)) * epsilon);
}
Test(min/max works with variadic templates) {
assertEqual(util.min(4,3,2,5), 2);
assertClose(util.min(4.3,3.3333333,5.9), 10.0/3);
}
Test(almost equal works) {
assertFalse(util.almostEqual(1.0,1.01));
assertTrue(util.almostEqual(1.0,1.01,0.1));
}
} // namespace tdd
/**
 * Makes sure that a custom Vaadin service that is not a Vaadin servlet service can be used when desired.
*
*/
public class CustomVaadinServiceImplementationTest {
@Test
public void StaticFileServer_Constructor_uses_VaadinService()
throws NoSuchMethodException, SecurityException {
Assert.assertNotNull(
StaticFileServer.class.getConstructor(VaadinService.class));
}
@Test
public void VaadinServlet_uses_VaadinService_getService()
throws NoSuchMethodException, SecurityException {
Assert.assertNotNull(VaadinServlet.class.getDeclaredMethod(
"createStaticFileHandler", VaadinService.class));
Method mcreateVaadinResponse = VaadinServlet.class.getDeclaredMethod(
"createVaadinResponse", HttpServletResponse.class);
Assert.assertNotNull(mcreateVaadinResponse);
Assert.assertEquals(VaadinResponse.class,
mcreateVaadinResponse.getReturnType());
Method mgetService = VaadinServlet.class
.getDeclaredMethod("getService");
Assert.assertNotNull(mgetService);
Assert.assertEquals(VaadinService.class, mgetService.getReturnType());
}
@Test
public void VaadinServletRequest_uses_VaadinService_getService()
throws NoSuchMethodException, SecurityException {
Assert.assertNotNull(VaadinServletRequest.class
.getConstructor(HttpServletRequest.class, VaadinService.class));
Method mgetService = VaadinServletRequest.class
.getDeclaredMethod("getService");
Assert.assertNotNull(mgetService);
Assert.assertEquals(VaadinService.class, mgetService.getReturnType());
}
@Test
public void VaadinServletResponse_uses_VaadinService_getService()
throws NoSuchMethodException, SecurityException {
Assert.assertNotNull(VaadinServletResponse.class.getConstructor(
HttpServletResponse.class, VaadinService.class));
Method mgetService = VaadinServletResponse.class
.getDeclaredMethod("getService");
Assert.assertNotNull(mgetService);
Assert.assertEquals(VaadinService.class, mgetService.getReturnType());
}
@Test
public void PushRequestHandler_uses_VaadinService_createPushHandler()
throws NoSuchMethodException, SecurityException {
Method mgetService = PushRequestHandler.class
.getDeclaredMethod("createPushHandler", VaadinService.class);
Assert.assertNotNull(mgetService);
}
@Test
public void VaadinResponse_sendError() throws NoSuchMethodException,
SecurityException, ServiceException, IOException {
VaadinService vs = new MockVaadinService();
VaadinHttpServletResponseI response = Mockito
.mock(VaadinHttpServletResponseI.class);
Mockito.doThrow(new RuntimeException(
"Please check that you really nead more than a HttpServletResponse"))
.when(response)
.sendError(Mockito.anyInt(), Mockito.anyString());
vs.handleSessionExpired(Mockito.mock(VaadinRequest.class), response);
}
abstract class AbstractMockVaadinService extends VaadinService {
private static final long serialVersionUID = 1L;
public static final String TEST_SESSION_EXPIRED_URL = "TestSessionExpiredURL";
@Override
protected RouteRegistry getRouteRegistry() {
// ignore
return null;
}
@Override
protected PwaRegistry getPwaRegistry() {
// ignore
return null;
}
@Override
public String getContextRootRelativePath(VaadinRequest request) {
// ignore
return null;
}
@Override
public String getMimeType(String resourceName) {
// ignore
return null;
}
@Override
protected boolean requestCanCreateSession(VaadinRequest request) {
// ignore
return false;
}
@Override
public String getServiceName() {
// ignore
return null;
}
@Override
public String getMainDivId(VaadinSession session,
VaadinRequest request) {
// ignore
return null;
}
@Override
public URL getStaticResource(String url) {
// ignore
return null;
}
@Override
public URL getResource(String url) {
// ignore
return null;
}
@Override
public InputStream getResourceAsStream(String url) {
// ignore
return null;
}
@Override
public String resolveResource(String url) {
// ignore
return null;
}
@Override
protected VaadinContext constructVaadinContext() {
// ignore
return null;
}
}
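    /**
     * Minimal concrete mock: exposes no request handlers and returns a
     * SystemMessages mock whose session-expired URL is the test constant
     * defined above.
     */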
class MockVaadinService extends AbstractMockVaadinService {
private static final long serialVersionUID = 1L;
@Override
public Iterable<RequestHandler> getRequestHandlers() {
return new ArrayList<>();
}
@Override
public SystemMessages getSystemMessages(Locale locale,
VaadinRequest request) {
SystemMessages systemMessages = Mockito.mock(SystemMessages.class);
Mockito.when(systemMessages.getSessionExpiredURL())
.thenReturn(TEST_SESSION_EXPIRED_URL);
return systemMessages;
}
}
interface VaadinHttpServletResponseI
extends VaadinResponse, HttpServletResponse {
}
}
F. Henri Klickmann
Personal life
Klickmann was born on February 4, 1885 in Chicago, Illinois. His father, Rudolph Klickmann, was a German immigrant. His mother Carolina (née Laufer) Klickmann was originally from Illinois. Frank was the second-born of five children: Emily (b. 1881), Ida (b. 1887), Florence (b. 1889), and Robert (b. 1890).
In 1908, Klickmann married Jeanette Klickmann. It was his first marriage and her second. They lived in Chicago for an extensive period of time, before moving to Manhattan between 1922 and 1923. They remained in New York for the remainder of their lives.
On June 25, 1966, Klickmann died at the Knickerbocker Hospital in New York. He was 81 years old.
Career
In 1906, Klickmann's first publication, Oh Babe, appeared under the name F. Henri Klickmann. Many of his early "rag" songs were co-written with the bandleader Paul Biese. Together they composed the songs The Maurice Walk and The Murray Walk. The former was written for the vaudeville performers Maurice and Florence Walton, and the latter for the silent film actress Mae Murray. Klickmann occasionally played in Biese's orchestra and arranged music for them.
Klickmann's songwriting career began with My Sweetheart Went Down With the Ship, a song inspired by the sinking of the Titanic. One of his first hits was a 1914 anti-war song, Uncle Sam Won't Go to War, co-written with Al Dubin. Klickmann arranged music for various music companies, including the McKinley Music Company of Chicago. In 1917, he ended his working relationship with the Paul Biese Orchestra in order to focus all his attention on arranging music. The decision gave Klickmann more time to devote to the McKinley Music Company.
By the 1920s, Klickmann's work was being published by the largest publisher of popular sheet music in the country, Waterson, Berlin & Snyder, Inc. At this time, he was rearranging composer Zez Confrey's songs into more readable arrangements for accompanying instruments, and working with the lyricist Harold G. Frost.
In 1921, he became a member of the American Society of Composers, Authors and Publishers.
In 1923, Klickmann was hired full-time by Jack Mills Music, Inc. Under Mills, Klickmann published novelty songs, a book about jazz performance (1926), and jazz band orchestrations. He arranged music for the Six Brown Brothers and Eddie Cantor, and composed music for the ukulele and accordion. He also collaborated on a project with cartoonist Rube Goldberg, based on Goldberg's character, Boob McNutt.
Klickmann edited various books containing the popular pieces of musicians, Wendell Hall, Buddy Rich, and Tommy Dorsey.
During the 1930s, work started to wane. By 1942, Klickmann was self-employed and working for various music publishers. He also co-led a popular swing and jazz group with the trombonist Fred Norman and the backing singers Millie Bosman and Irene Redfield. In the mid-1950s, Klickmann retired.
Using diffusion imaging to study human connectional anatomy. Diffusion imaging can be used to estimate the routes taken by fiber pathways connecting different regions of the living brain. This approach has already supplied novel insights into in vivo human brain anatomy. For example, by detecting where connection patterns change, one can define anatomical borders between cortical regions or subcortical nuclei in the living human brain for the first time. Because diffusion tractography is a relatively new technique, however, it is important to assess its validity critically. We discuss the degree to which diffusion tractography meets the requirements of a technique to assess structural connectivity and how its results compare to those from the gold-standard tract tracing methods in nonhuman animals. We conclude that although tractography offers novel opportunities it also raises significant challenges to be addressed by further validation studies to define precisely the limitations and scope of this exciting new technique. |
// cilksan/old/shadow_mem.h
// -*- C++ -*-
#ifndef __SHADOW_MEM__
#define __SHADOW_MEM__
#include "csan.h"
#include "debug_util.h"
#include "frame_data.h"
// Forward declarations
class CilkSanImpl_t;
class SimpleShadowMem;
// class Shadow_Memory {
// SimpleShadowMem *shadow_mem;
// public:
// ~Shadow_Memory() { destruct(); }
// void init(CilkSanImpl_t &CilkSanImpl);
// bool setOccupied(bool is_read, uintptr_t addr, size_t mem_size);
// void clearOccupied();
// void freePages();
// // Inserts access, and replaces any that are already in the shadow memory.
// template <bool is_read>
// void insert_access(const csi_id_t acc_id, uintptr_t addr, size_t mem_size,
// FrameData_t *f);
// // Returns true if ANY bytes between addr and addr+mem_size are in the shadow
// // memory.
// template <bool is_read>
// __attribute__((always_inline)) bool does_access_exists(uintptr_t addr,
// size_t mem_size) const;
// __attribute__((always_inline)) void clear(size_t start, size_t size);
// void record_alloc(size_t start, size_t size, FrameData_t *f,
// csi_id_t alloca_id);
// void record_free(size_t start, size_t size, FrameData_t *f, csi_id_t free_id,
// MAType_t type);
// void clear_alloc(size_t start, size_t size);
// __attribute__((always_inline)) void
// check_race_with_prev_read(const csi_id_t acc_id, uintptr_t addr,
// size_t mem_size, bool on_stack,
// FrameData_t *f) const;
// template <bool is_read>
// __attribute__((always_inline)) void
// check_race_with_prev_write(const csi_id_t acc_id, MAType_t type,
// uintptr_t addr, size_t mem_size, bool on_stack,
// FrameData_t *f) const;
// __attribute__((always_inline)) void
// update_with_write(const csi_id_t acc_id, MAType_t type, uintptr_t addr,
// size_t mem_size, bool on_stack, FrameData_t *f);
// __attribute__((always_inline)) void
// update_with_read(const csi_id_t acc_id, uintptr_t addr, size_t mem_size,
// bool on_stack, FrameData_t *f);
// __attribute__((always_inline)) void
// check_and_update_write(const csi_id_t acc_id, MAType_t type, uintptr_t addr,
// size_t mem_size, bool on_stack, FrameData_t *f);
// void destruct();
// };
#endif // __SHADOW_MEM__
#ifdef PEGASUS_OS_AIX
#ifndef __UNIX_CERTIFICATEAUTHORITY_PRIVATE_H
#define __UNIX_CERTIFICATEAUTHORITY_PRIVATE_H
#endif
#endif
package hamgo
import (
"os"
"path/filepath"
"strings"
)
func writeString(filename string, content string) bool {
f := openFile(filename)
if f == nil {
return false
}
defer f.Close()
_, err := f.WriteString(content)
if err != nil {
println("append file failed!", err.Error())
return false
}
return true
}
func writeBytes(filename string, content []byte) bool {
f := openFile(filename)
if f == nil {
return false
}
defer f.Close()
_, err := f.Write(content)
if err != nil {
println("append file failed!", err.Error())
return false
}
return true
}
func isFileExist(filename string) bool {
var exist = true
if _, err := os.Stat(filename); os.IsNotExist(err) {
exist = false
}
return exist
}
func deleteFile(filename string) bool {
err := os.RemoveAll(filename)
if err != nil {
return false
}
return true
}
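// openFile opens filename for appending, creating any missing parent
// directories first; it returns nil if the directory or file cannot be
// created.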
func openFile(filename string) *os.File {
var f *os.File
var err error
if !isFileExist(filename) {
err = os.MkdirAll(filepath.Dir(filename), 0755)
if err != nil {
println("mk dir failed ", filename, " failed,", err)
return nil
}
}
f, err = os.OpenFile(filename, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0755)
if err != nil {
println("open file failed ", filename, " failed,", err)
return nil
}
return f
}
func renameFile(filename, newname string) bool {
	return os.Rename(filename, newname) == nil
}
func currentPath(filename string) string {
index := strings.LastIndex(filename, "/")
return filename[:index+1]
}
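// exampleUsage is a sketch (not part of the original package API) showing how
// these helpers might be combined; the file names below are hypothetical.
func exampleUsage() {
	logPath := "logs/app.log"
	if !isFileExist(logPath) {
		// openFile (called via writeString) creates logs/ on demand.
		writeString(logPath, "log created\n")
	}
	writeBytes(logPath, []byte("server started\n"))
	// Rotate the log and report the directory it lives in.
	if renameFile(logPath, logPath+".1") {
		println("rotated log in", currentPath(logPath))
	}
}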
OBJECTIVE To study the effects of Yupingfeng Powder (YPFP) on cisplatin (DDP) induced oxidative damage of organs in hepatocellular carcinoma mice. METHODS A total of 2 x 10 Hepa1-6 cells were inoculated subcutaneously into the right flank of 15 C57BL/6 mice to establish a mouse model of hepatocellular carcinoma. Then the mice were randomly divided into three groups, i.e., the model group, the DDP group, and the DDP + YPFP group, 5 in each group. Mice in the DDP group and the DDP + YPFP group were intraperitoneally injected with DDP (2.5 mg/kg), once every three days for 2 weeks. Physiological saline was intraperitoneally injected into mice in the model group. Meanwhile, YPFP water decoction (25 g/kg) was given to mice in the DDP + YPFP group by gastrogavage once daily for 2 weeks. Corresponding distilled water was given by gastrogavage to mice in the DDP group and the model group. Fourteen days later, mice were sacrificed and the tumor inhibition ratio was calculated. The kidneys, livers, and lungs were weighed and the organ coefficients calculated. The activities of superoxide dismutase (SOD) and the content of malondialdehyde (MDA) in the tissue were detected. The pathologic changes were observed. RESULTS The tumor weight obviously decreased in the DDP group and the DDP + YPFP group when compared with the model group (P < 0.05, P < 0.01). Obvious oxidative damage existed in the kidneys and livers after induction by DDP. Oxidative damage also existed in the lungs to some extent. YPFP could obviously decrease the content of MDA and the activities of SOD in livers (P < 0.05), and increase the activities of SOD in lungs (P < 0.01). The pathologic changes showed the same effect trend. CONCLUSIONS YPFP could protect the organs (kidney, liver, lung) from the oxidative damage induced by DDP. Anti-oxidation is one of its mechanisms.
/** \mainpage
*
****************************************************************************
* Made for the DotDotFactory, by the Hogeschool Utrecht.
*
* Copyright The DotDotFactory ( 2018 - 2019 )
*
* Date : 26/01/2018
*
****************************************************************************
*
* \section License
*
 * TODO: add license text.
**************************************************************************/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <algorithm>
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
#include "freertos/event_groups.h"
#include "nvs.h"
#include "nvs_flash.h"
#include "esp_system.h"
#include "esp_spi_flash.h"
#include "esp_wifi.h"
#include "esp_event_loop.h"
#include "esp_adc_cal.h"
#include "rom/rtc.h"
#include "driver/rtc_io.h"
#include "driver/gpio.h"
#include "driver/adc.h"
#include "SystemVariables.hpp"
#include "Systemerrors.hpp"
#include "Setup.hpp"
#include "SdWriterController.hpp"
#include "SensorController.hpp"
#include "StandbyController.hpp"
#include "WifiController.hpp"
RTC_DATA_ATTR struct timeval SleepEnterTime;
EventGroupHandle_t GlobalEventGroupHandle;
struct timeval GlobalTimeValNow;
time_t GlobalStartTime;
extern "C" void app_main(void)
{
ESP_LOGI("MAIN", "Booting completed");
// Print wakeup cause
int WakeUpCause = esp_sleep_get_wakeup_cause();
switch(WakeUpCause) {
case ESP_SLEEP_WAKEUP_TIMER: ESP_LOGI("MAIN", "Woke up from a timer reset"); break;
case ESP_SLEEP_WAKEUP_EXT1: ESP_LOGI("MAIN", "Woke up from SD Card"); break;
case ESP_SLEEP_WAKEUP_EXT0: ESP_LOGI("MAIN", "Woke up from Motion Interrupt"); break;
default: ESP_LOGI("MAIN", "Woke up from normal reset"); break;
}
// Initialize GPIO
gpio_init_all();
// Initialize I2C
i2c_master_init();
// Check if sd card is present, else sleep
CheckForSdcard();
// Initialize flash
nvs_flash_init();
// Read errors from last run
error_flash_init();
// Initialize global event handle group
GlobalEventGroupHandle = xEventGroupCreate();
// Check if BOD was enabled, if so go to sleep again
int ResetCause = rtc_get_reset_reason(0);
ESP_LOGI("MAIN", "Reset reason: %d", ResetCause);
if(ResetCause == 15 && gpio_get_level(GPIO_CHARGE_DETECT) == 0) {
ESP_LOGI("MAIN", "Reset reason was BOD and no charger detecte, going to sleep");
esp_sleep_enable_timer_wakeup(600 * 1000000);
esp_deep_sleep_start();
}
// Set blink color and frequency
blink_set_led(GPIO_LED_GREEN, 10, 5000);
// Start blink task
xTaskCreatePinnedToCore(&blink_task, "blink_task", BLINKTASK_STACK_SIZE, NULL, BLINKTASK_PRIORITY, NULL, BLINKTASK_CORE_NUM);
// Feedback
ESP_LOGI("MAIN", "Creating SNTP task");
// Taskhandle for sntp task
TaskHandle_t SNTPTaskHandle;
// Start SNTP task. Wifi is also initialized here
xTaskCreatePinnedToCore(sntp_task, "sntp_task", SNTPTASK_STACK_SIZE, NULL, SNTPTASK_PRIORITY, &SNTPTaskHandle, SNTPTASK_CORE_NUM);
// Wait for task to be done
xEventGroupWaitBits(GlobalEventGroupHandle, SNTPTaskDoneFlag, pdTRUE, pdFALSE, portMAX_DELAY);
// Delete task
vTaskDelete(SNTPTaskHandle);
// Calculate and print sleep time
int sleep_time_ms = (GlobalTimeValNow.tv_sec - SleepEnterTime.tv_sec) * 1000 + (GlobalTimeValNow.tv_usec - SleepEnterTime.tv_usec) / 1000;
ESP_LOGI("MAIN", "Time spent in deep sleep: %d ms", sleep_time_ms);
// Build filename
char name[64];
BuildFileName(name, sizeof(name));
// Create SDWriter object
SDWriter *GlobalSDWriter = new SDWriter;
// Initialize card
GlobalSDWriter->InitSDMMC(SDMMC_INIT_RETRIES);
// Set filename used for writing
GlobalSDWriter->SetFileName(name);
// Create DataProcessor object
DataProcessor *GlobalDataHandler = new DataProcessor;
// Set timeout value for DataProcessor
GlobalDataHandler->SetTimeoutValue(TIMEOUT_TIME_SEC * 1000);
// Set triggers for data DataProcessor
GlobalDataHandler->SetTrigger(DP_SLEEP_THRESHOLD, DP_SLEEP_THRESHOLD, DP_SLEEP_THRESHOLD);
// Create DoubleBuffer object
DoubleBuffer *GlobalDoubleBuffer = new DoubleBuffer(*GlobalSDWriter);
// Create and run Standby task
StandbyController *sbc = new StandbyController(STANDBYCONT_PRIORITY);
// Create and run Sensor task
SensorController *st = new SensorController(SENSORTASK_PRIORITY, *GlobalDoubleBuffer, *GlobalDataHandler);
// Create and run SDWriter task
SdWriterController *sdw = new SdWriterController(WRITERTASK_PRIORITY, *GlobalDoubleBuffer, *GlobalSDWriter);
// Create and run Wifi task
WifiController *wt = new WifiController(WIFITASK_PRIORITY, *GlobalDataHandler);
// Initialization done
ESP_LOGI("MAIN", "Init done");
}
/*
* Copyright 2014 JBoss Inc
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.optaplanner.core.impl.domain.variable.descriptor;
import java.beans.PropertyDescriptor;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import org.optaplanner.core.impl.domain.common.PropertyAccessor;
import org.optaplanner.core.impl.domain.common.ReflectionPropertyAccessor;
import org.optaplanner.core.impl.domain.entity.descriptor.EntityDescriptor;
import org.optaplanner.core.impl.domain.variable.listener.VariableListener;
import org.optaplanner.core.impl.domain.variable.supply.Demand;
import org.optaplanner.core.impl.domain.variable.supply.Supply;
public abstract class VariableDescriptor {
protected final EntityDescriptor entityDescriptor;
protected final PropertyAccessor variablePropertyAccessor;
protected final String variableName;
private List<ShadowVariableDescriptor> shadowVariableDescriptorList = new ArrayList<ShadowVariableDescriptor>(4);
public VariableDescriptor(EntityDescriptor entityDescriptor,
PropertyDescriptor propertyDescriptor) {
this.entityDescriptor = entityDescriptor;
variablePropertyAccessor = new ReflectionPropertyAccessor(propertyDescriptor);
variableName = variablePropertyAccessor.getName();
}
// ************************************************************************
// Worker methods
// ************************************************************************
public EntityDescriptor getEntityDescriptor() {
return entityDescriptor;
}
public String getVariableName() {
return variableName;
}
public String getSimpleEntityAndVariableName() {
return entityDescriptor.getEntityClass().getSimpleName() + "." + variableName;
}
public Class<?> getVariablePropertyType() {
return variablePropertyAccessor.getPropertyType();
}
// ************************************************************************
// Shadows
// ************************************************************************
public void registerShadowVariableDescriptor(ShadowVariableDescriptor shadowVariableDescriptor) {
shadowVariableDescriptorList.add(shadowVariableDescriptor);
}
public List<ShadowVariableDescriptor> getShadowVariableDescriptorList() {
return shadowVariableDescriptorList;
}
// ************************************************************************
// Extraction methods
// ************************************************************************
public Object getValue(Object entity) {
return variablePropertyAccessor.executeGetter(entity);
}
public void setValue(Object entity, Object value) {
variablePropertyAccessor.executeSetter(entity, value);
}
@Override
public String toString() {
return getClass().getSimpleName() + "(" + variableName
+ " of " + entityDescriptor.getEntityClass().getName() + ")";
}
}
Interior Minister Gilad Erdan revokes permanent Jerusalem residency status of the driver for 2001 Dolphinarium attacks.
Interior Minister Gilad Erdan (Likud) has officially cancelled the permanent residency status of a Palestinian terrorist Sunday morning, negating the rights of Mahmoud Nadi, the driver for the suicide bomber responsible for the bombing at the Dolphinarium Disco in Tel Aviv in June 2001. That attack killed 21 people and wounded over 100 others.
Nadi was convicted of being an accomplice to manslaughter, of assisting in terror, and of helping to harbor an illegal alien in Israel. He was sentenced to ten years in Israeli prison for the offenses.
Erdan wrote to Nadi in making the decision, noting that "under these circumstances and in view of the severity of your actions, [assisting in the attack] is a blatant breach of trust as a resident of the State of Israel and the state."
"I decided to use the authority and cancel your permanent residence permits in Israel," Erdan added.
Erdan's decision effectively cancels Nadi's registration in the population census, revokes the validity of his teudat zehut identity card, and bars him from all rights and services, including social security and health insurance.
Erdan explained his decision as the outcome of the recent waves of terror.
"The State of Israel is currently suffering a wave of terror and incitement," Erdan said. "[Terrorists] are involved in carrying out attacks on the country's citizens, [and] help them and justify them, and incite others to commit crimes and murders."
He said, "These people are not able to continue to enjoy the status of permanent residents in the country, and I will work very hard to revoke this residency and deny them any economic benefit reaped from this grant."
It comes just one day after the Interior Minister announced that he was examining the possibility of expanding his powers to expel Arab terrorists from Jerusalem, precisely by revoking their residency permits.
This is the latest in a series of legal actions to crack down on terrorism since Tuesday's massacre of four Rabbis in Har Nof, Jerusalem. The terrorists, Ghassan and Uday Abu al Jamal, were also permanent residents - and could be the precedent for Erdan's decision.
Meanwhile, in an unprecedented move, the Israeli government has reportedly refused to release Ghassan and Uday's bodies as a deterrent gesture - the first time that Jerusalem has refused to sponsor the burial of terrorists.
// sdk/nodejs/helm/v3/helm.ts
// Copyright 2016-2021, Pulumi Corporation.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// *** WARNING: this file was generated by the pulumigen. ***
// *** Do not edit by hand unless you're certain you know what you are doing! ***
import * as pulumi from "@pulumi/pulumi";
import * as path from "../../path";
import { getVersion } from "../../utilities";
import * as yaml from "../../yaml/index";
/**
* Chart is a component representing a collection of resources described by an arbitrary Helm
* Chart. The Chart can be fetched from any source that is accessible to the `helm` command
* line. Values in the `values.yml` file can be overridden using `ChartOpts.values` (equivalent
* to `--set` or having multiple `values.yml` files). Objects can be transformed arbitrarily by
* supplying callbacks to `ChartOpts.transformations`.
*
* `Chart` does not use Tiller. The Chart specified is copied and expanded locally; the semantics
* are equivalent to running `helm template` and then using Pulumi to manage the resulting YAML
* manifests. Any values that would be retrieved in-cluster are assigned fake values, and
* none of Tiller's server-side validity testing is executed.
*
* ## Example Usage
* ### Local Chart Directory
*
* ```typescript
* import * as k8s from "@pulumi/kubernetes";
*
* const nginxIngress = new k8s.helm.v3.Chart("nginx-ingress", {
* path: "./nginx-ingress",
* });
* ```
* ### Remote Chart
*
* ```typescript
* import * as k8s from "@pulumi/kubernetes";
*
* const nginxIngress = new k8s.helm.v3.Chart("nginx-ingress", {
* chart: "nginx-ingress",
* version: "1.24.4",
* fetchOpts:{
* repo: "https://charts.helm.sh/stable",
* },
* });
* ```
* ### Set Chart values
*
* ```typescript
* import * as k8s from "@pulumi/kubernetes";
*
* const nginxIngress = new k8s.helm.v3.Chart("nginx-ingress", {
* chart: "nginx-ingress",
* version: "1.24.4",
* fetchOpts:{
* repo: "https://charts.helm.sh/stable",
* },
* values: {
* controller: {
* metrics: {
* enabled: true,
* }
* }
* },
* });
* ```
* ### Deploy Chart into Namespace
*
* ```typescript
* import * as k8s from "@pulumi/kubernetes";
*
* const nginxIngress = new k8s.helm.v3.Chart("nginx-ingress", {
* chart: "nginx-ingress",
* version: "1.24.4",
* namespace: "test-namespace",
* fetchOpts:{
* repo: "https://charts.helm.sh/stable",
* },
* });
* ```
* ### Chart with Transformations
*
* ```typescript
* import * as k8s from "@pulumi/kubernetes";
*
* const nginxIngress = new k8s.helm.v3.Chart("nginx-ingress", {
* chart: "nginx-ingress",
* version: "1.24.4",
* fetchOpts:{
* repo: "https://charts.helm.sh/stable",
* },
* transformations: [
* // Make every service private to the cluster, i.e., turn all services into ClusterIP instead of LoadBalancer.
* (obj: any, opts: pulumi.CustomResourceOptions) => {
* if (obj.kind === "Service" && obj.apiVersion === "v1") {
* if (obj.spec && obj.spec.type && obj.spec.type === "LoadBalancer") {
* obj.spec.type = "ClusterIP";
* }
* }
* },
*
* // Set a resource alias for a previous name.
* (obj: any, opts: pulumi.CustomResourceOptions) => {
* if (obj.kind === "Deployment") {
* opts.aliases = [{ name: "oldName" }]
 *             }
 *         },
*
* // Omit a resource from the Chart by transforming the specified resource definition to an empty List.
* (obj: any, opts: pulumi.CustomResourceOptions) => {
* if (obj.kind === "Pod" && obj.metadata.name === "test") {
* obj.apiVersion = "v1"
* obj.kind = "List"
 *             }
 *         },
* ],
* });
* ```
*/
export class Chart extends yaml.CollectionComponentResource {
/**
* Create an instance of the specified Helm chart.
* @param releaseName Name of the Chart (e.g., nginx-ingress).
* @param config Configuration options for the Chart.
* @param opts A bag of options that control this resource's behavior.
*/
constructor(
releaseName: string,
config: ChartOpts | LocalChartOpts,
opts?: pulumi.ComponentResourceOptions
) {
if (config.resourcePrefix !== undefined) {
releaseName = `${config.resourcePrefix}-${releaseName}`
}
const aliasOpts: pulumi.ComponentResourceOptions = {...opts, aliases: [{type:"kubernetes:helm.sh/v2:Chart"}]}
super("kubernetes:helm.sh/v3:Chart", releaseName, config, aliasOpts);
const allConfig = pulumi.output(config);
(<any>allConfig).isKnown.then((isKnown: boolean) => {
if (!isKnown) {
// Note that this can only happen during a preview.
pulumi.log.info("[Can't preview] all chart values must be known ahead of time to generate an " +
"accurate preview.", this);
}
});
this.resources = allConfig.apply(cfg => {
return this.parseChart(cfg, releaseName)
});
this.ready = this.resources.apply(m => Object.values(m));
}
parseChart(config: ChartOpts | LocalChartOpts, releaseName: string) {
const blob = {
...config,
releaseName,
toJSON() {
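                // Map the camelCase chart options onto the snake_case keys
                // expected by the "kubernetes:helm:template" invoke below.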
let obj: any = {};
for (const [key, value] of Object.entries(this)) {
if (value) {
switch(key) {
case "apiVersions": {
obj["api_versions"] = value;
break;
}
case "caFile": {
obj["ca_file"] = value;
break;
}
case "certFile": {
obj["cert_file"] = value;
break;
}
case "fetchOpts": {
obj["fetch_opts"] = value;
break;
}
case "includeTestHookResources": {
obj["include_test_hook_resources"] = value;
break;
}
case "skipCRDRendering": {
obj["skip_crd_rendering"] = value;
break;
}
case "releaseName": {
obj["release_name"] = value;
break;
}
case "resourcePrefix": {
obj["resource_prefix"] = value;
break;
}
case "untardir": {
obj["untar_dir"] = value;
break;
}
default: {
obj[key] = value;
}
}
}
}
return obj
}
}
const jsonOpts = JSON.stringify(blob)
const transformations: ((o: any, opts: pulumi.CustomResourceOptions) => void)[] = config.transformations ?? [];
if (config?.skipAwait) {
transformations.push(yaml.skipAwait);
}
// Rather than using the default provider for the following invoke call, use the version specified
// in package.json.
let invokeOpts: pulumi.InvokeOptions = { async: true, version: getVersion() };
const promise = pulumi.runtime.invoke("kubernetes:helm:template", {jsonOpts}, invokeOpts);
return pulumi.output(promise).apply<{[key: string]: pulumi.CustomResource}>(p => yaml.parse(
{
resourcePrefix: config.resourcePrefix,
objs: p.result,
transformations,
},
{ parent: this }
));
}
}
interface BaseChartOpts {
/**
* The optional kubernetes api versions used for Capabilities.APIVersions.
*/
apiVersions?: pulumi.Input<pulumi.Input<string>[]>;
/**
* By default, Helm resources with the `test`, `test-success`, and `test-failure` hooks are not installed. Set
* this flag to true to include these resources.
*/
includeTestHookResources?: boolean;
/**
* By default, CRDs are rendered along with Helm chart templates. Setting this to true will skip CRD rendering.
*/
skipCRDRendering?: boolean;
/**
* The optional namespace to install chart resources into.
*/
namespace?: pulumi.Input<string>;
/**
* Overrides for chart values.
*/
values?: pulumi.Inputs;
/**
* A set of transformations to apply to Kubernetes resource definitions before registering
* with engine.
*/
transformations?: ((o: any, opts: pulumi.CustomResourceOptions) => void)[];
/**
* An optional prefix for the auto-generated resource names.
* Example: A resource created with resourcePrefix="foo" would produce a resource named "foo-resourceName".
*/
resourcePrefix?: string
/**
* Skip await logic for all resources in this Chart. Resources will be marked ready as soon as they are created.
* Warning: This option should not be used if you have resources depending on Outputs from the Chart.
*/
skipAwait?: pulumi.Input<boolean>;
}
/**
* The set of arguments for constructing a Chart resource from a remote source.
*/
export interface ChartOpts extends BaseChartOpts {
/**
* The repository name of the chart to deploy.
* Example: "stable"
*/
repo?: pulumi.Input<string>;
/**
* The name of the chart to deploy. If [repo] is provided, this chart name will be prefixed by the repo name.
* Example: repo: "stable", chart: "nginx-ingress" -> "stable/nginx-ingress"
* Example: chart: "stable/nginx-ingress" -> "stable/nginx-ingress"
*/
chart: pulumi.Input<string>;
/**
* The version of the chart to deploy. If not provided, the latest version will be deployed.
*/
version?: pulumi.Input<string>;
/**
* Additional options to customize the fetching of the Helm chart.
*/
fetchOpts?: pulumi.Input<FetchOpts>;
}
function isChartOpts(o: any): o is ChartOpts {
return "chart" in o;
}
/**
* The set of arguments for constructing a Chart resource from a local source.
*/
export interface LocalChartOpts extends BaseChartOpts {
/**
* The path to the chart directory which contains the `Chart.yaml` file.
*/
path: string;
}
function isLocalChartOpts(o: any): o is LocalChartOpts {
return "path" in o;
}
/**
* Additional options to customize the fetching of the Helm chart.
*/
export interface FetchOpts {
/** Specific version of a chart. Without this, the latest version is fetched. */
version?: pulumi.Input<string>;
/** Verify certificates of HTTPS-enabled servers using this CA bundle. */
caFile?: pulumi.Input<string>;
/** Identify HTTPS client using this SSL certificate file. */
certFile?: pulumi.Input<string>;
/** Identify HTTPS client using this SSL key file. */
keyFile?: pulumi.Input<string>;
/**
* Location to write the chart. If this and tardir are specified, tardir is appended to this
* (default ".").
*/
destination?: pulumi.Input<string>;
/** Keyring containing public keys (default "/Users/alex/.gnupg/pubring.gpg"). */
keyring?: pulumi.Input<string>;
/** Chart repository password. */
password?: pulumi.Input<string>;
/** Chart repository url where to locate the requested chart. */
repo?: pulumi.Input<string>;
/**
* If untar is specified, this flag specifies the name of the directory into which the chart is
* expanded (default ".").
*/
untardir?: pulumi.Input<string>;
/** Chart repository username. */
username?: pulumi.Input<string>;
/** Location of your Helm config. Overrides $HELM_HOME (default "/Users/alex/.helm"). */
home?: pulumi.Input<string>;
/**
* Use development versions, too. Equivalent to version '>0.0.0-0'. If --version is set, this is
* ignored.
*/
devel?: pulumi.Input<boolean>;
/** Fetch the provenance file, but don't perform verification. */
prov?: pulumi.Input<boolean>;
/** If set to false, will leave the chart as a tarball after downloading. */
untar?: pulumi.Input<boolean>;
/** Verify the package against its signature. */
verify?: pulumi.Input<boolean>;
}
/**
* A phase which produces expressions from tokens. Usually the second phase in the pipeline.
*/
public class Parser {
/**
* The map of token types to prefix parselets.
*/
private final Map<Token.Type, PrefixParselet> prefixParselets = new HashMap<>();
/**
* The map of token types to infix parselets.
*/
private final Map<Token.Type, InfixParselet> infixParselets = new HashMap<>();
/**
* The lexer that provides input to the parser.
*/
private final Morpher lexer;
/**
* The token that was previously peeked, if any.
*/
private Token peeked;
/**
     * The token that was most recently read, if any.
*/
private Token lastRead;
/**
* Creates a new Parser and registers the default parselets.
*
* @param lexer The lexer to use as input.
*/
public Parser(Morpher lexer) {
this.lexer = lexer;
registerPrefix(NAME, new NameParselet());
registerPrefix(NUMBER, new NumberParselet());
registerPrefix(OPEN_PAREN, new ParenthesesParselet());
registerPrefix(OPEN_BRACE, new BlockParselet());
registerPrefix(TRUE, new BooleanParselet());
registerPrefix(FALSE, new BooleanParselet());
registerPrefix(STRING, new StringParselet());
registerPrefix(IF, new IfParselet());
registerPrefix(WHILE, new WhileParselet(false));
registerPrefix(DO, new WhileParselet(true));
registerPrefix(NULL, new NullParselet());
registerPrefix(OPEN_BRACKET, new ListParselet());
registerPrefix(CLASS, new ClassParselet());
registerPrefix(ANNOTATION, new AnnotationParselet());
registerPrefix(IMPORT, new ImportParselet());
registerPrefix(TRY, new TryCatchParselet());
registerPrefix(OPEN_MAP_BRACE, new MapParselet());
registerPrefix(CHAR, new CharParselet());
registerPrefix(VAR, new VarParselet());
prefix(PLUS, MINUS, TILDE, BANG);
infix(PLUS, Precedence.SUM);
infix(MINUS, Precedence.SUM);
infix(TIMES, Precedence.PRODUCT);
infix(DIVIDE, Precedence.PRODUCT);
infix(POW, Precedence.EXPONENT);
infix(EQEQ, Precedence.EQUALITY);
infix(NEQ, Precedence.EQUALITY);
infix(LT, Precedence.COMPARISON);
infix(LTE, Precedence.COMPARISON);
infix(GT, Precedence.COMPARISON);
infix(GTE, Precedence.COMPARISON);
infix(ANDAND, Precedence.LOGICAL);
infix(OROR, Precedence.LOGICAL);
infix(IS, Precedence.EQUALITY);
infix(ELVIS, Precedence.LOGICAL);
registerInfix(OPEN_PAREN, new CallParselet());
registerInfix(EQ, new AssignmentParselet());
registerInfix(OPEN_BRACKET, new IndexParselet());
registerInfix(DOT, new MemberAccessParselet());
registerInfix(INTERRODOT, new MemberAccessParselet());
registerInfix(ARROW, new MiniFunctionParselet());
registerInfix(PLUS_EQ, new BinaryMutatorParselet(Token.Type.PLUS));
registerInfix(MINUS_EQ, new BinaryMutatorParselet(Token.Type.MINUS));
registerInfix(TIMES_EQ, new BinaryMutatorParselet(Token.Type.TIMES));
registerInfix(DIVIDE_EQ, new BinaryMutatorParselet(Token.Type.DIVIDE));
registerInfix(COLON, new TypePatternParselet());
registerInfix(MATCH, new MatchParselet());
postfix(BANG);
}
/**
* Parses one expression from the input and returns it.
*
* @param token The token input to parse.
* @param precedence The starting precedence.
* @return An expression, or null if the end of the stream is reached.
*/
public Expr next(Token token, int precedence) {
while (token != null && token.is(LINE)) {
token = read();
}
if (token == null)
return null;
PrefixParselet prefix = prefixParselets.get(token.getType());
if (prefix == null)
throw new ParseException("unexpected " + token.getType().getName(), token);
Expr left = prefix.parse(this, token);
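        // Precedence climbing: keep folding infix expressions into `left` while
        // the next operator binds more tightly than the current precedence.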
while (precedence < getPrecedence()) {
token = read();
if (token == null || token.is(LINE))
break;
InfixParselet infix = infixParselets.get(token.getType());
left = infix.parse(this, left, token);
}
return left;
}
/**
* Parses one expression from the input and returns it.
*
* @param token The token input to parse.
* @return An expression, or null if the end of the stream is reached.
*/
public Expr next(Token token) {
return next(token, 0);
}
/**
* Parses one expression from the input and returns it.
*
* @param precedence The starting precedence.
* @return An expression, or null if the end of the stream is reached.
*/
public Expr next(int precedence) {
return next(read(), precedence);
}
/**
* Parses one expression from the input and returns it.
*
* @return An expression, or null if the end of the stream is reached.
*/
public Expr next() {
return next(read());
}
/**
* Returns the precedence of the current token.
*/
private int getPrecedence() {
Token peeked = peek();
if (peeked == null)
return 0;
InfixParselet parser = infixParselets.get(peeked.getType());
if (parser != null) return parser.getPrecedence();
return 0;
}
/**
* Peeks at the next token in the stream and returns it. Does not consume any tokens.
*/
public Token peek() {
if (peeked != null)
return peeked;
return peeked = read();
}
/**
* Registers a prefix parselet for a token type.
*
* @param token The token type.
* @param parselet The parselet.
*/
public void registerPrefix(Token.Type token, PrefixParselet parselet) {
prefixParselets.put(token, parselet);
}
/**
* Registers a prefix operator parselet for a token type.
*
* @param token The token type.
*/
public void prefix(Token.Type token) {
registerPrefix(token, new PrefixOperatorParselet());
}
/**
* Registers a prefix operator parselet for some token types.
*
* @param tokens The token types.
*/
public void prefix(Token.Type... tokens) {
for (Token.Type token : tokens) {
prefix(token);
}
}
/**
* Registers an infix parselet for a token type.
*
* @param token The token type.
* @param parselet The parselet.
*/
public void registerInfix(Token.Type token, InfixParselet parselet) {
infixParselets.put(token, parselet);
}
/**
* Registers an infix operator parselet for a token type.
*
* @param token The token type.
* @param precedence The precedence of the operator.
*/
public void infix(Token.Type token, int precedence) {
registerInfix(token, new BinaryOperatorParselet(precedence));
}
/**
* Registers a postfix operator parselet for a token type.
*
* @param token The token type.
*/
public void postfix(Token.Type token) {
registerInfix(token, new PostfixOperatorParselet());
}
/**
* Registers a postfix operator parselet for some token types.
*
* @param tokens The token types.
*/
public void postfix(Token.Type... tokens) {
for (Token.Type token : tokens) {
postfix(token);
}
}
/**
* Reads a single token from the input (or, if a token was peeked, that token) and returns it.
*
* @return The read token, or null if the end of the input has been reached.
*/
public Token read() {
if (peeked != null) {
Token peeked = this.peeked;
this.peeked = null;
return lastRead = peeked;
} else {
return lastRead = lexer.next();
}
}
/**
* Reads a single token. If its type is not equal to the given type, throws an exception.
*
* @param type The type to compare.
* @throws me.abje.lingua.parser.ParseException If the type of the read token is different from the given type.
*/
public void expect(Token.Type type) {
Token read = null;
if (peek() == null || !(read = read()).is(type)) {
if (read != null)
throw new ParseException("expected " + type, read);
else if (lastRead != null)
throw new ParseException("expected " + type, lastRead);
else
throw new ParseException("expected " + type, "", 1);
}
}
/**
* Skips over as many {@link me.abje.lingua.lexer.Token.Type#LINE} tokens as possible.
*/
public void eatLines() {
while (peek() != null && peek().is(LINE)) {
read();
}
}
/**
* Peeks a single token. If its type is equal to the given type, consumes it and returns true.
* Otherwise, returns false.
*
* @param type The type to compare.
* @return True if the peeked token has type <code>type</code>.
*/
public boolean match(Token.Type type) {
if (peek() == null) {
return false;
        } else if (peek().is(type)) {
read();
return true;
} else {
return false;
}
}
}
Absence of autoimmune serological reactions in chronic non A, non B viral hepatitis. In 18 cases of chronic liver disease due to non-A, non-B hepatitis virus(es) in which the diagnosis was established by transmission, including chimpanzee inoculation in nine, sera were tested for the autoantibodies characteristically associated with autoimmune chronic active hepatitis. The frequency of autoantibodies to nuclear, smooth muscle, cytofilament, mitochondrial and liver membrane antigens was low, being not greater than that recorded for a normal population, and the few positive reactions obtained were at very low titre. These findings suggest that among cases of 'HBsAg negative' chronic hepatitis, those due to NANB infection are distinguishable from those due to autoimmune chronic hepatitis by negative serological tests for autoantibodies. |
Unsealed Remington Documents Posted by Public Justice Show Defective Triggers in Millions of Rifles Could Fire on Their Own
Gun Owners Need to Claim Free Trigger Replacements ASAP
By Arthur Bryant
Chairman
Public Justice today made over 118,000 previously-sealed Remington documents available to the public on a new website, www.remingtondocuments.com. The documents show the company knew for decades the trigger in the Remington Model 700—the most popular bolt-action rifle in America—and a dozen other Remington models could fire when no one pulled it. Remington denied that fact (and still denies it), hid the truth, and kept selling the rifles. As a result, hundreds of people were maimed or killed—and millions are still at risk.
In December 2015, CNBC published an investigative report and aired a one-hour special, Remington Under Fire: The Reckoning, based in part on some of these documents. Public Justice’s new Remington Rifle Trigger Defect Documents website is making them public so people who own these rifles can protect themselves, their loved ones, and others.
Over 7.5 million Remington 700 and other rifles with this defective trigger are now in gun owners’ hands. A proposed settlement in Pollard v. Remington Arms, a national class action in federal court in Kansas City, MO, would provide free trigger replacements to all owners of Remington Model 700, Seven, Sportsman 78, 673, 710, 715, and 770 rifles who file claims. Everyone who owns one or more of these rifles should stop using them and submit a claim for each rifle.
Details about the proposed settlement are provided on Public Justice’s new Remington Rifle Trigger Defect Documents website and on the proposed settlement web site, http://remingtonfirearmsclassactionsettlement.com. To submit a claim, go here or here.
Proposed class members should file claims as soon as possible. They have until November 18, 2016, to opt out of the proposed settlement or object to it. A hearing on whether to approve the proposed settlement is scheduled for February 15, 2017.
The proposed settlement does not provide a free trigger replacement for Remington Model 600, 660, or XP-100 rifles, which were recalled in 1979. Their triggers can still be repaired for free. Everyone who owns one or more of these rifles should stop using them, get them repaired for free, and consider filing a claim for the compensation the proposed settlement provides. Go here for the Model 600 and 660 recall info and here for the XP-100 recall info.
The proposed settlement also does not provide a free trigger replacement for Remington Model 721, 722, and 725 rifles, which have the same defective trigger, too. Everyone who owns one or more of these rifles should stop using them (unless you get the defective trigger fixed) and consider filing a claim for the compensation the settlement provides.
Public Justice won public access to the documents it released today—and all of the documents in all lawsuits ever filed against Remington over these defective triggers—with the help of the plaintiffs’ lawyers in Pollard v. Remington Arms. To see the letters agreeing to public access, click here.
The documents were sought by Public Justice, in part, so Richard Barber of Montana—an NRA member and avid sportsman whose 9-year-old son, Gus Barber, was shot and killed when a Remington 700 fired without a trigger pull in 2000—could avoid Remington’s threat to sue him for contempt of court if he disclosed what he knew about the trigger’s defects. For more details on Richard Barber and Public Justice’s work to unseal the documents, click here.
Based on the documents, CNBC then published and broadcast Remington Under Fire: The Reckoning, and several related follow-up pieces. To see CNBC’s coverage, click here.
Public Justice’s new website includes PowerPoints and Timelines highlighting key documents and exposing Remington’s willingness to endanger its customers, their friends, and families to maximize profits. They reveal what Remington knew and what the company did—and didn’t—do, including decisions not to recall the rifles because it would cost too much and to destroy test results. They shine a light on the company’s response to customer complaints, triggers tests that failed, and Remington’s efforts to mislead its customers, the press, and the public.
The PowerPoints and Timelines were provided by Timothy Monsees of Monsees & Mayer, PC, attorneys experienced in representing people injured by the Remington 700 and other rifles with the defective trigger. Elijah Ltd. designed and is hosting the website.
Public Justice was not involved in negotiating and has taken no position on the proposed settlement in Pollard v. Remington Arms. We believe strongly, however, that, to the extent that the proposed settlement leads to the replacement of the defective triggers in these rifles – or stops these rifles from being used – it will have performed an important public service.
If you own one of these rifles or know someone who does, please visit Public Justice’s new Remington Rifle Trigger Defect Documents website and take action immediately. |
A flier for UCB’s student-run course “Palestine: A Settler Analysis” features anti-Israel maps. Photo: Facebook.
An upcoming “anti-Zionism” course at the University of California, Berkeley will contribute to greater hostility toward Jewish and pro-Israel students on campus, the head of a watchdog group and an Israel advocate told The Algemeiner on Thursday.
“This is clear eliminationist anti-Zionism, which is not just criticism of Israel, but opposition to the existence of the Jewish state with efforts to eliminate that state,” she said.
Lily Greenberg Call, a Jewish UC Berkeley student who serves as a CAMERA Fellow and is the co-vice president of the campus group Bears for Israel, told The Algemeiner that she became “very upset” after seeing posters advertising the class.
“It’s one thing to have a political group like Students for Justice in Palestine (SJP) on campus, but teaching material that is so biased and factually inaccurate in a classroom setting violates academic integrity,” she said.
…[E]xamine key historical developments that have taken place in Palestine, from the 1880s to the present, through the lens of settler colonialism…we will explore the connection between Zionism and settler colonialism, and the ways in which it has manifested, and continues to manifest, in Palestine. Lastly, drawing upon literature on decolonization, we will explore the possibilities of a decolonized Palestine, one in which justice is realized for all its peoples and equality is not only espoused, but practiced.
The faculty sponsor of the course, Dr. Hatem Bazian, is the co-founder of SJP and a major supporter of the US Campaign for the Academic & Cultural Boycott of Israel (USACBI). Bazian is a former fundraising speaker for the anti-Israel organization KindHearts, which was shut down by the US government in 2006 for its alleged ties to Hamas.
In his promotion of the course on Facebook, Hadweh wrote he will take students on an “in-depth” exploration of “the history and present of Zionist settler colonialism in Palestine.” The post was accompanied by an image featuring infamous anti-Israel maps that have been decried as distorting history.
Reading material includes selections from works by anti-Zionist Israeli historian Ilan Pappe; the late Edward Said, a fierce Israel critic; and Saree Makdisi, an advocate of the elimination of Israel as a Jewish state. Also, there are testimonies from the controversial and widely debunked Israeli group Breaking the Silence.
According to a recent AMCHA report, which studied antisemitism across US college campuses, the presence of three specific factors — anti-Zionist student groups; faculty who support boycotts of Israel; and pro-BDS activity — are strong predictors of anti-Jewish hostility.
Rossman-Benjamin told The Algemeiner that “with a course like ‘Palestine: A Settler Colonial Analysis,’ even though the expression takes place in a classroom, it creates an overall hostile climate for Jewish students” that she believes can lead to violent anti-Jewish activity, including assault, suppression of rights and discrimination.
“Just because the classroom door closes, it doesn’t mean that students are not influenced,” she said.
I want to state without reservation or equivocation that this university is committed to fostering and sustaining a campus climate where every individual feels safe, welcome and respected.
In a recent survey, 75 percent of our Jewish students reported feeling comfortable on this campus — a number that is identical to the campus average and two points higher than comfort levels reported by students with a Christian affiliation.
However, we believe we can do better still, not just for Jewish students, but for all of our students. That is but one of the reasons we recently took the unprecedented step of forming Chancellor’s Advisory Committee on Jewish Student Life and Campus Climate, a group that includes students, faculty, staff and leading members of the Bay Area’s Jewish community who are joining us in this important effort.
Berkeley also takes great pride in our new kosher dining facility; vibrant Hillel chapter; the broad range of other Jewish student groups; the Institute for Jewish Law and Israeli Law at the Berkeley law school; our library’s Magnes Collection of Jewish Art and Life; and our world-class Center for Jewish Studies.
Suffice it to say that we will continue to confront intolerance and bias, and we are in full support of the Regents’ recently issued Principles Against Intolerance.
Hadweh did not respond to The Algemeiner’s request for comment by press time. |
DOWN-on-her-luck "Occupy Wall Street" protester Tracy Postert spent 15 days washing sidewalks and making sandwiches at New York's Zuccotti Park - then landed a dream job at a Financial District investment firm thanks to a high-powered passer-by who offered her work.
The Upper West Sider, who has a PhD in biomedical science specializing in pharmacology, was unemployed and had all but given up on finding work when she joined the movement in October.
She held signs that read, "Reagan sucks," and, "I'll vote after the revolution."
But she still needed to get a job, so she made a new sign that read, "PhD Biomedical Scientist seeking fulltime employment," and on the back, "Ask me for my resume."
It caught the eye of Wayne Kaufman, chief market analyst for John Thomas Financial Brokerage. The exec was not looking to hire, but he took Postert's resume anyway.
The next day, Kaufman, impressed by her CV, sent her an email asking if she would like to come for an interview two blocks from Zuccotti Park at 14 Wall St.
"I had been unemployed for so long, I thought why not?" Postert said, adding that she is in her 30s and has no background in finance or business.
Kaufman offered her a job as a junior analyst evaluating medical companies as potential investments.
Postert said the decision to accept was painful but she has now just completed her third week as a Wall Street worker and she is already studying for her exams to be a certified financial analyst.
CEO Thomas Belesis said he believes Postert will be a great asset.
"She was ranting about Wall Street, and now she's working on Wall Street. Banks are not so bad. I hope we have opened her eyes," he said. |
Preparation for WLCG production from a Tier-1 viewpoint
The GRIDPP Tier-1 Centre at RAL is one of 10 Tier-1 centres worldwide preparing for the start of LHC data taking in late 2007. The RAL Tier-1 is expected to provide a reliable grid-based computing service running thousands of simultaneous batch jobs with access to a multi-petabyte CASTOR-managed disk storage pool and tape silo, and will support the ATLAS, CMS and LHCb experiments as well as many other experiments already taking or analysing data. The RAL Tier-1 is already well advanced towards readiness for LHC data-taking. We describe some of the reliability and performance issues encountered with various generations of storage hardware in use at RAL and how the problems were addressed. We also describe the networking challenges for shipping large volumes of data into and out of the Tier-1 storage systems, and system to system within the Tier-1, and the changes made to accommodate the expected data volumes. We describe the scalability and reliability issues encountered with the grid services and the various strategies used to minimise the impact of problems, including multiplying the number of service hosts, splitting services across a number of hosts, and upgrading services to more resilient hardware.
The RAL Tier-1
The GRIDPP Tier-1 at RAL is the UK Tier-1 for the Worldwide LHC Computing Grid, supporting ATLAS, CMS, and LHCb, as well as existing HEP experiments. The facility currently has around 800TB of usable disk storage space, approximately 1200 CPU cores running in the batch system, and a tape silo with a 10,000-tape capacity. The latter has approximately 18 T10000 tape drives and 10 9940B tape drives. There are approximately 100 systems running grid services of various sorts, as well as the usual monitoring nodes and system services such as mail, authentication etc. The computing hardware is generally rack-mounted to save space, and has network-accessible power controllers. The hardware has been procured in stages, with major purchases each year adding to the batch and disk storage capacity, and to the tape drive and tape inventory for the tape silo. The planned lifetime of the CPU and disk hardware is 5 years, of which 3 years is under full warranty and maintenance; the fourth year carries no maintenance, with an expected mortality of 10%, and the fifth year no maintenance and progressive decommissioning around the end of the fifth year. Processing and disk storage capacity is dominated by the most recent procurements, which totalled 516TB usable disk space and 550,000 SPECint_base2000 processing power. A typical batch worker is a 1U rack-mounted unit and has two CPU chips with single or dual CPU cores, 1GB RAM per CPU core, a system disk with 50GB capacity plus 50GB per CPU core, and dual 1Gb/s Ethernet ports. The storage servers provide between 1.6TB and 9.5TB usable space per system.
Hardware reliability
The Tier-1 operates the majority of its services on commodity hardware. The service nodes have tended to be re-tasked batch workers which do not have any of the usual additions to harden the services against failure: dual hot-swap capable disks and power supplies. The storage hardware has been mainly commodity solutions with some additional hardware for reliability: hot-swap fans and power supplies, and in most cases hardware or software RAID system disks. The data storage arrays themselves are SCSI-attached IDE and SATA disk arrays with RAID5 and, more recently, direct-attached SATA arrays with RAID5 or RAID6 capability.
2.1. Storage reliability
Annual disk failure rates have averaged between 2% and 3%, depending on the generation of storage hardware, with no generation significantly different from another. Commissioning issues aside, all the technologies have proved reasonably reliable but have highlighted two big issues.
2.1.1. Interconnects. With the SCSI-attached generations, we have observed that thermal cycling of the hardware due both to daily temperature changes and to air conditioning events can cause problems with the interconnect cables. It appears that regular thermal cycling in a machine room where the ceiling is directly under the flat roof of the building is sufficient to work cables loose enough to cause signal issues on the interconnect, and cause the SCSI transport layer in the software to observe various errors. This usually causes the array to be dropped offline, but has on several occasions caused data corruption when the system persisted in trying to write data to the array. Most cases are solved by completely detaching the cables, cleaning the contacts and reattaching them. Where the array has been dropped offline, it has usually been possible to recover the data with a simple file system check (fsck), but in some cases, files have been discovered to be corrupted. In one or two cases, significant data corruption has occurred, necessitating recovery from tape where possible.
2.1.2. Multi-disk failures. The second issue is the recovery or rebuild time for the hardware RAID arrays. For RAID5 arrays, a single disk failure will trigger a rebuild onto a hot spare disk. While this is happening, the array is in a degraded state and is vulnerable to any problem with the remaining disks or the hot spare, but the array will continue to operate and read or write data. Rebuild times vary depending on the system tuning and I/O load - when the system is under high I/O load the controller cannot progress the array rebuild quite as fast as it can when under no load. This increases the vulnerability window. The size of the array also affects rebuild times - bigger arrays with more and larger disks take longer to rebuild, increasing the vulnerable period, and the probability of a second disk failure. If a second disk does experience an error before the array is rebuilt, the controller may be able to recover depending on the issue, but it may not, causing potential data loss. RAID6 helps with this issue in that there are two sets of redundancy information in the array. This means that if one disk fails, although a rebuild may not immediately start if there is no hot spare, the array is still operating with a complete parity set intact (essentially in RAID5 mode) and can suffer a second disk failure without loss of data. The Tier-1 now procures storage systems requiring RAID6 or multiple redundancy capability to reduce the likelihood of a dual disk failure leading to data loss. The added cost is small and the saving in staff effort recovering damaged data systems is significant.
Service hardware
In general, the service hardware has been reliable, with the expected complement of disk and RAM failures depending on the age of the hardware. However, running grid services which require maximum uptime and the ability to survive system issues on re-tasked batch workers has met with mixed success. In most cases the hardware copes with the loads, but cannot cope with disk, RAM and PSU failures.
Various strategies can be employed to guard against service failures, ranging from backups, fast reinstallations or re-instancing, multiple system disks under software or hardware RAID, and redundant power supplies, right up to expensive fully redundant hardware and, more recently, virtualization. To give some added robustness to the services, we have elected to fit selected sets of service nodes with additional disks and employ software RAID configurations and backup strategies where needed to enable systems to survive and maintain availability. This has worked very well, with several hosts maintaining service availability after a disk failure during silent hours, due to their RAID1 system disk pairs. Service interventions for these systems can then be planned and announced in advance. An added advantage in some cases is that of increased I/O speed, particularly for reading data, increasing the performance of the service. More recently, services such as the 3D database project have made use of systems with not only built-in hardware redundancy but also Oracle Real Application Clusters (RAC) technology, enabling both the hardware and the database itself to be more robust against failures. In future, we recognise that the reliability and availability requirements of the hardware for the Grid services will require not just hardware with redundant features but also much more powerful hardware to run the increasingly demanding Grid software.
Grid Services Performance
The RAL Tier-1 runs a wide range of Grid services, ranging from Compute Elements (CEs) and Resource Brokers (RBs) to Proxy servers, Local File Catalogues, a File Transfer Service (FTS) and the R-GMA central service host. In the early days of operation, a single reasonably powered host system was sufficient to run each service. In some cases, a single host was able to share two or more services; for example, the local BDII service at RAL was co-hosted with the CE. As the processing and data transfer loads on the WLCG service have risen, and the complexity of the services has increased, single instances of many types of service node have proved inadequate to provide a reliable service, and shared hosting of services on a single host has become very difficult.
3.1. The RAL Compute Element
As mentioned previously, the CE host at RAL originally hosted the CE and the local BDII. As the number of grid jobs arriving at the Tier-1 increased, the performance monitoring of the host began to show that the host itself (a dual-chip Xeon system) was overloaded and unable to keep up with the service requirements, and would eventually grind itself to a standstill, with little or no response to service requests and no response on its console. The first move was to implement the local BDII service on a new host to remove the load it represented, and to safeguard that part of the local grid service against issues on the CE host. This had a marginal effect at best and it soon became clear that the CE was still underpowered. The main effect of the move was to increase the site reliability in response to information system requests to the local BDII. Some of the load on the CE is generated by considerable local I/O to the disk, and the local disk space available was proving insufficient to meet the needs of the Grid job loading.
The host type concerned had in the past exhibited limited performance of the (IDE) disk under load conditions, so the CE was transplanted to a slightly more powerful machine (another dual-chip Xeon system) with a different main board chipset, more memory and a bigger disk. This made a small difference at first, by reducing swapping and speeding up I/O, but the system ultimately proved incapable of handling the load, so a second move was made to a dual-chip system with dual-core (Opteron) CPUs and a faster (SATA) disk. This has proved adequate to the task so far, but we now recognize that multiple CEs targeted at specific experiment communities are the way to reduce the load on individual systems and increase the overall facility reliability.
3.2. The UK BDII
The Tier-1 at RAL has run the UK-wide BDII service since the first releases of the WLCG software stack. The UK BDII provides the information service with data about which resources in the UK are available and was used by the WLCG Site Availability Monitor (SAM) tests and their successors to direct the availability testing jobs. Until mid-2006, the single service host proved adequate for the task, but the increasing number of queries from more systems in the UK using it as their information service started to have a detrimental effect not only on the host itself but also on the availability of the whole WLCG service in the UK. If the UK BDII fails to respond, then the availability monitors do not know which resources to test for availability. A two-pronged solution was deployed to increase both the reliability and the response time of the BDII. Firstly, two new identical BDII hosts running on slightly faster hardware were prepared. Then the service was transferred to one of the new hosts using the existing host name as an alias. The old host was decommissioned after queries stopped being made against it, and the second new host was added to the first as a DNS round-robin pair with a short time-to-live set in the DNS. Thus queries would go to whichever of the two hosts was named in the DNS response. The load of queries balanced out roughly evenly using this strategy, but we noted that although the level of dropped or failed queries was now very low and the load on each host was not significant, a small number of queries were still being dropped. We therefore added a third (identical) host to the DNS round-robin set for the BDII, which reduced the query failure rate to zero. The advantage gained from having three hosts providing the BDII service is that if one host fails, only one third of the queries will fail. While this is not ideal, it does allow some time to recover the failed host or provide a new host to replace the failed one - automatic deployment of BDII host configurations is quite quick. In the future, developments at RAL will allow the Tier-1 team to modify the DNS directly when such failures occur, taking the failed host out of the DNS.
Resource Brokers
The Resource Broker service originally comprised a single host, migrating from host to host as the initial versions of the LCG software stack were released. It was evident over time that the RB was one of the more resource-hungry services, so when the LCG-2 stack was released, the RB service was installed on a dual-chip Xeon system, which performed well during initial production service running. However, the increased load of grid jobs began to cause resource issues in the host, manifesting in very high load averages as the system tried to keep up with the transit of jobs.
In addition, limits in the MySQL database structures meant that the RB databases need to be regularly purged of old job records, causing service down-time. VO groups were also raising reliability and availability issues. To attempt to alleviate these problems, a second and later a third RB were added to the service. These were targeted at specific VOs and enabled maintenance on the RB databases to take place without stopping all transactions.
Batch Scheduler
The batch system and local job scheduler used successfully at the Tier-1 before the grid was OpenPBS, firstly with its own FIFO scheduler, then later with tweaks for job priority reordering, and most recently with the Maui scheduler. The batch system software has now been migrated to Torque but still uses the Maui scheduler. Various hard limits within the Maui code have been patched to enable the larger number of jobs, queues and classes required for the Tier-1 setup. The batch and local job scheduling systems run together on the same host, which has software RAID1 system disks to provide added protection against disk failure.
CASTOR
With the purchase of a new tape silo (an STK SL8500), a decision was taken to migrate the tape management system from the home-grown ADS system to CASTOR2 from CERN. The existing system is to be run in parallel for older data, but for the WLCG data storage CASTOR2 would be used exclusively. There have been a number of initial problems with the CASTOR software, notably in the area of reliability and stability. A number of issues were encountered for which fixes were sought from the CASTOR developers at CERN, but the rapid development and issue of newer versions meant that a production service on an older version was less likely to be fixed, the development team electing to provide fixes in later versions. This continued for some months until agreement was reached between CERN, RAL and other interested parties, establishing a framework under which controlled migration from production version to production version is supported, while at the same time providing for development versions to be tested in near-production instances prior to deployment. This has increased the confidence in more recent versions of CASTOR and stabilised the production instances at the Tier-1.
Networking Issues
The internal network within the Tier-1 was based on a small number of gigabit Ethernet switches providing fast access to data resources, and a larger number of 100Mb/s Ethernet switches providing general access to batch workers. The switches were all interconnected using gigabit Ethernet, with a single gigabit Ethernet switch acting as a central 'hub', which also hosted several of the tape store servers and heavily accessed servers such as the home file system. The Tier-1 had a 1Gb/s link to the site backbone and from there relied on the site link to the UK wide area network (SuperJANET) for data transfer to other sites, competing with other site traffic. In addition, there was (and is) considerable data transfer from the rest of the RAL site into the Tier-1 subnet to store data on the tape silo. It was quickly clear from the published data transport requirements for the experiments that 100Mb/s connections for batch servers were not adequate, and that single gigabit Ethernet backbone links would not be adequate either. The published requirements for data import to the RAL Tier-1 also exceeded the available RAL site link to the WAN. Thus a program of improvements and upgrades to link capacities was started.
To cope with the expected data import rates of 150MB/s continuous and 400MB/s for short periods, a new link was developed, which became part of the LHC Optical Private Network (the OPN). This provided connectivity direct from CERN to RAL via two paired 1Gb/s links, running in the UK over the UKLight development network circuits. Progressive upgrades took place over several months to bring the capacity to four 1Gb/s aggregated circuits, and then to a 10Gb/s circuit running over dedicated fibre on the Thames Valley Network, the local WAN. The RAL Tier-1 is unique in that the four associated Tier-2s in the UK are in fact each a federation of sites. Each Tier-2 supports its own set of experiments, but not every site within a federated Tier-2 supports the same experiments, nor do the sites have identical storage resources or network connectivity. Since the data transfer requirements at the Tier-1 for data to and from the Tier-2s vary from site to site, each site may be treated as a separate Tier-2. Thus the RAL Tier-1 can expect to be importing data from and exporting data to as many as 21 Tier-2 sites in the UK, and several outside the UK. Since Tier-1 to Tier-2 data traffic must pass over SuperJANET, the RAL site link with its firewall would be a considerable bottleneck. As the SuperJANET backbone network in the UK was being upgraded during this period, the RAL site connection was upgraded from a single 1Gb/s link connected to the TVN to redundant 10Gb/s links connected directly to the new SuperJANET5 (SJ5) backbone. To match the changes in the external links, the Tier-1 internal network links were improved. Starting in late 2005, a major move began to stacks of commodity gigabit Ethernet switches with very high-speed intra-stack backbone interconnects, providing all capable systems with 1Gb/s connectivity. The inter-stack links were formed using four 1Gb/s links aggregated as trunks. As the OPN link speed increased from 2Gb/s to 4Gb/s, the link from the Tier-1 to the OPN end-point at RAL was also increased in bandwidth to match. To alleviate the bandwidth restriction on the link to the RAL backbone, the link was doubled to form a trunked pair at 2Gb/s. The Tier-1 participated in the LCG Service Challenges during the period of these updates. The increasing complexity of the WAN and LAN interlinks highlighted issues with trunked links. It became apparent from network link monitoring that traffic on the WAN links as far as the access routers was balanced across the circuits. However, the traffic to the WAN from the LAN and the traffic within the LAN were not being balanced across the aggregated links. The switch stacks were not balancing the data traffic, but were apportioning the various streams based on both source and destination addresses. Data transfer performance was compromised by the pre-existing non-random allocation of network addresses to the data servers, causing one or two links in a trunk set to be fully used and the others under-utilized. It became clear that the use of trunked links within the Tier-1 was not a long-term option, and the existing medium-term plan to upgrade the internal backbone links to 10Gb/s was implemented by adding 10Gb/s capability to the individual stacks. The improvement was immediate, allowing other data server performance issues to be detected and analysed without the restrictions of bandwidth limits on the backbones.
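The imbalance on the aggregated links can be illustrated with the small sketch below. The hash used here (XOR of the last address octets, modulo the number of links in the trunk) is a simplified stand-in for whatever vendor-specific function the switch stacks actually applied, and the address plan is invented; the point is only that a non-random allocation of server and gateway addresses can map almost every stream onto the same member link.

// Simplified illustration of why aggregated (trunked) links carried uneven
// traffic: each stream is pinned to one member link by hashing source and
// destination addresses, so a non-random address plan can leave most
// streams on the same link. The 2-bit XOR hash and the address plan below
// are inventions for illustration, not the switches' real algorithm.
public class TrunkHashSketch {
    public static void main(String[] args) {
        int linksInTrunk = 4;
        int[] streamsPerLink = new int[linksInTrunk];

        // Assume disk servers were allocated every fourth address (.16, .20, ...)
        // and most WAN traffic went to two gateway addresses (.1 and .5).
        int[] diskServers = {16, 20, 24, 28, 32, 36, 40, 44};
        int[] gateways = {1, 5};

        for (int src : diskServers) {
            for (int dst : gateways) {
                int link = (src ^ dst) % linksInTrunk;   // per-stream member-link choice
                streamsPerLink[link]++;
            }
        }

        for (int link = 0; link < linksInTrunk; link++) {
            System.out.printf("trunk member %d carries %d of %d streams%n",
                    link, streamsPerLink[link], diskServers.length * gateways.length);
        }
        // Every stream hashes to the same member link here, which mirrors the
        // observation that one or two links were saturated while the others idled.
    }
}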
The initial 'daisy-chain' of 10Gb/s links has been upgraded to a star formation with a central hub built of commodity stackable switches, the same units used to provide the 1Gb/s switch stacks with 10Gb/s uplink capability. We use the Cacti tool to monitor the performance of the network, which provides long-term traffic pattern data. So far, data transfer rates within the Tier-1 and on the links to the WANs have been well within the capability of the backbones. However, we are now considering whether the topology of the network may have to be revised to split the stacks handling the data servers into smaller stacks, each with its own 10Gb/s uplink, or to double the uplinks to the bigger stacks, to ensure that data transfer traffic will not be limited.
Conclusion
The RAL Tier-1 has made good progress towards readiness for LHC data processing. Important lessons have been learnt about the types of hardware needed for running services, and the various methods of distributing services or service instances across multiple hosts. Elements of the software stack, particularly CASTOR, are now more stable and reliability is increasing. It remains for storage system tuning work to be finished so that optimum performance can be attained for the various data transfer streams. |
Improving the performance of deadlock recovery based routing in irregular mesh NoCs using added mesh-like links Heterogeneity is one of the challenges in current NoC design which forces designers to consider irregular topologies. Therefore, finding an optimal topology with minimum cost (minimum use of links, buffers, NIs, etc) and power consumption, and maximum flexibility can provide the best cost-performance trade-off. Irregular mesh is a topology which combines the benefits of regularity with the advantages of irregularity. Routing algorithms, especially those coupled with wormhole switching, should deal with deadlock occurrences. Unlike deadlock avoidance-based schemes, deadlock detection and recovery-based routing schemes do not restrict routing adaptability. In this paper, we modify the irregular mesh architecture and add some extra mesh-like links to improve its performance using deadlock recovery routing. We evaluate the performance under three well-known deadlock recovery routing algorithms and different traffic patterns before and after the link insertion. Simulation results show the proposed method can noticeably reduce the number of detected deadlocks, average packet latency, routing table size at each node, and energy consumption. |
// KidsTC/KidsTC/Business/Main/Strategy/OldStrategy/Strategy/M/ParentingStrategyListItemModel.h
//
// ParentingStrategyListItemModel.h
// KidsTC
//
// Created by 钱烨 on 7/23/15.
// Copyright (c) 2015 KidsTC. All rights reserved.
//
#import <Foundation/Foundation.h>
#import <UIKit/UIKit.h> // provides CGFloat used below; may already be supplied by the project's prefix header
@interface ParentingStrategyListItemModel : NSObject
@property (nonatomic, copy) NSString *identifier;
@property (nonatomic, strong) NSURL *imageUrl;
@property (nonatomic, copy) NSString *title;
@property (nonatomic, strong) NSURL *editorFaceImageUrl;
@property (nonatomic, copy) NSString *editorName;
@property (nonatomic, assign) NSUInteger viewCount;
@property (nonatomic, assign) NSUInteger commentCount;
@property (nonatomic, assign) BOOL isHot;
@property (nonatomic, assign) BOOL isRecommend;
@property (nonatomic, assign) NSUInteger likeCount;
@property (nonatomic, copy) NSString *brief;
@property (nonatomic, assign) CGFloat imageRatio;
- (instancetype)initWithRawData:(NSDictionary *)data;
- (CGFloat)cellHeight;
@end
|
import { GenesisBlock, AccountSchema, AccountDefaultProps } from '../types';
export declare const readGenesisBlockJSON: <T = AccountDefaultProps>(genesisBlockJSON: Record<string, unknown>, accountSchemas: {
[name: string]: AccountSchema;
}) => GenesisBlock<T>;
|
Jon Kitna is taking time over his winter break as a math teacher at Lincoln High School in Tacoma, Wash. to serve as an emergency backup quarterback for the Dallas Cowboys this week.
If Tony Romo is unable to play against the Philadelphia Eagles in a game that will decide the NFC East title, Kitna will be the backup for Kyle Orton. As a 10-year NFL veteran, Kitna will make $55,294 for his one week of work, likely holding a clipboard.
Kitna has been teaching and serving as head football coach at his alma mater since retiring from the league after the 2011 season.
According to Barry Horn of the Dallas Morning News, Kitna indicated Wednesday his school would be the beneficiary of his brief return to the NFL. Kitna told teammates and broadcaster Brad Sham that he would be donating the check to Lincoln High School.
If the Cowboys lose Sunday night and miss the postseason, Kitna will be back home in time for the start of school after the first of the year. Regardless of the outcome of the game, Lincoln will be the winners for Kitna’s brief foray back to the Cowboys. |
#ifndef FOGSOURCEFILETYPE_HXX
#define FOGSOURCEFILETYPE_HXX
#include <iosfwd>                       // std::ostream is used by operator<< below
class FogSourceFileType;                // forward declaration needed by the flyweight struct
struct FogSourceFileType_Flyweights // dbxtool goes infinite if T has static array of T.
{
static const FogSourceFileType _flyWeights[];
};
class FogSourceFileType
{
public:
//
// Enum enumerates the nature of the source file.
//
enum Enum
{
TOP_INPUT, // Source file specified from command line
HASH_INPUT, // Source file #include'd
USING_INPUT, // Source file using/include'd
UNREAD_INPUT, // Source file not-read
INVALID
};
private:
Enum _file_type;
private:
FogSourceFileType(const FogSourceFileType& fileType);
FogSourceFileType& operator=(const FogSourceFileType& fileType);
private:
friend struct FogSourceFileType_Flyweights;
FogSourceFileType(Enum fileType) : _file_type(fileType) {}
public: // egcs ignores friendship on static destructor
~FogSourceFileType() {}
public:
bool is_hash() const { return _file_type == HASH_INPUT; }
bool is_read() const { return _file_type <= USING_INPUT; }
bool is_top() const { return _file_type == TOP_INPUT; }
bool is_unread() const { return _file_type == UNREAD_INPUT; }
Enum value() const { return _file_type; }
public:
friend bool operator==(const FogSourceFileType& firstType, const FogSourceFileType& secondType)
{ return firstType._file_type == secondType._file_type; }
friend bool operator!=(const FogSourceFileType& firstType, const FogSourceFileType& secondType)
{ return firstType._file_type != secondType._file_type; }
friend std::ostream& operator<<(std::ostream& s, const FogSourceFileType& fileType);
public:
static const FogSourceFileType& hash_input()
{ return FogSourceFileType_Flyweights::_flyWeights[HASH_INPUT]; }
static const FogSourceFileType& invalid() { return FogSourceFileType_Flyweights::_flyWeights[INVALID]; }
static const FogSourceFileType& top_input()
{ return FogSourceFileType_Flyweights::_flyWeights[TOP_INPUT]; }
static const FogSourceFileType& unread_input()
{ return FogSourceFileType_Flyweights::_flyWeights[UNREAD_INPUT]; }
static const FogSourceFileType& using_input()
{ return FogSourceFileType_Flyweights::_flyWeights[USING_INPUT]; }
static const PrimEnum& values();
};
typedef FogEnumHandle<FogSourceFileType> FogSourceFileTypeHandle;
#endif
|
Confused about copyright in Canada? Worried your Netflix account may be nixed because you’re cross-border viewing?
Here’s a story I wrote for the weekend Vancouver Sun to help you make sense of what’s happening with copyright in Canada.
This month’s change to Canada’s copyright rules happened to coincide with a rumoured Netflix crackdown involving Canadian users who bypass licensing regulations to access shows from the U.S. service.
A company that tracks online piracy in Canada has uncovered three million cases of illegal downloading and video streaming in the past three months. And notices have already been sent to Canadian consumers, threatening them with American-style draconian penalties, including having their Internet service cut off.
It has all left some consumers worried they’re going to lose their favourite TV shows, face fines or even get bumped off the Internet.
Business as usual or online apocalypse? Is it going to change how and what we watch on our televisions, tablets, smartphones and other devices?
But it’s up to consumers to educate themselves on their rights and responsibilities so they don’t fall victim to false demands for payments and other bullying behaviour.
Internet Service Providers are now required to forward notices of copyright infringement to their subscribers at the request of the rights holder, referred to as notice and notice. However, the rights holder doesn’t know the identity of the subscriber.
Michael Geist, Canada Research Chair in Internet and e-commerce law at the University of Ottawa, said while Canada’s legislation strikes a fair balance protecting consumers, rights holders and ISPs, there is already evidence it’s being exploited.
He said there is concern that some of the rights holders or anti-piracy companies will take advantage of warning notices to seek settlements, even though they don’t know who they are being sent to. If an individual receives the letter and doesn’t know Canadian copyright law, they could think they are liable.
“That strikes me as a real misuse of the system if we start seeing that emerge and I think we are and we will,” said Geist.
Geist posted an example of a recent notice on his blog, which threatens to suspend Internet service and apply penalties up to $150,000 per infringement — neither of which are penalties under Canadian law.
David Christopher, spokesman for the Internet advocacy group OpenMedia.ca, said Canada’s rules are meant to stop copyright trolls, those who hope to make money by threatening people with legal action if they don’t pay up.
Should you be scared? No. At least not unless you’re in the business of piracy, in which case you could face the heftier penalties reserved for those who make money from infringing copyright.
Should you be informed? Yes.
If you’re among the estimated 1.92 million Canadians who pretend they’re located in the U.S. so they can access the extensive listings of Netflix.com rather than settling for Netflix.ca, you may be relieved to know Netflix has quashed rumours that it has a launched a special campaign to cut you off.
Also, Ottawa’s new rules involve a practice that has been carried out by a number of ISPs for some time.
“We’re not talking about dramatic changes,” said Geist. “Notice and notice, for example, has taken place informally for 10 years, so people have been getting these kind of notices for a long time.”
Rogers sent out 207,000 notices in 2010, representing about five per cent of its customers. Among those who received a copyright infringement notice, 67 per cent stopped the practice, and with a second notice, 89 per cent stopped, according to company data.
Penalties in Canada are also heftier for people in the business of pirating than they are for ordinary consumers. The minimum fine for non-commercial infringement is $100 and the maximum is $5,000 — and that’s for all infringements in a lawsuit, not per infringement. That’s down from $20,000 per infringement, which is still the penalty for commercial infringement. So unlike the United States, in Canada we don’t get stories of single moms and students facing fines of hundreds of thousands of dollars involving illegally downloaded files.
If rights holders want to pursue a case, they have to go to court to get an order for the identity of the subscriber so they can send a demand letter. A Federal Court judgment last year in a case between Voltage Pictures and ISP TekSavvy set out clear guidelines for judicial oversight over the information that rights holders would receive and the wording of the letters they can send out. It is expected the minimum statutory damages of $100 or damages proven by the copyright holder, which might amount to the cost of a movie or a rental cost, will discourage lawsuits in non-commercial cases.
Barry Logan is the managing director of Canipre, a company that tracks copyright infringement in Canada for clients including movie, television, music and software rights holders. Starting this week in Western Canada, the company will be sending out notices to ISPs to be sent to alleged copyright offenders among their subscribers.
The company is not spying on your computer but rather tracking sites that offer illegal downloading and video streaming of copyright content. It’s the content Canipre focuses on, so for example, it may collect all the IP (Internet Protocol) addresses of users who download or stream a specific movie from a site in contravention of copyright.
Logan said notice and notice is a helpful tool for rights holders but it may take some time to see an effect from the recent change.
“It’s going to take three months before we see anything happening,” he said. Asked for a sample copyright infringement notice, Logan said he was advised by the company’s lawyers not to share it.
As Internet users shift from downloading content to streaming it, the trackers are not far behind.
Logan said among the three million cases of piracy in the past three months, “there’s a lot of recidivism.” That suggests the same people are infringing copyright repeatedly, rather than the picture of the average Canadian consumer as an online pirate.
Morten Rand-Hendriksen could be representative of Canadian consumers who are willing to pay but expect convenience and service at affordable prices. Rand-Hendriksen is among those who receive most of their entertainment through streaming options.
“I try to be as legal as I can be, which at times can be extremely frustrating,” said Rand-Hendriksen, who works for Lynda.com, the online software training site, and has seen his own work pirated online. “I don’t do anything illegal because I think it’s just wrong.”
As an example, Rand-Hendriksen cited Game of Thrones, which was first released by HBO, so if you wanted to watch it, you had to subscribe to the network.
The show was eventually made available through iTunes but for Rand-Hendriksen, the experience highlights the shortcomings in the entertainment model.
“I think the solution is a new model along the lines of what we have, streaming services, and you also have to look at a distribution network where you can pick what you want to watch. You can pay on an ongoing basis for a show,” he said.
Just as in music, where some artists bypassed the middle man to sell straight to their fans, video creators are exploring new ways of distributing their content.
Comedian David Cross is using BitTorrent’s Bundle service to release his feature film, Hits, this Feb. 13. The film debuted at Sundance last year but this will be its first widespread release, available on a pay-what-you-want basis.
Canadians, through their adoption of Netflix, music streaming services like Rdio and Spotify, and other flat rate all-you-can-use entertainment services, have demonstrated that they’re willing to pay for content.
Netflix responded by saying it hasn’t changed its practices.
Netflix offerings vary by country, depending on their licensing arrangements, so a show or movie that is available in the U.S. may not be available in Canada. Consumers looking for expanded offerings can subscribe to VPN services such as Unblock-Us, which effectively provides an IP address that indicates you’re in another location, say the U.S. or the United Kingdom, instead of Canada. (Unblock-Us has also said there is no evidence Netflix is testing new methods of disabling geoblocking services.) So while trying to access Netflix.com from your computer or mobile device in Canada will redirect you to Netflix.ca, using a geoblocking service, you could access the U.S. site.
So widespread is the practice that there’s an app, Nu for Netflix, that lets you search for movies and TV shows and tells you which country they’re available in.
Has Netflix launched a crackdown on users who access Netflix sites outside of their own country? Not so, according to the company, which says it hasn’t changed its practices.
“It’s not clear that it matters all that much to rights holders because they still get paid to make the content available and if they were really that concerned with it they would increase the pressure on Netflix to stop it,” he said.
“Your ISP has forwarded you this notice.
Your ISP account has been used to download, upload or offer for upload copyrighted content in a manner that infringes on the rights of the copyright owner. |
Kalmati says he still hopes to see the West burn in righteous fire, but doesn’t get as bent out of shape about it as he used to.
MIRANSHAH, PAKISTAN—Admitting he has “mellowed out a bit” with age, 54-year-old militant jihadist Adil Jalal Kalmati confided to reporters Wednesday that he now finds himself far less enraged by Western culture than he did in his younger days as a religious extremist.
The veteran Taliban insurgent confirmed that while he still strictly and unflinchingly follows the tenets of Sharia law, he gets less worked up than he used to whenever someone expresses a personal value that could be seen to clash in any way with the fundamentalist practice of Sunni Islam.
Kalmati explained to reporters that as he’s grown older, he’s begun to realize that life is too short to spend an entire day disemboweling a teacher for the offense of educating girls, or publicly flogging a woman seen walking with a man who isn’t a relative—things he has reportedly done in the past without hesitation.
Now that he’s reached middle age, the Islamist operative said, he is also less inclined to react violently when coming across Pashtuns who embrace elements of American culture, a far cry from when he was younger and once shot the entire staff of a local movie theater execution-style for screening a Hollywood film.
While acknowledging that the thought of male doctors treating female patients will still, on occasion, cause him to erupt in a fit of anger and burn down a public hospital, Kalmati explained that he now focuses more time and energy on his personal hobbies, which include cooking and photography.
The aging zealot stressed that his desire to cleanse the world of all infidels—even if it means massacring fellow Muslims who aren’t strictly devout or who practice Shi’a Islam—remains as strong as ever. But he noted he is far happier now that his entire emotional state no longer revolves solely around that end. |
Enhanced Electrochemical Performances of Bi2O3/rGO Nanocomposite via Chemical Bonding as Anode Materials for Lithium Ion Batteries. Bismuth oxide/reduced graphene oxide (termed Bi2O3@rGO) nanocomposite has been facilely prepared by a solvothermal method via introducing chemical bonding that has been demonstrated by Raman and X-ray photoelectron spectroscopy spectra. Tremendous single-crystal Bi2O3 nanoparticles with an average size of ∼5 nm are anchored and uniformly dispersed on rGO sheets. Such a nanostructure results in enhanced electrochemical reversibility and cycling stability of Bi2O3@rGO composite materials as anodes for lithium ion batteries in comparison with agglomerated bare Bi2O3 nanoparticles. The Bi2O3@rGO anode material can deliver a high initial capacity of ∼900 mAh/g at 0.1C and shows excellent rate capability of ∼270 mAh/g at 10C rates (1C = 600 mA/g). After 100 electrochemical cycles at 1C, the Bi2O3@rGO anode material retains a capacity of 347.3 mAh/g with corresponding capacity retention of 79%, which is significantly better than that of bare Bi2O3 material. The lithium ion diffusion coefficient during lithiation-delithiation of Bi2O3@rGO nanocomposite has been evaluated to be around 10^-15 to 10^-16 cm^2/s. This work demonstrates the effects of chemical bonding between Bi2O3 nanoparticles and rGO substrate on enhanced electrochemical performances of Bi2O3@rGO nanocomposite, which can be used as a promising anode alternative for superior lithium ion batteries. |
Incorporating a-priori expert knowledge in genetic algorithms Conventional applications of genetic algorithm (GA) suggest using a random initial population. However, it is intuitively clear that any search routine could converge faster if starting points are good solutions. In this paper, a novel method is illustrated which incorporates a-priori knowledge in creating a fitter initial population while allowing for randomness among members of the population for diversity. Furthermore, the methodology is applied to optimization of a fuzzy controller's membership parameters in a water desalination control process, in particular a brine heater temperature control problem. It is shown that the GA-improved PID fuzzy controller is able to reduce overshoot by 80 percent when compared to a non-GA PID fuzzy controller. |
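The seeding idea described in the abstract above can be sketched as follows. Nothing in this sketch comes from the paper itself: the chromosome length, the perturbation width, the seeded fraction and the "expert" solutions are placeholders, and the actual work encodes fuzzy membership-function parameters rather than arbitrary doubles. It only illustrates the general pattern of building an initial population from perturbed copies of a-priori expert solutions, topped up with random individuals to preserve diversity.

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Sketch of seeding a GA initial population from a-priori expert solutions:
// part of the population is made of noisy copies of known-good candidates,
// the rest is random to preserve diversity. All parameters are placeholders.
public class SeededPopulation {
    static final Random RNG = new Random(42);

    static double[] perturb(double[] expert, double sigma) {
        double[] child = expert.clone();
        for (int i = 0; i < child.length; i++) {
            child[i] += sigma * RNG.nextGaussian();      // small Gaussian nudge for diversity
        }
        return child;
    }

    static double[] randomIndividual(int genes) {
        double[] individual = new double[genes];
        for (int i = 0; i < genes; i++) {
            individual[i] = RNG.nextDouble();            // uniform in [0, 1)
        }
        return individual;
    }

    static List<double[]> initialPopulation(List<double[]> experts, int size, double seededFraction) {
        List<double[]> population = new ArrayList<>();
        int seeded = (int) (size * seededFraction);
        for (int i = 0; i < seeded; i++) {
            population.add(perturb(experts.get(i % experts.size()), 0.05));
        }
        while (population.size() < size) {
            population.add(randomIndividual(experts.get(0).length));
        }
        return population;
    }

    public static void main(String[] args) {
        List<double[]> experts = List.of(
                new double[]{0.2, 0.5, 0.8},             // hypothetical expert tunings
                new double[]{0.1, 0.4, 0.9});
        List<double[]> population = initialPopulation(experts, 20, 0.5);
        System.out.println("population size: " + population.size());
    }
}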
Diabetes and stroke The association between diabetes and stroke is well established. Recent large-scale, international population studies suggest that diabetes is one of the most important modifiable risk factors for cerebrovascular disease. Despite this, we still have a relative paucity of evidence around the management of diabetes in stroke. The landscape is evolving and recent studies are helping establish best practice and suggesting new therapeutic opportunities. It is possible to develop a practical and clinical synthesis of the evidence around managing diabetes in adult patients with stroke and cerebrovascular disease, based on large trials, systematic reviews and guidelines, and focusing on the scenarios most often encountered in clinical practice. It is also important to recognise that there are common situations where robust evidence is lacking, but practical guidance for clinicians can be suggested. Copyright © 2019 John Wiley & Sons. Practical Diabetes 2019; 36: 126–131 |
package com.wagner.mycv.service;
import com.wagner.mycv.framework.service.SimpleCrudService;
import com.wagner.mycv.web.dto.request.CertificationRequestDto;
import com.wagner.mycv.web.dto.CertificationDto;
public interface CertificationService extends SimpleCrudService<CertificationRequestDto, CertificationDto> {
}
|
Model-based predictive sampled-data control and its robustness This paper proposes a model-based predictive sampled-data controller with a large fixed sampling rate h. Although the linear-time-invariant (LTI) plant is unknown, a nominal model is available. This nominal model is used to predict and compensate the influence of the large sampling using the measured information from the plant. The controller is designed on the basis of the nominal model. The robustness and performance of this model-based predictive sampled-data controller are explored with respect to the sampling rate h, the mismatches between the nominal model and plant, as well as the choice of the feedback gain matrix K. It is interesting to observe that the robustness of the proposed method is not proportional to the sampling rate h; neither a small h nor a large h is robust. Maximum robustness requires a well-chosen finite sampling rate h. |
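One common construction along the lines sketched in the abstract above is illustrated below: between the widely spaced samples the controller propagates a nominal model of the plant from the last measurement and feeds back the predicted state, and at each sample instant the prediction is reset to the measured state. The scalar plant, the nominal-model mismatch, the gain and the step sizes used here are all invented for illustration and are not taken from the paper.

// Illustrative simulation of a model-based sampled-data loop: the true plant
// evolves continuously, measurements arrive only every h seconds, and between
// samples the controller feeds back a state predicted with a (mismatched)
// nominal model. Plant, model mismatch, gain and step sizes are all assumed.
public class ModelBasedSampledLoop {
    public static void main(String[] args) {
        double a = 0.5, b = 1.0;          // "unknown" true plant: dx/dt = a*x + b*u
        double aNom = 0.4, bNom = 1.1;    // nominal model used by the controller
        double k = 2.0;                   // state-feedback gain designed on the nominal model
        double h = 1.0;                   // large fixed sampling period
        double dt = 0.001;                // integration step for the simulation

        double x = 1.0;                   // true state
        double xHat = x;                  // controller's predicted state (reset at samples)
        double timeSinceSample = 0.0;

        for (double t = 0.0; t < 10.0; t += dt) {
            if (timeSinceSample >= h) {   // sample instant: reset prediction to measurement
                xHat = x;
                timeSinceSample = 0.0;
            }
            double u = -k * xHat;                         // feedback on the predicted state
            x    += dt * (a * x + b * u);                 // true plant (forward Euler)
            xHat += dt * (aNom * xHat + bNom * u);        // nominal model propagated between samples
            timeSinceSample += dt;
        }
        System.out.printf("state after 10 s: x = %.4f%n", x);
    }
}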
// src/components/pages/top-news/index.tsx
import { useCountry } from '../../../hooks/params';
import { useNewsQuery } from '../../../hooks/query/news';
import Typography from '@mui/material/Typography';
import { getCountryDisplayName } from '../../../constants';
import { NewsItemCollection } from '../../news-item/collection';
export const TopNewsContent = (): JSX.Element => {
const country = useCountry();
const { data, isLoading } = useNewsQuery(country);
return (
<div>
<Typography variant="h4" sx={{ mb: 3 }}>
Top news from {getCountryDisplayName(country)}
</Typography>
<NewsItemCollection
articles={data?.articles}
isLoading={isLoading || !country}
skeletonCount={20}
/>
</div>
);
};
|
package com.circumgraph.graphql.internal.search;
import com.circumgraph.model.ScalarDef;
public class OffsetDateTimeCriteria
extends RangeCriteria
{
public OffsetDateTimeCriteria()
{
super(ScalarDef.OFFSET_DATE_TIME);
}
}
|
Erythropoiesis and erythropoietin in hypo- and hyperthyroidism. Qualitative and quantitative studies of erythropoiesis in 23 patients with hypothyroidism and 21 patients with hyperthyroidism included routine hematologic evaluation, bone marrow morphology, status of serum iron, B12 and folate, red blood cell mass and plasma volume by radioisotope methods, erythrokinetics and radiobioassay of plasma erythropoietin. A majority of patients with the hypothyroid state had significant reduction in red blood cell mass per kg of body weight. The presence of anemia in many of these patients was not evident from hemoglobin and hematocrit values due to concomitant reduction of plasma volume. The erythrokinetic data in hypothyroid patients provided evidence of significant decline of the erythropoietic activity of the bone marrow. Erythroid cells in the marrow were depleted and also showed reduced proliferative activity as indicated by lower 3H-thymidine labeling index. Plasma erythropoietin levels were reduced, often being immeasurable by the polycythemic mouse bioassay technique. These changes in erythropoiesis in the hypothyroid state appear to be a part of physiological adjustment to the reduced oxygen requirement of the tissues due to diminished basal metabolic rate. Similar investigations revealed mild erythrocytosis in a significant proportion of patients with hyperthyroidism. Failure of erythrocytosis to occur in other patients of this group was associated with impaired erythropoiesis due to a deficiency of hemopoietic nutrients such as iron, vitamin B12 and folate. The mean plasma erythropoietin level of these patients was significantly elevated; in 4 patients the levels were in the upper normal range whereas in the rest, the values were above the normal range. The bone marrow showed erythroid hyperplasia in all patients with hyperthyroidism. The mean 3H-thymidine labeling index of the erythroblasts was also significantly higher than normal in hyperthyroidism; in 8 patients the index was within the normal range whereas in the remaining 13 it was above the normal range. Erythrokinetic studies also provided evidence of increased erythropoietic activity in the bone marrow. It is postulated that thyroid hormones stimulate erythropoiesis, sometimes leading to erythrocytosis provided there is no deficiency of hemopoietic nutrients. Stimulation of erythropoiesis by thyroid hormones appears to be mediated through erythropoietin. |
WP1066 suppresses macrophage cell death induced by inflammasome agonists independently of its inhibitory effect on STAT3
The compound WP1066 was originally synthesized by modifying the structure of AG490, which inhibits the activation of signal transducer and activator of transcription 3 (STAT3) by directly targeting Janus kinases (JAKs). WP1066 exhibits stronger anti-cancer activity than AG490 against malignant glioma and other cancer cells and is regarded as a promising therapeutic agent. By screening a small library of target-known compounds, we identified WP1066 as an inhibitor of macrophage cell death induced by agonists of the NLRP3 inflammasome, an intracellular protein complex required for the processing of the proinflammatory cytokine interleukin (IL)-1β. WP1066 strongly inhibited cell death as well as extracellular release of IL-1β induced by inflammasome agonists in mouse peritoneal exudate cells and human leukemia monocytic THP-1 cells that were differentiated into macrophagic cells by treatment with PMA. However, inflammasome agonists did not increase STAT3 phosphorylation, and another JAK inhibitor, ruxolitinib, did not inhibit cell death, although it strongly inhibited basal STAT3 phosphorylation. Thus, WP1066 appears to suppress macrophage cell death independently of its inhibitory effect on STAT3. In contrast, WP1066 itself induced the death of undifferentiated THP-1 cells, suggesting that WP1066 differentially modulates cell death in a context-dependent manner. Consistent with previous findings, WP1066 induced the death of human glioma A172 and T98G cells. However, neither ruxolitinib nor AG490, the former of which completely suppressed STAT3 phosphorylation, induced the death of these glioma cells.
Signal transducer and activator of transcription 3 (STAT3) functions mainly as a transcription factor and has been shown to be involved in tumor cell proliferation, survival and invasion. Activated Janus kinases (JAKs), such as JAK1 and JAK2, directly phosphorylate STAT3 at Y705, and phosphorylated STAT3 dimerizes and translocates to the nucleus. Thus, low-molecular-weight kinase inhibitors targeting JAKs have been regarded as promising anti-cancer agents. WP1066 is among them and was originally synthesized as an anti-cancer compound more potent than previous ones by modifying the structure of AG490, one of the prototypic JAK2 inhibitors. WP1066 induces apoptosis and/or growth inhibition in a variety of cancer cells, such as those of malignant glioma, acute myelogenous leukemia, melanoma, renal cell carcinoma and oral squamous cell carcinoma, both in vitro and in vivo. According to ClinicalTrials.gov (https://clinicaltrials.gov), a registry and results database of publicly and privately supported clinical studies of human participants conducted around the world, a phase I trial of WP1066 in patients with recurrent malignant glioma and brain metastasis from melanoma is ongoing. The proinflammatory cytokine interleukin (IL)-1β is transcriptionally induced as a precursor protein, called pro-IL-1β, after the activation of Toll-like receptor (TLR) signaling in inflammatory cells such as macrophages. In response to a variety of pathogen-associated molecular patterns (PAMPs) and damage-associated molecular patterns (DAMPs), the cysteine protease caspase-1 is activated in an intracellular protein complex called the inflammasome, and activated caspase-1 proteolytically processes pro-IL-1β into mature, active IL-1β. Among the various inflammasomes, the NLRP3 inflammasome, which consists of NLRP3, a member of the NOD-like receptor family, and the adaptor protein ASC (apoptosis-associated speck-like protein containing a caspase activation and recruitment domain), together with caspase-1, is responsive to the broadest range of stimuli and therefore plays a central role in the regulation of IL-1β processing. Finally, active IL-1β is released from cells; however, the mechanism of extracellular release of IL-1β remains elusive because the processing of pro-IL-1β occurs in the cytosol and generates active IL-1β that lacks a secretory signal sequence. Thus, elucidating the mechanism of extracellular release of IL-1β is a prerequisite for understanding the regulation of inflammation and inflammatory diseases. In this study, we identified WP1066 as a strong inhibitor of macrophage cell death and the extracellular release of IL-1β induced by NLRP3 inflammasome agonists. We further examined the effects of WP1066 on cell death in various contexts, with a particular focus on the relationship between its cell death-modulating activity and its inhibitory effect against STAT3.

Materials and Methods

Reagents. Nigericin was purchased from Wako Chemical (Osaka, Japan). R837/imiquimod was purchased from InvivoGen (San Diego, CA, USA) and Tokyo Chemical Industry (Tokyo, Japan). WP1066, AG490 and ruxolitinib were purchased from Cayman Chemical (Ann Arbor, MI, USA). Lipopolysaccharide (LPS) and phorbol 12-myristate 13-acetate (PMA) were purchased from Sigma-Aldrich (St. Louis, MO, USA). Ac-YVAD-CMK was purchased from Peptide Institute (Osaka, Japan).
Chemical compounds in the SCADS Inhibitor Kit (Screening Committee of Anticancer Drugs supported by Grant-in-Aid for Scientific Research on Priority Area "Cancer" from the Ministry of Education, Culture, Sports, Science and Technology, Japan) were used for screening.

Cell culture. Human leukemia monocytic THP-1 cells (American Type Culture Collection, Manassas, VA, USA) were cultured in RPMI 1640 medium supplemented with 100 units/mL penicillin G and 0.1 mg/mL streptomycin containing 8% fetal bovine serum (FBS) under a 5% CO2 atmosphere at 37°C. THP-1 cells were differentiated into macrophagic cells by overnight treatment with 0.1 µM PMA. Human glioma A172 cells and T98G cells (American Type Culture Collection) were cultured in Dulbecco's modified Eagle's medium (DMEM) containing 8% FBS and supplemented with 100 units/mL penicillin G and 0.1 mg/mL streptomycin under a 5% CO2 atmosphere at 37°C. Mouse peritoneal exudate cells (PECs) were isolated from the peritoneal cavity of 8- to 12-week-old mice 2 days after the intraperitoneal injection of 2 mL of 4% fluid thioglycollate medium (BD Diagnostic Systems, Heidelberg, Germany) and were further cultured overnight in RPMI 1640 medium containing 8% FBS and supplemented with 100 units/mL penicillin G and 0.1 mg/mL streptomycin under a 5% CO2 atmosphere at 37°C. Prior to stimulation with inflammasome agonists, cells were washed twice with PBS and further cultured for 4 h in Opti-MEM I Reduced-Serum Medium (Thermo Fisher Scientific, Waltham, MA, USA) containing 100 ng/mL LPS, which was the "priming" stimulus to induce the transcription of pro-IL-1β.

Immunoblot analysis. For immunoblot analysis of culture supernatants, culture medium was collected and centrifuged for 1 min at 860 g, and the resulting supernatants were added to the same volume of methanol and one-fourth volume of chloroform and vigorously mixed. After incubation on ice for 15 min, the solution was centrifuged at 21,500 g for 10 min, and the upper phase of the solution was removed. The remaining solution was added to 500 µL of methanol and vigorously mixed. After incubation on ice for 15 min, the solution was centrifuged at 21,500 g for 10 min, and the supernatants were removed. Methanol was added to the pellet, and the solution was further centrifuged at 21,500 g for 10 min. The supernatants were removed, and the pellet was air-dried and then dissolved in a buffer containing 125 mM Tris-HCl (pH 6.8), 20% glycerol, 4% SDS and 10 mM DTT. For immunoblot analysis of cell lysates, PECs were lysed with a buffer containing 62.5 mM Tris-HCl (pH 6.8), 10% glycerol, 2% SDS and 5 mM DTT, followed by sonication for 1 min. Other cells were lysed with a buffer containing 25 mM Tris-HCl (pH 7.5), 150 mM NaCl, 5 mM EGTA, 1% Triton X-100, 5 µg/mL aprotinin and 1 mM phenylmethylsulfonyl fluoride, and after centrifugation at 21,500 g for 15 min the supernatants were collected as cell lysates. When detecting phospho-Stat3, PhosSTOP Phosphatase Inhibitor Cocktail (Roche Life Science, Mannheim, Germany) was included in the lysis buffer. Cell lysates were then fractionated by SDS-polyacrylamide gel electrophoresis and electroblotted onto polyvinylidene difluoride membranes. The membranes were probed with primary antibodies and horseradish peroxidase (HRP)-conjugated secondary antibodies. Protein bands were visualized using the enhanced chemiluminescence system and analyzed with an ImageQuant LAS4000 (GE Healthcare, Piscataway, NJ, USA).
The following primary antibodies were used in this study: anti-IL-1β (human specific; #12703) antibody, anti-IL-1β (mouse specific; #8689) antibody, anti-phospho-Stat3 (Tyr705) antibody, anti-phospho-Stat3 (Ser727) antibody, anti-Stat3 antibody and anti-β-actin antibody, all from Cell Signaling (Danvers, MA, USA); anti-cleaved IL-1β (mouse) antibody (MBL, Nagoya, Japan); and anti-caspase-1 (p20) antibody (Adipogen, San Diego, CA, USA). HRP-conjugated anti-mouse IgG (GE Healthcare) and HRP-conjugated anti-rabbit IgG (Cell Signaling) were used as secondary antibodies.

IL-1β ELISA. Culture medium was collected and centrifuged at 860 g for 1 min, and the IL-1β level in the resulting supernatants was measured using an IL-1β ELISA kit (Quantikine ELISA; R&D Systems, Minneapolis, MN, USA) according to the manufacturer's instructions.

Cell death assay. For propidium iodide (PI) staining, 2 µg/mL PI was added to the culture medium 10 min before cell harvest. For adherent cells, cells were dissociated with trypsin and suspended into single cells by pipetting or passing through 23G needles. The suspended cells were centrifuged at 860 g for 3 min and resuspended in PBS. The fluorescence emitted by cells was analyzed using a BD Accuri C6 flow cytometer (BD Bioscience, Franklin Lakes, NJ, USA). To detect the level of lactate dehydrogenase (LDH) released from the cells, the Cytotoxicity LDH Assay Kit-WST (Dojindo, Kumamoto, Japan) was used according to the manufacturer's instructions.

Results

WP1066 suppresses IL-1β release from macrophages. To explore the mechanism of IL-1β release from macrophages, we sought to identify target-known low-molecular-weight compounds that inhibit IL-1β release. We expected that this would be a fast approach to identifying important molecules that regulate IL-1β release. We screened the effects of 365 compounds on the release of IL-1β from human leukemia monocytic THP-1 cells treated with the chemical NLRP3 inflammasome agonist R837/imiquimod. Prior to treatment with R837, THP-1 cells were differentiated into macrophagic cells by PMA treatment and were then primed with LPS to efficiently induce the transcription of pro-IL-1β. We found that WP1066 was among the strongest compounds that inhibited R837-induced IL-1β release from THP-1 cells (Fig. 1a). While IL-1β was continuously released from THP-1 cells after R837 stimulation, 10 µM WP1066 completely suppressed IL-1β release during the 120-min treatment with R837 (Fig. 1b), and as little as 1 µM WP1066 suppressed IL-1β release 60 min after stimulation (Fig. 1c). Immunoblot (IB) analysis of IL-1β released into the culture supernatant of THP-1 cells revealed that WP1066 also strongly suppressed the release of IL-1β and unprocessed pro-IL-1β in response to the bacterial toxin nigericin, monosodium urate (MSU) crystals and low osmolarity (treatment with 2:5 diluted Opti-MEM I Reduced-Serum Medium), all of which have been shown to act as strong NLRP3 inflammasome agonists (Fig. 1d). These results suggest that WP1066 suppresses IL-1β release upon NLRP3 inflammasome activation from THP-1 cells. We next examined the effect of WP1066 on PECs. WP1066 suppressed the nigericin- and R837-induced release of IL-1β together with that of active caspase-1, as shown by IB analysis of the culture supernatant of PECs (Fig. 1e, upper two panels), suggesting that WP1066 also suppresses IL-1β release upon NLRP3 inflammasome activation in PECs.
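To make the cell death readouts described in the methods concrete, the following minimal Python sketch shows how the two measures are typically quantified: percent cytotoxicity from LDH absorbance, assuming the common (experimental − spontaneous)/(maximum − spontaneous) × 100 formula (the Dojindo kit's own instructions take precedence), and the percentage of PI-positive cells from per-cell fluorescence. Function names and values are illustrative, not taken from the paper.

# Minimal sketch (not from the paper): quantifying the two cell-death readouts
# described above. Assumes the common LDH cytotoxicity formula
# (experimental - spontaneous) / (maximum - spontaneous) * 100.

def ldh_cytotoxicity(a_experimental, a_spontaneous, a_maximum):
    """Percent cytotoxicity from background-corrected LDH absorbance readings."""
    return 100.0 * (a_experimental - a_spontaneous) / (a_maximum - a_spontaneous)

def pi_positive_percent(pi_intensities, gate):
    """Percent PI-positive (dead) cells from per-cell PI fluorescence values.

    `gate` is a threshold set from an untreated/unstained control sample.
    """
    positive = sum(1 for value in pi_intensities if value > gate)
    return 100.0 * positive / len(pi_intensities)

# Example with made-up absorbance readings (not data from the paper):
# ldh_cytotoxicity(0.85, 0.20, 1.60)  # -> ~46.4% cytotoxicity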
IB analysis of PEC lysates revealed that WP1066 suppressed the nigericin- or R837-induced processing and activation of caspase-1 (Fig. 1e, lower three panels). Although we could not examine the effect of WP1066 on caspase-1 activation in THP-1 cells because of the unavailability of antibodies detecting active caspase-1 in human cells, the NLRP3 inflammasome or its upstream pathway may be at least one of the targets of WP1066 in inflammasome agonist-induced IL-1β release.

WP1066 suppresses cell death induced by inflammasome agonists. In IB analysis of the culture supernatant of THP-1 cells, we noticed that β-actin was released from the cells in a time-dependent manner similar to that of IL-1β release after R837 (Fig. 2a). This raised the possibility that cell death was induced by inflammasome agonists, as has been shown in the case of pyroptosis, a form of cell death mediated by the activation of caspase-1. In fact, the LDH release assay revealed that R837 strongly induced the death of THP-1 cells, an effect that was suppressed by pretreatment with WP1066 (Fig. 2b). When dead cells were detected through the cellular incorporation of propidium iodide (PI), R837-induced death and the strong inhibitory effect of WP1066 on it were confirmed in THP-1 cells (Fig. 2c, upper graph). This R837-induced death was strongly inhibited by the addition of high KCl to the culture medium. High KCl is known to inhibit NLRP3 inflammasome activation and cell death by blocking K+ efflux, a common trigger of NLRP3 inflammasome activation, as confirmed by the considerable reduction in released IL-1β in the culture supernatant (Fig. 2c, lower panel). On the other hand, the inhibition of cell death by the caspase-1 inhibitor Ac-YVAD-CMK was limited, although it effectively suppressed caspase-1 activation, as confirmed by the reduction in cleaved IL-1β in the supernatant (Fig. 2c). We found similar results when THP-1 cells were stimulated with nigericin (Fig. 2d). Furthermore, the R837-induced death of PECs was suppressed by WP1066, whereas it was not suppressed but rather enhanced by Ac-YVAD-CMK (Fig. 2e). These results suggest that the site of action of WP1066 is not limited to NLRP3 inflammasome activation, i.e., caspase-1 activation, and that WP1066 targets the cell death-inducing machinery that does not rely on caspase-1 activity in macrophages.

Inflammasome agonists do not induce STAT3 activation. Because WP1066 has been characterized as a STAT3 inhibitor, we examined the activation state of STAT3 by monitoring its phosphorylation at Y705 in THP-1 cells. Whereas the basal phosphorylation of Y705 (P-Y705) was quite low in undifferentiated THP-1 cells, P-Y705 increased in a time-dependent manner in response to PMA and increased further by 4-h priming with LPS (Fig. 3a). The PMA/LPS-induced P-Y705 was attenuated by treatment with WP1066. The phosphorylation of STAT3 at S727, which has been reported to be required for the mitochondrial translocation of STAT3, did not change throughout treatment with PMA and LPS, in contrast to P-Y705. We first expected that inflammasome agonists would further activate STAT3 to facilitate NLRP3 inflammasome activation and cell death, but quite the contrary, R837 time-dependently decreased the P-Y705 that had been induced by pretreatment with PMA and LPS (Fig. 3b). Moreover, none of the other NLRP3 inflammasome agonists examined increased STAT3 P-Y705 (Fig. 3c).

WP1066 suppresses inflammasome agonist-induced cell death independently of its inhibitory effect on STAT3.
We then examined the involvement of STAT3 in IL-1β release and cell death using different JAK inhibitors that have previously been shown to inhibit STAT3 activation. Ruxolitinib (INCB018424), a well-characterized inhibitor of JAK1 and JAK2, strongly suppressed STAT3 P-Y705 but unexpectedly did not suppress IL-1β release from THP-1 cells treated with R837 (Fig. 4a). The inhibitory effect of AG490, the predecessor compound of WP1066, on R837-induced IL-1β release was much weaker than that of WP1066 (Fig. 4b). Consistent with these results, neither ruxolitinib nor AG490 suppressed the R837-induced death of THP-1 cells (Fig. 4c). Thus, the inhibitory effect of WP1066 on inflammasome agonist-induced cell death does not appear to depend on its inhibitory effect on STAT3, and WP1066 may target cell death-inducing molecules other than those involved in JAK-STAT3 signaling.

Fig. 3. Inflammasome agonists do not induce STAT3 activation. (a) THP-1 cells were treated with 0.1 µM phorbol 12-myristate 13-acetate (PMA) for the indicated periods and then treated with lipopolysaccharide (LPS) for 1 or 4 h. Cells were treated with or without WP1066 for 2 h before lysis. Cell lysates were subjected to immunoblot (IB) analysis. (b) THP-1 cells were treated with 0.1 µM PMA for 24 h followed by treatment with 100 ng/mL LPS for 4 h and were further treated with 10 µg/mL R837 for the indicated periods. Cells were or were not treated with WP1066 30 min before treatment with R837. Cell lysates were subjected to IB analysis. (c) Differentiated THP-1 cells were pretreated with 10 µM WP1066 for 30 min and treated with 5 µM nigericin, 10 µg/mL R837, 50 µg/mL MSU, or low osmolarity (Low Osm) for 2 h. Cell lysates were subjected to IB analysis.

Fig. 4. WP1066 suppresses inflammasome agonist-induced cell death independently of its inhibitory effect on STAT3. (a) Differentiated THP-1 cells were pretreated with the indicated doses of ruxolitinib or WP1066 for 30 min and treated with 10 µg/mL R837 for 2 h. The culture supernatants (Sup) and cell lysates were subjected to immunoblot (IB) analysis. (b) Differentiated THP-1 cells were pretreated with the indicated doses of AG490 or WP1066 for 30 min and treated with 10 µg/mL R837 for 2 h. The culture supernatants (Sup) and cell lysates were subjected to IB analysis. (c) Differentiated THP-1 cells were pretreated with 10 µM WP1066, AG490 (AG), or ruxolitinib (RX) for 30 min and treated with 10 µg/mL R837 for 2 h. Cells were subjected to a propidium iodide (PI) assay. Data are shown as the mean ± SEM (n = 3). ***P < 0.001, Dunnett's multiple comparison test, compared with the cells treated with R837 but not with any inhibitors.

WP1066 induces the death of undifferentiated THP-1 cells. As described above, WP1066 has been shown to induce apoptosis in a variety of malignant cells. We thus examined whether WP1066 itself affected the viability of THP-1 cells. Whereas WP1066 exhibited no obvious toxicity on PMA-differentiated THP-1 cells, it clearly induced the death of undifferentiated THP-1 cells 6 h after stimulation (Fig. 5a). More prolonged treatment with WP1066 (for 12 h) still did not induce the death of differentiated THP-1 cells but strongly induced the death of undifferentiated THP-1 cells (Fig. 5b). These results suggest that the sensitivity of THP-1 cells to WP1066 largely depends on their differentiation state. Similar to the inhibitory effects on inflammasome agonist-induced cell death, neither AG490 nor ruxolitinib induced the death of undifferentiated THP-1 cells (Fig.
5a,b), and cell viability was not correlated with the phosphorylation state of STAT3 Y705 (Fig. 5c). Thus, the cell death-inducing activity of WP1066 may also not depend on its inhibitory effect on STAT3.

WP1066 induces the death of glioma cells independently of its inhibitory effect on STAT3. Given that WP1066 likely induces the death of undifferentiated THP-1 cells by targeting molecules other than those involved in JAK-STAT3 signaling, we examined whether the well-characterized cell death-inducing effects of WP1066 on glioma cells were indeed dependent on its inhibitory effect on STAT3. Whereas WP1066 consistently induced death and inhibited STAT3 P-Y705 in A172 glioma cells (Fig. 6a), it induced death but did not inhibit STAT3 P-Y705 in another glioma cell line, T98G (Fig. 6b). In addition, neither AG490 nor ruxolitinib, the latter of which completely suppressed STAT3 P-Y705, induced death in either cell line. These results suggest that, contrary to previous findings, WP1066 induces the death of glioma cells independently of its inhibitory effect on STAT3.

Discussion

In this study, we found that WP1066 strongly suppressed macrophage cell death and the extracellular release of IL-1β induced by NLRP3 inflammasome agonists in two models of macrophages, mouse primary PECs and PMA-differentiated human THP-1 cells. IL-1β is essential for an appropriate acute inflammatory response to various pathogens and injury, but its dysregulated excess release leads to sepsis and septic shock or to chronic inflammation. Thus, the NLRP3 inflammasome is a fascinating drug target for controlling IL-1β production. It has indeed been reported that a small-molecule inhibitor of the NLRP3 inflammasome is effective in treating various autoinflammatory and autoimmune diseases. However, considering that the extracellular release of IL-1β upon cell death does not totally depend on the activation state of the NLRP3 inflammasome, regulating cell death may be the ultimate way to control IL-1β production, particularly in the case of severe inflammatory conditions such as septic shock. In this regard, it would be interesting to see whether WP1066 could also effectively suppress IL-1β production in in vivo models of inflammation. Although cell death appears to be one of the potential ways to release IL-1β from cells, the mechanisms by which macrophage cell death is induced by various inflammatory stimuli, including NLRP3 inflammasome agonists, are still elusive. It has recently been reported that R837 activates the NLRP3 inflammasome by inducing robust reactive oxygen species through disturbing the quinone oxidoreductase NQO2 and mitochondrial complex I, but how R837 induces cell death has not been clarified. In our experiments, the caspase-1 inhibitor Ac-YVAD-CMK did not suppress the R837- or nigericin-induced death of THP-1 cells or the R837-induced death of PECs, suggesting that pyroptosis, which depends largely on caspase-1, is not a major type of cell death under these conditions. In fact, it has recently been reported that necrosis mostly contributes to IL-1β release, at least from THP-1 cells. Nevertheless, caspase-1 has been proposed to induce necrosis independently of its catalytic activity, which may indeed be induced in our experiments, particularly in the presence of Ac-YVAD-CMK (Fig. 2). Thus, we should further carefully examine the role of caspase-1 in macrophage cell death induced by various inflammatory stimuli.
Although we concluded in this study that WP1066 suppresses inflammasome agonist-induced macrophage cell death independently of its effect on JAK-STAT3 signaling, identification of the molecules targeted by WP1066 in stimulated macrophages will shed new light on the mechanism regulating inflammatory cell death in macrophages. The induction of death in THP-1 cells only in their undifferentiated state was an unexpected effect of WP1066. Interestingly, this cell death-promoting effect of WP1066 on undifferentiated THP-1 cells was also independent of its effect on JAK-STAT3 signaling. Although it is unknown at present whether the target molecules of WP1066 differ between undifferentiated and differentiated THP-1 cells, WP1066 may generally target molecules critical for the regulation of cell death regardless of cellular conditions. This difference in sensitivity to WP1066 between the two differentiation states of THP-1 cells suggests that the range of cell death-inducing stimuli differs depending on the differentiation state of the THP-1 cells. This might account for the mechanism by which activated macrophages are prone to inflammatory death when they should release intracellular cytokines such as IL-1β. WP1066 would also be a strong tool for exploring this issue. The finding that WP1066 induced death in the two glioma cell lines A172 and T98G apparently independently of the activation state of STAT3 was unexpected. Of course, we need to examine many more cell types, including those other than glioma cells, to confirm our conclusion. However, this appears to provide an important caution with regard to the clinical use of this compound. In this case, neither the phosphorylation level of STAT3 nor the activity of JAKs may be a suitable marker for the use of WP1066, at least in malignant glioma. Thus, identification of the WP1066 target molecules that are critical for cell death regulation would be a rather difficult but reliable approach to ensure the appropriate clinical use of this compound.

Fig. 6. WP1066 induces the death of glioma cells independently of its inhibitory effect on STAT3. A172 cells (a) and T98G cells (b) were treated with 10 µM WP1066 (WP), AG490 (AG), or ruxolitinib (RX) for 12 h. Cells were subjected to a propidium iodide (PI) assay (upper graphs), and cell lysates were subjected to immunoblot (IB) analysis (lower panels). Data from the PI assay are shown as the mean ± SEM (n = 3). ***P < 0.001, Dunnett's multiple comparison test, compared with the untreated cells.
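The figure legends report Dunnett's multiple comparison test against a single control group (mean ± SEM, n = 3). As a hedged illustration only (not the authors' actual analysis script), recent SciPy versions expose this test directly; the sketch below assumes SciPy ≥ 1.11 and uses placeholder numbers rather than data from the paper.

# Illustrative only: Dunnett's test comparing several treatment groups against
# one control, as reported in the figure legends. Requires SciPy >= 1.11; the
# values are placeholders, not data from the paper.
from scipy import stats

control = [42.0, 45.5, 40.8]        # e.g. % PI-positive cells, R837 alone (n = 3)
wp1066 = [6.1, 7.4, 5.9]            # R837 + WP1066
ag490 = [39.0, 44.2, 41.5]          # R837 + AG490
ruxolitinib = [40.3, 43.8, 42.9]    # R837 + ruxolitinib

result = stats.dunnett(wp1066, ag490, ruxolitinib, control=control)
print(result.statistic, result.pvalue)  # one statistic and p-value per treatment group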
from django.db import transaction
from tulius.forum.threads import models
from tulius.games import models as game_models
from tulius.stories import models as story_models
from tulius.gameforum import core
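# Summary comment: the tests below exercise access-rights handling in the game
# forum API. Fixtures such as `story`, `game`, `variation`, `variation_forum`,
# `admin`, `user`, `game_guest`, `detective` and `murderer` are assumed to be
# supplied by the surrounding pytest conftest (not shown here). Covered
# scenarios: thread visibility across game statuses, guest and admin grants,
# moderator rights, inherited vs. non-inherited room rights, and a regression
# test for a broken forum tree.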
def test_thread_with_wrong_variation(
story, game, admin, variation_forum):
variation = story_models.Variation(story=story, name='Variation2')
variation.save()
base_url = f'/api/game_forum/variation/{variation.pk}/'
response = admin.get(base_url + f'thread/{variation_forum.pk}/')
assert response.status_code == 403
def test_access_to_variation(variation, variation_forum, client, user):
response = client.get(variation_forum.get_absolute_url())
assert response.status_code == 403
response = user.get(variation_forum.get_absolute_url())
assert response.status_code == 403
def test_guest_access_to_game(game, variation_forum, admin, game_guest):
game.status = game_models.GAME_STATUS_IN_PROGRESS
with transaction.atomic():
game.save()
# create thread with "no read" and no role
response = admin.put(
variation_forum.get_absolute_url(), {
'title': 'thread', 'body': 'thread description',
'room': False, 'default_rights': models.NO_ACCESS,
'granted_rights': [],
'important': True, 'closed': True, 'media': {}})
assert response.status_code == 200
thread = response.json()
# check guest can read it
response = game_guest.get(thread['url'])
assert response.status_code == 200
data = response.json()
assert data['body'] == 'thread description'
# create thread with no rights specified. There was previously a bug that
# failed on this case
response = admin.put(
variation_forum.get_absolute_url(), {
'title': 'thread', 'body': 'thread description',
'room': False, 'default_rights': None,
'granted_rights': [],
'important': True, 'closed': True, 'media': {}})
assert response.status_code == 200
thread = response.json()
# check guest can read it
response = game_guest.get(thread['url'])
assert response.status_code == 200
def test_finishing_game_rights(
game, variation_forum, admin, user, detective, client):
game.status = game_models.GAME_STATUS_IN_PROGRESS
with transaction.atomic():
game.save()
# create thread with "no read" and no role
response = admin.put(
variation_forum.get_absolute_url(), {
'title': 'thread', 'body': 'thread description',
'room': False, 'default_rights': models.NO_ACCESS,
'granted_rights': [],
'important': False, 'media': {}})
assert response.status_code == 200
thread = response.json()
# create own user thread
response = user.put(
variation_forum.get_absolute_url(), {
'title': 'thread', 'body': 'thread description',
'room': False, 'default_rights': models.NO_ACCESS,
'granted_rights': [], 'role_id': detective.pk, 'media': {}})
assert response.status_code == 200
thread2 = response.json()
# check user can add comments
response = user.post(
thread2['url'] + 'comments_page/', {
'reply_id': thread2['first_comment_id'],
'title': 'Hello', 'body': 'my comment is awesome',
'media': {}, 'role_id': detective.pk,
})
assert response.status_code == 200
data = response.json()
assert len(data['comments']) == 2
# check user can't read first thread
response = user.get(thread['url'])
assert response.status_code == 403
# change game status
game.status = game_models.GAME_STATUS_FINISHING
with transaction.atomic():
game.save()
# check user still can write
response = user.post(
thread2['url'] + 'comments_page/', {
'reply_id': thread2['first_comment_id'],
'title': 'Hello', 'body': 'my comment is awesome',
'media': {}, 'role_id': detective.pk,
})
assert response.status_code == 200
data = response.json()
assert len(data['comments']) == 3
# And the thread is open now
response = user.get(thread['url'])
assert response.status_code == 200
# Finish game
game.status = game_models.GAME_STATUS_COMPLETED
with transaction.atomic():
game.save()
# check user can't write any more
response = user.post(
thread2['url'] + 'comments_page/', {
'reply_id': thread2['first_comment_id'],
'title': 'Hello', 'body': 'my comment is awesome',
'media': {}, 'role_id': detective.pk,
})
assert response.status_code == 403
# The thread is still open
response = user.get(thread['url'])
assert response.status_code == 200
# but still not for anonymous
response = client.get(thread['url'])
assert response.status_code == 403
# Open game
game.status = game_models.GAME_STATUS_COMPLETED_OPEN
with transaction.atomic():
game.save()
# now anyone can read
response = client.get(thread['url'])
assert response.status_code == 200
def test_grant_moderator_rights(game, variation_forum, admin, user, detective):
game.status = game_models.GAME_STATUS_IN_PROGRESS
with transaction.atomic():
game.save()
base_url = f'/api/game_forum/variation/{game.variation.pk}/'
# create thread with "no read" and no role
response = admin.put(
base_url + f'thread/{variation_forum.id}/', {
'title': 'thread', 'body': 'thread description',
'room': False, 'default_rights': models.ACCESS_OPEN,
'granted_rights': [], 'important': False, 'media': {}})
assert response.status_code == 200
thread = response.json()
# add a comment by admin
response = admin.post(
thread['url'] + 'comments_page/', {
'reply_id': thread['first_comment_id'],
'title': 'Hello', 'body': 'my comment is awesome',
'media': {}, 'role_id': detective.pk,
})
assert response.status_code == 200
data = response.json()
assert len(data['comments']) == 2
comment = data['comments'][1]
# check user can read the thread but can't edit the comment
response = user.get(thread['url'])
assert response.status_code == 200
data = response.json()
assert data['body'] == 'thread description'
response = user.post(comment['url'], {
'title': 'Hello', 'body': 'my comment is awesome2',
'media': {}, 'role_id': detective.pk})
assert response.status_code == 403
# grant moderate rights
response = admin.post(
thread['url'] + 'granted_rights/', {
'user': {'id': detective.pk},
'access_level': models.ACCESS_MODERATE
}
)
assert response.status_code == 200
# check we can update comment
response = user.post(comment['url'], {
'title': 'Hello', 'body': 'my comment is awesome2',
'media': {}, 'role_id': detective.pk, 'edit_role_id': detective.pk})
assert response.status_code == 200
data = response.json()
assert data['body'] == 'my comment is awesome2'
def test_chain_strict_read(
game, variation_forum, admin, user, detective, murderer):
game.status = game_models.GAME_STATUS_IN_PROGRESS
with transaction.atomic():
game.save()
# create room with read limits
response = admin.put(
variation_forum.get_absolute_url(), {
'title': 'room', 'body': 'room description',
'room': True, 'default_rights': models.NO_ACCESS,
'granted_rights': [], 'media': {}})
assert response.status_code == 200
room = response.json()
# create thread with "no read" and no role and detective grants
response = admin.put(
room['url'], {
'title': 'thread', 'body': 'thread description',
'room': False, 'default_rights': models.NO_ACCESS,
'granted_rights': [{
'user': {'id': detective.pk},
'access_level': models.ACCESS_READ
}], 'important': False, 'media': {}})
assert response.status_code == 200
thread = response.json()
# check user can read the thread because of the explicit grant, even with
# no access to the parent room.
response = user.get(thread['url'])
assert response.status_code == 200
# and the user doesn't see the room
response = user.get(variation_forum.get_absolute_url())
assert response.status_code == 200
data = response.json()
assert not data['rooms']
# but the admin sees it even while playing a role
murderer.user = admin.user
murderer.save()
response = admin.get(variation_forum.get_absolute_url())
assert response.status_code == 200
data = response.json()
assert len(data['rooms']) == 1
# grant read rights to room
response = admin.post(
room['url'] + 'granted_rights/', {
'user': {'id': detective.pk},
'access_level': models.ACCESS_READ
}
)
assert response.status_code == 200
# check thread now
response = user.get(thread['url'])
assert response.status_code == 200
data = response.json()
assert data['body'] == 'thread description'
# check root
response = user.get(variation_forum.get_absolute_url())
assert response.status_code == 200
data = response.json()
assert len(data['rooms']) == 1
# check room
response = user.get(room['url'])
assert response.status_code == 200
data = response.json()
assert data['threads'][0]['accessed_users'][0]['id'] == detective.pk
assert data['threads'][0]['accessed_users'][0]['title'] == detective.name
def test_chain_write_rights(game, variation_forum, admin, user, detective):
game.status = game_models.GAME_STATUS_IN_PROGRESS
with transaction.atomic():
game.save()
# create thread room with read limits
response = admin.put(
variation_forum.get_absolute_url(), {
'title': 'room', 'body': 'room description',
'room': True, 'default_rights': models.ACCESS_READ,
'granted_rights': [], 'media': {}})
assert response.status_code == 200
room = response.json()
# create middle room with "not set" default rights
response = admin.put(
room['url'], {
'title': 'room', 'body': 'room description',
'room': True, 'default_rights': None,
'granted_rights': [], 'media': {}})
assert response.status_code == 200
room2 = response.json()
# create thread with "no read" and no role and detective grants
response = admin.put(
room2['url'], {
'title': 'thread', 'body': 'thread description',
'room': False, 'default_rights': models.NO_ACCESS,
'granted_rights': [{
'user': {'id': detective.pk},
'access_level': models.ACCESS_READ
}], 'important': False, 'media': {}})
assert response.status_code == 200
thread = response.json()
# check user can read thread
response = user.get(thread['url'])
assert response.status_code == 200
# but can't write
response = user.post(
thread['url'] + 'comments_page/', {
'reply_id': thread['first_comment_id'],
'title': 'Hello', 'body': 'my comment is awesome',
'media': {}, 'role_id': detective.pk,
})
assert response.status_code == 403
# grant rights to middle room
response = admin.post(
room2['url'] + 'granted_rights/', {
'user': {'id': detective.pk},
'access_level': models.ACCESS_WRITE
}
)
assert response.status_code == 200
# and the user still can't write
response = user.post(
thread['url'] + 'comments_page/', {
'reply_id': thread['first_comment_id'],
'title': 'Hello', 'body': 'my comment is awesome',
'media': {}, 'role_id': detective.pk,
})
assert response.status_code == 403
def test_broken_tree_rights(game, variation_forum, admin):
game.status = game_models.GAME_STATUS_IN_PROGRESS
with transaction.atomic():
game.save()
base_url = f'/api/game_forum/variation/{game.variation.pk}/'
# create thread room with read limits
response = admin.put(
base_url + f'thread/{variation_forum.id}/', {
'title': 'room', 'body': 'room description',
'room': True, 'default_rights': models.ACCESS_READ,
'granted_rights': [], 'media': {}})
assert response.status_code == 200
thread = response.json()
# break forum tree
game.variation.thread = core.create_game_forum(admin.user, game.variation)
game.variation.save()
# now get the thread. Previously this caused a 500 on the tree rights check.
response = admin.get(thread['url'])
assert response.status_code == 200
def test_grant_rights_to_variation(variation, variation_forum, user):
response = user.get(variation_forum.get_absolute_url())
assert response.status_code == 403
# grant rights
admin = story_models.StoryAdmin(story=variation.story, user=user.user)
admin.save()
# check
response = user.get(variation_forum.get_absolute_url())
assert response.status_code == 200
# delete
admin.delete()
response = user.get(variation_forum.get_absolute_url())
assert response.status_code == 403
def test_grant_rights_to_game(game, variation_forum, user):
game.status = game_models.GAME_STATUS_IN_PROGRESS
with transaction.atomic():
game.save()
response = user.get(variation_forum.get_absolute_url())
assert response.status_code == 403
# grant rights
admin = game_models.GameAdmin(game=game, user=user.user)
admin.save()
# check
response = user.get(variation_forum.get_absolute_url())
assert response.status_code == 200
response = user.put(
variation_forum.get_absolute_url(), {
'title': 'room', 'body': 'room description',
'room': True, 'default_rights': models.ACCESS_READ,
'granted_rights': [], 'media': {}})
assert response.status_code == 200
# delete
admin.delete()
response = user.get(variation_forum.get_absolute_url())
assert response.status_code == 403
# grant guest rights
guest = game_models.GameGuest(game=game, user=user.user)
guest.save()
# check
response = user.get(variation_forum.get_absolute_url())
assert response.status_code == 200
response = user.put(
variation_forum.get_absolute_url(), {
'title': 'room', 'body': 'room description',
'room': True, 'default_rights': models.ACCESS_READ,
'granted_rights': [], 'media': {}})
assert response.status_code == 403
# delete
guest.delete()
response = user.get(variation_forum.get_absolute_url())
assert response.status_code == 403
def test_not_inherited_read_only_root(
game, variation_forum, user, admin, detective):
response = admin.put(
variation_forum.get_absolute_url() + 'granted_rights/',
{'default_rights': models.ACCESS_READ + models.ACCESS_NO_INHERIT})
assert response.status_code == 200
# create sub room
response = admin.put(
variation_forum.get_absolute_url(), {
'title': 'room', 'body': 'room description',
'room': True, 'default_rights': None,
'granted_rights': [], 'media': {}})
assert response.status_code == 200
room = response.json()
# start game. Reload game to update thread caches.
game = game_models.Game.objects.get(pk=game.pk)
game.status = game_models.GAME_STATUS_IN_PROGRESS
with transaction.atomic():
game.save()
# check user can read and can't write at root
response = user.get(variation_forum.get_absolute_url())
assert response.status_code == 200
response = user.put(
variation_forum.get_absolute_url(), {
'title': 'room', 'body': 'room description',
'room': True, 'default_rights': None, 'role_id': detective.pk,
'granted_rights': [], 'media': {}})
assert response.status_code == 403
# check we can read and write in sub room
response = user.get(room['url'])
assert response.status_code == 200
response = user.put(
room['url'], {
'title': 'room', 'body': 'room description',
'room': True, 'default_rights': None, 'role_id': detective.pk,
'granted_rights': [], 'media': {}})
assert response.status_code == 200
def test_not_inherited_read_only_room(
game, variation_forum, user, admin, detective):
# create sub room
response = admin.put(
variation_forum.get_absolute_url(), {
'title': 'room', 'body': 'room description',
'room': True,
'default_rights': models.ACCESS_READ + models.ACCESS_NO_INHERIT,
'granted_rights': [], 'media': {}})
assert response.status_code == 200
room1 = response.json()
# create sub sub room
response = admin.put(
variation_forum.get_absolute_url(), {
'title': 'room', 'body': 'room description',
'room': True, 'default_rights': None,
'granted_rights': [], 'media': {}})
assert response.status_code == 200
room2 = response.json()
# start game
game.status = game_models.GAME_STATUS_IN_PROGRESS
with transaction.atomic():
game.save()
# check we can read and write in root
response = user.get(variation_forum.get_absolute_url())
assert response.status_code == 200
response = user.put(
variation_forum.get_absolute_url(), {
'title': 'room', 'body': 'room description',
'room': True, 'default_rights': None, 'role_id': detective.pk,
'granted_rights': [], 'media': {}})
assert response.status_code == 200
# check user can read and can't write at room
response = user.get(room1['url'])
assert response.status_code == 200
response = user.put(
room1['url'], {
'title': 'room', 'body': 'room description',
'room': True, 'default_rights': None, 'role_id': detective.pk,
'granted_rights': [], 'media': {}})
assert response.status_code == 403
# check we can read and write in sub room
response = user.get(room2['url'])
assert response.status_code == 200
response = user.put(
room2['url'], {
'title': 'room', 'body': 'room description',
'room': True, 'default_rights': None, 'role_id': detective.pk,
'granted_rights': [], 'media': {}})
assert response.status_code == 200
def test_not_defined_rights_on_root(
game, variation_forum, user, admin, detective):
response = admin.put(
variation_forum.get_absolute_url() + 'granted_rights/',
{'default_rights': None})
assert response.status_code == 200
# start game. Reload game to update thread caches.
game = game_models.Game.objects.get(pk=game.pk)
game.status = game_models.GAME_STATUS_IN_PROGRESS
with transaction.atomic():
game.save()
# check user can read and write at root
response = user.get(variation_forum.get_absolute_url())
assert response.status_code == 200
response = user.put(
variation_forum.get_absolute_url(), {
'title': 'room', 'body': 'room description',
'room': True, 'default_rights': None, 'role_id': detective.pk,
'granted_rights': [], 'media': {}})
assert response.status_code == 200
def test_rights_override(game, variation_forum, user, admin, detective):
game.status = game_models.GAME_STATUS_IN_PROGRESS
with transaction.atomic():
game.save()
response = admin.put(
variation_forum.get_absolute_url(), {
'title': 'room', 'body': 'room description',
'room': True, 'default_rights': models.ACCESS_READ,
'role_id': None,
'granted_rights': [], 'media': {}})
assert response.status_code == 200
room1 = response.json()
response = admin.put(
room1['url'], {
'title': 'room2', 'body': 'room2 description',
'room': True, 'default_rights': models.ACCESS_OPEN,
'role_id': None,
'granted_rights': [], 'media': {}})
assert response.status_code == 200
room2 = response.json()
# check the user can't write in room1
response = user.put(
room1['url'], {
'title': 'thread', 'body': 'thread description',
'room': False, 'default_rights': None, 'role_id': detective.pk,
'granted_rights': [], 'media': {}})
assert response.status_code == 403
# check the user can write in room2
response = user.put(
room2['url'], {
'title': 'thread', 'body': 'thread description',
'room': False, 'default_rights': None, 'role_id': detective.pk,
'granted_rights': [], 'media': {}})
assert response.status_code == 200