The Essential Role of Empirical Analysis in Developing Law and Economics Theory

To serve as an effective basis for positive or normative analysis of law, theoretical law and economics must be predicated on models that accurately capture the essential characteristics of the decision-making environment, decision-makers' available choices, and individuals' decision-making processes. Empirical analysis of law plays a vital role in helping theoretical analysis achieve this goal. Empirical analyses contribute indirectly by testing the predictions of models; results contrary to theoretical predictions regularly prompt theorists to reexamine the decision-making environment. Empirical analyses contribute directly by providing evidence on the decision-making environment, available choice sets, or decision-makers' mental processes. This interaction of empirical analysis and theory has led theory toward increasing use of models predicated on incomplete information, incomplete contracting, and decision-making that may deviate from rational choice theory.
# py_libgit/py_libgit/core/refs.py
import logging
import os

import py_libgit.settings  # imported for its side effects (project-wide settings/logging)

logger = logging.getLogger(__name__)


class Refs:
    def __init__(self):
        logger.info('Create a Refs object')
        self.pwd = os.getcwd()

    def create_refs_dir(self, repo_name, bare_repo=False):
        '''Create the refs directory for holding the hash reference to commits

        Keyword arguments:
        repo_name -- the name of the repository
        bare_repo -- specify if this is a bare repo (default False)
        '''
        if bare_repo:
            refs_dir = os.path.join(self.pwd, repo_name, 'refs', 'heads')
        else:
            refs_dir = os.path.join(self.pwd, repo_name, '.git', 'refs', 'heads')
        # Directories need the execute bit to be traversable; mode 0o644 would
        # leave the new directories unusable, so use 0o755 instead.
        os.makedirs(refs_dir, mode=0o755)
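A minimal usage sketch, assuming the package is importable and the target directories do not yet exist; the repository names are hypothetical.

from py_libgit.core.refs import Refs

refs = Refs()
# Creates <cwd>/myrepo/.git/refs/heads for a normal (non-bare) repository.
refs.create_refs_dir('myrepo')
# Creates <cwd>/mybare/refs/heads for a bare repository.
refs.create_refs_dir('mybare', bare_repo=True)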
Trajectories of Criminal Behavior Across the Life Course

This chapter provides an overview of the developmental-typological theories of criminal behavior best known to criminologists. These theories aim to explain heterogeneity in intra-individual change in criminal behavior across the life course by postulating developmental trajectories that differ in terms of onset, variety and frequency of offending, criminal career duration, and period of desistance. The chapter outlines the most popular statistical methods for modeling developmental trajectories. It also summarizes the empirical findings from available developmental trajectory research. Overall, available longitudinal studies tend to support only some postulates of developmental-typological theories. This chapter thus proposes a number of directions for future research and provides some concluding insights.
Electron-Electron Interactions and Plasmon Dispersion in Graphene

Plasmons in two-dimensional electron systems with nonparabolic bands, such as graphene, feature a strong dependence on electron-electron interactions. We use a many-body approach to relate the plasmon dispersion at long wavelengths to the Landau Fermi-liquid interactions and the quasiparticle velocity. An identical renormalization is shown to arise for the magnetoplasmon resonance. For a model with $N \gg 1$ fermion species, this approach predicts a power-law dependence of plasmon frequency on carrier concentration, valid in a wide range of doping densities, both high and low. Gate tunability of plasmons in graphene can be exploited to directly probe the effects of electron-electron interaction.

Plasmonics has emerged recently as an active direction in graphene research. Surface plasmons in 2D electron systems are propagating charge-density waves in which the collective dynamics of clouds of charge is mediated by the electric field in 3D [6]. The dual matter-field nature of plasmons is a key ingredient for many interesting and important phenomena [7,8]. Plasmons in graphene display a range of potentially useful properties, such as low Ohmic losses, a high degree of field confinement, and gate tunability [9,10]. Gate tunability of plasmons in graphene was demonstrated recently [11,12]. The goal of this article is to investigate the density dependence of plasmons and relate it to the interaction effects in the electron system.

The dependence of plasmon dispersion on carrier density arises due to several effects. It takes on the simplest form in the limit of weak electron-electron interactions, $\omega^2(q) = (2e^2 E_F/\kappa\hbar^2)\, q$, where $E_F \propto n^{1/2}$ is the Fermi energy of noninteracting massless Dirac particles and $n$ is the carrier density. Here $\kappa$ is an effective dielectric constant of the substrate and the long-wavelength limit $q \ll p_F$ is assumed. Furthermore, the plasmon dispersion features a strong dependence on interactions. Renormalization of this dispersion relation due to electron-electron interactions was predicted in Ref. 3, where a perturbation expansion in a weak fine-structure parameter $\alpha = e^2/\hbar v$ was employed. The results of Ref. 3 point to an interesting possibility to directly probe the effects of interactions by measuring the plasmon dispersion relation. However, strong interactions in graphene, $\alpha \sim 2.5$, render the weak-coupling approximation unreliable.

Acknowledging the difficulty of modelling the strong-coupling regime, it is beneficial to adopt a somewhat more general approach. Rather than attempting to make predictions based on a specific microscopic model, one can ask whether a relation between the plasmon dispersion and some other fundamental characteristics of the system can be established. Below we point out that such a relation arises naturally from the Landau theory of Fermi liquids [13]. This theory affords a general, model-independent framework to describe systems of strongly interacting fermions at degeneracy. The effects of interactions are encoded in the Landau parameters, representing a "genetic code" of the Fermi liquid (FL). The parameter values can in principle be predicted from perturbation theory if interactions are weak. For systems with strong interactions, however, the most reliable way to obtain the Landau parameters is to use their relation with experimentally measurable quantities, such as compressibility, heat capacity, spin susceptibility, and the dispersion of collective excitations. Our many-body analysis upholds the conventional square-root dependence $\omega \propto q^{1/2}$.
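For orientation, the weak-coupling relation referred to above can be written out explicitly; this is a sketch reconstructed from the definitions given in the text (not a verbatim copy of the paper's numbered equation), with $\kappa$ the substrate dielectric constant and $N = 4$ flavors:

\[
\omega^2(q) = \frac{2 e^2 E_F}{\kappa \hbar^2}\, q,
\qquad
E_F = \hbar v \sqrt{\pi n} \;\propto\; n^{1/2},
\]

so that for noninteracting Dirac carriers $\omega(q) \propto n^{1/4} q^{1/2}$ in the long-wavelength limit $q \ll p_F$.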
We show that all the effects of interactions are accumulated in the prefactor $Y = (1+F_1)\,v$ of the dispersion relation, where $p_F$ is the Fermi momentum, $N = 4$ is the number of spin/valley flavors, and the long-wavelength limit $q \ll p_F$ is assumed. Here $F_1$ is the Landau interaction harmonic with $m = 1$, and $v$ is the Fermi velocity renormalized by interactions. The remaining prefactor has units of frequency and depends only on fundamental constants and on the carrier density via $p_F$. In some cases the dielectric constant may feature an essential $q$ dependence. In particular, when image charges arise due to conducting boundaries and gates, a simple model yields [14] $\kappa(q) = \tfrac{1}{2}\left(\kappa_1 + \kappa_2\coth(qd)\right)$, giving an acoustic plasmon dispersion $\omega \propto q$.

A magnetic field alters the behavior, turning the gapless plasmon mode into a gapped mode. The magnetoplasmon dispersion relation, obtained by adding the Lorentz force to the FL dynamics, involves $\omega_0(q)$, the plasmon dispersion at $B = 0$ given above, and $r_c$, the cyclotron radius. The dispersion relation becomes more complex at $qr_c \sim 1$ due to the presence of Bernstein modes [15]. The size of the gap at $q = 0$ scales linearly with $B$, with a density-dependent prefactor. Notably, the magnetoplasmon dependence on the interactions is described by the same combination $Y = (1 + F_1)v$ as that appearing in the zero-field dispersion.

The quantity $Y$ describes the interaction dependence of the plasmon dispersion. Measuring it as a function of carrier density can be used to determine the electron-electron interaction strength in the system. This behavior is in sharp departure from that for plasmons in two-dimensional systems with a parabolic band dispersion, where Galilean invariance leads to an identity for the Landau parameters, $(1 + F_1)\,v = v_0$ [13], where $v_0 = p_F/m$ is the Fermi velocity of noninteracting particles at the same density. As a result, the value $(1 + F_1)v$ is independent of interactions, leading to the "universal" long-wavelength plasmon dispersion in the parabolic case, $\omega_0^2(q) = \frac{2\pi e^2 n}{m\,\kappa(q)}\, q$. Similarly, at a finite magnetic field, Galilean invariance leads to a simple result for long-wavelength magnetoplasmons, $\omega_B^2(q) = \omega_0^2(q) + \omega_c^2$, where $m$ is the unrenormalized band mass, $\omega_c = eB/mc$ is the cyclotron frequency, and $qr_c \ll 1$. These dependences carry no information on quantum effects, nor on interactions.

The situation is quite different in systems with nonparabolic dispersion, such as graphene. The density dependence in $Y$ arises because the values $v$ and $F_1$ are renormalized in essentially different ways. As an illustration, we analyze the limit of a large number of spin/valley flavors, $N \gg 1$, using a renormalization group (RG) approach. In this case, as we will see, the dispersion acquires a power-law dependence on carrier density, $\omega^2(q) = A\, n^{(1-\eta)/2}\, q$. Here the exponent $\eta$ is identical to that found from one-loop RG for velocity renormalization, $\eta = 8/(\pi^2 N)$ [16-18], and the prefactor is $A \sim v_0\, (e^2/\hbar)\, a^{-\eta}$, with $a \approx 0.142$ nm the carbon spacing. For a noninteracting system, $\eta = 0$. Crucially, the power law $n^{(1-\eta)/2}$ describes the dependence on carrier density not only near charge neutrality but for all accessible $n$ values.

Measurements of the density dependence of a plasmon resonance in graphene ribbons were reported in Ref. 11. The observed dependence approximately follows the relation $\omega^2 \propto q$, with the prefactor exhibiting an approximately linear dependence on $n^{1/2}$. However, the limited range of densities in which the dispersion was measured, as well as possible corrections due to the finite width of the ribbons, made it challenging to distinguish between $\eta = 0$ and $\eta \neq 0$.
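Collecting the statements of the preceding paragraphs into formulas gives the following sketch; the overall numerical factor and the precise form of the renormalized cyclotron gap are inferred from the noninteracting limit and the scaling arguments quoted above, not transcribed from the paper:

\[
\omega^2(q) = \frac{N e^2 p_F}{2 \kappa \hbar^2}\,\underbrace{(1+F_1)\,v}_{Y}\; q,
\qquad
\omega_B^2(q) \approx \omega_0^2(q) + \omega_c^{*2},
\quad
\omega_c^{*} \sim \frac{e B}{p_F c}\, Y .
\]

With $v \propto p_F^{-\eta}$ and $p_F \propto n^{1/2}$, the prefactor scales as $n^{(1-\eta)/2}$, i.e. $\omega \propto n^{(1-\eta)/4} q^{1/2}$, which reduces to the noninteracting $n^{1/4}$ behavior for $\eta = 0$, $F_1 = 0$.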
An attempt to experimentally determine the RG scaling exponents directly from transport measurements was made recently in Ref. 19. In this work, a systematic variation of the period of quantum oscillations with carrier density was interpreted in terms of Fermi velocity renormalization, giving a value $\eta = 0.5$-$0.55$. This value is considerably larger than the one-loop RG result, $\eta = 8/(\pi^2 N) \approx 0.2$. This discrepancy is not yet understood. We note parenthetically that the interaction effects are not expected to vanish in graphene bilayer despite the parabolic character of its band dispersion. Electronic states in graphene bilayer are Dirac-like rather than Schrödinger-like, and hence do not admit a Galilean transformation. For plasmons in this material we therefore expect a behavior similar to that in materials with a nonparabolic band, described by the relations above.

I. MICROSCOPIC FERMI-LIQUID ANALYSIS

The goal of this section is to relate the plasmon dispersion to standard quantities such as the Landau FL interactions and the renormalized velocity. The analysis proceeds by standard steps, resumming the ladder contributions to the dynamical polarization function which account for the quasiparticle dynamics in Landau's FL framework. In doing so, we keep $\omega$ and $q$ small but finite, as appropriate for a plasmon dispersion analysis. This leads to a polarization response $\Pi(q,\omega) \sim q^2/\omega^2$, describing plasmon excitations in the low-frequency and long-wavelength domain, $\omega \ll E_F$, $q \ll p_F$.

Charge carriers in a graphene single layer are described by the Hamiltonian for $N = 4$ species of massless Dirac particles. In second-quantized representation the Hamiltonian reads
$H = v_0\sum_{p,i}\psi^{\dagger}_{p,i}\,\boldsymbol{\sigma}\cdot\mathbf{p}\,\psi_{p,i} + \tfrac{1}{2}\sum_{q,p,p',i,j} V(q)\,\psi^{\dagger}_{p+q,i}\psi^{\dagger}_{p'-q,j}\psi_{p',j}\psi_{p,i}$,
where $i, j = 1\ldots N$, $v_0 \approx 10^6$ m/s is the unrenormalized Fermi velocity, and $V(q) = 2\pi e^2/\kappa|q|$ is the Coulomb interaction, with $\kappa$ the dielectric constant describing screening by the substrate. Here $\psi_{p,i}$ is a two-component spinor describing the wave-function amplitude on the two sublattices of the graphene crystal lattice. The amplitudes associated with the two sublattices are usually referred to as pseudospin up and down components, with the (pseudo)spin-1/2 Pauli matrices $\boldsymbol{\sigma}$ acting on the (pseudo)spinors $\psi_{p,i}$.

Plasmons are collective excitations of 2D electrons coupled by the electric field in 3D. They can be described microscopically using the density correlation function $K(q,\omega)$, where $n_q(t) = \sum_{p,i}\psi^{\dagger}_{p,i}(t)\,\psi_{p+q,i}(t)$ are the Fourier harmonics of the total electron density. The quantity $K$ is expressed in a standard fashion [13] through a geometric series involving the polarization function $\Pi(q,\omega)$, defined as the irreducible density-density correlator, $K = \Pi/[1 - V(q)\Pi]$.

[Fig. 1: Resummed Feynman graphs for the polarization operator $\Pi(q,\omega)$. The non-quasiparticle contribution $\Pi_0(q,\omega)$ and the FL ladder $\sum_{n\geq 1}\Pi_n(q,\omega)$ are shown. Only the contributions $\Pi_1(q,\omega)$ and $\Pi_2(q,\omega)$ contribute to the low-energy plasmon dispersion; see text.]

Zeros of the dynamical screening function $\varepsilon(q,\omega) = 1 - V(q)\Pi(q,\omega)$ give the poles of $K$, defining the plasmon dispersion. To obtain the dispersion from the condition $\varepsilon(q,\omega) = 0$ we need input on $\Pi(q,\omega)$ from a microscopic approach. In the long-wavelength limit, $q \ll p_F$, $\omega \ll E_F$, the behavior of the quantity $\Pi(q,\omega)$ is dominated by excitations near the Fermi surface, which can be described in the FL framework. The microscopic approach used to justify the FL picture involves several standard steps. We start, as usual, by isolating a quasiparticle pole contribution to the electron Green's function, $G = G^{\rm (reg)} + G^{\rm (sing)}$. The first term is a regular part of the Green's function, behaving as a smooth function near the Fermi level.
The second term is a singular contribution describing quasiparticles, $G^{\rm (sing)}(\epsilon,\mathbf{p}) = \frac{Z}{\epsilon - \xi(p) + i\gamma}$. Here $Z$ is the quasiparticle residue, $\gamma$ is the quasiparticle decay rate, and $\xi(p) = v(p - p_F)$ is the quasiparticle energy dispersion, with $v$ the renormalized velocity. This general discussion can be specialized to the case of graphene as follows. The Green's function for electrons in graphene has a $2\times 2$ matrix pseudospin structure. By projecting onto the conduction and valence bands, it can be represented as $G = G_{>}P_{>} + G_{<}P_{<}$, where $P_{>(<)} = (1 \pm \boldsymbol{\sigma}\cdot\mathbf{e}_p)/2$ are projectors for the two bands (here $\mathbf{e}_p$ is a unit vector in the direction of momentum $\mathbf{p}$). The quasiparticle excitations with low energies, which govern the low-frequency and long-wavelength response, reside near the Fermi level. Without loss of generality, we assume n-type doping, so that the Fermi level lies in the upper band, $E_F > 0$. In this case, excitations from the lower band do not appear explicitly in the FL theory and lead only to a renormalization of various parameters, such as the effective interactions and the quasiparticle velocity. The quasiparticle pole therefore arises only from the upper-band contribution $G_{>}$, whereas the lower-band contribution $G_{<}$ can be absorbed into the regular part $G^{\rm (reg)}$. Below, the subscripts $>$ and $<$ will be omitted for brevity.

The next step, which is key for understanding the role of low-energy excitations, is the analysis of the polarization function $\Pi(q,\omega)$ at small $\omega$ and $q$. This is done by identifying the contributions due to pairs of Green's functions with proximal poles (the "dangerous" two-particle cross-sections) [13], whose product $G^{\rm (sing)}G^{\rm (sing)} \sim Z^2$ is sharply peaked when both poles approach the Fermi surface. One can represent $\Pi(q,\omega)$ as a sum of terms $\Pi_n(q,\omega)$ containing different numbers of such contributions. The corresponding graphs are shown in Fig. 1. Here we introduced the so-called quasiparticle-irreducible quantities: the renormalized scalar vertex $T$ and the two-particle scattering vertex $\Gamma$ (see Fig. 2 b,c). These quantities absorb all non-quasiparticle contributions in the upper band as well as the interband processes and the contribution of the states in the lower band. We recall that the quasiparticle-irreducible quantities are distinct from the conventional irreducible quantities, defined as sums of Feynman graphs that cannot be split in two by removing two electron lines [13]. For example, the quasiparticle-irreducible vertex is obtained by summing all kinds of graphs except the ones with dangerous cross-sections. The vertex $\Gamma$ ($T$) can be obtained from the conventional irreducible vertex $\Gamma_0$ ($T_0$) by the resummation procedure pictured in Fig. 2, where the hatched blocks represent contributions due to pairs of Green's functions save for $G^{\rm (sing)}G^{\rm (sing)}$.

To analyze the dependence on $\omega$ and $q$ in the long-wavelength limit, caution must be exercised by employing the quantities $T$ and $\Gamma$ taken at small but nonzero frequency and momentum values. We therefore adopt an approach similar to that used in Ref. 20: our quasiparticle-irreducible quantities correspond to Luttinger's quantities taken at finite $\omega$ and $q$. They are distinct from the conventional quantities [13] obtained in the limit $\omega, q \to 0$ ($\omega \gg v_F q$). This distinction, however, turns out to be inessential: Luttinger's quantities reproduce the conventional ones in the limit $\omega, q \to 0$, which can be taken in arbitrary order since the dangerous cross-sections were left out of the definition of $T$ and $\Gamma$. Proceeding with the analysis, we note that the dependence on $\omega$ and $q$ is very different for $\Pi_0(q,\omega)$ and $\Pi_{n\geq 1}(q,\omega)$. We will first analyze the contribution $\Pi_0(q,\omega)$.
This quantity does not contain dangerous cross-sections, which could generate a nonanalytic behavior at small $\omega$ and $q$.

[Fig. 2: Feynman graphs for the "quasiparticle-irreducible" quantities $T$, $\Gamma$. The bold lines represent the full Green's function, the thin lines represent the singular part $G^{\rm (sing)}$. The vertex $\Gamma_0$ represents the conventional irreducible vertex, whereas the hatched blocks represent products of two Green's functions with the contributions $G^{\rm (sing)}G^{\rm (sing)}$, which give a non-analytic behavior at small $\omega$, $q$, taken out (see text).]

Taking $\Pi_0(q,\omega)$ to be analytic, we can represent it as $\Pi_0(q,\omega) = A(\omega) + (q/p_F)^2 B(\omega) + O(q^4)$, where $A(\omega)$ and $B(\omega)$ are regular functions. Further, we recall that gauge invariance prohibits any physical response to a spatially uniform time-dependent scalar field. Applying this to the full polarization function, we see that setting $q = 0$ yields $\Pi(\omega, q{=}0) = 0$. Also, since the contributions of the dangerous cross-sections $GG$ vanish at $q = 0$, all the quantities $\Pi_{n\geq 1}(q,\omega)$ do so. We therefore conclude that the function $A(\omega)$ vanishes, leaving us with $\Pi_0(q,\omega) = (q/p_F)^2 B(\omega) + O(q^4)$. This gives an effective $q$-dependent permittivity $\tilde\varepsilon(q,\omega) = 1 - V(q)\Pi_0(q,\omega)$. The second term may henceforth be ignored in the long-wavelength limit. Indeed, since $V(q) = 2\pi e^2/\kappa q$ in 2D, whereas $\Pi_0 \propto q^2$, the quantity $\tilde\varepsilon$ equals unity in the limit $q/p_F \to 0$.

It is instructive to compare this behavior of $\tilde\varepsilon(q,\omega)$ with that arising for $q \gg p_F$. In this case the effects of finite doping are negligible and we can estimate the polarization function using the result obtained for massless Dirac particles at zero doping, $\Pi(q,\omega) = -\frac{N}{16}\frac{q^2}{\sqrt{v^2q^2 + \omega^2}}$, where $N = 4$ is the number of spin/valley flavors. In the limit $\omega \ll qv$, we obtain a well-known renormalized permittivity $\varepsilon(q,\omega) = 1 + \frac{\pi N}{8}\alpha$, with $\alpha = e^2/\kappa\hbar v$. This $q$-independent expression describes the effect of interband polarization in undoped graphene. We stress, however, that while $\varepsilon > 1$, the above expression is obtained for $q$ and $\omega$ values which are not relevant for plasmon excitations. This is so because plasmons do not exist for such $(q,\omega)$, as the plasmon dispersion terminates for $q \gtrsim p_F$. In contrast, the permittivity evaluated in the long-wavelength limit relevant for plasmons, $q \ll p_F$, $\omega \ll E_F$, equals unity.

Next, we proceed with the analysis of the remaining terms, $\Pi_{n\geq 1}(q,\omega)$, which give the leading contribution to the low-energy plasmon dispersion. This is the part of the polarization which depends on the quasiparticle contributions. The corresponding Feynman graphs are given by a ladder with rungs consisting of two quasiparticle lines separated by vertex parts, as shown in Fig. 1. This gives a geometric series that can easily be summed up in terms of an integral operator $\hat F$ acting on functions on the Fermi surface. Here $\nu = p_F/(2\pi\hbar^2 v)$ is the density of states per flavor and $\theta$ ($\theta'$) is the angle between $\mathbf{p}$ ($\mathbf{p}'$) and $\mathbf{q}$. For zero external momentum, $q \to 0$, the kernel of the operator $\hat F$ depends only on the angle between $\mathbf{p}$ and $\mathbf{p}'$. In what follows, we will need the quantity $\Gamma(0, 0, \theta, \theta') \equiv \Gamma(\theta - \theta')$.

The scalar vertex $T$ takes on a simple form on the Fermi surface. For small external frequency and momentum values, $\omega, v_F q \ll E_F$, the vertex can be decomposed as $T(\omega, q) = T_0 + T_1\,\omega + T_2\, q + \ldots$, where $T_0 = Z^{-1}$ by virtue of the Ward identity [13]. The linear terms are potentially relevant, if judged by power counting. However, these contributions drop out, because for external frequency $\omega$ and momentum $q$ the expressions in question contain both $T(\omega, q, \theta)$ and $T(-\omega, -q, \theta)$. This leads to a cancellation of the terms linear in $\omega$ and $q$. Continuing with the analysis, we note that only the first two terms of the series are relevant for long-wavelength plasmons with $v_F q \ll \omega \ll E_F$.
Anticipating the square-root dependence of plasmon frequency on wavenumber, we expand in $qv/\omega$ to obtain the leading long-wavelength contributions. The terms $\Pi_{n\geq 3}$, expanded in $qv/\omega$, yield contributions which are higher order in $q$. The same is true for contributions arising from expanding $T$, $\Gamma$ in powers of $q$ and $\omega$ (with the exception of the potentially relevant linear terms $T_1$, $T_2$, which merely cancel out). These terms are therefore not essential in the long-wavelength limit. Combining all the above results for $\Pi_0$ and $\Pi_{n\geq 1}$, we find the long-wavelength asymptotic behavior of the net polarization function, $\Pi(q,\omega) \approx \frac{N\nu\,(1+F_1)\,v^2 q^2}{2\,\omega^2}$. The quantity $F_1$ also gives the eigenvalue of the integral operator $\hat F$ corresponding to the eigenfunctions $\cos\theta$ and $\sin\theta$. We can therefore write $\hat F\cos\theta = F_1\cos\theta$, with $F_1$ the corresponding Fourier harmonic of the operator kernel. The plasmon dispersion can now be obtained from the relation $1 - V(q)\Pi(q,\omega) = 0$, giving the dispersion quoted in the introduction. The effects of interaction are encoded in the quantity $Y = (1+F_1)v$, which equals the Fermi velocity $v_0$ in the absence of interactions and is renormalized to a different value in an interacting system.

We note a difference between the quantities $F_m$ used in the FL literature [13] and those used here, which is manifest in their sign. The difference arises due to the long-range character of the $1/r$ interaction. In our case the density-density interaction $F(\theta - \theta')$ accounts for the effects due to exchange correlation but not for the Hartree effects. The Hartree contribution is expressed through the $1/r$ interaction taken at the plasmon momentum $q$, corresponding to the Feynman graphs which can be disconnected by cutting a single interaction line. These contributions are incorporated in the dynamically screened interaction and hence not included in the definition of $F$ above. In contrast, for Fermi liquids with short-range interactions, the Landau interactions describing the density-density response are dominated by the Hartree effects. As a result, they have positive sign for weak repulsive interactions. Our $F_m$, in contrast, are negative, since they are dominated by exchange effects. In particular, we expect $F_1 < 0$. The negative sign, expected from this general reasoning, is also borne out by a microscopic analysis at weak coupling, see below.

We also note an interesting analogy between the approach developed in this section and the analysis of superconducting Fermi liquids by Larkin and Migdal [21] and by Leggett [22]. Refs. 21 and 22 were concerned with the Fermi-liquid renormalization of quantities such as the superfluid density in a metal with BCS pairing. Their analysis focused on the current correlation function which determines the response of the current to a vector potential, and followed steps similar to the above discussion of $\Pi(q,\omega)$. The renormalization effects were expressed through a combination of FL parameters, featuring a cancellation for a system with a parabolic band.

II. DENSITY DEPENDENCE FROM ONE-LOOP RG

In this section we derive the plasmon dispersion for a simple model describing strongly interacting Dirac particles. This is done by employing the renormalization group analysis developed in Refs. 16-18. We treat the two-body scattering vertex by accounting for dynamical screening of the Coulomb interaction in the random-phase approximation (RPA), $U(q,\omega) = V(q)/\varepsilon(q,\omega)$. Here the quantity $\varepsilon(q,\omega)$, which describes dynamical screening, is identical to that introduced in the above discussion of the dynamical density correlator.
Here $\Pi(q,\omega)$ is the polarization function, with band indices $\{s, s'\} = \pm$ and coherence factors $|F_{k,k'}|^2 = |\langle k', s'|k, s\rangle|^2$ describing the overlap of different pseudospin states. The polarization function is a sum of interband and intraband contributions, $\Pi = \Pi_1 + \Pi_2$, described by $s \neq s'$ and $s = s'$, respectively. The factor $\varepsilon(q,\omega)$ describes the effect of intrinsic screening in graphene arising due to both the interband and intraband polarization. For undoped graphene, only interband transitions contribute, giving $\Pi_1(q,\omega) = -\frac{N}{16}\frac{q^2}{\sqrt{v^2q^2 + \omega^2}}$. This expression is sufficient for our RG analysis.

The full RG analysis of the log-divergent corrections to the Green's functions and vertices was performed in Refs. 16-18. Below we use the results of the one-loop RG calculation for large $N$. The RG flow for the quasiparticle velocity takes the form $dv/d\xi = \eta\, v$, where $\xi = \ln(p_0/p)$ is the RG time parameter (here the UV cutoff is set by the interatomic spacing of the graphene lattice, $p_0 \sim a^{-1}$). This gives a power-law dependence $v(p) = v_0\,(p_0/p)^{\eta}$. For $N = 4$ we find $\eta \approx 0.2$. This value is obtained from a one-loop RG which employs $1/N$ as a small parameter. The results for $N \sim 1$ are qualitatively similar; however, the mathematical expressions are more cumbersome. Acknowledging the approximate character of the scaling dimensions obtained from one-loop RG, we shall leave the exponent $\eta$ unspecified in the analytic expressions.

In the case of interest (doped graphene) the interband contribution $\Pi_1$ follows the above dependence for large momenta and frequencies, $|q| \gg p_F$, $\omega \gg E_F$, which dominate the RG flow. The intraband contribution $\Pi_2$ is much smaller than $\Pi_1$ at such $q$ and $\omega$, with the two contributions becoming comparable for $|q| \sim p_F$, $\omega \sim E_F$. In the static limit, $\omega \ll E_F$, the polarization is dominated by the $\Pi_2$ contribution. In the range $q < 2p_F$, which is where we will need it below, it is identical to that for two-dimensional systems with a parabolic band, $\Pi_2(q, 0) = -N\nu$.

We can obtain the two-particle scattering vertex by taking the interaction on the Fermi surface, $q \sim p_F$, $\omega \ll E_F$. This gives $\Gamma(\theta - \theta') = -\,T^2\, U_{\Delta p}\, g_{p,p'}$, where $g_{p,p'} = |\langle p'|p\rangle|^2 = \cos^2(\Delta\theta/2)$ is the coherence factor describing the overlap of the (pseudo)spinors of quasiparticles at different points of the Fermi surface (here $\Delta\theta = \theta - \theta'$). The minus sign arises because this expression represents a contribution from the exchange part of the two-particle vertex [13]. The FL interaction can now be obtained from its relation with the vertex, $F(\theta - \theta') = \nu Z^2\,\Gamma(\theta - \theta')$. In the large-$N$ limit, the static RPA-screened interaction can be approximated as $U_{q,\omega} \approx -1/\Pi(q,\omega) = 1/(N\nu)$, where we take into account that $|\Delta p| < 2p_F$. Both $Z$ and $T$ flow under RG; however, their product remains equal to unity because of the Ward identity. As a result, the FL interactions do not undergo a power-law renormalization. Starting from $Z(p)T(p) = 1$, where both $Z(p)$ and $T(p)$ are given by power laws drawn from the RG, we set $p = p_F$. We therefore conclude that, up to a remnant dependence on $p_F$ which may arise in the angular dependence through the ratio $T(\Delta p)/T(p_F)$, the function $F$ does not flow under RG. The function $F(\theta - \theta')$ is essentially independent of doping, whereas the velocity has a power-law dependence on doping, $v \propto p_F^{-\eta}$, given by the RG result at $p = p_F$. Combining these results, we find a power-law dependence for the plasmon dispersion, $\omega^2(q) \propto n^{(1-\eta)/2}\, q$. This result is valid for plasmons with long wavelengths, $q \ll p_F$.
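The chain of results in this section can be summarized compactly; the sketch below restates the relations quoted above with order-one prefactors suppressed:

\[
\frac{dv}{d\xi} = \eta\, v, \quad \eta = \frac{8}{\pi^2 N}, \quad \xi = \ln\frac{p_0}{p}
\;\;\Longrightarrow\;\;
v(p_F) = v_0\left(\frac{p_0}{p_F}\right)^{\eta},
\]

while $F_1$ does not flow under RG; hence

\[
\omega^2(q) \propto (1+F_1)\, v(p_F)\, p_F\, q \;\propto\; p_F^{\,1-\eta}\, q \;\propto\; n^{(1-\eta)/2}\, q .
\]

For $N = 4$ this gives $\eta = 8/(4\pi^2) \approx 0.2$, the value quoted above.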
The predicted power-law dependence holds in a wide range of carrier densities, both large and small, except very near the neutrality point, where spatial inhomogeneity and thermal broadening play a role. To conclude, the plasmon renormalization results from a competition of two effects: plasmons tend to stiffen due to the RG enhancement of the velocity, and to soften due to the negative sign of $F_1$. However, since $F_1$ does not flow under RG whereas the velocity $v$ does, the net effect of interactions is to stiffen the plasmon dispersion. The predicted dependence of the prefactor, $\omega^2/q \propto n^{(1-\eta)/2}$, can be used for extracting the exponent $\eta$ from measurement results.

III. MAGNETOPLASMON IN A FERMI LIQUID

Below we analyze the plasmon dispersion using the FL transport equations. We will first deal with plasmons in the absence of a magnetic field, then proceed to add a $B$ field. Some of the relevant quantities, such as the Landau FL interaction $F(\theta - \theta')$, have already been introduced and analyzed; here we discuss them again to make the connection with the microscopic derivation in Sec. I. In a semiclassical picture, the main effect dominating the Fermi-liquid behavior is forward scattering, wherein the whole system of interacting particles acts as a refractive medium in which the quasiparticle energy is a function of the occupancies of the other particles. This is described by the so-called Landau functional [13], in which the quasiparticle energy is $\epsilon(\mathbf{p}) = \epsilon_0(\mathbf{p}) + \sum_{\mathbf{p}'} f(\mathbf{p},\mathbf{p}')\,\delta n(\mathbf{p}', \mathbf{r}, t)$, where $\delta n(\mathbf{r}, t)$ accounts for the deviation of the quasiparticle distribution from equilibrium. Since the deviation from equilibrium occurs in a narrow band of states near the Fermi surface, it is convenient to write the Landau functional by setting $|\mathbf{p}| = |\mathbf{p}'| = p_F$ and parameterizing the Fermi surface by a unit vector $\mathbf{n} = \mathbf{p}/|\mathbf{p}|$. Introducing the dimensionless Landau interaction $F(\mathbf{p}, \mathbf{p}') = \nu f(\mathbf{p}, \mathbf{p}')$, where $\nu = p_F/(2\pi\hbar^2 v)$ is the density of states per flavor, we write the functional in terms of quantities on the Fermi surface. Here $\epsilon_0(p) = v(p - p_F)$ is the linearized quasiparticle energy, the angle $\theta$ describes the orientation of $\mathbf{p}$, and $\delta n(\theta)$ is obtained by integrating $\delta n(\mathbf{p})$ along the Fermi-surface normal.

The expression can be treated as a Hamiltonian of one quasiparticle moving in a self-consistent field of the other quasiparticles. Equations of motion can then be obtained from the Hamiltonian formalism via $\partial_t n = \{H, n\}$. This gives a kinetic equation in which $\hat F$ is the integral operator defined in Sec. I. In a system with rotational symmetry, such as graphene and 2DEGs, the functional $F$ depends only on the angle between $\mathbf{p}$ and $\mathbf{p}'$: $F(\mathbf{p},\mathbf{p}') = F(\theta - \theta')$. This expression defines a Hermitian operator in the space of functions on the Fermi surface with the inner product $\langle f|g\rangle = \int \frac{d\theta}{2\pi}\, f^*(\theta)\, g(\theta)$. The eigenvalues of $\hat F$ are simply given by the Fourier coefficients $F_m = \int \frac{d\theta}{2\pi}\, F(\theta)\, e^{-im\theta}$. The quantities $F_m$ parametrize the FL interactions of a 2D system.

To describe plasmons, we add to the kinetic equation a long-range electric field arising due to the oscillating charge density. Here the sum is taken over the $N$ spin/valley flavors, and the dielectric constant $\kappa$ accounts for screening by the substrate. Performing a Fourier transform, $\delta n(\theta, \mathbf{r}, t) = \int \frac{d\omega\, d^2q}{(2\pi)^3}\, \delta n_{\omega, q}(\theta)\, e^{-i\omega t + i\mathbf{q}\cdot\mathbf{r}}$, we arrive at an eigenvalue equation of a form identical to that found in Sec. I by analyzing the poles of the dynamical screening function. The quantity $\Pi(q,\omega)$ is identical to that found above by summation of the FL-type ladder graphs, with the trace taken with respect to the inner product defined above. The plasmon dispersion in the long-wavelength limit can be found by expanding in the ratio $v|q|/\omega$. We obtain $\Pi(q,\omega) \approx \frac{N\nu}{\omega^2}\,\langle \mathbf{q}\cdot\mathbf{v}\,|(1+\hat F)|\,\mathbf{q}\cdot\mathbf{v}\rangle = \frac{N\nu\, q^2 v^2}{\omega^2}\,\langle\cos\theta\,|(1+\hat F)|\,\cos\theta\rangle$. Expressing the angle-averaged quantity through the Fourier coefficient $F_1$ and using the relation $\nu = p_F/(2\pi\hbar^2 v)$, we rewrite this result in terms of the combination $(1+F_1)\,v\,p_F$. Plugging this into the plasmon condition $1 - V(q)\Pi(q,\omega) = 0$
and restoring Planck's constant, we obtain the same expression for the plasmon dispersion as derived in Sec. I. This analysis can easily be generalized to a system in the presence of an external magnetic field. This is done by accounting for the Lorentz force in the $\nabla_p n$ term, where the velocity $\mathbf{v} = \nabla_p\,\epsilon(\mathbf{p}, \delta n)$ includes the contributions accounting for the distribution-function change $\delta n(\mathbf{p})$. This equation can be linearized as above, $n(\mathbf{p}, \mathbf{r}, t) = n_0(\mathbf{p}) + \delta n(\mathbf{p}, \mathbf{r}, t)$. In doing so, particular caution must be taken with the Lorentz-force term, since it is affected by the FL interactions. Accounting for the term in the velocity that depends on $\delta n(\mathbf{p})$, we write $\mathbf{v}(n_0 + \delta n) = \nabla_p\,\epsilon(n_0 + \delta n)$; here we used the Landau functional, performing an integration by parts in the last term. Terms linear in $\delta n$ can arise both from $\nabla_p n_0$ and from the interaction correction to the velocity. Taking a solution in the plane-wave form $\delta n(\mathbf{r}, \mathbf{p}, t) \propto e^{-i\omega t + i\mathbf{q}\cdot\mathbf{r}}\,\nabla_{\epsilon} n_0(p)\, f(\theta)$, where $\theta$ is the angle between $\mathbf{q}$ and $\mathbf{v}$, we obtain an equation that can be simplified into an eigenvalue problem with $\omega$ as the spectral parameter and $f(\theta)$ an eigenfunction. Inverting the operator on the left-hand side gives a self-consistency equation, in which $(1 + \ldots)^{-1}$ is shorthand for the operator inverse. The magnetoplasmon dispersion can be obtained via perturbation theory in the parameter $qv/\omega \ll 1$. This analysis ignores the Bernstein modes, which appear for $qr_c \sim 1$, where $r_c$ is the cyclotron radius [15]. The validity of the result is therefore limited to long wavelengths, $qr_c \ll 1$. Using the notation $Y = (1+F_1)v$, we arrive at the magnetoplasmon dispersion quoted in the introduction. The magnetoplasmon dependence on interactions is therefore described by the parameter $Y$, identical to that found for plasmons at $B = 0$. As discussed above, the density dependence of the quantities $v$ and $1 + F_1$ can be linked to their flow under RG. The power-law RG flow of the velocity leads to a stiffening of the plasmon dispersion, which overwhelms the effect of softening due to the negative sign of $F_1$.

IV. COMPARISON TO SYSTEMS WITH PARABOLIC BAND DISPERSION

To put the above results in perspective, we recall some important aspects of long-wavelength plasmons in 2D electron systems with a parabolic band. Such plasmons afford a simple description in terms of classical equations of motion for collective "center-of-mass" variables describing the oscillating charge density [6]. The result is expressed in a general form through the unrenormalized band mass and the electron interaction as $\omega_0^2(q) = \frac{n\, q^2\, V(q)}{m}$, where $n$ is the carrier density and $V(q) = 2\pi e^2/\kappa|q|$ for two-dimensional systems. An identical result is found for the quantum problem, since the Heisenberg evolution generates classical equations of motion for the operators corresponding to the center-of-mass variables describing the collective charge dynamics.

The absence of renormalization of the plasmon dispersion can be linked to Galilean invariance. In quantum systems, Galilean invariance is a symmetry of the Hamiltonian generated by the transformation $x' = x + vt$, $t' = t$. This symmetry, which holds for any system with parabolic band dispersion and instantaneous interactions, ensures a complete cancellation of the effects of interaction, rendering the plasmon dispersion unrenormalized. As discussed above, the cancellation of the Fermi-liquid corrections follows from the Fermi-liquid identity [13] which relates the renormalized velocity to the quantity $F_1$, namely $(1 + F_1)\,v = v_0$, where $v_0 = p_F/m$ is the Fermi velocity for noninteracting particles. Crucially, the validity of this identity depends on the band structure being parabolic on the scales $\gtrsim E_F$ and $\sim E_F$ which determine the FL interactions.
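For contrast, the parabolic-band cancellation mentioned above can be spelled out as a short worked equation (a sketch; the flavor count $N$ and the relation $n = N p_F^2/4\pi\hbar^2$ are standard 2D conventions rather than expressions quoted from the paper):

\[
(1+F_1)\, v = v_0 = \frac{p_F}{m}
\;\;\Longrightarrow\;\;
\omega_0^2(q) = \frac{N e^2 p_F}{2\kappa\hbar^2}\,(1+F_1)\, v\, q
= \frac{N e^2 p_F^2}{2\kappa\hbar^2 m}\, q
= \frac{2\pi e^2 n}{\kappa m}\, q ,
\]

so every trace of the interactions drops out, in agreement with the classical center-of-mass result.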
The relation between the unrenormalized plasmon dispersion and Galilean symmetry also holds in the presence of a magnetic field, wherein the gapless plasmons turn into gapped magnetoplasmons. The magnetoplasmon dispersion is $\omega_B^2(q) = \omega_0^2(q) + \omega_c^2$, where $\omega_0(q)$ is the zero-field dispersion given above and $\omega_c = eB/mc$ is the unrenormalized cyclotron frequency. In this case, the absence of renormalization is guaranteed by Kohn's theorem [23]. Kohn's theorem is established by treating the collective charge dynamics in a magnetic field using the center-of-mass variables, in complete analogy with the derivation of the zero-field result. Because of Galilean invariance, the Heisenberg equations of motion for the center-of-mass variables obey classical dynamics with the unrenormalized cyclotron frequency. Unrenormalized plasmon dispersion also arises in other space dimensions, with $V(q) = 4\pi e^2/\kappa q^2$ for 3D systems and $V(q) = \frac{2e^2}{\kappa}\ln\frac{1}{|q|a}$ for 1D systems. In the latter case, the plasmon dispersion matches that of the charge modes in one-dimensional Luttinger liquids. We stress that, in a general Luttinger-liquid framework, the effective interaction for 1D plasmons is distinct from the bare interaction. Nevertheless, due to Galilean invariance, the plasmon dispersion in a 1D system with a parabolic band is expressed through the unrenormalized bare interaction. As noted above, what matters here is the character of the overall band structure rather than the linear dispersion of a system linearized near the Fermi points.

In contrast to systems with parabolic dispersion, plasmons in graphene are sensitive to interactions. This is so because Galilean invariance is not a symmetry for particles with linear dispersion, and hence the absence of renormalization is not guaranteed by any general principle. As a result, plasmons in graphene feature a nontrivial dependence on interactions. As we have seen above, the plasmon dispersion is expressed through the Fermi velocity $v$, which is renormalized by the interaction effects, and also through the FL interaction via the factor $1 + F_1$. We parenthetically note that the electronic spectrum of graphene bilayer, while featuring a parabolic band dispersion, does not obey Galilean invariance. We therefore expect the plasmon dispersion in a bilayer to exhibit a full-fledged Fermi-liquid renormalization, similar to graphene monolayer.

To summarize, the renormalization of electron properties due to interactions results in a nonclassical dependence of the plasmon frequency on carrier density. Using a nonperturbative approach based on the FL theory, we show that the plasmon dispersion can be expressed through the Landau FL interactions. Measurements of the plasmon resonance can therefore be used to extract the interaction parameters in a model-free way, which is particularly useful for studying strongly interacting systems such as graphene. Our results indicate a significant deviation from the $n^{1/4}$ power-law dependence predicted for weakly interacting electrons in previous work. The density dependence predicted by our approach derives from the RG flow of the quantity $Y = (1+F_1)v$, where the RG "time" parameter tracks the Fermi momentum. As an illustration, we consider the RG for a large number of fermion flavors, which yields a power law of the form $\omega \propto n^{(1-\eta)/4}$, $\eta > 0$. The density dependence of the plasmon resonance can therefore provide a direct, model-free probe of the RG theory of interaction effects in graphene.

We thank A. V. Chaplik, M. I. Dyakonov, F. H. L. Koppens, I. V. Kukushkin and M. Yu. Reizer for useful discussions.
This work was supported in part under the MIT Skoltech Initiative, a collaboration between the Skolkovo Institute of Science and Technology (Skoltech), the Skolkovo foundation, and the Massachusetts Institute of Technology.
package cn.fty1.javase.thread;

public class SimpleThread {

    int i = 10;

    public static void main(String[] args) {
        SimpleThread simpleThread = new SimpleThread();
        for (int i = 0; i < 3000; i++) {
            // Both runnables share the same SimpleThread instance. They are invoked
            // with run() on the main thread (no new threads are started), so the
            // decrement in getNextId() is never actually exercised concurrently.
            Runnable runnable = () ->
                    System.out.println(String.format("i:p{%s},r{%s}", 1, simpleThread.getNextId()));
            Runnable runnable2 = () ->
                    System.out.println(String.format("i:p{%s},r{%s}", 1, simpleThread.getNextId()));
            runnable.run();
            runnable2.run();
        }
    }

    public int getNextId() {
        return i--;
    }
}
import asyncio
import copy
from datetime import date, datetime


# Method of a scheduling service class; assumes self.get_service / self._services,
# self.get_services, self.log, and self.loop are provided by the enclosing class.
async def run_scheduler(self):
    while True:
        interval = 60
        for s in await self.get_service('data_svc').locate('schedules'):
            now = datetime.now().time()
            diff = datetime.combine(date.today(), now) - datetime.combine(date.today(), s.schedule)
            # Fire the task if its scheduled time fell within the last poll interval.
            if interval > diff.total_seconds() > 0:
                self.log.debug('Pulling %s off the scheduler' % s.name)
                sop = copy.deepcopy(s.task)
                sop.set_start_details()
                await self._services.get('data_svc').store(sop)
                self.loop.create_task(sop.run(self.get_services()))
        await asyncio.sleep(interval)
1. Field of the Invention

The present invention relates to digital photography. In particular, the described embodiments relate to the extraction of color information from an image.

2. Related Art

Portable digital cameras and video cameras have become very common in all aspects of everyday life. With the explosion of personal computer technology over the past decade, it is possible to transmit data which is representative of images captured by a digital still camera or video camera in a format recognizable by a personal computer. The image data received at the personal computer may then be manipulated, edited or reproduced on media using a color printer. Digital cameras typically focus a scene or an object through optics onto an imaging array which captures the focused image. Instead of capturing the image on photographic film as in conventional photography, however, a digital camera typically captures the image on a semiconductor imaging sensor which is suited for the capture and reconstruction of color images. Signals representative of the captured image are then transmitted from the semiconductor imaging sensor to a memory for further processing. The processed image may then be transmitted to a personal computer or some other device capable of reproducing the image on a particular medium. The typical semiconductor imaging sensor absorbs photon energy at various locations, leading to the creation of carrier pairs (i.e., electron-hole pairs). Circuitry formed in the semiconductor imaging array stores the charge resulting from the carriers during an exposure period for the semiconductor imaging array. Following the exposure period, the charge stored in the circuitry is read out and processed to reconstruct the image. The charge is typically collected in the semiconductor substrate by applying an electric field to separate holes and electrons in the carrier pairs. In a charge-coupled device (CCD) arrangement, a metal oxide semiconductor (MOS) capacitor is formed and an electric field is induced by applying a voltage to a gate of the capacitor. In a CMOS imaging array, a photodiode is formed in the semiconductor having a junction with a built-in field. The photodiode can be reverse biased to further enhance the field. Once the semiconductor imaging array absorbs photon energy, resulting in the creation of an electron-hole pair, it is not possible to determine the wavelength or the color of light associated with the photon energy. To detect color information, typical imaging sensors control the color of light that is permitted to be absorbed into the substrate. This is achieved in some systems by employing a prism to decompose a full color image into its component colors and using an individual imaging device to collect the image for each of the component colors. This requires precise alignment of the imaging devices and therefore tends to be very costly. Household video cameras typically use microfilter technology to control the color of light that is allowed to reach any given pixel location. For such a video camera with a semiconductor sensor (or sensing element) array, each detector in the semiconductor imaging array, therefore, is used to detect the intensity of a particular color of light at a particular location in the imaging array. Such filters are typically directly deposited onto each of the light sensing elements formed in the imaging array.
The filter color pattern deposited on a given sensing element in the imaging array controls the color of light detected by the particular element. While camera optics produce an image of a scene which has full color depth at each point in the image, only one color is collected at any particular location. A typical imaging sensor uses red, green and blue as primary colors, including red, green and blue transmissive filters distributed uniformly throughout the imaging array. The intensity of the photon energy collected at each of the pixel locations is typically represented by eight bits at each pixel location. Since much of the light incident at a pixel location is filtered out, color information is lost. Using red, blue and green as the primary colors, the original image would have 24 bits of color data at each location. A color filter pattern using red, blue and green filters deposited at different pixel locations in a specific pattern, known as the Bayer pattern, is discussed in detail in U.S. Pat. No. 3,971,065. Much color information is lost at any particular pixel location using the Bayer pattern. Accordingly, there is a need to provide a cost-effective system for extracting additional color information from a semiconductor imaging sensor. An object of an embodiment of the present invention is to provide an imaging sensor which is capable of providing high resolution images. It is another object of an embodiment of the present invention to provide a method and system for capturing additional color information from images in digital photography. It is another object of an embodiment of the present invention to provide an improved CMOS based imaging sensor. It is another object of an embodiment of the present invention to provide a low cost digital imaging sensor which can capture color information to more fully exploit color printing technology. It is yet another object of an embodiment of the present invention to provide an imaging sensor which directly measures color information at pixel locations in a "white" spectral region. Briefly, embodiments of the present invention are directed to an imaging array comprising a first set of light-sensitive elements and a second set of light-sensitive elements. Each of the first set of light-sensitive elements has a sensitivity to energy in one of a plurality of spectral regions which are substantially distinct from each other. Each of the second set of light-sensitive elements has a sensitivity to energy in a spectral region which includes substantially all of the spectral regions of the first set of light-sensitive elements. The second set of light-sensitive elements is preferably distributed among the first set of light-sensitive elements substantially uniformly throughout the array. By having the second set of light-sensitive elements, the imaging array is capable of capturing wideband spectral information from an image, in addition to narrower-band information captured at the first set of light-sensitive elements. The wideband spectral information provides measurements of the intensity of photo-exposure by "white" light at locations of light-sensitive elements. The wideband spectral information may then be employed to provide a more accurate reproduction of images at, for example, a color printer.
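As an illustration of the kind of layout described above, the sketch below builds a hypothetical mosaic in which a fraction of pixel sites carries a wideband ("white") filter distributed uniformly among red, green, and blue sites. The 4x4 tile, the proportions, and the helper names are illustrative assumptions, not the specific arrangement claimed in the patent.

import numpy as np

# Hypothetical 4x4 tile: 'R', 'G', 'B' narrowband sites plus 'W' wideband sites
# spread evenly through the tile (illustrative layout only).
TILE = np.array([
    ['G', 'R', 'G', 'W'],
    ['B', 'G', 'W', 'G'],
    ['G', 'W', 'G', 'R'],
    ['W', 'G', 'B', 'G'],
])

def build_mosaic(height, width):
    """Tile the 4x4 pattern over a sensor of the given size."""
    reps = (height // 4 + 1, width // 4 + 1)
    return np.tile(TILE, reps)[:height, :width]

mosaic = build_mosaic(8, 8)
# 'W' sites sample the full ("white") band; the rest sample one narrower band each,
# corresponding to the second and first sets of light-sensitive elements described above.
for color in 'RGBW':
    print(color, np.count_nonzero(mosaic == color) / mosaic.size)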
Energy-Aware Data Compression for Wireless Sensor Networks

Data compression techniques have extensive applications in power-constrained digital communication systems, such as in the rapidly developing domain of wireless sensor network applications. This paper explores energy consumption tradeoffs associated with data compression, particularly in the context of lossless compression for acoustic signals. Such signal processing is relevant in a variety of sensor network applications, including surveillance and monitoring. Applying data compression in a sensor node generally reduces the energy consumption of the transceiver at the expense of additional energy expended in the embedded processor due to the computational cost of compression. This paper introduces a methodology for comparing data compression algorithms in sensor networks based on the figure of merit D/E, where D is the amount of data (before compression) that can be transmitted under a given energy budget E for computation and communication. We develop experiments to evaluate, using this figure of merit, different variants of linear predictive coding. We also demonstrate how different models of computation applied to the embedded software design lead to different degrees of processing efficiency, and thereby have a significant effect on the targeted figure of merit.
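A minimal sketch of the D/E figure of merit under an assumed linear energy model. The per-bit transmit energy, the per-bit compression cost, and the 2:1 ratio below are illustrative placeholders, not measured values from the paper.

def data_per_energy(compression_ratio=1.0,
                    tx_energy_per_bit=200e-9,     # joules per transmitted bit (assumed)
                    cpu_energy_per_raw_bit=0.0):  # joules of compression work per raw bit (assumed)
    """Figure of merit D/E: raw (pre-compression) bits handled per joule of budget.

    Each raw bit costs the CPU energy needed to compress it plus the radio energy
    to send the 1/compression_ratio compressed bits it becomes.
    """
    energy_per_raw_bit = cpu_energy_per_raw_bit + tx_energy_per_bit / compression_ratio
    return 1.0 / energy_per_raw_bit

# Uncompressed baseline vs. a hypothetical 2:1 lossless coder costing 50 nJ per raw bit.
print(data_per_energy())                                                      # ~5.0e6 bits/J
print(data_per_energy(compression_ratio=2.0, cpu_energy_per_raw_bit=50e-9))   # ~6.7e6 bits/J
# Compression improves D/E whenever its CPU cost is below the transmit energy it saves.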
package WayofTime.alchemicalWizardry.common.alchemy;

// Marker interface: implementing classes are treated as combinational catalysts.
public interface ICombinationalCatalyst {
}
Use of rituximab in diffuse large B-cell lymphoma in the salvage setting

The addition of rituximab (R) to CHOP (cyclophosphamide, doxorubicin, vincristine, prednisone) chemotherapy was a milestone in the development of frontline therapy for diffuse large B-cell lymphoma (DLBCL). R-CHOP and equivalent rituximab-containing anthracycline-based regimens are now widely accepted as the standard of care in this setting. However, the optimal treatment for patients with DLBCL relapsing or progressing after frontline therapy is not yet established. This review explores the role of rituximab in the treatment of DLBCL in the salvage setting, as monotherapy, in combination with chemotherapy or novel agents, and in the context of autologous stem cell transplantation (ASCT). Current evidence suggests that rituximab may improve outcomes in several ways: the higher response rates achieved with rituximab-based induction in the salvage setting optimize the number of patients who are able to proceed to high-dose therapy and ASCT; rituximab may improve outcomes following ASCT when used as post-transplantation consolidation/maintenance therapy; and addition of rituximab to salvage regimens may improve outcomes for patients ineligible for transplantation. However, patients refractory to or relapsing after first-line therapy (including rituximab-based regimens) still have a poor prognosis. In conclusion, rituximab in salvage therapy for DLBCL is effective and well tolerated. Ongoing studies will further clarify the optimal use of rituximab in the salvage setting.
Safety Second: The NRC and America's Nuclear Power Plants, Union of Concerned Scientists. 1987. Indiana University Press, Bloomington, IN. 194 pages. ISBN: 0-253-35034-4. $22.50

In 1975, Congress created the Nuclear Regulatory Commission (NRC). Its primary responsibility was to be the regulation of the nuclear power industry in order to maintain public health and safety. On March 28, 1979, in the worst commercial nuclear accident in US history, the plant at Three Mile Island began to leak radioactive material. How was Three Mile Island possible? Where was the NRC? This analysis by the Union of Concerned Scientists (UCS) of the NRC's first decade points specifically to the factors that contributed to the accident at Three Mile Island. The NRC, created as a watchdog of the nuclear power industry, suffers from problems of mindset, says the UCS. The commission's problems are political, not technical; it repeatedly ranks special interests above the interest of public safety. This book critiques the NRC's performance in four specific areas. It charges that the agency has avoided tackling the most pervasive safety issues; has limited public participation in decision making and power plant licensing; has failed to enforce safety standards or conduct adequate investigations; and, finally, has maintained a fraternal relationship with the industry it was created to regulate, serving as its advocate rather than its adversary. The final chapter offers recommendations for agency improvement that must be met if the NRC is to fulfill its responsibility for safety first.
The U.S. Postal Service is taking precautions to ensure the safe delivery of mail in the aftermath of the approaching storm, the agency says in a press release issued today. Hughes says delivery service may be delayed or curtailed whenever streets or walkways present hazardous conditions for letter carriers or when snow is plowed against mailboxes. Delivery of delayed mail is attempted on the next delivery day.
The Power of the Word: Scripture and the Rhetoric of Empire. By Elisabeth Schüssler Fiorenza. Minneapolis: Fortress, 2007. viii + 280 pages. $29.00

The average undergraduate will respond favorably to sermonettes interspersed amid assigned readings. Saint Mary's Press has long published catechetical materials; it is new to offering university resources. College courses are not catechesis. My reservation concerns inserting this kind of thing into a Bible. I am disappointed to find no mention of the contributions of newer methods of biblical analysis: canon criticism, structuralism, social-science critique, feminist hermeneutics, et al. The scholars who wrote the articles and introductions are certainly aware of these approaches, in which many Catholic scholars have made significant contributions. The articles appropriately distinguish Catholic critical understandings and methods from the literal understanding of fundamentalists. There is, however, no mention that most mainline Protestants, Anglicans, most Orthodox, and many Jews follow the same critical premises as Catholics. Finally, a total revision of the NAB's Old Testament has been completed by members of the Catholic Biblical Association of America. It is anticipated that the United States Catholic Bishops will accomplish their review and give their approval soon, if they have not already done so by this writing. When this happens, new editions of the NAB will be issued. This is not secret information. It is astonishing that St. Mary's Press issued an elaborate new Bible when a revision is pending.
import { _TargetGroupInfo, _UnmarshalledTargetGroupInfo } from "./_TargetGroupInfo";
import { _TrafficRoute, _UnmarshalledTrafficRoute } from "./_TrafficRoute";

/**
 * <p> Information about two target groups and how traffic is routed during an Amazon ECS deployment. An optional test traffic route can be specified. </p>
 */
export interface _TargetGroupPairInfo {
  /**
   * <p> One pair of target groups. One is associated with the original task set. The second is associated with the task set that serves traffic after the deployment is complete. </p>
   */
  targetGroups?: Array<_TargetGroupInfo> | Iterable<_TargetGroupInfo>;

  /**
   * <p> The path used by a load balancer to route production traffic when an Amazon ECS deployment is complete. </p>
   */
  prodTrafficRoute?: _TrafficRoute;

  /**
   * <p> An optional path used by a load balancer to route test traffic after an Amazon ECS deployment. Validation can occur while test traffic is served during a deployment. </p>
   */
  testTrafficRoute?: _TrafficRoute;
}

export interface _UnmarshalledTargetGroupPairInfo extends _TargetGroupPairInfo {
  /**
   * <p> One pair of target groups. One is associated with the original task set. The second is associated with the task set that serves traffic after the deployment is complete. </p>
   */
  targetGroups?: Array<_UnmarshalledTargetGroupInfo>;

  /**
   * <p> The path used by a load balancer to route production traffic when an Amazon ECS deployment is complete. </p>
   */
  prodTrafficRoute?: _UnmarshalledTrafficRoute;

  /**
   * <p> An optional path used by a load balancer to route test traffic after an Amazon ECS deployment. Validation can occur while test traffic is served during a deployment. </p>
   */
  testTrafficRoute?: _UnmarshalledTrafficRoute;
}
I really loved the Grammy Awards back in the day when I knew the songs and who was who. That will be a bit of a challenge later today as I am covering the awards from Staples Center. But I've done my homework and am raring to go. Here are some moments from Grammy telecasts in years gone by, starting with a young Whitney Houston singing "One Moment in Time" (above); below is Michael Jackson in 1988 in an incredible performance of "Man in the Mirror" – my favorite song of his ever. What could have been a disaster turned out to be one of Aretha's finest moments. Gotta include Janet with this.
Macular Hole Detection Using a New Hybrid Method: Using Multilevel Thresholding and Derivation on Optical Coherence Tomographic Images

Optical coherence tomography (OCT) is a noninvasive imaging test. OCT imaging is analogous to ultrasound imaging, except that it uses light instead of sound. In this type of image, microscopic-quality intratissue images are provided. In addition, fast and direct imaging of tissue morphology and reproducibility of results are advantages of this imaging. Macular holes are a common eye disease that leads to visual impairment. A macular perforation is a rupture in the central part of the retina that, if left untreated, can lead to vision loss. A novel method for detecting macular holes using OCT images, based on multilevel thresholding and derivation, is proposed in this paper. This is a multistep method, which consists of segmentation, feature extraction, and feature selection. A combination of thresholding and derivation is used to diagnose the macular hole. After feature extraction, the features with useful information are selected, and finally the output image of the macular hole is obtained. An open-access data set of 200 images with a size of 224×224 pixels from Sankara Nethralaya (SN) Eye Hospital, Chennai, India, is used in the experiments. Experimental results show better diagnostic performance than some recent methods.

Introduction

Optical coherence tomography (OCT) is a fundamentally new type of optical imaging modality. OCT performs high-resolution, cross-sectional tomographic imaging of the internal microstructure of materials and biologic systems by measuring backscattered or backreflected light. OCT images are two-dimensional data sets that represent the optical backscattering in a cross-sectional plane through the tissue. Image resolutions of 1 to 15 μm can be achieved, one to two orders of magnitude finer than conventional ultrasound. Imaging can be performed in situ and in real time. The unique features of this technology enable a broad range of research studies and clinical applications. Mendes and Abrah (2019) provide an overview of OCT technology, its background, and its potential biomedical and clinical applications. OCT, which images the internal cross-sectional microstructure of tissues using measurements of optical backscattering or backreflection, was first demonstrated in 1991. OCT imaging was performed in vitro in the human retina and in atherosclerotic plaque as examples of imaging in transparent, weakly scattering media and nontransparent, highly scattering media. OCT was initially applied for imaging in the eye and, to date, OCT has had the largest clinical impact in ophthalmology. The first in vivo tomograms of the human optic disc and macula were demonstrated in 1993. OCT enables the noncontact, noninvasive imaging of the anterior eye as well as imaging of morphologic features of the human retina, including the fovea and optic disc. Numerous clinical studies have been performed by many groups in the last several years. More recently, advances in OCT technology have made it possible to image nontransparent tissues, thus enabling OCT to be applied in a wide range of medical specialties. Imaging depth is limited by optical attenuation from tissue scattering and absorption. However, imaging up to 2 to 3 mm deep can be achieved in most tissues. This is the same scale as that typically imaged by conventional biopsy and histology.
Although imaging depths are not as deep as with ultrasound, the resolution of OCT is more than 10 to 100 times finer than that of standard clinical ultrasound. OCT has been applied in vitro to image arterial pathology and can differentiate plaque morphologies. Imaging studies have also been performed in vitro to investigate applications in dermatology, gastroenterology, urology, gynecology, surgery, neurosurgery, and rheumatology. OCT has also been applied in vivo to image developing biologic specimens for applications in developmental biology. OCT is of interest because it allows repeated imaging of developing morphology without the need to sacrifice specimens. Numerous developments in OCT technology have also been made. High-speed real-time OCT imaging has been demonstrated with acquisition rates of several frames per second. High-resolution and ultrahigh-resolution OCT imaging has been demonstrated using novel laser light sources, and axial resolutions as high as 1 μm have been achieved. Cellular-level OCT imaging has recently been demonstrated in developmental biology specimens. OCT has been interfaced with catheters, endoscopes, and laparoscopes, which permit internal body imaging. Catheter and endoscope OCT imaging of the gastrointestinal, pulmonary, and urinary tracts as well as arterial imaging has been demonstrated in vivo in an animal model. In many cases, the disease is examined only superficially (without considering the complete characteristics involved in the disease). Many research groups are currently performing preliminary clinical studies. The macula, located in the middle of the retina, is where most of the cone cells accumulate. A small depression in the middle of the macula, called the FOA, has the largest concentration of cone cells. The macula is responsible for central vision, color vision, and accurate detail recognition. The cylindrical (rod) cells are located in the peripheral part of the retina and enable night vision and the perception of movement of surrounding objects. The partial or complete absence of the macular sensory membrane is called macular perforation. The macular hole may have an unknown, age-related cause (age-related macular hole) or may be caused by trauma to the eye. The age-related type of the disease is most prevalent in older women in the seventh decade of life. The OCT images of a normal and an abnormal macula are shown in Figure 1. Related Works In connection with macular pathologies, we face disorders such as macular edema, age-related macular degeneration (AMD), central serous retinopathy (CSR), and macular hole (MH). Macular holes lead to low vision and blindness, and retinal holes can arise from overstrain of the fovea. The disease is more likely to occur in people over 40 years of age. Early diagnosis of the disease helps greatly in its treatment because, if the disease has progressed only a little, it can be treated with medicine or surgery. Therefore, knowing the characteristics of the hole, such as shape, size, width, diameter, and length, can be very helpful. The OCT image is an effective tool to diagnose the condition of an MH. OCT is an applicable tool to diagnose retinal pathologies, as it can visualize the 3D shape and structure of the retina without physical contact. The high-resolution OCT images of the retina usually have speckle noise and shadows caused by retinal blood vessels and other abnormalities on the retina, which are challenges in the segmentation process.
Because the disease can cause irreparable damage if misdiagnosed by algorithms, manual segmentation has always been a priority; however, our algorithm avoids the need for manual segmentation while detecting the macular hole with a very low error coefficient. In some cases, the correct diagnosis of a disease even depends on several studies. Nevertheless, the need for fully automatic diagnostic methods for retinal pathology is now felt. One of the important applications of OCT images is to detect MH (Figure 1). Therefore, researchers want to propose fully automated, novel, and trustworthy methods for MH diagnosis and segmentation. New research studies in this area seek to complement each other, to reduce each other's shortcomings, and to produce methods acceptable to experts. The valuable information and features provided by OCT images can help researchers develop more valuable segmentation and automated diagnostic techniques to help patients lead better lives. In, a fully automatic method to identify the main layers of the retina that delimit the retinal area is proposed. Therefore, an active contour-based model is used to segment the main retinal boundaries. This method uses the horizontal placement information of these layers and their relative location on the images to restrict the search space. A new OCT-based method to investigate epiretinal membrane (ERM) pathology in human eyes is proposed by González-López et al.. The new approach automatically assesses the thickness of the epiretinal membrane (ERM) and the space between the ERM and the retinal surface. The adjusted mean arc length (AMAL) is being used for segmenting OCT images for macular pathology. This metric is used for automated OCT segmentation. In recent years, a segmentation method based on feature extraction using SFTA-based histological analysis has been introduced by Keller et al.. In this research, a graph-based segmentation is used to find the layers of the retina. The 3D level set segmentation approach can be used for accurate segmentation of the MH. This method is fully automatic and shows robust results in a variety of conditions. A novel layer guided convolutional neural network (LGCNN) is proposed by Nasrulloh et al. to identify the common types of macular pathologies and the normal retina. In this method, the retinal layer segmentation maps are generated by an efficient segmentation network, which can determine two lesion-related retinal layers associated with the meaningful retinal lesions. Then, two subnetworks in LGCNN integrate the information of these layers. Consequently, by focusing on the significant lesion-related layer regions, LGCNN can effectively improve OCT classification. Huang et al. proposed a multi-instance multilabel-based lesion recognition method to detect and recognize multiple lesions simultaneously. The proposed method consists of segmentation based on the different lesions and feature extraction, and it constructs multilabel detectors and recognizes the regions of interest. A unique joint model that combines multiple sources of information is proposed for retinal segmentation using OCT images. A multimodal multiresolution graph-based method is proposed for internal limiting membrane segmentation within OCT images. A hybrid method using multilevel thresholding and derivation on optical coherence tomographic images was proposed for automated detection.
In this paper, a new combination is proposed to obtain more information for macular hole diagnosis. This is a multistep method, which consists of segmentation, feature extraction, and feature selection. A combination of thresholding and derivation is used to diagnose the macular hole. After feature extraction, the features with useful information are selected and finally the output image of the macular hole is obtained. The main contributions of the proposed method are high sensitivity across various OCT images, better accuracy and reliability than the conventional methods, and short processing time. The remaining parts of this paper are organized as follows: in Section 3, the proposed method is introduced. Experiments and results can be found in Section 4. Comparison with some recent methods is demonstrated in Section 5, and the conclusion is given in Section 6. Proposed Methods The block diagram of the proposed method is shown in Figure 2. The proposed method consists of multiple steps. The preprocessing step is usually applied before the main image analysis and data extraction, and its purpose is to obtain an accurate image by removing unwanted data from the image. In medical imaging, major distortions are observed, including noise due to high-frequency reception, uneven brightness in the field, and problems due to distant orientation. For this reason, preprocessors are applied to all images taken from a device. Consequently, these processes are usually device-dependent and must be fast and efficient. Photogrammetric methods are used when the spatial or luminous properties of the noise are known. In this paper, we used an adaptive filter to remove noise from OCT images. In the next step, the proposed algorithm is applied to the images; once the images have been qualitatively improved, their characteristics must be determined and extracted. Most image data may be subdivided into enclosed areas, dots, and lines. To identify objects, it must be possible to distinguish them from the background. It is usually best to convert the grayscale image to a binary image (black and white). Techniques such as image splitting and edge recognition work best on binary images but are sometimes applied to images in the gray or color spectrum. The purpose of feature extraction is to extract features that are directly related to the output. After this step, the extracted features with essential information are selected and finally the output image is obtained. In the proposed method block diagram (Figure 2), a combination of thresholding and derivation is used for diagnosis. This method has been implemented with special emphasis on the FOA depression and the image features in this area, and finally a suitable segmentation has been used to distinguish this disease from other diseases and to assess the rate of disease progression. Since an edge is the location of changes in brightness levels, the magnitude of these changes should also be considered to decide on the presence of an edge and its exact location. In this case, if the edges of an image are extracted, the location of all the prominent and opaque objects in the image is determined. As a result, the use of an accurate edge detector directly helps to increase the feature recognition rate and the ability to accurately segment the image. The gradient vector ∇f(x, y) represents the maximum rate of change of brightness.
The outset of the edge (I1) and the end of the edge (I2) are defined in the following equation: On the other hand, when the multilevel thresholding method is applied, it converts the gray level into a binary code. The key point in this method is to select the threshold value (or threshold values in the case where several levels are desired). Most images include objects with a uniform brightness level on a background with different brightness levels. For such images, brightness is a distinguishing feature that can separate the object from the background. Another way to choose the brightness threshold is to set the threshold value equal to the minimum point of the histogram between the two peaks. Data Set. In this paper, we use the open-access database collected by. This database includes more than 500 high-resolution OCT images of different pathological conditions. A raster scan protocol with a 2 mm scan length was used to obtain these images. The size of these images is 512 × 1024 pixels, and they were taken using a Cirrus HD-OCT machine (Carl Zeiss Meditec, Inc., Dublin, CA) at Sankara Nethralaya (SN) Eye Hospital, Chennai, India. In each volumetric scan, a fovea-centered image was selected by an experienced clinical optometrist (MKP). The axial resolution was 5 μm, and the transverse resolution was 15 μm (in tissue). The images were then resized to a 224 × 224 pixel size. The pathological conditions were determined by clinicians, and the labeling process was done by retinal clinical experts at SN hospital. Progressive stages of macular degeneration are given. This wide range of stages (less severe, moderately severe, and more severe) in each disease is ideal for researchers; therefore, they can test their proposed methods in various scenarios. MH OCT images are the main category of the database, and we used several images of MH cases. Initialization. The raw data were processed separately with the mentioned method using MATLAB 2019a software to identify the macular hole; in the first stage, OCT images were collected as input and, during preprocessing, background uniformity and improved image resolution were obtained. In the second stage, identification was performed using the segmentation method and the desired points were examined in the image. In the third and final step, the new hybrid method is implemented on the images. To obtain the simulation results, a Core i7 CPU, a GeForce 7900GTX graphics processing unit, and 16 GB of memory were used. Experiments. In this part, the implementation results of the proposed method on OCT images provided by Lakshminarayanan et al. are presented. Image derivation is an important step in many image-processing algorithms. The simplest case may involve applying one algorithm. More complex cases include more accurate margin detection algorithms or applying several separable models, given that processing time is also important. By combining the two methods of thresholding and derivation, we introduced a new method that is more efficient than other methods (Figure 3). In the recognition of surfaces, the most important features can be extracted from the surfaces (including corners and lines). The output of the process is a set of regions that together cover the whole image, or a set of lines extracted from the image. Each pixel in a section is similar to the others in that it has specific properties such as hue, brightness, or texture. Adjacent sections are considered different from each other according to the mentioned features.
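To make the combination of multilevel thresholding and derivation described above more concrete, the following short sketch illustrates the two operations on a grayscale OCT B-scan. It is only an illustrative example, not the authors' MATLAB implementation: it assumes NumPy and scikit-image are available, and the file name, the number of threshold classes, and the Gaussian pre-filter are hypothetical choices standing in for the adaptive preprocessing described earlier.

# Illustrative sketch: multilevel (multi-Otsu) thresholding plus a gradient
# ("derivation") step on a grayscale OCT B-scan. Names and parameters are
# assumptions, not values taken from the paper.
import numpy as np
from skimage import filters, io

def threshold_and_gradient(image_path, n_classes=3):
    """Return a multilevel label map and a gradient-magnitude (edge) map."""
    img = io.imread(image_path, as_gray=True).astype(float)

    # Simple denoising stand-in for the adaptive pre-filter mentioned in the text.
    img = filters.gaussian(img, sigma=1.0)

    # Multilevel thresholding: n_classes - 1 thresholds give a label map.
    thresholds = filters.threshold_multiotsu(img, classes=n_classes)
    labels = np.digitize(img, bins=thresholds)

    # Derivation step: Sobel gradient magnitude highlights intensity edges,
    # such as the retinal surface and the walls of a macular hole.
    gradient = filters.sobel(img)

    return labels, gradient

# Hypothetical usage:
# labels, gradient = threshold_and_gradient("oct_bscan.png", n_classes=3)

In practice, the label map and the edge map would then be combined (for example, by keeping edges that border the darkest label class) to delineate the hole region, which corresponds to the feature extraction and selection steps described above.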
By recognizing the pixels in the image, and considering the existence of valuable and important information in the image, a segmentation algorithm was used in which each pixel is assigned a label so that pixels with the same label have similar properties. These features must be such that, with a set of them, each image can be described uniquely in order to identify an image from its patterns. Mean sensitivity and mean accuracy for 12 sample OCT images are shown in Table 1. The accuracy is computed by comparing the automated results with the ophthalmologist's diagnostic opinions. The proposed algorithm helps the ophthalmologist to educate the patient about the progression of the disease. This algorithm can aid the ophthalmologist by analysing a huge number of samples in a short time and presenting only the samples exhibiting features of disease. An automated preexclusion of normal cases might help to improve the program's efficiency. In future work, we plan to improve the accuracy of macular hole detection. The results of applying the proposed algorithm are clearly shown on the images at each stage of processing. Comparison In order to compare the results of our proposed method with some recent segmentation methods, we compare it with SVM, KNN, Naive Bayes, Decision Tree, MS-LGDF, and CMF. SVM, KNN, Naive Bayes, and Decision Tree are four traditional segmentation methods; MS-LGDF is a segmentation method based on the local Gaussian distribution fitting (LGDF) energy functional which can collect various macular hole measurements. The segmentation results such as accuracy, sensitivity, Jaccard index, and DSC are shown and compared in Table 2 (a brief illustrative sketch of how such overlap metrics are computed is given at the end of this section). According to the comparison, it can be concluded that the proposed method is much better than the other methods. Image segmentation methods fall into two main categories, border-based and region-based, and each of them is divided into several techniques. The output of a process is a set of sections whose assembly comprises the entire image, or a set of lines extracted from the image. Each pixel in a section is similar to the others in that it has specific properties such as color, brightness, or texture. Adjacent sections are considered different from each other according to the mentioned features. In traditional methods, due to the complexity of the algorithms used, the accuracy was always low, the sensitivity was at the lowest level, and the long data processing time was clearly visible. Image segmentation has been applied in areas such as computer vision and image processing, and due to its wide application, it offers suitable research fields. Despite the complexity of the algorithm, this segmentation has high accuracy and sensitivity and, on the other hand, requires less data processing time than traditional methods. The success of this research can be seen in other areas as well, such as medicine, remote sensing, and image retrieval, which helps to save, maintain, and protect human life. The run times of the proposed combination method and the other compared methods are tabulated in Table 3. The experiments were conducted on a computer with an Intel Core i7 CPU and 16 GB RAM running the Windows 7 64-bit operating system. In addition, MATLAB version 2019a is used to obtain the computational times. As can be observed, the run time of the proposed method is better than that of the competing methods.
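Since the comparison above relies on accuracy, sensitivity, the Jaccard index and the DSC, a brief sketch of how these overlap metrics are usually computed from binary masks is given below. This is a generic illustration rather than the evaluation code used in the study; it assumes the automated segmentation and the expert annotation are available as equally sized binary NumPy arrays.

# Generic overlap metrics between a predicted mask and a reference (expert) mask.
# Inputs are boolean NumPy arrays of identical shape; the names are illustrative.
import numpy as np

def segmentation_metrics(pred, ref):
    pred = np.asarray(pred, dtype=bool)
    ref = np.asarray(ref, dtype=bool)

    tp = np.logical_and(pred, ref).sum()
    tn = np.logical_and(~pred, ~ref).sum()
    fp = np.logical_and(pred, ~ref).sum()
    fn = np.logical_and(~pred, ref).sum()

    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    jaccard = tp / (tp + fp + fn) if (tp + fp + fn) else 0.0
    dsc = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0

    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "jaccard": jaccard, "dsc": dsc}

The DSC and Jaccard values reported in Table 2 can be read as instances of these definitions applied to the hole masks produced by each method.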
Conclusions In this paper, we proposed an automatic and powerful method to segment OCT images to detect macular holes. This is a multistep method, which consists of segmentation, feature extraction, and feature selection. A combination of thresholding and derivation was used to diagnose the macular hole. After feature extraction, the features with useful information were selected and finally the output image of the macular hole was obtained. The proposed method was evaluated on an open-access data set from Sankara Nethralaya (SN) Eye Hospital, Chennai, India. Comparison with some state-of-the-art MH segmentation methods reveals the robustness of the proposed method. In the future, we want to develop this method for 3D segmentation and measurements. Data Availability All data used in the database are valid. Conflicts of Interest The authors declare that there are no conflicts of interest regarding the publication of this paper.
Effects of turbinoplasty versus outfracture and bipolar cautery on the compensatory inferior turbinate hypertrophy in septoplasty patients Introduction The most common cause of septoplasty failure is inferior turbinate hypertrophy that is not treated properly. Several techniques have been described to date, with total or partial turbinectomy, submucosal resection (surgical or with a microdebrider), and turbinate outfracture being some of those. Objective In this study, we compared the pre- and postoperative lower turbinate volumes using computed tomography in patients who had undergone septoplasty and compensatory lower turbinate turbinoplasty with those treated with outfracture and bipolar cauterization. Methods This retrospective study enrolled 66 patients (37 men, 29 women) who were admitted to our otorhinolaryngology clinic between 2010 and 2017 because of nasal obstruction and who were operated on for nasal septum deviation. The patients who underwent turbinoplasty due to compensatory lower turbinate hypertrophy formed the turbinoplasty group; those who underwent outfracture and bipolar cauterization formed the outfracture group. Compensatory lower turbinate volumes of all patients participating in the study (mean age 34.0 ± 12.4 years, range 17-61 years) were assessed by preoperative and postoperative 2-month coronal and axial plane paranasal computed tomography. Results The transverse and longitudinal dimensions of the postoperative turbinoplasty group were significantly lower than those of the outfracture group (p = 0.004). In both groups the lower turbinate volumes were significantly decreased (p = 0.002 and p < 0.001, respectively). The postoperative volume of the turbinate on the deviated side of the patients was significantly increased in the turbinoplasty group (p = 0.033). Conclusion Both turbinoplasty and outfracture are effective volume-reduction techniques. However, the turbinoplasty method results in more reduction of the lower turbinate volume than outfracture and bipolar cauterization. Introduction The most common cause of chronic nasal obstruction is septum deviation and lower turbinate pathologies. 1 Inferior turbinate hypertrophy is frequently seen in allergic rhinitis, vasomotor rhinitis, and as compensatory hypertrophy in septum deviation. Lower turbinate hypertrophy on the concave side of the nasal septum is called compensatory hypertrophy. 2 The most common cause of septoplasty failure is inferior turbinate hypertrophy that is not treated properly. 3 Several techniques have been described to date: total or partial turbinectomy, submucosal resection (surgical or with a microdebrider), outfracture, electrocautery, radiofrequency application, argon plasma treatment, and cryosurgery. 4 None of the turbinate surgical techniques performed with or without septoplasty are perfect. Short- and long-term complications, such as bleeding, bruising, and atrophy, are frequent.
5 Ideally, turbinate surgery should be done without damaging the mucosal surface. This ensures preservation of normal lower turbinate function, rapid healing, and inhibition of atrophic rhinitis. 6 Despite the increasing number of lower turbinate surgical procedures, turbinoplasty, outfracture, and bipolar cautery methods have been used frequently for the last three decades. 7 Turbinoplasty is more difficult and has a higher complication rate than the outfracture method, despite its high success rate. Lower turbinate outfracture and bipolar cauterization can be applied in the same order and more quickly. 8 In this study, we compared the pre-and postoperative lower turbinate volumes using computed tomography (CT) in patients who had undergone septoplasty and compensatory lower turbinate turbinoplasty with those treated with outfracture and bipolar cauterization. Patient selection This retrospective study enrolled 66 patients (37 men, 29 women) who were admitted to our otorhinolaryngology clinic between 2010 and 2017 because of nasal obstruction and who were operated on for nasal septum deviation. CT showed septum deviation and contralateral compensatory lower turbinate hypertrophy. The patients were divided into two groups. The turbinoplasty group included patients who underwent septoplasty and turbinoplasty; the outfracture group underwent septoplasty with compensatory lower turbinate outfracture and bipolar cauterization. Patients with maxillofacial trauma, paranasal sinus tumors, nasal polyps, septal perforations, acute or chronic rhinosinusitis, S type nasal septum deviation, turbinate bullosa, or previous nasal or paranasal surgery were excluded from the study. Ethics committee approval was obtained from Istanbul University, Cerrahpaa Medical Faculty, Ethical Committee (n 61328). Surgical procedure All patients were operated by the same surgeon under general anesthesia. First, a septoplasty was performed. Thirty-two patients (19 men, 13 women; mean age, 36.6 ± 15.0 years, range: 19---61 years) in the turbinoplasty group underwent compensatory lower turbinate turbinoplasty. A superior-to-inferior incision was made on the anterior surface of the lower turbinate with a n 15 blade, working under a 0 endoscopic video image, and this incision was extended posteriorly along the inferior surface. The medial side of turbinate was elevated. The turbinatel mucosa and turbinate were excised while preserving the medial flap. Bleeding was controlled with bipolar cauterization. The flap was replaced, packing was placed in both nasal cavities, and the operation completed. Nasal packing was removed after 48 h. The outfracture group comprised 44 patients (18 men, 16 women; mean age, 31.4 ± 9.5 years, range: 17---49 years) who underwent turbinate outfracture and bipolar cauterization. Using an elevator, the lower turbinate was first mobilized medially and laterally. Posterior anterior bipolar cauterization was then applied to the inferomedial face of the lower turbinate. Both nasal cavities were filled with nasal cuffs and the operation completed. Nasal packing was removed after 48 h. Patient evaluation The compensatory turbinatel volume of all subjects was assessed preand postoperatively using coronal and axial plane paranasal CT performed in 1 mm sections from anterior (nares) to posterior (choana). The volumetric evaluations were performed by the same radiologist. 
The lower turbinate volumes were calculated in mm3 using the ellipse formula: longitudinal dimension (mm) × transverse dimension (mm) × anteroposterior dimension (mm) × 0.52. The longitudinal and transverse turbinate dimensions were calculated from the cross-section through the coronal plane after the uncinate processes. The longest dimension of the lower turbinate was set as the anteroposterior dimension in the axial plane. Statistical analysis Statistical analysis was performed using STATA/MP 11. The data were summarized as means and standard deviations. Pre- and postoperative comparisons were made using paired t-tests within each group. The independent t-test was used to compare preoperative groups, while analysis of covariance (ANCOVA) was used to compare postoperative groups using the preoperative values as covariates. The independent t-test was used to compare relative postoperative changes (%) between groups. Statistical significance was taken as p < 0.05. Results Endoscopic hemorrhage control was required in only 2 patients in the turbinoplasty group, because of hemorrhage developing on the 4th and 6th postoperative days. In the other 64 patients, there were no complications such as postoperative hemorrhage, synechia or infection. Nasal endoscopic examinations were performed at 2 months postoperatively. No signs of septum deviation, turbinate hypertrophy, or atrophic rhinitis were observed in the follow-up examinations, and there were no complaints of nasal obstruction. The differences in the pre- and postoperative parameters were significant in the turbinoplasty and outfracture groups (Table 1). The transverse and longitudinal dimensions of the lower turbinate in the turbinoplasty group were significantly lower than in the outfracture group (p = 0.004). The postoperative lower turbinate volumes decreased significantly in both the turbinoplasty and outfracture groups. In the turbinoplasty group, the mean lower turbinate volume was 4523.5 mm3 preoperatively and 1492.2 mm3 postoperatively (p = 0.002), versus 4282.2 mm3 preoperatively and 2699.9 mm3 postoperatively (p < 0.001) in the outfracture group. Comparing the turbinoplasty and outfracture groups, the postoperative volume was significantly lower in the turbinoplasty group (p = 0.019) (Table 2). In the between-group comparison, the volume reduction was greater in the turbinoplasty group (p = 0.037) (Table 2). The transverse and longitudinal dimensions of the lower turbinate decreased more in the turbinoplasty group compared with the outfracture group (p = 0.001 and p < 0.001, respectively) (Table 2). In the turbinoplasty group, the turbinate volume had an average reduction of 56%, versus 36% in the outfracture group (Fig. 1). The lower turbinate volumes on the side of the deviation were significantly increased in both the turbinoplasty and outfracture groups postoperatively (p = 0.0002 and p = 0.0297, respectively) (Table 3). Discussion A compensatory turbinate develops to protect the more involved nasal passage from cold, dry air. The most common site is the inferior turbinate. There is thickening of the turbinate bones, and an increase in the spongiform structure and orientation to the midline. Mucosal hypertrophy is also present. 9 Many techniques have been described to reduce the volume in lower turbinate hypertrophy. In some of these techniques, the aim is only to decrease the mucosal volume, while in others the mucous membrane and bone volume are both reduced.
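For readers who wish to reproduce the volume estimate and the within-group comparison described in the methods above, the following sketch applies the quoted ellipse formula and a paired t-test. It is an illustration only: the numerical values are invented placeholders, SciPy is assumed to be available, and the original analysis was carried out in STATA, not Python.

# Sketch: turbinate volume from the ellipse formula and a paired pre/post test.
# The arrays below are invented placeholder values, not data from the study.
import numpy as np
from scipy import stats

def turbinate_volume(longitudinal_mm, transverse_mm, anteroposterior_mm):
    """Volume in mm3: longitudinal x transverse x anteroposterior x 0.52."""
    return longitudinal_mm * transverse_mm * anteroposterior_mm * 0.52

pre = np.array([4500.0, 4300.0, 4800.0, 4100.0])   # hypothetical preoperative volumes (mm3)
post = np.array([1500.0, 1700.0, 1400.0, 1600.0])  # hypothetical postoperative volumes (mm3)

t_stat, p_value = stats.ttest_rel(pre, post)       # paired t-test within one group
reduction = 100.0 * (1.0 - post.mean() / pre.mean())
print(f"mean volume reduction: {reduction:.1f}%, p = {p_value:.4f}")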
10 There is no consensus regarding the best lower turbinate reduction technique. Although less invasive methods have become popular over the last 20 years, more invasive procedures, such as turbinoplasty, remain important because of their high success rates. Many studies have examined the effectiveness of radiofrequency application in lower turbinate surgery, 10---12 and other techniques have been evaluated in nonseptoplasty patients. 13--- 15 Veit et al. did not evaluate lower turbinate volumes despite comparing lower turbinate reduction methods during septoplasty. 16 We measured the turbinate volume using CT and compared the volume after outfracture and bipolar cauterization, which caused only mucosal volume loss, with that of turbinoplasty, which resulted in mucosal and bone volume loss during septoplasty. Other studies have measured the volume using CT or magnetic resonance imaging. 13,17 Turbinoplasty is a successful method despite postoperative synechia, drying, and nasal discharge problems. 16,18 In our study, postoperative desiccation and nasal discharge was not followed up in the turbinoplasty patients. Bykl and Zhang 19,20 reported that the outfracture method was effective for expanding the nasal passages in lower turbinate hypertrophy. With turbinate bipolar cauterization, superficial thermal ablation creates scar tissue and fibrosis, and obliterates the venous sinuses. In one study, the results at 2 months after bipolar cauterization were successful in 76% of the cases. 14 In our study, the lower turbinate volume in the outfracture group decreased significantly and the patients' complaints of nasal obstruction disappeared. In both groups, the improvement in the nasal obstruction was likely related to both the lower turbinate reduction and correction of the septum deviation. Various studies have compared the effectiveness of lower turbinate surgical techniques using objective tests such as acoustic rhinomanometry, mucociliary function tests, and acoustic rhinometry. 15,21,22 Can et al. 13 have studied the effects of radiofrequency ablation in patients undergoing lower turbinate submucosal resection and found that the volume reduction was significant in both groups, but it was greater with radiofrequency ablation. In our study, the postoperative axial, transverse, and longitudinal lower turbinate dimensions were decreased significantly in both groups. Changes in lower turbinate volume have been assessed after applying different reduction methods. Demir et al. 12 found that the lower turbinate volume decreased by 25% after thermal radiofrequency ablation. Can et al. 13 reported a 42.4% volume reduction after submucosal resection. We observed greater volume reduction in the turbinoplasty group (67.1%) than the outfracture group (36.9%), indicating that hypertrophic mucosa and bone formation with compensatory hypertrophy constitutes a significant volume. Furthermore, the decrease in the transverse and longitudinal dimensions of the lower turbinate was significantly (p < 0.001) greater in our turbinoplasty group compared with the outfracture group, and the loss in the turbinoplasty group could be attributed to bone tissue loss. Turbinoplasty method results in a greater volume decrease and can be selected for lower turbinate in which the bone mass produces a significant volume, while outfracture and bipolar cauterization, which has a lower risk of complications, can be performed in patients with more moderate lower turbinate hypertrophy. 
Lower turbinate outfracture and bipolar cauterization are less invasive than turbinoplasty, while the risk of perioperative bleeding is greater with turbinoplasty. 18 While hemorrhage, synechiae, and mucosal discharge can occur after turbinoplasty, these effects are not observed after outfracture and bipolar cauterization. In addition, turbinoplasty is suitable for bleeding control under an endoscopic view. Consequently, turbinoplasty takes longer to perform than outfracture and bipolar cauterization. In our series, no peri- or postoperative complications were recorded in either group, but this may be due to the small number of subjects. In a comparison of the pre- and postoperative lower turbinate volumes of patients who underwent radiofrequency ablation of the lower turbinate, Bahadır et al. 10 stated that the postoperative volumes of six lower turbinates were increased, which might have been due to the stage of the nasal cycle. In our study, the significant increase in the volume of the uninvolved lower turbinate (p = 0.033) on the deviated side in the turbinoplasty group might have been due to a process other than the nasal cycle following correction of the deviation. Conclusion Both turbinoplasty and outfracture are effective volume reduction techniques. However, the turbinoplasty method causes more reduction of the lower turbinate volume than outfracture and bipolar cauterization. Ethical approval All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. Informed consent Informed consent was obtained from all individual participants included in the study. The English in this document has been checked by at least two professional editors, both native speakers of English. For a certificate, please see: http://www.textcheck.com/certificate/eqNE75. Conflicts of interest The authors declare no conflicts of interest.
package org.taxi.nyc; import java.time.Instant; import java.util.UUID; import java.util.function.Function; import org.apache.commons.lang.math.RandomUtils; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.SpringBootApplication; import org.springframework.context.annotation.Bean; @SpringBootApplication public class Application { private static final Logger logger = LoggerFactory.getLogger(Application.class); public static void main(String[] args) { SpringApplication.run(Application.class); } @Bean public Function<RideUpdated, PaymentCharged> processPayment() { return rideUpdated -> { logger.info("Received Ride Updated Event:" + rideUpdated); //TODO Process Payment PaymentCharged pc = new PaymentCharged(); pc.setRideId(rideUpdated.getRideId()); pc.setAmountCharged(rideUpdated.getMeterReading()); pc.setPaymentStatus("accepted"); pc.setPaymentChargedId(UUID.randomUUID().toString()); pc.setInvoiceSystemId("PSG-" + RandomUtils.nextInt()); pc.setInformationSource("ProcessPayment Microservice"); pc.setTimestamp(Instant.now().toString()); pc.setEntityType("Driver"); logger.info("Created PaymentCharged Event:" + pc); return pc; }; } }
Investigation of the action of poly(ADP-ribose)-synthesising enzymes on NAD+ analogues ADP-ribosyl transferases with diphtheria toxin homology (ARTDs) catalyse the covalent addition of ADP-ribose onto different acceptors forming mono- or poly(ADP-ribos)ylated proteins. Out of the 18 members identified, only four are known to synthesise the complex poly(ADP-ribose) biopolymer. The investigation of this posttranslational modification is important due to its involvement in cancer and other diseases. Lately, metabolic labelling approaches comprising different reporter-modified NAD+ building blocks have stimulated and enriched proteomic studies and imaging applications of ADP-ribosylation processes. Herein, we compare the substrate scope and applicability of different NAD+ analogues for the investigation of the polymer-synthesising enzymes ARTD1, ARTD2, ARTD5 and ARTD6. By varying the site and size of the NAD+ modification, suitable probes were identified for each enzyme. This report provides guidelines for choosing analogues for studying poly(ADP-ribose)-synthesising enzymes. Introduction ADP-ribosyl transferases with diphtheria toxin homology (ARTDs), also termed poly(ADP-ribose) polymerases (PARPs), form an enzyme family of 18 human members that mediate their widespread functions in cellular homeostasis through the catalysis of ADP-ribosylation. This posttranslational modification received considerable attention within the last decade and has been linked to tumour biology, oxidative stress, inflammatory, and metabolic diseases. Using NAD + as a substrate, ARTDs covalently transfer ADP-riboses onto themselves or different targets forming mono(ADP-ribos)ylated proteins. Some ARTDs are in particular able to elongate these initial units with additional NAD + molecules to build a complex, highly charged biopolymer called poly(ADP-ribose) (PAR, Figure 1). These polymers consist of up to 200 units of ADP-ribose and may branch every 20 to 50 monomers. To date, only four ARTD members were found to accomplish the synthesis of PAR, namely the DNA-dependent ARTD1 and ARTD2 as well as the tankyrases ARTD5 and ARTD6. ARTD1 as the founding member is the best investigated enzyme of ARTDs and is considered the main source of cellular PAR. ARTD1 and its closest relative ARTD2 comprise DNA-binding domains and their activity is stimulated by binding to different types of DNA breaks. They fulfil functions in DNA repair, genome maintenance, transcription, and metabolic regulation. The tankyrases ARTD5 and ARTD6 also exhibit a unique domain structure consisting of multiple ankyrin repeats mediating protein-protein interactions. Tankyrases are involved in telomere homeostasis, Wnt/catenin signalling, glucose metabolism, and cell cycle progression. Remarkable efforts have been undertaken to develop tools and assays for studying PARylation on a molecular level and to understand the complex processes and interactions of the involved ARTDs. Recently, the employment of NAD + analogues resulted in the development of powerful applications for the determination and visualisation of ARTD activity, the identification of PARylation sites and targets and the real-time imaging of PARylation processes. In this report, we systematically compare the substrate scope of the four poly(ADP-ribose)-synthesising enzymes ARTD1, ARTD2, ARTD5 and ARTD6. For this purpose, we tested reporter-modified NAD + analogues 1-6 ( Figure 1) that were previously applied in ARTD1 catalysed ADP-ribosylation. 
By investigating them in biochemical assays, we identified sites and sizes of modifications for each enzyme that are well-accepted and competitively used in the presence of natural substrate. In this way, new insights of the enzyme's substrate scope and the applicability of NAD + analogues are gained and should thus guide future experiments. Results and Discussion Alkyne-modified NAD + analogues First, the position of the reporter group is systematically varied by introducing small, terminal alkyne functionalities at common sites of the adenine base. Upon successful incorporation into PAR, these alkynes serve as handles for copper(I) catalysed azide-alkyne click reaction (CuAAC) with fluorescent dyes. Terminal alkynes are the smallest possible reporter group that allows the selective labelling of poly(ADP-ribose). As reported, the synthesis of alkyne-modified derivatives 1-4 was previously accomplished by preparing the respective alkyne-modified nucleosides from common precursors and turning them into their corresponding NAD + analogues in a two-step procedure (Supporting Information File 1, Scheme S1). Next, NAD + substrate properties were investigated in ADPribosylation assays with histone H1.2 as acceptor and in ARTD automodification. For a better comparison, the assay conditions for ARTD2, ARTD5 and ARTD6 were chosen to be similar and were derived from previously established ARTD1 catalysed ADP-ribosylation. Incubation of NAD + or NAD + analogues with ARTD enzyme in reaction buffer and with or without histone H1.2 as additional acceptor protein were performed at 30 °C to decrease the reported NADase activity of tankyrases. Reaction times were elongated to 1 h, 4 h and 2 h, respectively, to achieve noticeable PAR formation. Moreover, no DNA was added to the tankyrase reactions. Of note, ARTD2 was found to be not activated by short, octameric DNA such as applied in case of ARTD1 and thus activated calf thymus DNA was added to enable ARTD2 catalysed PAR production. After the times indicated, copper-catalysed click conjugations to a fluorophore-containing azide were performed and the reactions were analysed by SDS PAGE. Then, fluorescent signals were detected and compared to the Coomassie Blue stained gels ( Figure 2). Each analogue was additionally tested in a 1:1 mixture with natural NAD + to explore their competitiveness against natural substrate and all gels contain controls without enzyme. A positive PARylation reaction is indicated by heterogeneous, polymer-modified proteins and/or the reduction of the ARTD band due to automodification. If analogues are successfully incorporated, the polymer chains can additionally be detected in the fluorescence read-out. For a better comparison, ARTD1-based ADP-ribosylation assays were also performed, because all four analogues have never been tested in parallel before. The outcome of these experiments is summarised in Table 1. For illustration, Figure 3 shows the processing of derivative 1 by all the four ARTDs tested. Of note, it was previously reported that the incubation of proteins with NAD + analogues may result in non-enzymatic Schiff base formation of ADP-riboses with lysine residues and can be detected by some minor staining of the involved proteins, which is also visible in some of the investigated reactions. 
As expected from the close structural similarity between ARTD1 and ARTD2 (panels a and b), both enzymes behave similarly in histone ADP-ribosylation (Supporting Information File 1, Figure S1) and in auto(ADP-ribos)ylation (Figure S2).
[Table 1: Acceptance of alkyne-modified NAD+ analogues 1-4 by different ARTDs without or with competition of natural substrate; the table footnote distinguishes analogues that are well processed, processed with lower efficiency, or not processed (cf. Figure S1 and Figure S2).]
As known from previous work, ARTD1 was not able to process the 7- and 8-modified NADs 3 and 4, and neither was ARTD2 (Supporting Information File 1, Figure S1, lanes 9 to 14 and Figure S2, lanes 7 to 10). In both assays, only small amounts of modified PAR were formed with the 6-modified derivative 2 in the absence of natural NAD+ (Supporting Information File 1, Figure S1, lane 7 and Figure S2, lane 5), when compared in parallel with the 2-modified analogue 1. However, a strong signal is detected in a mixture containing NAD+ (Figure S1, lane 8 and Figure S2, lane 6). Application of compound 1 results in the strongest signal and is competitive towards natural substrate (Figure S1, lanes 4 to 5 and Figure S2, lanes 3 to 4). Also in ARTD5- and ARTD6-catalysed ADP-ribosylation (panels c and d), analogues 3 and 4 were not used as substrates (Supporting Information File 1, Figure S1, lanes 9 to 10 and Figure S2, lanes 7 to 10). In contrast, compounds 1 and 2 were both used by both enzymes for PAR formation, even in the absence of natural NAD+. In the case of ARTD5, derivative 1 seems to be slightly better processed than 2 in histone ADP-ribosylation, whereas in the case of ARTD6 both are used as substrates in both assays with similar efficiencies.
[Table 2: Acceptance of dye-modified NAD+ analogues 5 and 6 by different ARTDs without or with competition of natural substrate; footnotes: 6 is accepted in H1.2 ADP-ribosylation with little efficiency, but not in automodification; analogues are not accepted in automodification (cf. Figure S3 and Figure S4).]
Dye-modified NAD+ analogues Because the alkyne tag induces only small alterations to the NAD+ scaffold, we also investigated how these enzymes would act on bulkier substitutions. For this purpose, we selected bulky, dye-modified NAD+ analogues 5 and 6, which were previously prepared by our group, in order to have a direct, fluorescent read-out. The outcome is summarised in Table 2 and the SDS PAGE gels obtained are depicted in Figure 4 and Supporting Information File 1, Figures S3 and S4. As shown in Figure 4 and Supporting Information File 1, Figure S4b, ARTD2 processes analogue 5 in a competitive manner, and fluorescent and Coomassie Blue stained polymer chains are formed in the absence and the presence of natural substrate (Figure 4, lanes 4 to 5 and Figure S4b, lanes 3 to 4). Unlike ARTD1 (Supporting Information File 1, Figure S3a), little fluorescent signal is obtained with compound 6 in ARTD2 catalysed histone PARylation in the absence of natural NAD+ (Figure 4, lane 7) and in ARTD2 automodification (Figure S4b, lane 5). ARTD5 showed decreased incorporation of the larger substituted analogues 5 and 6. During automodification, both compounds failed to form detectable, fluorescent PAR chains (Figure 4 and Supporting Information File 1, Figure S4c, lanes 3 to 6).
In general, it can be concluded that ARTD5 showed less activity in automodification compared to the other ARTDs. Nevertheless, analogue 6 was somewhat processed using the histone-based assay as seen by fluorescent and Coomassie-bluestained polymers in the absence of natural substrate and increased polymer in the presence of natural NAD + (Figure 4, lanes 7 to 8). The fluorescence observed in the presence of 5 is similar to the background signal indicating poor processing of 5 ( Figure 4, lanes 4 to 5). In case of ARTD6, both analogues were used for the ADP-ribosylation of histone ( Figure 4, lanes 3 to 8) and in automodification (Supporting Information File 1, Figure S4d, lanes 3 to 6) with similar efficiency. Conclusion In this paper, we investigated the scope of PAR synthesising enzymes, namely ARTD1, ARTD2, ARTD5 or ARTD6 for using modified NAD + analogues. It was found that NAD + analogues 1 and 2 modified with alkyne groups in adenine position 2 and 6 are used by all these enzymes to a certain extent, whereas the employed substitutions in adenine at position 7 and 8 completely abrogated the processing towards PAR. The DNA-dependent ARTDs ARTD1 and ARTD2 can process 2-modified analogues best as also sterically demanding compounds such as dye-modified 5 are processed. Thus, 2-modified analogues are the best choice for the study of these enzymes. On the other hand, 6-modified derivatives should be chosen for the study of the tankyrases ARTD5 and ARTD6. When bulky substitutions are added on the NAD + scaffold, tankyrases tolerate better 6-modifed analogues. Because ARTD5 and ARTD6 exhibit different constraints for metabolising bulky 2-modified analogue 5, this behaviour could be used to discriminate their activity in a cellular context. By choosing the best NAD + substrate for each enzyme more reliable and valuable insights into PARylation can be achieved and will help to decipher these processes in more detail. Supporting Information Supporting Information File 1 Additional figures, synthesis of compounds and biochemical methods.
/** Returns a builder for the specified PTransform id and PTransform definition. */ public static Builder builder(String pTransformId, RunnerApi.PTransform pTransform) { return new AutoValue_PTransformRunnerFactoryTestContext.Builder() .pipelineOptions(PipelineOptionsFactory.create()) .beamFnDataClient( new BeamFnDataClient() { @Override public void registerReceiver( String instructionId, List<ApiServiceDescriptor> apiServiceDescriptors, CloseableFnDataReceiver<Elements> receiver) { throw new UnsupportedOperationException("Unexpected call during test."); } @Override public void unregisterReceiver( String instructionId, List<ApiServiceDescriptor> apiServiceDescriptors) { throw new UnsupportedOperationException("Unexpected call during test."); } @Override public <T> CloseableFnDataReceiver<T> send( ApiServiceDescriptor apiServiceDescriptor, LogicalEndpoint outputLocation, Coder<T> coder) { throw new UnsupportedOperationException("Unexpected call during test."); } }) .beamFnStateClient( new BeamFnStateClient() { @Override public CompletableFuture<StateResponse> handle(StateRequest.Builder requestBuilder) { throw new UnsupportedOperationException("Unexpected call during test."); } }) .beamFnTimerClient( new BeamFnTimerClient() { @Override public <K> CloseableFnDataReceiver<Timer<K>> register( LogicalEndpoint timerEndpoint, Coder<Timer<K>> coder) { throw new UnsupportedOperationException("Unexpected call during test."); } }) .pTransformId(pTransformId) .pTransform(pTransform) .processBundleInstructionIdSupplier( () -> { throw new UnsupportedOperationException("Unexpected call during test."); }) .pCollections(Collections.emptyMap()) .coders(Collections.emptyMap()) .windowingStrategies(Collections.emptyMap()) .pCollectionConsumers(new HashMap<>()) .startBundleFunctions(new ArrayList<>()) .finishBundleFunctions(new ArrayList<>()) .resetFunctions(new ArrayList<>()) .tearDownFunctions(new ArrayList<>()) .progressRequestCallbacks(new ArrayList<>()) .incomingDataEndpoints(new HashMap<>()) .incomingTimerEndpoints(new ArrayList<>()) .splitListener( new BundleSplitListener() { @Override public void split( List<BundleApplication> primaryRoots, List<DelayedBundleApplication> residualRoots) { throw new UnsupportedOperationException("Unexpected call during test."); } }) .bundleFinalizer( new BundleFinalizer() { @Override public void afterBundleCommit(Instant callbackExpiry, Callback callback) { throw new UnsupportedOperationException("Unexpected call during test."); } }); }
On the 12th of Iyar, 1402, the Jews of Rome were granted "privileges" by Pope Boniface IX. They were given legal right to observe their Shabbat, protection from local oppressive officials, their taxes were reduced and orders were given to treat Jews as full-fledged Roman citizens. Tomorrow is the twenty-eighth day of the Omer Count. Since, on the Jewish calendar, the day begins at nightfall of the previous evening, we count the omer for tomorrow's date tonight, after nightfall: "Today is twenty-eight days, which are four weeks, to the Omer." (If you miss the count tonight, you can count the omer all day tomorrow, but without the preceding blessing). Tonight's Sefirah: Malchut sheb'Netzach -- "Receptiveness in Ambition" Fire can be dangerous—but not nearly as dangerous as ice. If a fire burns inside you, keep burning. Turn the fire towards the divine. If your path is of cold, lifeless intellect, stop, turn around, and jump into fire. Be singed by the flaming coals of the sages’ words.
Surgical Repair of a Sinus of Valsalva Aneurysm: A 22-Year Single-Center Experience. BACKGROUND Several reports described the repair of sinus of Valsalva aneurysms (SVAs); however, there is still debate regarding the optimal method of operation. We investigated the determinants of the development of significant aortic regurgitation (AR) and long-term survival after surgical repair. METHODS Between January 1995 and December 2016, 71 patients (31 females; median age: 33.3 years) underwent surgical SVA repair with (n=60) or without (n=11) rupture. Aortic valvuloplasty (AVP) was performed using Trusler's technique in 28 patients (39.4%), and 11 patients (15.5%) underwent aortic valve replacement during the first operation. RESULTS There was no early mortality, and three deaths occurred during follow-up (median: 65.4 months). Patients with grade II preoperative AR who underwent AVP tended to develop significant postoperative AR, but freedom from significant AR did not differ statistically (p=0.387). Among patients who underwent AVP, freedom from significant AR did not differ statistically between those with grades I and II and those with grades III and IV (p=0.460). CONCLUSION Surgical repair of SVA with or without rupture can be performed safely using the dual approach technique. Concomitant aortic valve repair can be performed without difficulty and should be recommended not only for patients with moderate or severe preoperative AR (grades III and IV) but also for those with minimal or mild preoperative AR (grades I and II), whose aortic valve geometry needs correction.
The Home Sweet Home group will build a mass movement of citizens to change housing policy in the way the Right2Water campaign “beat” water charges, the group’s co-founder Brendan Ogle has said. The movement will be “multi-stranded” – legal, political and campaigning – to challenge the “broken policies which favour banks, vulture funds and put property before the most basic needs of people”. Mr Ogle, education officer with the Unite trade union, described the group’s 28-day occupation of Apollo House – a vacant Nama-managed office block in Dublin 2 – to accommodate homeless people as the “most stressful but most rewarding thing” he had done in more than two decades of activism. The group left the building on January 12th in compliance with a High Court order. During their time there, almost 90 homeless people stayed and moved onto supported six-month accommodation. Six homeless people are still being supported, staying in a house rented by the group until suitable accommodation is found. Home Sweet Home “will be a permanent and very necessary intervention in the national debate about housing”. The level of support shown to the group on social media, by volunteers, in donations – €170,000 was raised – and by the business community was “overwhelming and surprising”, he said. A legal case “is being built”, he says, to argue an “implied right to housing” in the Constitution “backed up by a weight of international law”. Home Sweet Home will “go to the highest court in the land, and to Europe if necessary,” he says. The group will publish its accounts, this week, showing how much was raised and how much was spent, and on what. It is neither taking further donations nor planning further occupations “currently”. A conference, to plan further strategy for Home Sweet Home, will take place in the Mansion House on February 18th.
/**
 * Copyright (C) 2011 <NAME> <<EMAIL>>
 * All contents Licensed under the terms of the Apache License 2.0
 * http://www.apache.org/licenses/LICENSE-2.0.html
 */
package com.marzhillstudios.quizme;

import com.marzhillstudios.quizme.R;

import android.util.Log;
import android.test.ActivityInstrumentationTestCase2;
import android.test.TouchUtils;
import android.test.ViewAsserts;
import android.app.Activity;
import android.view.View;
import android.widget.Button;
import android.widget.LinearLayout;

/**
 * This is a simple framework for a test of an Application. See
 * {@link android.test.ApplicationTestCase ApplicationTestCase} for more information on
 * how to write and extend Application tests.
 * <p/>
 * To run this test, you can type:
 * adb shell am instrument -w \
 * -e class com.marzhillstudios.quizme.MainActivityTest \
 * com.marzhillstudios.quizme.tests/android.test.InstrumentationTestRunner
 */
public class MainActivityTest extends ActivityInstrumentationTestCase2<MainActivity> {

    protected Activity activity;
    protected Button launchQuizBtn;
    protected Button launchCardManagerBtn;
    protected LinearLayout mainLayout;

    public MainActivityTest() {
        super("com.marzhillstudios.quizme", MainActivity.class);
    }

    protected void setUp() throws Exception {
        super.setUp();
        activity = this.getActivity();
        launchQuizBtn = (Button) activity.findViewById(R.id.LaunchQuizBtn);
        launchCardManagerBtn = (Button) activity.findViewById(R.id.launchManagerBtn);
        mainLayout = (LinearLayout) activity.findViewById(R.id.MainLayout);
    }

    public void testPreconditions() throws Exception {
        assertNotNull(mainLayout);
        assertNotNull(launchQuizBtn);
        assertNotNull(launchCardManagerBtn);
    }
}
Physicochemical characterization and quality of cold-pressed peanut oil obtained from organically produced peanuts from Macedonian Virginia variety The physicochemical characterization and quality of cold-pressed edible peanut oil from the Virginia variety, organically produced in the region of Macedonia, were examined in this work for the first time. The fatty acid composition of the oil showed almost equal levels of oleic and linoleic acids, with abundances of 34.19±0.01 and 36.13±0.01%, respectively. The most dominant saturated fatty acid was palmitic acid, with a level of 10.06±0.00%. The level of tocopherols and other vitamin-E-related compounds was in strong agreement with the antioxidant activity of the oils measured by the DPPH assay. Almost equal amounts of α- and γ-tocopherols indicated an antioxidant potential of 288.63±59.78 mg·L−1 α-tocopherol equivalents. Phytosterols, as minor compounds present in the oils, can be additional antioxidants responsible for the health benefits of this oil in human nutrition. The four major phytosterols were β-sitosterol (1812.21±22.17 mg·kg−1 oil), campesterol (320.55±17.07 mg·kg−1 oil), Δ5-avenasterol (236.16±14.18 mg·kg−1) and stigmasterol (133.12±12.51 mg·kg−1 oil). Induction time, peroxide number, FFA and specific extinctions (K232 and K270, with values of 1.82 and 0.22) gave an indication of the oxidative stability of the cold-pressed peanut oil. INTRODUCTION Current trends in the consumption of cold-pressed edible oils, in particular peanut oil, make it necessary to research their quality, stability and nutritional impact on human health (). The conventional processing of peanuts in the food industry involves mechanical pressing during which the temperature does not surpass 40 °C. This process yields cold-pressed peanut oil and low-value peanut cake. Peanut seeds are a good source of protein, lipids and fatty acids. Peanut seeds contain 45-50% oil (). Cold-pressed peanut oil enjoys a very popular place in human nutrition due to its high level of polyunsaturated fatty acids, tocopherols, phytosterols, carotenoids, chlorophylls and polyphenolics (). A positive correlation between the consumption of polyphenol-enriched cold-pressed oil and a reduced risk of coronary heart diseases, "LDL" levels, degenerative diseases and cancer is well known (). Peanuts belong to the genus Arachis, a member of the family Leguminoseae, and are widely distributed in the tropical and temperate regions of the world. Peanuts make an important contribution to the diet in many countries. Peanut seeds are a good source of protein, lipids and fatty acids for human nutrition. The cold-pressed oil from peanuts is very important due to its significant levels of α- and γ-tocopherol, as well as a significant amount of resveratrol. Trans-resveratrol, or trans-3,5,4′-trihydroxystilbene, is the most abundant stilbene in peanuts and is responsible for the benefit to human health. The antioxidant and antimicrobial efficiency of resveratrol provides health benefits, such as the prevention of cardiovascular diseases, arteriosclerosis and cancer. Originally, epidemiological studies indicated an inverse relationship between food enriched with resveratrol and the risk of coronary heart disease (). Many researchers have studied the chemical composition and nutritional value of cold-pressed peanut oil. Fatty acid and phytosterol compositions of peanut seeds have been studied by. Grosso et al. studied the chemical composition of the oil of aboriginal peanut seeds from Uruguay (). Furthermore, Grosso et al.
have reported the oil, protein, ash, carbohydrate contents, iodine value and fatty acid composition of some wild peanut specie (Arachis) seeds (). The level of phytosterols in different peanut species from the region of Bolivia and Argentina was the object of study in the work of. The effects of cultivar, location, and their interaction in the fatty acid composition have been investigated by Brown et al.. In another study, fatty acid composition, protein levels, amino acid composition and other components have been investigated in peanut seeds (Ahmed and Young, 1982). Lopes et al. published a study for peanuts with an overview of the chemical composition, focusing on secondary metabolites and their biological activity (). Peanuts are a rich source of oil (45-50%). The oil obtained from the cold pressing of peanuts is pale yellow and has the characteristic "nutty" flavor of roasted peanuts. This characteristic flavor was examined by Chetschik et al.. by application of stable isotope dilution analyses (SIDA) and gas chromatography-olfactometry. According to their findings, 26 odor-active compounds were identified and quantified in raw and pan-roasted peanut meal. 3-Isopropyl-2-methoxypyrazine, acetic acid, and 3-(methylthio) propanal showed the highest OAVs in raw peanuts, whereas methanethiol, 2,3-pentanedione, 3-(methylthio) propanal, and 2-and 3-methylbutanal as well as the intensely popcorn-like smelling 2-acetyl-1-pyrroline revealed the highest OAV in the pan-roasted peanut meal. The fatty acid profile, vitamin-E-active compounds and phytosterol composition of peanut oil from South America was the object of study in the work of Carrn and Carelli. According to their finding, the most dominant fatty acids in peanut oil were palmitic (C16:0), oleic (C18:1) and linoleic (C18:2) acid. According to their findings, palmitic acid was present in the range from 9.3 to 13%, oleic acid from 35.6 to 58.3% and linoleic acid from 20.9 to 43.2% (Carrn and Carelli, 2010). Regarding, the distribution of vitamin E active compounds, the most present were and -tocopherol with levels between 18-57 mgkg −1 oil and 36-78 mgkg −1 oil, respectively. and -tocopherol were not detected nor present in trace amounts till 6 mgkg −1 of oil (Carrn and Carelli, 2010). It is obvious that there are already many published studies on the chemical composition of cold-pressed peanut oils, but, until now, to the best of our knowledge, there are no published results on the quality of the cold-pressed peanut oil from the region of Macedonia. The most famous variety of peanuts growing in the region of south-east Macedonia is the "Virginia" variety. The other two main varieties ("Ranner" type and "Valencia" type) are less present in the Macedonian region. The "Virginia" variety of peanuts has the largest kernels and accounts for most of the peanuts roasted in their shell. Therefore, the main object of this study was to give an overview of the chemical composition and general quality parameters including fatty acid profile, content of tocopherols and phytosterols as well as physicochemical evaluation, oxidative stability and antioxidant activity of cold-pressed peanut oil from the Macedonian "Virginia" variety. Harvesting and selection of plant material The peanuts (Arachis hypogea, L.) were cultivated in the valleys of Strumica and Gevgelija regions with an average yield of 1200 kgha −1. The peanuts collected for the experimental purpose of this study belong to the "Virginia" variety. 
The peanuts from the Macedonian "Virginia" variety were collected in mid-October from the valleys of Gevgelia and Strumica, 2012. The selected plant from Arachis hypogea, L., was hand-picked and dried for 7-10 days. After drying, the peanut seeds were separated from plant material and sorted according to their quality and maturity. Furthermore, additional dying at 40 °C was performed in drying ovens. Purifi cation and cold pressing The purification process of the plant material started with the removal of broken or damaged peanuts. This step was necessary because unpurified plant material can negatively affect the quality of cold-pressed oil. After purification, the next step before pressing was peeling and roasting. After peeling, the peanuts were roasted at 120 °C for 5 min. The process of cold pressing was performed using a Komet single press (IBG Monforts Oecotec, Germany). After pressing, the fresh cloudy oil was purified from solid impurities in tanks by sedimentation over 17 days. The quantities of plant material were collected only for the needs of this experiment and the yield of cold-pressed oil was lower than 1 kg. After sedimentation, the collected oil was filtrated by using a protection filter and bottled in dark 250 mL bottles. The cold pressed oil was stored at temperatures below 15 °C in the dark. Determination of the fatty acid composition The fatty acid composition of Macedonian coldpressed peanut oil was determined using gas chromatography equipped with a flame ionization detector (FID). The esters were prepared using 2 drops of each oil dissolved in 1 mL of heptane. After the addition of 50 L of sodium methylate with a concentration of 2 molL −1, the samples were homogenized. After homogenization, 100 L of distilled water were added to each sample. Samples were centrifuged and the lower phase was removed: while 50 L of 1 M HCl were added to the upper phase. After a second homogenization, sodium sulphate anhydride was added to remove water traces. Finally, the upper phase was transferred to GC vials and fatty acid methyl esters were analyzed using a capillary GC equipped with a CP7420 Select FAME column, 100 m0.25 mm internal diameter with 0.25 m film thickness. The analyses were performed on an Agilent 6890 equipped with KAS4Plus and FID. The oven temperature was programmed to increase from 150 °C to 240 °C at a rate of 1.5 °Cmin −1 with the isotherm kept at 240 °C for 20 min. The injector and detector temperature were both 260 °C. Hydrogen was used as the carrier gas at an average velocity of 25 mLmin −1. The retention times of separated peaks were confirmed by FAME standards. Determination of tocopherols 100 mg of cold pressed peanut oil were dissolved in 1 mL of heptane. The determination of tocopherols was performed with an HPLC instrument equipped with an L6000 pump, a Merck-Hitachi F-1000 fluorescence detector with excitation wavelength on 295 nm and emission wavelength of 330 nm and a Diol phase HPLC column 25 cm4.6 mm ID (Merck, Darmstadt, Germany). The flow rate was 1.3 mLmin −1 and the injection volume 20 L. The mobile phase was a mixture of heptane and TBME at a ratio 95:5. Rancimat test For the determination of the oxidative stability of Macedonian peanut oil, a Metrohm Rancimat model 743 (Herisau, Switzerland) was used. In order to determine the oil stability index (OSI), a stream of purified air was passed through 3.6 g of oil at 120 °C at a flow rate of 20 Lh −1. 
For each sample of oil, the OSI index was determined in two portions by measuring two samples in the apparatus simultaneously. The induction time in hours was automatically recorded and taken as the break point of the plotted curves (the intersection point of the two extrapolated parts of the curve). Peroxide value For the determination of peroxide value, the DGF standard method C-VI 6a -Part 1 according to Wheeler was used. In brief, for titration of the oil, a standard solution of sodium thiosulfate was prepared with a concentration of 0.01 molL −1. A mixture of glacial acetic acid and isooctane was prepared in the ratio of 60:40 and was used as extracting agent for the oils. A saturated solution of potassium iodide was prepared by dissolving 10 g potassium iodide in 5 mL of boiled Millipore water. Analyses were performed by dissolving of 5.0 g of oil in 50 mL of extracting agent and 100 ml of Millipore water. After adding 500 L of the saturated solution of potassium iodide, the potentiometric titration was performed by using the automatic titrator, Methrom 888 Titrando (Methrom, Herisau, Switzerland). Determination of free fatty acid content (Acidity) For determination of the content of free fatty acids the DGF standard method C-V 2 was used. In brief, a mixture of ethanol and light petroleum was prepared in the ratio of 50:50 and was used as extracting agent for the oil. 10.0 g of oil was dissolved in this mixture and titration was performed using potassium hydroxide at a concentration of 0.1 molL −1. Potentiometric titration was performed using the automatic titrator, Methrom 888 Titrando (Methrom, Herisau, Switzerland). Determination of ultraviolet absorbance expressed as specifi c UV extinction This method was equivalent to ISO 3656:2012. For this purpose, 1.0 g of oil was dissolved into a 100-mL flask by iso-octane. The mixture was shaken and the solution was transferred into rectangular quartz cells with covers, having an optical length of 1 cm and determinations were made at 232 and 268 nm. The absorption at the wavelengths specified in the method is due to the presence of conjugated diene and triene in the oil. Determination of phytosterols The sterol composition of the cold-pressed peanut oil was determined following ISO/FIDS 12228:1999 (E). In brief, 250 mg of oil were saponified with a solution of ethanolic potassium hydroxide by boiling under reflux. The unsaponifiable matter was isolated by solid-phase extraction on an aluminium oxide column (Merck, Darmstadt, Germany) onto which fatty acid anions were retained and sterols passed through. The sterol fraction was separated from other unsaponifiable matter by thin-layer chromatography (Merck, Darmstadt, Germany), re-extracted from the TLC material, and afterwards, the composition of the sterol fraction was determined by GLC using betulin as internal standard. The compounds were separated on an SE 54 CB (50 m long, 0.32 mm ID, 0.25 m film thickness) (Macherey-Nagel, Dren, Germany). Further parameters were as follows: hydrogen as carrier gas, split ratio 1:20, injection and detection temperature adjusted to 320 °C, temperature program, 245 °C to 260 °C at 5 °Cmin −1. Peaks were identified either by standard compounds (-sitosterol, campesterol, stigmasterol), by a mixture of sterols isolated from rape seed oil (brassicasterol) or by a mixture of sterols isolated from sunflower oil (7-avenasterol, 7-stigmasterol, and 7-campesterol). 
All other sterols were firstly identified by GC-MS and afterward analyzed by comparison of retention times. Determination of antioxidant activity by DPPH assay The antioxidant activity of cold-pressed peanut oil was estimated spectro-photometrically using the DPPH assay. For this purpose, the antioxidant activity of the oil was expressed as percentage of de-colorization of a solution of the stable radical DPPH (2,2-diphenyl-1-picrylhydrazyl radical) at 517 nm. The DPPH reagent was dissolved in hexane and the stock solution of 0.25 mL with a concentration of 0.5 M was used for the determination of the antioxidant activity. -tocopherol at different concentrations (100-1000 mgL −1 ) was dissolved in hexane and used as standard for the preparation of the calibration curve. Statistical analyses A one-way ANOVA was used to examine the level of every particular minor and major compound by considering the type of oil. The significance level of all statistical analyses was set at 0.05. The level of significance of difference between the percentages of fatty acids, level of tocopherols, level of phytosterols, and values of antioxidant activity measured by the DPPH assay mean values was determined at 5% by a one-way ANOVA using Tukey's test. This treatment was carried out using SPSS v.16.0 software, IBM Corporation, USA. Fatty acid composition Fatty acids are very important to human nutrition and in vegetable oils they are manly presented as triacylglycerols (TAG). Furthermore, they are classified as saturated, monounsaturated (MUFA) and polyunsaturated fatty acid (PUFA). The unsaturated fatty acids are classified into well-known series, such as omega-9, omega-6 and omega-3. Omega-9 fatty acids are considered as non-essential for the human diet and omega-6 and omega-3 are essential since these fatty acids cannot be synthetized by mammals and they must be obtained from the diet. The fatty acid composition of the cold pressed peanut oil from the "Virginia" variety is presented in Table 1. Regarding saturated fatty acids, the coldpressed oil from peanuts had the highest percentage of palmitic acid (10.06±0.00%). Stearic acid was presented in significantly lower levels (4.40±0.01%). Oleic acid and linoleic acids as unsaturated fatty acids were the most dominant with levels of 34.19±0.01 and 36.13±0.01%, respectively. Lignoceric acid was abundant at a level of 3.93±0.00%. Paullinic (Omega -7) fatty acid was detected at the level of 1.37±0.00% in the sample of peanut oil organically produced from the territory of Macedonia. The fatty acid profile was in good agreement with data published by zcan and Seven who stated that palmitic and stearic acids were the most abundant saturated fatty acids in a Turkish variety of peanut oils (in the range of 8.70% to 13.03% for palmitic acid and 3.77% to 4.53% for stearic acid) (zcan and Seven, 2003). However, Carrn and Carelli stated that the degree of saturation in peanuts oil is dependent on temperature, genotype, seed maturity, and growth location as well as the interaction of all these factors. More precisely, seed development in regions with lower temperatures usually produces oil with higher levels of unsaturated fatty acid. According to the results published by Carrn and Carelli, Macedonian peanut oil belongs to the relatively high linoleic acid oil category (36.13%). As can be seen from table 1, the highest abundance with a percentage of 37.05±0.03% was attributed to monounsaturated fatty acids (MUFA). Oleic acid was the most dominant monounsaturated fatty acid. 
Furthermore, polyunsaturated fatty acid (PUFA) was present at the level of 36.62±0.01% with linoleic acid as the most dominant fatty acid. Finally, the lowest percentage belonged to saturated fatty acids (SFA) with 18.49±0.01% of the total fatty acid composition. The total fatty acids dominated 92% whereas the rest belonged to traces below 0.05% each. Oxidative stability Regarding the results from Table 2, the coldpressed peanut oil from Macedonian "Virginia" variety had a very good oxidative stability index (OSI) of 6.3±0.3 h which corresponded to over 18% of saturated fatty acids in this oil. Furthermore, the relatively high oxidative stability of this oil can be explained by the high content of monounsaturated oleic acid with an abundance of 34.19%. On the other hand, the remarkable stability of this oil can be explained by the Maillard Reaction Products (MRPs) formed during roasting, which are products of the reactions among reducing sugars and amino acids at elevated temperatures and low moisture. Durmaz et al. explained that the MRPs obtained from model systems could also retard the oxidative deterioration of oils. According to their findings, during the process of roasting apricot kernel seeds, the degradation of naturally occurring antioxidants and formation of antioxidant MRPs occured together. Under severe roasting conditions, the degradation rate might have been higher than the formation of MRPs and the total antioxidant capacity could be reduced. According to the findings of O' Keefe et al., high oleic peanut oils with 75.6% oleic acid had better oxidative stability than normal oil (O' ). Almost equal amounts of oleic and linoleic acid and a significant amount of Vitamin-E-active compounds resulted in oxidative stability for over 6 h. Peroxide value, free fatty acids (acidity) and specifi c extinction Peroxide value and specific extinction at 232 nm summarize the oxidative state, while the amount of free fatty acids reveals some information about the quality of the raw material used. The content of free fatty acids in Macedonian peanut oil was significantly below the limit defined for cold-pressed and Virgin oils by the Codex Standard for Named Vegetable Oils as 2.0%. This was an indication of the use of high quality raw material for the preparation of cold-pressed peanut oil. Vitamin-E-active compounds The fatty acid profile of the oil is not the only indicator for the identity and quality of an oil. Vitamin-E-active compounds such as tocopherols and tocotrienols are very important minor compounds responsible for the oxidative stability of the oil and its antioxidant activity. Table 3 shows almost equal amounts of and tocopherols (14.38±0.20 and 14.51±0.20 mg100 g −1 of oil, respectively). The total level of Vitamin-Eactive compounds in Macedonian peanut oil was similar to the results published by Jonnala et al. According to their findings, the peanut oil from the variety "Tamrun 96" had the highest tocopherol content with 32.2 mg100 g −1 oil (). Macedonian peanut oil from the "Virginia" variety had 29.56±0.4 mg100 g −1 of oil of total Vitamin-E-active compounds. The levels of -tocotienol (0.35±0.00 mgkg −1 of oil) and -tocopherol (0.31±0.00 mgkg −1 of oil) were in good agreement with the published results of Carrn and Carelli (Carrn and Carelli, 2010). Antioxidant activity by DPPH assay Oxidative stability and antioxidant activity are two parameters which explain the resistance of oils against oxidative deterioration by oxygen during heating and storage. 
The antioxidant effectiveness of an oil depends on the extent to which the antioxidant participates in side reactions, such as reactions with species other than peroxyl radicals. These side reactions decrease the level of antioxidant-active compounds such as tocopherols, phytosterols and phenolic compounds and lead to the formation of active radicals able to initiate new oxidation reaction chains. Statements in our work are very similar to those made by Tuberoso et al., Kostadinović Veličkovska and Mitrev, and Kostadinović Veličkovska et al., who concluded that free radical scavenging was mainly influenced by the tocopherol content and polyunsaturated fatty acids in oil.

Sterol composition in oils

Phytosterols are minor unsaponifiable compounds which are predominant in cold-pressed oils and almost absent in refined oils. Their presence in the oils is frequently connected to higher antioxidant activity. The published results for the total level of phytosterols in peanut oil were in the range of 900-4344 mg kg−1 of oil (Carrín and Carelli, 2010). Table 4 shows the level of particular phytosterols as well as the total content of phytosterols in peanut oil (2658.59±74.82 mg kg−1 of oil). β-sitosterol was the major phytosterol with an amount of 1812.21±22.17 mg kg−1 oil. Campesterol was the second most dominant phytosterol in cold-pressed peanut oil with a level of 320.55±17.07 mg kg−1 oil. Δ5-avenasterol was found to be the third most abundant sterol with a level of 236.16±14.18 mg kg−1 oil. It was found that Δ5-avenasterol has an antipolymerization effect, which could protect the oil from oxidation during prolonged heating at high temperatures. The significant oxidative stability of cold-pressed peanut oil can be attributed to the presence of higher levels of this phytosterol, apart from the fatty acid profile and vitamin-E-related compounds in the oil. Stigmasterol was present in the amount of 133.12±12.51 mg kg−1 oil and all other phytosterols were present in amounts below 35 mg kg−1 oil. The results of the phytosterol composition obtained from Macedonian peanut oil from the "Virginia" variety were in good agreement with the results of. According to their findings, β-sitosterol was detected at levels of 55.3 to 61.6% of the total phytosterols in all peanut varieties. The abundance of β-sitosterol in Macedonian peanut oil was 68.16% of the total phytosterol content. Campesterol was present at 13.7 to 17.2% in the samples from Bolivia and Argentina. The content of campesterol in Macedonian peanut oil was 12.06%. However, Macedonian peanut oil from the "Virginia" variety contains almost double the quantity of Δ5-avenasterol in comparison to stigmasterol. In the samples of peanut oil from Bolivia and Argentina almost equal contents of both phytosterols were detected.

CONCLUSIONS

Cold-pressed peanut oil obtained from organically produced peanuts from the Macedonian "Virginia" variety was investigated for the first time. The analytical methods presented in this study can be used for fast and accurate chemical characterization of the major and minor constituents present in vegetable oils. Results from this study show that the most important class of compounds responsible for the antioxidant activity of peanut oil measured by the DPPH assay was the Vitamin-E-active compounds. Almost equal amounts of α- and γ-tocopherol indicated peanut oil as a valuable source of Vitamin-E-active compounds with a total amount of 29.56±0.4 mg 100 g−1 oil.
Phytosterols are the second most important class of compounds which can participate in the antioxidant potential of the oil. Apart from β-sitosterol as the most abundant phytosterol (1812.21±22.17 mg kg−1 oil), the other three phytosterols, Δ5-avenasterol (236.16±14.18 mg kg−1 oil), campesterol (320.55±17.07 mg kg−1 oil), and stigmasterol (133.12±12.51 mg kg−1 oil), can contribute to the antioxidant potential of the oil. However, the antioxidant assay indicated the Vitamin-E-active compounds as the most active against the DPPH radical. Finally, oxidative stability is strongly influenced by the degree of unsaturation of the fatty acids as well as by the effect of roasting.

ACKNOWLEDGMENTS

Financial support from the Deutscher Akademischer Austauschdienst (DAAD) for Sanja Kostadinović Veličkovska as a participant in the SOE-DAAD project "Catalyst for South Eastern Europe - MatCatNet" is gratefully acknowledged. Augustin C. Mot from Babeș-Bolyai University, Cluj-Napoca, is acknowledged for his scientific advice.
/* Copyright 2019 The TensorFlow Authors. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ==============================================================================*/ #include "tensorflow/core/profiler/convert/xplane_to_op_stats.h" #include "tensorflow/core/platform/types.h" #include "tensorflow/core/profiler/convert/step_events_to_steps_db.h" #include "tensorflow/core/profiler/convert/xplane_to_op_metrics_db.h" #include "tensorflow/core/profiler/convert/xplane_to_step_events.h" #include "tensorflow/core/profiler/protobuf/hardware_types.pb.h" #include "tensorflow/core/profiler/utils/event_span.h" #include "tensorflow/core/profiler/utils/hardware_type_utils.h" #include "tensorflow/core/profiler/utils/tf_xplane_visitor.h" #include "tensorflow/core/profiler/utils/xplane_schema.h" #include "tensorflow/core/profiler/utils/xplane_utils.h" namespace tensorflow { namespace profiler { namespace { DeviceCapabilities GetDeviceCapFromXPlane(const XPlane& device_plane) { DeviceCapabilities cap; XPlaneVisitor plane = CreateTfXPlaneVisitor(&device_plane); if (auto clock_rate_khz = plane.GetStats(kDevCapClockRateKHz)) { cap.set_clock_rate_in_ghz(clock_rate_khz->int64_value() / 1000000.0); } if (auto core_count = plane.GetStats(kDevCapCoreCount)) { cap.set_num_cores(core_count->int64_value()); } // Set memory bandwidth in bytes/s. if (auto memory_bw = plane.GetStats(kDevCapMemoryBandwidth)) { cap.set_memory_bandwidth(memory_bw->int64_value()); } if (auto memory_size_in_bytes = plane.GetStats(kDevCapMemorySize)) { cap.set_memory_size_in_bytes(memory_size_in_bytes->uint64_value()); } if (auto cap_major = plane.GetStats(kDevCapComputeCapMajor)) { cap.mutable_compute_capability()->set_major(cap_major->int64_value()); } if (auto cap_minor = plane.GetStats(kDevCapComputeCapMinor)) { cap.mutable_compute_capability()->set_minor(cap_minor->int64_value()); } return cap; } PerfEnv GetPerfEnvFromXPlane(const XPlane& device_plane) { PerfEnv result; DeviceCapabilities cap = GetDeviceCapFromXPlane(device_plane); result.set_peak_tera_flops_per_second(GetFlopMaxThroughputPerSM(cap) / 1000 * cap.num_cores()); result.set_peak_hbm_bw_giga_bytes_per_second(cap.memory_bandwidth() / 1e9); result.set_ridge_point(result.peak_tera_flops_per_second() * 1000 / result.peak_hbm_bw_giga_bytes_per_second()); return result; } void SetRunEnvironment(int32 accelerator_count, RunEnvironment* env) { // Currently, we only support profiling one host and one program. env->set_host_count(1); env->set_task_count(1); env->set_device_type(accelerator_count > 0 ? "GPU" : "CPU"); env->set_device_core_count(accelerator_count); } } // namespace OpStats ConvertXSpaceToOpStats(const XSpace& space) { const XPlane* host_plane = FindPlaneWithName(space, kHostThreads); std::vector<const XPlane*> device_planes = FindPlanesWithPrefix(space, kGpuPlanePrefix); OpStats op_stats; StepEvents step_events; // Convert device planes. 
OpMetricsDbCombiner op_metrics_db_combiner( op_stats.mutable_device_op_metrics_db()); SetRunEnvironment(device_planes.size(), op_stats.mutable_run_environment()); for (const XPlane* device_trace : device_planes) { if (!op_stats.has_perf_env()) { *op_stats.mutable_perf_env() = GetPerfEnvFromXPlane(*device_trace); } const PerfEnv& perf_env = op_stats.perf_env(); OpMetricsDb device_op_metrics_db = ConvertDeviceTraceXPlaneToOpMetricsDb( *device_trace, perf_env.peak_tera_flops_per_second(), perf_env.peak_hbm_bw_giga_bytes_per_second()); op_metrics_db_combiner.Combine(device_op_metrics_db); CombineStepEvents(ConvertDeviceTraceXPlaneToStepEvents(*device_trace), &step_events); } // Convert a host plane. bool has_device = !device_planes.empty(); if (host_plane) { *op_stats.mutable_host_op_metrics_db() = ConvertHostThreadsXPlaneToOpMetricsDb(*host_plane); CombineStepEvents( ConvertHostThreadsXPlaneToStepEvents( *host_plane, /*use_device_step_events=*/has_device, step_events), &step_events); } StepEvents nonoverlapped_step_events = ToNonOverlappedStepEvents(step_events); *op_stats.mutable_step_db() = ConvertStepEventsToStepDb(has_device, nonoverlapped_step_events); *op_stats.mutable_device_op_metrics_db()->mutable_precision_stats() = ComputePrecisionStats(nonoverlapped_step_events); return op_stats; } } // namespace profiler } // namespace tensorflow
People of color have a higher "pollution burden" in the U.S. Buying things–especially things that require a lot of shipping–causes air pollution. White people in the U.S. are bigger spenders, but the pollution their dollars create primarily affects people of color. If you are black or Hispanic in the United States, your environmental footprint is probably much lighter than the average white American. But at the same time, you probably breathe in much more pollution. This disparity has long been felt in communities of color. It’s formed the backbone of environmental justice campaigns calling for protections and green-infrastructure interventions in those areas that are disproportionately affected by pollution. Much research has been done to back up the fact that minorities, on average, are subjected to poorer air quality than white Americans. But until now, there’s been little data to prove that their consumption habits contribute far less to atmospheric pollution. White people in America, on average, breathe in around 17% less pollution than they create. Conversely, black and Hispanic Americans shoulder a “pollution burden” of 56% and 63% more exposure, respectively, than they contribute to. The reasons for this disparity come down to two main factors, says University of Washington civil and environmental engineering professor and report author Christopher Tessum: how much people consume, and how polluted the air around them is. “The first thing we needed to see was how much money people are spending, by race and ethnicity,” Tessum says. They found that information from the Bureau of Labor Statistics, which surveys people about their spending habits. Then, they had to link that data with statistics from the Bureau of Economic Analysis, which tracks money going into and out of businesses. “That allowed us to see how money flows through the economy,” Tessum says. “If you buy an iPhone, that money will go to the retailer, who then buys from the manufacturer, who pays for shipping and purchases raw materials and electricity. That activity correlates with emissions from the industry.” Once the researchers had data on the corresponding emissions from economic activity, they ran the emissions data through an air quality model to see where pollution collects. Overwhelmingly, economic activity originates with white Americans, and pollution concentrates around communities of color. In the U.S., white people are much more likely to possess wealth. That affects both spending patterns, Tessum says, and where people live: Poorer neighborhoods are often sited closer to factories or highways, both of which contribute significantly to pollution levels. The stakes for understanding this are high: Poor air quality contributes to 63% of all deaths caused by environmental factors. There was a third factor that Tessum and his team of researchers considered in drawing out the disparate emissions burden in the U.S., and that was what people spent their money on. But the researchers found that even more conscious consumerism didn’t make much of a difference in the overall environmental effect of purchasing goods. Because the researchers tracked the emissions associated with a product along its entire supply chain–including sourcing, manufacturing, and shipping–they found that it didn’t matter so much what type of product a person chose to purchase. What had the greatest bearing on emissions was the volume of demand for various products, which drove up pollution levels across the supply chain. 
How much you buy is just as–if not more–important than what you buy. To Tessum, the findings indicate the importance of stricter emissions regulations, “which have been effective in reducing pollution levels,” he says. In fact, over the data time frame of the report, which spanned 1997 to 2015, pollution levels actually declined overall even as economic activity rose, which Tessum says indicates that stricter standards around emissions would not hamper economic growth. He notes that this is a timely finding, given the momentum around the Green New Deal proposed by Representative Alexandria Ocasio-Cortez, which calls for decarbonizing industries while maintaining economic stability. The Green New Deal will not happen overnight, and in the meantime, this new data could lend support for more short-term initiatives to close the emissions burden gap between white Americans and communities of color.
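The "pollution burden" figures quoted above come down to simple arithmetic: compare the pollution a group is exposed to with the pollution its consumption causes. The short sketch below is purely illustrative; the function and the exposure ratios are restatements of the percentages reported in the article, not data taken from the underlying study.

def pollution_burden(exposure, caused):
    # Positive: the group breathes more pollution than its spending creates.
    # Negative: it breathes less than it creates.
    return (exposure - caused) / caused

# Ratios implied by the article's figures (exposure relative to pollution caused):
# white Americans ~0.83, Black Americans ~1.56, Hispanic Americans ~1.63.
for group, exposure_ratio in [("white", 0.83), ("Black", 1.56), ("Hispanic", 1.63)]:
    print(group, format(pollution_burden(exposure_ratio, 1.0), "+.0%"))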
Ahead of the visit by the United States of America’s Secretary of State, Mr John Kerry, the Presidency has said that reporters would not be allowed to cover it. Only photographers and videographers will be allowed to establish Kerry’s arrival as well as the bilateral meeting between him and his host, President Muhammadu Buhari. According to a media advisory issued on Monday, Buhari will receive Kerry at the State House at 3pm today (Tuesday). Kerry will also meet with some governors from the north east. “The Secretary of State will not grant any press interview after the meeting,” the advisory reads. ‎A statement issued by the US Department of State said Kerry would travel to Sokoto, and Abuja, Nigeria, on August 23-24.
Chameleon v3.0 released; it now supports changing the Windows desktop wallpaper. Please go to http://chameapp.azurewebsites.net/chamedesk/chamedesk_setup.exe to download the desktop tool that enables this feature.

****************************

Chameleon comes from a quite simple idea – to bring a little more comfort every time your Windows 8.1 starts or locks. How does it make that happen? First, it keeps watching online services such as "photo of the day", "image search" and "photo sharing" around the world, as well as your local collections. Then, you can have Chameleon change the background picture of your lock screen every 15 minutes, 30 minutes, 1 hour, etc.

For the "photo of the day" services, Chameleon checks the photos from Bing Photo of the Day, NASA Astronomy Picture of the Day, National Geographic magazine Photo of the Day, Flickr Interestingness pictures, Wikipedia Picture of the Day, etc. You can also create your own picture channel online by selecting your preferred keywords. Chameleon will search the internet with those keywords and fill your own channel with fascinating pictures. Thanks to Google Image Search and Baidu Image Search. (For users from mainland China, Baidu is recommended, for reasons you know and I know…)

All the pictures come from free and public internet services, and the copyrights belong to their owners. The application obtains the pictures from the internet on the understanding that the authors of these pictures have already accepted the corresponding picture-sharing disclosure agreements. No individual or organization has the right to use these images for commercial purposes. Any legal issue arising from unauthorized use of the images could be pursued by the copyright holders according to local laws.

Any sharing of this application is appreciated. You are also welcome to share some good photo channels with us and we will add them to the coming version. Any advice can be sent to: jar.bob@gmail.com

Note: The automatic change of the lock screen DOESN'T WORK on an inactivated Windows 8.1; please check your Windows activation status before leaving us an unsatisfied note.
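For readers curious about the mechanics, the basic idea (fetch a "photo of the day", then tell Windows to use it) can be sketched in a few lines of Python. This is not Chameleon's actual code: the Bing HPImageArchive endpoint and its JSON field names are assumptions based on the publicly known feed, and the Win32 call shown here changes only the desktop wallpaper, not the lock screen.

import ctypes
import json
import os
import urllib.request

BING_FEED = "https://www.bing.com/HPImageArchive.aspx?format=js&idx=0&n=1"

def fetch_bing_photo_of_the_day(dest="wallpaper.jpg"):
    # Download today's Bing image and return its absolute path.
    meta = json.load(urllib.request.urlopen(BING_FEED))
    url = "https://www.bing.com" + meta["images"][0]["url"]
    urllib.request.urlretrieve(url, dest)
    return os.path.abspath(dest)

def set_windows_wallpaper(path):
    # SystemParametersInfoW(SPI_SETDESKWALLPAPER, ...); Windows only.
    SPI_SETDESKWALLPAPER = 20
    # 3 = SPIF_UPDATEINIFILE | SPIF_SENDWININICHANGE
    ctypes.windll.user32.SystemParametersInfoW(SPI_SETDESKWALLPAPER, 0, path, 3)

if __name__ == "__main__":
    set_windows_wallpaper(fetch_bing_photo_of_the_day())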
The purpose was to examine the effects of subjects' familiarity with the experimenter, and of the experimenter's task orientation, on preschool children's communication in a problem-solving situation. After subjects were given one of three different pieces of information about a target card, they were asked to identify the target. However, subjects were unable to identify the target without asking questions. The main results were as follows: (1) 5-year-olds asked more questions and chose the target more often than 4-year-olds; (2) the high-familiarity group asked more questions and chose the target more often than the low-familiarity group; (3) the task-oriented group asked more questions and chose the target more often than the non-oriented group; and (4) the interaction of familiarity and task orientation was not significant. These results suggest that familiarity and task orientation affected the rate of communication occurrence in problem solving by preschool children.
To CBS Sports blogger R.J. Anderson, Colin Kaepernick will do just fine if he never plays football again. On the other hand, when Tim Tebow resumes life as a professional athlete, he will do so as a "divider." Anderson blogged about both of these athletes in transition today, but he praises Kaepernick as a humanitarian, without mentioning his attacks on the flag and America’s police officers: "While Kaepernick waits for his next NFL opportunity, he is busy raising money to fly food and water to people in Somalia. "Kaepernick will reportedly stand for the anthem next season. But if this is where Kap’s NFL career ends, he should be fine; since coming into the league in 2011 he’s earned more than $43 million." As for the former football star who will play for the Mets' Class A baseball team in Columbia, S.C., Anderson joined with a rival of that team in mocking Tebow. Anderson wrote that Tebow divides everyone and "It didn’t take long for people to make jokes -- because Tebow can’t walk down the road without someone making a joke." <<< Please support MRC's NewsBusters team with a tax-deductible contribution today. >>> The jokester is Columbia's South Atlantic League rival, the Greenville Drive, Boston's Class A team. Anderson posted three Twitter messages issued by the Greenville team, all insulting to Tebow: Not jealous. We prefer developing future MLBers, not PR stunts https://t.co/kSllgN5Vmn — Greenville Drive (@GreenvilleDrive) March 20, 2017 We're not judging...okay, maybe we are. Big difference in HS kids playing in Class A and 28-year-old wannabes https://t.co/NmUKODaPiA — Greenville Drive (@GreenvilleDrive) March 20, 2017 While Anderson saw fit to brag up Kaepernick for his humanitarian work, he refused to acknowledge the many good things happening through the Tim Tebow Foundation. The Tebow CURE Hospital is an outreach that has provided physical and spiritual healing through orthopedic surgeries for 650 children in the Philippines who were unable to afford health care. The Night to Shine program serves the special needs community. Additionally, Timmy’s Playrooms are being built in children’s hospitals around the world to provide charitable support for patients and their families. The Tebow Foundation's Orphan Care outreach serves hundreds of children in four countries who were left homeless or abandoned. They are receiving food, shelter, clothing, medical care and education expenses. The Foundation also assists families with adoption expenses and hospital visits. Tebow receives no income from his foundation. At least the Greenville Drive stopped its negative Tweets after some people objected. That's more than can be said for Anderson's bashing of a true role model and humanitarian.
from flask import Flask

# Note: config_option and bootstrap are assumed to be defined elsewhere in the
# package (typically a config module mapping names such as 'development' or
# 'production' to config classes, and a flask_bootstrap.Bootstrap() instance).

def create_app(config_name):
    newsApp = Flask(__name__)
    newsApp.config.from_object(config_option[config_name])

    # Initialize extensions
    bootstrap.init_app(newsApp)

    # Register blueprints
    from .lead import crucial as lead_blueprint
    newsApp.register_blueprint(lead_blueprint)

    # Set up request handling (configure_request is defined in the package's request module)
    from .request import configure_request
    configure_request(newsApp)

    return newsApp
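A typical way to use this application factory is a small run script such as the one below. The package name "newsapp" and the 'development' key are placeholders; the real package name and the available keys of config_option depend on how the project is laid out.

# run.py - hypothetical entry point for the factory above
from newsapp import create_app  # placeholder package name

app = create_app('development')  # must match a key in config_option

if __name__ == '__main__':
    app.run(debug=True)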
Efficacy of high-dose methotrexate in childhood acute lymphocytic leukemia: analysis by contemporary risk classifications. High-dose methotrexate (HDMTX) added to a basic regimen of chemotherapy proved superior to cranial irradiation and sequentially administered drug pairs (RTSC) in prolonging complete remissions in children with "standard-risk" acute lymphocytic leukemia. To extend this result to more contemporary risk groups, we reclassified the patients according to methods of the Pediatric Oncology Group (POG), the Childrens Cancer Study Group (CCG), the Rome workshop, and St Jude Total Therapy Study XI. By life table analysis, 70% to 78% of patients with a favorable prognosis would remain in continuous complete remission (CCR) at 4 years if treated with HDMTX. Uniformly lower CCR rates could be expected with RTSC, especially in St Jude better-risk patients. HDMTX also would show greater efficacy than RTSC in the CCG average-risk and POG poor-risk groups, but the results appear inferior to those being achieved with intensified regimens for high-risk leukemia. Although both therapies would provide adequate CNS prophylaxis in favorable-risk groups, RTSC would offer greater protection in patients classified as being in a worse-risk group by St Jude criteria. We conclude that HDMTX-based therapy, as described in this report, would be most effective in patients with a presenting leukocyte count of less than 25 x 10/L, of the white race, aged 2 to 10 years, and having leukemic cell hyperdiploidy without translocations.
/** * Algorithm to distribute number of records to specified number of folders. * * @param numberOfFolders - number of available folders * @param numberOfRecords - number of records that need to be distributed in all available folders * @return an array with number of records to be created for each folder */ private int[] generateRandomValues(int numberOfFolders, int numberOfRecords) { int[] aux = new int[numberOfFolders+1]; int[] generatedValues = new int[numberOfFolders]; Random r = new Random(); for(int i = 1;i < numberOfFolders;i++) { aux[i] = r.nextInt(numberOfRecords)+1; } aux[0] = 0; aux[numberOfFolders] = numberOfRecords; Arrays.sort(aux); for(int i = 0;i < numberOfFolders;i++) { generatedValues[i] = aux[i+1] - aux[i]; } return generatedValues; }
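The same partitioning idea (draw numberOfFolders - 1 random cut points in [1, numberOfRecords], sort them, and take successive differences so the shares always sum to numberOfRecords) can be stated compactly. The Python sketch below is a re-statement of the Java method above for illustration, not part of the original code.

import random

def random_partition(num_folders, num_records):
    # num_folders - 1 random cut points between 1 and num_records (inclusive)
    cuts = sorted(random.randint(1, num_records) for _ in range(num_folders - 1))
    bounds = [0] + cuts + [num_records]
    # successive differences give one (possibly zero) share per folder
    return [bounds[i + 1] - bounds[i] for i in range(num_folders)]

# Example: distribute 10 records across 3 folders, e.g. [4, 0, 6]
print(random_partition(3, 10))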
X Ray Based Covid 19 Detection System

The World Health Organization (WHO) recognized COVID-19 as the cause of a worldwide pandemic in 2019. The disease is highly contagious, and those who are infected can quickly pass it to others with whom they come into contact. As a result, early detection is an effective way to stop the virus from spreading further. Another disease that, like COVID-19, can reach pandemic scale is pneumonia. It is often significantly dangerous for youngsters, individuals over 65 years of age, and people with health issues or weakened immune systems. In this paper, we have classified COVID-19 and pneumonia using deep transfer learning. We have used the VGG16 architecture, trained on a collected dataset of COVID-19, pneumonia and normal X-ray images. Our main objective is to ease the work of radiologists by providing a Graphical User Interface which takes an X-ray as input and can directly distinguish whether the patient has COVID-19, pneumonia, or is normal.
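A minimal sketch of the transfer-learning setup described in the abstract, written with tf.keras. The image size, head layers, optimizer and directory layout are assumptions for illustration; the abstract does not specify them, and the actual training pipeline may differ.

import tensorflow as tf

# Pre-trained VGG16 backbone without its classifier head
base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False  # freeze convolutional layers for transfer learning

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(3, activation="softmax"),  # COVID-19 / pneumonia / normal
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# Hypothetical folder layout: xray_data/{covid,pneumonia,normal}/*.png
train_ds = tf.keras.utils.image_dataset_from_directory(
    "xray_data", image_size=(224, 224), label_mode="categorical", batch_size=32)
# Apply VGG16's expected input preprocessing to each batch
train_ds = train_ds.map(
    lambda x, y: (tf.keras.applications.vgg16.preprocess_input(x), y))

model.fit(train_ds, epochs=10)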
/** * Verifies if one of the {@link KogitoCapability#ENGINES} are present in the classpath. * * @param capabilities */ @BuildStep void verifyCapabilities(final Capabilities capabilities) { if (KogitoCapability.ENGINES.stream().noneMatch(kc -> capabilities.isPresent(kc.getCapability()))) { throw this.exceptionForEngineNotPresent(); } }
/** * The RuntimeProcessor contains the code and configuration necessary to plug in a new runtime language, such as Javascript. * This default runtime processor is used to represent the default Java runtime. */ public class DefaultRuntimeProcessor implements IRuntimeProcessor, Serializable { public transient LayeredSystem system; /** Destination name you can use to talk to this runtime. (TODO: should this be a list?) */ public String destinationName; public String runtimeName; boolean loadClassesInRuntime = true; boolean useContextClassLoader = true; boolean useLocalSyncManager = true; ArrayList<String> syncProcessNames; public DefaultRuntimeProcessor(String rtName, boolean useContextClassLoader) { runtimeName = rtName; // TODO: do we need a better way to define this? if (rtName != null && (rtName.equals("android") || rtName.equals("gwt"))) this.useContextClassLoader = useContextClassLoader; } public String getDestinationName() { return destinationName; } public String getRuntimeName() { return runtimeName; } public boolean initRuntime(boolean fromScratch) { return false; } public void saveRuntime() { } public void initAfterRestore() { } public void start(BodyTypeDeclaration def) { } public void process(BodyTypeDeclaration def) { } public void stop(BodyTypeDeclaration def) { } public List<SrcEntry> getProcessedFiles(IFileProcessorResult model, Layer genLayer, String buildSrcDir, boolean generate) { return model.getProcessedFiles(genLayer, genLayer.buildSrcDir, generate); } public String getStaticPrefix(Object def, JavaSemanticNode refNode) { // Used only by the JSRuntimeProcessor throw new UnsupportedOperationException(); } /** Called after starting all types */ public void postStart(LayeredSystem sys, Layer genLayer) { } /** Called after stopping all types */ public void postStop(LayeredSystem sys, Layer genLayer) { } /** Called after processing all types */ public void postProcess(LayeredSystem sys, Layer genLayer) { } public String[] getCompiledFiles(String lang) { return new String[0]; } public List<String> getCompiledFiles(String lang, String refTypeName) { return Collections.emptyList(); } public int getExecMode() { return 0; } public String replaceMethodName(LayeredSystem sys, Object methObj, String name) { return name; } public UpdateInstanceInfo newUpdateInstanceInfo() { return new UpdateInstanceInfo(); } public void resetBuild() { } public void clearRuntime() { } public List<SrcEntry> buildCompleted() { return null; } public boolean getCompiledOnly() { return false; } public boolean getNeedsAnonymousConversion() { return false; } public void applySystemUpdates(UpdateInstanceInfo info) { } public boolean getNeedsEnumToClassConversion() { return false; } public boolean needsSrcForBuildAll(Object cl) { return false; } public void setLoadClassesInRuntime(boolean val) { this.loadClassesInRuntime = val; } public boolean getLoadClassesInRuntime() { return loadClassesInRuntime; } public void setUseContextClassLoader(boolean val) { useContextClassLoader = val; } public boolean getUseContextClassLoader() { return useContextClassLoader; } public List<String> getSyncProcessNames() { return syncProcessNames; } public void setSyncProcessNames(List<String> syncProcessNames) { this.syncProcessNames = new ArrayList<String>(syncProcessNames); } public void addSyncProcessName(String procName) { if (syncProcessNames == null) syncProcessNames = new ArrayList<String>(); syncProcessNames.add(procName); } public String toString() { return runtimeName + " runtime"; } public void 
setLayeredSystem(LayeredSystem sys) { system = sys; sys.enableRemoteMethods = false; // Don't look for remote methods in the JS runtime. Changing this to true is a bad idea, even if you wanted the server to call into the browser. We have some initialization dependency problems to work out. } public LayeredSystem getLayeredSystem() { return system; } public String runMainMethod(Object type, String runClass, String[] runClassArgs) { if (system.options.verbose) System.out.println("Warning: - not running main method for: " + runClass + " in runtime: " + getRuntimeName()); return null; } public String runStopMethod(Object type, String runClass, String stopMethod) { if (system.options.verbose) System.out.println("Warning: - not running stopMethod for: " + runClass + " in runtime: " + getRuntimeName()); return null; } protected ArrayList<IRuntimeProcessor> syncRuntimes = new ArrayList<IRuntimeProcessor>(); /** The list of runtimes that we are synchronizing with from this runtime. In Javascript this is the default runtime */ public List<IRuntimeProcessor> getSyncRuntimes() { return syncRuntimes; } public boolean usesThisClasspath() { return true; } public boolean usesLocalSyncManager() { return useLocalSyncManager; } public boolean equals(String runtimeName) { if (runtimeName == null) return getRuntimeName().equals(DEFAULT_RUNTIME_NAME); return getRuntimeName().equals(runtimeName); } public int hashCode() { // since null and 'java' are equivalent if (getRuntimeName().equals(DEFAULT_RUNTIME_NAME)) return 0; return getRuntimeName().hashCode(); } public static boolean compareRuntimes(IRuntimeProcessor proc1, IRuntimeProcessor proc2) { return proc1 == proc2 || (proc1 != null && proc1.equals(proc2)) || (proc2 != null && proc2.equals(proc1)); } private static File getRuntimeFile(LayeredSystem sys, String runtimeName) { File typeIndexMainDir = new File(sys.getStrataCodeDir("idx")); return new File(typeIndexMainDir, runtimeName + ".ser"); } public static IRuntimeProcessor readRuntimeProcessor(LayeredSystem sys, String runtimeName) { if (runtimeName.equals(IRuntimeProcessor.DEFAULT_RUNTIME_NAME)) return null; File runtimeFile = getRuntimeFile(sys, runtimeName); ObjectInputStream ois = null; FileInputStream fis = null; try { ois = new ObjectInputStream(fis = new FileInputStream(runtimeFile)); Object res = ois.readObject(); if (res instanceof IRuntimeProcessor) { IRuntimeProcessor runtime = (IRuntimeProcessor) res; runtime.initAfterRestore(); return runtime; } else { sys.error("Invalid runtime file contents: " + runtimeFile); runtimeFile.delete(); } } catch (InvalidClassException exc) { System.out.println("runtime processor index - version changed: " + runtimeFile); runtimeFile.delete(); } catch (IOException exc) { System.out.println("*** can't read runtime processor file: " + exc); } catch (ClassNotFoundException exc) { System.out.println("*** can't read runtime processor file: " + exc); } finally { FileUtil.safeClose(ois); FileUtil.safeClose(fis); } return null; } public static void saveRuntimeProcessor(LayeredSystem sys, IRuntimeProcessor runtime) { if (runtime == null) // Default runtime - no need to save return; File runtimeFile = getRuntimeFile(sys, runtime.getRuntimeName()); ObjectOutputStream os = null; try { os = new ObjectOutputStream(new FileOutputStream(runtimeFile)); os.writeObject(runtime); } catch (IOException exc) { System.err.println("*** can't save runtime processor in index " + exc); } finally { FileUtil.safeClose(os); } } public Object invokeRemoteStatement(BodyTypeDeclaration type, 
Object inst, Statement expr, ExecutionContext ctx, ScopeContext target) { throw new UnsupportedOperationException(); } public boolean supportsSyncRemoteCalls() { return true; } public boolean supportsTagCaching() { return true; } public boolean hasDefinitionForType(String typeName) { return system.getSrcTypeDeclaration(typeName, system.buildLayer, true, false, true, system.buildLayer, false) != null; } }
/** * Generic method for sharing that Deliver some data to someone else. Who * the data is being delivered to is not specified; it is up to the receiver * of this action to ask the user where the data should be sent. * * @param subject The title, if applied * @param message Message to be delivered */ public static void genericSharing(String subject, String message) { Intent intent = new Intent(Intent.ACTION_SEND); intent.setType("text/plain"); intent.putExtra(Intent.EXTRA_TEXT, message); intent.putExtra(Intent.EXTRA_SUBJECT, subject); intent.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK); QuickUtils.getContext().startActivity(intent); }
//---------------------------------------------------------------------------// // Copyright (c) 2017-2022 <NAME>. All rights reserved. // // This file is part of the Rusted PackFile Manager (RPFM) project, // which can be found here: https://github.com/Frodo45127/rpfm. // // This file is licensed under the MIT license, which can be found here: // https://github.com/Frodo45127/rpfm/blob/master/LICENSE. //---------------------------------------------------------------------------// /*! Module with all the code to interact with ESF PackedFiles. ESF are like savestates of the game. !*/ use bitflags::bitflags; use serde_derive::{Serialize, Deserialize}; use std::{fmt, fmt::Display}; use rpfm_error::{ErrorKind, Result}; use rpfm_macros::*; use crate::common::decoder::Decoder; /// Extensions used by CEO/ESF PackedFiles. pub const EXTENSIONS: [&str; 3] = [".ccd", ".esf", ".save"]; /// Signatured/Magic Numbers/Whatever of a ESF PackedFile. pub const SIGNATURE_CAAB: &[u8; 4] = &[0xCA, 0xAB, 0x00, 0x00]; pub const SIGNATURE_CEAB: &[u8; 4] = &[0xCE, 0xAB, 0x00, 0x00]; pub const SIGNATURE_CFAB: &[u8; 4] = &[0xCF, 0xAB, 0x00, 0x00]; pub mod caab; //pub mod diff; //---------------------------------------------------------------------------// // Markers, from ESFEdit //---------------------------------------------------------------------------// /// Invalid marker. pub const INVALID: u8 = 0x00; /// Primitives pub const BOOL: u8 = 0x01; pub const I8: u8 = 0x02; pub const I16: u8 = 0x03; pub const I32: u8 = 0x04; pub const I64: u8 = 0x05; pub const U8: u8 = 0x06; pub const U16: u8 = 0x07; pub const U32: u8 = 0x08; pub const U64: u8 = 0x09; pub const F32: u8 = 0x0a; pub const F64: u8 = 0x0b; pub const COORD_2D: u8 = 0x0c; pub const COORD_3D: u8 = 0x0d; pub const UTF16: u8 = 0x0e; pub const ASCII: u8 = 0x0f; pub const ANGLE: u8 = 0x10; /// Optimized Primitives pub const BOOL_TRUE: u8 = 0x12; pub const BOOL_FALSE: u8 = 0x13; pub const U32_ZERO: u8 = 0x14; pub const U32_ONE: u8 = 0x15; pub const U32_BYTE: u8 = 0x16; pub const U32_16BIT: u8 = 0x17; pub const U32_24BIT: u8 = 0x18; pub const I32_ZERO: u8 = 0x19; pub const I32_BYTE: u8 = 0x1a; pub const I32_16BIT: u8 = 0x1b; pub const I32_24BIT: u8 = 0x1c; pub const F32_ZERO: u8 = 0x1d; /// Unknown Types pub const UNKNOWN_21: u8 = 0x21; pub const UNKNOWN_23: u8 = 0x23; pub const UNKNOWN_24: u8 = 0x24; pub const UNKNOWN_25: u8 = 0x25; /// Three Kingdoms DLC Eight Princes types pub const UNKNOWN_26: u8 = 0x26; /// Primitive Arrays pub const BOOL_ARRAY: u8 = 0x41; pub const I8_ARRAY: u8 = 0x42; pub const I16_ARRAY: u8 = 0x43; pub const I32_ARRAY: u8 = 0x44; pub const I64_ARRAY: u8 = 0x45; pub const U8_ARRAY: u8 = 0x46; pub const U16_ARRAY: u8 = 0x47; pub const U32_ARRAY: u8 = 0x48; pub const U64_ARRAY: u8 = 0x49; pub const F32_ARRAY: u8 = 0x4a; pub const F64_ARRAY: u8 = 0x4b; pub const COORD_2D_ARRAY: u8 = 0x4c; pub const COORD_3D_ARRAY: u8 = 0x4d; pub const UTF16_ARRAY: u8 = 0x4e; pub const ASCII_ARRAY: u8 = 0x4f; pub const ANGLE_ARRAY: u8 = 0x50; /// Optimized Arrays pub const BOOL_TRUE_ARRAY: u8 = 0x52; // makes no sense pub const BOOL_FALSE_ARRAY: u8 = 0x53; // makes no sense pub const U32_ZERO_ARRAY: u8 = 0x54; // makes no sense pub const U32_ONE_ARRAY: u8 = 0x55; // makes no sense pub const U32_BYTE_ARRAY: u8 = 0x56; pub const U32_16BIT_ARRAY: u8 = 0x57; pub const U32_24BIT_ARRAY: u8 = 0x58; pub const I32_ZERO_ARRAY: u8 = 0x59; // makes no sense pub const I32_BYTE_ARRAY: u8 = 0x5a; pub const I32_16BIT_ARRAY: u8 = 0x5b; pub const I32_24BIT_ARRAY: u8 = 0x5c; pub 
const F32_ZERO_ARRAY: u8 = 0x5d; // makes no sense pub const COMPRESSED_DATA_TAG: &str = "COMPRESSED_DATA"; pub const COMPRESSED_DATA_INFO_TAG: &str = "COMPRESSED_DATA_INFO"; // Blocks have quite a few bits that can toggle their behavior. bitflags! { /// This represents the bitmasks a Record Block can have applied to its type byte. #[derive(Default, Serialize, Deserialize)] pub struct RecordNodeFlags: u8 { /// Used to specify that the type is indeed a record block. const IS_RECORD_NODE = 0b1000_0000; /// Used to specify that this block contains nested groups of nodes. const HAS_NESTED_BLOCKS = 0b0100_0000; /// Used to specify that this block doesn't use optimized integers for version and name index. const HAS_NON_OPTIMIZED_INFO = 0b0010_0000; } } //---------------------------------------------------------------------------// // Enum & Structs //---------------------------------------------------------------------------// /// This holds an entire ESF PackedFile decoded in memory. #[derive(GetRef, Set, PartialEq, Clone, Debug, Serialize, Deserialize)] pub struct ESF { signature: ESFSignature, unknown_1: u32, creation_date: u32, root_node: NodeType, } /// This enum contains the different signatures of ESF files. #[derive(PartialEq, Clone, Copy, Debug, Serialize, Deserialize)] pub enum ESFSignature { /// Signature found on 3K files. CAAB, CEAB, CFAB } /// Node types supported by the ESF. /// /// NOTE: These are partially extracted from EditSF. #[derive(PartialEq, Clone, Debug, Serialize, Deserialize)] pub enum NodeType { /// Invalid type. Invalid, /// Primitive nodes. Bool(BoolNode), I8(i8), I16(i16), I32(I32Node), I64(i64), U8(u8), U16(u16), U32(U32Node), U64(u64), F32(F32Node), F64(f64), Coord2d(Coordinates2DNode), Coord3d(Coordinates3DNode), Utf16(String), Ascii(String), Angle(i16), /// Unknown Types Unknown21(u32), Unknown23(u8), //Unknown24(u32), Unknown25(u32), Unknown26(Vec<u8>), /// Primitive Arrays BoolArray(Vec<bool>), I8Array(Vec<i8>), I16Array(Vec<i16>), I32Array(VecI32Node), I64Array(Vec<i64>), U8Array(Vec<u8>), U16Array(Vec<u16>), U32Array(VecU32Node), U64Array(Vec<u64>), F32Array(Vec<f32>), F64Array(Vec<f64>), Coord2dArray(Vec<Coordinates2DNode>), Coord3dArray(Vec<Coordinates3DNode>), Utf16Array(Vec<String>), AsciiArray(Vec<String>), AngleArray(Vec<i16>), /// Record nodes Record(RecordNode), } /// Node containing a bool value, and if the node should be optimized or not. #[derive(GetRef, GetRefMut, PartialEq, Clone, Debug, Serialize, Deserialize)] pub struct BoolNode { value: bool, optimized: bool, } /// Node containing an i32 value, and if the node should be optimized or not. #[derive(GetRef, GetRefMut, PartialEq, Clone, Debug, Serialize, Deserialize)] pub struct I32Node { value: i32, optimized: bool, } /// Node containing an u32 value, and if the node should be optimized or not. #[derive(GetRef, GetRefMut, PartialEq, Clone, Debug, Serialize, Deserialize)] pub struct U32Node { value: u32, optimized: bool, } /// Node containing an f32 value, and if the node should be optimized or not. #[derive(GetRef, GetRefMut, PartialEq, Clone, Debug, Serialize, Deserialize)] pub struct F32Node { value: f32, optimized: bool, } /// Node containing a Vec<i32>, and if the node should be optimized or not. #[derive(GetRef, GetRefMut, PartialEq, Clone, Debug, Serialize, Deserialize)] pub struct VecI32Node { value: Vec<i32>, optimized: bool, } /// Node containing a Vec<u32>, and if the node should be optimized or not. 
#[derive(GetRef, GetRefMut, PartialEq, Clone, Debug, Serialize, Deserialize)] pub struct VecU32Node { value: Vec<u32>, optimized: bool, } /// Node containing a pair of X/Y coordinates. #[derive(GetRef, GetRefMut, PartialEq, Clone, Default, Debug, Serialize, Deserialize)] pub struct Coordinates2DNode { x: f32, y: f32, } /// Node containing a group of X/Y/Z coordinates. #[derive(GetRef, GetRefMut, PartialEq, Clone, Default, Debug, Serialize, Deserialize)] pub struct Coordinates3DNode { x: f32, y: f32, z: f32, } /// Node containing a record of data. Basically, a node with other nodes attached to it. #[derive(GetRef, Set, Default, PartialEq, Clone, Debug, Serialize, Deserialize)] pub struct RecordNode { record_flags: RecordNodeFlags, version: u8, name: String, children: Vec<Vec<NodeType>> } //---------------------------------------------------------------------------// // Implementation of ESF //---------------------------------------------------------------------------// /// Implementation of `ESF`. impl ESF { /// This function returns if the provided data corresponds to a ESF or not. pub fn is_esf(data: &[u8]) -> bool { match data.get_bytes_checked(0, 4) { Ok(signature) => signature == SIGNATURE_CAAB, Err(_) => false, } } /// This function creates a `ESF` from a `Vec<u8>`. pub fn read(packed_file_data: &[u8]) -> Result<Self> { let signature_bytes = packed_file_data.get_bytes_checked(0, 4)?; let signature = if signature_bytes == SIGNATURE_CAAB { ESFSignature::CAAB } else if signature_bytes == SIGNATURE_CEAB { ESFSignature::CEAB } else if signature_bytes == SIGNATURE_CFAB { ESFSignature::CFAB } else { return Err(ErrorKind::ESFUnsupportedSignature(format!("{:#X}{:#X}", signature_bytes[0], signature_bytes[1])).into()) }; let esf = match signature { ESFSignature::CAAB => Self::read_caab(packed_file_data)?, _ => return Err(ErrorKind::ESFUnsupportedSignature(format!("{:#X}{:#X}", signature_bytes[0], signature_bytes[1])).into()) }; //use std::io::Write; //let mut x = std::fs::File::create("ceo.json")?; //x.write_all(&serde_json::to_string_pretty(&esf).unwrap().as_bytes())?; Ok(esf) } /// This function takes a `ESF` and encodes it to `Vec<u8>`. pub fn save(&self) -> Vec<u8> { match self.signature { ESFSignature::CAAB => self.save_caab(), _ => return vec![], } } /// This function creates a copy of an ESF without the root node.. pub fn clone_without_root_node(&self) -> Self { Self { signature: self.signature, unknown_1: self.unknown_1, creation_date: self.creation_date, root_node: NodeType::Invalid, } } } /// Implementation of `NodeType`. impl NodeType { /// This function creates a copy of a node without its children. pub fn clone_without_children(&self) -> Self { match self { Self::Record(node) => { let mut new_node = RecordNode::default(); new_node.set_name(node.get_ref_name().to_owned()); new_node.set_record_flags(*node.get_ref_record_flags()); new_node.set_version(*node.get_ref_version()); Self::Record(new_node) } _ => self.clone() } } /*pub fn get_removed_nodes(&self, vanilla_node: &NodeType) -> NodeType { match vanilla_node { Self::Record(vanilla_node) => { match self { Self::Record(node) => { // If there's a difference in the nodes, it may be due to missing nodes. // We need to dig deeper. if vanilla_node.get_ref_children() != node.get_ref_children() { } } } } } }*/ /* /// This function checks if the provided NodeType values are "equal", even if the type is different. pub fn eq_value(&self, other: &Self) -> bool { match self { // Invalid type. 
Self::Invalid => other == &Self::Invalid, // Primitive nodes. Self::Bool(value) => match other { Self::Bool(other_value) => value.optimized == other_value.optimized && value.value == other_value.value, Self::BoolTrue => value.optimized && value.value, Self::BoolFalse => value.optimized && !value.value, _ => false }, Self::I8(value) => match other { Self::I8(other_value) => value == other_value, _ => false }, Self::I16(value) => match other { Self::I16(other_value) => value == other_value, _ => false }, Self::I32(value) => match other { Self::I32Zero => *value == 0, Self::I32Byte(other_value) => value == other_value, Self::I32_16bit(other_value) => value == other_value, Self::I32_24bit(other_value) => value == other_value, Self::I32(other_value) => value == other_value, _ => false }, Self::I64(value) => match other { Self::I64(other_value) => value == other_value, _ => false }, Self::U8(value) => match other { Self::U8(other_value) => value == other_value, _ => false }, Self::U16(value) => match other { Self::U16(other_value) => value == other_value, _ => false }, Self::U32(value) => match other { Self::U32Zero => *value == 0, Self::U32One => *value == 1, Self::U32Byte(other_value) => value == other_value, Self::U32_16bit(other_value) => value == other_value, Self::U32_24bit(other_value) => value == other_value, Self::U32(other_value) => value == other_value, _ => false }, Self::U64(value) => match other { Self::U64(other_value) => value == other_value, _ => false }, Self::F32(value) => match other { Self::F32(other_value) => (value - other_value).abs() >= std::f32::EPSILON, Self::F32Zero => (value - 0.0).abs() >= std::f32::EPSILON, _ => false }, Self::F64(value) => match other { Self::F64(other_value) => value == other_value, _ => false }, Self::Coord2d(value) => match other { Self::Coord2d(other_value) => value == other_value, _ => false }, Self::Coord3d(value) => match other { Self::Coord3d(other_value) => value == other_value, _ => false }, Self::Utf16(value) => match other { Self::Utf16(other_value) => value == other_value, _ => false }, Self::Ascii(value) => match other { Self::Ascii(other_value) => value == other_value, _ => false }, Self::Angle(value) => match other { Self::Angle(other_value) => value == other_value, _ => false }, // Optimized Primitives Self::BoolTrue => match other { Self::Bool(other_value) => other_value.optimized && other_value.value, Self::BoolTrue => true, _ => false }, Self::BoolFalse => match other { Self::Bool(other_value) => other_value.optimized && !other_value.value, Self::BoolFalse => true, _ => false }, Self::U32Zero => match other { Self::U32Zero => true, Self::U32One => false, Self::U32Byte(other_value) => *other_value == 0, Self::U32_16bit(other_value) => *other_value == 0, Self::U32_24bit(other_value) => *other_value == 0, Self::U32(other_value) => *other_value == 0, _ => false }, Self::U32One => match other { Self::U32Zero => false, Self::U32One => true, Self::U32Byte(other_value) => *other_value == 1, Self::U32_16bit(other_value) => *other_value == 1, Self::U32_24bit(other_value) => *other_value == 1, Self::U32(other_value) => *other_value == 1, _ => false }, Self::U32Byte(value) => {false}, Self::U32_16bit(value) => {false}, Self::U32_24bit(value) => {false}, Self::I32Zero => {false}, Self::I32Byte(value) => {false}, Self::I32_16bit(value) => {false}, Self::I32_24bit(value) => {false}, Self::F32Zero => {false}, // Unknown Types Self::Unknown21(value) => {false}, Self::Unknown23(value) => {false}, //Self::Unknown24(u32) => {false}, 
Self::Unknown25(value) => {false}, Self::Unknown26(value) => {false}, // Primitive Arrays Self::BoolArray(value) => {false}, Self::I8Array(value) => {false}, Self::I16Array(value) => {false}, Self::I32Array(value) => {false}, Self::I64Array(value) => {false}, Self::U8Array(value) => {false}, Self::U16Array(value) => {false}, Self::U32Array(value) => {false}, Self::U64Array(value) => {false}, Self::F32Array(value) => {false}, Self::F64Array(value) => {false}, Self::Coord2dArray(value) => {false}, Self::Coord3dArray(value) => {false}, Self::Utf16Array(value) => {false}, Self::AsciiArray(value) => {false}, Self::AngleArray(value) => {false}, // Optimized Arrays Self::U32ByteArray(value) => {false}, Self::U32_16bitArray(value) => {false}, Self::U32_24bitArray(value) => {false}, Self::I32ByteArray(value) => {false}, Self::I32_16bitArray(value) => {false}, Self::I32_24bitArray(value) => {false}, // Record nodes Self::Record(value) => {false}, } }*/ } /// Default implementation for `ESF`. impl Default for ESF { fn default() -> Self { Self { signature: ESFSignature::CAAB, unknown_1: 0, creation_date: 0, root_node: NodeType::Invalid, } } } /// Display implementation for `ESFSignature`. impl Display for ESFSignature { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { Display::fmt(match self { Self::CAAB => "CAAB", Self::CEAB => "CEAB", Self::CFAB => "CFAB", }, f) } } /// Implementation to create an `ESFSignature` from a `&str`. impl From<&str> for ESFSignature { fn from(data: &str) -> Self { match data { "CAAB" => Self::CAAB, "CEAB" => Self::CEAB, "CFAB" => Self::CFAB, _ => unimplemented!() } } }
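The decoding API above can be exercised end to end: `is_esf` performs a cheap signature check, `read` decodes the byte stream into the node tree, and `save` re-encodes it (only CAAB files currently round-trip through `save`). Below is a minimal, hypothetical usage sketch, not part of the original source; it assumes the `GetRef` derive generates a `get_ref_root_node` accessor for `ESF`, in line with the `get_ref_*` getters used elsewhere in this module.

#[allow(dead_code)]
fn esf_round_trip_sketch(raw: &[u8]) -> Result<ESF> {

    // Cheap signature check before attempting a full decode.
    if !ESF::is_esf(raw) {
        return Err(ErrorKind::ESFUnsupportedSignature("unknown signature".to_owned()).into());
    }

    // Decode the whole byte stream into the in-memory node tree.
    let esf = ESF::read(raw)?;

    // Inspect the root record node, if there is one.
    // NOTE: `get_ref_root_node` is an assumed accessor generated by the `GetRef` derive.
    if let NodeType::Record(root) = esf.get_ref_root_node() {
        println!("Root record '{}' with {} child blocks.", root.get_ref_name(), root.get_ref_children().len());
    }

    // Re-encode and check the signature survives the round trip.
    let reencoded = esf.save();
    debug_assert!(ESF::is_esf(&reencoded));

    Ok(esf)
}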
Service curve allocation for end-to-end delay requirement An application with guaranteed service cares about whether the network can satisfy its performance requirement, such as end-to-end delay. The network, however, wants to achieve performance guarantees and high utilization simultaneously. If the end-to-end delay provided by the network can be allocated properly to each switching node, then network resources can be utilized better. Conventionally, the delay is allocated equally to each switching node along the path that a connection passes through, a scheme referred to as the Even Division policy. The advantage of this policy is that it is easy to implement; however, it is unclear how it affects network utilization. In this paper, we propose an allocation scheme called MaxMin allocation to improve network utilization. Under the SCED (Service Curve Earliest Deadline) scheduling policy, we reduce service curve allocation to end-to-end delay allocation and use the MaxMin allocation policy to improve network utilization. Simulations show that the MaxMin policy performs better than the Even Division policy.
Milka Planinc Early life Planinc was born Milka Malada in a mixed ethnic Croat and ethnic Serb family in Žitnić, a small village near Drniš, Dalmatia. She attended school, but with the onset of World War II her schooling was interrupted. She joined the Communist Youth League in 1941, which was a pivotal year in Planinc’s life and for her country. Nazi Germany invaded Yugoslavia and divided the country among German, Italian, Hungarian, and Bulgarian occupying authorities. Soon a resistance movement known as the Partisans was formed, led by Marshal Josip Broz Tito. Planinc waited impatiently for the day when she would be old enough to join the anti-Fascist Council for the National Liberation of Yugoslavia. Aged 19, Planinc joined the Partisans and became extremely devoted to Tito. In 1944 she joined the Communist Party of Yugoslavia. She became county commissar of the 11th Dalmatian Shock Brigade whose job it was to teach party principles and policies, and ensure party loyalty. Planinc spent years working for the Partisans and the Communist Party, and when they gained control of the entire region she enrolled in the Higher School of Administration in Zagreb to continue her education. Partisan commander Simo Dubajić later alleged that Planinc was involved with the post-war massacre at Kočevski Rog. In 1950, she married an engineer named Zvonko Planinc. The couple had a son and a daughter. Political career Planinc began to pursue a full-time career within the League of Communists of Croatia. She specialized in education, agitation, and propaganda, and in 1959 she was elected into the Croatian Central Committee, the executive body. Having served in a variety of posts in Zagreb, as the Secretary of the People’s Assembly of Trešnjevka in 1957 and then the Secretary of Cultural Affairs of the City of Zagreb in 1961, she became the Secretary of the City of Zagreb League of Communists Committee, and the Secretary for Education of the Socialist Republic of Croatia in 1963, a position in which she remained until 1965. Greater party acknowledgement did not come until 1966 when she was elected into the Presidium of the League of Communists of Croatia (LCC), and then to the executive committee of the LCC in 1968. She served as the President of the Assembly of the Socialist Republic of Croatia 1967–1971. After the events of Croatian Spring, the leadership of LCC was removed, and Planinc became president of the Central Committee in 1971. She made the decision to arrest Franjo Tuđman, Marko Veselica, Dražen Budiša, Šime Đodan and Vlado Gotovac, among others, who had all participated in the Croatian Spring. She remained the Leader of the League of Communists of Croatia until 1982. When Tito died in 1980, he left a plan for a rotation of eight leaders, with the leader coming from each federal unit in turn. On 29 April 1982, the Federal Conference of the Socialist Alliance of the Working People of Yugoslavia approved a list of ministers submitted by Planinc, and on 15 May 1982 a joint session of the National Assembly’s two houses named her head of the Federal Executive Council; thus she became prime minister. She became the first woman to occupy such a high post in the country's 64-year history. Planinc had a new governmental body, The Federal Executive Committee, and it consisted of 29 members. All of the members of this committee were new, except for five that were members of the old committee. 
She would serve as the President of the Federal Executive Council (Prime Minister) of SFR Yugoslavia between 1982 and 1986. Her mandate as prime minister was remembered as the times when the government finally decided to regulate external debt of SFR Yugoslavia and to start to pay it back. In order to achieve necessary means, her cabinet implemented restrictive economic measures for a few years. The 1974 constitution had left the central government with very little authority, as the power was divided into the separate republics. Planinc tried to re-focus the central government and gain international alliances with visits to Britain, the United States, and Moscow. Though her visits to Washington gave her promises of economic support, her visit to Moscow was said to be with "nothing lost, and nothing gained". Planinc offered her resignation in October 1985, but this was not accepted. On 12 February 1986 Planinc's government submitted a request to the International Monetary Fund for advanced surveillance. The request was approved a month later. Her term ended in May 1986, and before long she became a member of the LCY Central Committee. After politics The former prime minister spent the rest of her time living through bitter days of war with the Collapse of Communism, the fall of the Berlin Wall, and the fighting between Croats and Serbs. In 1993 her husband died, and in 1994 her son Zoran committed suicide. From the late 1990s until her death, Planinc was dependent on a wheelchair and rarely left her apartment. She resided in Zagreb until her death on 7 October 2010, aged 85. She was interred at Mirogoj Cemetery.
Stabilization of Au at edges of bimetallic PdAu nanocrystallites. Density functional calculations were performed to study the distribution of Au atoms in bimetallic PdAu nanoparticles. A series of Pd(79-n)Au(n) clusters of truncated octahedral shape, with Au content ranging from n = 1 to 60, was used to model such bimetallic nanosystems. Segregation of Au to the particle surface is found to be thermodynamically favorable. The most stable sites for Au substitution are located at the edges of the PdAu nanoclusters. The stabilization at the edges is rationalized by their higher flexibility for surface relaxation, which minimizes the strain induced by the larger atomic radius of Au compared with Pd. This stabilization of Au at the edges indicates the possibility of synthesizing PdAu particles with Pd atoms located mainly on the facets, and edges "decorated" by Au atoms. Such nanocrystallites are expected to exhibit peculiar catalytic properties and, being thermodynamically stable, should tend to retain their initial shape under catalytic conditions.
/* * This class serves for decision-deck's plotNumericPerformanceTable web service invocation. * This web service requires a methodParameters XMCDA document which this class provides as a Node. */ public class CreateMethodParamatersNode { public static Element plotMethodParametersNodeForWSCall(Document doc) { Element XMCDANode = doc.createElement("xmcda:XMCDA"); Element plotMethodParametersNode = doc.createElement("methodParameters"); XMCDANode.appendChild(plotMethodParametersNode); // Create chart_title mandatory parameter Element parameter1 = doc.createElement("parameter"); Attr parameter1Id = doc.createAttribute("id"); parameter1Id.setValue("chart_title"); parameter1.setAttributeNode(parameter1Id); plotMethodParametersNode.appendChild(parameter1); Element value1Node=doc.createElement("value"); parameter1.appendChild(value1Node); Element label1Node =doc.createElement("label"); value1Node.appendChild(label1Node); label1Node.appendChild(doc.createTextNode("Performance table plot")); // Create domain_axis mandatory parameter Element parameter2 = doc.createElement("parameter"); Attr parameter2Id = doc.createAttribute("id"); parameter2Id.setValue("domain_axis"); parameter2.setAttributeNode(parameter2Id); plotMethodParametersNode.appendChild(parameter2); Element value2Node=doc.createElement("value"); parameter2.appendChild(value2Node); Element label2Node =doc.createElement("label"); value2Node.appendChild(label2Node); label2Node.appendChild(doc.createTextNode("Alternatives")); // Create range_axis mandatory parameter Element parameter3 = doc.createElement("parameter"); Attr parameter3Id = doc.createAttribute("id"); parameter3Id.setValue("range_axis"); parameter3.setAttributeNode(parameter3Id); plotMethodParametersNode.appendChild(parameter3); Element value3Node=doc.createElement("value"); parameter3.appendChild(value3Node); Element label3Node =doc.createElement("label"); value3Node.appendChild(label3Node); label3Node.appendChild(doc.createTextNode("Performance on criteria")); XMCDANode.setAttribute("xmlns:xmcda", "http://www.decision-deck.org/2012/XMCDA-2.2.1"); return XMCDANode; } }
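A short, hypothetical driver for the class above: it creates an empty DOM document, asks plotMethodParametersNodeForWSCall for the xmcda:XMCDA tree, and serializes the result so it can be inspected or attached to a web-service request. The demo class name and the Transformer-based printing are illustrative only and are not part of the decision-deck code.

import java.io.StringWriter;

import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;

import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class CreateMethodParamatersNodeDemo {

    public static void main(String[] args) throws Exception {
        // Build an empty DOM document to own the generated nodes.
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder().newDocument();

        // Generate the xmcda:XMCDA element holding the methodParameters block and attach it as the root.
        Element xmcdaNode = CreateMethodParamatersNode.plotMethodParametersNodeForWSCall(doc);
        doc.appendChild(xmcdaNode);

        // Serialize the document to a string, e.g. to embed it in the web-service payload.
        Transformer transformer = TransformerFactory.newInstance().newTransformer();
        transformer.setOutputProperty(OutputKeys.INDENT, "yes");
        StringWriter out = new StringWriter();
        transformer.transform(new DOMSource(doc), new StreamResult(out));
        System.out.println(out);
    }
}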
/** * The base class for testing Logback-access events. */ @RunWith(SpringRunner.class) @SpringBootTest( value = "logback.access.config=classpath:logback-access.queue.xml", webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT ) public abstract class AbstractLogbackAccessEventsTest { /** * The REST template. */ @Autowired protected TestRestTemplate rest; /** * The server port. */ @LocalServerPort protected int port; /** * Creates a test rule. * * @return a test rule. */ @Rule public TestRule rule() { return RuleChain .outerRule(new LogbackAccessEventQueuingAppenderRule()) .around(new LogbackAccessEventQueuingListenerRule()); } /** * Tests a Logback-access event. */ @Test public void logbackAccessEvent() { LocalDateTime startTime = LocalDateTime.now(); ResponseEntity<String> response = rest.getForEntity("/test/text", String.class); IAccessEvent event = LogbackAccessEventQueuingAppender.appendedEventQueue.pop(); LocalDateTime endTime = LocalDateTime.now(); LogbackAccessEventQueuingListener.appendedEventQueue.pop(); assertThat(response) .hasStatusCode(HttpStatus.OK) .hasContentLengthHeader("TEST-TEXT".getBytes(StandardCharsets.UTF_8).length) .hasBody("TEST-TEXT"); assertThat(event) .hasTimestamp(startTime, endTime) .hasServerName("localhost") .hasLocalPort(port) .hasRemoteAddr("127.0.0.1") .hasRemoteHost("127.0.0.1") .doesNotHaveRemoteUser() .hasProtocol("HTTP/1.1") .hasMethod(HttpMethod.GET) .hasRequestUri("/test/text") .hasQueryString("") .hasRequestUrl(HttpMethod.GET, "/test/text", "HTTP/1.1") .doesNotHaveRequestHeaderName("X-Test-Header") .doesNotHaveRequestHeader("X-Test-Header") .doesNotHaveRequestHeaderInMap("X-Test-Header") .doesNotHaveRequestParameter("test-parameter") .doesNotHaveRequestParameterInMap("test-parameter") .hasStatusCode(HttpStatus.OK) .hasContentLength("TEST-TEXT".getBytes(StandardCharsets.UTF_8).length) .doesNotHaveResponseHeaderName("X-Test-Header") .doesNotHaveResponseHeader("X-Test-Header") .doesNotHaveResponseHeaderInMap("X-Test-Header") .hasElapsedTime(startTime, endTime) .hasElapsedSeconds(startTime, endTime) .hasThreadName(); } /** * Tests a Logback-access event with a query string. */ @Test public void logbackAccessEventWithQueryString() { ResponseEntity<String> response = rest.getForEntity("/test/text?query", String.class); IAccessEvent event = LogbackAccessEventQueuingAppender.appendedEventQueue.pop(); LogbackAccessEventQueuingListener.appendedEventQueue.pop(); assertThat(response).hasStatusCode(HttpStatus.OK); assertThat(event) .hasRequestUri("/test/text") .hasQueryString("?query") .hasRequestUrl(HttpMethod.GET, "/test/text?query", "HTTP/1.1"); } /** * Tests a Logback-access event with a request header. */ @Test public void logbackAccessEventWithRequestHeader() { RequestEntity<Void> request = RequestEntity .get(rest.getRestTemplate().getUriTemplateHandler().expand("/test/text")) .header("X-Test-Header", "TEST-HEADER") .build(); ResponseEntity<String> response = rest.exchange(request, String.class); IAccessEvent event = LogbackAccessEventQueuingAppender.appendedEventQueue.pop(); LogbackAccessEventQueuingListener.appendedEventQueue.pop(); assertThat(response).hasStatusCode(HttpStatus.OK); assertThat(event) .hasRequestHeaderName("X-Test-Header") .hasRequestHeader("X-Test-Header", "TEST-HEADER") .hasRequestHeaderInMap("X-Test-Header", "TEST-HEADER"); } /** * Tests a Logback-access event with a request parameter. 
*/ @Test public void logbackAccessEventWithRequestParameter() { ResponseEntity<String> response = rest.getForEntity("/test/text?test-parameter=TEST-PARAMETER", String.class); IAccessEvent event = LogbackAccessEventQueuingAppender.appendedEventQueue.pop(); LogbackAccessEventQueuingListener.appendedEventQueue.pop(); assertThat(response).hasStatusCode(HttpStatus.OK); assertThat(event) .hasRequestParameter("test-parameter", "TEST-PARAMETER") .hasRequestParameterInMap("test-parameter", "TEST-PARAMETER"); } /** * Tests a Logback-access event with a response header. */ @Test public void logbackAccessEventWithResponseHeader() { ResponseEntity<String> response = rest.getForEntity("/test/text-with-header", String.class); IAccessEvent event = LogbackAccessEventQueuingAppender.appendedEventQueue.pop(); LogbackAccessEventQueuingListener.appendedEventQueue.pop(); assertThat(response).hasStatusCode(HttpStatus.OK); assertThat(event) .hasResponseHeaderName("X-Test-Header") .hasResponseHeader("X-Test-Header", "TEST-HEADER") .hasResponseHeaderInMap("X-Test-Header", "TEST-HEADER"); } /** * Tests a Logback-access event asynchronously. */ @Test public void logbackAccessEventAsynchronously() { ResponseEntity<String> response = rest.getForEntity("/test/text-asynchronously", String.class); IAccessEvent event = LogbackAccessEventQueuingAppender.appendedEventQueue.pop(); LogbackAccessEventQueuingListener.appendedEventQueue.pop(); assertThat(response).hasStatusCode(HttpStatus.OK); assertThat(event).hasThreadName(); } /** * Tests a Logback-access event to get empty text. */ @Test public void logbackAccessEventToGetEmptyText() { ResponseEntity<String> response = rest.getForEntity("/test/empty-text", String.class); IAccessEvent event = LogbackAccessEventQueuingAppender.appendedEventQueue.pop(); LogbackAccessEventQueuingListener.appendedEventQueue.pop(); assertThat(response) .hasStatusCode(HttpStatus.OK) .hasContentLengthHeader(0); assertThat(event).hasContentLength(0); } /** * Tests a Logback-access event without a response header of content length. */ @Test public void logbackAccessEventWithoutContentLengthResponseHeader() { ResponseEntity<String> response = rest.getForEntity("/test/json", String.class); IAccessEvent event = LogbackAccessEventQueuingAppender.appendedEventQueue.pop(); LogbackAccessEventQueuingListener.appendedEventQueue.pop(); assertThat(response) .hasStatusCode(HttpStatus.OK) .doesNotHaveContentLengthHeader() .hasBody("{\"TEST-KEY\":\"TEST-VALUE\"}"); assertThat(event) .hasContentLength("{\"TEST-KEY\":\"TEST-VALUE\"}".getBytes(StandardCharsets.UTF_8).length); } /** * The base class of context configuration. */ @EnableAutoConfiguration(exclude = SecurityAutoConfiguration.class) @Import({LogbackAccessEventQueuingListenerConfiguration.class, TestControllerConfiguration.class}) public static abstract class AbstractContextConfiguration { } }
Free Bits and Non-Approximability (Extended Abstract) This paper investigates the possibility that tight hardness of approximation results may be derived for several combinatorial optimization problems via the pcp-connection (a.k.a. the FGLSS-reduction). We study the amortized free bit-complexity of probabilistic verifiers and invert the FGLSS-reduction by showing that an NP-hardness result for the approximation of MaxClique to within a factor of N^{1/(g+1)} would imply a probabilistic verifier for NP with logarithmic randomness and amortized free-bit complexity g. In addition, we present a new proof system for NP in which the amortized free-bit complexity is 2, which yields (via the FGLSS-reduction) that approximating the clique to within a factor of N^{1/3} (in an N-vertex graph) is NP-hard. The new proof system is based on a new code and means for checking codewords. This machinery is used to obtain improved non-approximability results for Max-SNP problems such as Max-2SAT, Max-3SAT, Max-CUT and Min-VC. This extended abstract provides less technical detail than customary. The interested reader is referred to our (124-page) technical report. * Advanced Networking Laboratory, IBM T.J. Watson Research Center, P.O. Box 704, Yorktown Heights, NY 10598, USA. e-mail: mihir@watson.ibm.com. † Department of Computer Science and Applied Mathematics, Weizmann Institute of Sciences, Rehovot, Israel. e-mail: oded@wisdom.weizmann.ac.il. Partially supported by grant No. 92-00226 from the US-Israel Binational Science Foundation (BSF), Jerusalem, Israel. ‡ Research Division, IBM T.J. Watson Research Center, P.O. Box 218, Yorktown Heights, NY 10598, USA. e-mail: madhu@watson.ibm.com.
# Test creation of a CRA request file # Assuming you are currently in the directory `django/api/services/test`... # To run this test do the following. # ```bash # cd .. # python3 -m test.cra_write # ``` from cra import write data = [{ 'sin': '123456789', 'year': '2020', 'given_name': 'John', 'family_name': 'Smith', 'birth_date': '1955-01-01' },{ 'sin': '987654321', 'year': '2020', 'given_name': 'Amanda', 'family_name': 'Williams', 'birth_date': '1965-02-21' }] file = write(data) print(file)
import { Component, OnInit } from '@angular/core'; import { OidcSecurityService } from './common/services/auth/oidc.security.service' import './app.component.scss'; import '../style/app.css'; @Component({ selector: 'my-app', templateUrl: 'app.component.html' }) export class AppComponent implements OnInit { constructor(public securityService: OidcSecurityService) { } ngOnInit() { if (window.location.hash) { this.securityService.authorizedCallback(); } } login() { console.log('start login'); this.securityService.authorize(); } refreshSession() { console.log('start refreshSession'); this.securityService.authorize(); } logout() { console.log('start logoff'); this.securityService.logoff(); } }
RESEARCH ARTICLE Awareness and Practices of Dental Students and Dentists Regarding Infection Control in Prosthodontic Clinics Ragheb Halawani, Khalid Aboalshamat, Ruba Alwsaidi, Sultana Sharqawi, Rawan Alhazmi, Zahra Abualsaud, Amal Alattallah and Maha Alamri Department of Dentistry, Umm Al-Qura University, Makkah, Saudi Arabia Dental Public Health Division, Preventive Dentistry Department, Faculty of Dentistry, Umm Al-Qura University, Makkah, Saudi Arabia Department of Dentistry, Alfarabi Private College, Jeddah, Saudi Arabia Department of Dentistry, King Fahad Armed Forces Hospital, Jeddah, Saudi Arabia INTRODUCTION Assuring patient safety is one of the priorities in the process of achieving higher quality health care standards, since dental treatment exposes patients and practitioners to microorganisms and other germs that can be infectious to others. Most clinical procedures in dentistry include exposure to saliva, blood, or other contaminated and highly contagious elements. Thus, maintaining infection control is an essential practice for dental clinics as a crucial protocol for preserving the practice's integrity. Many studies have advocated strict adherence to infection control guidelines in dental clinics. However, dental students and dentists in previous studies have demonstrated low levels of compliance with infection control guidelines, which increases the probability of cross-contamination in a possibly serious manner, such as through a needle prick. In fact, several studies have assessed the levels of knowledge, attitude, and practice of infection control by dentists and dental students in dental clinics in various countries, including Saudi Arabia and Yemen. These studies have generally shown variable levels of knowledge and practices aimed at infection control; however, the authors have concluded that levels of knowledge should be improved and that practices were not satisfactory and need to be strengthened to adhere to guidelines for better infection and disease control. One of the areas of frequent infection is the prosthetics in prosthodontic clinics and dental prosthetic labs, and this has become a growing concern frequently reported in the literature. In fact, many studies have assessed different modalities for controlling infection in dental prosthetics using a variety of materials, including chlorhexidine digluconate, sodium hypochlorite, sodium perborate, and vinegar in different concentrations and applications, which highlights its importance. Prosthetic clinics have a number of instruments used for various procedures that result in frequent transportation of impression materials between the dental clinic and the laboratory, increasing the possibility of cross-contamination. This means that a number of studies have concentrated more on infection control in dental prosthetic clinics. Two studies assessed the knowledge and practice of infection control in prosthetic clinics in India and in Riyadh, Saudi Arabia. The studies had similar results, showing that 14.4% to 17.8% of the respondents disinfect dental casts before sending them to the laboratory, and all of them used gloves during prosthetic treatments. Students had fair to good scores in awareness of prosthetic infection control policies, and around half of the students were satisfied with their knowledge and performance.
However, the results were different in terms of hazardous events, where sharp injuries and eye splashing occurred at rates of 28.9% and 18.3%, respectively, in India, and 57% and 30.2%, respectively, in Saudi Arabia, so the students in Saudi Arabia experienced more of these hazardous events. In fact, the Saudi study reported that students were highly concerned about wearing face masks, gloves, and protective gowns, but were less concerned about using safety glasses and protective head caps as safety barriers. Between 53.5% and 79.1% of the participants were aware of the need to clean instruments used in prosthodontic clinics, including alginate mixing spatulas, rubber bowls, shade guides, and facebows, which justifies the authors' conclusion that student knowledge and practices for infection control in prosthetic clinics should be boosted. However, the results of such studies cannot be generalized to other cities in Saudi Arabia, and therefore, because of the importance of the topic, it is necessary to assess similar topics in other Saudi cities such as Jeddah, which is considered the second-largest city, has a large population of dentists, and has numerous private dental schools that might have different levels of knowledge and practice. Thus, the aim of this study was to evaluate the knowledge and practices of dental students and dentists in Jeddah, Saudi Arabia, regarding infection control in prosthodontic clinics. MATERIALS AND METHODS This was a multi-center cross-sectional study with the participants recruited from all of the dental colleges located in Jeddah (King Abdulaziz University, Alfarabi, Batterjee, and Ibn Sina), in addition to the dental department of the King Fahad Armed Forces Hospital. Data collection took place from November to December 2019. The inclusion criteria were dental students in clinical years (fourth, fifth, sixth, and intern), general dentists, or dentists who were working in a prosthetic clinic at the time of the study. Students were identified by their university identification card, and dentists were identified by their work card. Exclusion criteria were dentists who worked in areas other than prosthetics, such as public health, orthodontics, oral radiology, oral diagnosis, oral surgery, endodontics, pedodontics, and all other basic oral sciences departments. A convenience sampling technique was used, and by conducting a sample size calculation with a confidence level of 90%, an estimated prevalence of 50%, and a precision level of 5%, it was determined that the minimum number of participants required for this study was 271. To account for the estimated non-response rate, 500 participants were invited to participate in this study. A hard copy self-administered questionnaire was distributed to the target population by the research team during participants' break times at work or school, and the completed questionnaires were collected immediately after participants finished the task. The average time to complete the questionnaire was six minutes. The questionnaire was derived from two previous studies, with modifications, and it was composed of four sections. The first section had four demographic questions about gender, age, academic year or work status, and place of study or work.
The second section was composed of 16 questions investigating the practice of infection control in the prosthetic clinic, including the use of gloves, face masks, eyeglasses, gowns, and head caps, and the practices of disinfecting patients, disinfecting materials before sending them to the lab, disinfecting the final impression, and sterilizing certain instruments used in prosthetic clinics. The third section consisted of two questions about the education the participants received during their academic years of study with regard to infection control. The fourth section included three questions on self-evaluation of personal satisfaction with knowledge and the implementation of infection control procedures in the clinic. The statistical significance level was set at 0.05, and data analysis was conducted using SPSS v21 (IBM Corp., Armonk, NY, USA). The study was approved by the Faculty of Dentistry Institutional Review Board, Umm Al-Qura University, with ethical approval number 151-19. All the participants signed the study consent before participating, after being advised that participation in the study was entirely voluntary and that all data were anonymous to protect participants' confidentiality. RESULTS A total of 460 of 500 invited dentists and dental students participated in this study, making a response rate of 92%. The mean age was 25.01, with a standard deviation (SD) of 3.03. There were 316 (68.70%) students and 144 (31.3%) interns or graduated dentists. The participants' demographic data are displayed in Table 1. Participants' answers to the items regarding infection control practices are shown in Table 2, bearing in mind that 'Yes' was the correct choice for all questions. After calculating the total correct answers for these 16 questions, the mean of correct answers was calculated as 12.5 with an SD of 2.8. The t-test showed that participants from governmental colleges (m = 13.67, SD = 2.17) had significantly higher scores than participants from private colleges (m = 12.35, SD = 2.9), t = 3.549, p < 0.001. However, the t-test and linear regression showed that the total correct practice score was not significantly related to age, gender, or academic/working level. In the Kruskal-Wallis test comparing 4th, 5th, and 6th year students, dental interns, and graduated dentists, there was no significant difference in the total knowledge score regarding infection control in prosthetic clinics. Participants were asked to self-evaluate their knowledge levels, their implementation of infection control measures, and their satisfaction with their knowledge of infection control measures in prosthetic clinics. The participants' answers are shown in Table 3. DISCUSSION This study aimed to evaluate the level of appropriate infection control practices by students and dentists in university dental prosthetic clinics in Jeddah, Saudi Arabia. Generally, participants showed high levels of awareness and implementation of most of the correct infection control procedures. Nevertheless, some infection control procedures were not followed by the majority: wearing head caps, using protective eyeglasses, disinfecting the facebow and bite fork before sending materials to the lab, and sterilizing certain items before using them on patients, including the facebow and bite fork of Fox's occlusal plane plate. Very few participants had not attended any previous lectures or workshops about infection control.
Despite half of the participants rating themselves as having poor knowledge, around three-fourths were fairly to totally satisfied with their level of knowledge and also believed they were fairly good to very good at implementing infection control procedures in prosthetic clinics. Assessing infection control in a prosthetic clinic is a very important issue because of the risks involved in working with several procedures and material exchanges between dentists, patients, and the laboratory, with tremendous potential for cross-contamination among them, as highlighted by a Korean Society of Prosthodontics review and a recent systematic review. This might be more important than contamination by sharp injuries, which were found to have a lower rate of incidence in comparison to other dental specialties. Comparing our results with previous studies reveals some similarities and some differences. Two major studies were similar in assessing infection control practices in prosthetic clinics, in India among private dental colleges and in Riyadh, Saudi Arabia, among governmental dental colleges. However, our study involved four dental colleges, some private and some governmental, so it had more variability. Despite the expectation that senior dental students should be more knowledgeable about infection control, our results indicated no significant difference between students, dental interns, and graduated dentists (specialists/consultants). This might be the result of the low number of participants among 4th year students and graduated dentists. Future studies are needed to confirm this with a larger sample size of graduated dentists. When comparing the results regarding wearing barriers, in our study almost all of the participants wore gloves and face masks, ranging from 93.9% to 99.8%, which was similar to the Indian and Riyadh studies previously mentioned. However, there was variability in the percentage of participants wearing gowns, protective eyeglasses, and head caps. In our study, gowns, protective eyeglasses, and head caps were used by 98%, 60.7%, and 73.3% of the participants, while in the Riyadh study, 96%, 73.3%, and 36%, and in the Indian study, 21.1%, 37.2%, and 96.6% of the participants, respectively. This might indicate that different universities in different cities enforce different infection control rules and regulations. In fact, such relatively lower compliance with wearing eyeglasses and head caps was noticed in other local and non-local studies assessing dental practitioners in clinics. However, it should be mentioned that around 30.2% of dental students and dentists were affected by eye splash injuries in prosthetic clinics, which urges stakeholders to enforce the wearing of protective barriers of different types, and not only gloves and face masks. Infections can also spread from patient to patient if materials are not disinfected. Disinfecting such tools is very important; for example, shade guides should be cleaned and disinfected using an intermediate-level hospital tuberculocidal disinfectant, according to Occupational Safety and Health Administration (OSHA) guidelines. Our data showed that the disinfection of rubber bowls, mixing spatulas, and shade guides ranged from 79.3% to 82.8%, which was greater than the studies from Riyadh (53.5% to 60.5%) and India (56.1% to 62.2%).
Conversely, the rate of instrument sterilization using an autoclave for impression trays, facebows, and the bite fork of Fox's occlusal plane plate ranged in our study between 53.7% and 76.1%, which was lower than the Riyadh study (73.3% to 84.9%) and the Indian study (57.2% to 87.2%) for the same items. Here again, the difference might be attributable to the variability in compliance with infection control guidelines across different universities and cities in Saudi Arabia. It is recommended that future studies assess the policies of the universities and dental clinics to understand whether such practices are based on dental students' and dentists' own knowledge or follow organizational guidelines. Because numerous studies have highlighted the potential for cross-infection when using and transferring materials between dental clinics and laboratories, the Centers for Disease Control (CDC) recommended that dental practitioners disinfect all impressions, dental casts, metal frameworks, bite registrations, or wax before sending them to the dental laboratory. Our study showed that the percentage of participants who disinfected dental casts, prostheses, metal frameworks, facebows, and the bite fork of the occlusal plane plate ranged from 66.7% to 82.6%, which was a little higher than the Riyadh study (62.6% to 68.6%), but similar to the Indian study (54.4% to 87.2%). It should be noted that rates of disinfection of the facebow and bite fork of Fox's occlusal plane plate were the lowest in all three studies. This may be because students, in general, might have problems using the facebow; some studies have implied that the use of a facebow is not taught in all dental colleges. Furthermore, and more importantly, our results and the two other studies showed that the majority of participants do rinse impressions with running water before sending them to the laboratory; however, while our study and the Riyadh study showed that the majority of the participants also disinfected them after rinsing, the Indian study did not show this result. Similar to previous studies, only a small percentage of the participants had not attended any theoretical lectures about infection control. However, there were 13.9% who had never attended any practical training for infection control, which was lower than that reported in previous studies (39.5%-40.6%). This might indicate that dental colleges in Jeddah had better coverage of this topic in continuing education workshops. Indeed, attendance at such lectures and workshops on an annual basis is associated with increased levels of compliance and awareness about infection control. Self-evaluation is considered to be key for understanding participants' future attitudes toward infection control in the prosthodontic clinic. Around 50% of the students rated their knowledge level as very poor or poor despite having good knowledge scores. Yet at the same time, the majority evaluated their implementation as fair to very good and were also satisfied with their levels of knowledge and performance. When comparing this to previous studies, this contradiction exists only with regard to self-evaluation of levels of knowledge. Previous studies have had low numbers of participants who rated themselves as having a poor knowledge level (3.5% to 3.9%). This might indicate that participants in Jeddah underestimate their levels of knowledge and have low self-confidence in their knowledge for unknown reasons.
One explanation is that participants might think there are areas they do not know or procedures they are not sure about. Self-confidence is usually associated with good clinical practice in dentistry, so this point can be addressed in future infection control continuing education. An important aspect of this study is that it investigated some practices that occur only in prosthetic clinics, such as the use of a facebow, whereas other studies about infection control have focused on general dentistry practices and do not focus on such areas. In addition, this study was conducted using four dental colleges, both private and governmental, in addition to one major governmental hospital, so it had more representative data in comparison to similar studies. In fact, our data also showed that there might be some differences in practices; our results indicated that governmental organizations seem to adhere better to infection control practices. Such a comparison had not been conducted in prior similar studies, where the data were taken from a single center. The study does have a few limitations worth mentioning. The number and distribution of participants were not representative of all of Saudi Arabia, and thus our results lack external validity. Also, the questionnaire did not include lab technicians, who play an important role in infection control at prosthodontic clinics. More studies are needed to investigate the policies at other dental colleges in order to make better recommendations to all stakeholders. Furthermore, future studies might investigate infection control precaution awareness toward Middle East Respiratory Syndrome (MERS), Severe Acute Respiratory Syndrome (SARS), and coronavirus in light of the recent pandemic occurring in the world, which has even reached Saudi Arabia. CONCLUSION Our results indicated that dental students and dentists have high levels of knowledge regarding appropriate practices for infection control in prosthodontic clinics. Few participants had not received any previous lectures or workshops about infection control. Moreover, despite half of the participants rating themselves as having poor knowledge, around three-fourths were fairly to totally satisfied with their levels of knowledge and also believed they were fairly good to very good at implementing infection control procedures. Our data were generally similar to previous local and international studies, with some variations that can be attributed to differences in policies and culture between universities. Future studies can be implemented with more generalized samples from Saudi universities, and it is recommended to investigate organizational policies to understand their effects on dental students and dentists in terms of compliance with infection control guidelines in prosthetic clinics. ETHICS APPROVAL AND CONSENT TO PARTICIPATE This study received the approval of the Institutional Review Board, Umm Al-Qura University, Saudi Arabia, with ethical approval number 151-19. HUMAN AND ANIMAL RIGHTS No animals were used in this research. All human research procedures followed were in accordance with the ethical standards of the committee responsible for human experimentation (institutional and national), and with the Helsinki Declaration of 1975, as revised in 2013. CONSENT FOR PUBLICATION All participants signed the consent before participating in the study.
AVAILABILITY OF DATA AND MATERIALS The data that support the findings of this study are available from the corresponding author upon reasonable request. FUNDING None.
#include <linux/videodev2.h> #include <poll.h> #include <sys/ioctl.h> #include "test.h" #define FOURCC_STR(str) v4l2_fourcc(str[0], str[1], str[2], str[3]) #define BUF_QUEUE_SIZE 5 struct cam_vid_pipe { uint32_t input_width, input_height; uint32_t output_x, output_y; uint32_t output_width, output_height; int cap_fd; uint32_t plane_id; struct framebuffer bufs[BUF_QUEUE_SIZE]; int prime_fds[BUF_QUEUE_SIZE]; uint32_t b1, b2, b3; }; static struct { int drm_fd; uint32_t crtc_id; uint32_t crtc_width; uint32_t crtc_height; struct cam_vid_pipe pipes[2]; } global; static void init_drm() { const char *card = "/dev/dri/card0"; global.drm_fd = drm_open_dev_dumb(card); } static void uninit_drm() { close(global.drm_fd); } static void find_crtc(int fd) { drmModeRes *res = drmModeGetResources(fd); ASSERT(res); for (int i = 0; i < res->count_crtcs; ++i) { uint32_t crtc_id = res->crtcs[i]; drmModeCrtc *crtc = drmModeGetCrtc(fd, crtc_id); ASSERT(crtc); if (!crtc->mode_valid) { drmModeFreeCrtc(crtc); continue; } global.crtc_id = crtc->crtc_id; global.crtc_width = crtc->width; global.crtc_height = crtc->height; drmModeFreeCrtc(crtc); drmModeFreeResources(res); return; } ASSERT(false); } static void create_bufs(struct cam_vid_pipe *pipe, int width, int height) { for (int n = 0; n < BUF_QUEUE_SIZE; ++n) { struct framebuffer *fb = &pipe->bufs[n]; drm_create_dumb_fb2(global.drm_fd, width, height, DRM_FORMAT_YUYV, fb); int r = drmPrimeHandleToFD(global.drm_fd, fb->planes[0].handle, DRM_CLOEXEC, &pipe->prime_fds[n]); ASSERT(r == 0); drm_draw_test_pattern(fb, 0); } } static void destroy_bufs(struct cam_vid_pipe *pipe) { for (int n = 0; n < BUF_QUEUE_SIZE; ++n) { struct framebuffer *fb = &pipe->bufs[n]; close(pipe->prime_fds[n]); drm_destroy_dumb_fb(fb); } } static void v4l2_init_capture(struct cam_vid_pipe *pipe, const char *viddev) { int r; int fd = open(viddev, O_RDWR | O_NONBLOCK); ASSERT(fd >= 0); struct v4l2_format fmt = { .type = V4L2_BUF_TYPE_VIDEO_CAPTURE, }; r = ioctl(fd, VIDIOC_G_FMT, &fmt); ASSERT(r == 0); fmt.fmt.pix.pixelformat = FOURCC_STR("YUYV"); fmt.fmt.pix.width = pipe->input_width; fmt.fmt.pix.height = pipe->input_height; r = ioctl(fd, VIDIOC_S_FMT, &fmt); ASSERT(r == 0); pipe->cap_fd = fd; struct v4l2_requestbuffers reqbuf = { .type = V4L2_BUF_TYPE_VIDEO_CAPTURE, .memory = V4L2_MEMORY_DMABUF, .count = BUF_QUEUE_SIZE, }; r = ioctl(pipe->cap_fd, VIDIOC_REQBUFS, &reqbuf); ASSERT(r == 0); } static void v4l2_queue_buffer(struct cam_vid_pipe *pipe, int index) { int r; int dmafd = pipe->prime_fds[index]; struct v4l2_buffer buf = { .type = V4L2_BUF_TYPE_VIDEO_CAPTURE, .memory = V4L2_MEMORY_DMABUF, .index = index, .m.fd = dmafd, }; r = ioctl(pipe->cap_fd, VIDIOC_QBUF, &buf); ASSERT(r == 0); } static int v4l2_dequeue_buffer(struct cam_vid_pipe *pipe) { int r; struct v4l2_buffer v4l2buf = { .type = V4L2_BUF_TYPE_VIDEO_CAPTURE, .memory = V4L2_MEMORY_DMABUF, }; r = ioctl(pipe->cap_fd, VIDIOC_DQBUF, &v4l2buf); ASSERT(r == 0 || errno == EAGAIN); if (r != 0 && errno == EAGAIN) return -1; int index = v4l2buf.index; return index; } static void v4l2_stream_on(struct cam_vid_pipe *pipe) { enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE; int r = ioctl(pipe->cap_fd, VIDIOC_STREAMON, &type); ASSERT(r == 0); } static void init_camera_video_pipe(struct cam_vid_pipe *pipe, const char *viddev, uint32_t iw, uint32_t ih, uint32_t ox, uint32_t oy, uint32_t ow, uint32_t oh) { pipe->input_width = iw; pipe->input_height = ih; pipe->output_x = ox; pipe->output_y = oy; pipe->output_width = ow; pipe->output_height = oh; 
	v4l2_init_capture(pipe, viddev);
	create_bufs(pipe, pipe->input_width, pipe->input_height);

	pipe->plane_id = drm_reserve_plane(global.drm_fd);

	/* Pre-queue every buffer before starting the stream. */
	for (int i = 0; i < BUF_QUEUE_SIZE; ++i)
		v4l2_queue_buffer(pipe, i);

	v4l2_stream_on(pipe);
}

static void free_camera_video_pipe(struct cam_vid_pipe *pipe)
{
	destroy_bufs(pipe);
	drm_release_plane(pipe->plane_id);
}

static void process_pipe_init(struct cam_vid_pipe *pipe)
{
	pipe->b1 = pipe->b2 = pipe->b3 = -1;
}

static void process_pipe(struct cam_vid_pipe *pipe)
{
	/* Requeue the oldest buffer; it is no longer scanned out. */
	if (pipe->b3 != -1) {
		v4l2_queue_buffer(pipe, pipe->b3);
		pipe->b3 = -1;
	}

	int b = v4l2_dequeue_buffer(pipe);
	if (b == -1)
		return;

	/* Shift the three-deep buffer history: b1 is the newest frame. */
	pipe->b3 = pipe->b2;
	pipe->b2 = pipe->b1;
	pipe->b1 = b;

	struct framebuffer *buf = &pipe->bufs[pipe->b1];

	int r;

	r = drmModeSetPlane(global.drm_fd, pipe->plane_id, global.crtc_id, buf->fb_id, 0,
		// output
		pipe->output_x, pipe->output_y, pipe->output_width, pipe->output_height,
		// input (16.16 fixed point)
		0 << 16, 0 << 16, buf->width << 16, buf->height << 16);
	ASSERT(r == 0);
}

int main(int argc, char **argv)
{
	init_drm();
	find_crtc(global.drm_fd);

	bool dual_camera = true;

	init_camera_video_pipe(&global.pipes[0], "/dev/video0", 800, 600, 0, 0, global.crtc_width, global.crtc_height);
	if (dual_camera)
		init_camera_video_pipe(&global.pipes[1], "/dev/video1", 800, 600, 25, 25, global.crtc_width / 3, global.crtc_height / 3);

	process_pipe_init(&global.pipes[0]);
	if (dual_camera)
		process_pipe_init(&global.pipes[1]);

	/* Poll stdin (to quit) and each active capture fd; a negative fd is ignored by poll(). */
	struct pollfd fds[] = {
		{ .fd = 0, .events = POLLIN },
		{ .fd = global.pipes[0].cap_fd, .events = POLLIN },
		{ .fd = dual_camera ? global.pipes[1].cap_fd : -1, .events = POLLIN },
	};

	while (true) {
		int r = poll(fds, ARRAY_SIZE(fds), -1);
		ASSERT(r > 0);

		if (fds[0].revents != 0)
			break;

		process_pipe(&global.pipes[0]);
		if (dual_camera)
			process_pipe(&global.pipes[1]);
	}

	if (dual_camera)
		free_camera_video_pipe(&global.pipes[1]);
	free_camera_video_pipe(&global.pipes[0]);

	uninit_drm();

	fprintf(stderr, "exiting\n");

	return 0;
}
From the December 7, 2015, issue of NR When it comes to due process on campus, Republicans in Congress, who campaigned on vows to rein in the Obama administration’s abuses of executive power, have largely acquiesced in its bureaucratic imposition of quasi-judicial tyranny. For more than four years, the White House and the Education Department’s Office for Civil Rights (OCR) have used an implausible reinterpretation of a 1972 civil-rights law to impose mandates unimagined by the law’s sponsors. It has forced almost all of the nation’s universities and colleges to disregard due process in disciplinary proceedings when they involve allegations of sexual assault. Enforced by officials far outside the mainstream, these mandates are having a devastating impact on the nation’s universities and on the lives of dozens — almost certainly soon to be hundreds or thousands — of falsely accused students. One might have expected an aggressive response by House Republicans to such gross abuses of power — including subpoenas, tough oversight hearings, and corrective legislation. Instead, most of them have been mute. In the Senate, meanwhile, presidential candidate Marco Rubio of Florida, Judiciary Committee chairman Charles Grassley of Iowa, and rising star Kelly Ayotte of New Hampshire have teamed with Democratic demagogues Kirsten Gillibrand of New York and Claire McCaskill of Missouri in co-sponsoring a bill that would make matters even worse. The authors of this article are not partisan critics. One of us is an independent, the other a Democrat who twice voted for Obama and donated to his presidential campaign. But when the president and his party go rogue, it is the duty of the loyal opposition to blow the whistle and fight back. The administration’s crusade against due process for students accused of sexual assault has relied on Title IX of the Educational Amendments Act of 1972, a brief, unadorned provision that simply prohibits sex discrimination in federally funded educational institutions. It has most famously been used against gender imbalances in college athletics and, more recently, in scientific and technical fields, but in the act’s first 39 years, no administration claimed that Title IX gave the federal government authority to micromanage university disciplinary procedures. Barack Obama and his appointees adopted a radically different approach. In April 2011, the OCR sent college administrators a 19-page “Dear Colleague” letter that ordered colleges and universities that receive federal funds (as almost all do) to change their disciplinary procedures regarding sexual assault. Each of the required changes — reducing the burden of proof in campus sex cases (and only in those cases) from “clear and convincing evidence” to “preponderance of the evidence,” introducing a form of double jeopardy by allowing accusers to appeal not-guilty findings, and demanding accelerated investigations that hamper the ability of accused students to gather evidence to defend themselves — increased the likelihood of guilty findings. Worst of all, the letter “strongly” discouraged institutions from allowing an accused student to cross-examine his accuser.
And a 2014 missive from the OCR threatened schools that allow such cross-examination — “the greatest legal engine ever invented for the discovery of truth,” as the Supreme Court has repeatedly called it — with a charge of “perpetuat[ing] a hostile environment,” which is illegal. Over the four and a half years since the first letter, the White House and the OCR have escalated, in ways too numerous to detail here, their attacks on due process — and on freedom of speech and academic freedom — in the guise of punishing sexual harassment. No federal law or court decision provides a grain of support for such bureaucratic tyranny. This situation cries out for legislative oversight, but, despite controlling the House since 2011 and the Senate since this January, the Republican response to the administration’s lawless evisceration of campus due process has been puny. Congress has subjected Obama’s two heads of the OCR — assistant secretaries Russlynn Ali and, since 2013, Catherine Lhamon — to just six minutes of challenging questions, all in a June 2014 oversight hearing, when Tennessee senator Lamar Alexander quite effectively pressed Lhamon to explain how her agency could make law by merely sending “detailed guidance for 22 million students on 7,200 campuses.” Stressing that the “guidance” could be no more than “your whim,” Alexander asked: “Who gave you the authority to do that?” “With gratitude, you did, when I was confirmed,” shot back Lhamon, oozing disdain for the former secretary of education. Six committee Democrats defended Lhamon, and no other Republican senator even attended the hearing. Apart from Alexander’s efforts, virtually the only congressional resistance to Obama’s campus agenda has come from House Republicans Matt Salmon (Ariz.), Pete Sessions (Texas), and Kay Granger (Texas). They recently introduced the Safe Campus Act, which would commendably cripple the Obama administration’s efforts to channel rape cases away from law enforcement and into college disciplinary proceedings. It would also ensure somewhat fairer hearings. This proposal has the support of the National District Attorneys Association, civil-liberties advocates, and families who say their sons have been harmed by false accusations and railroaded by campus kangaroo courts. In mid September, the House Education and Workforce Committee convened its first hearing on campus rape. North Carolina Republican Virginia Foxx, who chaired the hearing, sounded like the OCR’s Lhamon in her opening remarks, citing the resoundingly discredited claim that one in five women is sexually assaulted while at college. Even though the majority Republicans selected three of the four witnesses, only one, Joseph Cohn of the Foundation for Individual Rights in Education, unequivocally defended campus due process. The hearing’s climate was captured by Representative Jared Polis (D., Colo.), who asserted: “If there are ten people who have been accused, and under a reasonable-likelihood standard maybe one or two did it, it seems better to get rid of all ten people.” In a scene that would have made the framers of the Constitution weep, campus-rape activists in the hearing room applauded this effusion. (Polis later issued a half-hearted retraction.)
Meanwhile, powerful Senate Republicans have jumped onto Obama’s anti-due-process bandwagon. Six of them, led by Rubio, Grassley, and Ayotte, joined Gillibrand, McCaskill, and four other Democrats in co-sponsoring the benign-sounding but dangerous Campus Accountability and Safety Act (CASA). These Republicans are keeping bad company. Gillibrand, for example, published two statements branding a Columbia University student a “rapist” even though he had been cleared by the university and the police had found no basis for charging him. McCaskill, ignoring two generations of progress in the way police and prosecutors approach rape allegations, oddly asserted that “the criminal-justice system has been very bad, in fact much worse than the military and much worse than college campuses, in terms of addressing victims and supporting victims and pursuing prosecutions.” With key Republicans along for the ride, McCaskill and Gillibrand produced a bill designed to advance the administration’s agenda. Its language presumes the guilt of all students accused of sexual assault by repeatedly calling accusers who have not yet substantiated their claims “victims,” without the critical qualifier “alleged.” CASA would also order colleges to provide a “confidential advisor” for these “victims,” with no comparable help for the accused. And it would require universities to publish data on the outcomes of their campus sexual-assault cases (which only Yale does now), apparently in the hope that doing so will invite Title IX complaints against any college that finds an insufficient number of accused students guilty. Further, McCaskill has said that CASA, by making adjudication processes uniform for all institutions, is designed to help “remove the underpinning of . . . lawsuits” by accused students who say they were railroaded. No wonder McCaskill believes that “victims” might see themselves as “better off doing the Title IX process” than going through the criminal-justice system. The Washington Examiner’s Ashe Schow asked each sponsoring senator’s office how CASA would ensure due process for accused students. An Ayotte spokesperson declined to answer Schow’s questions, justifying the senator’s co-sponsorship by repeating the canard that one in five college women is sexually assaulted. A Rubio spokesperson replied, “This bill does not address this issue.” When asked whether college officials or law enforcement would have the most authority to investigate allegations, the spokesperson responded: “The victim will have the most authority.” This reflected (at best) an astonishing misunderstanding both of the need for impartial adjudication of such serious charges and of the fact that at the investigative stage there is no “victim”; there are an accuser and an accused. Why have Republicans abandoned their duty to expose and oppose Obama’s disregard for basic fairness on the matter of campus tribunals for alleged sexual assault? Part of the reason is fear of the “war on women” demagoguery that greets any Republican challenge to any Obama-administration policy involving gender. In addition, some social conservatives seem intent on taking advantage of the current alarm about sexual relations on campus to try to restore traditional gender norms there — a lost cause.
And protecting the civil liberties of people accused of violent crimes has never been a high priority for most Republicans, who (like most other Americans) remain ignorant of both the railroading of innocent students and the radical nature of Obama’s campus agenda. Most have been misled by the administration’s allies in politics, academia, and the media to believe three myths: that a campus rape epidemic exists, that it’s getting worse, and that almost all accused males are guilty. None of these things is true. While rape is a very serious problem for women in their late teens and 20s, the best data show that roughly one in 30 (not one in five, as Obama and his allies claim) women are sexually assaulted while in college; that they are safer on campus than off; that the campus rape rate has plunged since 1997; and that false or likely false accusations are not uncommon, albeit impossible to quantify with confidence. On the last point, accusations against innocent students seem to be increasing at colleges, where accusers are urged by campus sex bureaucrats, professors, and activists to report dubious or simply false allegations. Institutions of higher learning also tend to define rape and sexual assault far more broadly than either the criminal law or common understanding, as in the suggestion that sex with a partner who in any degree is intoxicated constitutes sexual assault. Far from acquiescing in “rape culture” as sensationalized by the media, America’s universities are in the grip of a dangerous presume-guilt-and-rush-to-judgment culture, driven by the Obama administration. An entire generation of college students is learning to disregard due process and the dispassionate evaluation of evidence. And dozens of clearly or at least probably innocent students, whose cases we will detail in a book we are now writing, have been branded sex criminals, been railroaded out of their universities, and seen their hopes and dreams ruined. Their persecutors include Amherst, Brandeis, Colgate, Columbia, Harvard, Miami of Ohio, Michigan, Michigan State, Middlebury, Occidental, St. Joseph’s, Swarthmore, the University of California–San Diego, the University of North Dakota, the University of Tennessee–Chattanooga, Vassar, and Washington and Lee, among others. And given the opacity of the college disciplinary process, those cases are almost certainly just a small part of the total, as hundreds of other similar injustices remain veiled in secrecy. How can Republicans improve on their lamentable acceptance of these Obama-driven abuses? Electing a president protective of campus due process would be the best hope, but it also seems the most unlikely. No Republican presidential candidate has spoken up for campus due process, and Senator Rubio appears to be part of the problem, not the solution. Of course, Hillary Clinton likely would make things even worse. Taking the Obama OCR to court also offers only limited hope. While the courts have upheld some lawsuits filed against universities by falsely accused and wrongly expelled students, the obstacles to suing federal agencies such as the OCR for abusing their power are almost insuperable.
And pro-due-process legislation, such as the Safe Campus Act, is probably doomed in the Senate even if it can clear the likes of Virginia Foxx in the House. But there is still much that an awakened Congress and state governments can do to limit the damage, to mobilize public opinion in support of fairness, and to prevent demagogues such as Gillibrand and other Obama allies from doing to the criminal law what they have already done to campus discipline. Senator Alexander has made a start by focusing on drunk-with-power bureaucrats wildly overreaching their authority. In a September 23, 2015, hearing, he extracted from Amy McIntosh, a deputy assistant secretary of education, the admission that “guidance that the department issues does not have the force of law.” This after more than four years during which the OCR had enforced its “guidance” letters to universities as though it did have such authority. Republican-run oversight committees should put Catherine Lhamon on television at every opportunity. Members could start by asking her about her recent, preposterous suggestion that because colleges “are equipped to investigate . . . plagiarism or drug dealing,” they are competent to police alleged sex crimes. Plagiarism is not a crime, let alone a violent one. And it’s hard to imagine how colleges could even begin to investigate a serious criminal offense such as drug dealing. Oversight committees also should demand documents from the administration regarding the origins of the 2011 “Dear Colleague” letter. How much was the White House involved? Was this part of Obama’s political strategy of mobilizing the Democratic base by aggressively using executive power to promote their causes? Did anyone worry about the certainty that innocent as well as guilty accused students would be expelled as rapists? What did the document’s drafts say? Why has the OCR told universities that they can’t require sexual-assault accusers to report their complaints to police? Do any other federal agencies discourage reporting felony offenses to law enforcement? Does the administration hold the view that police are hostile to victims? Why the almost exclusive focus on alleged victims at colleges, and not on the far more numerous, less privileged women for whom the police are the only recourse? As for the criminal law, the prestigious American Law Institute is now considering proposals to criminalize sexual relations as they have been routinely and consensually practiced throughout human history. Whenever a woman claims that she did not give “affirmative consent,” either verbally or with unequivocal nonverbal cues, to a recent or long-past sexual encounter that her partner reasonably considered consensual, that would be rape. As has been seen at colleges that have adopted the standard, the effect of the change would be to shift the burden of proof from the accuser (and the state) to the accused, undermining the presumption of innocence in the process. Will Republicans wake up in time to stop such lunacy? – Stuart Taylor Jr. is a writer based in Washington, D.C. KC Johnson is a history professor at Brooklyn College and the CUNY Graduate Center. They co-authored Until Proven Innocent: Political Correctness and the Shameful Injustices of the Duke Lacrosse Rape Case. This article originally appeared in the December 7, 2015, issue of National Review.
Vedera History The band members all grew up the Kansas City suburb of Blue Springs, Missouri. Kristen May was born into a musical family and her mother introduced her to the work of Joni Mitchell, Carole King, and Jim Croce. When May was 17, her father, a drummer, gave her a guitar and she began writing her own songs. Meanwhile, Brian Little's father gave his son his first guitar at age 13 and Little played in bands throughout high school. Little loved classic guitarists like Jimi Hendrix, Jimmy Page, and Lindsey Buckingham, whom he cites as someone who "knows how to add melody to a song, but not take away from what the singer is trying to say." May and Little knew each other in high school and began playing in a band together called, Red Authentic, after May returned home following an aborted stint at college in Nashville. "I didn't want to study music anymore, I just wanted to play it", she says. In 2003, the band she had formed with her brothers needed a guitarist and they called on Little. However, with her brothers unable to travel and pursue the career musician life that May and Little craved, they asked Little's high-school friend Jason Douglas to join and also recruited Brian's younger brother Drew to play drums. The band was renamed "Veda". Veda to Vedera In the summer of 2004, Veda recorded its six-song EP, This Broken City, at Black Lodge with the support of Ed Rose. In 2005, the band released an LP entitled The Weight of an Empty Room. In early December 2005, Veda's label, Second Nature Recordings, announced that the band had been forced to change its name from "Veda" to "Vedera" to avoid legal complications with another band of "a similar moniker". Though the other band spells their name as Vaeda, it's pronounced exactly the same way and had caused them several instances of confusion in the marketplace. Since the summer of 2005, the band has been touring at various times with Dredg, MewithoutYou, Communiqué, Thrice, Mae, and Underoath. In early January 2006, Vedera went on a 40 city tour opening for Mutemath, followed by an opening stint for Owen. In the Summer of 2006, Vedera joined a tour with Lucero and Murder By Death. After a month off, they headed back on the road for another tour with Mae and The New Amsterdams. Kristen May and Brian Little subsequently married. In 2007, Vedera signed to Epic Records and were featured on the hit T.V. show, The Hills. They were also featured on iTunes and played their single "Satisfy" on Ellen. May also sang with the lead singer of The All American Rejects and Jason Mraz while they were touring with them. Vedera opened for Eisley on their Combinations tour in early 2008. In early 2009 Vedera opened for The Fray on their club tour and also performed in the spring with The All-American Rejects on the I Wanna Rock tour. Stages Recording of their album Stages began October 17, 2007. It was released online on October 6, 2009. May suggested Stages is about "our blood, sweat, and tears over the past five years as a band on the road." Vedera recorded Stages with Epic Records A&R executive Mike Flynn, who co-produced the Fray's double-platinum album How to Save A Life and Augustana's 2008 album Can't Love, Can't Hurt, and Producer/Engineer Warren Huart (The Fray, Augustana, Howie Day). "Mike is like our George Martin," Little says. "He has a great sense of melody," May adds. The songs begin with May and Little, then the pair bring their ideas to Jason and Drew who "take it to the next level," May says. 
"That's the great thing about being in a band. Brian and I could have been a couple of folk songwriters, but Jason and Drew really make us who we are as a band." Breakup In 2011, Vedera disbanded and announced that May and Brian will be working together on May's new solo album. On October 22, 2012, it was announced that May joined the band Flyleaf as lead vocalist.
I'M A sportswriter, the high school sports coordinator for the Houston Chronicle — a big deal in a state where passion for high school football runs high. This year, Texas' state high school football finals return to Houston for the first time since 2008. So during the season, each week, the Chronicle has highlighted a different champion that won a title in Houston. The story this week was about the 1991 Killeen Kangaroos, who beat Dulles 14-10 at the Astrodome for the Class 5A Division I state title. It felt personal: I remember how much the town needed that win. IT WAS Jan. 17, 1991. We sat in the bleachers inside the gym. There wasn't a game. My parents, my sisters, myself and hundreds just like us were waiting. At any moment, we would watch our dad and his fellow soldiers line up in formation, say their goodbyes and board buses headed for the airfield. For many of us, if it wasn't a father or mother, it was a guardian, uncle, friend or loved one. Everyone knew someone involved in the Gulf War, which shifted from Operation Desert Shield, the deployment of troops and defenses to Saudi Arabia, to Operation Desert Storm, the combat phase against Iraq's invasion of Kuwait, the night my dad left. I was nine. My dad, SFC Angel Verdejo, was in air defense with the 1st Cavalry Division. But to me at that age, there wasn't much difference between being miles away from immediate danger firing ground-to-air missiles and being on the front lines. That was just how things were in Killeen — a town attached at the hip to Fort Hood, one of the largest military bases on the planet. Thousands of soldiers were gone. The town's businesses, which depend on the military, suffered. And families counted down the days until loved ones could return home. AMAZINGLY, THE year got harder. On October 16, 1991, a man named George Hennard, Jr., crashed his pickup truck into Killeen's Luby's. It was lunchtime, crowded. He began to shoot. People hid anywhere they could. In the end, 24 people were dead, including Hennard, and 27 wounded. My friends and I were in school. We didn't know what had happened until later. We'd been worried about our loved ones in Iraq. And now, we found out, Killeen wasn't safe either. THE KILLEEN Kangaroos had missed the playoffs in 1990, and nearly missed them again in 1991. But after the Luby's shooting, each week was magical. Beating Austin Johnston on penetrations. Holding off Jersey Village by one point and both Tyler Lee and San Angelo Central by four. My sister, as part of the band, traveled to Houston for the finals. I stayed home, at our house on Starlight Drive, glued to the TV broadcast of the game. (Somewhere, my parents still have the VHS recording.) Dulles had a stout run game and beat perennial powerhouse Converse Judson in the semifinals. The game was a matchup of the Vikings' power against Killeen's speed. The biggest series was the defensive stand Killeen made late, highlighted by Dion Marion sacking the Dulles quarterback. Marion was the star running back, but like Billy Spiller and Charles West, played defense because it was best for the team. The most memorable play was the 55-yard pass from quarterback Spiller to receiver West in double coverage. It was a trick play, one the newspaper broke down later: It started with one toss, then another to a receiver, and finally back in the hands of Spiller, who threw downfield to the endzone. Dulles had the play defended well. Thinking back, Spiller didn't make the best decision.
But West was the go-to receiver, and he came down with it. At the house on Starlight Drive, we went wild. That night, we went to the school and waited at the field house. When those two charter buses rolled in, the place went nuts. The Roos had given us something. They gave us a reason to smile, cheer and laugh. A reason to think everything was going to be okay. AFTER I finished writing the Chronicle story, I sent a text to West, now a football coach here in Houston, thanking him for his help with it. "It still gives me chills," he texted back. Me too. Angel Verdejo, Jr., is the Chronicle's high school sports coordinator — and a Killeen High graduate, class of 1999.
# Copyright 2008-2009 by <NAME>. All rights reserved. # This code is part of the Biopython distribution and governed by its # license. Please see the LICENSE file that should have been included # as part of this package. #TODO - Clean up the extra files created by clustalw? e.g. *.dnd #and *.aln where we have not requested an explicit name? from Bio import MissingExternalDependencyError import sys import os import subprocess from Bio import Clustalw #old and obsolete from Bio.Clustalw import MultipleAlignCL #old and obsolete from Bio import SeqIO from Bio import AlignIO from Bio.Align.Applications import ClustalwCommandline #new! ################################################################# clustalw_exe = None if sys.platform=="win32": #TODO - Check the path? try: #This can vary depending on the Windows language. prog_files = os.environ["PROGRAMFILES"] except KeyError: prog_files = r"C:\Program Files" #Note that EBI's clustalw2 installer, e.g. clustalw-2.0.10-win.msi #uses C:\Program Files\ClustalW2\clustalw2.exe so we should check #for that. # #Some users doing a manual install have reported using #C:\Program Files\clustalw.exe # #Older installers might use something like this, #C:\Program Files\Clustalw\clustalw.exe # #One particular case is www.tc.cornell.edu currently provide a #clustalw1.83 installer which uses the following long location: #C:\Program Files\CTCBioApps\clustalw\v1.83\clustalw1.83.exe likely_dirs = ["ClustalW2", "", "Clustal","Clustalw","Clustalw183","Clustalw1.83", r"CTCBioApps\clustalw\v1.83"] likely_exes = ["clustalw2.exe", "clustalw.exe", "clustalw1.83.exe"] for folder in likely_dirs: if os.path.isdir(os.path.join(prog_files, folder)): for filename in likely_exes: if os.path.isfile(os.path.join(prog_files, folder, filename)): clustalw_exe = os.path.join(prog_files, folder, filename) break if clustalw_exe : break else: import commands #Note that clustalw 1.83 and clustalw 2.0.10 don't obey the --version #command, but this does cause them to quit cleanly. Otherwise they prompt #the user for input (causing a lock up). 
output = commands.getoutput("clustalw2 --version") if "not found" not in output and "CLUSTAL" in output.upper(): clustalw_exe = "clustalw2" if not clustalw_exe: output = commands.getoutput("clustalw --version") if "not found" not in output and "CLUSTAL" in output.upper(): clustalw_exe = "clustalw" if not clustalw_exe: raise MissingExternalDependencyError(\ "Install clustalw or clustalw2 if you want to use Bio.Clustalw.") ################################################################# print "Checking error conditions" print "=========================" print "Empty file" input_file = "does_not_exist.fasta" assert not os.path.isfile(input_file) cline = MultipleAlignCL(input_file, command=clustalw_exe) try: align = Clustalw.do_alignment(cline) assert False, "Should have failed, returned %s" % repr(align) except IOError, err: print "Failed (good)" #Python 2.3 on Windows gave (0, 'Error') #Python 2.5 on Windows gives [Errno 0] Error assert "Cannot open sequence file" in str(err) \ or "not produced" in str(err) \ or str(err) == "[Errno 0] Error" \ or str(err) == "(0, 'Error')", str(err) print print "Single sequence" input_file = "Fasta/f001" assert os.path.isfile(input_file) assert len(list(SeqIO.parse(open(input_file),"fasta")))==1 cline = MultipleAlignCL(input_file, command=clustalw_exe) try: align = Clustalw.do_alignment(cline) assert False, "Should have failed, returned %s" % repr(align) except IOError, err: print "Failed (good)" assert "has only one sequence present" in str(err) except ValueError, err: print "Failed (good)" assert str(err) == "No records found in handle" #Ideally we'd get an IOError but sometimes we don't seem to #get a return value from clustalw. If so, then there is a #ValueError when the parsing fails. print print "Invalid sequence" input_file = "Medline/pubmed_result1.txt" assert os.path.isfile(input_file) cline = MultipleAlignCL(input_file, command=clustalw_exe) try: align = Clustalw.do_alignment(cline) assert False, "Should have failed, returned %s" % repr(align) except IOError, err: print "Failed (good)" #Ideally we'd catch the return code and raise the specific #error for "invalid format", rather than just notice there #is not output file. #Note: #Python 2.3 on Windows gave (0, 'Error') #Python 2.5 on Windows gives [Errno 0] Error assert "invalid format" in str(err) \ or "not produced" in str(err) \ or str(err) == "[Errno 0] Error" \ or str(err) == "(0, 'Error')", str(err) ################################################################# print print "Checking normal situations" print "==========================" #Create a temp fasta file with a space in the name temp_filename_with_spaces = "Clustalw/temp horses.fasta" handle = open(temp_filename_with_spaces, "w") SeqIO.write(SeqIO.parse(open("Phylip/hennigian.phy"),"phylip"),handle, "fasta") handle.close() #Create a large input file by converting another example file #(See Bug 2804, this will produce so much output on stdout that #subprocess could suffer a deadlock and hang). Using all the #records should show the deadlock but is very slow - just thirty #seems to lockup on Mac OS X, even 20 on Linux (without the fix). 
temp_large_fasta_file = "temp_cw_prot.fasta" handle = open(temp_large_fasta_file, "w") records = list(SeqIO.parse(open("NBRF/Cw_prot.pir", "rU"), "pir"))[:40] SeqIO.write(records, handle, "fasta") handle.close() del handle, records for input_file, output_file, newtree_file in [ ("Fasta/f002", "temp_test.aln", None), ("GFF/multi.fna", "temp with space.aln", None), ("Registry/seqs.fasta", "temp_test.aln", None), ("Registry/seqs.fasta", "temp_test.aln", "temp_test.dnd"), ("Registry/seqs.fasta", "temp_test.aln", "temp with space.dnd"), (temp_filename_with_spaces, "temp_test.aln", None), (temp_filename_with_spaces, "temp with space.aln", None), (temp_large_fasta_file, "temp_cw_prot.aln", None), ]: #Note that ClustalW will map ":" to "_" in it's output file input_records = SeqIO.to_dict(SeqIO.parse(open(input_file),"fasta"), lambda rec : rec.id.replace(":","_")) if os.path.isfile(output_file): os.remove(output_file) print "Calling clustalw on %s (with %i records)" \ % (repr(input_file), len(input_records)) print "using output file %s" % repr(output_file) if newtree_file is not None: print "requesting output guide tree file %s" % repr(newtree_file) #Prepare the command... cline = MultipleAlignCL(input_file, command=clustalw_exe) cline.set_output(output_file) if newtree_file is not None: cline.set_new_guide_tree(newtree_file) #Run the command... align = Clustalw.do_alignment(cline) #Check the output... print "Got an alignment, %i sequences" % (len(align)) #The length of the alignment will depend on the version of clustalw #(clustalw 2.0.10 and clustalw 1.83 are certainly different). output_records = SeqIO.to_dict(SeqIO.parse(open(output_file),"clustal")) assert set(input_records.keys()) == set(output_records.keys()) for record in align: assert str(record.seq) == str(output_records[record.id].seq) assert str(record.seq).replace("-","") == \ str(input_records[record.id].seq) #Clean up... os.remove(output_file) #Check the DND file was created. #TODO - Try and parse this with Bio.Nexus? if newtree_file is not None: tree_file = newtree_file else: #Clustalw will name it based on the input file tree_file = os.path.splitext(input_file)[0] + ".dnd" assert os.path.isfile(tree_file), \ "Did not find tree file %s" % tree_file os.remove(tree_file) #And again, but this time using Bio.Align.Applications wrapper #Any filesnames with spaces should get escaped with quotes automatically. #Using keyword arguments here. cline = ClustalwCommandline(clustalw_exe, infile=input_file, outfile=output_file) assert str(eval(repr(cline)))==str(cline) if newtree_file is not None: #Test using a property: cline.newtree = newtree_file #I don't just want the tree, also want the alignment: cline.align = True assert str(eval(repr(cline)))==str(cline) #print cline child = subprocess.Popen(str(cline), stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=(sys.platform!="win32")) output, error = child.communicate() return_code = child.returncode assert return_code == 0 assert output.strip().startswith("CLUSTAL") assert error.strip() == "" align = AlignIO.read(open(output_file), "clustal") assert set(input_records.keys()) == set(output_records.keys()) for record in align: assert str(record.seq) == str(output_records[record.id].seq) assert str(record.seq).replace("-","") == \ str(input_records[record.id].seq) #Clean up... del child os.remove(output_file) #Check the DND file was created. #TODO - Try and parse this with Bio.Nexus? 
if newtree_file is not None: tree_file = newtree_file else: #Clustalw will name it based on the input file tree_file = os.path.splitext(input_file)[0] + ".dnd" assert os.path.isfile(tree_file), \ "Did not find tree file %s" % tree_file os.remove(tree_file) #Clean up any stray temp files.. if os.path.isfile("Fasta/f001.aln"): os.remove("Fasta/f001.aln") if os.path.isfile("Medline/pubmed_result1.aln"): os.remove("Medline/pubmed_result1.aln") if os.path.isfile(temp_filename_with_spaces): os.remove(temp_filename_with_spaces) if os.path.isfile(temp_large_fasta_file): os.remove(temp_large_fasta_file) print "Done"
HARRIS COUNTY, Texas - Judge Ed Emmett served as Harris County Judge from 2007-2018. He is now a Channel 2 political analyst. Emmett said he's reluctant to criticize Harris County Judge Lina Hidalgo, but did answer some questions about his view of her performance during the ITC/Deer Park fire. Eisenbaum: "What grade would you give to your replacement, Hidalgo, in her handling of the ITC fire?" Emmett: "I don't think I'm going to get into giving out grades. I do think the concern is you have to get information out quicker, and this was her first attempt and, OK, there was a learning curve there. I do worry more about the coordination. We didn't know what was burning and we didn't know where the plume was going or where it was going to come down and I think that's the concern that everybody has to have." Eisenbaum: "What do you think about her press conferences?" Emmett: "The first press conference, the company wasn't there, the state wasn't there. If you drag it on and it's an hour and you are having all of these people coming back and forth to the microphone. People don't have time for that, they want facts and they want them quick." Eisenbaum: "What's your best advice moving forward?" Emmett: "The most important thing now is to say, 'OK, what can we do better?' That's always a learning experience, and the professionals in the county will do that."
• WEBSITE: http://mysolartab.com • TWITTER: https://twitter.com/solartab • FACEBOOK: https://www.facebook.com/solartab • GOOGLE PLUS: http://plus.google.com/+Mysolartab • INSTAGRAM: http://instagram.com/solartab • PINTEREST: http://www.pinterest.com/solartab/ • TUMBLR: http://solartab.tumblr.com/ The Solartab is unlike any charger on the market today. A truly unique product featuring great design, a powerful solar panel and a built-in cover that ensures maximum sun exposure at all times. Solartab: the solar charger you’ll want to take with you everywhere you go! Check out our video (above) to see what the Solartab looks like in action! MEET THE SOLARTAB Welcome, and thanks for taking a look at the Solartab – the powerful, portable and beautiful solar charger: • A PREMIUM SOLAR CHARGER: The Solartab was born out of the idea of creating a high quality solar charger with attractive design and a truly efficient solar panel. Solar power should never be a mere gimmick – on the contrary, we believe it’s the future of power consumption! A tablet-sized solar charger makes for a powerful and essential asset that truly liberates you from the power outlets of your home. Every day. • UNIQUE COVER: We believe in smart design. Our unique cover not only gives the Solartab a stunning and nostalgic look, it also works as a stand for the solar panel. Using the stand, you can angle the panel in three different positions, allowing your phone or tablet to always charge as efficiently as possible. Find the right angle for maximum exposure to sunlight, plug in your device, and start charging! What’s more, the cover will protect the Solartab while it’s tucked away in your bag – always ready to join you wherever you’re headed next. • A POWERFUL ASSET - DAY AND NIGHT: Thanks to the Solartab’s large solar panel you can always charge your device directly from the sun. Even while listening to your favorite summer tune or browsing the Internet at the same time. But we also equipped the Solartab with a high-capacity internal battery, so you can even store solar power – and give your tablet another full charge after sunset. Just whip it out during the day, no matter where you are, and you’ve always got power stored for when you really need it! • PUT THE POWER INTO YOUR OWN HANDS: With two USB ports and a micro USB port, the Solartab is ready to keep any of your mobile devices going at all times. And instead of simply leeching energy out of your home’s power outlets, your devices will become truly independent units – taking advantage of the free and green energy that the sun provides right outside of your doorstep. An environmentally friendly, portable and never-ending source of energy, always ready for use: The Solartab is the perfect companion to your favorite gadgets! Here are a few situations where you can use the Solartab: • The park • The beach • On a rooftop • Your favorite café • Music festivals • When hiking • On a boat • A clearing in the middle of the forest • At picnics • On savannahs or the outback • In your backyard with some brews • On a desert island • At outdoor parties • On the Great Wall of China • In your grandma’s orchard WHERE DESIGN MEETS CONVENIENCE The Solartab started out with a basic idea: How can we create a powerful and environmentally friendly charger that people want to use all the time? To us, it was important to create a product that people would embrace as one of their favorite belongings.
And add to their list of essentials that they always take with them, every time they leave home. Drawing from our previous experience with solar panels, professionally and as consumers, we knew that we wanted a bigger solar panel. Something that could truly charge your devices efficiently. Too many solar chargers on the market today feature solar panels that simply are too small – barely strong enough to charge an iPhone within a reasonable amount of time. We wanted to create a solar charger strong enough to effectively charge even devices like the iPad. A charger that would allow its user to be mobile everywhere, anytime. So we increased the size and the quality of the solar panel, way above what you’ll find on the average charger on the market today. And already after our quick initial research and drafts we found that we could still create a slim, light and perfectly portable product. A product that would still be the size of a magazine. And weigh the same. We couldn’t find a solar charger on the market that did everything we wanted, so we decided to design one. And after a lot of work, we’re ready to make our dream product a reality: A charger more powerful, reliable, good looking and portable than anything out there! After all, why should solar power be an inconvenient gimmick? We believe solar power should rather be a fully integrated part of your daily life. We wanted to create a solar charger featuring beautiful and smart design. A robust aluminium frame encloses the Solartab's highly efficient solar panel, giving the product its clean, minimalist look. The cover is made out of soft, high grade polyurethane, which is both durable and water resistant – so don’t worry about placing it on wet grass. An elegant elastic band and strong magnets hold the cover firmly closed, while adding to the Solartab’s irresistible and timeless notebook look. And they make sure it will stay closed and safe when you put it in your bag. Simply clever Scandinavian design. But there is more. Since solar panels can only fully release their potential when sunlight hits them as directly as possible, we designed the cover to also serve as a stand, which always ensures maximum efficiency. Thanks to the stand’s three different positions, the Solartab’s solar panel can always be adjusted to the current position of the sun. On top of that we have equipped the Solartab with a powerful internal battery. This means you can charge your devices even after the sun has set! As long as you let it soak up sun during the day until it’s full, the internal battery will always be able to provide a full charge on an iPad – and more. With the Solartab, you’ve always got complete independence in all your charging needs! Through our long and thorough product development process, we constantly strived to design a product that would truly change the way you perceive and use solar chargers. Through our uncompromising and perfectionist approach, all the way from early sketches to physical prototypes to the final product, we believe we have made the Solartab the best portable solar charger there is. It combines efficient solar charging with an attractive and smart design. The result is a truly unique product that will become your trusted everyday companion. Power everywhere! JOIN US: TAKE ENERGY INTO YOUR OWN HANDS, AND BRING IT WITH YOU EVERYWHERE For urban people enjoying an active, modern and versatile lifestyle, having constant access to phones and tablets is nothing less than essential.
And as technology to an ever greater degree leaves the comfort and the power outlets of one’s home, the demand for alternate power sources increases. Our generation is also an environmentally conscious one. We bear great responsibility upon our shoulders: As it becomes ever more clear how unsustainable the power consumption of previous generations has been, we are constantly on the lookout for more environmentally friendly solutions to our ever increasing energy needs. Using solar power to an increasing degree is exactly the direction we should be heading. By the end of the day, it is up to us individuals to change the world. And the fact is, we’re currently in the middle of a revolution that empowers the individual in many different fields – slowly weakening what previously used to be the rock-hard centres of power, one step at a time. "Energy has traditionally been generated centrally, distributed over power lines and sold to consumers. (…)The Edison Electric Institute (EEI), a trade group, warns that distributed generation could do to energy companies what the internet did to newspapers." - The Economist on solar power, March 8th, 2014 We’re living in an exciting age. Since fairly recently, the computer and the internet have made it possible for anyone to record music or videos, or write books, and distribute them on their own. Right from their personal tablet or laptop computer, and out to the whole wide world. We are in no way as dependent on the big record companies, movie studios publishing houses as we were just a few years ago. In the same way, the influx of 3D printers is bound to shift production of all kinds of goods from giant factories to the comfort of our homes. And the solar power revolution will take ever more power from the big energy companies, and give each individual the ability to take advantage of the sun fuelled energy present right outside his or her doorstep. Free yourself from the power outlets of your home, and let the Solartab follow you and your smartphone or tablet everywhere – never running out of energy. It’s green, it’s free, and it gives us the freedom we require. SPECIFICATIONS PRODUCTION SCHEDULE We’ll ship in July 2014. Here's our schedule for Solartab delivery: PACKAGE CONTENTS The Solartab comes with: - Beautiful integrated cover - 2.1A wall charger (either US or EU/UK) - USB to micro-USB charging cable - Instruction Manual Does the weather forecast for tomorrow look a bit troubling? Since you can charge the Solartab from a wall outlet using its wall charger, you can make sure the battery is full even before you head out. It doesn’t matter if it’s grey outside: Every single day your iPad is joined by your Solartab, its battery life is already doubled! WE NEED YOUR HELP! We’ve already come a long way with the Solartab, but need your help to make this project become a reality! We still need to: - Buy the injection moulds (to get the beautiful polyurethane cover to look perfect we need to get very expensive moulds made). - Work with our manufacturer to design an IC that is suitable for mass production and will pass standard testing. (Luckily our manufacturer has a lot of experience with this!) - Get the Package Designed. (We want the box the Solartab comes in to be as beautiful as the product itself!) - Get enough volume to make the Solartab a reality. (Any factory will have a minimum volume that needs to be met) We have an excellent relationship with our manufacturer and will be on the ground in China during the entire manufacturing process. 
Our manufacturer can work with a very short lead-time and we have a great sourcing capability for solar cells. We are therefore confident that we can deliver the Solartab by July!
from .Camera import Camera
import aravis
import cv2
# import numpy
import time
import picamera
import picamera.array
from PIL import Image


class GigECamera(Camera):
    """
    Picamera extension to the Camera abstract class.
    """

    def set_camera_settings(self, camera):
        """
        Sets the camera resolution to the max resolution.
        If the config provides camera/height or camera/width, attempts to set the resolution to that.
        If the config provides camera/iso, attempts to set the iso to that.
        If the config provides camera/shutter_speed, attempts to set the shutter speed to that.

        :param picamera.PiCamera camera: picamera camera instance to modify
        """
        try:
            camera.resolution = camera.MAX_RESOLUTION
            if type(self.config) is dict:
                if hasattr(self, "width") and hasattr(self, "height"):
                    camera.resolution = (int(self.width), int(self.height))
                if "width" in self.config and "height" in self.config:
                    camera.resolution = (int(self.config['width']), int(self.config['height']))
                camera.shutter_speed = getattr(self, "shutter_speed", camera.shutter_speed)
                camera.iso = getattr(self, "iso", camera.iso)
            else:
                if self.config.has_option("camera", "width") and self.config.has_option("camera", "height"):
                    camera.resolution = (self.config.getint("camera", "width"), self.config.getint("camera", "height"))
                if self.config.has_option("camera", "shutter_speed"):
                    camera.shutter_speed = self.config.getfloat("camera", "shutter_speed")
                if self.config.has_option("camera", "iso"):
                    camera.iso = self.config.getint("camera", "iso")
        except Exception as e:
            self.logger.error("error setting picamera settings: {}".format(str(e)))

    def capture_image(self, filename: str = None):
        """
        Captures an image using the Raspberry Pi Camera Module, at either max resolution, or the resolution
        specified in the config file.

        Writes images to disk using :func:`encode_write_image`, so it should write out to all supported image
        formats automatically.

        :param filename: image filename without extension
        :return: :func:`numpy.array` if filename not specified, otherwise list of files.
        :rtype: numpy.array
        """
        st = time.time()
        try:
            with picamera.PiCamera() as camera:
                with picamera.array.PiRGBArray(camera) as output:
                    time.sleep(2)  # Camera warm-up time
                    self.set_camera_settings(camera)
                    time.sleep(0.2)
                    # self._image = numpy.empty((camera.resolution[1], camera.resolution[0], 3), dtype=numpy.uint8)
                    camera.capture(output, 'rgb')
                    # self._image = output.array
                    self._image = Image.fromarray(output.array)
                    # self._image = cv2.cvtColor(self._image, cv2.COLOR_BGR2RGB)
            if filename:
                filenames = self.encode_write_image(self._image, filename)
                self.logger.debug("Took {0:.2f}s to capture".format(time.time() - st))
                return filenames
            else:
                self.logger.debug("Took {0:.2f}s to capture".format(time.time() - st))
        except Exception as e:
            self.logger.critical("EPIC FAIL, trying other method. {}".format(str(e)))
            return None
        return None
Evoked potentials and unitary discharges in response to tooth pulp and acoustic click stimuli were recorded from the hippocampus of freely moving rats. The spatial distributions of evoked field responses to tooth pulp stimulation and acoustic clicks were identical. Averaged evoked potentials consisted of a large negative deflection (N1) preceded by a small positive potential (P1). The shortest-latency N1 was recorded from the middle third of the dentate molecular layer and the outer portion of the apical dendrites of CA3 (27 ms). The peak latency of N1 was significantly longer (34 ms) in the stratum radiatum of CA1. Laminar profiles of N1 in the dentate gyrus and CA3 were similar to those evoked by electrical stimulation of the perforant path, and in CA1 similar to the response profile evoked by the Schaffer collaterals. The largest-amplitude P1 was observed above the pyramidal layer of CA1 and the hilus. Both sensory modalities were able to modify the discharge rate of neurons in all hippocampal regions. We conclude that information about sensory stimuli can reach the hippocampus by two distinct pathways: a short-latency inhibitory input via the fimbrial fornix and a longer-latency path via the entorhinal cortex.
from numpy import random

from sequence.components.photon import Photon
from sequence.components.switch import Switch
from sequence.kernel.timeline import Timeline
from sequence.utils.encoding import time_bin

random.seed(0)
FREQ = 1e6


def create_switch(basis_list, photons):
    class Receiver():
        def __init__(self, tl):
            self.timeline = tl
            self.log = []

        def get(self, photon=None):
            self.log.append((self.timeline.now(), photon))

    tl = Timeline()
    sw = Switch("sw", tl)
    r1 = Receiver(tl)
    r2 = Receiver(tl)
    sw.set_detector(r1)
    sw.set_interferometer(r2)
    sw.set_basis_list(basis_list, 0, FREQ)
    tl.init()
    for i, photon in enumerate(photons):
        tl.time = 1e12 / FREQ * i
        sw.get(photons[i])
    tl.time = 0
    tl.run()

    return r1.log, r2.log


def test_Switch_get():
    # z-basis measure |e> and |l>
    photons = [Photon('', encoding_type=time_bin, quantum_state=time_bin["bases"][0][i]) for i in range(2)]
    log1, log2 = create_switch([0] * 2, photons)
    expects = [0, 1e12 / FREQ + time_bin["bin_separation"]]
    for i, log in enumerate(log1):
        time, photon = log
        assert time == expects[i]
        assert photon is None
    assert len(log2) == 0

    # z-basis measure |e+l> and |e-l>
    photons = [Photon('', encoding_type=time_bin, quantum_state=time_bin["bases"][0][random.randint(2)]) for _ in range(2000)]
    log1, log2 = create_switch([0] * 2000, photons)
    assert len(log2) == 0
    counter1 = 0
    counter2 = 0
    for time, photon in log1:
        assert photon is None
        time = time % (1e12 / FREQ)
        if time == 0:
            counter1 += 1
        elif time == time_bin["bin_separation"]:
            counter2 += 1
        else:
            assert False
    assert abs(counter2 / counter1 - 1) < 0.1

    # x-basis: photons should be routed to the interferometer (log2), not the detector (log1)
    photons = [Photon('', encoding_type=time_bin, quantum_state=time_bin["bases"][random.randint(2)][random.randint(2)]) for _ in range(2000)]
    log1, log2 = create_switch([1] * 2000, photons)
    assert len(log1) == 0
    for time, photon in log2:
        time = time % (1e12 / FREQ)
        assert time == 0
        assert photon is not None
<filename>datapack/data/scripts/quests/633_InTheForgottenVillage/__init__.py ### --------------------------------------------------------------------------- ### Create by Skeleton!!! ### --------------------------------------------------------------------------- import sys from com.l2jfrozen.gameserver.model.quest import State from com.l2jfrozen.gameserver.model.quest import QuestState from com.l2jfrozen.gameserver.model.quest.jython import QuestJython as JQuest qn = "633_InTheForgottenVillage" #NPC MINA = 31388 #ITEMS RIB_BONE = 7544 Z_LIVER = 7545 # Mobid : DROP CHANCES DAMOBS = { 21557 : 328,#Bone Snatcher 21558 : 328,#Bone Snatcher 21559 : 337,#Bone Maker 21560 : 337,#Bone Shaper 21563 : 342,#Bone Collector 21564 : 348,#Skull Collector 21565 : 351,#Bone Animator 21566 : 359,#Skull Animator 21567 : 359,#Bone Slayer 21572 : 365,#Bone Sweeper 21574 : 383,#Bone Grinder 21575 : 383,#Bone Grinder 21580 : 385,#Bone Caster 21581 : 395,#Bone Puppeteer 21583 : 397,#Bone Scavenger 21584 : 401 #Bone Scavenger } UNDEADS = { 21553 : 347,#Trampled Man 21554 : 347,#Trampled Man 21561 : 450,#Sacrificed Man 21578 : 501,#Behemoth Zombie 21596 : 359,#Requiem Lord 21597 : 370,#Requiem Behemoth 21598 : 441,#Requiem Behemoth 21599 : 395,#Requiem Priest 21600 : 408,#Requiem Behemoth 21601 : 411 #Requiem Behemoth } class Quest (JQuest): def __init__(self,id,name,descr): JQuest.__init__(self,id,name,descr) def onEvent (self,event,st): htmltext = event if event == "accept" : st.set("cond","1") st.setState(STARTED) st.playSound("ItemSound.quest_accept") htmltext = "31388-04.htm" if event == "quit": st.takeItems(RIB_BONE, -1) st.playSound("ItemSound.quest_finish") htmltext = "31388-10.htm" st.exitQuest(1) elif event == "stay": htmltext = "31388-07.htm" elif event == "reward": if st.getInt("cond") == 2: if st.getQuestItemsCount(RIB_BONE) >= 200: st.takeItems(RIB_BONE, 200) st.giveItems(57, 25000) st.addExpAndSp(305235, 0) st.playSound("ItemSound.quest_finish") st.set("cond","1") htmltext = "31388-08.htm" else : htmltext = "31388-09.htm" return htmltext def onTalk (self,npc,player): htmltext = "<html><body>You are either not carrying out your quest or don't meet the criteria.</body></html>" st = player.getQuestState(qn) if not st: return npcId = npc.getNpcId() if npcId == MINA: id = st.getState() cond = st.getInt("cond") if id == CREATED: if st.getPlayer().getLevel() > 64: htmltext = "31388-01.htm" else: htmltext = "31388-03.htm" st.exitQuest(1) elif cond == 1: htmltext = "31388-06.htm" elif cond == 2: htmltext = "31388-05.htm" return htmltext def onKill(self,npc,player,isPet): npcId = npc.getNpcId() if npcId in UNDEADS.keys(): partyMember = self.getRandomPartyMemberState(player, STARTED) if not partyMember: return st = partyMember.getQuestState(qn) if not st : return if st.getRandom(1000) < UNDEADS[npcId]: st.giveItems(Z_LIVER, 1) st.playSound("ItemSound.quest_itemget") elif npcId in DAMOBS.keys(): partyMember = self.getRandomPartyMember(player, "cond", "1") if not partyMember: return st = partyMember.getQuestState(qn) if not st : return if st.getRandom(1000) < DAMOBS[npcId]: st.giveItems(RIB_BONE, 1) if st.getQuestItemsCount(RIB_BONE) == 200: st.set("cond","2") st.playSound("ItemSound.quest_middle") else: st.playSound("ItemSound.quest_itemget") return QUEST = Quest(633, qn, "In The Forgotten Village") CREATED = State('Start', QUEST) STARTED = State('Started', QUEST) for i in DAMOBS.keys(): QUEST.addKillId(i) for i in UNDEADS.keys(): QUEST.addKillId(i) QUEST.setInitialState(CREATED) QUEST.addStartNpc(MINA) 
QUEST.addTalkId(MINA) STARTED.addQuestDrop(MINA,RIB_BONE,1) STARTED.addQuestDrop(MINA,Z_LIVER,1)
import base64
import json
from zipfile import ZipFile

from firehelper import CommandRegistry
from utils.db import ws_import


def import_zip(path, import_root='IMPORT', overwrite=False, format='JUPYTER'):
    """Import every notebook in a zip archive into the workspace under import_root."""
    with ZipFile(path, 'r') as z:
        for fn in z.namelist():
            with z.open(fn) as f:
                # The workspace import API expects base64-encoded file content.
                content = base64.b64encode(f.read())
                # Archive entries are named like <path>.<language>.<ext>; strip the
                # last two suffixes to build the target workspace path.
                target_path = '/{}{}'.format(import_root, '.'.join(fn.split('.')[:-2]))
                language = fn.split('.')[-2]
                ws_import(target_path, language, format, content, overwrite)
    print('done')


# Expose the importer as the `import zip` command through the CLI registry.
import_commands = {
    'import': {
        'zip': import_zip
    }
}

CommandRegistry.register(import_commands)
An optimal rotator for iterative reconstruction For implementations of iterative reconstruction algorithms that rotate the image matrix, the characteristics of the rotator may affect the reconstruction quality. Desirable qualities for the rotator include: preservation of global and local image counts; accurate count positioning; a uniform and predictable amount of blurring due to the rotation. A new rotation method for iterative reconstruction is proposed which employs Gaussian interpolation. This method was compared to standard rotation techniques and is shown to be superior to standard techniques when measured by these qualities. The computational cost was demonstrated to be only slightly more than bilinear interpolation.
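The abstract above describes the rotator only at a high level. As a rough illustration of what a rotation step based on Gaussian interpolation can look like, here is a minimal sketch in Python; the kernel width sigma, the support radius, and the per-pixel weight normalisation are illustrative assumptions rather than the authors' actual parameters, and a real implementation for iterative reconstruction would also need to verify count preservation and run far faster than this double loop.

import numpy as np

def rotate_gaussian(image, angle_deg, sigma=0.5, radius=2):
    """Rotate a 2D array about its centre using Gaussian-weighted backward
    interpolation (illustrative sketch, not the paper's implementation)."""
    theta = np.deg2rad(angle_deg)
    cos_t, sin_t = np.cos(theta), np.sin(theta)
    ny, nx = image.shape
    cy, cx = (ny - 1) / 2.0, (nx - 1) / 2.0
    out = np.zeros_like(image, dtype=float)
    for iy in range(ny):
        for ix in range(nx):
            # Map the output pixel centre back into the unrotated image.
            dx, dy = ix - cx, iy - cy
            sx = cos_t * dx + sin_t * dy + cx
            sy = -sin_t * dx + cos_t * dy + cy
            jx0, jy0 = int(np.floor(sx)), int(np.floor(sy))
            val, wsum = 0.0, 0.0
            for jy in range(jy0 - radius, jy0 + radius + 1):
                for jx in range(jx0 - radius, jx0 + radius + 1):
                    if 0 <= jy < ny and 0 <= jx < nx:
                        # Gaussian weight of the neighbouring input pixel.
                        w = np.exp(-((jx - sx) ** 2 + (jy - sy) ** 2) / (2.0 * sigma ** 2))
                        val += w * image[jy, jx]
                        wsum += w
            if wsum > 0.0:
                out[iy, ix] = val / wsum
    return out

# Example: rotate a small uniform square by 30 degrees and compare total counts.
# img = np.zeros((64, 64)); img[24:40, 24:40] = 1.0
# rot = rotate_gaussian(img, 30.0)
# print(img.sum(), rot.sum())

Normalising the weights per output pixel keeps local values stable; whether global counts are preserved to the accuracy needed for reconstruction depends on the kernel width and support, which is exactly the kind of property the abstract evaluates.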
PARK JI-SUNG has admitted he is disappointed by Manchester United's 'up and down' season under Louis van Gaal. Park spent seven years at Old Trafford under former manager Sir Alex Ferguson when things were very different for United. The South Korean won four Premier League titles and lifted the Champions League in the 2007/08 season during his time with the club. United are currently fifth and are only four points off rivals City in fourth place with six games left to play. However, Van Gaal's men have a tough run-in, facing high-flying Leicester and West Ham as well as relegation-threatened Norwich.
DOWN'S SYNDROME SCREENING MARKER LEVELS FOLLOWING ASSISTED REPRODUCTION Maternal serum α-fetoprotein (AFP), human chorionic gonadotrophin (hCG), and unconjugated oestriol (uE3) levels were examined in 1632 women who had ovulation induction and 327 who had in vitro fertilization. There was a highly statistically significant increase in hCG and reduction in uE3 among those with ovulation induction. The median levels were respectively 1.09 and 0.92 multiples of the normal gestation-specific median (MOM) based on a total of 34,582 women. Ovulation induction appeared to have no material effect on the median AFP level but this masked a significant increase when treatment was with Clomiphene (1.05 MOM) and a significant decrease when Pergonal was used (0.93 MOM). There was a highly statistically significant reduction in uE3 among women having in vitro fertilization, with a median level of 0.92 MOM. Those fertilized with a donor egg had significantly higher AFP and uE3 levels than when their own egg was used. Our results were confounded by differences in gravidity, but formally allowing for this factor did not materially change the findings. None of the observed effects is great enough to warrant routine adjustment of marker levels to allow for them. Moreover, women with positive Down's syndrome screening results can be reassured that this is unlikely to be due to them having had assisted reproduction.
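For readers unfamiliar with the MOM unit used throughout the abstract, a marker level is reported as a multiple of the median level seen in unaffected pregnancies at the same gestational age. A small sketch of the conversion follows; the numbers are invented for illustration and are not taken from the study.

def to_mom(measured_level, gestation_specific_median):
    """Express a serum marker level as a multiple of the gestation-specific median (MOM)."""
    return measured_level / gestation_specific_median

# Illustrative values only: an hCG of 32.7 IU/ml against a gestation-specific
# median of 30.0 IU/ml gives 32.7 / 30.0 = 1.09 MOM, i.e. 9% above the median.
print(round(to_mom(32.7, 30.0), 2))  # 1.09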
// Validate validates that the received struct is correct func (pdbs *PodDisruptionBudgetSpec) Validate() error { if pdbs.MinAvailable != nil && pdbs.MaxUnavailable != nil { return fmt.Errorf("only one of 'spec.podDisruptionBudget.minAvailable' or 'spec.podDisruptionBudget.maxUnavailable' is allowed") } return nil }
/* Copyright (C) 2020 by Arm Limited. All rights reserved. */

#ifndef GET_EVENT_KEY_H
#define GET_EVENT_KEY_H

using CounterKey = int;

CounterKey getEventKey();

#endif // GET_EVENT_KEY_H
#include "includes.h" #include <sys/ioctl.h> #define LAYER_NAME_MAX 255 /* Display usage and exit */ static void usage(char *pgm, char *name) { if (strcmp(name, "stats") == 0) { fprintf(stderr, "usage: %s %s <mnt> <id> [-c]\n", pgm, name); fprintf(stderr, "\t mnt - mount point\n"); fprintf(stderr, "\t id - layer name\n"); fprintf(stderr, "\t [-c] - clear stats (optional)\n"); fprintf(stderr, "Specify . as id for displaying stats for all layers\n"); } else if (strcmp(name, "syncer") == 0) { fprintf(stderr, "usage: %s %s <mnt> <time>\n", pgm, name); fprintf(stderr, "\t mnt - mount point\n"); fprintf(stderr, "\t time - time in seconds, " "0 to disable (default 1 minute)\n"); } else if (strcmp(name, "pcache") == 0) { fprintf(stderr, "usage: %s %s <mnt> <pcache>\n", pgm, name); fprintf(stderr, "\t mnt - mount point\n"); fprintf(stderr, "\t memory - memory limit in MB (default 512MB)\n"); #ifndef __MUSL__ } else if (strcmp(name, "profile") == 0) { fprintf(stderr, "usage: %s %s <mnt> [enable|disable]\n", pgm, name); fprintf(stderr, "\t mnt - mount point\n"); fprintf(stderr, "\t [enable|disable] - enable/disable profiling\n"); #endif } else if (strcmp(name, "verbose") == 0) { fprintf(stderr, "usage: %s %s <mnt> [enable|disable]\n", pgm, name); fprintf(stderr, "\t mnt - mount point\n"); fprintf(stderr, "\t [enable|disable] - enable/disable verbose mode\n"); } else { fprintf(stderr, "usage: %s %s <mnt>\n", pgm, name); fprintf(stderr, "\t mnt - mount point\n"); } exit(EINVAL); } /* Display (and optionally clear) stats of a layer. * Issue the command from the layer root directory. */ int ioctl_main(char *pgm, int argc, char *argv[]) { char name[LAYER_NAME_MAX + 1], *dir, op; int fd, err, len, value; enum ioctl_cmd cmd; struct stat st; if ((argc != 2) && (argc != 3) && (argc != 4)) { usage(pgm, argv[0]); } if (stat(argv[1], &st)) { perror("stat"); fprintf(stderr, "Make sure %s exists\n", argv[1]); usage(pgm, argv[0]); } len = strlen(argv[1]); dir = alloca(len + strlen(LC_LAYER_ROOT_DIR) + 3); memcpy(dir, argv[1], len); dir[len] = '/'; memcpy(&dir[len + 1], LC_LAYER_ROOT_DIR, strlen(LC_LAYER_ROOT_DIR)); len += strlen(LC_LAYER_ROOT_DIR) + 1; dir[len] = 0; fd = open(dir, O_DIRECTORY); if (fd < 0) { perror("open"); fprintf(stderr, "Make sure %s exists and has permissions\n", dir); exit(errno); } if (strcmp(argv[0], "stats") == 0) { if (argc < 3) { close(fd); usage(pgm, argv[0]); } if ((argc == 4) && strcmp(argv[3], "-c")) { close(fd); usage(pgm, argv[0]); } len = strlen(argv[2]); assert(len < LAYER_NAME_MAX); memcpy(name, argv[2], len); name[len] = 0; cmd = (argc == 3) ? LAYER_STAT : CLEAR_STAT; err = ioctl(fd, _IOW(0, cmd, name), name); } else if (strcmp(argv[0], "flush") == 0) { if (argc != 2) { close(fd); usage(pgm, argv[0]); } err = ioctl(fd, _IO(0, DCACHE_FLUSH), 0); } else if (strcmp(argv[0], "grow") == 0) { if (argc != 2) { close(fd); usage(pgm, argv[0]); } err = ioctl(fd, _IO(0, LCFS_GROW), 0); } else if (strcmp(argv[0], "commit") == 0) { if (argc != 2) { close(fd); usage(pgm, argv[0]); } err = ioctl(fd, _IO(0, LCFS_COMMIT), 0); } else if ((strcmp(argv[0], "verbose") == 0) #ifndef __MUSL__ || (strcmp(argv[0], "profile") == 0) #endif ) { if (argc < 3) { close(fd); usage(pgm, argv[0]); } if (strcmp(argv[2], "enable") == 0) { op = 1; } else if (strcmp(argv[2], "disable") == 0) { op = 0; } else { close(fd); usage(pgm, argv[0]); } cmd = #ifndef __MUSL__ (strcmp(argv[0], "profile") == 0) ? 
LCFS_PROFILE : #endif LCFS_VERBOSE; err = ioctl(fd, _IOW(0, cmd, op), &op); } else { if (argc != 3) { close(fd); usage(pgm, argv[0]); } value = atoll(argv[2]); if (value < 0) { close(fd); usage(pgm, argv[0]); } if (strcmp(argv[0], "syncer") == 0) { err = ioctl(fd, _IOW(0, SYNCER_TIME, int), argv[2]); } else if (value && (strcmp(argv[0], "pcache") == 0)) { err = ioctl(fd, _IOW(0, DCACHE_MEMORY, int), argv[2]); } else { close(fd); usage(pgm, argv[0]); } } if (err) { perror("ioctl"); close(fd); exit(errno); } close(fd); return 0; }
package org.deri.grefine.reconcile.rdf.endpoints; import org.deri.grefine.reconcile.rdf.executors.DumpQueryExecutor; import org.deri.grefine.reconcile.rdf.executors.QueryExecutor; import org.deri.grefine.reconcile.rdf.factories.JenaTextSparqlQueryFactory; import org.deri.grefine.reconcile.rdf.factories.SparqlQueryFactory; import org.apache.jena.rdf.model.Model; public class QueryEndpointFactory { public QueryEndpoint getLarqQueryEndpoint(Model model){ SparqlQueryFactory queryFactory = new JenaTextSparqlQueryFactory(); QueryExecutor queryExecutor = new DumpQueryExecutor(model); return new QueryEndpointImpl(queryFactory, queryExecutor); } }
# Read n (groups required) and m (how many values follow).
n, m = list(map(int, input().split()))
linp = list(map(int, input().split()))

# Tally occurrences of each value (values assumed to lie in 1..100, as implied by the 101-slot tally).
l = [0] * 101
for i in linp:
    l[i] += 1

# Find the largest group size `test` such that the tallied values can be
# split into at least n groups of `test` identical items.
test = 100
while test > 0:
    l2 = [i // test for i in l]
    peop = sum(l2)
    if peop >= n:
        break
    else:
        test -= 1
print(test)
The North Carolina High School Athletic Association released the high school basketball regional pairings Saturday, and the Hendersonville Bearcats (17-9) drew Bishop McGuinness (28-2). The two teams square off at 7 p.m. Friday in Greensboro. That will be the first of two 1-A quarterfinal games Friday night at Fleming Gym on the campus of UNC-Greensboro. The nightcap, scheduled to begin around 8:30 p.m., will pit Cherryville (22-8) against South Stokes (12-15). The winners of Friday's games face off in the 1-A West Regional championship at 4 p.m. Saturday. In Tuesday's 1-A East Regional at J.H. Rose High School in Greenville, Weldon (25-2) takes on Manteo (24-5) at 7 p.m., and East Bladen (24-2) battles East Columbus (24-4) at approximately 8:30 p.m. The winners of those games will battle for the East Regional championship at 1 p.m. Saturday. The West Regional and East Regional champions will face off on March 14 at Reynolds Coliseum at N.C. State for the 1-A state championship.
Some of the pricier seats at the new Yankee Stadium went unused during the team's first home stand, but the Yankees are by far still the most valuable team in baseball, according to Forbes' annual franchise valuations. The Yankees are worth $1.5 billion - the cost of their new ballpark - according to the magazine, which has picked the Yanks as baseball's most valuable club the last 12 years. The Mets, valued at $912 million, are second and the Red Sox ($833 million) are third among the game's 30 teams. The average franchise is worth $482 million. The Yankees' value increased 15% in the last year and the Mets' by 11% because of their new stadiums and cable monies, Forbes said. Ten franchises declined in value according to Forbes, the most since 2004. The economy has hurt franchises with a lot of debt or stadiums in cities with high unemployment, Forbes said. Rounding out the top 10 are the Dodgers ($722 million); Cubs ($700 million); Angels ($509 million); Phillies ($496 million); Cardinals ($486 million); Giants ($471 million) and White Sox ($450 million).
<gh_stars>0 use syn; use options::{OptionsBuilder, OptionsBuilderMode, parse_lit_as_string, parse_lit_as_bool, parse_lit_as_path, FieldMode, StructOptions}; use derive_builder_core::{DeprecationNotes, Bindings}; #[derive(Debug, Clone)] pub struct StructMode { build_fn_name: Option<String>, build_fn_enabled: Option<bool>, build_target_name: String, build_target_generics: syn::Generics, build_target_vis: syn::Visibility, builder_name: Option<String>, builder_vis: Option<syn::Visibility>, derive_traits: Option<Vec<syn::Ident>>, deprecation_notes: DeprecationNotes, validate_fn: Option<syn::Path>, struct_size_hint: usize, } impl OptionsBuilder<StructMode> { pub fn parse(ast: &syn::MacroInput) -> Self { trace!("Parsing struct `{}`.", ast.ident.as_ref()); // Note: Set `build_target_name` _before_ parsing attributes, for better diagnostics! let mut builder = Self::from(StructMode { build_target_name: ast.ident.as_ref().to_string(), build_target_generics: ast.generics.clone(), build_target_vis: ast.vis.clone(), builder_name: None, builder_vis: None, build_fn_enabled: None, build_fn_name: None, derive_traits: None, deprecation_notes: Default::default(), validate_fn: None, struct_size_hint: 0, }); builder.parse_attributes(&ast.attrs); builder } } impl StructMode { impl_setter!{ ident: builder_name, desc: "builder name", map: |x: String| { x }, } impl_setter!{ ident: build_fn_name, desc: "build function name", map: |x: String| { x }, } impl_setter!{ ident: build_fn_enabled, desc: "build function enabled", map: |x: bool| { x }, } impl_setter!{ ident: validate_fn, desc: "validator function path", map: |x: syn::Path| { x }, } impl_setter!{ ident: derive_traits, desc: "derive traits", map: |x: Vec<syn::Ident>| { x }, } #[allow(non_snake_case)] fn parse_build_fn_options_metaItem(&mut self, meta_item: &syn::MetaItem) { trace!("Build Method Options - Parsing MetaItem `{:?}`.", meta_item); match *meta_item { syn::MetaItem::Word(ref ident) => { self.parse_build_fn_options_word(ident) }, syn::MetaItem::NameValue(ref ident, ref lit) => { self.parse_build_fn_options_nameValue(ident, lit) }, syn::MetaItem::List(ref ident, ref nested_attrs) => { self.parse_build_fn_options_list(ident, nested_attrs) } } } #[allow(non_snake_case)] fn parse_build_fn_options_nameValue(&mut self, ident: &syn::Ident, lit: &syn::Lit) { trace!("Build fn Options - Parsing named value `{}` = `{:?}`", ident.as_ref(), lit); match ident.as_ref() { "name" => { self.parse_build_fn_name(lit) }, "skip" => { self.parse_build_fn_skip(lit) }, "validate" => { self.parse_build_fn_validate(lit) }, _ => { panic!("Unknown build_fn option `{}` {}.", ident.as_ref(), self.where_diagnostics()) } } } /// e.g `private` in `#[builder(build_fn(private))]` fn parse_build_fn_options_word(&mut self, ident: &syn::Ident) { trace!("Setter Options - Parsing word `{}`", ident.as_ref()); match ident.as_ref() { "skip" => { self.build_fn_enabled(false); } _ => { panic!("Unknown build_fn option `{}` {}.", ident.as_ref(), self.where_diagnostics()) } }; } #[allow(non_snake_case)] fn parse_build_fn_options_list(&mut self, ident: &syn::Ident, nested: &[syn::NestedMetaItem]) { trace!("Build fn Options - Parsing list `{}({:?})`", ident.as_ref(), nested); match ident.as_ref() { _ => { panic!("Unknown option `{}` {}.", ident.as_ref(), self.where_diagnostics()) } } } fn parse_build_fn_name(&mut self, lit: &syn::Lit) { trace!("Parsing build function name `{:?}`", lit); let value = parse_lit_as_string(lit).unwrap(); self.build_fn_name(value.clone()) } 
#[allow(dead_code,unused_variables)] fn parse_build_fn_skip(&mut self, skip: &syn::Lit) { self.build_fn_enabled(!parse_lit_as_bool(skip).unwrap()); } fn parse_build_fn_validate(&mut self, lit: &syn::Lit) { trace!("Parsing build function validate path `{:?}`", lit); let value = parse_lit_as_path(lit).unwrap(); self.validate_fn(value); } } impl OptionsBuilderMode for StructMode { fn parse_builder_name(&mut self, name: &syn::Lit) { trace!("Parsing builder name `{:?}`", name); let value = parse_lit_as_string(name).unwrap(); self.builder_name(value.clone()); } fn parse_build_fn_options(&mut self, nested: &[syn::NestedMetaItem]) { for x in nested { match *x { syn::NestedMetaItem::MetaItem(ref meta_item) => { self.parse_build_fn_options_metaItem(meta_item); }, syn::NestedMetaItem::Literal(ref lit) => { error!("Expected NestedMetaItem::MetaItem, found `{:?}`.", x); panic!("Could not parse builder option `{:?}` {}.", lit, self.where_diagnostics()); } } } } /// Parse the `derive` list for struct-level builder declarations. fn parse_derive(&mut self, nested: &[syn::NestedMetaItem]) { let mut traits = vec![]; let where_diag = self.where_diagnostics(); for x in nested { match *x { // We don't allow name-value pairs or further nesting here, so // only look for words. syn::NestedMetaItem::MetaItem(syn::MetaItem::Word(ref tr)) => { match tr.as_ref() { "Default" | "Clone" => { self.push_deprecation_note( format!("The `Default` and `Clone` traits are automatically added to all \ builders; explicitly deriving them is unnecessary ({})", where_diag)); }, _ => traits.push(tr.clone()) } } _ => { panic!("The derive(...) option should be a list of traits (at {}).", self.where_diagnostics()) } } } self.derive_traits(traits); } fn push_deprecation_note<T: Into<String>>(&mut self, x: T) -> &mut Self { self.deprecation_notes.push(x.into()); self } /// Provide a diagnostic _where_-clause for panics. fn where_diagnostics(&self) -> String { format!("on struct `{}`", self.build_target_name) } fn struct_mode(&self) -> bool { true } } impl From<OptionsBuilder<StructMode>> for (StructOptions, OptionsBuilder<FieldMode>) { fn from(mut b: OptionsBuilder<StructMode>) -> (StructOptions, OptionsBuilder<FieldMode>) { // Check if field visibility has been expressly set at the struct level. // If not, and if the crate is operating under the old public fields mode, // present a compilation warning. if !cfg!(feature = "private_fields") && b.field_vis.is_none() { let where_diagnostics = b.where_diagnostics(); b.mode.push_deprecation_note(format!( "Builder fields will be private by default starting in the next version. \ (see https://github.com/colin-kiegel/rust-derive-builder/issues/86 for \ more details). To squelch this message and adopt the new behavior now, \ compile `derive_builder` with `--features \"private_fields\"` or add \ `field(<vis>)` to the builder attribute on the struct. 
(Found {})", where_diagnostics)); } #[cfg(feature = "struct_default")] let (field_default_expression, struct_default_expression) = (None, b.default_expression); #[cfg(not(feature = "struct_default"))] let (field_default_expression, struct_default_expression) = (b.default_expression, None); let field_defaults = OptionsBuilder::<FieldMode> { setter_enabled: b.setter_enabled, builder_pattern: b.builder_pattern, setter_name: None, setter_prefix: b.setter_prefix, setter_vis: b.setter_vis, setter_into: b.setter_into, try_setter: b.try_setter, field_vis: b.field_vis, default_expression: field_default_expression, no_std: b.no_std, mode: { let mut mode = FieldMode::default(); mode.use_default_struct = struct_default_expression.is_some(); mode }, }; let m = b.mode; let pattern = b.builder_pattern.unwrap_or_default(); let bindings = Bindings { no_std: b.no_std.unwrap_or(false) }; let struct_options = StructOptions { build_fn_enabled: m.build_fn_enabled.unwrap_or(true), build_fn_name: syn::Ident::new( m.build_fn_name.unwrap_or("build".to_string()) ), builder_ident: syn::Ident::new( m.builder_name.unwrap_or(format!("{}Builder", m.build_target_name)) ), builder_visibility: m.builder_vis.unwrap_or(m.build_target_vis), builder_pattern: pattern, build_target_ident: syn::Ident::new(m.build_target_name), derives: m.derive_traits.unwrap_or_default(), deprecation_notes: m.deprecation_notes, generics: m.build_target_generics, struct_size_hint: m.struct_size_hint, bindings: bindings, default_expression: struct_default_expression, validate_fn: m.validate_fn, }; (struct_options, field_defaults) } }
A massive manhunt is underway in Vancouver for the person police say has sexually assaulted six women on the University of British Columbia campus as they walked home alone in the early hours of the morning. The disclosure Tuesday by the RCMP that they believe a single suspect is responsible for not just three attacks this month but three more – the most recent last Sunday – is deepening the climate of fear on the 400-hectare campus. The situation has prompted the deployment of police bike-patrol officers, dog-services officers and members of the Emergency Response team. Profilers and crime analysts are also on the case, struggling to provide investigators some deeper sense of a suspect described by witnesses and victims as Caucasian, in his mid- to late-20s or early 30s, with a thin build, between 5-foot-8 and 6-foot-2 tall. "These attacks seem to be crimes of opportunity where the suspect is specifically targeting lone females in somewhat secluded areas," RCMP Sergeant Peter Thiessen told a news conference on campus. "In all situations, the women were assaulted while walking on the campus late into the evening or into the early morning hours," said Sgt. Thiessen. He said he doesn't recall "a similar set of circumstances on a campus or at an educational facility in this province." Elsewhere in Canada, university administrators contacted Tuesday by The Globe and Mail said the alleged serial nature of the crimes was unusual. The three attacks had prompted various university measures, including boosted security patrols, increased campus lighting and the distribution of safety whistles, but officials said they are going further. As of Tuesday, the school posted security officers at each of six key residence complexes on campus, launched a volunteer service to provide escorts for students in residence, and offered counselling. "This is a stressful time for many people on our campus and in this area of the city," Louise Cowin, a university vice-president sitting alongside Sgt. Thiessen, told the news conference. "This latest news will add to the anxiety. That fear is understandable but it is also critical to act and act decisively." She added: "This is not a time to give in to fear." On Tuesday, police linked attacks in April and May to the previous three, then added an incident that occurred at about 1:30 a.m. Sunday. An attacker grabbed a young woman from behind, police said, but she managed to scare him off by flailing her arms. Police said it took a few days to disclose the latest attack for investigative reasons. In one October attack, a man rushed out of a wooded area and tried to drag a 17-year-old into the trees, punching her in the face before she managed to break free. While Sgt. Thiessen said police are confident about tracking down the suspect, he expressed another worry. "In this type of investigation, escalation is always a concern," he said. Tips are key, he added. "Somebody knows something about who this individual may possibly be," he said. "All we need is that little piece of information that will point us in a direction of a particular individual."
Nearby, arts students Kristin Weaver and Hana Decolonogon were checking a laptop awaiting a university advisory on the situation. "I hope that they catch this guy soon," said Ms. Weaver, who lives on campus. "I think people are shocked that this number of things are increasing so quickly." Ms. Decolongon said the situation has had an impact. "I don't think it's an unsafe environment," she said, "but it definitely makes you feel uneasy." With a report from James Bradshaw in Toronto
<gh_stars>1-10 #!/usr/bin/env python # encoding: utf-8 from __future__ import absolute_import, unicode_literals from . import Entry from . import Journal from . import time as jrnl_time from . import __title__ # 'jrnl' from . import __version__ import os import re from datetime import datetime import time import fnmatch import plistlib import pytz import uuid import tzlocal from xml.parsers.expat import ExpatError import socket import platform class DayOne(Journal.Journal): """A special Journal handling DayOne files""" # InvalidFileException was added to plistlib in Python3.4 PLIST_EXCEPTIONS = (ExpatError, plistlib.InvalidFileException) if hasattr(plistlib, "InvalidFileException") else ExpatError def __init__(self, **kwargs): self.entries = [] self._deleted_entries = [] super(DayOne, self).__init__(**kwargs) def open(self): filenames = [os.path.join(self.config['journal'], "entries", f) for f in os.listdir(os.path.join(self.config['journal'], "entries"))] filenames = [] for root, dirnames, f in os.walk(self.config['journal']): for filename in fnmatch.filter(f, '*.doentry'): filenames.append(os.path.join(root, filename)) self.entries = [] for filename in filenames: with open(filename, 'rb') as plist_entry: try: dict_entry = plistlib.readPlist(plist_entry) except self.PLIST_EXCEPTIONS: pass else: try: timezone = pytz.timezone(dict_entry['Time Zone']) except (KeyError, pytz.exceptions.UnknownTimeZoneError): timezone = tzlocal.get_localzone() date = dict_entry['Creation Date'] try: date = date + timezone.utcoffset(date, is_dst=False) except TypeError: # if the system timezone is set to UTC, # pytz.timezone.utcoffset() breaks when given the # arg `is_dst` pass entry = Entry.Entry(self, date, text=dict_entry['Entry Text'], starred=dict_entry["Starred"]) entry.uuid = dict_entry["UUID"] entry._tags = [self.config['tagsymbols'][0] + tag.lower() for tag in dict_entry.get("Tags", [])] """Extended DayOne attributes""" try: entry.creator_device_agent = dict_entry['Creator']['Device Agent'] except: pass try: entry.creator_generation_date = dict_entry['Creator']['Generation Date'] except: entry.creator_generation_date = date try: entry.creator_host_name = dict_entry['Creator']['Host Name'] except: pass try: entry.creator_os_agent = dict_entry['Creator']['OS Agent'] except: pass try: entry.creator_software_agent = dict_entry['Creator']['Software Agent'] except: pass try: entry.location = dict_entry['Location'] except: pass try: entry.weather = dict_entry['Weather'] except: pass self.entries.append(entry) self.sort() return self def write(self): """Writes only the entries that have been modified into plist files.""" for entry in self.entries: if entry.modified: utc_time = datetime.utcfromtimestamp(time.mktime(entry.date.timetuple())) if not hasattr(entry, "uuid"): entry.uuid = uuid.uuid1().hex if not hasattr(entry, "creator_device_agent"): entry.creator_device_agent = '' # iPhone/iPhone5,3 if not hasattr(entry, "creator_generation_date"): entry.creator_generation_date = utc_time if not hasattr(entry, "creator_host_name"): entry.creator_host_name = socket.gethostname() if not hasattr(entry, "creator_os_agent"): entry.creator_os_agent = '{}/{}'.format(platform.system(), platform.release()) if not hasattr(entry, "creator_software_agent"): entry.creator_software_agent = '{}/{}'.format(__title__, __version__) filename = os.path.join(self.config['journal'], "entries", entry.uuid.upper() + ".doentry") entry_plist = { 'Creation Date': utc_time, 'Starred': entry.starred if hasattr(entry, 'starred') else False, 
'Entry Text': entry.title + "\n" + entry.body, 'Time Zone': str(tzlocal.get_localzone()), 'UUID': entry.uuid.upper(), 'Tags': [tag.strip(self.config['tagsymbols']).replace("_", " ") for tag in entry.tags], 'Creator': {'Device Agent': entry.creator_device_agent, 'Generation Date': entry.creator_generation_date, 'Host Name': entry.creator_host_name, 'OS Agent': entry.creator_os_agent, 'Software Agent': entry.creator_software_agent} } if hasattr(entry, 'location'): entry_plist['Location'] = entry.location if hasattr(entry, 'weather'): entry_plist['Weather'] = entry.weather plistlib.writePlist(entry_plist, filename) for entry in self._deleted_entries: filename = os.path.join(self.config['journal'], "entries", entry.uuid + ".doentry") os.remove(filename) def editable_str(self): """Turns the journal into a string of entries that can be edited manually and later be parsed with eslf.parse_editable_str.""" return "\n".join(["# {0}\n{1}".format(e.uuid, e.__unicode__()) for e in self.entries]) def parse_editable_str(self, edited): """Parses the output of self.editable_str and updates its entries.""" # Method: create a new list of entries from the edited text, then match # UUIDs of the new entries against self.entries, updating the entries # if the edited entries differ, and deleting entries from self.entries # if they don't show up in the edited entries anymore. # Initialise our current entry entries = [] current_entry = None for line in edited.splitlines(): # try to parse line as UUID => new entry begins line = line.rstrip() m = re.match("# *([a-f0-9]+) *$", line.lower()) if m: if current_entry: entries.append(current_entry) current_entry = Entry.Entry(self) current_entry.modified = False current_entry.uuid = m.group(1).lower() else: date_blob_re = re.compile("^\[[^\\]]+\] ") date_blob = date_blob_re.findall(line) if date_blob: date_blob = date_blob[0] new_date = jrnl_time.parse(date_blob.strip(" []")) if line.endswith("*"): current_entry.starred = True line = line[:-1] current_entry.title = line[len(date_blob) - 1:].strip() current_entry.date = new_date elif current_entry: current_entry.body += line + "\n" # Append last entry if current_entry: entries.append(current_entry) # Now, update our current entries if they changed for entry in entries: entry._parse_text() matched_entries = [e for e in self.entries if e.uuid.lower() == entry.uuid.lower()] # tags in entry body if matched_entries: # This entry is an existing entry match = matched_entries[0] # merge existing tags with tags pulled from the entry body entry.tags = list(set(entry.tags + match.tags)) # extended Dayone metadata if hasattr(match, "creator_device_agent"): entry.creator_device_agent = match.creator_device_agent if hasattr(match, "creator_generation_date"): entry.creator_generation_date = match.creator_generation_date if hasattr(match, "creator_host_name"): entry.creator_host_name = match.creator_host_name if hasattr(match, "creator_os_agent"): entry.creator_os_agent = match.creator_os_agent if hasattr(match, "creator_software_agent"): entry.creator_software_agent = match.creator_software_agent if hasattr(match, 'location'): entry.location = match.location if hasattr(match, 'weather'): entry.weather = match.weather if match != entry: self.entries.remove(match) entry.modified = True self.entries.append(entry) else: # This entry seems to be new... save it. 
entry.modified = True self.entries.append(entry) # Remove deleted entries edited_uuids = [e.uuid for e in entries] self._deleted_entries = [e for e in self.entries if e.uuid not in edited_uuids] self.entries[:] = [e for e in self.entries if e.uuid in edited_uuids] return entries
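For reference, parse_editable_str expects the same layout that editable_str produces: each entry starts with a "# <uuid>" header line, followed by a "[date] title" line (a trailing "*" marks the entry as starred) and then any number of body lines. A minimal sketch of feeding such text back into a journal follows; the UUID, timestamp, wording and journal path are all invented for illustration, and the constructor arguments are an assumption about how the parent Journal class builds its config.

# Illustrative only: the UUID, timestamp, text and path below are invented.
edited = (
    "# 5d9c7f0a2b334c6e9a1f0b2c3d4e5f6a\n"
    "[2015-03-02 09:30] Morning pages *\n"
    "Slept well, took a long walk before work.\n"
)

# Assuming `journal` is an already-opened DayOne instance, for example:
#   journal = DayOne(journal="/path/to/Journal.dayone", tagsymbols="@").open()
# parsing the edited text updates, adds or removes entries so the journal
# matches it; write() would then persist the changes as .doentry plist files:
#   journal.parse_editable_str(edited)
#   journal.write()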
Viral Modulation of TLRs and Cytokines and the Related Immunotherapies for HPV-Associated Cancers The modulation of the host innate immune system is a well-established carcinogenesis feature of several tumors, including human papillomavirus- (HPV-) related cancers. This virus is able to interrupt the initial events of the immune response, including the expression of Toll-like receptors (TLRs), cytokines, and inflammation. Both TLRs and cytokines play a central role in HPV recognition, cell maturation and differentiation as well as immune signalling. Therefore, the imbalance of this sensitive control of the immune response is a key factor for developing immunotherapies, which strengthen the host immune system to accomplish an efficient defence against HPV and HPV-infected cells. Based on this, the review is aimed at exposing the HPV immune evasion mechanisms involving TLRs and cytokines and at discussing existing and potential immunotherapeutic TLR- and cytokine-related tools. Introduction The evasion of immune tumor surveillance is a wellestablished feature of cancer. In HPV-related tumors, papillomavirus is responsible for this escape (Figure 1). This virus is able to abrogate the initial steps of the immune innate system, which embraces Toll-like receptor signalling as well as cytokine synthesis and secretion, thus, compromising the immune response against an invasive agent. TLRs and cytokines play pivotal roles in the immune defence against HPV-infected and tumor cells. TLRs, for example, are responsible for recognizing the conserved pathogen-associated molecular patterns (PAMPs), promoting changes on host endogenous ligands and thus, initiating a protein cascade that follows in the expression of key molecules for the development of the immune response, which includes the synthesis and secretion of cytokines. Cytokines, in turn, are important mediators of immune cell activities, such as cell recruitment, maturation, and signalling. In addition, both molecules (TLRs and cytokines) control gene expression and are essential for creating a suitable tumor microenvironment, either for immune surveillance or for immune modulation. As a result, these molecules are involved in the pathogenesis of various diseases besides cancer, such as autoimmune, inflammatory, and infectious diseases. Therefore, both TLRs and cytokines are crucial targets for immunotherapeutic studies that are aimed at preventing or treating cancer. In fact, they have already been used in pharmaceutical formulations for cancer therapies by taking advantage of the fact that currently there is no effective treatment for HPV-related cancer patients, especially for those who have unsuccessfully undergone radio-and chemotherapy treatments. Several major studies have reported the great potential value of immunotherapeutic approaches that modulated TLR and cytokine levels or synthesis. In accordance with these studies, this review highlights the immune mechanisms of TLRs and cytokines for infection resolution and viral immune evasion activities correlated with HPV-associated cancers. Furthermore, we will discuss the effectiveness of the immunotherapeutic approaches involving these targets. The molecular structure of these receptors is comprised of two domains: the extracellular N-terminal and the intracellular C-terminal domains. 
The extracellular domain contains leucine-rich repeats responsible for recognition of PAMPs, depending on the TLR subtype; the C-terminal portion has a conserved region called Toll/IL-1 receptor (TIR) domain, which is responsible for transducing the signal for adapter molecules. Usually, TLRs are associated with the protection against pathogen invasion, carcinogenesis, and infection clearance, which are essential in inducing and linking innate and adaptive responses, such as the Th1 and the cytotoxic cellmediated subtypes. In addition, TLRs are able to recognize some host endogenous ligands (see Figure 2), representing an important role in tissue repair and homeostasis. TLRs support the uptake, processing, and presentation of antigens by APCs, boost DC maturation, NK cell cytotoxicity, and targeted cell apoptosis as well as upregulate the expression of major histocompatibility complex (MHC), C-C chemokine receptor 7 (CCR7), interferons (e.g., IFN-I, IFN-), and inflammatory cytokines (e.g., IL-6, IL-12). Nevertheless, several TLRs were reported to be overexpressed in cancer. They were associated with malignant transformation by preventing the activation of immune responses or enhancing inflammation through the induction of the nuclear transcription factor kappa B (NF-B) pathway. Therefore, TLRs seem to have dual functions in the tumor microenvironment, to the extent that even cancer cells may express those molecules in order to alter immune response and sustain malignant progression. Indeed, recently, TLRs were showed to activate the nitric oxide signalling pathway, supporting cervical carcinogenesis. The expression of both TLR4 and 9 receptors was reported to be altered during HPV infection. In cervical carcinogenesis studies, TLR4 and 9 levels were reported to be crucial for the initiation of the innate immune response due to the induction of cytokine synthesis and cytotoxicity on target cells. As a consequence, lower levels are generally associated with a poor prognosis and cancer progression. However, these receptors have also been correlated with malignancy. TLR4 was found to be overexpressed in human cervical cancer line (HeLa) as well suppresses IL-12 expression, leading to a decreased production of IFNs (type I and IFN-) and blocking macrophages (M1 phenotype) and dendritic cell (DC) activity; promotes downregulation of HLA expression and transportation to membrane surface, preventing antigen presentation, T cell activation, and NK and CTL cytotoxicity; and finally, the downregulation of lymphocyte activity related to a decreased activity of APC cells (as M1 and DC) impairs adaptive immune activation. as in premalignant and malignant specimens. It was also associated with the proliferation of cancer cells and immunosuppression, through the production of IL-6, TGF-, and other immunomodulatory cytokines. TLR9 was also shown to be overexpressed in high-grade cervical lesions and/or cancer [3,13,. 
It is possible that the upregulation of these receptors in malignant lesions may be due to the following components: (i) compensation for TLR deficiency or by harnessing mechanisms of the host immune defence system against tumor cells, (ii) reflecting increased inflammation, which damages an effective immune response against the pathogen, (iii) the existence of polymorphisms and the measurement of different subtypes, which misleads data interpretation, (iv) the measurements at different intervals during cancer progression, (v) the differences in methods of TLR assessment in cell lines, or (vi) the activity of tumor cells that create a proper milieu for cancer development. Modulation of TLR9 levels probably occurs due to HPV16 E7 oncoprotein activity on the TLR9 promoter by interference in the NF-B pathway. A chromatin repressive complex was also found on the TLR9 promoter, negatively regulating its transcription ( Figure 3). A repression of the TLR9 expression disturbs the synthesis of IL-6, IL-8, and C-C chemokine ligand 20/macrophage inflammatory protein-3 (CCL20/MIP-3) (which is an important chemokine for immune surveillance because of the recruitment of lymphocytes and Langerhans cells (LCs) to skin). The viral transcription and replication processes also undergo intervention, due to interferon deficiency caused by TLR9 repression. Moreover, the altered levels of TLR9 can be caused by the devious expression of specific polymorphisms that change the effective receptor availability. Other TLR expressions were also notably altered. In SCC (squamous cell carcinoma) specimens, it was demonstrated that the TLRs 3, 4, and 5 were significantly underexpressed while TLR1 was the only significantly overexpressed (TLRs 2, 7, 8, and 9 were not significant) when compared to the normal samples. However, opposite results were also observed. No study has clearly demonstrated that altered TLR levels are a response of the host immune system against the infection or a consequence of virus activity supporting the infection. In addition, the knowledge of which cell is responsible for the alteration of TLR levels would be crucial to understand the TLRs' role in carcinogenesis; immune, stromal, and cancer cells have different functions in cancer development and thus, the shift of their TLR expression pattern could represent a marker for cancer progression or resolution. Besides, another study found similar results and showed that the mRNA expression of TLR7 and 8 in cervical biopsies of cervical cancer patients was elevated. In turn, infection regression (HPV16) was also associated with an increased expression of several TLRs (3, 7, 8, and 9), and their modulation could be used as a therapeutic approach. Regarding other HPV-related cancers, TLRs were not as extensively studied like in cervical cancer. HPV is a key etiological factor of head and neck squamous cell carcinoma (HNSCC), in particular of oropharyngeal squamous cell carcinoma (OPSCC). HNSCC is commonly recognized as an immunosuppressive disease due to HPV activity. Consequently, an imbalanced cytokine profile, low amounts of CD3 + CD4 + and CD8 + T lymphocytes, high infiltration of M2 macrophages (TAM), Treg cells and Treg/CD8 + T cell ratio, impaired NK cell activity (higher expression levels of KIR genes), augmented expression of inhibitory receptors (e.g., CTLA-4, LAG-3, TIM-3, and PD-1), and decreased antigen presentation have been observed. 
Thereafter, the expression levels of several TLRs were reported and its dubious role was evident in some cases. It was found that TLR2 was upregulated in both vicinity Figure 2: TLR activation. The scheme shows major microbial and endogenous ligands that activate TLR signals in immune cell surface. These signals are able to promote host protection against pathogen invasion and infection establishment. immune cells and malignant keratinocytes in the microenvironment of oral squamous cell carcinoma (OSCC) compared to hyperplasia. In another OSCC study, TLR3 was found to be upregulated in three head and neck cell lines (mRNA and protein), as well as when it induced apoptosis of the tumor cells, and in vivo, when it interrupted tumor growth. Also in this study, the mRNA expression of other TLRs was assessed observing elevated expressions of TLRs 1, 2, 4, and 6 while the TLRs 5, 7, 8, and 10 were found reduced. In this type of carcinoma, however, TLR3 was found to have elevated levels and this event was attributed to an increase of tumor aggressiveness and invasion, a similar conclusion of another study which measured TLR3 in various HNSCC cell lines. Both studies associated the triggering of the NF-B pathway with the increase in tumor aggressiveness. In the mentioned studies above, it was not verified whether HPV was present or not. TLR4 and 9 were also assessed in OSCC, and these receptors were found to have elevated expression levels which were correlated with tumor development. Whether the increased levels of these receptors were associated with carcinogenesis or a reaction of the host immune response against tumor cells is still not known. TLR4 was also correlated with tumor development and protection of cancer cells from host immune response in an HNSCC study. Several other studies have also reported controversial results regarding OSCC and HNSCC, though it has been observed that different cell lines were used several times for the same purpose, which could explain the opposite results in some cases. TLR7 was found overexpressed in the nuclear membrane and nuclei of cancer cells (a novel localization discovered) having higher levels in HPV-positive specimens, unlike TLR9 that showed reduced expression levels in HPVpositive samples compared to HPV-negative. TLR7 also showed altered localization depending on the cancer cell status: it was found with elevated levels in the plasma membrane of cancer-free cells compared to cancer cells, where this receptor was observed to have higher levels in the nuclear membrane and nuclei. Therefore, it seems that TLR7 may play different roles depending on the cancer cell status. Furthermore, increased levels in TLR7 demonstrated to be statistically significant for the direct correlation with p16. Thus, the increased levels of TLR7 could be indirectly related to E7 expression since the elevated levels of p16 are caused by the deregulation of pRb pathway, initially caused by E7 oncogenic activity, which probably also altered TLR9 levels. The other TLRs did not show any significant differences between cancer and control groups. Figure 3: Inhibition of TLR9 expression by HPV16 E7 oncogene takes place via NF-B canonical pathway, when this oncoprotein recruits the inhibitory complex NF-B p50-p65 to a new cis element at the TLR9 promoter. This occurs with the additional binding of ER (estrogen ) to another neighbour cis element, ERE (estrogen-responsive element), within that same promoter, and in the presence of HPV16 E7. 
ER is also able to interact with the p65 subunit in the peri-or intranuclear region and contribute to transcription repression. Furthermore, it was also observed that there was a chromatin repressive complex composed by JARID1B demethylase and by HDAC1 deacetylase. These two catalytic units interact with ER and negatively regulate TLR9 expression. The consequence of preventing TLR9 expression is the establishment of an immunosuppressive status with the inhibition of interferon and immune surveillance by cytokine responses. NF-B blue circle corresponds to p50 subunit and the purple one to p65. The straight blue arrows indicate an activation process or progress to the next stage; the curved arrows indicate motion; and the progressive arrows indicate the movement of some molecules interacting with the target. IKK: inhibitor of kappa B kinase; P: phosphate group; M: methyl group; A: adenyl group; JARID1B: lysine-specific demethylase 5B; HDAC1: histone deacetylase 1; Site B: 9-10 base pair DNA sites where p50 and p65 subunits bind. Based on what has been discussed above, the modulation of TLR expression or activities has been considered in the treatment of cervical and head and neck carcinomas as adjuvants in various vaccine strategies. The goal is to increase the synthesis of cytokines (e.g., IFN, TNF-) and chemokines, generally by activating NK and dendritic cells, so as to activate CTL cells for generation of effector responses. For a more efficient outcome, TLR agonists can be used in combination with non-TLR agonists, as demonstrated in a C57BL/6 mice tumor model in which HPV16 E7 vaccine was coadministrated with monophosphoryl lipid A (MPL), a TLR4 agonist, and -galactosylceramide. Also, TLR activities can be blocked aiming to hamper inflammation by preventing TLR downstream activation signalling, such as the MyD88-NF-B pathway. Another interesting view to be highlighted is the importance of the modulation of TLR expression in stromal cells. It was found that these receptors were upregulated in these cells and may contribute to carcinogenesis. Accordingly, several pharmacologic substances, which modify the activity of these receptors, have been tested/used first as adjuvants in vaccines for cervical cancer (e.g., Cervarix) and later in other HPV-related cancers. In the Cervarix vaccine for example, MPL is used to activate the innate response by interferon and proinflammatory cytokine synthesis. As a consequence, the adaptive response is also induced. Other examples are CpG (TLR9 agonist) alone or with rlipo (TLR2 agonist), imiquimod and resiquimod (both TLR7 agonists), poly(I:C) (TLR3 agonist), and VTX-2337 (TLR8 agonist). Many studies have shown satisfactory results, especially with the simultaneous use of these agonists. Generally, an increased rate of tumor cell death upon TLR stimulation was observed. Also, very high percentages of curing preexisting tumors in mice were reported. In HPV-related cancers, the use of these adjuvants has also brought better outcomes. Imiquimod is able to induce Th1 responses by stimulating DC maturation and migration, Langerhans cell migration to draining lymph nodes, the inhibition of myeloid-derived suppressor cells, and the secretion of interleukins and cytokines. In another study, imiquimod and poly(I:C) effects on cell death in in vitro and in vivo HNSCC models were evaluated. Both agonists were found to cause an increase in tumor cell death in vitro and in vivo. 
In addition, poly(I:C) induced higher levels of proinflammatory cytokine secretion (IFN-I, IL-6, and CXCL-10), MHC I expression of tumor cells, and monocyte activation in vitro, impairing tumor growth in vitro and in vivo even when TLR signalling was hampered on host cells. Another study is currently recruiting participants to evaluate polyICLC (a modified version of poly(I:C)) combined with tremelimumab, a CTLA-4 antibody, and durvalumab, a PD-L1 antibody (clinicaltrials.gov identifier NCT02643303). Another example is motolimod (VTX-2337), a TLR8 agonist which was tested for the treatment of HNSCC and other tumors. It was seen that this substance caused an increase in antitumor activities by inducing cytokine and chemokine synthesis as well as activating monocytes, NK, and dendritic cells, consequently boosting T cell activation. In a study using the TLR8 agonist with an anti-EGFR monoclonal antibody (cetuximab), an increase in NK cell-mediated cancer cell lysis and an enhancement of DC cross-priming of EGFR-specific CD8 + T cells were observed. Other possibilities are currently being tested for VTX-2337, such as combining it with cisplatin or carboplatin/fluorouracil/cetuximab (NCT01836029) and with cetuximab or cetuximab and nivolumab, an anti-PD-1 antibody (NCT02124850). In regard to TLR4, OK-432 (picibanil, approved in Japan) was tested in association with DCs and chemotherapy, and the results still have not been reported (NCT01149902). Other studies have also been or are currently being conducted with other TLRs: ISA201 (HESPECTA, a second generation vaccine based on ISA101), which uses a synthetic TLR2 agonist (Amplivant) with two HPV16 E6 peptides (NCT02821494) ; EMD 1201081 (IMO2055, TLR9 agonist) + cetuximab (NCT01040832), which did not demonstrate additional clinical efficacy than using cetuximab alone but it was well tolerated by patients ; EMD 1201081 + fluorouracil + cisplatin + cetuximab (NCT01360827); and entolimod (CBLB502), a TLR5 agonist which is being tested in combination with cisplatin and radiation (NCT01728480) due to the previously shown effects on radiation: it induced an increase in its therapeutic effect and a reduction of its toxicity in vivo when administered 1 h after exposure to radiation. Table 1 summarizes the mentioned studies and is correlated with the Figure 4 which shows the therapeutic role and the activated signalling pathways of natural and synthetic TLR ligands. Cytokines Cytokines are the primordial mediators of the immune response, including antitumor activities. It has been reported that in cervical cancer as well as in HNSCC, immunostimulatory signals (Th1 cytokine profile) are hampered whereas proinflammatory and immunosuppressor ones (Th2 cytokine profile) are stimulated. Several studies have reported this shift of the cytokine pattern in preneoplastic and cancer specimens. It is known that Th1 cytokines are potent activators of cellular-mediated immunity response that may precede HPV clearance, while Th2 cytokines impair the immune response, leading to an inefficient virus elimination and to chronic infection. Furthermore, different hrHPV genotypes are associated with different cytokine profiles, so they interfere distinctly with the immune system, making disease progression different among the various hrHPV subtype infections. Therefore, the modulation of cytokine expression is a key event for the induction of chronic infection and cancer development. 
It is known that the appropriate cytokine pattern defines the appropriate phenotype of immune cells, which whether or not results in the elimination of infected and (pre)cancerous cells. Thus, cytokines were widely used in cancer immunotherapy, including cervical cancer and HNSCC. The main goal of their use was to induce a CTL response supporting the cancer cell apoptosis and tumor regression. They can be used in combination with several immunotherapy approaches such as DNA, DC-based and protein-based vaccines, TLR agonists, and monoclonal antibodies (e.g., cetuximab and the immune checkpoint inhibitors like tremelimumab and durvalumab). In the next subheadings, the most important cytokines for the treatment of HPV-related cancers are discussed according to their roles in the immune response. 3.1. Immunostimulatory Cytokines. Among the immunostimulatory cytokines, IL-2, IL-12, TNF-, and interferons are the most prominent in anti-infection and antitumor activity. IL-12 is secreted by activated DCs and macrophages and is the most effective and promising cytokine for cervical cancer treatment. Several antitumor activities in animal models have been observed, such as the increase in IFN- and TNF- levels, maturation of APCs and the lysis of immature ones, and the activation of NK responses (caused by the upregulation of NK activation receptors and ligands). Consequently, IL-12 stimulates Th1 polarization and CTL (antigen-specific response) cytotoxicity and plays an important antiangiogenic role. As a result, IL-12 has been suggested to be used in several HPV-related cancer treatment strategies for potentiation of antitumor activity, for instance, in viral gene therapy, coadministrated with other cytokines or in DNA vaccine preparations, such as the INO-3112 vaccine. This promising strategy, which combines the gene sequences of E6 + E7 antigens (VGX-3100) and the IL-12 (INO-9012), has been tested for treatment of both cervical invasive (NCT02172911) and head and neck (NCT02163057) HPVrelated cancers in clinical phase I/II trials with good results about safety and CD8 + T cell immunogenicity. Another ongoing study evaluated the immunotherapeutic effects of the coadministration of the recombinant IL-12 with cetuximab (NCT01468896) in patients with relapse, metastatic, or inoperable HNSCC. In the previous study regarding this combined treatment, an improvement in the lysis of tumor cell by NK cells was observed. In another approach using IL-12, a higher lymphocyte infiltrate and an improved overall survival rate were observed when IL-12 was coadministrated with IL-1, IFN-, and TNF-. TNF- is another key cytokine which creates an antitumoral milieu for virus elimination. This molecule supports the activation of macrophages, dendritic and NK cells and recruits them to the tumor site by inducing keratinocytes to release MIP-3 and CCL2/MCP-1 (C-C motif chemokine ligand 2/monocyte chemoattractant protein-1), which is reduced in high-grade cervical lesions and in E6/E7expressing cells. Moreover, it inhibits HPV E6 and E7 oncogene transcription and proliferation in HPVtransformed keratinocytes in vitro and causes apoptosis of cervical cancer cell lineage. This cytokine is associated with lesion regression; its reduced level in serum and its presence in cervical cancer are correlated with tumor growth. Increased levels of TNF- were also associated with CIN2/3 lesions, but also with an exacerbated inflammatory response. 
A precise level of inflammatory response, which includes the precise secretion level of several cytokines including TNF-, is the threshold between the occurrence and regression of cellular transformation. A sufficient level of this response at the infection onset is a mark of a valid immune response, but when the inflammatory response becomes excessive and persistent, it favours an appropriate milieu for infection development. Along with TNF-, interferon belongs to the group of crucial cytokines which creates an antiviral state, activating cell-mediated immunity which is potentiated in the presence of TNF- (see Table 2 for IFN activities). Interferon is classified in type I (IFN- and -), type II (IFN-), and type III (IFN-). Similar to TNF-, the amount of interferon arises mainly from NK cells, but T cells also synthesize these cytokines, both cytokines are upregulated in therapeutic approaches which cause infection regression. With regard to this, it was suggested that the upregulation of IFN would be a good hallmark for HPV16 infection clearance. This event is generally associated with the expansion of NK cell cytotoxicity and the expression of IL-12 and TNF- in infiltrated proinflammatory lymphocytes. The increased levels of IFN restore immune function and induce CD4 + and CD8 + responses, which led to a complete regression of the disease in a half of the patients who had undergone treatment. Several reports have shown the inhibition of type I IFN expression or their transduction pathways by HPV16 and 18 oncoproteins (E6 and E7). In this way, key genes involved in the immune surveillance and cytotoxic response are blocked. Diminished IFN- synthesis has also been associated with persistent infection and the level of malignancy of cervical lesions as well as to the development of HPV-related cancer. In HNSCC, for example, the reduced levels of this cytokine are able to modify STAT1/STAT3 balance that blocks antigen presentation and DC maturation. boosting the overexpression of adhesion molecules; (iii) antigen presentation and synthesis of IL-12 by activation of DC; and (iv) more sensitivity to granule activity and death signalling. In addition, type III IFN (IFN-) has also been demonstrated to play an important role in immune response against HPV infection. Alteration of this interferon was reported in hrHPV cell infection status, and it was suggested that it may support immune surveillance against HPV infection and cervical carcinogenesis. IFN- shows similar activities of type I IFNs but only specific cells respond to this type of IFNs like epithelial cells. For all these activities and owing to the oncoprotein interference with these molecules, interferons are a class of cytokines with a great potential value when used in the development of more efficient approaches for HPVrelated cancer therapy. Activities Prevents several tumor cell lines growth Promotes antiviral and antitumor responses In the treatment of viral and neoplastic diseases it has been tested with type I IFNs showing synergic effects and reduced side effects HPV interferences None reported Another cytokine demonstrating to have strong antitumor activity is IL-2, which is produced primarily by CD4 + cells after antigen priming. It plays several important roles, including the activation and maturation of DC, stimulation of NK cell cytotoxicity, the expansion of CD8 + and CD4 + cells, and the polarization for the Th1 cell profile. 
Several studies have attributed the reduced levels of this cytokine to the lesion progression or cancer in HPV infection. Its reduced levels were suggested to be a viral evasion mechanism, which is frequently associated with increased levels of IL-10, TGF-, and Treg cells. Due to its strong immunoprotective activity, IL-2 has been used in several immunotherapeutic approaches, like in association with TG4001 HPV vaccine, which is constituted of a recombinant Ankara vaccinia virus expressing HPV16 E6 and E7. The disadvantages of IL-2 administration are systemic toxicity and the induction of Treg cell proliferation. For example, in HNSCC, the administration of IL-2 and IFN-2a demonstrated high toxicity. Th17 Cells and Proinflammatory Cytokines. The inflammation process is an important feature of any cancer. In regard to HPV-associated cancer, a deregulated proinflammatory network is induced by HPV inducing a favourable milieu for tumor development. Th17 cell (CD4 +, IL-17 + ), a T cell phenotype involved in the inflammatory response, was reported to be linked to the development of cervical cancer as well as others. It has been shown that the percentage of this cell phenotype, as well as Th17/Treg ratio, was higher in peripheral blood samples of patients with premalignant and cervical cancer lesions when compared to the normal cytology group, a similar outcome previously observed. In other studies, elevated levels of Th17 cells in CIN and cancer patients and in high-grade cervical lesions were found, when compared to healthy controls. A higher statistical prevalence of this cell type was reported not only in serum but also in the cervical tissue of cancer patients. Likewise, a statistical significance of Th17 prevalence among patients with advanced (higher prevalence) and early stages in malignant processes was observed and thus, its level has been deemed a good independent prognostic factor in cervical cancer. In HNSCC, elevated levels of Th17 cells were found along with Treg in serum and in tumor milieu, concluding its negative impact on the immune response against HPV, especially due to the induction of an exaggerated inflammatory response through IL-17 secretion. The increased levels of Th17 or IL-17 were also found in patients with hypopharyngeal carcinoma, as well as other cancers such as colon, gastric and lung, suggesting a connection with cancinogenesis and malignant progression. Conversely, premalignant oral lesion treatments with TGF- type I receptor inhibitor and IL-23 showed maintenance of the Th17 phenotype instead of changing to Treg cells. This resulted in the production of stimulatory and inflammatory molecules and slowed the progression of premalignant lesions to oral cancer. The last research outcome demonstrated that the effect of Th17 is still unclear, mainly in other types of HPV-related cancers. IL-1 (both IL-1 and -) is the other interleukin present in higher levels in cervical cancer. Secretion of IL-1 is promoted by keratinocyte damage, the major IL-1-producing cells. This interleukin is also secreted by TAM, another important cell in harmful proinflammatory response which induces metastasis, tumor growth, angiogenesis, and Treg differentiation. IL-1 expression is modulated by NK-B and vice versa, and the same occurs with TNF-, that participates in the IL-1 synthesis pathway. In particular, IL-1 exhibits an essential role in inflammation-associated carcinogenesis and supports tumor growth and metastasis. 
This interleukin promotes the secretion of a great range of cytokines, chemokines, growth factors, and various metastatic mediators, such as TGF-, VEGF, metalloproteinases, and endothelial adhesion molecules. In cancer studies, IL-1 is associated with a poor prognosis, and in cervical cancer research, it supports tumor progression and carcinogenesis. IL-1 was also overexpressed in several types of tumors like breast, colon, oesophageal, lung, and oral cancer. A high throughput bioinformatics analysis plus in vitro and in vivo observation demonstrated that IL-1 is one of the key genes involved in HNSCC formation. This interleukin is closely related to the malignant transformation of oral cells, protumorigenic microenvironment generation that leads to oral carcinogenesis, and cell growth of the same type of cells. Due to its crucial role in inflammation and carcinogenesis, this interleukin has been considered useful in therapeutic strategies as well as IL-1 which also plays an important role in carcinogenesis. Interestingly, the expression of IL-1 has been associated with higher risk of distant metastasis in HNSCC, the major cause of death in this type of cancer. In this scenario, the evaluation of IL-1 and clinical information may predict patients with high risk of HNSCC metastasis, thus leading to new treatment strategies. Other examples of proinflammatory cytokines which support inflammation-associated cervical carcinogenesis are listed in Table 3. 3.3. Immunosuppressive Cytokines. There are several cytokines which are directly involved in downregulation of inflammatory status, promoting infection progress and cancer development. These include TGF-, IL-4, IL-6, and IL-10 which are the main Th2 cytokines related to this antiinflammatory profile and are discussed in this topic. The expression of these cytokines is modulated by HPV oncoproteins o create a Th2 microenvironment. They have been reported to be upregulated in premalignant and cancer lesions, and were suggested as biomarkers for HPVrelated cancer. Other Th2 interleukins are also encountered at high levels in cervical cancer patients, such as IL-9 and IL-15. The latter and TGF-, for example, were reported to induce the expression of CD94/NKG2A, preventing NK cell activity and CTL cytotoxicity. An ongoing study evaluated a recombinant human IL-15 in advanced HNSCC patients in order to measure NK cell count, activity, and other immune response parameters (NCT01727076). TGF- plays a crucial role in the repression of immune responses against HPV and is upregulated in cervical carcinogenesis by viral oncoprotein activities. It blocks effector functions by suppressing antigen presentation, NK cytotoxicity, B cell and CTL proliferation, and cytokine synthesis of the Th1 profile. It downregulates IL-2 receptor signalling in T cells and IL-12 expression by APCs. Moreover, this cytokine is able to promote IL-10 expression by macrophages, induces proteolytic activity, which causes angiogenesis and metastasis, induces CD94/NKG2A expression on T cells, and stimulates the differentiation of T cells to Treg and Th17 phenotypes. Moreover, TGF- upregulation has been associated with favouring tumor development, CIN III specimens, cervical cancer, and cancer invasiveness. IL-4 is another important cytokine frequently mentioned in cervical cancer studies. 
It is able to inhibit cytotoxic activity and IFN- expression, even in the presence of PMA (phorbol myristate acetate) and ionomycin-substances used in carcinogenesis models that stimulate immune responses. This interleukin induces a switch to a Th2 responsiveness profile along with IL-2 and is associated with viral persistence, disease severity, and progression of precancerous lesions. Like IL-4, IL-6 is upregulated during cervical carcinogenesis progression, playing a role in HPV-immortalized and carcinoma-derived cervical cell line proliferation. It has been stated to induce the phosphorylation of STAT3 (the activated condition) in HNSCC, causing immunosuppression by inhibiting maturation of DC and activation of neutrophil, macrophage, NK and T cells. STAT3 is a key transcription factor which is involved in several other immunosuppressive activities such as IL-10 signalling, downregulation of IL-12, impairment of DC, and production of Treg cells. IL-6 also contributes to the proliferation and inhibition of apoptosis of cancer cells, and thus, supports chronic inflammation and cancer development. This interleukin affects DC migration, induces angiogenesis, MMP9 synthesis, and TAM differentiation. However, IL-6 has also been reported to play anti-infection and antitumor functions. Its expression is associated with a poor clinical prognostic factor and its transcription repression can be used in immunotherapy approaches in cervical and other HPVrelated cancers. IL-10 is the most studied Th2 cytokine with immunosuppressive activity in HPV-related cancer. It is synthesized by various cells, including Treg, Th2, M2 macrophages, APCs, and NK cells. This interleukin supports the creation of a microenvironment favourable to tumor development and it is the main cytokine having a Th2 role along with TGF-. It hampers immune surveillance by blocking (i) antigen presentation by DC through the reduction of MHC II, adhesion, and costimulatory molecules, (ii) the synthesis of cytokines of the Th1 profile and (iii) the activities of monocytes and NK cells. IL-10 also supports immunomodulation by inducing the differentiation of T cells and macrophages to the Treg and M2 profiles, respectively. Furthermore, it has been demonstrated that IL-10 prejudices antiviral immune responses, since it impairs Th1 profile differentiation, CD8 + cytotoxic response, and CD3 + expression, which is essential in activating T cells. This interleukin also causes the downregulation of MHC I and II on the surface of monocytes. Moreover, this molecule is able to upregulate HPV16 E7 in cervical carcinoma cells in vitro, inducing tumor proliferation. Despite the immunosuppressive activities cited above, there were disagreements over what high levels of IL-10 were related to, since it has been also associated with low grade lesions. However, its immune regulatory activities have been well established. Elevated levels of IL-10 are commonly correlated with high-grade lesions and cancer condition, and its immunosuppressive roles have been reported countless Table 3: Some proinflammatory cytokines in HPV-related carcinogenesis. 
Cytokine Action mechanism IL-8 (i) It induces neutrophil chemoattraction and cell survival (ii) It stimulates cell growth and cancer metastasis (iii) Prognosis of patients with high levels of IL-8 is extremely poor and its expression was associated with lesion severity IL-17 (i) It is associated with lymphatic metastasis (ii) It is found in high levels in patients with cervical cancer (iii) It is also linked to the antitumor response, when it supports the recruitment and activation of neutrophils, the maturation of DC/priming of T cells, and the synthesis of TNF-, IL-1, and IL-6 IL-23 (i) It is synthesized by activated APCs (ii) It induces macrophage secretion of TNF-, DC production of IL-12, and the synthesis of IL-17 (iii) It induces upregulation of MMP9 (matrix metalloproteinase 9), tumor angiogenesis, and TAM activity and prevents T CD8 + cell infiltration (iv) As well as IL-17, shows antitumor activities through its immune surveillance properties, such as the promotion of CTL and NK cell, the IFN- secretion, and the stimulation of the IL-12-induced Th1 response times, what are probably induced by HPV E2 protein activity. Hence, IL-10, along with CD8 + T cells and Treg cell ratio, has been considered an independent factor of poor prognosis. Treg cell is associated with tumor growth, tumorigenesis, and lymph node metastasis, constituting a poor prognosis for patients with cervical and other cancers as well. These cells are rich in high-grade carcinoma samples, suppress CTL and NK cell activities, and support cancer progression through cytokine synthesis, such as IL-10 itself and TGF-. Therefore, Treg cells, IL-10, and TGF- (and other immune activities interfered by HPV are summarized in Figure 5) are considered targets for therapeutic interventions. Another cytotoxic cell has also been encountered at high numbers in cervical cancer samples. The expansion of this novel T cell subtype might affect lesion fate. It is positive for CD4 and NKG2D markers and is subdivided in CD28 +/−, showing a statistically significant association with underexpression of several proinflammatory cytokines, such as IL1-, IL-2, and TNF-. The role it plays and whether it is related to tumor growth or tumor suppression are still not known. Novel cytokines in cervical cancer study have been gaining more and more attention due to their therapeutic potential. One of them, IL-37, has shown promising results because of its anticancer activity. For the first time, its anticancer role in an in vitro cervical cancer model has been demonstrated. This interleukin suppressed proliferation and invasion of HeLa and C33A cell lines, with higher inhibition rates in HeLa cell line, showing that anticancer activities were related to the HPV. This occurred through the inhibition of mRNA and protein expression of STAT3. Moreover, STAT3 phosphorylation was also blocked. This protein is a key transducer signalling molecule for developing an immune response in a tumor setting and could be an antagonist of IL-6 activity since STAT3 is central to the IL-6 signalling pathway. In summary, cytokines are key molecules which modulate the pathology milieu. Thus, the regulation of the transcription, the synthesis, and the secretion of these signalling molecules in immunotherapies are necessary events to achieve satisfactory results in therapy. 
Cytokines are involved in several mechanisms of the immune system, such as immune cell maturation/differentiation, maintenance of activation, and regulation of immune cell activities, which contribute to an immunoprotective background. Consequently, these molecules are important for therapy, e.g., by supporting the establishment of a proimmune milieu for infection clearance or by preventing the immunosuppressive role of immune cells. Many therapeutic interventions that take advantage of this have been developed and are currently being tested, with very promising results, such as the active cytokine components of IRX-2, which increased antigen presentation, NK cell activity, the synthesis of costimulatory molecules, and CD8+ T cell responses. This cytokine mixture is safe, generally well tolerated, and is currently being tested in a phase II trial for HNSCC.

Future Prospects

The immune response is vital in HPV-related cancer progression and resolution. From the data collected here, the host immune response can be harnessed to benefit patients' health in various ways, since these same mechanisms operate in the natural resolution of infection. These new immunological approaches open novel horizons in diagnosis and especially in cancer therapy. TLRs and cytokines have been used to create a tumor milieu that prevents tumor development or favours the destruction of transformed cells, and their use currently appears to be a very promising immunotherapeutic strategy. The activation of DC and NK cells by administration of appropriate TLR agonists and cytokines is essential to ensure T-helper and cytotoxic responses. Another possibility is to use TLRs and cytokines in combination with monoclonal antibodies that block the activity of immune checkpoint molecules, such as CTLA-4, PD-1, LAG-3, and TIM-3. These receptors are found on T cells and interact with ligands located on the cell membrane of tumor and antigen-presenting cells. In tumor pathogenesis, the activation of these molecules triggers signalling pathways which primarily prevent the function of CD8+ and CD4+ cells. In addition, the induction of other immune evasion activities offers a suitable environment for infection persistence and tumor development. Thus, inhibition of these activities may be considered a good therapeutic target; CTLA-4, for example, is highly expressed in HPV-related cancer when compared to HPV-negative samples. It competes with CD28 for interactions with the CD80 and CD86 ligands on the DC surface, which prevents T cell priming by these cells. PD-1/PD-L1 is also strongly related to immune escape in HPV-related cancers, being correlated with reduced disease survival due to attenuation of CTL activity. Similarly, LAG-3 and TIM-3 have recently been considered in immunotherapy approaches: the first molecule enhances Treg cell activity, and the second induces T cell exhaustion and suppression of the innate response, both associated with poor prognosis and tumor progression.
Checkpoint inhibitor monoclonal antibodies combined with other immune approaches have already been evaluated for cervical and HNSCC treatment: anti-PD-1 antibodies such as pembrolizumab (in ongoing studies for HNSCC treatment: NCT02255097 and NCT02252042) and nivolumab (NCT02105636 and NCT02488759); anti-PD-L1 antibodies such as durvalumab (NCT02207530); anti-CTLA-4 antibodies such as ipilimumab and tremelimumab; anti-LAG-3 antibodies such as BMS-986016 (NCT01968109); and anti-TIM-3 antibodies (NCT02817633). The cited studies involving LAG-3 and TIM-3 were for the treatment of solid tumors. On the other hand, the combination with TLR agonists is new for HPV-related cancers; there are only two studies, NCT02643303 and NCT02124850, which are currently recruiting patients to test poly(I:C) with tremelimumab and durvalumab, and the combination of VTX-2337 plus cetuximab or VTX-2337 plus cetuximab plus nivolumab, respectively. Regarding the combination of checkpoint inhibitors with cytokines in HPV-related cancers, no current study has been reported, at least to the best of our knowledge, and only three studies have been reported for other, HPV-unrelated solid tumors (NCT02614456, NCT02174172, and NCT02947165). Therefore, the combination of different immunotherapeutic methods has shown increased beneficial effects and seems to be crucial in achieving better outcomes, as observed in preclinical and clinical trials. As discussed here, HPV-related tumors require a strongly immunosuppressive status for cancer development, with increased activities of Treg cells, CTLA-4, and PD-1 and the suppression of APC and NK cells. Thus, studies on such evasion mechanisms are needed and offer new therapeutic perspectives.

Abbreviations:
SCC: squamous cell carcinoma
TAB: TGF-β-activated kinase 1/MAP3K7-binding protein
TAK: TNF receptor-associated factor
TAM: tumor-associated macrophage (M2 macrophage phenotype)
TAP-1: transporter antigen processing-1
TIRAP: TIR-associated protein
TNF-α: tumor necrosis factor α
TLR: Toll-like receptor
TRAM: Toll receptor-associated molecule
TRAF: TNF receptor-associated factor
TRIF: TIR-domain-containing adapter-inducing interferon-β

Conflicts of Interest
The authors declare that there is no conflict of interest regarding the publication of this paper.

Authors' Contributions
Aldo Venuti and Antonio Carlos de Freitas equally contributed to this work.
The purpose of this study was to determine the role of dentate granule cells in the limbic seizures of Wistar rats caused by unilateral intra-amygdaloid administration of kainic acid (KA). Stereotactic surgery was performed in Wistar rats and a stainless steel injection chemitrode was inserted in the left amygdala. A lesion of the left dentate granule cells was induced by microinjection of colchicine, and the rats were allowed a 7-day postoperative recovery period. The rats were divided into two groups. One group was used for observation of symptoms and electroencephalographic findings during the limbic seizures for 6 hours after the KA injection. The other group was processed for measurement of local cerebral glucose utilization (LCGU) during limbic seizure status. Histological study demonstrated a selective loss of dentate granule cells in the left hippocampus 7 days after the colchicine injection. After the KA injection, initiation of the spike discharge was significantly delayed not only in the hippocampus (from 6.01 min to 37.25 min) but also in the amygdala (from 2.96 min to 10.8 min). Progression, frequency, and intensity of the KA-induced seizures were also inhibited by the colchicine-induced dentate granule cell lesion. During limbic seizure status, LCGU values obtained by 14C-deoxyglucose autoradiography were significantly decreased not only in the hippocampus but also in the amygdala on the side of the KA injection. These data suggest that hippocampal dentate granule cells play an important role in the initiation and progression of KA-induced limbic seizures. The results also suggest that an acceleration mechanism for limbic seizures exists between the amygdala and hippocampus.
package styles import ( "github.com/alecthomas/chroma" ) var ( // Inspired by Apple's Xcode "Default (Dark)" Theme background = "#1F1F24" plainText = "#FFFFFF" comments = "#6C7986" strings = "#FC6A5D" numbers = "#D0BF69" keywords = "#FC5FA3" preprocessorStatements = "#FD8F3F" typeDeclarations = "#5DD8FF" otherDeclarations = "#41A1C0" otherFunctionAndMethodNames = "#A167E6" otherTypeNames = "#D0A8FF" ) // Xcode dark style var XcodeDark = Register(chroma.MustNewStyle("xcode-dark", chroma.StyleEntries{ chroma.Background: plainText + " bg:" + background, chroma.Comment: comments, chroma.CommentMultiline: comments, chroma.CommentPreproc: preprocessorStatements, chroma.CommentSingle: comments, chroma.CommentSpecial: comments + " italic", chroma.Error: "#960050", chroma.Keyword: keywords, chroma.KeywordConstant: keywords, chroma.KeywordDeclaration: keywords, chroma.KeywordReserved: keywords, chroma.LiteralNumber: numbers, chroma.LiteralNumberBin: numbers, chroma.LiteralNumberFloat: numbers, chroma.LiteralNumberHex: numbers, chroma.LiteralNumberInteger: numbers, chroma.LiteralNumberOct: numbers, chroma.LiteralString: strings, chroma.LiteralStringEscape: strings, chroma.LiteralStringInterpol: plainText, chroma.Name: plainText, chroma.NameBuiltin: otherTypeNames, chroma.NameBuiltinPseudo: otherFunctionAndMethodNames, chroma.NameClass: typeDeclarations, chroma.NameFunction: otherDeclarations, chroma.NameVariable: otherDeclarations, chroma.Operator: plainText, chroma.Punctuation: plainText, chroma.Text: plainText, }))
Effect of CPAP on blood pressure in patients with minimally symptomatic obstructive sleep apnoea: a meta-analysis using individual patient data from four randomised controlled trials

ABSTRACT

Background CPAP reduces blood pressure (BP) in patients with symptomatic obstructive sleep apnoea (OSA). Whether the same benefit is present in patients with minimally symptomatic OSA is unclear, thus a meta-analysis of existing trial data is required.

Methods The electronic databases Medline, Embase and trial registries were searched. Trials were eligible if they included patients with minimally symptomatic OSA, had randomised them to receive CPAP or either sham-CPAP or no CPAP, and recorded BP at baseline and follow-up. Individual participant data were obtained. Primary outcomes were absolute change in systolic and diastolic BP.

Findings Five eligible trials were found (1219 patients) from which data from four studies (1206 patients) were obtained. Mean (SD) baseline systolic and diastolic BP across all four studies was 131.2 (15.8) mm Hg and 80.9 (10.4) mm Hg, respectively. There was a slight increase in systolic BP of 1.1 mm Hg (95% CI −0.2 to 2.3, p=0.086) and a slight reduction in diastolic BP of 0.8 mm Hg (95% CI −1.6 to 0.1, p=0.083), although the results were not statistically significant. There was some evidence of an increase in systolic BP in patients using CPAP <4 h/night (1.5 mm Hg, 95% CI −0.0 to 3.1, p=0.052) and reduction in diastolic BP in patients using CPAP >4 h/night (−1.4 mm Hg, 95% CI −2.5 to −0.4, p=0.008). CPAP treatment reduced both subjective sleepiness (p<0.001) and OSA severity (p<0.001).

Interpretation Although CPAP treatment reduces OSA severity and sleepiness, it seems not to have a beneficial effect on BP in patients with minimally symptomatic OSA, except in patients who used CPAP for >4 h/night.
INTRODUCTION

Obstructive sleep apnoea (OSA) is characterised by repetitive apnoea and hypopnoea during sleep, associated with oxygen desaturations and arousals from sleep. It has been estimated that between 2% and 4% of the adult population in Western countries suffer from symptomatic OSA (OSAS), and it is becoming more prevalent as the average population body weight rises. 1 The prevalence of minimally symptomatic OSA presenting without overt daytime sleepiness among middle-aged adults has been shown to be as high as 30%, 1 making OSA a frequent disorder and thus of major epidemiological interest. OSAS has been proven to be a causal factor in the pathogenesis of vascular dysfunction and hypertension. 2 Treatment of OSAS patients with CPAP has been shown to reduce blood pressure (BP) by approximately 2-10 mm Hg in several randomised controlled trials (RCTs). Furthermore, in patients with resistant hypertension and OSA, CPAP achieves reductions in 24 h BP of about 5-10 mm Hg. 7 8 Whether CPAP has the same beneficial effect on BP in patients with minimally symptomatic OSA is a matter of debate and, because of the high prevalence of the disorder, this question is of considerable clinical interest. The findings from five RCTs 9-13 have been controversial and suggest that perhaps only specific subgroups of minimally symptomatic OSA patients (eg, patients with optimal adherence to CPAP and those with hypertension) may benefit from CPAP in terms of BP reduction. However, as only a limited number of such patients have been included in the individual trials, we performed a meta-analysis using individual patient data from all participants in the existing RCTs in order to explore this clinically and epidemiologically important question. We also used a novel method for analysing treatment-effect interactions with continuous covariates using fractional polynomials, 14 which is more powerful than conventional methods and allows more accurate inference to be drawn on which groups of patients may benefit most from treatment.

METHODS

The meta-analysis followed a prespecified protocol and statistical analysis plan that outlined the objectives, outcomes, methods for identifying trials and methods of analysis.

Inclusion criteria

Trials must have been randomised, closed to patient accrual, included patients with a diagnosis of minimally symptomatic or asymptomatic OSA and randomised patients to receive CPAP, or either sham-CPAP or no CPAP (standard care). Trials must also have measured systolic (SBP) and diastolic BP (DBP) on each patient at enrolment and at a follow-up visit. 15 The full electronic search strategy for Medline is shown in the online supplement. Reference lists of all identified trials and review articles were also screened for relevant trials. Clinical trials registries were searched for unpublished trials.

Data collection

Prior to data collection, a list of the required and desired variables for analysis was included in the protocol. SBP and DBP measurements from baseline and all follow-up visits were required along with treatment allocation, date of randomisation and anonymous patient identifier. Daytime or office BP readings were desirable. Baseline and follow-up data on Epworth Sleepiness Score (ESS) and oxygen desaturation index (ODI) (or apnoea-hypopnoea index (AHI) if ODI was unavailable) were desirable, along with age or date of birth, gender, body mass index (BMI) and antihypertensive medication usage.
Although not identical, ODI and AHI are relatively similar and for pragmatic reasons were assumed to be the same. CPAP treatment usage (mean number of hours used per night of follow-up) was collected for patients randomised to CPAP. All data were consistently coded between studies before performing the meta-analysis.

Risk of bias assessment

All data were checked for completeness, validity and consistency. Data queries were resolved with the relevant trial team. Standard checks such as patterns of treatment allocation and balance of baseline characteristics between treatment arms were used to assess the integrity of the randomisation process in each study. Anonymous baseline and outcome data were sought for all randomised patients, regardless of whether they were excluded from the analysis of their respective trial.

Endpoints

The primary outcomes were absolute change in SBP and DBP between baseline and follow-up. Secondary outcomes were change in ESS and change in ODI. All outcomes were analysed using data from the 6-month visit or the nearest available visit. Only data from the first period of treatment were used from crossover trials. The effect of CPAP usage (< or >4 h/night) on each outcome, compared with control, was investigated. Furthermore, the variation in the effect of CPAP on BP and ESS was explored over baseline ESS, ODI/AHI, gender, BMI and, for BP outcomes only, baseline BP and antihypertensive medication usage.

Statistical analysis

Outcomes were analysed on an intention-to-treat basis in each study using multiple linear regression models adjusting for treatment allocation, the corresponding baseline variable of the outcome, age, BMI, gender, ODI (or AHI) and antihypertensive medication usage (BP outcomes only). The adjusted treatment effects for each outcome (adjusted absolute difference in means) were then combined across trials using a fixed-effect inverse-variance-weighted meta-analysis; a short sketch of this pooling step is given below. Treatment effects and 95% CIs for each study and overall were presented using forest plots. Analyses were performed using Stata V.12. Heterogeneity was assessed using the I2 statistic 16 and Cochran's χ2 test. 17 Regardless of the amount of heterogeneity present, random effects meta-analyses were not performed due to the small number of studies, which makes it difficult to precisely estimate the between-trial variation. 18 To estimate the effect of CPAP usage on each outcome, patients allocated to CPAP were dichotomised into those who used treatment for < or >4 h/night, which may be considered the minimum usage required for improved cardiovascular outcomes. 9 19 The adjusted treatment effect in each usage group was estimated relative to the controls and pooled across studies using a fixed-effects meta-analysis. Treatment interactions with binary covariates (eg, gender), that is, the difference in the treatment effect between each subgroup (eg, males and females), were estimated for each study using a multiple linear regression model before pooling the coefficients for the interaction terms across studies using a fixed-effects inverse-variance-weighted meta-analysis. The effect of CPAP relative to control at different levels of a continuous covariate (eg, age) was analysed using two approaches: (1) by splitting the continuous covariate at quartiles (determined on the pooled dataset) and pooling the estimated treatment effect in each resulting subgroup across studies, and (2) by using fractional polynomials 14 to produce a single, continuous function of the treatment effect over the covariate.
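To make the pooling step above concrete, here is a minimal sketch of a fixed-effect inverse-variance-weighted meta-analysis in Python. It is only an illustration of the formula, not the authors' Stata code, and the per-study effects and standard errors are invented placeholders rather than trial data. The fractional-polynomial interaction method is detailed next.

```python
import numpy as np

def ivw_fixed_effect(estimates, std_errors):
    """Fixed-effect inverse-variance-weighted meta-analysis.

    estimates  : per-study adjusted treatment effects (e.g. mm Hg)
    std_errors : their standard errors
    Returns the pooled effect, its standard error and a 95% CI.
    """
    estimates = np.asarray(estimates, dtype=float)
    weights = 1.0 / np.asarray(std_errors, dtype=float) ** 2
    pooled = np.sum(weights * estimates) / np.sum(weights)
    pooled_se = np.sqrt(1.0 / np.sum(weights))
    ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
    return pooled, pooled_se, ci

# Hypothetical per-study effects on systolic BP (mm Hg); not the trial data.
effects = [1.4, 0.8, 2.0, -0.5]
ses = [0.9, 1.1, 2.5, 3.0]
print(ivw_fixed_effect(effects, ses))
```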
For method (2), an approach used by Sauerbrei and Royston 20 for meta-analysis of continuous covariates in observational studies was adapted. We first estimated the best-fitting fractional polynomial function (with a single transformation of the covariate of interest) on the pooled dataset (stratified by study), adjusting for the same variables as used in the main analysis of the outcome of interest. A study-wise approach, whereby the best function is found for each study, was not used since two of the four studies we investigated had a relatively small sample size and would therefore have lacked power to find any nonlinear relationship. 20 A regression model that included an interaction term between treatment and the transformed covariate was then fitted to each study. The coefficients for this interaction term and the treatment variable were then pooled across studies using a multivariate meta-analysis 21 to produce a single treatment effect curve with 95% CI and a p value to test for the presence of an interaction. The results of both methods were included on the same plot to check the consistency between the two methods of analysis. Agreement between the results increases the plausibility and evidence of any possible treatment-covariate interaction. Disagreement may indicate a type I error of method (2) caused by the flexible modelling, or an erroneous model, in which case the results of the subgroup analysis (method (1)) were interpreted.

RESULTS

A total of five eligible trials were found in the study searches (1219 patients in total, see figure 1). Data were available from four trials 9 10 12 13 that randomised 599 patients to CPAP and 607 to control (1206 patients). Data were unavailable for a pilot study of 13 patients, 11 which was therefore excluded from the meta-analysis. Data for 99% of all known randomised patients were therefore included in the meta-analysis. Details of each of the four studies are shown in table 1 and baseline characteristics are summarised in table 2. Data checks showed that all studies used adequate methods of randomisation (data not shown). The two largest studies 9 12 compared CPAP against no CPAP and followed up patients at 6 months. The two smaller studies 10 13 used sham-CPAP as the control arm and used shorter follow-up periods. The study by Robinson et al 13 was a crossover trial and so only data from the first period were used. Data for all primary and secondary outcomes were available from each study apart from the study by Barbé et al, 9 which did not record AHI during follow-up. All baseline variables that were planned to be adjusted for were available. Most studies regarded patients with an ESS≤10 to be non-sleepy, apart from the study by Craig et al, 9 which used a pragmatic approach of allowing the clinician to decide whether a patient was minimally symptomatic at presentation. Despite this difference, there was only a small difference in average baseline ESS scores between studies (table 2). Both studies by Barbé et al 9 10 measured OSA severity using AHI rather than ODI, and table 2 shows that mean AHI in these two studies was higher than mean ODI in the other studies.

Primary analyses

Complete data from 1074 and 1075 patients were available for the analyses of SBP and DBP, respectively.
Figure 2 shows that there was a small increase in SBP of 1.1 mm Hg (95% CI −0.2 to 2.3 mm Hg) and a decrease in DBP of −0.8 mm Hg (95% CI −1.6 to 0.1) in patients on CPAP, relative to control, although these differences were not statistically significant (p=0.086 and 0.083, respectively).

Secondary analyses

A total of 1096 and 446 complete cases were available for the analyses of ESS and ODI, respectively. Figure 3 shows that CPAP was estimated to improve subjective sleepiness by 1.1 points on the ESS scale compared with control (95% CI −1.5 to −0.8, p<0.001). There was a large amount of heterogeneity caused by the stronger result of the Craig et al 12 trial. Removing this study led to a conservative estimate of the treatment effect of −0.6 (95% CI −1.1 to −0.2, p=0.004). Figure 3 also shows that there was a strong, statistically significant effect of CPAP on ODI, reducing it by 8.

Effect of CPAP usage on outcomes

A total of 541/599 (90%) patients allocated to receive CPAP had usage data at follow-up. The median usage was 4.7 h/night and 54% of patients used treatment >4 h/night on average (see online supplementary eTable S2.1). Figure 4 shows that the increase in SBP in patients using CPAP <4 h/night (1.5 mm Hg increase, p=0.052) is slightly higher than the overall effect, while there is a smaller effect in patients with higher usage (0.6 mm Hg decrease, p=0.49). However, the difference between these effects was not statistically significant (p=0.41). Figure 5 shows that there was a reduction in DBP relative to controls of 1.4 mm Hg in patients with optimal CPAP usage (p=0.008) while there was no change in patients using CPAP <4 h/night (p=0.79). There was some evidence of a difference between these effects (p=0.062). Online supplementary figures S3.1 and S3.2 show that, while there were improvements in ESS and OSA severity in both usage groups compared with control, the benefits were greater in patients who used the treatment >4 h/night on average.

Treatment interactions

There was no evidence of a treatment interaction with gender or antihypertensive medication use (see online supplementary figures S4.1-S4.5). Most of the interactions between continuous covariates and treatment effect on BP were not statistically significant (online supplement), apart from marginal evidence that the effect of CPAP on DBP becomes more favourable as BMI increases (p=0.052; see online supplementary figure S6). There were statistically significant interactions between the effect of CPAP on ESS and baseline ESS and ODI. The reduction in ESS was stronger in patients with a higher ESS score at baseline (p=0.030; figure 6), while there was strong evidence that the effect on ESS diminishes in patients with higher ODI (p=0.014; figure 7). Visual comparison of the continuous curves with the effects in quartiles, by study and overall, showed that in all analyses the two methods of analysis agreed quite well, thus increasing the plausibility of the findings.

DISCUSSION

The findings of this meta-analysis show that although CPAP reduces OSA severity and subjective sleepiness it seems not to have an overall beneficial effect on BP in patients with minimally symptomatic OSA. In patients using CPAP <4 h/night, SBP seemed to increase slightly relative to controls, whereas patients who used CPAP >4 h per night seemed to benefit from a small decrease in DBP.
It should be noted, however, that these latter findings are purely exploratory, since the characteristics of patients using CPAP less than or more than 4 h/night could differ and thus introduce the possibility of confounding. To date, there are four meta-analyses published on the impact of CPAP on BP in patients with OSAS. 4 Our meta-analysis differs in some important aspects from previously published meta-analyses on the effect of CPAP on BP in patients with OSA. 4 22-24 First, our meta-analysis is focused on trials that included patients with minimally symptomatic OSA only. Previously published meta-analyses have not included the most recent large studies on asymptomatic patients, 9 12 thereby making it difficult to draw definitive conclusions as to whether patients with minimally symptomatic OSA treated with CPAP benefit from a BP reduction. Secondly, our meta-analysis obtained data for individual patients, which allows analysis of different subgroups of patients, thus identifying those who are most likely to benefit from CPAP, and also allows adjustment for covariates, thus reducing potential bias and increasing power. In contrast to the findings of the previous meta-analyses, 4 22-24 we did not find an overall beneficial effect of CPAP on BP in patients with minimally symptomatic OSA, although there is wide interindividual variation. Our findings imply that patients with minimally symptomatic OSA should generally not be treated with CPAP to reduce BP in order to improve cardiovascular risk. However, in clinical practice, a short trial of CPAP treatment may be indicated to identify those patients who do use CPAP for more than 4 h/night, and thus may benefit in terms of a BP reduction and improvement of endothelial function. 19
Figure 5: Forest plots showing the effect of less than and more than 4 h/night CPAP usage on diastolic blood pressure (mm Hg) compared with control in each study and overall. Difference between pooled treatment effects: p=0.062.
In this regard, even small effects on BP in the range of 1-2 mm Hg may be of clinical significance as they are associated with reduced odds of cardiovascular and cerebrovascular events. 25 At the same time, it must be stressed that, if used insufficiently, CPAP may potentially harm patients with minimally symptomatic OSA by slightly increasing SBP. This may be related to increased psychological stress from not being able to fully adapt to CPAP, or from difficulty in falling asleep for a prolonged period; both situations could be associated with increased sympathetic nervous system activation leading to the minimally increased office BP. 26 27 Thus, the absence of an effect on BP in previous studies of minimally symptomatic patients may have been generated by the generally lower compliance with CPAP in these studies. There is accumulating evidence that favourable adherence to CPAP (ie, >4 h/night) is required to achieve beneficial effects on cardiovascular outcomes and this seems to be true not only for asymptomatic but also symptomatic patients with OSA. 4 9 19 However, further research is needed to determine baseline predictors of optimal CPAP usage, thus allowing clinicians to better identify patients who are more likely to benefit from treatment. In the current meta-analysis, there was no evidence that the treatment effect of CPAP on BP was dependent on the severity of OSA (as defined by ODI or AHI), which is in contrast to the report by Haentjens et al. 4
Mean AHI in the two Barbé studies was higher than mean ODI in the other studies; however, this is due both to the Spanish studies using a higher inclusion threshold on AHI and to the fact that AHI values are usually higher than the equivalent ODI, especially when using >4% dips in SaO2. A possible limitation of the meta-analysis is that not all studies used the same follow-up length or the same treatment for the control arm. The two largest studies used standard of care and patients were followed up at a 6-month visit. By contrast, the two smaller studies used sham-CPAP and followed patients for a shorter period of time. However, since these two studies contributed only about 7% of the total number of patients, they are unlikely to greatly influence the overall results of the meta-analysis. Despite the studies in this meta-analysis looking at patients with apparently normal levels of daytime sleepiness and only minimal symptoms, there was an average reduction in ESS of about one point in patients using CPAP relative to control, with considerable variability across baseline ESS. For instance, this improvement was greater in those with an initially higher ESS, which would be expected (figure 6). There was also a dependence on initial OSA severity (figure 7), with more severe patients benefitting less from treatment. In clinical practice, this means that it is not possible to confidently predict who is going to benefit most from CPAP therapy in terms of sleepiness reduction, again suggesting that a trial of CPAP therapy is required to establish likely benefit. Finally, in this study, we have used a novel method for performing a meta-analysis of treatment interactions with continuous covariates, based on a method introduced by Sauerbrei and Royston for observational studies. 20 By using the full information in the data, this method is more powerful than the common approach of examining the pooled treatment effect in subgroups of a continuous covariate, and more flexible than assuming a linear relationship between treatment effect and the covariate. By showing the interaction in more detail, more precise conclusions can be drawn about which patients may benefit from treatment. We therefore strongly recommend its use in future meta-analyses. Stata software is available from the authors upon request. In conclusion, although CPAP treatment reduces OSA severity and sleepiness, it seems not to have a beneficial effect on BP in patients with minimally symptomatic OSA, except in those patients who used CPAP for >4 h/night.

Acknowledgements The authors thank their colleague at the MRC Clinical Trials Unit for his help in developing the methods for performing the meta-analysis of treatment interactions with continuous covariates. We would also like to thank Claire Vale (MRC Clinical Trials Unit) for her guidance on performing the literature search and for comments on an earlier draft of this manuscript.

Contributors DJB was responsible for the search of trials, data extraction, data analysis, interpretation of results and drafting the manuscript. JRS and MK were responsible for the study conception, collection of trial data, data extraction, interpretation of results and drafting the manuscript. FB was responsible for data extraction, interpretation of results and drafting of the manuscript. DJB had full access to all of the data in the study and can take responsibility for the integrity of the data and the accuracy of the data analysis.

Competing interests None.

Provenance and peer review Not commissioned; externally peer reviewed.
A simple and intuitive method to calculate $R_0$ in complex epidemic models

Epidemic models are a valuable tool in the decision-making process. Once a mathematical model for an epidemic has been established, the very next step is calculating a mathematical expression for the basic reproductive number, $R_0$, which is the average number of infections caused by an individual that is introduced in a population of susceptibles. Finding a mathematical expression for $R_0$ is important because it allows us to analyze the effect of the different parameters in the model on $R_0$, so that we can act on them to keep $R_0<1$ and make the epidemic fade out. In this work we show how to calculate $R_0$ in complicated epidemic models by using only basic concepts of Markov chains.

Introduction

This paper is not about building epidemic models but about their analysis, that is, getting conclusions from them. In particular, this paper deals with calculating R_0, the most important quantity in an epidemic model. This paper is a significantly improved version of Hernandez-Suarez (2002). First we need to understand why R_0 is important in an epidemic model. The concept comes from the theory of branching processes, in which we deal with the following problem: if an individual (particle or whatever) gives origin (birth) to a random number of descendants, and each of the descendants in turn gives birth to a random number of individuals and so on, what is the probability that the population will become extinct? First, observe that we mentioned a random number of individuals, because if the number of descendants is a constant, then there is no problem at hand. We need to assume that every individual in every generation leaves X descendants before dying, where X is a random variable with some probability distribution. We leave for Appendix A1 the proof of the following result: if the expectation of the random variable X is at most 1, the population will vanish eventually. A corollary is that if the expectation is larger than 1, then there is a chance that the population will prevail, and the chances increase with it. The above model assumes that all individuals are equally capable of reproducing, that is, that the random variable X follows the same distribution for all individuals, which is not always true. Assume for instance that the resources to survive are limited; suppose that we are talking about a bacteria population in a petri dish, where the population will start competing for resources (besides the toxicity of chemicals produced by the bacteria) and where we know that even if the mean offspring number is larger than 1, the population of bacteria will eventually perish because it cannot grow beyond the petri dish. Of course, nobody wants to calculate the probability that the bacteria population will thrive and survive, because it is zero. But we may be interested in the conditions for a quick vanishing of the population, because perhaps we don't want it to grow. In this case, the average number of descendants would be tricky to calculate, because the first individual has, on average, more descendants than those from later generations, due to saturation.
But there is a way around the problem: if we find out that the first individual has an average number of descendants smaller than or equal to one, then there is nothing to worry about, because descendants in later generations will have even worse conditions for reproducing than the first one. That is the reason why we only need to calculate the reproductive potential of the first one: no individual born in later generations can have a higher reproductive capability once resources to reproduce become scarce. This also applies to contagion: if the average number of infections is larger than one, the disease may become a large epidemic. Nevertheless, the first individual always has more chances to find susceptible individuals to infect, whereas subsequently infected individuals will find previously infected (and perhaps immune) individuals, so it is harder for them to keep the same average number of infections as the first infected. But if the average number of infections of the first individual, the one with the greatest potential to produce infections (when everybody else is susceptible), is smaller than one, then all other subsequently infected individuals, with less potential due to saturation, have an average number of infections less than one and no large epidemic will occur. In mathematical epidemiology the equivalent of the average number of descendants is thus the average number of infections, and the average number of descendants becomes R_0, the basic reproductive number. Now it is clear why the definition of R_0 has been established in terms of the potential of the first individual (Heesterbeek & Dietz, 1996).

The life cycle

There is some care we need to take in the calculation of the average number of individuals produced by an individual. Figure 1 is a typical example of the problem at hand, related to Eratyrus mucronatus, the kissing bug that transmits Chagas' disease. Individuals in the adult stage produce individuals in the egg stage, but the average number of eggs produced is not the quantity we need, because there is no guarantee that an egg will become an adult; thus, the average number of eggs produced by an adult is not a measure of the ability of an individual to replace itself. What we need to calculate is the average number of eggs produced by an egg, or the average number of adults produced by an adult, or the average number of stage-1 individuals produced by a stage-1 individual, and so on. The previous rationale applies to epidemic models, where a particular individual moves along a series of states or compartments (susceptible, infected, isolated, vaccinated, dead, etc.) according to certain rules, and, for obvious reasons, we are interested in how many infected will be produced by an infected; it has been clarified already that we need to analyze the infectious potential of the first infected only. Define a contact as any act between two individuals that would cause the infection of the susceptible if the contact involves an infectious and a susceptible individual. This could be sexual contact, shaking hands, sharing needles, talking, etc. With this in mind, observe that the number of contacts that an individual has per unit of time (day, hour, week, etc.) is a random variable Y with some expectation, and this expectation is the number that matters.
Everything we need to know is the number of contacts that an infectious individual has during the time s/he is infected, because, by the assumptions of the model, Y has the same distribution for every individual, regardless of whether it is the first infected, one in the middle, or the last one: individuals have contacts whether they are infected or not, that is, contacts occur even before the epidemic starts. This means that to obtain R_0, we need to calculate the average number of contacts an individual has per unit of time and then multiply this number by the average time the individual spends infectious.

Methodology

We have highlighted the key to calculating R_0, regardless of how complicated the model is: if a is the rate (number of events per unit of time) at which an individual has contacts, and w is the average time an infected individual is infectious, then R_0 = a w. The problem is not a, since most of the time it is one of the parameters in the model. The problem is w, the average time infectious, since the model may be very complex and it is not clear how to calculate it. For that, we need some theory of Markov chains. Nevertheless, in many cases, R_0 can be calculated by inspection. For instance, consider the classical SIR model in Figure 2. Above each arrow between any two compartments is the rate at which individuals move from one compartment to another. The word rate needs to be clearly defined, and we use it in the sense of a stochastic model: if an individual leaves a box at a rate γ, it means that the time the individual spends in that compartment is exponential with mean 1/γ. In Figure 2, every individual has contacts with another at a rate β; therefore, the average number of contacts of an infectious individual is β/γ, which is R_0. Applying the same rationale, word by word, the R_0 value is the same for the SEIR model of Figure 3, where E stands for an infected, non-infectious ("latent") stage, so nothing changes between SIR and SEIR. But in the modified SEIR model of Figure 4, an infectious individual produces latent individuals and now there is a chance that they may not become infectious. This is a problem similar to the "life cycle" problem mentioned in the previous section. Since what is needed is the average number of infectious produced by an infectious, in the modified SEIR the chance of being removed before becoming infectious must be considered. Thus, the average amount of time that a recently infected individual spends in the infectious compartment I changes from 1/γ to (1 − p)/γ. Since the number of contacts per unit of time is still β, the R_0 for this modified SEIR is β(1 − p)/γ. Other examples, a bit more complicated, cannot be handled so easily by inspection; for those we need the Markov chain machinery developed next.

Markov chains

We will focus on discrete-time, discrete-space Markov chains. An excellent review of these processes at the level required in this paper is found in Ross and in Grinstead & Snell, with a deeper treatment in Iosifescu. Imagine an individual moving across a series of states (see Figure 6) in some random fashion: when it is in state i, it moves to state j with some constant probability p_ij. The time spent in each state is, for now, irrelevant; assume it is one unit per visit. This is a very useful model with applications in a wide range of areas. Observe that, regardless of where the individual starts, the process never ends; the individual keeps moving: 3, 4, 2, 3, 2, 3, 4, 2, 1, 4, 2, 3, ...
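To make this concrete, a few lines of code can simulate such a never-ending walk. The 4-state transition matrix below is invented purely for illustration; it is not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# An invented 4-state transition matrix (rows sum to 1); with no absorbing
# state, the walk never stops.
P = np.array([
    [0.1, 0.4, 0.3, 0.2],
    [0.2, 0.1, 0.4, 0.3],
    [0.3, 0.3, 0.1, 0.3],
    [0.4, 0.2, 0.3, 0.1],
])

state = 2                       # start anywhere, e.g. state 3 (0-based index 2)
path = [state + 1]
for _ in range(12):
    state = int(rng.choice(4, p=P[state]))
    path.append(state + 1)
print(path)                     # e.g. [3, 4, 2, 3, ...]; the walk keeps going
```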
But there is a class of Markov chains we will focus on: those that contain one or more absorbing states, where an absorbing state is a black hole that swallows the individual; the individual cannot leave this state, and thus the process stops. Consider, for instance, the Markov chain of Figure 7. We first need to construct a matrix of transitions between states, where p_ij is the probability that, once in state i, the individual moves to state j. Absorbing states are easily located and correspond to the 1's in the diagonal (states 3 and 5). The rest of the states are called transient states. We first need to rearrange the rows and columns of the matrix P so that the absorbing states are located at the bottom right corner, in the form

$$P = \begin{pmatrix} U & R \\ Z & I \end{pmatrix}$$

where U is a k × k matrix containing the transitions between the k transient states, R is a k × r matrix containing the transitions from the k transient states to the r absorbing states, Z is an r × k matrix of zeros and I is the identity matrix of size r. Among the many useful results we can derive from this decomposition of matrix P, the one we need is the following: the average number of visits to each transient state before absorption is given by the fundamental matrix N = (I − U)^{-1}, whose entry (i, j) is the expected number of visits to transient state j for a process started in transient state i. The proof is in Appendix A2.

From rates to probabilities

A useful result from probability theory, in particular from the properties of exponential distributions, tells us how likely it is that an individual moves from one state to another when we do not know transition probabilities, only exit rates. Figure 8 illustrates the problem: an individual moves from compartment A to B or C at two given rates, but it can only move to one of them. The result is that the probability that it moves to B is the rate towards B divided by the sum of the two rates, and it moves to C with the remaining probability. This result can be extended to any number of compartments.

Average time in a compartment

The last result we need is related to the average time spent in a compartment per visit: it is the inverse of the sum of all the exit rates from that compartment. We are now in possession of the three elements that will be used to calculate R_0. Once a model has been established, we need to follow four steps: (i) convert all rates in the model to probabilities, to build a transition matrix; (ii) compute the fundamental matrix N = (I − U)^{-1} to obtain the expected number of visits to each infectious state; (iii) multiply the expected number of visits to each infectious state by the average time spent per visit, to obtain the expected total time spent infectious; (iv) multiply the result in (iii) by the contact rate, to obtain the total number of contacts of an infectious individual. This is R_0.

Modified SEIR

The first example is the modified SEIR (Figure 4). As previously shown, it does not require the use of Markov chains because it is simple enough to be calculated by inspection, but it is used here as preparation for more complicated examples. Step (i) is the calculation of the transition matrix P; its transient block involves the states S, E and I. From step (ii), the fundamental matrix N shows that the expected number of visits to the infectious state I, starting from state S (we always start in this state), is 1 − p. For step (iii), the average time spent in a single visit to the infectious state is the inverse of the sum of all exit rates from I, which is 1/γ (we need to consider only the individual exit rate; since the total exit rate from the compartment is γI, the result follows). Multiplying this by the average number of visits to the infectious state gives (1 − p)/γ as the average total amount of time that an individual stays infectious. R_0, from step (iv), is just the product of the contact rate β and the previous result: R_0 = β(1 − p)/γ.
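A minimal numerical sketch of the four steps for this modified SEIR is given below. The state ordering, the transition probabilities and the parameter values (contact rate β, exit rate γ from I, latent-removal probability p) are illustrative assumptions, not values from the paper; the point is only that the matrix computation reproduces the closed-form answer β(1 − p)/γ.

```python
import numpy as np

# Transient states, in order: S, E, I.  R and "removed while latent" are
# absorbing states and are left out of U.
p = 0.2        # assumed probability of removal while still latent
beta = 0.5     # assumed contact rate (contacts per day)
gamma = 0.1    # assumed exit rate from I (mean infectious period 1/gamma days)

U = np.array([
    [0.0, 1.0, 0.0],        # S -> E (the individual eventually gets infected)
    [0.0, 0.0, 1.0 - p],    # E -> I with prob 1-p, otherwise absorbed
    [0.0, 0.0, 0.0],        # I -> absorbing (recovered/removed)
])

N = np.linalg.inv(np.eye(3) - U)   # fundamental matrix
visits_to_I = N[0, 2]              # expected visits to I starting from S
time_infectious = visits_to_I / gamma
R0 = beta * time_infectious
print(visits_to_I, time_infectious, R0)   # roughly 0.8, 8.0 and 4.0
```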
Modified SEIR with natural mortality

The second example is another version of the SEIR in which individuals in states E or I may die (Figure 9). We can start directly by writing the matrix U of transitions between the transient states. The reason why the probability that the individual moves from state E to I equals the progression rate out of E divided by the sum of the progression and mortality rates was explained in Figure 8. From the fundamental matrix N we can see that the expected number of visits to the infectious state I, starting in state S, equals that same ratio. Observe that the average time per visit to state I is the inverse of the sum of the recovery and mortality rates; therefore, the average total time spent in state I is the product of these two quantities, and multiplying this by the contact rate yields R_0.

An Ebola transmission model

We will analyze another example where individuals are infectious at different stages and with a different degree of infectiousness. This is an Ebola model in which infectious individuals infect susceptible ones while alive outside hospitals (I), in hospitals (H) and while dead but not buried (F). We can go directly to the transition matrix U, which excludes the absorbing states. The infectious states are I, H and F, and from the first row of the fundamental matrix N we can obtain the expected number of visits to each one of these states. Since the expected time per visit to each of these states is the inverse of the corresponding exit rate, we arrive at the total expected time spent in each one of the infectious states I, H and F. The final expression for R_0 is obtained by multiplying each of those expected times by the respective contact rate and adding them up; each of the terms is the expected number of infections produced by an individual in the corresponding state, that is, its contribution to R_0.

A model for COVID-19 transmission

Another example is a model for COVID-19 transmission. This model was chosen because it is complex enough, with eight states and twelve possible transitions. The model includes a susceptible class (S), an exposed class (E) and several infectious and removed classes; the infectious states are I, H and P. From the first row of the fundamental matrix N we obtain the expected number of visits to each of the infectious stages I, H and P; the average time per visit to each of these stages is the inverse of its total exit rate, and the average total time spent in each infectious stage is the Kronecker (element-by-element) product of the vector of expected visits and the vector of average times per visit. Since each infectious state has its own contact rate, R_0 is obtained by multiplying each total time by the corresponding contact rate and summing the three contributions.

Tuberculosis transmission model

This example is a model for tuberculosis transmission (Blower et al.), and it is interesting because infectious individuals can recover temporarily and become infectious again, alternating between the two states (see Figure 11). In all states there is natural mortality and in some there is additional disease-induced mortality. The states are: S, susceptible; L, latent; T_i, infected and infectious; T_n, infected but non-infectious; R_i, infectious individuals that become recovered and non-infectious for a while; R_n, non-infectious individuals that become recovered for a while. All these states are transient. The expression for the fundamental matrix N is cumbersome, but we only need the entry in the first row and third column, containing the expected number of visits to the only infectious state, T_i, starting in state S. Observe that the average time per visit to state T_i is (c + μ + μ_t)^{-1}. Multiplying this by the average number of visits to T_i yields the expected total time in the infectious state T_i.

Figure 12: Blower's model for tuberculosis transmission.
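The bookkeeping for models with several infectious states, as in the Ebola and COVID-19 examples above, extends mechanically: read the expected visits to each infectious state from the first row of N, convert them to expected times, and take the contact-rate-weighted sum. The helper below sketches this with an invented four-state chain and made-up rates; it is not the published Ebola or COVID-19 model or its parameter values.

```python
import numpy as np

def r0_from_chain(U, times_per_visit, contact_rates, start=0):
    """R0 for an absorbing Markov-chain model with several infectious states.

    U               : transitions among transient states (one row/col per state)
    times_per_visit : expected sojourn time per visit to each transient state
    contact_rates   : contacts per unit time while in each transient state
                      (zero for non-infectious states)
    """
    N = np.linalg.inv(np.eye(U.shape[0]) - U)    # fundamental matrix
    expected_visits = N[start]                   # visits starting from `start`
    expected_time = expected_visits * times_per_visit
    return float(np.sum(contact_rates * expected_time))

# Invented 4-state example (S, I, H, F): an infected individual is infectious
# in the community (I), in hospital (H) and before burial (F).
U = np.array([
    [0.0, 1.0, 0.0, 0.0],   # S -> I
    [0.0, 0.0, 0.5, 0.3],   # I -> H (0.5) or I -> F (0.3); otherwise recovers
    [0.0, 0.0, 0.0, 0.4],   # H -> F (0.4); otherwise recovers (absorbed)
    [0.0, 0.0, 0.0, 0.0],   # F -> buried (absorbed)
])
times = np.array([0.0, 5.0, 4.0, 2.0])   # days per visit (S time is irrelevant)
betas = np.array([0.0, 0.3, 0.05, 0.6])  # contacts per day in each state
print(r0_from_chain(U, times, betas))    # roughly 2.2 with these made-up numbers
```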
Only individuals in T_i are infectious; this state communicates with state R_i, and individuals can alternate between states T_i and R_i until removal. To obtain R_0 we need to multiply the previous value by the contact rate in state T_i, which according to Blower et al. is the ratio of the transmission coefficient to the natural mortality rate.

Discussion

We have presented a procedure whose most complicated step, given today's computational capabilities, is the construction of the transition matrix U. Most epidemic models start with a flux diagram between states with a differential equation for each arrow; in our approach, we replace the differential equations with probabilities. It is also very intuitive. The basic requirement for this approach is that the transition probabilities are constant, so that U does not contain the number of individuals in any stage, only parameters. There are models in which U is not constant and thus this method cannot be applied. For instance, Feng et al. suggested an SEIT model in which individuals in state E can die at a constant rate or can move to state I at a rate cI/N; the probability of moving to state I therefore depends on I and changes over time, so we cannot use a Markov chain approach for this model. This sort of model is not very common, and as was shown in an earlier, less efficient version of the approach presented here (see Hernandez-Suarez, 2002), R_0 in Feng's model is not maximized at the onset of the epidemic. There is a correspondence between this approach and the theory of Matrix Population Models for population growth (see Caswell, 2009), a stochastic version of the deterministic modeling developed by Leslie (1945, 1948). If infections are considered births, and every individual remains in a stage one unit of time per visit, then we can build a fertility matrix F containing the contact rate of state j at position (i, j), where state i is the state from which infected individuals come (usually the susceptible class). Then A = FN is called the next generation operator, since x(t + 1) = Ax(t), where x(t) is the vector containing the number of individuals in each stage at generation t. Therefore, the dominant eigenvalue of A (here the element in the upper left corner of A) is the production of infections of an infected individual during its lifetime. Since we assumed that every visit to a state lasts one unit of time, we need to multiply this by the average time in the infectious stage to obtain our R_0. For instance, in the tuberculosis model of Figure 12, the fertility matrix contains the contact rate of T_i in the entry corresponding to infections of susceptibles. After obtaining N, the fundamental matrix of U for this model, the dominant eigenvalue of A = FN is identical to the contact rate times the expected number of visits to T_i, as expected. Multiplying this by the average time per visit to state T_i, (c + μ + μ_t)^{-1}, yields the same R_0.

Appendix A1

If an individual produces X descendants during its life, with P_i the probability that X = i, then P_e, the probability of extinction of the population that starts with one individual, satisfies

P_e = P(extinction | X = 0) P_0 + P(extinction | X = 1) P_1 + P(extinction | X = 2) P_2 + ...

But by independence of the fates of individuals, P(extinction | X = k) = P_e^k; thus,

P_e = P_0 + P_e P_1 + P_e^2 P_2 + ... = \sum_{i=0}^{\infty} P_e^i P_i = M_X(P_e),

that is, the probability of extinction equates the moment generating function of X, the offspring distribution. Thus, the probability of extinction P_e is a solution of z = M_X(z). This equation always has the solution z = 1, and there may exist a second solution in 0 < z < 1. The probability of extinction is the minimum of all the solutions in [0, 1].
Branching process theory asserts that if the average offspring size is smaller than one, the probability of extinction is one, since on average at generation 1 there will be R_0 individuals, at generation 2, R_0^2, at generation 3, R_0^3, and in general, at generation n, R_0^n, which tends to 0 when n → ∞ if R_0 < 1. To prove that the mean offspring size is the relevant factor in determining whether extinction is certain or not is straightforward: observe that we are looking for the solutions of z such that M_X(z) = z in the interval 0 ≤ z ≤ 1, and observe that in this interval the second derivative of M_X(z) is non-negative, that is, M_X(z) is a convex function. Since M_X(0) = P_0 and M_X(1) = 1, the shape of M_X(z) is one of the dotted lines depicted in Figure 13. Observe also that the derivative dM_X(z)/dz evaluated at z = 1 equals E(X), that is, the slope of M_X(z) at z = 1 is the average offspring size; therefore, for the curve to intersect the line y = z at another point in 0 < z < 1, it is necessary that the slope at z = 1 (the average offspring size) be larger than 1. We are interested in the minimum of the intersection points, that is, the probability of extinction. Since M_X(z) is convex and always passes through (0, P_0) and (1, 1), if the slope of M_X(z) at z = 1 is larger than one, there is always another solution of M_X(z) = z in (0, 1). Since this slope is by definition the expected value of X, it follows that if E(X) > 1, the probability of extinction is less than 1.

Appendix A2

Note first that (I − U)x = 0 implies x = Ux; iterating this, x = U^n x, and since U^n → 0, we get x = 0, so (I − U) is invertible. The probability that the process is in each state after n steps is given by U^n; thus, the expected number of visits to each state during the first n transitions is

I + U + U^2 + U^3 + ... + U^n.

Observe that

(I − U)(I + U + U^2 + U^3 + ... + U^n) = I − U^{n+1};

multiplying both sides by (I − U)^{-1} yields

I + U + U^2 + ... + U^n = (I − U)^{-1}(I − U^{n+1}),

and taking the limit when n → ∞ yields

N = I + U + U^2 + U^3 + ... = (I − U)^{-1}.
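As a quick numerical sanity check of the two appendix results (only a sketch, with an arbitrary offspring distribution and an arbitrary transient block U), one can verify that the partial sums I + U + ... + U^n converge to (I − U)^{-1}, and that iterating z ← M_X(z) from 0 converges to the smallest fixed point, the extinction probability.

```python
import numpy as np

# Appendix A2: partial sums of U^k converge to the fundamental matrix.
U = np.array([[0.2, 0.5],
              [0.1, 0.3]])            # arbitrary transient block (spectral radius < 1)
partial = np.zeros_like(U)
term = np.eye(2)
for _ in range(200):
    partial += term
    term = term @ U
print(np.allclose(partial, np.linalg.inv(np.eye(2) - U)))   # True

# Appendix A1: extinction probability as the smallest fixed point of the
# generating function M_X(z) = sum_i P_i z^i, with an arbitrary offspring
# distribution P = (P_0, P_1, P_2).
P = np.array([0.2, 0.3, 0.5])          # mean offspring = 0.3 + 2*0.5 = 1.3 > 1
M = lambda z: np.polyval(P[::-1], z)   # 0.2 + 0.3 z + 0.5 z^2
z = 0.0
for _ in range(500):
    z = M(z)
print(z)   # converges to the smallest fixed point, here 0.4
```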
For James Howard Kunstler, an author and journalist known for his criticism of American urban and suburban development, the city of Boston is not set to an appropriate urban scale. In Kunstler's ideal urban landscape, buildings would be no more than five to six stories high, in order to meet the appropriate demands of urban scale and distribution. In his talk on Wednesday evening as part of the Lowell Humanities Series, Kunstler conveyed the harsh reality of America's economy and environment, and urged the American public to make significant changes to their lifestyle. "The outstanding feature of the current situation is we're doing a poor job of constructing a coherent narrative of what's happening to us and therefore constructing a plan for what we're going to do about the predicaments and quandaries that we face," Kunstler said. Kunstler attributed this to the problematic nature of how our nation's energy, finance, and urban design sectors interact with each other and, in turn, impact the environment. Kunstler, a successful novelist and journalist, is known for his op-ed columns on current environmental and economic issues, and has been published in The Atlantic Monthly, Slate.com, Rolling Stone, and The New York Times. Among his canon of published work, he has written a trilogy of books in response to the current urban planning dilemma: The Geography of Nowhere, Home from Nowhere, and The City in Mind. His bestselling book, The Long Emergency, provides an in-depth discussion of the impact of the current oil crisis and its ramifications for American society. "The basic problem is this…we're stuck between crushing economies and crushing oil companies' ability to get the oil," Kunstler said. Kunstler stressed that although gas and oil prices have recently decreased, it is still difficult for the public to understand that the energy industry is in peril. People believe that as long as there is money, the oil supply will be infinite, and that energy and technology are interchangeable, but this is simply not the case, Kunstler said. As oil becomes more expensive and hazardous to pursue, prices go down. This ultimately harms the oil companies, which are taking on a tremendous amount of debt that cannot be paid back, as well as the economy, whose growth is consequently stunted. Kunstler argued that the American public clamors for superficial solutions to these issues, so that they can continue living and consuming in the same way they are now, without making necessary changes to their behavior. "We want to keep driving to Walmart forever?" Kunstler said. As the global economy shifted in the 1970s, moving from an industrially oriented economy to an increasingly financially innovative one, the public must reexamine their lifestyles, Kunstler said. It is no longer feasible to push for expansion and globalization, which is why it is necessary to consider re-localizing and downscaling the activities that comprise the economy. Kunstler noted that this contraction is key to maintaining a more sustainable economy. "Today's wealth is going into a black hole and will never be seen again," Kunstler said. Kunstler argued that the money necessary to build the next economy is no longer there, as it's being wasted on "techno-narcissistic fantasies," citing the nation's investment in travel to Mars or self-driving cars.
This gross misallocation of resources is hurting the nation’s impoverished middle class, which will no longer be credit-worthy in the future because America’s infrastructure for daily life cannot be sustained, Kunstler said.

Beyond financialization, Kunstler criticized the American economy’s shift toward ever more elaborate suburban development. An outspoken critic of suburbia, Kunstler believes that, in coming years, urban life will naturally relocate to places that are scaled down to meet the appropriate resource and capital realities. Much like the economy, American cities need to contract, Kunstler argued, as they are too large and complex to be properly renovated. He noted the importance of urban design in reviving the country’s landscape, citing abandoned malls, dreary marketplaces, and obsolete skyscrapers as unnecessary products of over-development.

“If we have enough places that aren’t worth caring about, we have a country that’s not worth living in,” Kunstler said.

For Kunstler, contraction is a viable option because it will help America achieve a more appropriate urban scale, a better distribution of wealth, and a better use of public space. With the current energy, financial, and urbanization crises at hand, he said it is important to recognize the long emergency that will continue to affect us in the near future. Kunstler stressed that if we, as a society, continue to put off the necessary changes that lie ahead, we won’t be able to make them before it’s too late.

“We can’t afford to be a nation of clowns anymore,” Kunstler said.
// CreateSendTaokeInfoRequest creates a request to invoke SendTaokeInfo API func CreateSendTaokeInfoRequest() (request *SendTaokeInfoRequest) { request = &SendTaokeInfoRequest{ RpcRequest: &requests.RpcRequest{}, } request.InitWithApiInfo("UniMkt", "2018-12-12", "SendTaokeInfo", "uniMkt", "openAPI") request.Method = requests.POST return }
# file for extracting raw features with atariari
import numpy as np


# get raw features for the current frame and order them by recency:
# the current frame's entity positions come first, followed by the
# positions from the previous call (or None placeholders on the first call)
def get_raw_features(env_info, last_raw_features=None, gametype=0):
    # extract raw features from the AtariARI label dictionary
    labels = env_info["labels"]

    # ball game (player, enemy and ball entities)
    if gametype == 0:
        player = [labels["player_x"].astype(np.int16), labels["player_y"].astype(np.int16)]
        enemy = [labels["enemy_x"].astype(np.int16), labels["enemy_y"].astype(np.int16)]
        ball = [labels["ball_x"].astype(np.int16), labels["ball_y"].astype(np.int16)]
        # set new raw_features: current entities in the first half,
        # previous entities shifted into the second half
        raw_features = last_raw_features
        if raw_features is None:
            raw_features = [player, enemy, ball, None, None, None]
        else:
            # dtype=object keeps the mixed structure (sublists and None) roll-able
            raw_features = np.roll(np.array(raw_features, dtype=object), 3)
            raw_features[0] = player
            raw_features[1] = enemy
            raw_features[2] = ball
        return raw_features

    ###########################################
    # demon attack game
    elif gametype == 1:
        player = [labels["player_x"].astype(np.int16), np.int16(3)]  # constant y = 3
        enemy1 = [labels["enemy_x1"].astype(np.int16), labels["enemy_y1"].astype(np.int16)]
        enemy2 = [labels["enemy_x2"].astype(np.int16), labels["enemy_y2"].astype(np.int16)]
        enemy3 = [labels["enemy_x3"].astype(np.int16), labels["enemy_y3"].astype(np.int16)]
        # missile = [labels["player_x"].astype(np.int16),
        #            labels["missile_y"].astype(np.int16)]

        # set new raw_features
        raw_features = last_raw_features
        if raw_features is None:
            raw_features = [player, enemy1, enemy2, enemy3, None, None, None, None]
        else:
            raw_features = np.roll(np.array(raw_features, dtype=object), 4)
            raw_features[0] = player
            raw_features[1] = enemy1
            raw_features[2] = enemy2
            raw_features[3] = enemy3
        return raw_features

    ###########################################
    # boxing game
    elif gametype == 2:
        player = [labels["player_x"].astype(np.int16), labels["player_y"].astype(np.int16)]
        enemy = [labels["enemy_x"].astype(np.int16), labels["enemy_y"].astype(np.int16)]

        # set new raw_features
        raw_features = last_raw_features
        if raw_features is None:
            raw_features = [player, enemy, None, None]
        else:
            raw_features = np.roll(np.array(raw_features, dtype=object), 2)
            raw_features[0] = player
            raw_features[1] = enemy
        return raw_features
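A usage sketch follows. The label values and dtypes below are invented for illustration; in practice env_info comes from an AtariARI-wrapped environment's info dict, and only the keys used by gametype=0 are shown.

# Hypothetical usage sketch for get_raw_features above; the label values are
# made up, real ones come from an AtariARI-wrapped gym environment.
import numpy as np

env_info = {"labels": {"player_x": np.uint8(76), "player_y": np.uint8(110),
                       "enemy_x": np.uint8(40),  "enemy_y": np.uint8(60),
                       "ball_x": np.uint8(80),   "ball_y": np.uint8(90)}}

feats = get_raw_features(env_info, last_raw_features=None, gametype=0)
# first call: current positions followed by None placeholders for the previous frame
feats = get_raw_features(env_info, last_raw_features=feats, gametype=0)
# second call: current positions in slots 0-2, previous frame's positions in slots 3-5
print(feats)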
/* * Copyright 2002-2008 the original author or authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.springframework.beans.factory.config; import java.util.HashMap; import java.util.Map; import junit.framework.TestCase; import org.easymock.MockControl; import org.springframework.beans.factory.support.DefaultListableBeanFactory; import org.springframework.mock.easymock.AbstractScalarMockTemplate; import org.springframework.test.AssertThrows; /** * @author <NAME> * @author <NAME> */ public class CustomScopeConfigurerTests extends TestCase { private static final String FOO_SCOPE = "fooScope"; public void testWithNoScopes() throws Exception { new ConfigurableListableBeanFactoryMockTemplate() { protected void doTest(ConfigurableListableBeanFactory factory) { CustomScopeConfigurer figurer = new CustomScopeConfigurer(); figurer.postProcessBeanFactory(factory); } }.test(); } public void testSunnyDayWithBonaFideScopeInstance() throws Exception { MockControl mockScope = MockControl.createControl(Scope.class); final Scope scope = (Scope) mockScope.getMock(); mockScope.replay(); new ConfigurableListableBeanFactoryMockTemplate() { public void setupExpectations(MockControl mockControl, ConfigurableListableBeanFactory factory) { factory.registerScope(FOO_SCOPE, scope); } protected void doTest(ConfigurableListableBeanFactory factory) { Map scopes = new HashMap(); scopes.put(FOO_SCOPE, scope); CustomScopeConfigurer figurer = new CustomScopeConfigurer(); figurer.setScopes(scopes); figurer.postProcessBeanFactory(factory); } }.test(); mockScope.verify(); } public void testSunnyDayWithBonaFideScopeClass() throws Exception { DefaultListableBeanFactory factory = new DefaultListableBeanFactory(); Map scopes = new HashMap(); scopes.put(FOO_SCOPE, NoOpScope.class); CustomScopeConfigurer figurer = new CustomScopeConfigurer(); figurer.setScopes(scopes); figurer.postProcessBeanFactory(factory); assertTrue(factory.getRegisteredScope(FOO_SCOPE) instanceof NoOpScope); } public void testSunnyDayWithBonaFideScopeClassname() throws Exception { DefaultListableBeanFactory factory = new DefaultListableBeanFactory(); Map scopes = new HashMap(); scopes.put(FOO_SCOPE, NoOpScope.class.getName()); CustomScopeConfigurer figurer = new CustomScopeConfigurer(); figurer.setScopes(scopes); figurer.postProcessBeanFactory(factory); assertTrue(factory.getRegisteredScope(FOO_SCOPE) instanceof NoOpScope); } public void testWhereScopeMapHasNullScopeValueInEntrySet() throws Exception { new ConfigurableListableBeanFactoryMockTemplate() { protected void doTest(final ConfigurableListableBeanFactory factory) { new AssertThrows(IllegalArgumentException.class) { public void test() throws Exception { Map scopes = new HashMap(); scopes.put(FOO_SCOPE, null); CustomScopeConfigurer figurer = new CustomScopeConfigurer(); figurer.setScopes(scopes); figurer.postProcessBeanFactory(factory); } }.runTest(); } }.test(); } public void testWhereScopeMapHasNonScopeInstanceInEntrySet() throws Exception { new ConfigurableListableBeanFactoryMockTemplate() { 
protected void doTest(final ConfigurableListableBeanFactory factory) { new AssertThrows(IllegalArgumentException.class) { public void test() throws Exception { Map scopes = new HashMap(); scopes.put(FOO_SCOPE, this); // <-- not a valid value... CustomScopeConfigurer figurer = new CustomScopeConfigurer(); figurer.setScopes(scopes); figurer.postProcessBeanFactory(factory); } }.runTest(); } }.test(); } public void testWhereScopeMapHasNonStringTypedScopeNameInKeySet() throws Exception { new ConfigurableListableBeanFactoryMockTemplate() { protected void doTest(final ConfigurableListableBeanFactory factory) { new AssertThrows(IllegalArgumentException.class) { public void test() throws Exception { Map scopes = new HashMap(); scopes.put(this, new NoOpScope()); // <-- not a valid value (the key)... CustomScopeConfigurer figurer = new CustomScopeConfigurer(); figurer.setScopes(scopes); figurer.postProcessBeanFactory(factory); } }.runTest(); } }.test(); } private abstract class ConfigurableListableBeanFactoryMockTemplate extends AbstractScalarMockTemplate { public ConfigurableListableBeanFactoryMockTemplate() { super(ConfigurableListableBeanFactory.class); } public final void setupExpectations(MockControl mockControl, Object mockObject) throws Exception { setupExpectations(mockControl, (ConfigurableListableBeanFactory) mockObject); } public final void doTest(Object mockObject) throws Exception { doTest((ConfigurableListableBeanFactory) mockObject); } public void setupExpectations(MockControl mockControl, ConfigurableListableBeanFactory factory) throws Exception { } protected abstract void doTest(ConfigurableListableBeanFactory factory) throws Exception; } }
Al-Joanee Company: support department cost allocations with matrices, to improve decision making

The aim of this paper is to resolve the complexity of the cost allocation problem at Al-Joanee Company, where the support centres provide reciprocal services to one another as well as to the production (operation) centres. The study used Excel spreadsheets to apply matrices in allocating the indirect costs. The company has seven centres, of which two are production centres; the rest are called supporting centres, and they work to facilitate the production process. Many methods have been proposed in cost accounting, namely the direct method, the step-down method, the individuality method and the reciprocal method. However, not all of these methods are equally applicable when allocating costs to the production centres; for instance, the direct method assumes there are no inter-services among the support centres. The proposed method, though more complex to apply, can solve the problem of reciprocal services among the support centres. The study proposes a systematic approach to applying the reciprocal method at Al-Joanee Company.
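The reciprocal method reduces to solving a small linear system, which is what the matrix approach automates. The sketch below is a minimal illustration with two support centres and two production centres; the cost figures and allocation percentages are invented for the example and are not the company's data.

# Illustrative sketch of the reciprocal method with NumPy; all figures are
# hypothetical.  Each support centre's "full" cost is its direct cost plus the
# share of the other support centres' full costs it consumes, so the full
# costs solve the linear system (I - A) x = d.
import numpy as np

direct = np.array([50_000.0, 30_000.0])   # direct costs of support centres S1, S2
# A[i, j] = fraction of support centre j's services consumed by support centre i
A = np.array([[0.0, 0.2],
              [0.1, 0.0]])
full = np.linalg.solve(np.eye(2) - A, direct)   # reciprocated support-centre costs

# fractions of each support centre's services consumed by production centres P1, P2
to_prod = np.array([[0.6, 0.3],    # S1 -> P1, P2
                    [0.5, 0.3]])   # S2 -> P1, P2
allocated = to_prod.T @ full       # indirect cost allocated to P1 and P2
print(full, allocated)             # allocated totals equal the 80,000 of direct support cost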
Novel method of intraoperative liver tumour localisation with indocyanine green and near-infrared imaging.

INTRODUCTION
Fluorescence imaging (FI) with indocyanine green (ICG) is increasingly implemented as an intraoperative navigation tool in hepatobiliary surgery to identify hepatic tumours. This is useful in minimally invasive hepatectomy, where gross inspection and palpation are limited. This study aimed to evaluate the feasibility, safety and optimal timing of using ICG for tumour localisation in patients undergoing hepatic resection.

METHODS
From 2015 to 2018, a prospective multicentre study was conducted to evaluate the feasibility and safety of ICG in tumour localisation following preoperative administration of ICG either on Day 0-3 or Day 4-7.

RESULTS
Among 32 patients, a total of 46 lesions were resected: 23 were hepatocellular carcinomas (HCCs), 12 were colorectal liver metastases (CRLM) and 11 were benign lesions. ICG FI identified 38 (82.6%) lesions prior to resection. The majority of HCCs were homogeneous fluorescing lesions (56.6%), while CRLM were homogeneous (41.7%) or rim-enhancing (33.3%). The majority (75.0%) of the lesions not detected by ICG FI were in cirrhotic livers. Most (84.1%) of the ICG-positive lesions detected were < 1 cm deep, and half of the lesions ≥ 1 cm in depth were not detected. In cirrhotic patients with malignant lesions, those given ICG on preoperative Day 0-3 and Day 4-7 had detection rates of 66.7% and 91.7%, respectively. There were no adverse events.

CONCLUSION
ICG FI is a safe and feasible method to assist tumour localisation in liver surgery. Different tumours appear to display characteristic fluorescent patterns. There may be no disadvantage to administering ICG closer to the operative date if it is more convenient, except in patients with liver cirrhosis.
Central corneal thickness (CCT) affects intraocular pressure (IOP) measurements and is an independent risk factor for the development of glaucoma. IOP measurements from all common tonometers, such as the Goldmann applanation tonometer, the non-contact tonometer and the rebound tonometer, are affected by CCT. Nomograms to correct IOP measurements according to CCT have been established. These nomograms reduce the measurement error caused by CCT in groups of patients. However, one has to be aware that, in an individual patient, correcting the IOP measurement can even increase its deviation from the actual IOP. The effect of CCT on dynamic contour tonometry and on corneal-compensated IOP (IOPcc) measured by the Ocular Response Analyzer (ORA) is negligible. CCT is an important parameter in glaucoma management and needs to be considered when interpreting IOP measurements. This can be done by using nomograms or by incorporating CCT into the calculation of the individual target pressure.
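For illustration only, a nomogram-style correction is just a linear adjustment of the reading by the deviation of CCT from a reference thickness. The reference value and slope below are placeholder assumptions, not clinically validated figures; published nomograms differ, which is part of why individual corrections can misfire.

# Hedged sketch of a CCT-based IOP correction of the kind such nomograms apply.
# The reference thickness and slope are placeholder assumptions and this is not
# a clinically validated formula.
def corrected_iop(measured_iop_mmhg, cct_um,
                  reference_cct_um=545.0, mmhg_per_10um=0.5):
    """Subtract an assumed linear CCT effect from a Goldmann-type reading."""
    return measured_iop_mmhg - (cct_um - reference_cct_um) / 10.0 * mmhg_per_10um

print(corrected_iop(21.0, 600.0))  # thick cornea: corrected value below the reading
print(corrected_iop(21.0, 500.0))  # thin cornea: corrected value above the reading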
We describe the first, to our knowledge, case of a feminizing testicle associated with acute polyhydramnios observed at 21 weeks and 3 days of gestation. There was no fetal malformation or maternal disease that would explain the polyhydramnios. Prenatal diagnosis of a feminizing testicle can now be made, but the condition is very difficult to suspect without a similar family history or a suggestive ultrasonographic sign. Many fetal malformations have been directly linked to the different causes of feminizing testicle, but for other findings such as acute polyhydramnios, the pathogenesis remains unknown.
import java.io.BufferedReader; import java.io.InputStreamReader; import java.util.HashSet; import java.util.Scanner; public class B { public static void main(String[] args) throws Exception { solve(); } public static HashSet<Integer>[] nodes; public static long a = 0, b = 0; // **SOLUTION** public static void solve() throws Exception { int n = sc.nextInt(); if (n == 1) { System.out.println(0); return; } nodes = new HashSet[n + 1]; for (int i = 0; i < nodes.length; i++) { nodes[i] = new HashSet<>(); } for (int i = 0; i < n - 1; i++) { int u = sc.nextInt(); int v = sc.nextInt(); nodes[u].add(v); nodes[v].add(u); } bipartize(1, 0, new HashSet<>()); b = n - a; long total = (a * 1L) * (b * 1L); System.out.println(total - (n - 1)); } public static void bipartize(int c, int level, HashSet<Integer> hs) { if (hs.contains(c)) return; hs.add(c); if (level % 2 == 0) a++; for (int n : nodes[c]) { bipartize(n, level + 1, hs); } } public static InputStreamReader r = new InputStreamReader(System.in); public static BufferedReader br = new BufferedReader(r); public static Scanner sc = new Scanner(System.in); }
Because hangers designed for children's garments are normally used for a variety of garments, and also because many of the garments consist of a set including two or more coordinated garments which should be displayed together, the hangers preferably should have the capacity to hang and properly display a variety of garments of different designs. It is also necessary that the cost of the hanger be kept low and that it be capable of economical shipment. When fully extended to accommodate the various types of garments with which they are designed for use, these hangers are bulky and expensive to ship and store. Because the clothing to be displayed is smaller and lighter in weight than adult clothing, the hangers can be designed with a thinner and less sturdy structure, which is important in keeping material costs down. However, their overall size when extended greatly increases molding costs, as well as shipping and storage costs. This invention provides a hanger construction which has the capability of functional variety and which can also be folded into a reasonably compact and convenient shape for shipment and storage.
A comparative study of image feature detection and description methods for robot vision

Detection and description of local features in images is an essential task in robot vision. This task makes it possible to identify and uniquely specify stable and invariant regions in an observed scene. Many successful detectors and descriptors have been proposed. However, the proper combination of a detector and a descriptor is not trivial, because there is a trade-off among different performance criteria. This work presents a comparative study of successful image feature detection and description methods in the context of the simultaneous localization and mapping problem. The considered methods are exhaustively evaluated in terms of accuracy, robustness, and processing time.
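As a flavour of the processing-time side of such an evaluation, the sketch below times two OpenCV detector/descriptor pairs on a single image. ORB and AKAZE are used only as examples and are not necessarily the methods compared in the study; the image path is a placeholder.

# Minimal sketch of a detector/descriptor timing comparison with OpenCV
# (requires opencv-python); "frame.png" is a placeholder image path.
import time
import cv2

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

for name, method in [("ORB", cv2.ORB_create()), ("AKAZE", cv2.AKAZE_create())]:
    t0 = time.perf_counter()
    keypoints, descriptors = method.detectAndCompute(img, None)
    dt = time.perf_counter() - t0
    print(f"{name}: {len(keypoints)} keypoints in {dt * 1e3:.1f} ms")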
Good Start for spinal cord injury management: an occupational therapy initiative of Bangladesh

Abstract
This paper aims to share the experience of a recent occupational therapy practice initiative for assisting in a successful transition from the rehabilitation centre to the home setting. Considering the scarce resources and limited funding available, this project has shown how the occupational therapist (OT) can facilitate a Good Start for the life of a person with spinal cord injury at home. A smooth transition to home can be facilitated by traveling along with the patient and his/her family after discharge, by sharing the transport cost, and by physically observing the patient's home and giving technical support to family members to modify the home environment. This project is a good model for OTs working within developing countries like Bangladesh.
The Pythons come to the Ziegfeld in October. Holy Grail fans, your quest is at an end! The entire Monty Python troupe is reuniting on October 15th at the Ziegfeld Theater for a special event presented by the IFC and BAFTA (the British Academy of Film and Television Arts). Yep, you heard right: The whole troupe, John Cleese, Terry Gilliam, Eric Idle, Terry Jones, Michael Palin and even Graham Chapman, who died in 1989. (Chapman has joined the Python crew posthumously before, appearing in the form of an urn said to contain his ashes, but for this landmark event, we’re gunning for a hologram!). The event will include a special screening of the new IFC original documentary "Monty Python: Almost the Truth (The Lawyer's Cut)" – crammed full of commentary from a veritable who’s who of funnymen, from Russell Brand to Eddie Izzard to Jimmy Fallon -- plus a Q&A with the cast and a lifetime achievement award-type thingy from BAFTA. According to BrooklynVegan, “ticket info for this event is scarce but word on the street is that SOME tickets will be available at www.ifc.com in mid-September.” And that's all we know. Frustrating, innit?
Experiences of intimate partner violence as perpetrated among Japanese university freshmen

Abstract
Objectives: To compare experiences regarding the perpetration of intimate partner violence among Japanese university freshmen between 2008 and 2014.
Study design: Two-stage cross-sectional study.
Methods: A self-administered questionnaire survey was completed in both 2008 and 2014 by students at the same university.
Results: From 2008 to 2014 there were significant reductions in episodes of verbal harassment (adjusted odds ratio [AOR]: 0.601, 95% confidence interval [95%CI]: 0.382, 0.945, P=0.027), of saying "you don't give me priority" to his/her partner when the partner did not see him/her (AOR: 0.450, 95%CI: 0.207, 0.979, P=0.044), and of irritation when a boy/girlfriend disobeyed his/her partner (AOR: 0.385, 95%CI: 0.161, 0.921, P=0.032). The perpetration scores were reduced from 1.87±0.16 in 2008 to 1.41±0.117 in 2014 (t test, P=0.016). The perpetration scores in 2014 were significantly lower than those in 2008, regardless of gender, age, university faculty, and participation in lectures/seminars about domestic violence (DV) and/or dating DV (P=0.030).
Conclusions: Findings showed reductions in some types of harassment, as well as in perpetration scores, between 2008 and 2014 among Japanese university freshmen at the same university. However, further study is required to determine the factors related to the perpetration of harassment.

Introduction
Intimate partner violence (IPV) is a serious worldwide health problem 1,2). Although the prevalence of IPV as reported by Japanese women is low in comparison to other countries that were included in a previous multinational study performed by the World Health Organization (WHO) 1,3), 19.1% of female respondents and 10.6% of male respondents in a 2014 study conducted by the Gender Equality Bureau Cabinet Office in Japan reported that they had experienced some type of violence, including physical, psychological, economic, and/or sexual violence 4). In addition, these prevalence rates had increased from 13.7% of females and 5.8% of males in the preceding survey conducted in 2011 5). A previous study indicated that most Japanese university students did not recognize verbal harassment, control by an intimate partner, or unprotected sexual intercourse as forms of violence 6). Thus, these students may not consider the seriousness of the impact of violence in the present or future, although several studies have shown that violence has a negative impact on health over long periods 2,. The Third Gender Equality Participation Basic Plan ratified by the Cabinet Office in 2010 emphasized education as a means of preventing violence beginning in the early stages of childhood and adolescence in Japan. In the Nagasaki prefecture, where the 2008 study regarding IPV among university students was conducted 8), the sale and gifting of contraceptive supplies to people under 18 years of age were controlled as part of the Prefectural Juvenile Protection Regulations, which were enacted in 1978. However, these regulations were abolished in 2011. In 2001, the Domestic Violence Prevention Act was established in Japan. The act covered partner violence that occurred in legal marriages, but did not recognize IPV, although it did address post-divorce partner violence. The act was amended in 2013 to cover violence that occurred between unconventionally married couples, such as those in common law marriages, but it still does not cover IPV.
This includes dating violence that occurs between couples that are not engaged in communal life similar to that in marital relations. A number of reports have identified IPV as a serious health challenge for adolescents and youth 10). In addition, the onset of dating violence perpetration has been shown to occur during the teenage years 11). Unfortunately, limited evidence is available regarding approaches and programs that are effective in IPV prevention, including the complex factors that influence both perpetration and victimization 12). In addition, given the possibility that the incidence rate of IPV has been underestimated and underreported in Japan 1,3,8), a periodic assessment of the actual situation regarding IPV in Japan, especially among young people, will be important for identifying measures for prevention and preparedness. This study was performed to compare experiences of IPV perpetration among Japanese university freshmen of the same university between 2008 and 2014 8). There were several changes in the laws and regulations regarding sexual health and violence prevention between 2008 and 2014 in both Japan and this study area. However, the changes did not sufficiently correspond to concerns related to violence prevention and the future impact of violence on health conditions. This study was performed to assess differences in the attitudes and behaviors of university students as related to the prevention of IPV, and was designed to especially address experiences of perpetration.

Methods
This was a two-stage cross-sectional study. A self-administered questionnaire survey was performed in 2014 among freshmen from both medical and non-medical health faculties at a university in the capital city of a Japanese prefecture. The same questionnaire was administered at this university in 2008 6). Some participants attended a lecture related to DV and IPV as part of liberal studies education for students of all faculties at the university, and/or similar lectures outside the university. Some lectures were attended prior to entering the university. After receiving oral and written explanations regarding the study objectives, procedures, management of data collection, confidentiality, and ethical considerations about participation or refusal to participate in this study, the participants provided informed consent prior to completing the questionnaire. The completed questionnaires were then deposited in a locked opaque box. Data collection was performed in the first academic year of both study years. The questionnaire elicited responses regarding demographic characteristics, being the victim/perpetrator of harassment from and/or toward a boy/girlfriend, and the recognition of harassment as dating violence. If a participant recognized an episode listed in the questionnaire as relating to their own experience of harassment toward and/or from their boy/girlfriend, they answered "yes". These episodes were developed based on a booklet titled "Do you know about dating violence?" that was published by the non-profit organization DV Prevention Nagasaki, and included physical, psychological, verbal, sexual, and economic forms of harassment. The study methods, including the questionnaire, were previously described in detail 6). The present comparison between 2008 and 2014 focused on situations in which dating violence was perpetrated, as captured by the 24 episodes of harassment.
Either Fisher's exact test or the chi-square test was used to analyze differences in the demographic information of study participants, as well as their experiences of violence perpetration while dating a partner, between 2008 and 2014. A logistic regression analysis was performed to assess differences in experiences of violence while dating a partner, regardless of gender, age, or university faculty. The perpetration score was calculated for each participant by totaling the number of "yes" responses to prompts describing 24 episodes of violence. Scores from 2008 and 2014 were then compared using the t test and a linear regression analysis. Statistical analyses were performed using IBM SPSS ver. 22. In all analyses, P < 0.05 was taken to indicate statistical significance.

Ethical approval
The 2014 study was approved by the Ethical Committees of Nagasaki University Graduate School of Biomedical Sciences (authorization number: 13102461). The same committees also approved utilization of the data collected in 2008 (authorization number: 14031389).

Results
A total of 274 responses from the 2008 data collection and 371 responses from the 2014 data collection were analyzed. This included data regarding gender, age, university faculty, whether participants had relationships with boy/girlfriends, and whether they had participated in lectures/seminars about domestic violence (DV) and/or dating violence. The study participants in 2008 and 2014 represented 16.7% and 22.4% of the university's freshmen, respectively. Responses regarding demographic information and relationships with the opposite gender are shown, with statistical analysis, in Table 1, which also provides a comparison of all study participants with those that had a boy/girlfriend. The 2014 study participants were predominantly male (chi-square test, P < 0.001) and younger (chi-square test, P < 0.001), with most belonging to non-medical health faculties (chi-square test, P = 0.001). These participants were more likely to have participated in lectures/seminars about DV and/or dating violence. However, there was no significant difference in the proportion of participants that had experienced a relationship with a boy/girlfriend between 2008 and 2014. Table 2 compares experiences of 24 harassment perpetration episodes between 2008 and 2014 study participants using either the chi-square test or Fisher's exact test, along with a logistic regression analysis adjusting for gender, age, and university faculty. Cronbach's alpha for the 24 episodes had values of 0.719 and 0.658 in 2008 and 2014, respectively. The most common types of harassment in both study years were verbal harassment, checking and controlling a boy/girlfriend's activities, and ignoring a boy/girlfriend. Several episodes were significantly reduced between 2008 and 2014 in both the bivariate and logistic regression analyses (i.e., verbal harassment (adjusted odds ratio [AOR]: 0.601, 95% confidence interval [95%CI]: 0.382, 0.945, P = 0.027), saying "you don't give me priority" when the boy/girlfriend does not see her/him (AOR: 0.450, 95%CI: 0.207, 0.979, P = 0.044), and irritation when the boy/girlfriend disobeys her/him (AOR: 0.385, 95%CI: 0.161, 0.921, P = 0.032)). "Blaming the boy/girlfriend if she/he gets angry because he/she is at fault" was also significantly reduced between 2008 and 2014 in the bivariate analysis (chi-square test, P = 0.021), but this difference was not detected in the logistic regression analysis.
The means ± standard deviation (SD) of perpetration scores were 1.87 ± 0.16 and 1.41 ± 0.12 in 2008 and 2014, respectively (t test, P = 0.016). Regardless of gender, age, university faculty, and participation in lectures/seminars about DV and/or dating DV, the perpetration score in 2014 was significantly lower (P = 0.030) according to model A in the linear regression analysis (Table 3). Participation in lectures/seminars about DV and/or dating DV showed no contribution to the perpetration score, because model B, without participation in lectures/seminars about DV and/or dating DV as independent variables, still showed a significantly lower perpetration score in 2014 (P = 0.025). Only two episodes (i.e., verbal harassment (AOR: 1.595, 95%CI: 1.039, 2.451, P = 0.033) and ignoring the boy/girlfriend (AOR: 1.743, 95%CI: 1.086, 2.797, P = 0.021)) were more common among female than male participants, regardless of study year and age. There were no significant differences in other episodes between male and female participants, nor were there any significant differences between non-medical health and medical health faculty members, regardless of study year and gender. There was also no significant difference in perpetration scores between male and female study participants (1.61 ± 2.10 and 1.60 ± 1.63, respectively; t test, P < 0.960). The perpetration score among male study participants was significantly different between 2008 and 2014 (2.01 ± 2.41 and 1.39 ± 1.88, respectively; t test, P < 0.033).

Tables 2 and 3 show the results of statistical analyses among study participants that had relationships with boy/girlfriends. Eight (8.0%) of the 100 participants in 2008 and 19 (13.4%) of the 142 participants in 2014 that had experienced a relationship with a boy/girlfriend reported having perpetrated some type of IPV. The most common incidents were verbal harassment, checking and controlling a boy/girlfriend's activities, and ignoring a boy/girlfriend. However, one participant reported beating or kicking their partner, and another reported hitting something or shouting in front of their partner.

Discussion
Results of the present study indicated reductions in some types of harassment and in IPV perpetration scores between 2008 and 2014 among Japanese university freshmen. The results may be interpreted in several ways. First, there are some concerns related to the attitudes and behaviors of participants regarding IPV. For example, participants may not have reported real incidences of IPV perpetration because of an increased awareness, gained through both formal and informal education, that harassment is unacceptable. As a result, this study demonstrated no association between participation in IPV educational programs and perpetration scores. Previous studies also indicated that there was no association between IPV knowledge and the perpetration of violence 6,13). Educational programs could contribute to the promotion of awareness regarding harassment, but people may not report their own inappropriate behaviors (e.g., harassment) due to the recognition that such behavior is socially unacceptable. On the other hand, the perception of perpetration may be decreased among youths regardless of their participation in educational programs. Therefore, the participants in this study may not have accurately reported their experiences despite having perpetrated IPV. Another possible explanation is that young Japanese people, such as university students, may not have engaged in intimate relationships with members of the opposite gender.
It is possible that such individuals had harassed non-intimate partners (e.g., friends, classmates, and/or others) before forming intimate relationships. These situations were not detected because data collection was performed in the first academic year, and most study participants had been high school students until a few months prior to data collection. Therefore, there is a risk of underreporting. This is reflected in the Gender Equality Bureau Cabinet Office's reports about increases in some types of violence in the Japanese population 4,5). Several studies have also indicated discrepancies in the reporting and underreporting of perpetration and being a victim of violence and abuse. There was no significant difference in perpetration scores between male and female participants in this study, and perpetration scores were significantly reduced among male study participants between 2008 and 2014. It is necessary to consider gender and sexual orientation in IPV studies; these were not examined in this study, as only heterosexual IPV was addressed. There is also the possibility of reciprocal IPV and of interactions that involve an individual both perpetrating and being a victim of violence 17). Further study is required to evaluate the detailed mechanisms and current trends regarding IPV among the younger Japanese population. Determining the factors related to the perpetration of harassment will require not only emphasizing formal and informal education about IPV, but also addressing socio-familial background, including school-life conditions 18,19). Several studies have indicated associations between neighborhood conditions and the tendency to commit violence 20,21). We suspect that there are higher rates of IPV among people with less supportive and preventive conditions, such as those involving inter-parental violence during childhood, a personal history of victimization, and poor mental health conditions. However, this study found that the number of respondents reporting violent episodes decreased between 2008 and 2014. Although the results of the present study alone cannot be used to propose measures for preventing IPV, it will be crucial to focus on the actual number of reports and to take the complexity and seriousness of the problem into consideration, so that the study results can facilitate the promotion of a healthy society. This includes the promotion of community solidarity, school cohesion, family connection, and individual well-being 24,25).

This study does not present causal factors related to the perpetration of IPV, and the study sample is not representative of the general Japanese youth population. Although the same questionnaire was used to compare results between 2008 and 2014, it was not an established tool for the assessment of dating violence, and it was not verified for validity and reliability. In addition, Cronbach's alpha in 2014 was smaller than that in 2008, and the questionnaire may therefore not appropriately reflect the contemporary conditions of the attitudes and behaviors of university students as related to dating violence. However, our results indicated a reduction in self-reported IPV instances from 2008 to 2014 among Japanese university freshmen. Further study is needed to identify the factors associated with IPV mitigation within the Japanese youth population.

Conflict of interest: The authors have no conflicts of interest to declare.
The Tomb of Antony and Cleopatra?

History's most famous suicide happened more than 2,000 years ago: rather than surrender to the Romans who had captured her Egypt, the lovelorn Queen Cleopatra succumbed to the venomous bite of an asp. Ancient historians chronicled the act, Shakespeare dramatized it, and HBO even added its own spin to the tragedy with the lavish TV series "Rome." Yet while we may know how Cleopatra died (of snake poison, after her consort Mark Antony fell on his sword), archaeologists have yet to pin down where the legendary couple was laid to rest.

Few countries in the world sit upon as many layers of history and civilization as Egypt, from the pyramids of the Pharaohs to the archives of medieval Jewish merchants. But the specter of Cleopatra has loomed above it all. "Cleopatra has come to symbolize Egypt for a lot of people," says Joyce Tyldesley, an archaeologist at the University of Liverpool and author of Cleopatra: Last Queen of Egypt, published last year.

It's a symbol that has not always been flattering. Centuries of Western literature evoked Cleopatra as a lustful seductress, corrupting the stoic Roman men who strayed into her orbit. European empires seized upon this metaphor of temptation and decadence: after Napoleon's ill-fated invasion of Egypt, the French government nevertheless issued a commemorative coin depicting France as a virile Roman conqueror standing over a bare-breasted, feminine figure of the East. Kathleen Martinez, an archaeologist from the Dominican Republic who has conducted the digs at Abusir for the past three years, told reporters that she wants "to be Cleopatra's lawyer," and prove there is much more to the ancient potentate than the work of two thousand years of Western male imagination.

Debates still rage over everything from Cleopatra's identity (cranial scans of her half-sister's skull this year suggested she may be African, though her known lineage was Greek) to her looks. Close scrutiny of coin portraits has led some to believe that she was rather plain, a conclusion borne out by the Roman historian Plutarch, who wrote "her beauty was in itself not altogether incomparable, nor such as to strike those who saw her." Even more questions linger surrounding her death, which signaled the dawn of the Roman Empire under Julius Caesar's nephew Octavian, who was waging a bitter civil war with Mark Antony. "She definitely died at a very convenient time for Octavian," says Tyldesley. "There is no absolute proof that she committed suicide, and so it is possible that she was either forced to do so, or that she was killed. Of course," she adds, "there is no proof that she died by snakebite, either."

And so now many wait for further developments over the coming weeks, thrilled by the possibility of seeing a legend turn real. It's hard to divine what could be buried by Cleopatra's side, let alone how the storied queen's body itself may be preserved. Could there be treasures? The coiled skin of a snake? "A diary," offers Tyldesley, "would be fantastic."

But Hawass and his team must hurry. The dig abuts the summer residence of President Hosni Mubarak, which may force the dozens-strong team of archaeologists to abandon work from May to November. The security concerns of Egypt's current ruler, after all, still outweigh the mystique of its past.
Where there’s a dance-pop No 1, there’s a songwriting dispute. Fresh from the lawsuit that saw Robin Thicke and Pharrell Williams ordered to pay Marvin Gaye’s family $7.4m for copying his music to create Blurred Lines, the songwriting credits for Mark Ronson’s Uptown Funk have been expanded. It is now attributed to 11 different writers. The new credits have been given to the writers of the Gap Band’s Oops Up Side Your Head: group members Ronnie Wilson, Charles Wilson and Robert Wilson, plus producers Rudolph Taylor and Lonnie Simmons. They join the six writers already credited: Ronson, singer Bruno Mars, producers Jeff Bhasker and Philip Lawrence, plus Nicholas Williams (Trinidad James) and Devon Gallaspy, who received credits for the use of a sample of Trinidad James’s All Gold Everything. The credits followed a claim by the publisher Minder Music, reports Billboard. The claim, rather than being sent to Ronson’s label or publisher, was filed with YouTube’s content management system in February. The Oops writers are believed to be taking a 17% share of the song. Uptown Funk has been an enormous commercial success. It topped the Billboard Hot 100 for 14 consecutive weeks, selling more than 5m copies in the US alone. It spent seven non-consecutive weeks at No 1 in the UK, and set a new record for the number of streams. It has sold more than 1.7m copies in the UK.
//========= Copyright © 1996-2005, Valve Corporation, All rights reserved. ============//
//
// Purpose:
//
// $NoKeywords: $
//=============================================================================//

#ifndef UTLQUEUE_H
#define UTLQUEUE_H
#ifdef _WIN32
#pragma once
#endif

#include "utlvector.h"

// T is the type stored in the queue
template<class T, class M = CUtlMemory<T> >
class CUtlQueue
{
public:
	// constructors
	CUtlQueue(int growSize = 0, int initSize = 0);
	CUtlQueue(T *pMemory, int numElements);

	// element access
	T &operator[](int i);
	T const &operator[](int i) const;
	T &Element(int i);
	T const &Element(int i) const;

	// return the item from the front of the queue and delete it
	T const &RemoveAtHead();
	// return the item from the end of the queue and delete it
	T const &RemoveAtTail();

	// return item at the front of the queue
	T const &Head();
	// return item at the end of the queue
	T const &Tail();

	// put a new item on the queue to the tail.
	void Insert(T const &element);

	// checks if an element of this value already exists on the queue, returns true if it does
	bool Check(T const element);

	// Returns the count of elements in the queue
	int Count() const { return m_heap.Count(); }

	// Is element index valid?
	bool IsIdxValid(int i) const;

	// doesn't deallocate memory
	void RemoveAll() { m_heap.RemoveAll(); }

	// Memory deallocation
	void Purge() { m_heap.Purge(); }

protected:
	CUtlVector<T, M> m_heap;
	T m_current;
};

//-----------------------------------------------------------------------------
// The CUtlQueueFixed class:
// A queue class with a fixed allocation scheme
//-----------------------------------------------------------------------------
template<class T, size_t MAX_SIZE>
class CUtlQueueFixed : public CUtlQueue<T, CUtlMemoryFixed<T, MAX_SIZE> >
{
	typedef CUtlQueue<T, CUtlMemoryFixed<T, MAX_SIZE> > BaseClass;
public:
	// constructor, destructor
	CUtlQueueFixed(int growSize = 0, int initSize = 0) : BaseClass(growSize, initSize) {}
	CUtlQueueFixed(T *pMemory, int numElements) : BaseClass(pMemory, numElements) {}
};

template<class T, class M>
inline CUtlQueue<T, M>::CUtlQueue(int growSize, int initSize) : m_heap(growSize, initSize)
{
}

template<class T, class M>
inline CUtlQueue<T, M>::CUtlQueue(T *pMemory, int numElements) : m_heap(pMemory, numElements)
{
}

//-----------------------------------------------------------------------------
// element access
//-----------------------------------------------------------------------------
template<class T, class M>
inline T &CUtlQueue<T, M>::operator[](int i)
{
	return m_heap[i];
}

template<class T, class M>
inline T const &CUtlQueue<T, M>::operator[](int i) const
{
	return m_heap[i];
}

template<class T, class M>
inline T &CUtlQueue<T, M>::Element(int i)
{
	return m_heap[i];
}

template<class T, class M>
inline T const &CUtlQueue<T, M>::Element(int i) const
{
	return m_heap[i];
}

//-----------------------------------------------------------------------------
// Is element index valid?
//----------------------------------------------------------------------------- template<class T, class M> inline bool CUtlQueue<T, M>::IsIdxValid(int i) const { return (i >= 0) && (i < m_heap.Count()); } template<class T, class M> inline T const &CUtlQueue<T, M>::RemoveAtHead() { m_current = m_heap[0]; m_heap.Remove((int) 0); return m_current; } template<class T, class M> inline T const &CUtlQueue<T, M>::RemoveAtTail() { m_current = m_heap[m_heap.Count() - 1]; m_heap.Remove((int) (m_heap.Count() - 1)); return m_current; } template<class T, class M> inline T const &CUtlQueue<T, M>::Head() { m_current = m_heap[0]; return m_current; } template<class T, class M> inline T const &CUtlQueue<T, M>::Tail() { m_current = m_heap[m_heap.Count() - 1]; return m_current; } template<class T, class M> void CUtlQueue<T, M>::Insert(T const &element) { int index = m_heap.AddToTail(); m_heap[index] = element; } template<class T, class M> bool CUtlQueue<T, M>::Check(T const element) { int index = m_heap.Find(element); return (index != -1); } #endif // UTLQUEUE_H
<filename>src/main/java/com/dardan/rrafshi/discogs/model/pagination/PageRequest.java package com.dardan.rrafshi.discogs.model.pagination; public final class PageRequest { public static final int DEFAULT_PAGE = 1; public static final int DEFAULT_PAGE_SIZE = 50; private final int page; private final int size; private final Sort sort; private PageRequest(final int page, final int size, final Sort sort) { this.page = page; this.size = size; this.sort = sort; } private PageRequest(final int page, final int size, final Direction direction, final String... properties) { this(page, size, Sort.by(direction, properties)); } private PageRequest(final int page, final int size, final String... properties) { this(page, size, Sort.by(properties)); } private PageRequest(final int page, final int size) { this(page, size, Sort.unsorted()); } private PageRequest() { this(DEFAULT_PAGE, DEFAULT_PAGE_SIZE); } public static PageRequest of(final int page, final int size, final Sort sort) { return new PageRequest(page, size, sort); } public static PageRequest of(final int page, final int size, final Direction direction, final String... properties) { return new PageRequest(page, size, direction, properties); } public static PageRequest of(final int page, final int size, final String... properties) { return new PageRequest(page, size, properties); } public static PageRequest of(final int page, final int size) { return new PageRequest(page, size); } public static PageRequest getDefault() { return new PageRequest(); } public int getPage() { return this.page; } public int getSize() { return this.size; } public Sort getSort() { return this.sort; } }
def _populate_index(self): os.makedirs(self.cache_dir, exist_ok=True) local_files = glob('{}/*'.format(self.cache_dir)) for file in local_files: self._add_to_index(os.path.basename(file), os.path.getsize(file))
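The snippet above references a cache_dir attribute and an _add_to_index helper that are not shown. A minimal, hypothetical sketch of how such a class could fit together is below; the class name, index structure and _add_to_index signature are assumptions for illustration, not the project's actual code.

# Hypothetical minimal context for the _populate_index snippet above; the
# index structure and _add_to_index signature are assumptions.
import os
from glob import glob

class FileCache:
    def __init__(self, cache_dir):
        self.cache_dir = cache_dir
        self._index = {}          # filename -> size in bytes
        self._populate_index()

    def _add_to_index(self, filename, size_bytes):
        self._index[filename] = size_bytes

    def _populate_index(self):
        os.makedirs(self.cache_dir, exist_ok=True)
        for path in glob('{}/*'.format(self.cache_dir)):
            self._add_to_index(os.path.basename(path), os.path.getsize(path))

cache = FileCache("/tmp/demo_cache")   # hypothetical cache directory
print(cache._index)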
Somatic Editing of Ldlr With Adeno-Associated Viral-CRISPR Is an Efficient Tool for Atherosclerosis Research Objective Atherosclerosis studies in Ldlr knockout mice require breeding to homozygosity and congenic status on C57BL6/J background, a process that is both time and resource intensive. We aimed to develop a new method for generating atherosclerosis through somatic deletion of Ldlr in livers of adult mice. Approach and Results Overexpression of PCSK9 (proprotein convertase subtilisin/kexin type 9) is currently used to study atherosclerosis, which promotes degradation of LDLR (low-density lipoprotein receptor) in the liver. We sought to determine whether CRISPR/Cas9 (clustered regularly interspaced short palindromic repeats-associated 9) could also be used to generate atherosclerosis through genetic disruption of Ldlr in adult mice. We engineered adeno-associated viral (AAV) vectors expressing Staphylococcus aureus Cas9 and a guide RNA targeting the Ldlr gene (AAV-CRISPR). Both male and female mice received either saline, AAV-CRISPR, or AAV-hPCSK9 (human PCSK9)-D374Y. A fourth group of germline Ldlr-KO mice was included for comparison. Mice were placed on a Western diet and followed for 20 weeks to assess plasma lipids, PCSK9 protein levels, atherosclerosis, and editing efficiency. Disruption of Ldlr with AAV-CRISPR was robust, resulting in severe hypercholesterolemia and atherosclerotic lesions in the aorta. AAV-hPCSK9 also produced hypercholesterolemia and atherosclerosis as expected. Notable sexual dimorphism was observed, wherein AAV-CRISPR was superior for Ldlr removal in male mice, while AAV-hPCSK9 was more effective in female mice. Conclusions This all-in-one AAV-CRISPR vector targeting Ldlr is an effective and versatile tool to model atherosclerosis with a single injection and provides a useful alternative to the use of germline Ldlr-KO mice.
/*
 * Copyright (c) 2019 Elastos Foundation
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to deal
 * in the Software without restriction, including without limitation the rights
 * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 * copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in all
 * copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 * SOFTWARE.
 */
package org.elastos.wallet.ela.db.table;

import android.os.Parcel;
import android.os.Parcelable;

import io.realm.RealmList;
import io.realm.RealmObject;
import io.realm.annotations.PrimaryKey;

/**
 * Wallet table
 */
public class Wallet extends RealmObject implements Parcelable {
    @PrimaryKey
    private String walletId; // wallet id
    private String walletName; // wallet name
    private String mainWalletAddr; // main wallet address
    private String did; // DID string generated by the wallet DID SDK
    private boolean isDefault; // whether this is the default wallet
    private boolean singleAddress; // whether this is a single-address wallet
    private RealmList<String> walletAddrList; // all wallet addresses
    private int type = 0; // 0 normal single-signature, 1 single-signature read-only, 2 normal multi-signature, 3 multi-signature read-only
    private String filed1; //
    private String filed2; //
    private String filed3; //

    public Wallet() {
    }

    public void setWalletData(Wallet wallet) {
        this.walletId = wallet.getWalletId();
        this.walletName = wallet.getWalletName();
        this.mainWalletAddr = wallet.getMainWalletAddr();
        this.did = wallet.getDid();
        this.isDefault = wallet.isDefault();
        this.singleAddress = wallet.isSingleAddress();
        this.walletAddrList = wallet.getWalletAddrList();
        this.type = wallet.getType();
        this.filed1 = wallet.getFiled1();
        this.filed2 = wallet.getFiled2();
        this.filed3 = wallet.getFiled3();
    }

    public static final Creator<Wallet> CREATOR = new Creator<Wallet>() {
        @Override
        public Wallet createFromParcel(Parcel in) {
            return new Wallet(in);
        }

        @Override
        public Wallet[] newArray(int size) {
            return new Wallet[size];
        }
    };

    @Override
    public int describeContents() {
        return 0;
    }

    @Override
    public void writeToParcel(Parcel dest, int flags) {
        dest.writeString(walletId);
        dest.writeString(walletName);
        dest.writeString(mainWalletAddr);
        dest.writeString(did);
        dest.writeByte((byte) (isDefault ? 1 : 0));
        dest.writeByte((byte) (singleAddress ? 1 : 0));
        //dest.writeList(walletAddrList);
        dest.writeInt(type);
        dest.writeString(filed1);
        dest.writeString(filed2);
        dest.writeString(filed3);
    }

    protected Wallet(Parcel in) {
        walletId = in.readString();
        walletName = in.readString();
        mainWalletAddr = in.readString();
        did = in.readString();
        isDefault = in.readByte() != 0;
        singleAddress = in.readByte() != 0;
       /* if (walletAddrList == null) {
            walletAddrList = new RealmList<String>();
        }
        in.readList(walletAddrList, String.class.getClassLoader());*/
        type = in.readInt();
        filed1 = in.readString();
        filed2 = in.readString();
        filed3 = in.readString();
    }

    public String getWalletId() {
        return walletId;
    }

    public void setWalletId(String walletId) {
        this.walletId = walletId;
    }

    public String getWalletName() {
        return walletName;
    }

    public void setWalletName(String walletName) {
        this.walletName = walletName;
    }

    public String getMainWalletAddr() {
        return mainWalletAddr;
    }

    public void setMainWalletAddr(String mainWalletAddr) {
        this.mainWalletAddr = mainWalletAddr;
    }

    // compatible with wallets created without a did, so always use isEmpty for the empty check
    public String getDid() {
        if (did == null)
            return "";
        return did;
    }

    public void setDid(String did) {
        this.did = did;
    }

    public boolean isDefault() {
        return isDefault;
    }

    public void setDefault(boolean aDefault) {
        isDefault = aDefault;
    }

    public boolean isSingleAddress() {
        return singleAddress;
    }

    public void setSingleAddress(boolean singleAddress) {
        this.singleAddress = singleAddress;
    }

    public RealmList<String> getWalletAddrList() {
        return walletAddrList;
    }

    public void setWalletAddrList(RealmList<String> walletAddrList) {
        this.walletAddrList = walletAddrList;
    }

    public int getType() {
        return type;
    }

    public void setType(int type) {
        this.type = type;
    }

    public String getFiled1() {
        return filed1;
    }

    public void setFiled1(String filed1) {
        this.filed1 = filed1;
    }

    public String getFiled2() {
        return filed2;
    }

    public void setFiled2(String filed2) {
        this.filed2 = filed2;
    }

    public String getFiled3() {
        return filed3;
    }

    public void setFiled3(String filed3) {
        this.filed3 = filed3;
    }

    @Override
    public String toString() {
        return "Wallet{" +
                "walletId='" + walletId + '\'' +
                ", walletName='" + walletName + '\'' +
                ", mainWalletAddr='" + mainWalletAddr + '\'' +
                ", did='" + did + '\'' +
                ", isDefault=" + isDefault +
                ", singleAddress=" + singleAddress +
                ", walletAddrList=" + walletAddrList +
                ", type=" + type +
                ", filed1='" + filed1 + '\'' +
                ", filed2='" + filed2 + '\'' +
                ", filed3='" + filed3 + '\'' +
                '}';
    }
}
This application relates to the useful manipulation of magnetic components found in toners as commonly utilized in various printer and electrostatographic print environments. More specifically, the present disclosure relates to at least one realization of magnetic encoding of data elements or magnetic marks in combination with distraction patterns. To detect counterfeiting, various document security systems are available. For example, watermarking is a common way to ensure security in digital documents. Many watermarking approaches exist with different trade-offs in cost, fragility, robustness, etc. One prior art approach is to use special ink rendering where the inks are invisible under standard illumination. These inks normally respond to light outside the visible range and thereby may be made visible. Examples of such extra-spectral techniques are UV (ultra-violet) and IR (infrared). This traditional approach is to render the encoded data with special inks that are not visible under normal light, but that have strong distinguishing characteristics under the special spectral illumination. Determination of the presence or absence of such encoding may be thereby subsequently performed using an appropriate light source and detector. One example of this approach is found in U.S. Pat. No. 7,614,558 to Katsurabayashi et al. However, these special inks and materials are often difficult to incorporate into standard electro-photographic or other non-impact printing systems like solid ink printers, either due to cost, availability or physical/chemical properties. This in turn discourages their use in variable data printing arrangements, such as for redeemable coupons or other personalized printed media for example. Another approach taken is a document where copy control is provided by digital watermarking, as for example in U.S. Pat. No. 5,734,752 to Knox, where there is provided a method for generating data encoding in the form of a watermark in a digitally reproducible document which are substantially invisible including the steps of: (1) producing a first stochastic screen pattern suitable for reproducing a gray image on a document; (2) deriving at least one stochastic screen description that is related to said first pattern; (3) producing a document containing the first stochastic screen; (4) producing a second document containing one or more of the stochastic screens in combination, whereby upon placing the first and second document in superposition relationship to allow viewing of both documents together, correlation between the first stochastic pattern on each document occurs everywhere within the documents where the first screen is used, and correlation does not occur where the area where the derived stochastic screens occur and the image placed therein using the derived stochastic screens becomes visible. With each of the above patents and citations, and those mentioned below, the disclosures therein are totally incorporated by reference herein in their entirety for their teachings.