There is a “how to” for pretty much everything on the internet. Need to tie a knot or download a Sega Saturn emulator or properly set up a juicer or break some bad news to a friend? There is a YouTube or WikiHow tutorial for it. And yet even within this genre, few are as alluring, mysterious, and inexplicable as Owlcation’s new article, “How To Make Friends With Crows,” which provides pointers on not just how to befriend the swooping metaphors for death but also how to commune with them, to better understand their darkly beautiful animal souls.
They are never going to come running like a dog will for a lick and a pet, and their standoffish attitude is probably a major reason why they have thrived as a species for so long. Remember, crows are wild animals. In the U.S., it is illegal to keep native songbirds (crows included) as pets. If you want a pet you should get one, but if you’re interested in crows, you’ll have to learn to appreciate their charms from afar.
Besides, get real, most humans view crows as ominous, murderous evils (or at best, rats with wings). For centuries, they have played the bad guys in the stories humans tell themselves, and I’m sure those crows have noticed the eye-daggers most people shoot at them, how cars veer to the shoulder to intentionally run them over. Why wouldn’t that distrust be mutual?
So crows will take their own sweet time deciding if they trust you or not, but once they know who you are, they’ll never forget. At first, they may give you the cold shoulder and ignore your offerings, but don’t take it personally. Remember that paranoia is all about survival but patience and vigilance will eventually pay off. If you pass the test, they will decide to trust.
This mysterious relationship with crows can be yours, with only the right mix of time, dedication, maybe some vomit to feed them, also maybe fast food, an appropriate understanding of crow psychology, an inner peace with the darkness at the heart of all living things, probably a big hooded cloak, a faculty for conversational Latin, and the ability to weave a crown of moonlight without upsetting the imperious night-wolves who guard it.
Interaction between functionalized gold nanoparticles in physiological saline. The interactions between functionalized noble-metal particles in an aqueous solution are central to applications relying on controlled equilibrium association. Herein, we obtain the potentials of mean force (PMF) for pair-interactions between functionalized gold nanoparticles (AuNPs) in physiological saline. These results are based upon >1000 ns of in silico experiments on all-atom model systems under equilibrium and non-equilibrium conditions. Four types of functionalization are built by coating each globular Au144 cluster with 60 thiolate groups: GS-AuNP (glutathionate), PhS-AuNP (thiophenol), CyS-AuNP (cysteinyl), and p-APhS-AuNP (para-amino-thiophenol), which are, respectively, negatively charged, hydrophobic (neutral-nonpolar), hydrophilic (neutral-polar), and positively charged at neutral pH. The results confirm the behavior expected of neutral (hydrophilic or hydrophobic) particles in a dilute aqueous environment; however, the PMF curves demonstrate that the charged AuNPs interact with one another in a unique way, mediated by H2O molecules and an electrolyte (Na(+), Cl(-)), in a physiological environment. In the case of two GS-AuNPs, the excess, neutralizing Na(+) ions form a mobile (or 'dynamic') cloud of enhanced concentration between the like-charged GS-AuNPs, inducing a moderate attraction (∼25 kT) between them. Furthermore, to a lesser degree, for a pair of p-APhS-AuNPs, the excess, neutralizing Cl(-) ions (less mobile than Na(+)) also form a cloud of higher concentration between the two like-charged p-APhS-AuNPs, inducing a weaker yet significant attraction (∼12 kT). On combining one GS- with one p-APhS-AuNP, the direct, attractive Coulombic force is completely screened out while the solvation effects give rise to moderate repulsion between the two unlike-charged AuNPs.
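To put the reported PMF well depths in familiar chemical-energy units, the sketch below converts them to kJ/mol. The 25 kT and 12 kT values come from the abstract; taking T = 310 K (physiological temperature) is our assumption.

```python
# Convert PMF well depths quoted in kT to kJ/mol at an assumed T = 310 K.
k_B = 1.380649e-23   # Boltzmann constant, J/K
N_A = 6.02214076e23  # Avogadro constant, 1/mol
T = 310.0            # physiological temperature, K (assumed)

for depth_kT in (25, 12):
    kJ_per_mol = depth_kT * k_B * T * N_A / 1e3
    print(f"{depth_kT} kT at 310 K ~ {kJ_per_mol:.1f} kJ/mol")
```

The ~25 kT well thus corresponds to roughly 64 kJ/mol per nanoparticle pair, well above thermal energy, which is why the abstract describes it as a moderate but real attraction.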
Bidentate SO2 Complexes of Zirconium and Hafnium Difluorides with Highly Activated S-O Bonds. Transition metal-SO2 complexes with different S-O bond lengths serve as ideal models for understanding the catalytic activation of SO2 at different stages. Herein, sulfur dioxide complexes of zirconium and hafnium difluorides with highly activated S-O bonds were prepared in cryogenic matrices. The structures of these complexes were identified by infrared spectroscopy and density functional theory calculations. Both ZrF2(O2S) and HfF2(O2S) were predicted to have singlet ground states and non-planar Cs geometries with the metal center coordinated by both oxygens of SO2. The much lower O-S-O stretching vibrational frequencies (650-730 cm-1) in the ligated complexes indicate that the SO2 ligand should be considered as a singlet SO2(2-), which is consistent with the rather long S-O bond lengths (~1.7 Å) resulting from the two-electron transfer from the metal center to the π* orbital of SO2 upon formation of the complexes.
Sources of Emittance in RF Photocathode Injectors: Intrinsic emittance, space charge forces due to non-uniformities, RF and solenoid effects

Advances in electron beam technology have been central to creating the current generation of x-ray free electron lasers and ultra-fast electron microscopes. These once exotic devices have become essential tools for basic research and applied science. One important beam technology for both is the electron source which, for many of these instruments, is the photocathode gun. The invention of the photocathode gun and the concepts of emittance compensation and beam matching in the presence of space charge and RF forces have made these high-quality beams possible. Achieving even brighter beams requires taking a finer resolution view of the electron dynamics near the cathode during photoemission and the initial acceleration of the beam. In addition, the high-brightness beam is more sensitive to degradation by the optical aberrations of the gun's field and the magnetic lenses. This paper discusses these topics including the beam properties due to photoemission physics, space charge effects close to the cathode, and optical distortions introduced by the RF and solenoid fields. Analytic relations for these phenomena are derived and compared with numerical simulations.

Introduction

This paper explores the sources of emittance in the photocathode gun and solenoid system currently used in high brightness injectors. This will be done using a combined analytic and numerical analysis approach to isolate and understand the various mechanisms which generate emittance. Sources of the intrinsic emittance of the cathode, the space charge driven emittance growth near the cathode, emittance due to the gun RF and the optical aberrations of the emittance compensation solenoid are described and compared. The present work begins with a brief introduction to photocathode injector design philosophy.
Then there is a discussion of the connection between the intrinsic emittance and the quantum efficiency. The concept of using the tensor properties of the electron's effective mass to reduce the intrinsic emittance while maintaining good QE is explained. Next the emittance due to transverse space charge forces produced by non-uniform emission is derived using an analytic model with some mathematical approximations. Good agreement with experimental results indicates this model provides a useful explanation of the underlying physics despite its simple result. The first- and second-order emittances produced by the time-dependence of the gun's RF fields are described. While this effect is absent in the DC gun, a modified version of the formula is still useful for computing the RF emittance of the first accelerator section after the gun. The discussion then turns to the extensive topics of optical distortions and aberrations. The solenoid's chromatic, geometric, and anomalous quadrupole field effects are described and analytic expressions for their emittances are derived. The paper concludes with a summary comparing these phenomena. Pulsed RF guns can achieve high cathode fields which greatly mitigate the space-charge forces by rapidly accelerating the photoelectrons to relativistic energies. The high-field configuration commonly used with pulsed RF guns is shown in Figure 1b. In this configuration, a single solenoid is used to produce a beam waist at or near the linac's entrance. As shown by emittance compensation theory, the distance between the gun and the linac entrance giving the lowest injector emittance is a multiple of one-quarter wave at the bunch's plasma frequency. Theory and experiment show the bunch radius and divergence oscillate along the beamline between the gun and the linac, with these parameters repeating themselves every quarter of the bunch's plasma wavelength.
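Since the gun-to-linac spacing is set by the bunch plasma wavelength, a rough number is useful. The sketch below evaluates the relativistic plasma frequency ωp = sqrt(e²n/(ε0 m γ³)); every bunch parameter in it is an illustrative assumption, not a value from this paper.

```python
import math

# Rough estimate of the bunch plasma wavelength relevant to emittance
# compensation. Bunch charge, size, and energy below are assumed, generic
# photoinjector numbers, not values taken from the text.
e    = 1.602176634e-19   # elementary charge, C
m_e  = 9.1093837015e-31  # electron mass, kg
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
c    = 2.99792458e8      # speed of light, m/s

Q      = 100e-12         # bunch charge, C (assumed)
radius = 0.5e-3          # bunch radius, m (assumed)
length = 3.0e-3          # bunch length, m (assumed)
gamma  = 12.0            # beam energy out of the gun, ~5.6 MeV (assumed)

n = (Q / e) / (math.pi * radius**2 * length)             # electron density, 1/m^3
omega_p = math.sqrt(e**2 * n / (eps0 * m_e * gamma**3))  # relativistic plasma frequency
lambda_p = 2 * math.pi * c / omega_p                     # plasma wavelength (beta ~ 1)

print(f"plasma wavelength ~ {lambda_p:.2f} m, quarter wave ~ {lambda_p / 4:.2f} m")
```

With these assumed numbers the quarter-wave spacing comes out at a few tens of centimeters to a meter, consistent with the meter-scale gun-to-linac drifts of real injectors.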
The time-dependence of the higher frequency RF field can be used to chirp the bunch energy and control the bunch length out of the gun. The slice-to-slice energy chirp along the bunch can then be arranged to maintain the laser pulse length or even compress the bunch. Hence, the beam out of the high-field gun requires no compression or further acceleration before injection into the first linac. However, it does need to be "emittance matched" into the first accelerator linac. This is done using a single magnetic solenoid to both cancel the gun's large RF defocusing strength and compensate for linear space-charge forces on the transverse phase space as shown in Figure 1b. This injector configuration has produced record peak brightness beams and is arguably the current state-of-the-art in pulsed RF injectors for x-ray free electron lasers. The pulsed photocathode RF gun consists of n+1/2 cells, where each full cell is λ/2 long, with the cathode at a wall in the middle of a cell which is then λ/4 long. Guns have been built and operated with n ranging from 0 to 4. Optimizing with beam simulation codes has determined that beam performance is improved if the half-cell is slightly longer at 0.3λ rather than 0.25λ. Hence most high-field RF guns are (n+0.6)λ/2 long. Standing-wave RF guns have been demonstrated at frequencies from 144 MHz to 17 GHz. In general, the higher RF frequencies (~GHz and higher) can operate with high peak cathode fields (>40 MV/m) to rapidly accelerate the beam to relativistic energy and mitigate space charge forces. However, the high field comes at the expense of duty factor, which is a fraction of a percent at s-band (~3 GHz) and higher frequencies. Lower RF frequency guns are capable of CW operation, albeit by limiting the peak cathode field. Further descriptions of RF guns, both normal conducting and superconducting, can be found in Chapters 1 and 3 of Ref. Figure 1b shows a magnetic solenoid near the high-field RF gun exit.
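The (n+0.6)λ/2 length rule is easy to evaluate numerically. The sketch below applies it at 2.856 GHz, a typical s-band choice (the frequency is our example, not stated in this passage):

```python
# Length of an (n + 0.6)-cell standing-wave RF gun per the text's rule:
# full cells are lambda/2 long and the half-cell is stretched to 0.3*lambda.
c = 2.99792458e8  # speed of light, m/s

def gun_length(f_rf_hz, n_full_cells):
    lam = c / f_rf_hz                      # free-space RF wavelength
    return (n_full_cells + 0.6) * lam / 2  # n*(lambda/2) + 0.3*lambda half-cell

lam_s_band = c / 2.856e9
print(f"s-band wavelength: {lam_s_band * 100:.1f} cm")
print(f"1.6-cell s-band gun length: {gun_length(2.856e9, 1) * 100:.1f} cm")
```

For n = 1 this gives the familiar "1.6-cell" s-band gun, about 8.4 cm of active structure.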
This focusing solenoid cancels the strong RF defocusing of the beam by the gun exit field. This solenoid also matches the bunch to the first accelerator section or linac for optimal emittance compensation. In addition, there is often another coil (not shown in Fig. 1b) positioned just behind the cathode for zeroing or bucking the gun solenoid's fringe field at the cathode. If the cathode magnetic field is not zero, the electrons acquire canonical angular momentum and thus emittance. In this paper, the initial angular momentum is assumed to be zero. In both DC and RF guns, the electron bunches are produced from a photocathode with a drive laser phase-locked to the RF master oscillator. The type of laser used depends upon the cathode material and the duty factor of the system. The cathode material determines the laser wavelength and pulse energy needed given the cathode quantum efficiency (QE) and wavelength sensitivity. In addition, the desired charge and intrinsic emittance are also important factors to consider when designing an injector system. While the largest uncertainty still lies in the cathode properties of QE and intrinsic emittance, there has been considerable progress in understanding the physics and practical aspects of cathodes as documented in the Photocathode Physics for Photoinjectors workshops held in even-numbered years since 2010.

Evolution of the emittance from the cathode through the injector

As the beam is born and accelerated from the cathode it undergoes processes and forces which interact with it and add to its emittance. Figure 2 attempts to make sense of these complex interactions by spatially ordering these processes as a function of distance from the cathode. The flow chart indicates there are five distance scales (yellow boxes) over which the beam experiences emittance generation and growth.
The physical properties or characteristics (grey boxes) are combined as inputs to 'and-gates' which generate emittance (green boxes) and more properties/characteristics (grey boxes). These properties can then combine with another set of external properties, like non-linear focusing, to produce yet more emittance and more properties as the beam propagates down the beamline. As will be shown, this spatial flow of the emittance growth and its interactions provides a useful basis for analyzing the sources of emittance in the photocathode injector. The Figure 2 chart shows the emittance and properties/characteristics flowing from left to right and interacting in a series of 'and-gates'. The chart begins with the <Cathode Material Properties> and <Applied Field> interacting in the first 'and-gate' to generate the <Intrinsic Emittance>. Adding this with <Surface Roughness> in the next 'and-gate' gives the <Rough Surface Emittance>. On the other hand, the intrinsic emittance isn't necessary to generate the <Applied Field Emittance> some tens of microns from the surface. The emission processes during the laser pulse, both below and at the surface, determine the <Cathode Emission Properties> such as response time, linearity, uniformity and image-charge-bunch interactions, which are most influential at distances of microns to millimeters from the surface while the bunch is still emerging from the cathode. At millimeters from the surface, the <Cathode Emission Properties> are 'added' in the fourth gate with the <Transverse Density Modulation due to the Rough Surface> and the drive laser's <3D Laser Shape> to produce the <6D Phase Space> distribution and two more emittances. At this location, typically a few tens of mm from the cathode, the electron bunch is fully formed with the bunch tail separated from the cathode surface.
<6D Phase Space> then 'adds' with <Non-Linear Focusing and Alignment Errors> for use in relativistic transport codes with space charge to obtain the <Emittances due to Optical Aberrations, Space-Charge and other effects> during acceleration and compression of the electron bunch.

Photoemission Theory for Metal Cathodes

In this section the quantum efficiency and intrinsic emittance of a metallic photocathode are derived using the Spicer three-step model. In this model photoemission is separated into the steps of photon absorption, electron transport to the surface and electron escape into the vacuum. The discussion begins with a brief description of the electric potential which binds the electrons in the cathode and the potential barrier over which they must pass to escape into the vacuum. Next it is shown how these work functions are used in expressions for the quantum efficiency and intrinsic emittance in terms of the electron excess energy and the electron's effective mass.

The electrical potentials at the metal-vacuum interface

The forces on an electron near the cathode surface are due to three electric potentials: 1) the material work function, W, produced by a thin layer of electrons forming a surface dipole layer at the cathode-vacuum boundary, 2) the image potential due to the electron's equal and opposite image in the metallic surface and 3) the external field, which in this case is the RF field. These potentials are plotted in Figure 2. The combined external fields produce a potential barrier outside the surface whose height lies the Schottky work function below the vacuum energy. The quantum efficiency and the intrinsic emittance using these potentials along with the three-step model of photoemission have been derived elsewhere. The reformulated results, which now include the effective mass, are given here.
[Figure: The occupied electron energy levels or electron density of states inside the cathode (left) and the electric potentials (right) at the cathode-vacuum interface. The distribution of occupied states is given by the Fermi-Dirac (F-D) function, fFD(E) (solid red line, inside cathode). For a metal at 300 K, fFD(E) can be replaced by the Heaviside step function with its step at EF, indicated by the heavy solid line. Outside the cathode there is the potential due to the image charge of the electron (red) as well as the applied field potential (blue). The sum of the image and applied potentials (green) forms a potential barrier which reduces the material work function by the Schottky work function.]

Inside the cathode the fermionic electrons fill energy states in pairs up to the Fermi energy, EF, which positions the material's work function, W, below the vacuum state energy, Evacuum. In this model the electron energy distribution is given by the Fermi-Dirac function, which in turn is replaced by the Heaviside step function to simplify the calculation. The step function is a very good approximation at ambient temperatures. In the absence of all other forces, the material work function is defined as the energy an electron needs to escape the cathode material. Outside the cathode, the electron's image charge and an accelerating applied field combine to form a shallow potential barrier a few nm from the cathode surface, depending upon the strength of the applied field. The barrier height is a Schottky work function, φSchottky, below the vacuum state energy, which reduces the material work function to give the effective work function, φeff = W − φSchottky. The photoemission process assumes there is no quantum mechanical tunneling; therefore electrons require energies greater than the barrier height of φeff to escape. Thus, when excited by photons having energy ℏω, electrons in occupied states with energies between EF − (ℏω − φeff) and EF can escape into the vacuum.
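The Schottky lowering has the closed form φSchottky = sqrt(eE/(4πε0)) (in eV for E in V/m), which the sketch below evaluates. Copper's nominal ~4.6 eV work function and the 50 MV/m cathode field are illustrative choices, not values quoted in this passage.

```python
import math

# Schottky barrier lowering for a cathode in an applied field E:
# phi_Schottky [eV] = sqrt(e * E / (4*pi*eps0)) with E in V/m.
e    = 1.602176634e-19   # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

def schottky_eV(E_Vpm):
    return math.sqrt(e * E_Vpm / (4 * math.pi * eps0))

W = 4.6                        # copper work function, eV (nominal literature value)
E = 50e6                       # applied cathode field, V/m (assumed)
phi_eff = W - schottky_eV(E)   # effective work function, eV
print(f"Schottky lowering: {schottky_eV(E):.3f} eV, phi_eff: {phi_eff:.2f} eV")
```

At 50 MV/m the barrier drops by roughly a quarter of an eV, which is why high-field RF guns see measurably enhanced emission near threshold.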
After emission, electrons can have energies from zero to ℏω − φeff. Using a step function for the energy distribution of occupied states inside the cathode, the emitted electron energy spectrum has a full width of ℏω − φeff. It is this energy spread which causes the intrinsic emittance, and the yield of this energy spectrum determines the QE. Due to its relevance to both the QE and intrinsic emittance discussed next, it is useful to define the excess energy as Eexcess ≡ ℏω − φeff.

Quantum efficiency and intrinsic emittance theory

The quantum efficiency and the photoelectric emittance for a cathode can be derived by following the assumptions of Spicer's model of photoemission. This model defines the following three steps: 1) absorption of a photon by a bound electron, 2) the excited electron travels to the surface, and 3) the electron escapes to vacuum. The mathematical representation for these steps is given in Eqn. The limits of the energy integral reflect using a step function for the initial occupied energy state distribution. The limits for the polar-angle integral run from the maximum escape angle to normal incidence. The maximum escape angle is discussed later. The azimuth angle integration limits assume the photon-excited electron's motion is isotropic inside the cathode. Details of evaluating the electron-electron scattering length and performing these integrals are given in Ref. Since the QE is defined as the number of emitted electrons per incident photon, Step 1 simply involves the fraction of incident photons which are absorbed, 1 − R(ω). The reflectivity, R(ω), is obtained from the Fresnel optical relations using the complex index of refraction. The optical absorption depth, λopt, used in the second step is given by the imaginary part of the index of refraction, also using these optical relations. At 253 nm the reflectivity for copper at normal incidence is approximately 0.3, making the Step 1 factor approximately 0.7.
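The excess energy is simple arithmetic once the work function is fixed. In the sketch below, the 4.86 eV photon energy is the copper example used in the text; the 4.3 eV effective work function is an illustrative assumption.

```python
# Excess energy E_excess = hbar*omega - phi_eff, the quantity that drives
# both the QE and the intrinsic emittance in the three-step model.
hbar_omega = 4.86   # photon energy, eV (the text's copper example)
phi_eff = 4.3       # effective work function, eV (assumed)

E_excess = hbar_omega - phi_eff
print(f"E_excess = {E_excess:.2f} eV")
```

With these numbers only a ~0.56 eV slice of states below the Fermi level can contribute to emission, which is the first reason metal QEs are small.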
The Step 2 factor is given by the second square bracket and gives the fraction of excited electrons which arrive at the surface from below. Here the important parameters are the optical absorption depth (described above as λopt) and the energy-averaged electron mean free path between scattering events (λe-e). The excited electrons can scatter either with the lattice via electron-phonon scattering or with the valence electrons. For a good metal, such as copper, electron-electron scattering dominates, while electron-phonon scattering is important for semiconductor cathodes. For copper illuminated at normal incidence with 4.86 eV photons, the optical absorption depth is approximately 10 angstroms and the electron-electron scattering length for energies near the Fermi level is approximately 30 angstroms. Using these values in the bracket 2 term indicates the fraction of excited electrons reaching the surface is approximately 0.2. The term for Step 3 involves integrals over the electrons' energy spectrum and the polar, θ, and azimuth, φ, angles the electrons have with respect to the surface normal. The energy integration limits of the numerator correspond to the energy range needed to escape over the potential barrier. The energy limits of the integral in the denominator correspond to all the electrons the photon can excite, that is, down to the photon energy below the Fermi level. In passing, it is important to point out that other functions for the density of states can and should be used for other cathode materials or at higher photon energies reaching further below the Fermi level. Step 3 also assumes the excited electrons inside the cathode have an isotropic angular distribution. This means the photon's momentum is not conserved in the 3-step model; the electron has no knowledge of the photon's initial direction. However, since the transition is direct, the energy is conserved.
The θ-integration limits are determined by the continuity of the transverse momentum across the cathode-vacuum boundary. It can be shown that the maximum polar angle for which an electron with an initial energy E can escape is given by cos θmax = √[(EF + φeff)/((m*/m)(E + ℏω))]. The fraction of electrons in occupied states which have enough energy and are within the angular escape cone is approximately 0.04, and the fraction of electrons within the maximum internal escape angle is approximately 0.01, for m*/m = 1. Therefore, the QE is low for metals because first the photon's energy can reach only a limited number of occupied electronic states, and second there is a small acceptance angle at the surface into which the electrons can escape. Reducing the reflectivity to zero increases the QE by about a factor of two, while eliminating e-e scattering would result in approximately five times the QE. In other words, for photon energies less than a volt greater than the effective work function, the emission yield is only a few percent of the total number of energetically available electrons. And due to refraction at the surface, only electrons within an internal angle of incidence less than ~10 degrees can escape. For copper this is only one percent of the excited electrons. Performing the integrations in Eqn. gives the quantum efficiency with the effective mass. Similarly, the 3-step model can be used to compute the variance of the transverse momentum, giving the normalized intrinsic emittance for a transverse rms beam size, σx, in terms of the excess energy and the effective mass. These results show the QE and intrinsic emittance are both increasing functions of the excess energy, as commonly accepted. But equally important is the √(m*/m) dependence, which could make achieving ultra-low intrinsic emittance from a practical cathode a real possibility. The effective mass and the general cathode material properties needed to obtain low intrinsic emittance are discussed next.
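The escape-cone angle can be checked numerically. The form used below, cos²θmax = (EF + φeff)/((m*/m)(E + ℏω)), follows from transverse-momentum matching at the surface and is consistent with the text's statement that a larger m* widens the cone; the EF = 7 eV and φeff = 4.3 eV copper-like inputs are illustrative assumptions.

```python
import math

# Maximum internal escape angle from transverse-momentum matching at the
# cathode-vacuum boundary, evaluated for copper-like, assumed inputs.
E_F = 7.0        # Fermi energy, eV (copper-like, assumed)
phi_eff = 4.3    # effective work function, eV (assumed)
hw = 4.86        # photon energy, eV (the text's copper example)
m_ratio = 1.0    # m*/m, free-electron value

E = E_F          # electron starting at the Fermi level
cos_max = math.sqrt((E_F + phi_eff) / (m_ratio * (E + hw)))
theta_max = math.degrees(math.acos(cos_max))
print(f"theta_max ~ {theta_max:.1f} degrees")
```

The result lands near the ~10 degree cone quoted in the text (slightly above it, reflecting the rough input numbers), showing how small the angular acceptance really is.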
Effective Mass Effects on the QE and Intrinsic Emittance

Eqns. and show that in addition to the excess energy, the emittance and QE also depend upon the effective mass the electron has before emission. As pointed out by Berger et al., the intrinsic emittance is proportional to √(m*/m). Therefore, the effective mass should be as small as possible to give a very small transverse momentum and thus an ultra-low intrinsic emittance. Eqn. shows the QE follows the opposite trend by growing with increasing m*. This is because the larger effective mass reduces cos θmax and thereby increases the internal escape cone angle. Thus, a small effective mass leads to low QE. And it appears there is no easy solution, since low intrinsic emittance requires m*/m ≪ 1, yet high QE needs m*/m ≫ 1. This is the same situation as in the case of near-threshold photoemission, where decreasing the excess energy reduces the intrinsic emittance but also lowers the QE. However, there is a possible path around this apparent law of nature. The effective mass is a tensor quantity, and for some materials it can have very different values for components along orthogonal axes. In addition, it's important to note that the emittance is driven by the electron's transverse dynamics while the QE results from its longitudinal motion. Therefore, an anisotropic, structured, crystalline cathode could have internal electrons with large and small effective masses in orthogonal directions. Orienting the axis with the large effective mass normal to the surface (along the electron's longitudinal direction) naturally places the small-effective-mass axis along the transverse direction. With this arrangement, the small transverse effective mass will produce a low intrinsic emittance, while the large longitudinal effective mass will preserve the QE.
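The √(m*/m) scaling can be made concrete with the commonly quoted three-step-model result εn/σx = √(Eexcess/(3mc²)), with the effective-mass ratio inserted per the discussion above. The 0.56 eV excess energy is an illustrative copper-like assumption.

```python
import math

# Intrinsic emittance per unit rms laser spot size, using the widely quoted
# three-step-model result with an explicit sqrt(m*/m) scaling factor.
mc2 = 0.511e6      # electron rest energy, eV

def intrinsic_emit_per_mm(E_excess_eV, m_ratio=1.0):
    # normalized emittance in um per mm of rms spot size
    return math.sqrt(m_ratio * E_excess_eV / (3 * mc2)) * 1e3

E_excess = 0.56    # excess energy, eV (assumed, copper-like)
print(f"m*/m = 1.0 : {intrinsic_emit_per_mm(E_excess, 1.0):.2f} um/mm")
print(f"m*/m = 0.1 : {intrinsic_emit_per_mm(E_excess, 0.1):.2f} um/mm")
```

The free-electron case reproduces the familiar ~0.6 um/mm for copper, while a hypothetical transverse mass of 0.1 m would cut the intrinsic emittance by √10, which is the payoff the anisotropic-cathode idea is after.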
Space Charge Emittance near the Cathode due to Non-Uniform Emission

Extensive experimental and theoretical studies have been performed to understand the effect of non-uniform emission upon beam quality in space charge dominated beams. See, for example, Refs. This work established the transverse uniformity specifications for low emittance beams of sufficient quality to drive x-ray FELs. The influence emittance and other beam characteristics have upon XFEL performance was determined from simulations and analytic theories. Recent XFEL experiments performed at the SLAC Linac Coherent Light Source measured how non-uniform emission affects XFEL performance. In these studies, laser patterns consisting of regular rectangular meshes and circular distributions resembling donut, bagel and Airy-like patterns were imaged onto the cathode, and the emittance, XFEL output and gain were measured. A space charge model was developed to analyze these data. For the rectangular mesh patterns used in the experiment, the model is in good agreement with emittance measurements. In this section this space charge model will be discussed and the emittance will be given in terms of the number of spatial modulations across the cathode diameter and the transverse variation in peak current. The space charge beamlet model analyzes the regular rectangular mesh pattern to derive the emittance growth due to regular high-spatial-frequency (several cycles across the beam diameter) patterns. A beginning assumption is that immediately after emission the space charge forces can be computed classically. Then, due to the non-uniform emission, the electrons within and at the edges of the beamlets will feel a radial space charge acceleration and the beamlets will expand. When the beamlets overlap, on average the beam becomes more uniform, the space charge force diminishes, and the electrons continue to expand with a constant radial velocity. The transverse emittance results from this radial velocity.
In most RF guns the cathode field is high and the beamlets overlap a few tens of picoseconds after emission. At the time of overlap, the beam is not yet relativistic, which justifies using classical electrostatics in the derivation. This point is discussed in more detail below. Once the beamlets merge, the emittance stops growing due to the nearly uniform density distribution and the onset of relativistic effects. This approach can also be used to compute the emittance of other patterns, such as the donut and bagel (for example, as in Ref.), since the physical assumptions can be applied to any emission pattern. In this section the regular rectangular pattern is analyzed to develop an expression useful in the spectral analysis of the high spatial frequency variations in the emission. The extension of the theory into a general Fourier analysis of the spatial distribution will be left for future studies. An alternative approach to the theoretical analysis of non-uniform emission assumes the emittance results from the beam's free energy, defined as the potential energy difference between the initial non-stationary and final stationary beam distributions. In the present case of an array of beamlets, the free energy is the transverse kinetic energy and hence produces transverse emittance. This is, in fact, just what is being computed in the expanding beamlet model. It begins with a non-stationary distribution of a regular pattern of beamlets whose potential energy is converted into kinetic energy and emittance as it becomes a stationary or static uniform distribution expanding with constant radial velocity. The beamlet space charge model assumes a beam transverse distribution with overall radius R and full length lb composed of many beamlets arranged in a rectangular pattern as shown in Figure 4. Each beamlet has an initial radius r0 with center-to-center spacing of 4r0 in a rectangular grid.
The transverse space-charge force causes each beamlet to expand and merge with its neighboring beamlets. This radial acceleration gives the beamlets additional transverse momentum, leading to larger emittance for the total beam. A basic assumption of the model is that the transverse space charge force goes to zero once the beamlets merge and form an approximately uniform distribution. Therefore, after merging, the non-uniformity space charge emittance becomes constant. The theory developed here indicates the beam is born with a constant emittance and remains so until the beam becomes uniform due to the overlap. At this point, the space-charge forces diminish due to the merging beamlets. Simulations and analytic modeling of this geometry show the beamlets overlap within tens of picoseconds; therefore the non-uniformity emittance is generated very close to the cathode, before the beam can become relativistic for even the very highest cathode RF fields. It is interesting to note that the electrons are still non-relativistic and the beamlets are merging at the head of each bunch even while the tail electrons are just leaving the cathode. The derivation for the radial envelope of a beam with uniform charge density during acceleration begins with the equation of motion for an electron at the beam edge. Reiser gives an excellent discussion and justification for the equation of motion of the boundary of the beam. Here K is the relativistic generalized perveance, first defined by Lawson in the late 1950's, and given as K ≡ 2I/(I0(βγ)³). In this expression, I is the peak current of the beam out to the envelope radius, R, and I0 is the characteristic current. The characteristic current depends upon the charge and mass of the beam particle. For electrons, it is given by I0 = 4πε0mc³/e ≈ 17 kA. The beam is assumed to initially have zero energy spread, with position-dependent velocity β(z) and energy γ(z) = 1 + αz.
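The generalized perveance is straightforward to evaluate. The sketch below computes the electron characteristic current I0 = 4πε0mc³/e and K = 2I/(I0(βγ)³); the 4 A peak current and γ = 1.5 are illustrative, roughly APEX-like numbers, not values from this passage.

```python
import math

# Generalized perveance K = 2*I / (I0 * (beta*gamma)^3) with the electron
# characteristic (Alfven) current I0 = 4*pi*eps0*m*c^3/e ~ 17 kA.
e    = 1.602176634e-19
m_e  = 9.1093837015e-31
eps0 = 8.8541878128e-12
c    = 2.99792458e8

I0 = 4 * math.pi * eps0 * m_e * c**3 / e   # characteristic current, A

def perveance(I, gamma):
    beta_gamma = math.sqrt(gamma**2 - 1)
    return 2 * I / (I0 * beta_gamma**3)

print(f"I0 = {I0 / 1e3:.1f} kA")
print(f"K at I = 4 A, gamma = 1.5: {perveance(4.0, 1.5):.2e}")
```

Note the (βγ)³ in the denominator: K diverges as γ → 1, which is the quantitative statement that space-charge forces matter most in the first few millimeters from the cathode.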
The model assumes the electrons begin at rest at the cathode (γ = 1, β = 0) and experience constant acceleration thereafter due to the applied electric field. The electron's normalized rate of energy change along the longitudinal axis is defined as α ≡ dγ/dz = eE0/mc². As expected, this longitudinal acceleration plays a key role in the beam's transverse dynamics. The radial envelope equation of motion is solved by first multiplying both sides of Eqn. by dR/dz so one can write, The generalized perveance, K, does not depend upon R. This is because the same charge, and hence current, is always enclosed by the envelope radius, R. This property, in fact, is used to define the beamlet and leads to the following integral equation, Integrating both sides gives For now, assume the initial angle is zero, R'0 = 0; then the next integration becomes, The initial, z = 0, envelope radius is R0 and the initial radial angle is R'0. The upper limit on the z-integral is denoted by ze for the end of the z-integration. Expressing the left-hand side in terms of the dimensionless variable u = αz, Since there appears to be no known analytic solution for this integral, it is argued that its numerical solution is reasonably well approximated by a function proportional to the 4th root of u. The level of agreement between this approximation and the exact (numerical) integral is illustrated in Figure 6. Figure 6 indicates that the approximation is reasonable up to u ~ 10. Therefore, if α = 40 m⁻¹ this approximation is good out to z = 10/40 = 0.25 m. Since the active length of the UHF gun is much less at 4 cm, or u = 1.6, using this function instead of the more complicated integral is a reasonably valid simplification. However, the approximation begins to fail for high-field guns. For example, the LCLS-I gun operates at α = 100 m⁻¹ and the gun's active length is 0.12 m, giving u = 12, which pushes the limits of these approximate functions.
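The dimensionless parameter u = αz is easy to check against the examples above. The sketch below also evaluates α = eE0/mc², the rate of energy gain for a constant field E0 (the 20 MV/m field used to illustrate α ≈ 40 m⁻¹ is our assumption).

```python
# The acceleration parameter alpha = d(gamma)/dz = e*E0/(m*c^2) for a constant
# field E0, and the dimensionless variable u = alpha*z used in the envelope
# approximation. The 20 MV/m example field is an assumption.
mc2_eV = 0.511e6  # electron rest energy, eV

def alpha_per_m(E0_MV_per_m):
    return E0_MV_per_m * 1e6 / mc2_eV   # d(gamma)/dz, 1/m

print(f"20 MV/m  -> alpha ~ {alpha_per_m(20):.0f} /m")
print(f"alpha = 40/m over a 4 cm gap:   u = {40 * 0.04:.1f}")
print(f"alpha = 100/m over 0.12 m:      u = {100 * 0.12:.0f}")
```

A ~20 MV/m UHF-gun field indeed gives α close to 40 m⁻¹ and u = 1.6 over a 4 cm gap, safely inside the u ≲ 10 validity range, while the LCLS-I case at u = 12 sits just outside it.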
Although the approximation for the integral can certainly be improved, this paper will use the $2u^{1/4}$ approximation, since it captures many of the important effects occurring during the beam's acceleration from rest and is mathematically simple. And finally, after putting it all together and solving for $R$, the beam envelope radius as a function of distance from the cathode is found (Eqn. ), along with the angle an electron at the envelope radius makes with the z-axis, $R' = dR/dz_e$. It is useful to plot the envelope radius at the exit of the gun vs. the beginning radius at the cathode. Such a plot of $R(z_e)$ vs. $R_0$ is shown in Figure 7 for APEX-like parameters ($\alpha = 40\ \mathrm{m^{-1}}$); the dashed curve is linear with unit slope. The calculation shows that for large initial envelope radii, the exit envelope approaches the value of the initial radius. This is because the space-charge forces become negligibly small at large beam sizes, and since there are no other focusing or defocusing fields in this model, the beam drifts without growing larger. The difference between initial and final beam sizes falls as $1/R_0^2$ for large $R_0$. However, the envelope radius grows considerably for small values of $R_0$. This same behavior is found numerically using GPT, as shown later in Figure 22. This solution provides an easily quantifiable distinction between emittance-dominated and space-charge-dominated beams. In the present model, the second term inside the brackets of Eqn. is due to space-charge forces, based upon the assumptions of a radially symmetric, uniform-charge-density beam with constant current I and radius R in a constant longitudinal accelerating field with no transverse components.
These analytic functions for the beam's envelope as it is accelerated in the presence of space-charge forces provide scaling laws and a useful understanding of its evolution in transverse phase space, but the model also requires some physics input, as well as some assumptions about the geometry, to compute the emittance of the overlapping beamlets. The normalized emittance for an uncorrelated distribution in xx' phase space is

$$\epsilon_{n,x} = \frac{\sigma_x \sigma_{p_x}}{mc}$$

Here it is assumed that the electrons diverge radially from the center of each beamlet and that the emission from the finely distributed beamlets is everywhere the same. Therefore, the emittance can be written as the divergence of each beamlet, uniformly distributed across the full beam area, times the full beam size. This same assumption is used to compute the intrinsic emittance. Thus, the beamlet value for the divergence and the full-beam x-rms size will be used below when deriving the emittance. If in addition it is assumed that the distributions in both x and $p_x$ are uniform, then the rms values of the x and $p_x$ distributions are half the envelope values, $\sigma_x = R/2$ and $\sigma_{p_x} = p_r/2$. Given Eqn. and $\beta\gamma = \sqrt{\gamma^2 - 1}$, the radial momentum $p_r = mc\,\beta\gamma\,R'$ of an electron at the beamlet envelope is easily written as a function of $z_e$. The mesh pattern previously shown in Figure 4 has $n_s$ beamlets or current modulations across the beam diameter. Figure 8 provides more detail of the pattern, showing the beamlet center-to-center spacing is assumed to be 4 times the beamlet radius.
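The factor of one-half relating the rms size to the envelope radius of a uniformly filled disk ($\langle x^2\rangle = R^2/4$) can be checked by direct sampling:

```python
import math, random

def disk_rms_x(R, n=200000, seed=1):
    """Monte Carlo rms x-size of a uniformly filled disk of radius R."""
    rng = random.Random(seed)
    s = 0.0
    for _ in range(n):
        r = R * math.sqrt(rng.random())   # uniform areal density: r = R*sqrt(u)
        phi = 2.0 * math.pi * rng.random()
        x = r * math.cos(phi)
        s += x * x
    return math.sqrt(s / n)

# Analytically <x^2> = R^2/4, so the rms size is R/2.
print(disk_rms_x(1.0))   # ~0.5
```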
Relating this beamlet spacing to the modulation period gives the initial beamlet radius in terms of the full beam envelope radius, $r_{b,0} = R_0/(2 n_s)$. Therefore, the peak current of each beamlet scales as the inverse of the modulation spatial frequency squared: for a uniform charge density, $I_b = I/(4 n_s^2)$. The x-plane emittance is then found from a chain of relations combining the full-beam rms size with the beamlet divergence, and with the help of the relations for the beamlet envelope radius and current, the emittance is found to be a rather simple expression. Since the beamlets have all overlapped, and the forces have been washed out by the smearing of the charges, long before $\gamma$ is ever close to 2, the corresponding term inside the radical can usually be ignored, and the emittance due to space charge of a rectangular mesh of beamlets reduces to the scaling

$$\epsilon_{n,sc} \propto \frac{I}{I_0\,\alpha\,n_s}$$

It is worth noting that neither the full expression nor this simplified scaling depends upon the beam size except through $n_s$, the number of spatial periods across the diameter. Both scale linearly with the current, which is also observed experimentally. In addition, the mesh's space-charge emittance decreases as the inverse of the product of the acceleration and the spatial frequency, $\alpha n_s$. Thus higher cathode fields reduce this emittance as the inverse of the field, and the low spatial frequencies are more important than the higher spatial frequencies. This expression for the emittance can be compared with experiments performed at the LCLS photocathode injector. In these beam studies, two very different mesh-size screens were placed in the drive laser beam and imaged onto the photocathode of a high-field, 1.6-cell, s-band gun to produce beams with each mesh pattern. The experimental emittances and their analysis are given along with images of the virtual cathode in Ref. . This earlier paper presents a non-relativistic version of this analysis and, although it gives the correct magnitude for the emittance, its emittance dependences upon the beam current and size are wrong and should be replaced with the above results.
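The geometric bookkeeping of the mesh model can be sketched as follows. The factor of 4 in the beamlet current is a consequence of the stated assumptions (uniform charge density, and $n_s$ beamlets spanning the diameter with center-to-center spacing of 4 beamlet radii); the example numbers are illustrative.

```python
def beamlet_radius(R0, n_s):
    # n_s beamlets across the diameter with spacing 4*r_b:
    # n_s * (4 * r_b) = 2 * R0  ->  r_b = R0 / (2 * n_s)
    return R0 / (2.0 * n_s)

def beamlet_current(I, n_s):
    # uniform density: I_b = I * (pi r_b^2)/(pi R0^2) = I / (4 * n_s^2)
    return I / (4.0 * n_s ** 2)

# Example: 1 mm envelope radius, 10 modulation periods, 40 A peak current
R0, n_s, I = 1e-3, 10, 40.0
print(beamlet_radius(R0, n_s))   # 5e-5 m
print(beamlet_current(I, n_s))   # 0.1 A per beamlet
```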
Figure 9 plots the theoretical emittance as a function of distance from the cathode, indicating a nearly constant emittance out to a few mm from the cathode for the mesh patterns. The figure shows the beam envelope radius increases three or more times over this distance, mixing the beamlet charge distributions in real space to give a globally (i.e., on the scale of the cathode radius) uniform charge density. This charge uniformity turns off the space-charge (s-c) force and ends the s-c emittance growth of the mesh. At this point the electrons drift with constant radial velocity, and hence constant emittance. Where this transition occurs and how quickly it occurs is determined by the beam's acceleration. For the meshes shown in the figure, the beamlet emittance due to s-c should stop growing once overlap is complete; hence the mesh emittance becomes whatever value it has at the z where mixing is complete. Closer to the cathode, the emittance suddenly jumps to a non-zero constant value produced instantaneously by the radial space-charge field when the beam is born. Eqn. shows the envelope angle diverges at the cathode, $z = 0$. Fortunately, it diverges slowly enough (as $1/\sqrt{z}$) that when multiplied by $\beta\gamma$ to normalize, it produces a finite and constant emittance. Eqn. gives this instantaneous emittance jump due to the combined radial s-c forces and longitudinal acceleration at the cathode.

RF Emittance

The RF emittance is the projected emittance due to the time-dependent RF lens of the gun and is minimized by having the bunch on crest at the exit of the last gun cell as well as by balancing the cell-to-cell RF field amplitudes. If one assumes the length of the iris between cells is short, then the beam size is the same at both the exit and entrance of neighboring cells. In this case, for a $\pi$ cell-to-cell phase shift, the defocusing field at the exit of each cell is cancelled by the entrance focus of the next cell.
However, the last cell's exit field is not cancelled, leaving a strong, time-dependent RF lens at the exit of the gun. This time-varying lens changes each slice's divergence along the bunch, producing a projected emittance. The total emittance can be expanded in powers of the rms bunch length, $\sigma_\phi$, and combined as the sum of the squares of the first-order and second-order RF emittances. The total RF emittance is given as

$$\epsilon_{n,rf} = \sqrt{\epsilon_1^2 + \epsilon_2^2}$$

The first- and second-order emittances have been computed by Kim and are summed in quadrature to give the total RF emittance. Here $\sigma_x$ is the rms beam size and $\sigma_\phi$ is the rms bunch length in radians at the RF frequency; both are evaluated at the exit of the gun. $E_{rf}$ is the peak RF field of the gun and $\phi_e$ is the electron phase relative to the RF waveform when the electron bunch reaches the exit of the gun. The total, first- and second-order projected RF emittances as functions of the exit phase are shown in Figure 11. The plots indicate that there is always an RF emittance: even at the minimum of the linear term, the second-order term is at its maximum. Eqn. shows the second-order emittance grows as the square of the bunch length, which in practice limits the operating bunch length to approximately ten degrees of RF phase. The second-order emittance can be eliminated by adding a third harmonic of the RF field in a two-frequency RF gun. The bunch length given in the table was measured with a transverse deflecting RF cavity at 135 MeV. The RF emittance is computed using the experimental bunch lengths at 20 pC, 250 pC and 1 nC. The experimental projected emittance is significantly higher and is shown to illustrate that the RF emittance is a small contributor relative to the other emittance sources. The magnitude of the RF emittance relative to the other emittances and the total emittance is discussed later in the paper.
Chromatic Aberration of the Gun Solenoid

Due to the strong defocusing of the RF gun it is necessary to use a comparably strong focusing lens to collimate and match the beam into the high-energy booster linac. If this focusing is done with a solenoid, then its focal strength in the rotating frame of the electrons is

$$\frac{1}{f} = K \sin(KL) \quad\text{with}\quad K \equiv \frac{B_0}{2(B\rho)}$$

where $B_0$ is the peak interior field of the solenoid, L is the solenoid effective length, $(B\rho) = p/e$ is the magnetic rigidity, e is the electron charge and p is the beam momentum. The rigidity can be expressed in the useful units $(B\rho)\,[\mathrm{kG\cdot m}] = 33.356\,p\,[\mathrm{GeV}/c]$, with p being the electron momentum. It can be shown that the normalized emittance due to the chromatic aberration of a lens is

$$\epsilon_{n,chromatic} = \beta\gamma\,\sigma_{x,sol}^2 \left|\frac{\partial(1/f)}{\partial p}\right|\sigma_p$$

Here $\beta$ is the beam velocity divided by the speed of light, $\gamma$ is the beam's Lorentz factor, $\sigma_{x,sol}$ is the transverse rms beam size at the entrance to the solenoid and $\sigma_p$ is the rms momentum spread of the beam. Using Eqn. in Eqn. gives the chromatic emittance as

$$\epsilon_{n,chromatic} = \sigma_{x,sol}^2\,K\,\left|\sin(KL) + KL\cos(KL)\right|\,\frac{\sigma_p}{mc}$$

Figure 12 is a plot of the chromatic emittance of the solenoid as a function of the rms energy spread, as given by Eqn. and as simulated by GPT. The beam kinetic energy is 6 MeV and the solenoid effective length is 19.35 cm with a field of 2.4 kG. The initial beam had zero emittance (zero divergence) with a 1 mm-rms transverse beam size; the same conditions are assumed in the above derivation. There is excellent agreement between the analytic and numerical approaches. Both Eqn. and the simulation assume the initial beam has zero emittance and is perfectly collimated going into the solenoid. Ranges of typical projected and slice electron energy spreads show the projected chromatic emittance is ~0.3 microns and the slice chromatic emittance is 0.02 to 0.03 microns. The LCLS projected emittance measured at 250 pC is 0.7 microns.
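A sketch of this estimate, assuming the relations $1/f = K\sin(KL)$ and $\epsilon_{n,chromatic} = \sigma_x^2 K|\sin KL + KL\cos KL|\,\sigma_p/mc$ with LCLS-like solenoid parameters, reproduces the ~0.3 micron (projected, 20 keV) and ~0.02 micron (slice, 1 keV) values quoted above; the exact momentum used is an assumption.

```python
import math

MC2 = 0.511  # electron rest energy [MeV]

def solenoid_K(B0_kG, p_MeV):
    """K = B0/(2*(B*rho)); rigidity (B*rho)[kG*m] = 33.356 * p[GeV/c]."""
    brho = 33.356 * (p_MeV / 1000.0)  # kG*m
    return B0_kG / (2.0 * brho)       # 1/m

def chromatic_emittance(sigma_x, B0_kG, L, p_MeV, sigma_p_MeV):
    """eps_n = sigma_x^2 * K * |sin(KL) + KL*cos(KL)| * sigma_p/(m c), in meters."""
    K = solenoid_K(B0_kG, p_MeV)
    KL = K * L
    return sigma_x**2 * K * abs(math.sin(KL) + KL * math.cos(KL)) * (sigma_p_MeV / MC2)

# LCLS-like: 0.464 kG*m integrated field over 0.1935 m, p ~ 6 MeV/c (assumed),
# sigma_x = 1 mm at the solenoid entrance.
B0 = 0.464 / 0.1935
eps_proj  = chromatic_emittance(1e-3, B0, 0.1935, 6.0, 0.020)  # 20 keV projected
eps_slice = chromatic_emittance(1e-3, B0, 0.1935, 6.0, 0.001)  # 1 keV slice
print(eps_proj, eps_slice)   # ~3e-7 m and ~1.6e-8 m
```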
While the solenoid's chromatic aberration can be a significant part of the projected emittance, its contribution to the slice emittance is much less due to the small slice energy spread of less than a keV. Thus, the chromatic emittance for a slice is only ~0.02 microns/mm-rms. It is also important to note that since the beam size at the solenoid lens enters to the second power in Eqn. , the solenoid's chromaticity can introduce considerable emittance if the beam size at the solenoid is large. In practice, the beam size at the solenoid varies widely with the cathode size and the bunch charge, which strongly influences the projected emittance. These effects are discussed in later sections of the paper.

The Solenoid's Geometric Aberration

All magnetic-field solenoids exhibit a 3rd-order angular aberration, known as the spherical aberration in classical light optics. The fields producing this aberration are dominantly located at the ends of the solenoid, because the aberration depends upon the second derivative of the axial field with respect to the beam direction. While in theory the spherical aberration could be computed directly from the solenoid's magnetic field, in practice this is difficult and does not account for all the important details of the beam dynamics. Therefore, to numerically isolate the geometric aberration from other effects, a simulation was performed with only the solenoid followed by a simple drift. Maxwell's equations were used to extrapolate the measured axial magnetic field, Bz(z), to obtain the radial fields; the axial field is shown below in Figure 15. Following tradition, the aberration is illustrated using an initial square, 2 mm × 2 mm, beam distribution. The simulation assumed perfect collimation (zero divergence = zero emittance), zero energy spread and an energy of 6 MeV. The transverse beam profiles given in Figure 13 show how an otherwise "perfect" solenoid produces the characteristic "pincushion" distortion.
A 4 mm × 4 mm (edge-to-edge) object gives 0.01 micron rms emittance, while a 2 mm × 2 mm square results in only 0.0025 microns.

Figure 13: Ray-tracing simulation of the geometric aberration of the LCLS gun solenoid. Left: the initial transverse particle distribution before the solenoid, with zero emittance and energy spread. Center: the transverse beam distribution slightly before the beam focus after the solenoid, illustrating the third-order distortion. Right: the beam distribution immediately after the beam focus, showing the third-order distortion evolving into the iconic "pincushion" shape of the rotated geometric aberration.

Figure 14 plots the simulated emittance due to the geometric aberration as a function of rms beam size at the entrance of the solenoid for an initially uniform, circular beam with zero initial emittance. The initial beam has an energy of 6 MeV with zero energy spread. The points are the simulation and the green curve gives the 4th-order polynomial fit.

Anomalous or Stray Quadrupole Fields in a Solenoid Magnet

Beam studies at the SSRL Gun Test Facility (GTF) showed the beam was astigmatic (unequal x- and y-plane focusing), which was due either to the single-side RF feed or to magnetic field asymmetries of the gun solenoid. To understand and distinguish between these effects, the solenoid's multipole magnetic field was measured using a rotating coil. The magnetic measurements showed small quadrupole fields at the ends of the solenoid, with equivalent focal lengths at 6 MeV of 20 to 30 meters for the GTF solenoid. Even though these fields were weak, it was decided to install normal and skew quadrupole correctors inside the bore of the solenoid to correct them. As described below, beam measurements show these correctors have a relatively strong influence on the emittance. Technical details of why and how the correctors were incorporated into the gun solenoid are given in Ref. , and their use during operation is described in Ref. .
This section discusses the dynamics of a beam in combined axial and quadrupole magnetic fields. The interested reader is directed to Ref. for further details. Figure 15 shows the axial magnetic field and the quadrupole magnetic field and its angular orientation, or phase angle, along the beam axis of the LCLS solenoid. The quadrupole field was measured using a rotating coil 2.5 cm long with a 2.8 cm radius; this is the radius at which the quadrupole field is given in the figure. The quadrupole phase angle is the angular rotation of the poles relative to an aligned quadrupole, and is the angle of the quadrupole north pole relative to the y-axis (left when travelling in the beam direction) in a beam-centric, right-handed coordinate system. In this coordinate system, a normally aligned quadrupole has a phase angle of 45 degrees. The difference in phase angle between the entrance (z = -9.6 cm) quadrupole field and the exit (z = +9.6 cm) field is close to 90 degrees. Thus, these anomalous end fields have opposing polarities which reverse sign when the solenoid's polarity is reversed. These LCLS solenoid fields are qualitatively similar to those measured previously for the GTF solenoid, although the overall magnitude of the fields is lower: the LCLS solenoid has an equivalent focal length of approximately 50 meters due to these small quadrupole fields, while the GTF solenoid's anomalous quadrupole fields corresponded to 20 to 30 meters. As described earlier, the correction of these anomalous quadrupole fields was done by installing normal and skew quadrupoles inside the bore of the solenoid. The effect these correction quadrupoles have upon the emittance is quite profound, as can be seen in Figure 16, where the measured emittance for 1 nC and 250 pC is plotted vs. the normal corrector quadrupole strength.
The Emittance due to the Anomalous Quadrupole Fields

The beam emittance due to these anomalous quadrupole fields can be computed both in simulation and analytically. The analysis begins by assuming a simple thin quadrupole lens followed by a solenoid, with the 4×4 x-y beam coordinate transformation given by Eqn. . Here $B_0$ is the interior peak axial magnetic field of the solenoid, and $f_q$ is the focal length of the anomalous quadrupole field located at the entrance to the solenoid. The beam is rotated through the angle KL by the solenoid. The 4×4 covariance matrix of the beam after the combined quadrupole and solenoid is then computed, with the x-plane emittance given by the determinant of the 2×2 x-x' submatrix,

$$\epsilon_x = \sqrt{\det\begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{12} & \Sigma_{22} \end{pmatrix}}$$

And finally, the normalized emittance due to an anomalous quadrupole field near the solenoid entrance is found to be

$$\epsilon_{n,quad} = \beta\gamma\,\frac{\sigma_{x,sol}\,\sigma_{y,sol}}{f_q}\,\left|\sin(2KL)\right|$$

The x and y transverse rms beam sizes at the entrance to the solenoid are $\sigma_{x,sol}$ and $\sigma_{y,sol}$. Figure 17 compares this simple formula with a particle-tracking simulation, as done for the geometric aberration. In this case the simulation was done for a solenoid followed by a drift, with a weak quadrupole field overlapping the solenoid field. The initial beam had zero emittance, zero energy spread, and was circular and uniform. No space-charge forces are included in the simulation. Figure 17 shows the normalized emittances given by Eqn. and the simulation, plotted as a function of the rms beam size at the solenoid entrance. The anomalous quadrupole focal length is 50 meters at 6 MeV, approximately the same as computed from the magnetic measurements for the LCLS solenoid. Both the analytic theory and the simulation assume a short quadrupole field only at the solenoid's entrance. The simulated emittance is slightly larger since it includes both this quadrupole effect and the geometric aberration described above.
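The thin-lens result can be checked by a small Monte Carlo: kick a round, zero-emittance beam with a normal quadrupole, rotate it by KL, and compute the projected trace-space emittance from the second moments. The closed form $\sigma_x\sigma_y|\sin(2KL)|/f_q$ (times $\beta\gamma$ for the normalized value) is the relation being tested; the Gaussian beam and the omission of the solenoid's own symmetric focusing (which does not change the emittance) are assumptions of this sketch.

```python
import math, random

def projected_emittance(sigma, f_q, KL, n=100000, seed=2):
    """Trace-space x-emittance after a thin normal quad and a rotation by KL."""
    rng = random.Random(seed)
    xs, xps = [], []
    for _ in range(n):
        x = rng.gauss(0.0, sigma)    # round beam, zero initial divergence
        y = rng.gauss(0.0, sigma)
        xp, yp = -x / f_q, y / f_q   # thin normal-quad kick
        c, s = math.cos(KL), math.sin(KL)
        xs.append(c * x + s * y)     # solenoid rotation couples x and y
        xps.append(c * xp + s * yp)
    m2 = sum(v * v for v in xs) / n
    mp2 = sum(v * v for v in xps) / n
    mxp = sum(a * b for a, b in zip(xs, xps)) / n
    return math.sqrt(max(m2 * mp2 - mxp * mxp, 0.0))

sigma, f_q, KL = 1e-3, 50.0, 1.16
eps_mc = projected_emittance(sigma, f_q, KL)
eps_th = sigma * sigma * abs(math.sin(2 * KL)) / f_q
print(eps_mc, eps_th)   # the two agree to within statistics
```

Note how a 50 m focal length, essentially negligible as a lens, still produces a sizeable trace-space emittance once the solenoid rotation couples the planes.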
The good agreement verifies the model's basic assumptions and illustrates how even a very weak quadrupole field can strongly affect the emittance when combined with the rotation in a solenoid field. Eqn. is for a quadrupole plus solenoid where the anomalous quadrupole field is not rotated with respect to a normally oriented quadrupole. When the quadrupole is rotated about the beam axis by an angle $\varphi$, the result can be compared with a simulation for a 50-meter focal length quadrupole followed by a strong solenoid (focal length of ~15 cm), with the emittance plotted as a function of the quadrupole angle of rotation. In both the analytic theory and the simulation, the emittance becomes zero when $\varphi = n\pi/2 - KL$. The slight shift in angle between theory and simulation occurs because the simulated solenoid has fringe fields which are ignored in the theory. To model the full effect of skewed quadrupole field errors at both ends of the solenoid it is necessary to express the emittance for a quadrupole pair with rotation angles $\varphi_1$ and $\varphi_2$, and focal lengths $f_1$ and $f_2$, respectively. Following the same procedure used to derive Eqn. , one finds that even with no contribution from the exit quadrupole, the entrance quadrupole still appears skewed by the beam's rotation in the solenoid, and the emittance increases unless there is no entrance quadrupole field. In this case the emittance does not depend upon the polarity of the solenoid field. However, this is not true when either $\varphi_1$ or $\varphi_2$ is non-zero: Eqn. also shows that if the entrance quadrupole field is skewed, the emittance will depend upon the polarity of the solenoid field. Further details of this effect are discussed in the next section. Lastly, the formula indicates that adding skewed and normal quadrupole correctors near the solenoid can cancel this effect and completely recover the initial emittance.
Correcting the Solenoid's Anomalous Quadrupole Field Emittance

As just described, the emittance growth due to the solenoid's anomalous quadrupole fields can be compensated with the addition of skew and normal corrector quadrupoles. In the LCLS solenoid these correctors consist of eight long wires inside the solenoid field, four in a normal quadrupole configuration and four arranged at a skewed quadrupole angle of 45 degrees. Thus, since the corrector quadrupoles overlap the solenoid field, one would expect their skew angles to be added to KL, as done in the first term of Eqn. . The emittance due to the composite system of a rotated quadrupole in front of the solenoid, the two corrector quadrupoles inside the solenoid, and the exit rotated quadrupole can be computed as a sum of terms. The first and fourth terms inside the absolute-value brackets are due to the entrance and exit anomalous quadrupole fields, with focal lengths $f_1$ and $f_2$ and skew angles $\varphi_1$ and $\varphi_2$, respectively. The second and third terms are approximations for the normal and skew corrector quadrupoles, with focal lengths $f_{normal}$ and $f_{skew}$, respectively. The normal and skew corrector quadrupoles are located before the solenoid and are rotated 0 and $\pi/4$, respectively, about the beam direction or z-axis. As expected, for no solenoid field KL = 0 and there is no emittance due to the normal and skew quadrupoles. This expression illustrates how the solenoid's rotation of the beam amplifies the effect even weak quadrupole fields have upon the 2D emittance. Since the 4D phase-space change is correlated, the 4D emittance remains zero; however, the projected emittances of the 2D subspaces xx' and yy' can become quite large. But in the end, the 4D emittance also increases as the correlation becomes lost in subsequent beam transport and optics. Figure 19 illustrates the emittance due to these effects as a function of the normal and skew corrector quadrupole focal lengths using Eqn. .
The entrance and exit anomalous quadrupole focal lengths are 50 meters and their rotation angles, as indicated by Figure 15, are -15 and 75 degrees, respectively. The x and y rms beam sizes at the solenoid entrance are 1 mm. The red curves are for the normal corrector quadrupole only, with the skew corrector quadrupole off, while the blue curves are for the skew quadrupole only, with the normal quadrupole off. The zero of the emittance is shifted between the two correctors since the overall rotation necessary to correct the error fields is neither normal nor skewed, but something in between. Both solid curves asymptotically converge to the uncorrected emittance as the correctors are turned off (infinite focal length). The figure also shows the effect of reversing the polarity of the solenoid, with the corresponding emittances plotted as dashed lines. In this case the uncorrected emittance clearly approaches a much smaller value. As mentioned earlier, the skewed anomalous quadrupole fields make the resulting emittance growth and focusing of the solenoid dependent upon its polarity, providing an experimental signature that the fields are skewed. Therefore, if the anomalous fields are skewed, one polarity of the solenoid results in a lower emittance than the other. This can be seen in Figure 19 in the limit of very weak (infinite focal length) correctors. Finally, it is worth noting that the skew and normal quadrupoles can be used to correct for field asymmetries of RF couplers. The DC quadrupole fields need to uncouple the RF's anomalous quadrupole field only at the time the beam reaches the coupler field. Therefore, the coupler field asymmetry can be nearly perfectly cancelled with a simple, weak skewed quadrupole field which removes the x-y correlation produced by the asymmetric coupler fields.
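The polarity signature just described can be illustrated with the single-quad thin-lens relation: the trace-space emittance of a quad skewed by $\varphi$ followed by a solenoid rotation KL goes as $\sigma^2|\sin 2(\varphi + KL)|/f_q$, so reversing the solenoid polarity ($KL \to -KL$) changes the emittance only when $\varphi \ne 0$. The specific numbers below (1 mm beam, 50 m focal length, $\varphi_1 = -15^\circ$) follow the text; treating the skewed case with this single-quad form is a simplifying assumption of the sketch.

```python
import math

def quad_sol_emittance(sigma, f_q, phi, KL):
    """Trace-space emittance from a thin quad skewed by phi plus rotation KL."""
    return sigma * sigma * abs(math.sin(2.0 * (phi + KL))) / f_q

sigma, f_q, KL = 1e-3, 50.0, 1.16
phi1 = math.radians(-15.0)   # entrance-quad skew angle from the measurements
e_plus  = quad_sol_emittance(sigma, f_q, phi1,  KL)   # one solenoid polarity
e_minus = quad_sol_emittance(sigma, f_q, phi1, -KL)   # reversed polarity
print(e_plus, e_minus)   # unequal: the skewed field makes the emittance polarity-dependent
```

For an unskewed quad ($\varphi = 0$) the two polarities give identical emittances, and the emittance vanishes entirely when $\varphi = \pi/2 - KL$, i.e. when the solenoid rotation undoes the skew.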
Summing the Effects of the Solenoid

In this section the solenoid's chromatic, geometric, and anomalous quadrupole emittances are compared as a function of the beam size at the solenoid. A general conclusion is that the emittance due to the solenoid's aberrations can be minimized with a small beam size in the solenoid. For the solenoid's geometric aberration, the emittance is proportional to the 4th power of the beam size (see Figure 14). With the assumption that the beam is circular, with $\sigma_x = \sigma_y \equiv \sigma_{sol}$, the anomalous quadrupole emittance is

$$\epsilon_{n,quad} = \beta\gamma\,\frac{\sigma_{sol}^2}{f_q}\,\left|\sin(2KL)\right|$$

The chromatic emittance was given above as

$$\epsilon_{n,chromatic} = \sigma_{sol}^2\,K\,\left|\sin(KL) + KL\cos(KL)\right|\,\frac{\sigma_p}{mc}$$

These three expressions are compared in Figure 20 for the 19.35-cm-long LCLS solenoid using the Bz(z) field profile given by magnetic measurements. The solenoid interior field is 2.6 kG. The beam total energy is 6 MeV, and curves are given for rms energy spreads of 1 and 20 keV, corresponding to typical slice and projected rms energy spreads. The anomalous quadrupole emittance is computed with a 50-meter focal length. Figure 20 indicates the solenoid's largest contribution to the emittance comes from the quadrupole-solenoid aberration. It is important to comment that the 50-meter anomalous quadrupole focal length was chosen to illustrate the effect; in general, the anomalous focal length is rarely measured and can vary greatly depending upon the details of the solenoid design. Fortunately, this emittance can be corrected and made essentially zero with normal and skewed correction quadrupoles. The next contributor is the chromatic aberration. The figure shows the effect for a 1 keV rms energy spread, which is estimated to be the relevant slice energy spread. The contribution is much larger for the projected energy spread of ~20 keV, as measured for the LCLS beam at 6 MeV. Both the chromatic and geometric aberrations, since they depend upon the beam size to the 2nd and 4th power respectively, are controlled by reducing the beam size at the solenoid.
The anomalous quadrupole emittance can be the largest of the emittances and is due to the correlation between the x and y components in 2D trace space caused by the anomalous quadrupole being skewed by the beam's rotation in the solenoid. Thus, the trace-space x and y emittances can be large while the 4D emittance remains zero. This means the transverse phase space can be rotated with corrector quadrupoles to undo the correlation and correct the anomalous quadrupole trace-space emittance. In light optics, the geometric aberration is known as the spherical aberration and is present in all lenses with rotational symmetry. For electrons, the aberration is a third-order dependence of the divergence upon beam size, making the emittance of spherical aberrations scale as the 4th power of the beam size. In the paraxial ray equation there is a third-order radial term whose coefficient is proportional to the sum of the field strength to the 4th power and the curvature of the solenoid field. Therefore, the strength of the aberration depends upon the shape and strength of the magnetic field. The form of the coefficient of the $r^3$ term suggests that the spherical aberration of a solenoid can be minimized with proper shaping of the field at the operational field strength. The anomalous quadrupole emittance can also be canceled with a solenoid powered such that each half has opposite axial fields. However, the chromatic emittance is unaffected by this configuration, since it is an even function of K. And as shown by Eqn. , the geometric aberration is also unchanged by the change in polarity, and in fact is increased by the additional exit and entrance fringe fields between the two halves. However, this scheme does eliminate beam steering due to mechanical misalignment, and the additional emittance due to the geometric aberration is typically small. Eqn.
gives the chromatic emittance for an rms momentum spread, $\sigma_p$, due to either an uncorrelated or a p-z correlated momentum distribution. For a slice, the momentum spread is small and uncorrelated. The projected momentum spread is typically much larger and is usually due to a correlation between the momentum and the longitudinal position along the bunch, although other correlations are possible. This correlation can take one of two basic shapes in phase space. The first occurs when the bunch launch phase is set to produce a linear chirp on the bunch; the case in which the bunch is being compressed in the gun is shown in the figure. The second phase-space shape occurs when the launch phase is set to place the beam on the crest of the RF, which gives the center electrons the highest energy with the head and tail electrons both at lower energies.

Beam Size at the Solenoid vs. Cathode Radius

Estimating the solenoid emittance requires knowing the beam size at the solenoid, and in most cases the beam size was assumed to be 1 mm-rms. However, in many cases the beam size at the solenoid lens is considerably larger. Figure 21 shows a simulation of the beam size from the cathode to the entrance of the LCLS booster linac, over a distance of approximately 90 cm. The legend to the right gives the bunch charge and the cathode beam size for each curve. The behavior common to all cases is that the beam expands from the cathode, with a small bump in the size at the gun irises near 8 cm, and is then focused by the solenoid, which is 19.35 cm long and centered near 30 cm from the cathode. The figure clearly shows the solenoid beam size depends upon both the charge (1, 10, 100 and 200 pC) and the initial beam size at the cathode (0.01, 0.1, 0.3 and 0.6 mm-rms). The laser spot size on the cathode strongly affects the beam envelope. Indeed, the beam size at the solenoid for 1 pC can actually be larger than that of a 10 pC bunch if the laser spot is too small.
The beam size at the solenoid for 200 pC is 1.8 mm-rms which, per Figure 14, corresponds to an emittance of 0.3 microns from the effects discussed above. Therefore, these phenomena can be large contributors to the observed beam emittance, making it important to reduce the beam size at the solenoid when designing future guns. Comparing the effects of the intrinsic, RF, space-charge, and solenoid chromatic and geometric aberrations requires a consistent set of beam sizes at the cathode, gun exit and solenoid entrance. The beam sizes at the gun and solenoid are determined by the cathode radius and the space-charge forces. Knowing these beam sizes as a function of the cathode radius gives a consistent analysis of the total emittance and its parts. The beam sizes are simulated using the GPT code to model an s-band gun without and with space charge. Space charge will affect the beam emittance even with perfect laser shaping to limit the space-charge emittance: the remaining linear space-charge forces defocus the beam, making the beam larger at the gun exit and at the solenoid. This larger beam then increases the other non-charge-dependent emittances due to the RF, chromatic, geometric, and anomalous-quadrupole aberrations, as described earlier. The simulation is used to obtain the beam sizes at the gun exit and the solenoid entrance resulting from the linear space-charge force. These sizes are then used in the formulas derived above to obtain the emittance from the various emittance sources. This approach allows one to dissect the final emittance into its components. The beam size was simulated for a 1.6-cell s-band gun with a 115 MV/m peak field on the cathode. The laser launch phase with respect to the RF was 30 degrees and the bunch charge was 250 pC. The laser pulse shape was a longitudinal Gaussian with a FWHM of 6 ps, and the transverse distribution was circular and uniform.
The simulated beam sizes at the gun exit iris and at the entrance to the solenoid were evaluated at 8 and 15 cm from the cathode, respectively. The full 3D space-charge routine was used for the space-charge calculations. The rms beam size at these two locations was computed as a function of the cathode radius, i.e. the laser radius on the cathode. Space-charge-limited emission for these conditions and 250 pC occurs at a cathode radius of approximately 0.3 mm.

Figure 21: The rms beam size in millimeters as a function of distance from the cathode for a variety of beam sizes at the cathode and bunch charges. The initial transverse beam distribution is uniform with radius Rc. The curves are for the LCLS gun and solenoid system and are computed using GPT. The shaded region indicates the location of the solenoid axial field.

The dependence of the beam size upon the cathode size given in Figure 22 deserves some discussion. The curves show a minimum near a cathode radius of 0.8 mm, with the size increasing more rapidly for smaller cathode radii than for larger radii. The rise in beam size below 0.8 mm is due to the linear space-charge force, which increases as the cathode size is reduced; a strong effect of space-charge defocusing is seen for cathode radii less than ~0.8 mm. The simulation indicates that the space-charge defocusing adds linearly with the beam size due to the gun's RF optics. Above 0.8 mm, space-charge defocusing diminishes with increasing cathode radius and the beam sizes are dominated by the gun optics. A similar dependence upon size is observed in the analytic theory described earlier in this paper.

Figure 22: The rms beam size at the gun exit (blue) and at the solenoid entrance (red) as a function of the cathode radius. The calculation has been done without (dash) and with (solid) space charge using the 3D space-charge routine in GPT.
Summarizing These Effects

In this section the emittances described in the previous sections are compared and summed for the archetypical s-band RF gun and solenoid. The emittances are re-expressed in terms of the cathode radius using the simulation results shown in Figure 22 and are summed in quadrature to obtain the total emittance. The emittance due to the anomalous quadrupole field is not included in this analysis, since it is due to a correlation and is easily recovered. The total emittance is defined as the square root of the quadratic sum of the five emittances described earlier,

eps_n,total = sqrt( eps_n,intrinsic^2 + eps_n,SC^2 + eps_n,RF^2 + eps_n,chromatic^2 + eps_n,geometric^2 )

By writing each of the five emittances in terms of the cathode radius, one can easily compare the contribution each makes to the total emittance. The beam sizes plotted in Figure 22 are used for this purpose. For the comparison, assume n_s equals 10 periods of modulation across the bunch diameter and a current modulation depth of ten percent of the peak current. At 250 pC the peak current is 40 amperes, so the current modulation depth, ΔI, is 4 amperes peak-to-peak. The beam bunch is assumed to exit the gun on crest to give the minimum RF emittance. Here, as in the measurements, E0 is 115 MV/m and the bunch length is 0.74 mm-rms (σφ = 0.043 radians), as given in Table II for 250 pC. The rms beam size at the gun exit as a function of the cathode beam size is given in Figure 22. The chromatic emittance shown in Figure 18 assumes 20 keV/c for the rms momentum spread, σp; this is the projected energy spread observed at 6 MeV and 250 pC for the LCLS gun. Using the energy spread of a bunch slice in the same expression gives the slice chromatic emittance. The integrated solenoid field used in the measurements was 0.464 kG-m and the effective length is 0.1935 meters; these are the parameters of the LCLS solenoid. This field corresponds to a K of 5.99 m^-1, and KL is then 1.16.
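The quoted focusing strength can be checked from the stated solenoid parameters, and the quadrature sum is straightforward to evaluate once the five components are known. The sketch below is my own illustration, not code from the paper; it assumes K = B0/(2 Bρ) with p ≈ 6 MeV/c at the solenoid, and the component emittance values passed to the quadrature sum are placeholders.

```java
// Sketch (not from the paper): checks the quoted solenoid focusing numbers and
// forms the total emittance as the quadrature sum of the component emittances.
public class EmittanceBudget {

    // Solenoid focusing strength K = B0 / (2 * Brho), with Brho = p[MeV/c] / 299.792458 [T*m].
    // Assumption: p ~ 6 MeV/c, consistent with the 6 MeV gun energy quoted in the text.
    static double solenoidK(double integratedFieldKGm, double effLengthM, double pMeVc) {
        double b0 = (integratedFieldKGm * 0.1) / effLengthM; // peak field in tesla (1 kG = 0.1 T)
        double bRho = pMeVc / 299.792458;                    // magnetic rigidity in T*m
        return b0 / (2.0 * bRho);                            // K in 1/m
    }

    // Total emittance as the quadrature (root-sum-square) of the contributions.
    static double totalEmittance(double... parts) {
        double sumSq = 0.0;
        for (double e : parts) sumSq += e * e;
        return Math.sqrt(sumSq);
    }

    public static void main(String[] args) {
        double k = solenoidK(0.464, 0.1935, 6.0);
        // Reproduces the quoted values K ~ 5.99 1/m and KL ~ 1.16:
        System.out.printf("K = %.2f 1/m, KL = %.2f%n", k, k * 0.1935);
        // Placeholder component values in microns (illustrative only, not from the paper):
        System.out.printf("eps_total = %.3f um%n",
                totalEmittance(0.3, 0.2, 0.1, 0.4, 0.05));
    }
}
```

Evaluating the component emittances at each cathode radius and summing them this way reproduces the kind of emittance budget plotted in Figure 23.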
Once again, the graph in Figure 22 provides the solenoid entrance beam size with space charge as a function of the cathode radius. Figure 9 showed the 4th-power fit to the simulated geometric emittance as a function of beam size at the solenoid, where σx,sol(Rcathode) is in units of mm. These five projected emittances and their combined total emittance are plotted in Figure 23 as a function of the cathode radius. The space charge beam sizes at the gun exit and the solenoid entrance given in Figure 22 are used to compute the RF, chromatic and geometric emittances. The minimum in the beam size at the gun exit and solenoid near 0.9 mm cathode radius causes a local minimum in the geometric, chromatic and RF emittances. There is no minimum for the intrinsic and space charge emittances, since they originate close to the cathode, far from the gun exit and solenoid; they are linear functions of the cathode radius and are independent of the gun's optics. The minimum of the total emittance is shifted to a smaller cathode radius than the minimum of the gun and solenoid beam sizes, 0.7 mm vs. 1 mm, respectively. The measured projected emittance at 250 pC and a cathode radius of 0.6 mm is 0.7 microns.

Summary and Conclusions

This paper has attempted to identify and quantify the major sources of emittance in a photocathode RF gun and solenoid system. The principal emittances identified were the intrinsic, space charge, RF, solenoid aberration and anomalous quadrupole field emittances. The analysis used a combination of analytic and numerical techniques to derive expressions for these emittances. Simple equations for these emittances were given in terms of the cathode size, the beam size, the energy spread, the bunch length and the charge. A comparison of these effects was done for a typical s-band gun, showing that the chromatic, intrinsic and space charge emittances are the main contributors to the total projected emittance.
The RF emittance is approximately five times smaller than the intrinsic emittance, and the 4th-order geometric emittance is a decade smaller than the RF emittance. There is general agreement between this model and beam emittance measurements. This work describes physics-based models useful for investigating new photocathode gun designs. It suggests that further improvements in the performance of the photocathode gun can be made simply by reducing the beam size at the solenoid to minimize its aberrations. Non-uniform emission at the few-percent level seeds a space charge emittance roughly equal to the intrinsic emittance for 250 pC bunches. In addition, expressions for the intrinsic emittance and QE are given which include effective-mass effects of the pre-emission electron. This theory suggests that using cathodes with large anisotropic effective masses is a possible approach to achieving ultra-low intrinsic emittance. Recent advances in space charge emittance compensation, symmetric RF fields, and the beam dynamics of photocathode RF guns have enabled high-gain free electron lasers to become productive 4th-generation light sources. The next developments need to increase beam brightness by improving cathode performance in QE and intrinsic emittance, and by eliminating and correcting aberrations in the electron beam optics. Progress in these areas will produce ever more interesting physics and exciting new applications for high brightness electron beams.
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

import com.fasterxml.jackson.annotation.JsonInclude.Include;
import com.fasterxml.jackson.databind.ObjectMapper;

/**
 * @author billy zhang
 */
@SpringBootApplication
public class WebApplication {

    public static void main(String[] args) {
        SpringApplication.run(WebApplication.class, args);
    }

    /**
     * Customizes the JSON mapper so that null fields are omitted from serialized output.
     */
    @Bean
    public ObjectMapper objectMapper() {
        ObjectMapper mapper = new ObjectMapper();
        mapper.setSerializationInclusion(Include.NON_NULL);
        return mapper;
    }
}
The '1 Above' pub, where a massive blaze killed 14 people, did not follow fire safety norms and violated regulations on encroachment with obstructions blocking its emergency exit, police and civic officials said today. In its daily crime report, the police also said the pub's manager and other staff fled from the spot instead of helping the customers injured in the blaze. "No fire safety norms were followed by the pub and the management did not make any arrangement for the safe exit of its customers during the blaze," police said. They said there were hindrances created on the emergency exit way. "Negligence on the part of the pub led to the death of 14 customers and injuries to several others."
"The manager and other staff of the pub ran away from the spot without helping those injured in the blaze," police said. Meanwhile, a civic official told PTI the Brihanmumbai Municipal Corporation (BMC) had taken action against the pub more than three times for "violations". According to the official, the pub had obtained the fire safety and building permissions from the civic body in October 2016.
"However, as '1 Above' flouted the rules and regulations by way of encroachment and other violations, the BMC had taken legal action against its management on May 27 for using the open space for commercial activities," he said. Notices had been served on the pub on August 4, September 22 and October 27 this year by the BMC, asking it to stop encroaching on the open space, he said. "On August 2, we razed a portion of the pub for encroaching upon the open space. Thereafter, on October 22, we seized the open space, where it illegally served the customers. Despite that, the owners of the pub had indulged in violations," he added.
The fire started on the rooftop of the pub, which was hosting a birthday party, and spread rapidly through the building, killing 14 people, most of them women, shortly after midnight. The police have booked Hratesh Sanghvi, Jigar Sanghvi and Abhijeet Manka of C Grade Hospitality, which manages the pub, along with others, under IPC sections 304 (culpable homicide not amounting to murder), 337 (causing hurt by act endangering life or personal safety of others) and 338 (causing grievous hurt by act endangering life or personal safety of others). The case has been lodged at the N M Joshi Marg Police Station.
// s3_pear-web/src/main/java/co/edu/uniandes/csw/pear/resources/EnvioResource.java
/*
* To change this license header, choose License Headers in Project Properties.
* To change this template file, choose Tools | Templates
* and open the template in the editor.
*/
package co.edu.uniandes.csw.pear.resources;
import co.edu.uniandes.csw.pear.dtos.EnvioDetailDTO;
import co.edu.uniandes.csw.pear.ejb.EnvioLogic;
import co.edu.uniandes.csw.pear.entities.EnvioEntity;
import co.edu.uniandes.csw.pear.exceptions.BusinessLogicException;
import java.util.ArrayList;
import java.util.List;
import javax.enterprise.context.RequestScoped;
import javax.inject.Inject;
import javax.ws.rs.*;
/**
 * The JSON format of this object is as follows:
 {
 "id": 123,
 "duracion": 200,
 "recibido" : 0
 }
 *
 * @author js.cabra
 */
@Path("envios")
@Produces("application/json")
@Consumes("application/json")
@RequestScoped
public class EnvioResource {
/**
 * Connection to the logic layer
 */
@Inject
private EnvioLogic logic;
/**
 * <h1>POST /api/envios : Create a new envio.</h1>
 *
 * <pre>Request body: JSON {@link EnvioDetailDTO}.
 *
 * Creates a new envio with the information received in the request body
 * and returns an identical object with an id auto-generated
 * by the database.
 *
 * Response codes:
 * <code style="color: mediumseagreen; background-color: #eaffe0;">
 * 200 OK The new envio was created.
 * </code>
 * <code style="color: #c7254e; background-color: #f9f2f4;">
 * 412 Precondition Failed: The envio already exists.
 * </code>
 * </pre>
 * @param envio {@link EnvioDetailDTO} - The envio to be saved.
 * @return JSON {@link EnvioDetailDTO} - The saved envio with the auto-generated id attribute.
 * @throws co.edu.uniandes.csw.pear.exceptions.BusinessLogicException
 */
@POST
public EnvioDetailDTO crearEvento(EnvioDetailDTO envio)throws BusinessLogicException
{
return new EnvioDetailDTO(logic.createEnvio(envio.toEntity()));
}
/**
 * <h1>GET /api/envios : Get all envios.</h1>
 *
 * <pre>Finds and returns all the envios that exist in the application.
 *
 * Response codes:
 * <code style="color: mediumseagreen; background-color: #eaffe0;">
 * 200 OK Returns all the envios in the application.</code>
 * </pre>
 * @return JSONArray {@link EnvioDetailDTO} - The envios found in the application. If there are none, an empty list is returned.
 */
@GET
public List<EnvioDetailDTO> getEnvios() {
List<EnvioDetailDTO> dtos = new ArrayList<>();
logic.getEnvios().forEach( envio -> {
dtos.add(new EnvioDetailDTO(envio));
});
return dtos;
}
/**
 * <h1>GET /api/envios/{id} : Get an envio by id.</h1>
 *
 * <pre>Finds the envio with the id received in the URL and returns it.
 *
 * Response codes:
 * <code style="color: mediumseagreen; background-color: #eaffe0;">
 * 200 OK Returns the envio corresponding to the id.
 * </code>
 * <code style="color: #c7254e; background-color: #f9f2f4;">
 * 404 Not Found There is no envio with the given id.
 * </code>
 * </pre>
 * @param id Identifier of the envio being searched for. This must be a string of digits.
 * @return JSON {@link EnvioDetailDTO} - The envio found
 */
@GET
@Path("{id: \\d+}")
public EnvioDetailDTO getEnvio (@PathParam("id") Long id) {
String constante1= "El recurso /envios/";
String constante2= " no existe.";
EnvioEntity buscado = logic.getEnvio(id);
if ( buscado == null )
throw new WebApplicationException(constante1 + id +constante2 , 404);
return new EnvioDetailDTO(buscado);
}
/**
 * <h1>PUT /api/envios/{id} : Update the envio with the given id.</h1>
 * <pre>Request body: JSON {@link EnvioDetailDTO}.
 *
 * Updates the envio with the id received in the URL with the information received in the request body.
 *
 * Response codes:
 * <code style="color: mediumseagreen; background-color: #eaffe0;">
 * 200 OK Updates the envio with the given id using the information sent as a parameter. Returns an identical object.</code>
 * <code style="color: #c7254e; background-color: #f9f2f4;">
 * 404 Not Found. There is no envio with the given id.
 * </code>
 * </pre>
 * @param id Identifier of the envio to be updated. This must be a string of digits.
 * @param envio {@link EnvioDetailDTO} The envio to be saved.
 * @return JSON {@link EnvioDetailDTO} - The saved envio.
 * @throws co.edu.uniandes.csw.pear.exceptions.BusinessLogicException
 */
@PUT
@Path("{id: \\d+}")
public EnvioDetailDTO updateEvento (@PathParam("id") Long id, EnvioDetailDTO envio)throws BusinessLogicException {
if ( logic.getEnvio(id) == null )
throw new WebApplicationException("El recurso /envios/" + id + " no existe.", 404);
envio.setId(id);
return new EnvioDetailDTO(logic.updateEnvio(id, envio.toEntity()));
}
/**
 * <h1>DELETE /api/envios/{id} : Delete an envio by id.</h1>
 *
 * <pre>Deletes the envio with the id received in the URL.
 *
 * Response codes:<br>
 * <code style="color: mediumseagreen; background-color: #eaffe0;">
 * 200 OK Deletes the envio corresponding to the given id.</code>
 * <code style="color: #c7254e; background-color: #f9f2f4;">
 * 404 Not Found. There is no envio with the given id.
 * </code>
 * </pre>
 * @param id Identifier of the envio to be deleted. This must be a string of digits.
 * @throws co.edu.uniandes.csw.pear.exceptions.BusinessLogicException
 */
@DELETE
@Path("{id: \\d+}")
public void deleteEvento(@PathParam("id") Long id) throws BusinessLogicException {
if ( logic.getEnvio(id) == null )
throw new WebApplicationException("El recurso /envios/" + id + " no existe.", 404);
logic.delete(id);
}
}
|
Imaging features of hepatobiliary MRI and the risk of hepatocellular carcinoma development Abstract Objective This study aimed to determine whether hepatocellular carcinoma (HCC) risk and time to HCC development differ according to hepatobiliary magnetic resonance imaging (MRI) findings among people at risk for developing HCC. Materials and methods A total of 199 patients aged 40 years or older with liver cirrhosis or chronic liver disease who underwent gadoxetic acid-enhanced hepatobiliary MRI between 2011 and 2015 were analyzed. An independent radiologist retrospectively reviewed the MRI findings, blinded to clinical information, and categorized them into low-risk features, high-risk features and high-risk nodules. High-risk features were defined as liver cirrhosis diagnosed by imaging. High-risk nodules were defined as LR-3 or LR-4 nodules based on LI-RADS version 2018. The primary outcome was development of HCC within 5 years of MRI evaluation. Results HCC was diagnosed in 28 patients (14.1%). No HCC developed in those with low-risk features (n = 84). The cumulative incidence rates of HCC at 1, 2, 3 and 5 years were 0%, 2.3%, 13.4% and 22.1% for those with high-risk features (n = 64), and 19.1%, 31.8%, 37.3% and 46.7% for those with high-risk nodules (n = 51). Among the 28 patients who developed HCC, the median time from baseline MRI to HCC diagnosis was 33.1 months (interquartile range: 25.9–46.7 months) for the high-risk feature group and 17.3 months (interquartile range: 6.2–26.5 months) for the high-risk nodule group. Conclusions HCC risk and time to HCC development differ according to baseline hepatobiliary MRI findings, indicating that hepatobiliary MRI findings can be used as biomarkers to differentiate HCC risk.
A NEW SYRIAC-UIGHUR INSCRIPTION FROM CHINA (QUANZHOU, FUJIAN PROVINCE) The Chinese town of Quanzhou is located between the cities of Fuzhou and Xiamen in Fujian province, bordering the bay of Quanzhou in the south-east on the downstream of the Jinjiang river. Its old name in Chinese was Citong, a phonetic transcription of Arabic Zaitūn 'olive, olive-tree' (see also zayt 'oil' and compare with the root zayyata 'to coat'). Emmanuel Diaz, a 17th-century Catholic missionary working in China, was the first to notice the existence of a cross in Quanzhou. He mentioned it in his book The Comment on the Nestorian Inscription of Xian fou in the Years under the Dynasty of Tang (Tang jingjiao beisong zhengquan), where he published drawings of three crosses found there. In 1906, Serafin Moya discovered a stone in Quanzhou bearing an inscription and depicted with a cross and an angel. Wu Wenliang was the first to gather and classify these Nestorian inscribed steles from Quanzhou, beginning in 1927. He published them in 1958 in the monograph Religious Inscriptions and Funerary Stones in Quanzhou. Between 1927 and 1957 Wu Wenliang discovered more than thirty Nestorian (ärkägün in Turkic and Mongolian) tombstones, some eighty Islamic tombstones, several Manichean carved stones and numerous stone relics belonging to the Indian Brahma religion, all in Quanzhou. Among the Nestorian collection of relics, there are about nine tombstones bearing Syriac inscriptions, four tablets with the 'Phags-pa script, three tablets in Chinese and one tablet with a Uighur inscription. According to Wu Wenliang, about 160 tombstones were either crushed in a stone factory in eastern Quanzhou or were reused in other building projects in the 1930s. In the 1980s several other Nestorian tombstones were found in the same region, including one with a Uighur inscription. A new bilingual inscription (figure 1) bearing Syriac and Uighur texts came to light in Chidian of Quanzhou city as late as May 2002.
This is a fragment of a
/** Iteratively find the start of instr's block. */
BlockStartInstr* InstrGraph::findBlockStart(Instr* instr) {
assert(instr->isLinked());
if (isBlockEnd(instr))
return blockStart((BlockEndInstr*)instr);
while (!isBlockStart(instr))
instr = instr->prev_;
return (BlockStartInstr*)instr;
} |
44th Annual Meeting of the Association for European Paediatric Cardiology (AEPC), with joint sessions with the Japanese Society of Pediatric Cardiology and Cardiac Surgery, Innsbruck, Austria, May 26–29, 2010. Introduction: Standard pre-operative assessment for total cavopulmonary connection (TCPC) in hypoplastic left heart syndrome (HLHS) includes cardiac catheterisation. From 2003 we have used only echocardiography and Magnetic Resonance Imaging (MRI) with central venous pressure (CVP) measurement from an internal jugular vein (representing downstream pulmonary artery pressure) for preoperative assessment. We evaluated the postoperative outcomes in these patients. Methods: Retrospective analysis of medical notes and MRI scans was performed. Information was collected on mortality, duration of ventilation, length of intensive care and inpatient stay, chest drainage and the need for further procedures. Results: Between 2003 and 2009, 47 patients with HLHS were solely investigated with this method and underwent lateral tunnel TCPC with a 4 mm fenestration. 6 (12.8%) patients had tricuspid valve surgery at the time of TCPC. Results are described as median (range). CVP at the time of MRI was 12.4 mmHg (6–16). The age of patients at operation was 3.2 years (2.3–5.6) and weight was 14.4 kg (9.1–19.8). Survival was 98% with only one death in the immediate post-operative period secondary to an intractable arrhythmia. Patients were ventilated for 6.1 hours (1.75–23.3) and days spent in intensive care was 3 (2–11). Duration of chest drainage was 9 days (4–38) with 9 patients (19.6%) requiring chest drainage for over 2 weeks. Inpatient stay was 13.5 days (6–44). Spontaneous occlusion of the fenestration occurred in 2 patients, one required stenting of the fenestration on day 5, the other re-operation on day 4. This patient also required catheter occlusion of aorto-pulmonary collaterals. 3 (6.4%) patients required additional pericardial drainage. 
5 (10.6%) were readmitted post discharge for drainage of recurrent effusions. 4 (8.5%) developed late complications requiring further intervention. These were protein losing enteropathy (1 patient), recurrent atrial restriction despite atrial septectomy at TCPC (2 patients) and bradycardia requiring permanent pacing (1 patient). No patients required take down of the TCPC. Conclusions: Results from TCPC in this group are favourable. The low reintervention rate suggests that echocardiography combined with MRI and CVP measurement is sufficient for pre-operative assessment for TCPC in HLHS and can obviate the need for cardiac catheterisation. |
Integrated security and error control for communication networks using the McEliece cryptosystem The McEliece public key cryptosystem is modified to create an integrated security/error control system for digital communication networks. The security of the resulting system is examined in detail, with particular emphasis on the trade-off between error control and security. Cryptographic and communication measures of the performance of the system are established. The performance of the system for codewords of length 1024 is reported. A method is described for making the integrated system adaptive to changes in channel conditions.
Development of the symptoms and impacts questionnaire for Crohn's disease and ulcerative colitis Summary Background Patient-reported outcome (PRO) measures historically used in inflammatory bowel disease have been considered inadequate to support future drug labelling claims by regulatory agencies. Aims To develop PRO tools for use in Crohn's disease (CD) and ulcerative colitis (UC) following guidance issued by the US FDA and ISPOR (International Society for Pharmacoeconomics and Outcomes Research). Methods Concept elicitation and cognitive interviews were conducted in adult patients (≥18 years) across the United States and Canada. Semi-structured interview guides were used to collect data, and interview transcripts were coded and analysed. Concept elicitation results were considered alongside existing literature and clinical expert opinion to identify candidate PRO items. Cognitive interviews evaluated concept relevance, interpretability and structure, and facilitated instrument refinement. Concept elicitation participants, except those with an ostomy, underwent centrally read endoscopy to assess inflammatory status. Results In all, 54 participants (mean age: 46.2 years; 66.7% female) were included in the CD concept elicitation interviews. In total, 80 symptom concepts and 61 impact concepts were identified. After three waves of cognitive interviews, the 31-item Symptoms and Impacts Questionnaire for CD (SIQ-CD) was developed. In the UC concept elicitation phase, 53 participants were interviewed (mean age: 41.4 years; 49.1% female). In total, 79 symptom concepts and 49 impact concepts were identified. Following two waves of cognitive interviews, the 29-item Symptoms and Impacts Questionnaire for UC (SIQ-UC) was developed. Both instruments include four symptom and six impact domains. Conclusions We developed PROs to support CD and UC drug labelling claims. Psychometric validation studies to evaluate instrument reliability and responsiveness are ongoing. 
INTRODUCTION

The inflammatory bowel diseases, Crohn's disease and ulcerative colitis, are idiopathic disorders characterised by chronic intestinal inflammation. Treatment options have improved over the past two decades with the introduction of several new classes of therapeutic agents; nevertheless, a substantial proportion of patients do not respond or lose response to available treatments. Consequently, multiple compounds are currently in early and late phase clinical trials.1,2 An important limitation to the development of novel inflammatory bowel disease drugs is that historic Crohn's disease and ulcerative colitis outcome measures,3 including the Inflammatory Bowel Disease Questionnaire (IBDQ),4 were not designed as valid patient-reported outcomes (PROs), which are considered by regulatory agencies to be the gold standard for quantifying patient experience. Guidance documents issued by the European Medicines Agency (EMA) and US Food and Drug Administration (FDA) indicate that a co-primary endpoint consisting of a PRO measure and an endoscopic outcome is required in future inflammatory bowel disease registration trials.5,6 While the Crohn's Disease Patient-Reported Outcomes Signs and Symptoms (CD-PRO/SS) diary7 and Ulcerative Colitis Patient-Reported Outcomes Signs and Symptoms (UC-PRO/SS) diary8 were established according to the FDA-endorsed pathway, endoscopic disease activity was not evaluated in the development studies. This represents an important potential shortcoming given that objective measures of inflammatory bowel disease activity do not necessarily correspond to symptom-based assessments. PROs, defined as "any report of the status of a patient's health condition that comes directly from the patient, without interpretation of the patient's response by a clinician or anyone else,"9 are widely used to study chronic diseases. 
According to the aforementioned FDA guidance, a valid and reliable PRO instrument is capable of measuring clinically meaningful aspects of disease activity that are most relevant to patients and therefore allows for assessment of the relative benefits of new treatments in a readily interpretable manner. PRO instrument creation and validation is a rigorous, resource-intensive process that can take several years to complete.2 Recent examples include the ESM-PROM11 for the assessment of irritable bowel syndrome and the Eosinophilic Esophagitis Activity Index Patient-Reported Outcome (EEsAI PRO).12 PRO instruments are composed of individual items, which take the form of a question, statement or task, that evaluate specific aspects (concepts) relevant to patients' well-being. These concepts are often aggregated into sub-concepts, or "domains."9 For example, in the ESM-PROM, the item "I am having abdominal pain" is grouped under the "physical status" domain.11 PRO development begins with a literature review that identifies concepts of relevance to patients, in addition to existing PRO instruments. Subsequently, semi-structured interviews are conducted to obtain patient input and generate new concepts and item wording. It is essential that participating patients are clinically well characterised and reflect the study populations in which the PRO instrument will ultimately be used.9 In the context of inflammatory bowel disease research, endoscopic evaluation is required because demonstration of mucosal inflammation is a critical eligibility criterion for clinical trials of anti-inflammatory drugs. Preliminary PRO concepts and items are then evaluated and refined through iterative waves of cognitive comprehension interviews before being aggregated into a prototypic instrument for the assessment of measurement properties. The creation of PRO instruments for use in inflammatory bowel disease trials is an urgent research priority. 
In response to this imperative, we developed the Symptoms and Impacts Questionnaire for Crohn's Disease and Ulcerative Colitis (SIQ-CD and SIQ-UC, respectively) using objective assessments of disease activity and regulatory guidelines.

MATERIALS AND METHODS

The best practice recommendations for the development of PRO tools as outlined by the FDA and ISPOR (International Society for Pharmacoeconomics and Outcomes Research) were followed. This methodology is based upon a mixed-methods approach that involves qualitative and quantitative assessments of disease phenotype and activity, patient interviews, content analysis and concept rating exercises (Figure 1).

Preliminary conceptual model

A conceptual model summarises components of the patient experience with respect to having the disease and undergoing treatment(s). We constructed a preliminary conceptual model consisting of symptom and impact concepts using two sources: (a) separate reviews of the Crohn's disease and ulcerative colitis literature (which were also designed to identify existing PRO instruments) and (b) input from clinical experts in the treatment of inflammatory bowel disease.9 Crohn's disease and ulcerative colitis literature searches were conducted in PubMed on 14 May 2014 and 15 April 2016, respectively, using predefined search terms and inclusion criteria (Appendix S1). Ten clinical experts across North America and Europe who specialise in inflammatory bowel disease were then asked to provide feedback on the preliminary conceptual model. Clinicians were selected based on academic expertise and community practice patterns, with practice volumes ranging from several hundred to several thousand inflammatory bowel disease patients. In accordance with the FDA guidance, draft concepts were added, reviewed, revised and prioritised during the clinical expert interviews. 
Concept elicitation interview guide

Following the clinical expert interviews, qualitative researchers with expertise in PRO instrument development (KPM, MLM) built concept elicitation interview guides that reflected the preliminary conceptual model and the scientific objectives of the study. Within each guide, questions were semi-structured to obtain both spontaneous and probed input, including the specific language used by study participants. The severity (ie the level of intensity) and bothersomeness (ie the level of annoyance or aggravation) of each symptom concept and the degree of difficulty experienced by participants while coping with each impact concept were separately rated on a scale from 0 (none) to 10 (extremely severe/bothersome/difficult). Open-ended questions with follow-up probing by the interviewer ensured that the full patient experience was reflected in the interviews.

Participant recruitment and quantitative assessments

Previous research indicates that approximately 99% of concepts emerge by the 25th elicitation interview in clinical outcome instrument development.17 We aimed to conduct approximately 60 Crohn's disease and 60 ulcerative colitis concept elicitation interviews to allow for adequate concept emergence and support exploratory analyses in important sub-populations, such as patients with an ostomy or perianal fistulising Crohn's disease. A convenience sample of adult (≥18 years) Crohn's disease and ulcerative colitis patients were prospectively and consecutively recruited from academic and community practice clinics across the United States and Canada.

Figure 1 (caption): SIQ-CD and SIQ-UC development activities. We performed quantitative and qualitative assessments when building novel patient-reported outcome (PRO) instruments for use in Crohn's disease (CD) and ulcerative colitis (UC). Disease activity and phenotype were evaluated during the screening process. CD patients were categorised as having "complicated" (ie an ostomy or perianal fistula) or "non-complicated" disease. The Harvey Bradshaw Index (HBI) was used to quantify clinical disease activity. Participants without complications were required to undergo endoscopy at baseline, and a centrally read Simple Endoscopic Score for Crohn's Disease (SES-CD) was collected to characterise the study population. All UC participants underwent endoscopy at baseline, and centrally read Mayo Clinic Endoscopic Scores (MCES) were used to assess endoscopic disease activity. Simple Clinical Colitis Activity Index (SCCAI) scores and disease extent were also collected to characterise the study population. Qualitative assessments included a literature review, interviews with key opinion leaders and concept elicitation interviews. Once the concept elicitation interview results were analysed, an item generation meeting took place to review draft PRO items. The draft CD and UC instruments were piloted in waves of cognitive comprehension interviews to assess patient understanding and feasibility. Revisions to the draft instruments were made based on the cognitive comprehension interview results.

Clinical disease activity was assessed using the Harvey Bradshaw Index (HBI)18 for Crohn's disease and the Simple Clinical Colitis Activity Index (SCCAI)19 for ulcerative colitis. These tools were selected because they accurately evaluate clinical disease activity and are relatively easy to administer in a routine practice setting.3 HBI thresholds were used to define clinical remission (HBI < 5), mild-to-moderate disease (HBI = 5-8) and severe disease (HBI > 8).18 For ulcerative colitis, SCCAI scores were used to define clinical remission (SCCAI < 3), mild disease (SCCAI = 3-5), moderate disease (SCCAI = 6-11) and severe disease (SCCAI > 11).19 To evaluate endoscopic disease activity, Crohn's disease participants without complications (ie patients without an ostomy or a perianal fistula) underwent a colonoscopy, which was centrally read in a blinded manner, and the Simple Endoscopic Score for Crohn's Disease (SES-CD) was calculated. 
20 SES-CD thresholds were used to define endoscopic remission (SES-CD = 0-2), mild disease (SES-CD = 3-6) and moderate-to-severe disease (SES-CD ≥ 7). Endoscopy was not required in participants with an ostomy or perianal fistula. In the ulcerative colitis cohort, all participants underwent a colonoscopy, which was centrally read in a blinded manner, and the Mayo Clinic Endoscopic Subscore (MCES) 21 was calculated. | Concept elicitation interviews Trained interviewers with experience in qualitative data collection for the purposes of PRO development conducted the concept elicitation interviews by telephone using the condition-specific guide. The interviews, which lasted between 60 and 90 minutes, were audio-recorded and transcribed. A quality check was performed to confirm the accuracy of the transcription and redact patient identifiers. | Draft PRO instrument development: item generation The qualitative research experts and study investigators met in person to review the overall results and discuss each concept from a clinical and measurement perspective before deciding whether to include the concept in PRO measurement. Selected concepts were then cross-referenced against commonly used PRO instruments in inflammatory bowel disease to examine whether existing instruments provided adequate content coverage and whether development of novel content was warranted. During PRO item generation, wording was informed by patient quotations coded from the concept elicitation interview transcripts. Item structure, response options and recall period were chosen by the methods experts and study investigators. | Recruitment and quantitative assessments For the cognitive comprehension exercise, we planned to enrol approximately 18 Crohn's disease and 18 ulcerative colitis participants. Previous research suggests that approximately 7 to 10 interviews are sufficient to confirm participant understanding.
13,23 The recruitment process and eligibility criteria for the concept elicitation and cognitive comprehension interviews were otherwise identical, except that endoscopic assessment was not required to partake in the cognitive comprehension interviews, since the goal of this component of the study was to determine participant understanding. | Ethics approval The study protocol and interview forms were approved by Quorum Review IRB (Seattle, WA, USA). Where required, local institutional review board approval was obtained before site initiation. All participants provided written informed consent prior to participating in study activities. The study was conducted in compliance with the Declaration of Helsinki, and no changes were made to the participants' existing care in this observational study. | Statistical analyses Qualitative interview data from the interview transcripts were coded as described above. Quantitative data from the screening and enrolment processes and rating exercises were entered into SPSS (version 18.0) to generate tables of descriptive statistics. | Concept elicitation interview guide and baseline characteristics The concept elicitation interview guide for Crohn's disease consisted of symptom and impact content derived from the 30 relevant studies identified by the literature review and expert opinion (Figure S1). A total of 54 patients with Crohn's disease were recruited from seven sites to participate in the concept elicitation process. Demographic and clinical characteristics are provided in Table 1. All six participants with active perianal fistula also underwent baseline endoscopy. The median SES-CD was 6.0 (IQR 4-14) for this subgroup; all participants with active perianal fistula had active endoscopic disease activity.
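The clinical and endoscopic activity thresholds quoted above translate directly into simple classifiers. The sketch below is illustrative only: the category labels come from the text, and treating an SES-CD of exactly 7 as moderate-to-severe is our assumption, since the quoted thresholds (mild = 3-6, moderate-to-severe > 7) would otherwise leave it unassigned.

```python
def classify_hbi(score: int) -> str:
    """Crohn's disease clinical activity category from an HBI score."""
    if score < 5:
        return "remission"
    if score <= 8:
        return "mild-to-moderate"
    return "severe"


def classify_sccai(score: int) -> str:
    """Ulcerative colitis clinical activity category from an SCCAI score."""
    if score < 3:
        return "remission"
    if score <= 5:
        return "mild"
    if score <= 11:
        return "moderate"
    return "severe"


def classify_ses_cd(score: int) -> str:
    """Endoscopic activity category from an SES-CD score.

    Assumption: a score of exactly 7 is grouped with moderate-to-severe.
    """
    if score <= 2:
        return "remission"
    if score <= 6:
        return "mild"
    return "moderate-to-severe"
```

For example, the subgroup median SES-CD of 6.0 reported above falls in the mild category under these cut-offs.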
| Symptoms In total, 80 symptom concepts were identified during the Crohn's disease concept elicitation interviews (Table S1). Symptoms reported by at least 25% of the Crohn's disease study population included "abdominal pain" (74.1%, 40/54) and "diarrhoea" […]. The mean severity and bothersomeness ratings for each symptom concept are reported in Table S1. Some symptoms, such as "fistula drainage" and "irritation at stoma site," were rated the maximum score of 10 for severity or bothersomeness; however, only one or two participants rated these symptoms. Symptoms rated by at least three participants with the highest mean severity ratings included "back pain" (9.3, SD 1.0), "muscle cramps" (9.0, SD 1.0) and "headache" (8.7, SD 2.3), while those rated by at least three participants with high mean bothersomeness ratings included "watery/loose stools" (8.3, SD 2.3), "low energy" (8.3, SD 2.1) and "back pain" (7.8, SD 1.7). | Impacts The Crohn's disease concept elicitation interviews identified 61 impact concepts (Table S2). "Dietary changes" was the most frequently recorded impact concept, closely followed by "work limitations" (6.7% […]). The mean difficulty rating for each impact concept is reported in Table S2. Two impact concepts, "waking up to use the restroom" and "being misunderstood by doctors," were rated the maximum difficulty score of 10; however, only one participant rated each of these impacts. Of the impact concepts rated by at least three participants, "limitations to parenting/caregiving" (8.5, SD 2.4), "frustration" (7.9,
Table 1 footnotes: Two patients in the group without comorbidities were missing endoscopic videos; however, still images were available. (c) All six participants with fistulising CD underwent baseline endoscopy and the SES-CD was calculated, although this was not required. (d) Participants with UC were asked whether they had ever undergone IPAA surgery, colostomy, ileostomy, sub-total colectomy or proctocolectomy.
SD 1.6) and "negative self-image" (7.7, SD 2.4) had the highest mean difficulty ratings. | Concept elicitation interview guide and baseline characteristics The concept elicitation interview guide for ulcerative colitis consisted of symptom and impact content included in the 29 relevant studies identified by the literature review and expert opinion (Figure S2). In all, 53 patients with ulcerative colitis were recruited from three sites to participate in the concept elicitation exercise. Demographic and clinical characteristics are provided in Table 1. The mean SCCAI score at baseline was 3.6 (SD 3.5), and the study population was generally representative of the disease activity spectrum. A broad range of endoscopic disease activity was represented in the ulcerative colitis study. The mean UCEIS score was 2.5 (SD 1.9). Approximately 10% of the ulcerative colitis participants were in endoscopic remission (11.3%, 6/53), and 35.8% (19/53), 22.6% (12/53) and 28.3% (15/53) had mild, moderate and severe endoscopic disease, respectively. | Interview coding and saturation of concept The final coding framework for the 53 ulcerative colitis transcripts contained a total of 128 concept codes. Four of the 53 interview transcripts were coded by two independent coders to evaluate inter-rater agreement (IRA). For each of the four transcript pairs, IRA values ranged from 89.5% to 93.9% for concept identification, and from 95.2% to 98.6% for concept assignment. Saturation of concept was observed by the fifth transcript group (ie after approximately 45 of the 53 interviews), as no new concepts were identified in subsequent transcripts. | Impact concepts In all, 49 impact concepts were described in the ulcerative colitis concept elicitation interviews (Table S2).
Impacts reported by at least 25% of the ulcerative colitis study population included "limitations to overall functioning" (73.6%, 39/53), "dietary changes" (73.6%, 39/53) and "need to be near restroom" […]. The mean difficulty rating for each impact concept is reported in Table S2. "Lack of control" and "fertility issues" received the maximum mean difficulty score of 10; however, these impacts were each rated by only two participants. Of the impact concepts rated by at least three participants, "overall emotional health" (9.3, SD 1.2), "limitations to personal care" (8.8, SD 1.5) and "general functioning" (8.9, SD 1.2) had the highest mean difficulty ratings. | Crohn's disease The literature review and experts did not identify existing PRO instruments that incorporated all the concepts that were determined to be relevant for PRO measurement; thus, the need for a novel tool was established. The Crohn's disease concept generation meeting resulted in the removal of 62 symptom and 43 impact concepts (Table S3). The resulting preliminary draft instrument was then evaluated (Table S4) during three waves of cognitive comprehension interviews. A cognitive summary table was generated for each wave of interviews to determine whether the questionnaire was feasible and whether participants had difficulty understanding the instrument content. Several symptom concepts ("diarrhoea" and "watery/loose stools"; "frequent bowel movements" and "using the restroom frequently"; and "fatigue" and "low energy") and impact concepts ("limitations to work" and "limitations to school") were combined according to feedback collected in the cognitive comprehension interviews. This resulted in the 31-item draft SIQ-CD, which consists of 14 symptom and 17 impact concepts (Table 2). During the final wave of cognitive comprehension interviews, the SIQ-CD was administered using a smartphone-based electronic PRO format and evaluated alongside a paper presentation.
No conceptual differences between the paper and electronic versions were identified, which provides support for platform neutrality of the instrument content. It took participants approximately 6 minutes to complete the draft instrument. | Ulcerative colitis The ulcerative colitis concept generation meeting led to a reduction from 77 to 16 symptom concepts and from 47 to 13 impact concepts for inclusion in the novel ulcerative colitis PRO instrument (Table S5). When the Crohn's disease and ulcerative colitis preliminary draft measures were compared, two-thirds of the symptom concepts and half of the impact concepts were found to be common to both instruments. The ulcerative colitis translatability assessment did not identify issues that would impact translation. Given the sizeable overlap across the preliminary draft Crohn's disease and ulcerative colitis measures, along with the completed cognitive work for the Crohn's disease measure that demonstrated acceptable structure, instructional text, response options, recall period and electronic PRO presentation, the targeted number of ulcerative colitis cognitive comprehension interviews was reduced. Seven ulcerative colitis cognitive comprehension interviews (one wave of five participants and one wave of two participants) were conducted, with the results yielding few revisions to the preliminary draft ulcerative colitis questionnaire (Table S4). Both the paper and electronic versions of the draft ulcerative colitis instrument were assessed in cognitive comprehension interviews to determine whether there was conceptual equivalence across modes of administration. As with the draft SIQ-CD instrument, no issues with comprehension or feasibility were identified, nor were there substantial differences between the paper and electronic presentations. Thus, the 29-item draft SIQ-UC remained intact (Table 3). Participants completed the draft SIQ-UC instrument in approximately 6 minutes.
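The two-thirds symptom overlap reported above is just set arithmetic; a toy sketch with hypothetical placeholder concepts (the actual SIQ-CD and SIQ-UC items appear in Tables 2 and 3 and differ from these names, which are chosen only so that four of six concepts coincide):

```python
# Hypothetical placeholder concepts, sized so that 4 of 6 (two-thirds) overlap;
# they are not the actual SIQ-CD/SIQ-UC items.
cd_symptoms = {"abdominal pain", "diarrhoea", "fatigue", "urgency",
               "nausea", "bloating"}
uc_symptoms = {"abdominal pain", "diarrhoea", "fatigue", "urgency",
               "blood in stool", "tenesmus"}

shared = cd_symptoms & uc_symptoms           # concepts common to both drafts
cd_overlap = len(shared) / len(cd_symptoms)  # fraction of CD concepts shared
```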
The concepts, sub-concepts and domains included in the final draft SIQ-CD and SIQ-UC are depicted in Figure 3. Questionnaire items are grouped into three modules: a daily bowel movement report, a daily symptom assessment and a weekly impact assessment. The first two modules are completed once daily over a 7-day period, while the third module is completed once at the end of a 7-day period. These recall periods were selected based on expert opinion. A modified version of the draft SIQ-CD tool was also designed to accommodate Crohn's disease patients with an ostomy. | DISCUSSION In recent years, the FDA and EMA have recommended that clinical parameters, endoscopic findings and patient-reported symptoms be separately quantified and reported in inflammatory bowel disease trials. 5,24 Conversely, historical outcome measures such as the Crohn's Disease Activity Index (CDAI) 25 […]. In the ulcerative colitis concept elicitation phase of our study, endoscopic disease was assessed at baseline and recruitment was monitored to ensure that MCES-defined categories were equally characterised. Additionally, SCCAI and disease extent scores were collected. At least one ulcerative colitis participant belonged to each SCCAI category, and disease extent was evenly distributed: approximately one-third of participants each had proctitis, left-sided colitis or pancolitis. In the Crohn's disease concept elicitation phase, recruitment was monitored to ensure that HBI-defined clinical disease activity categories were equally represented. The HBI was chosen in favour of the SES-CD because Crohn's disease participants with an ostomy or perianal fistula were not required to undergo endoscopy at baseline. However, endoscopy was required in all patients without complications, and it was confirmed that at least one participant from this subgroup belonged to each of the three SES-CD categories.
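The three-module administration schedule described above (two daily modules plus one weekly module per 7-day period) can be sketched as follows; the module identifiers are shorthand of ours, not labels taken from the instruments themselves.

```python
from datetime import date, timedelta

# Shorthand module names (ours), with the cadences stated in the text.
MODULES = {
    "bowel_movement_report": "daily",
    "symptom_assessment": "daily",
    "impact_assessment": "weekly",
}


def schedule(start: date) -> list:
    """Return (day, module) administration events for one 7-day period."""
    events = []
    for offset in range(7):
        day = start + timedelta(days=offset)
        for module, cadence in MODULES.items():
            if cadence == "daily":
                events.append((day, module))
    # The weekly impact module is completed once, at the end of the period.
    events.append((start + timedelta(days=6), "impact_assessment"))
    return events
```

One period therefore yields 14 daily entries plus a single end-of-week entry.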
While at least one other inflammatory bowel disease PRO initiative has cited the FDA guidance, endoscopy was not incorporated in the development of the CD-PRO/SS and UC-PRO/SS measures. 7,8 Participants in the CD-PRO/SS development study were enrolled based on physician-confirmed biopsy, and clinical disease activity was assessed using the Sandler estimated Crohn's Disease Activity Index (SeCDAI), without endoscopy. 7,26 It is well established that Crohn's disease symptoms do not correlate with the severity of endoscopic disease, and that clinical assessments of disease activity are susceptible to bias. 30 Similarly, baseline endoscopy was not performed in the development of the UC-PRO/SS. It is therefore unclear how many Crohn's disease and ulcerative colitis patients were experiencing active disease at the time of study participation and whether a spectrum of endoscopic disease was incorporated. Furthermore, there was a lack of variation in clinical disease activity among the CD-PRO/SS development population: the majority of participants (83%) had moderate or severe disease, defined as a SeCDAI score of 220 or greater. In our Crohn's disease study, […].
TABLE 3 (continued). SIQ-UC impact items 7-13:
7. Over the past 7 days, how difficult was it to complete your responsibilities at work or school because of your UC? (Not difficult at all / A little difficult / Moderately difficult / Very difficult / Extremely difficult / Not applicable: I did not work or attend school in the past 7 days because of my UC / Not applicable: I did not work or attend school in the past 7 days for reasons not related to my UC)
8. Over the past 7 days, because of your UC, how difficult was it for you to stay asleep after going to bed? (Not difficult at all / A little difficult / Moderately difficult / Very difficult / Extremely difficult)
9. Over the past 7 days, how limited was your participation in exercise or sports because of your UC? (Not limited at all / A little limited / Moderately limited / Very limited / Extremely limited)
10. Over the past 7 days, how limited was your ability to travel because of your UC? (Not limited at all / A little limited / Moderately limited / Very limited / Extremely limited)
11. Over the past 7 days, how much has UC interfered with your quality of life? (Not at all / A little bit / Moderately / Very much / Extremely)
12. Over the past 7 days, how often have you worried about having an accident related to your UC? (Never / Rarely / Sometimes / Often / Always)
13. Over the past 7 days, how often has your UC caused you to feel embarrassed? (Never / Rarely / Sometimes / Often / Always)
FIGURE 3 Domains and sub-domains included in the SIQ-CD and SIQ-UC. The "symptoms" domain consists of four sub-domains (gastrointestinal, pain and discomfort, nutrition-related and energy-related symptoms), while the "impacts" domain consists of six sub-domains (emotional, daily performance, lifestyle and activities, social functioning, dietary and additional impacts).
Another important strength of the SIQ-CD and SIQ-UC is the incorporation of an impacts section designed to assess functioning related to disease status. These multi-domain instruments may be able to support claims related to improvement in not only symptoms but also the ability to function and emotional state. 9 It is notable that there was considerable overlap in the Crohn's disease and ulcerative colitis concept elicitation results. Two-thirds (12/18) of the symptom concepts and one-half (10/20) of the impact concepts overlap, notwithstanding that the two development processes were independent of each other. This finding raises the possibility that, with future development, a robust combined instrument could be created. Comparisons of the IBDQ, the tools developed by Higgins et al, and the novel questionnaires described in the current manuscript also reveal a sizeable proportion of shared items (Tables S6 and S7).
For example, 65% (20/31) and 55% (16/29) of the items included in the SIQ-CD and SIQ-UC, respectively, are included in the IBDQ. This is interesting given that the IBDQ was developed two decades before the FDA PRO guidance was issued, and it raises the question of whether strict adherence to the guidance principles is inherently beneficial, especially since heterogeneous clinical trial outcome measures impede between-study comparisons and meta-analyses. Several limitations to the current study should be acknowledged. First, blinded centrally read endoscopy was not used to prospectively guide recruitment in the Crohn's disease cohort. Rather, it was used to confirm that a spectrum of objectively confirmed endoscopic disease was represented in the Crohn's disease study population. The study population included only one participant with an SES-CD value greater than 15. Similarly, while a spectrum of endoscopic disease activity was incorporated in the ulcerative colitis cohort, only one participant had a SCCAI score greater than 11. Second, the SIQ-CD and SIQ-UC were both developed in an English-speaking, North American population. While no issues were identified in the translatability assessments of these instruments, additional psychometric testing may be required if substantial adaptations are made to the SIQ-CD and SIQ-UC in the future. Third, while we used rigorous qualitative and mixed-methods approaches to identify patient-reported concepts, refine the underlying conceptual frameworks, and provide evidence of content validity for the newly developed SIQ-CD and SIQ-UC, cross-sectional and longitudinal measurement properties need to be evaluated in adequately powered studies before these instruments can be used to support labelling claims. Finally, prospective validation is required to confirm the recall periods, as they were selected using expert consensus, and to determine instrument scaling and scoring.
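Since, as just noted, instrument scaling and scoring are still to be determined through prospective validation, any numeric scoring is provisional. Purely as an illustration, the verbal difficulty options used by the SIQ-UC impact items could be mapped onto 0-4, with "Not applicable" responses excluded rather than scored; this mapping is our assumption, not the study's.

```python
# Provisional 0-4 mapping of the SIQ-UC difficulty response options;
# the study has not yet defined scoring, so this is illustrative only.
DIFFICULTY_SCALE = {
    "Not difficult at all": 0,
    "A little difficult": 1,
    "Moderately difficult": 2,
    "Very difficult": 3,
    "Extremely difficult": 4,
}


def score_item(response: str):
    """Return a provisional score, or None for unscored (e.g. N/A) responses."""
    return DIFFICULTY_SCALE.get(response)
```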
In conclusion, the SIQ-CD and SIQ-UC are novel draft PRO measures for use in Crohn's disease and ulcerative colitis trials, respectively. They were developed in consonance with a regulatory framework, and hold promise for evaluating both inflammatory bowel disease-related symptoms and impacts in patients with a range of clinical and endoscopic disease severity. Further validation efforts are currently underway within clinical trials programmes to assess validity, reliability and responsiveness. ACKNOWLEDGEMENTS Thank you to Leonardo Guizzetti of Robarts Clinical Trials Inc for providing statistical support.
package hristian.nikola.slav.Dto;
/** Data transfer object carrying a player's id, username and win count. */
public class PlayerDto {
private Integer id;
private String username;
private Integer wins;
public PlayerDto(Integer id, String username, Integer wins) {
this.id = id;
this.username = username;
this.wins = wins;
}
public PlayerDto() {}
public Integer getId() {
return id;
}
public void setId(Integer id) {
this.id = id;
}
public String getUsername() {
return username;
}
public void setUsername(String username) {
this.username = username;
}
public Integer getWins() {
return wins;
}
public void setWins(Integer wins) {
this.wins = wins;
}
}
|
/* eslint-disable @typescript-eslint/no-unsafe-assignment */
/* eslint-disable @typescript-eslint/no-explicit-any */
import {
// faLevelUpAlt,
// faCalendarAlt,
faCogs,
faSignOutAlt,
faUser,
} from '@fortawesome/free-solid-svg-icons';
import { FontAwesomeIcon } from '@fortawesome/react-fontawesome';
import Heading from 'components/UI/Heading';
import useAuth from 'hooks/useAuth';
import Link from 'next/link';
// import LogOut from 'pages/logOut';
import React from 'react';
import crypto from 'crypto';
import Socials from './Socials';
const ProfileNav = (): JSX.Element => {
const { user } = useAuth();
const { firstName, lastName, email } = user;
  // Gravatar hashes are computed over the trimmed, lowercased email address.
  const md5 = crypto.createHash('md5').update(email.trim().toLowerCase()).digest('hex');
const image = `https://www.gravatar.com/avatar/${md5}?r=pg`;
return (
<div className="md:col-span-2 lg:col-span-1 h-screen md:sticky md:top-0">
<div className="flex flex-col bg-secondary h-screen text-white font-mont tracking-widest uppercase items-center">
<img
src={image}
alt="User Profile"
className="border-8 h-1/6 md:w-2/5 rounded-full mt-16"
/>
<Heading level="h4" className="text-2xl py-3">
{!firstName ? 'Teammate' : <p>{`${firstName} ${lastName}`}</p>}
</Heading>
{/* For the ranking system */}
{/* <div className="relative pt-1 w-7/8">
<div className="overflow-hidden h-2 text-xs flex rounded bg-orangeLight">
<div
style={{ width: '30%' }}
className="shadow-none flex flex-col text-center whitespace-nowrap text-white justify-center bg-orange"
/>
</div>
</div>
<p className="text-sm pt-2 pb-6">55/60 XP</p> */}
<Link href="/members">
<p className="border-b w-full border-t py-3 px-3 flex justify-between cursor-pointer">
<FontAwesomeIcon icon={faUser} />
Profile Home
</p>
</Link>
{/* <Link href="/members/levels">
<p className="border-b w-full border-t py-3 px-3 flex justify-between cursor-pointer">
<FontAwesomeIcon icon={faLevelUpAlt} />
Levels & Rewards
</p>
</Link> */}
{/* <p className="border-b w-full py-3 px-3 space-x-3 flex justify-between">
<img
src="/images/Victis_White_clr.png"
alt="victis logo"
className="w-6"
/>
Victis Test
</p> */}
{/* <Link href="/events">
<p className="border-b w-full py-3 px-3 flex justify-between cursor-pointer">
<FontAwesomeIcon icon={faCalendarAlt} />
Giveaways & Events
</p>
</Link> */}
<Link href="/members/settings">
<p className="border-b w-full py-3 px-3 flex justify-between cursor-pointer">
<FontAwesomeIcon icon={faCogs} />
Settings
</p>
</Link>
<Link href="/logout">
<p className="border-b w-full py-3 px-3 flex justify-between cursor-pointer">
<FontAwesomeIcon icon={faSignOutAlt} />
{/* <LogOut /> */}
Logout
</p>
</Link>
<div className="absolute bottom-2 left-4">
<Socials />
</div>
</div>
</div>
);
};
export default ProfileNav;
|
## @ StitchIfwi.py
# This is an IFWI stitch config script for Slim Bootloader
#
# Copyright (c) 2020, Intel Corporation. All rights reserved. <BR>
# SPDX-License-Identifier: BSD-2-Clause-Patent
#
##
extra_usage_txt = \
"""This is an IFWI stitch config script for Slim Bootloader For the FIT tool and
stitching ingredients listed in step 2 below, please contact your Intel representative.
1. Create a stitching workspace directory. The paths mentioned below are all
relative to it.
2. Extract required tools and ingredients to stitching workspace.
- FIT tool
Copy 'fit.exe' or 'fit' and 'vsccommn.bin' to 'Fit' folder
- BPMGEN2 Tool
Copy the contents of the tool to Bpmgen2 folder
Rename the bpmgen2 parameter to bpmgen2.params if its name is not this name.
- Components
Copy 'cse_image.bin' to 'Input/cse_image.bin'
Copy PMC firmware image to 'Input/pmc.bin'.
Copy EC firmware image to 'Input/ec.bin'.
copy ECregionpointer.bin to 'Input/ecregionpointer.bin'
Copy GBE binary image to 'Input/gbe.bin'.
Copy ACM firmware image to 'Input/acm.bin'.
3. Openssl
Openssl is required for stitch. the stitch tool will search evn OPENSSL_PATH,
to find Openssl. If evn OPENSSL_PATH is not found, will find openssl from
"C:\\Openssl\\Openssl"
4. Stitch the final image
EX:
Assuming stitching workspace is at D:\Stitch and building ifwi for CMLV platform
To stitch IFWI with SPI QUAD mode and Boot Guard profile VM:
StitchIfwi.py -b vm -p cmlv -w D:\Stitch -s Stitch_Components.zip -c StitchIfwiConfig.py
"""
def get_bpmgen2_params_change_list ():
params_change_list = []
params_change_list.append ([
# variable | value |
# ===================================
('PlatformRules', 'CMLV Embedded'),
('BpmStrutVersion', '0x20'),
('BpmRevision', '0x01'),
('BpmRevocation', '1'),
('AcmRevocation', '2'),
('NEMPages', '3'),
('IbbFlags', '0x2'),
('IbbHashAlgID', '0x0B:SHA256'),
('TxtInclude', 'FALSE'),
('PcdInclude', 'TRUE'),
('BpmSigScheme', '0x14:RSASSA'),
('BpmSigPubKey', r'<KEY>'),
('BpmSigPrivKey', r'Bpmgen2\keys\bpm_privkey_2048.pem'),
('BpmKeySizeBits', '2048'),
('BpmSigHashAlgID', '0x0B:SHA256'),
])
return params_change_list
def get_platform_sku():
platform_sku ={
'cmlv' : 'H410'
}
return platform_sku
def get_oemkeymanifest_change_list():
xml_change_list = []
xml_change_list.append ([
# Path | value |
# =========================================================================================
('./KeyManifestEntries/KeyManifestEntry/Usage', 'OemDebugManifest'),
('./KeyManifestEntries/KeyManifestEntry/HashBinary', 'Temp/kmsigpubkey.hash'),
])
return xml_change_list
def get_xml_change_list (platform, spi_quad):
xml_change_list = []
xml_change_list.append ([
# Path | value |
# =========================================================================================
#Region Order
('./BuildSettings/BuildResults/RegionOrder', '45321'),
('./FlashLayout/DescriptorRegion/OemBinary', '$SourceDir\OemBinary.bin'),
('./FlashLayout/BiosRegion/InputFile', '$SourceDir\BiosRegion.bin'),
('./FlashLayout/Ifwi_IntelMePmcRegion/MeRegionFile', '$SourceDir\MeRegionFile.bin'),
('./FlashLayout/Ifwi_IntelMePmcRegion/PmcBinary', '$SourceDir\PmcBinary.bin'),
('./FlashLayout/EcRegion/InputFile', '$SourceDir\EcRegion.bin'),
('./FlashLayout/EcRegion/Enabled', 'Enabled'),
('./FlashLayout/EcRegion/EcRegionPointer', '$SourceDir\EcRegionPointer.bin'),
('./FlashLayout/GbeRegion/InputFile', '$SourceDir\GbeRegion.bin'),
('./FlashLayout/GbeRegion/Enabled', 'Enabled'),
('./FlashLayout/SubPartitions/PchcSubPartitionData/InputFile', '$SourceDir\PchcSubPartitionData.bin'),
('./FlashSettings/FlashComponents/FlashComponent1Size', '32MB'),
('./FlashSettings/FlashComponents/SpiResHldDelay', '8us'),
('./FlashSettings/VsccTable/VsccEntries/VsccEntry/VsccEntryName', 'VsccEntry0'),
('./FlashSettings/VsccTable/VsccEntries/VsccEntry/VsccEntryVendorId', '0xEF'),
('./FlashSettings/VsccTable/VsccEntries/VsccEntry/VsccEntryDeviceId0', '0x40'),
('./FlashSettings/VsccTable/VsccEntries/VsccEntry/VsccEntryDeviceId1', '0x19'),
('./IntelMeKernel/IntelMeBootConfiguration/PrtcBackupPower', 'None'),
('./PlatformProtection/ContentProtection/Lspcon4kdisp', 'PortD'),
('./PlatformProtection/PlatformIntegrity/OemPublicKeyHash', '4D 19 B4 F2 3F F9 17 0C 2C 46 B3 D7 6B F0 59 19 A7 FA 8B 6B 11 3D F5 3C 86 C0 E8 00 3C 23 A8 DC'),
('./PlatformProtection/PlatformIntegrity/OemExtInputFile', '$SourceDir\OemExtInputFile.bin'),
('./PlatformProtection/BootGuardConfiguration/BtGuardKeyManifestId', '0x1'),
('./PlatformProtection/IntelPttConfiguration/PttSupported', 'No'),
('./PlatformProtection/IntelPttConfiguration/PttPwrUpState', 'Disabled'),
('./PlatformProtection/IntelPttConfiguration/PttSupportedFpf', 'No'),
('./PlatformProtection/TpmOverSpiBusConfiguration/SpiOverTpmBusEnable', 'Yes'),
('./Icc/IccPolicies/Profiles/Profile/ClockOutputConfiguration/ClkoutSRC3', 'Disabled'),
('./Icc/IccPolicies/Profiles/Profile/ClockOutputConfiguration/ClkoutSRC6', 'Disabled'),
('./Icc/IccPolicies/Profiles/Profile/ClockOutputConfiguration/ClkoutSRC9', 'Disabled'),
('./Icc/IccPolicies/Profiles/Profile/ClockOutputConfiguration/ClkoutSRC10', 'Disabled'),
('./Icc/IccPolicies/Profiles/Profile/ClockOutputConfiguration/ClkoutSRC11', 'Disabled'),
('./Icc/IccPolicies/Profiles/Profile/ClockOutputConfiguration/ClkoutSRC12', 'Disabled'),
('./Icc/IccPolicies/Profiles/Profile/ClockOutputConfiguration/ClkoutSRC13', 'Disabled'),
('./Icc/IccPolicies/Profiles/Profile/ClockOutputConfiguration/ClkoutSRC14', 'Disabled'),
('./Icc/IccPolicies/Profiles/Profile/ClockOutputConfiguration/ClkoutSRC15', 'Disabled'),
('./NetworkingConnectivity/WiredLanConfiguration/GbePCIePortSelect', 'Port13'),
('./NetworkingConnectivity/WiredLanConfiguration/PhyConnected', 'PHY on SMLink0'),
('./InternalPchBuses/PchTimerConfiguration/t573TimingConfig', '100ms'),
('./InternalPchBuses/PchTimerConfiguration/TscClearWarmReset', 'Yes'),
('./Debug/IntelTraceHubTechnology/UnlockToken', '$SourceDir\\UnlockToken.bin'),
('./Debug/EspiFeatureOverrides/EspiEcLowFreqOvrd', 'Yes'),
('./CpuStraps/PlatformImonDisable', 'Enabled'),
('./CpuStraps/IaVrOffsetVid', 'No'),
('./StrapsDifferences/PCH_Strap_CSME_SMT2_TCOSSEL_Diff', '0x00000000'),
('./StrapsDifferences/PCH_Strap_CSME_SMT3_TCOSSEL_Diff', '0x00000000'),
('./StrapsDifferences/PCH_Strap_PN1_RPCFG_2_Diff', '0x00000003'),
('./StrapsDifferences/PCH_Strap_PN2_RPCFG_2_Diff', '0x00000003'),
('./StrapsDifferences/PCH_Strap_ISH_ISH_BaseClass_code_SoftStrap_Diff', '0x00000000'),
('./StrapsDifferences/PCH_Strap_SMB_spi_strap_smt3_en_Diff', '0x00000001'),
('./StrapsDifferences/PCH_Strap_GBE_SMLink1_Frequency_Diff', '0x00000001'),
('./StrapsDifferences/PCH_Strap_GBE_SMLink3_Frequency_Diff', '0x00000003'),
('./StrapsDifferences/PCH_Strap_USBX_XHC_PORT6_OWNERSHIP_STRAP_Diff', '0x00000000'),
('./StrapsDifferences/PCH_Strap_USBX_XHC_PORT5_OWNERSHIP_STRAP_Diff', '0x00000000'),
('./StrapsDifferences/PCH_Strap_USBX_XHC_PORT2_OWNERSHIP_STRAP_Diff', '0x00000000'),
('./StrapsDifferences/PCH_Strap_PMC_MMP0_DIS_STRAP_Diff', '0x00000001'),
('./StrapsDifferences/PCH_Strap_PMC_EPOC_DATA_STRAP_Diff', '0x00000002'),
('./StrapsDifferences/PCH_Strap_spth_modphy_softstraps_com1_com0_pllwait_cntr_2_0_Diff', '0x00000001'),
('./StrapsDifferences/PCH_Strap_SPI_SPI_EN_D0_DEEP_PWRDN_Diff', '0x00000000'),
('./StrapsDifferences/PCH_Strap_SPI_cs1_respmod_dis_Diff', '0x00000000'),
('./StrapsDifferences/PCH_Strap_DMI_OPDMI_LW_Diff', '0x00000003'),
('./StrapsDifferences/PCH_Strap_DMI_OPDMI_TLS_Diff', '0x00000003'),
('./StrapsDifferences/PCH_Strap_DMI_OPDMI_PAD_Diff', '0x0000000F'),
('./StrapsDifferences/PCH_Strap_DMI_OPDMI_ECCE_Diff', '0x00000001'),
('./FlexIO/IntelRstForPcieConfiguration/RstPCIeController3', '1x4'),
('./FlexIO/PcieLaneReversalConfiguration/PCIeCtrl3LnReversal', 'No'),
('./FlexIO/SataPcieComboPortConfiguration/SataPCIeComboPort2', 'PCIe'),
('./FlexIO/SataPcieComboPortConfiguration/SataPCIeComboPort4', 'SATA'),
('./FlexIO/SataPcieComboPortConfiguration/SataPCIeComboPort5', 'SATA'),
('./FlexIO/Usb3PortConfiguration/USB3PCIeComboPort2', 'PCIe'),
('./FlexIO/PcieGen3PllClockControl/PCIeSecGen3PllEnable', 'Yes'),
('./IntelPreciseTouchAndStylus/IntelPreciseTouchAndStylusConfiguration/Touch1MaxFreq', '17 MHz'),
('./FWUpdateImage/FWMeRegion/InputFile', '$SourceDir\FWMeRegion.bin'),
('./FWUpdateImage/FWPmcRegion/InputFile', '$SourceDir\FWPmcRegion.bin'),
('./FWUpdateImage/FWOemKmRegion/InputFile', '$SourceDir\FWOemKmRegion.bin'),
('./FWUpdateImage/FWPchcRegion/InputFile', '$SourceDir\FWPchcRegion.bin'),
('./FlashSettings/BiosConfiguration/TopSwapOverride', '256KB'),
])
return xml_change_list
def get_component_replace_list():
replace_list = [
# Path file name compress Key
('IFWI/BIOS/TS0/ACM0', 'Input/acm.bin', 'dummy', ''),
('IFWI/BIOS/TS1/ACM0', 'Input/acm.bin', 'dummy', ''),
]
return replace_list
|
import { ComponentFixture, TestBed } from '@angular/core/testing';
import { By } from '@angular/platform-browser';
import { ControlButtonComponent } from './control-button.component';
describe('ControlButtonComponent', () => {
let component: ControlButtonComponent;
let fixture: ComponentFixture<ControlButtonComponent>;
beforeEach(async () => {
await TestBed.configureTestingModule({
declarations: [ControlButtonComponent],
}).compileComponents();
});
beforeEach(() => {
fixture = TestBed.createComponent(ControlButtonComponent);
component = fixture.componentInstance;
fixture.detectChanges();
});
afterEach(() => {
jest.clearAllMocks();
});
it('should create', () => {
expect(component).toBeTruthy();
});
it('should emit an event on click', () => {
const buttonClickSpy = jest.spyOn(component.buttonClick, 'emit');
const button = fixture.debugElement.query(By.css('.button'));
button.triggerEventHandler('click', null);
expect(buttonClickSpy).toHaveBeenCalled();
});
});
#include <algorithm>
#include <iostream>
#include <string>
using namespace std;

int n, a;
bool f, vis[500005];
string s;

int main()
{
    cin >> s;
    cin >> n;
    int l = s.length();
    // Each query a toggles "reverse the segment [a-1 .. l-a]" on or off;
    // two reversals of the same segment cancel, so only parity is kept.
    for (int i = 1; i <= n; i++)
    {
        cin >> a;
        vis[a - 1] = !vis[a - 1];
    }
    // Prefix XOR over the first half: position i is swapped with its mirror
    // exactly when an odd number of active segments cover it.
    for (int i = 0; i < l / 2; i++)
    {
        if (vis[i]) f = !f;
        if (f) swap(s[i], s[l - i - 1]);
    }
    cout << s;
}
#include "solver.hpp"
equation_solver::equation_solver()
{
}
equation_solver::~equation_solver()
{
}
bool equation_solver::solve_equation(const int k, const int n)
{
// o_1, o_i_lesser, o_i_greater, the ldouble typedef and the result member are
// assumed to be declared in solver.hpp; they are not defined in this file.
double loop_result = (o_1)/(1-o_1);
for (int i=2; i<n; i++)
{
if(i<k)
{
loop_result = (1/(o_i_lesser/(1-o_i_lesser)))*loop_result;
}
else
{
loop_result = (1/(o_i_greater/(1-o_i_greater)))*loop_result;
}
}
this->result = loop_result;
return true;
}
ldouble equation_solver::get_result()
{
return this->result;
}
from django.core import mail
from django.test import TestCase
from rss_feed.feeds.tests.factories import FeedFactory
from ..utils import send_notification_to_user
class TestTasks(TestCase):
def test_send_notification_to_user(self):
feed = FeedFactory()
send_notification_to_user(feed)
self.assertEqual(len(mail.outbox), 1)
self.assertEqual(mail.outbox[0].subject, 'Feed Auto Update Failed!')
self.assertEqual(mail.outbox[0].to[0], feed.created_by.email)
self.assertIn("will not be auto updated due to some errors.", mail.outbox[0].body)
import java.util.*;
public class Main {
public static void main(String args[]) {
Scanner in=new Scanner(System.in);
int t=in.nextInt();
while(t>0){
int l=in.nextInt();
int r=in.nextInt();
String st=((2*l)>r)?"YES":"NO";
System.out.println(st);
t--;
}
}
}
package sk.skrecyclerview.bean;
/**
* Created by SK on 2017-05-05.
*/
public class HomepageItemEntity extends Entity {
public String img;
public String url;
public String title;
public String price;
public String linfo;
public String rinfo;
}
Michael Farris is a constitutional lawyer and Chancellor Emeritus at Patrick Henry College.
Phyllis Schlafly was the general of the army in the battle against the Equal Rights Amendment. I was a line officer called into duty when, in 1978, Congress purported to change the deadline for the ratification of that amendment.
On behalf of three Washington state legislators, I filed the first legal challenge to the constitutionality of the misuse of the Article V process by Congress. My lawsuit was later consolidated with a similar case filed by state legislators from Arizona and Idaho.
Phyllis and I traveled together throughout Washington state to urge support for my lawsuit. She helped raise the funds that allowed us to battle both the federal government and the National Organization for Women.
We won that case at the federal district court level. The Supreme Court granted review but put the case on ice until the second deadline expired. When 38 states failed to ratify by the date of the “extended” deadline, the Supreme Court ruled the whole matter to be moot.
We were together then. Now Phyllis argues against the use of Article V, while I am helping to lead the effort for the Convention of States Project, which seeks to use the power of the states to rein in Washington, D.C.’s, abuse of power.
I tell this story to illustrate two points in response to Phyllis Schlafly’s latest argument against a convention of states. First, knowledgeable conservatives can legitimately disagree on this issue. Phyllis Schlafly and I have been friends for well over 30 years and have worked together on countless causes. She is a true blue conservative, and I am a true blue conservative. We both have substantial experience in Article V issues – she as a political leader and advocate and I as a constitutional litigator.
I am not the only true conservative to disagree with Phyllis on this. Talk-show host Mark Levin and Sen. Tom Coburn are among the many conservative leaders who, like me, have begun to call for a convention of the states to stop Washington, D.C., from abusing its power.
When longtime conservatives disagree, it is time to listen to the merits of their arguments rather than making snap judgments when one side proclaims that no conservative can disagree.
Recounting our work together on the ERA litigation leads to my second point. Phyllis argues, “Article V doesn’t give any power to the courts to correct what does or does not happen.” Phyllis knows better. She was present in the federal courtroom in Boise, Idaho, when I (along with other members of our litigation team) argued that Congress had misused its Article V power. We won in court. And Phyllis and I both celebrated that victory. The courts can and have stopped the abuse of the power granted by Article V.
But I disagree with Phyllis on an even more fundamental issue. She argues that the Constitution was illegally adopted as the result of a runaway convention. This argument is an old one, but the complete history shows it to be an unjustified slander against the Constitution itself.
The anti-federalists invented this calumny against the Constitution, and the public schools have repeated it for so many generations that most Americans accept it as true. I am baffled by any friend of the Constitution who argues that it was illegally adopted. Why should an illegal document be defended at all?
Proponents of the “illegal Constitution theory” like to point to the phrase that called the Convention “for the sole and express purpose of revising the Articles of Confederation.” But the call for the Convention issued by Congress didn’t end with that phrase. The very same sentence also said that the purpose of the Convention was to “render the federal constitution adequate to the exigencies of Government & the preservation of the Union.” The call of the Convention used the terms “Articles of Confederation” and “federal constitution” interchangeably in the same sentence.
The Convention wasn’t limited to proposing one amendment or a thousand. It wasn’t required to send a series of amendments back for individual consideration. It was perfectly within the call of the Convention to put together a new package to “render the federal Constitution adequate” to save the nation. And that is what the participants did.
In order to negate the slander against the Constitution, it is incredibly important to understand the next two steps in the process of its adoption and to compare them with the requirements for amendments to the Articles of Confederation. Any change to the Articles required the approval of Congress and the ratification by all 13 state legislatures.
The Constitutional Convention proposed two important changes in this process. First, rather than having the Constitution approved by state legislatures, they recommended convening special ratification conventions in each state. Second, they recommended that the number of states required to approve the Constitution be changed from 13 to nine.
It is the change in the amendment process that receives the most attention from those who claim that the Constitution was illegally adopted.
But contrary to what you were taught in the public schools, that change in process did not happen without proper approval. Congress first approved the new process and sent this recommendation to the state legislatures. All 13 state legislatures approved the new process by calling for ratification conventions in their own states.
The requirements of the Articles of Confederation were meticulously followed. Congress and all 13 state legislatures approved the change in the ratification process. It is true that the Constitution was approved by this new process, but the change in process itself was first approved by use of the old rules under the Articles. To call the Convention a “runaway convention” is not just a myth – it is defamation against both the Constitution and the Founders.
I respectfully contend that it is time to stop demeaning the Founders and start using the tools they gave us to stop a true runaway government – the one that is functioning today in Washington, D.C.
At the Constitutional Convention, George Mason insisted that the states be given the power to propose amendments to the Constitution without needing approval from Congress. He argued that if the federal government abused its power – as he predicted it certainly would – it would never consent to any corrective action. Only the states could be trusted to rein in federal abuses of power.
Those who oppose the use of Article V have not proposed an equally effective solution to stop the abuse of power in Washington, D.C. Phyllis and I have labored side by side for decades, trying to elect conservatives and lobby Congress to follow the Constitution. While we have had successes here and there, it cannot be doubted that Washington, D.C.’s, abuse of power has grown dramatically – with no end in sight.
Which do we reasonably fear more? A runaway federal government on a path to destroy our liberty? Or a convention of the states given the clear and enforceable mandate to correct the abuses of power by the federal government?
Many knowledgeable conservative scholars have made the unimpeachable case that the checks and balances contained in Article V will prevent any mischief. For heaven’s sake, 38 states are required to ratify a new amendment. If we can’t get 13 states to stop something crazy, we are wasting our time trying to save the republic.
Day by day and year by year, Washington, D.C., is deliberately and persistently increasing its power. Washington, D.C., will never fix itself. The framers gave the states the power to amend the Constitution to limit the power of the federal government should it abuse the original document.
That abuse is more than apparent to any reasonable American. A convention of states under Article V is our only realistic hope of saving our liberty. I fear Washington, D.C., far more than I fear the Founders, the states and Article V.
ISLAMABAD, Mar 23 (APP): Speaker National Assembly Asad Qaiser and Deputy Speaker Qasim Khan Suri have greeted the nation on the auspicious occasion of Pakistan Day and said that 23 March has a special significance in the history of the sub-continent.
In his congratulatory message to the nation on the occasion of Pakistan Day, the Speaker said that 79 years ago, on this day the Muslims of South Asia, under the leadership of Quaid-e-Azam Muhammad Ali Jinnah in Lahore, resolved to work for an independent Muslim State.
The Lahore Resolution which later came to be known as Pakistan Resolution, imparted a new strength and motivation to the movement of independence and gave Muslims a new sense of purpose and direction.
Asad Qaiser said that Quaid-e-Azam embodied Islamic principles of patience and humanism which encouraged him to rally around, not only Muslim public opinion but also support from other minorities to demand and create a separate Muslim homeland.
“It is a tribute to his leadership and honesty of purpose that within short period of seven years he secured Pakistan against heavy odds through peaceful means,” the Speaker said.
Deputy Speaker Qasim Suri said that this resolution gave an ideal to the Muslims and united them for the attainment of a shared objective. It was an epoch-making event, which changed the course of history for the Indian Muslims.
Both the leaders assured the nation that the Parliament and the present democratic government are committed to preserving the nation's great cultural heritage and its distinct political and civilizational identity, in light of the vision of Quaid-e-Azam Muhammad Ali Jinnah.
package com.jutils.beanConvert;
import java.beans.BeanInfo;
import java.beans.IntrospectionException;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;
/**
 * Conversion between a JavaBean and a Map
 *
 * @author chenssy
 * @date 2016-09-24
 * @since 1.0.0
 */
public class BeanMapConvert {
/**
 * Convert a JavaBean to a Map
 *
 * @param object the bean to convert
 * @return a HashMap of String to Object
 *
 * @author chenssy
 * @date 2016-09-25
 * @since v1.0.0
 */
public static Map<String,Object> bean2MapObject(Object object){
if(object == null){
return null;
}
Map<String, Object> map = new HashMap<String, Object>();
try {
BeanInfo beanInfo = Introspector.getBeanInfo(object.getClass());
PropertyDescriptor[] propertyDescriptors = beanInfo.getPropertyDescriptors();
for (PropertyDescriptor property : propertyDescriptors) {
String key = property.getName();
// skip the "class" property
if (!key.equals("class")) {
// get the getter method for this property
Method getter = property.getReadMethod();
Object value = getter.invoke(object);
map.put(key, value);
}
}
} catch (Exception e) {
e.printStackTrace();
}
return map;
}
/**
 * Convert a Map to a JavaBean
 *
 * @param map
 *            the Map to convert
 * @param object
 *            the target JavaBean
 * @return java.lang.Object
 *
 * @author chenssy
 * @date 2016-09-25
 * @since v1.0.0
 */
public static Object map2Bean(Map map,Object object){
if(map == null || object == null){
return null;
}
try {
BeanInfo beanInfo = Introspector.getBeanInfo(object.getClass());
PropertyDescriptor[] propertyDescriptors = beanInfo.getPropertyDescriptors();
for (PropertyDescriptor property : propertyDescriptors) {
String key = property.getName();
if (map.containsKey(key)) {
Object value = map.get(key);
// get the setter method for this property
Method setter = property.getWriteMethod();
setter.invoke(object, value);
}
}
} catch (IntrospectionException e) {
e.printStackTrace();
} catch (InvocationTargetException e) {
e.printStackTrace();
} catch (IllegalAccessException e) {
e.printStackTrace();
}
return object;
}
}
#!/usr/bin/env python
# -*- coding: utf-8 -*-
class ShapeMismatchError(RuntimeError):
r"""Raised on failure of :class:`combustion.nn.MatchShapes`."""
import Menus from "./Menus";
import { signOut } from "next-auth/react";
import { useState } from "react";
const Sidebar = () => {
const [isOpen, setIsopen] = useState(false);
return (
<div className="w-full h-full bg-primary relative z-50">
<div
className={`${
!isOpen && "-top-full"
} transition-all duration-300 fixed lg:relative lg:top-0 bg-primary grid grid-rows-[min-content,1fr,min-content] h-screen w-screen lg:h-full lg:w-full`}
>
<div className="lg:mt-3 p-5">
<h3 className="text-xl font-bold text-white">Dashboard</h3>
<span className="text-sm text-white">Role : Admin</span>
</div>
<div className="mt-10">
<Menus />
</div>
<div className="flex place-content-center">
<button
onClick={() => signOut()}
className="my-2 py-3 px-4 text-white"
>
Keluar
</button>
</div>
</div>
<div className="lg:hidden flex items-center justify-end text-white py-8 px-10">
<button
className="fixed p-1 active:outline-white"
onClick={() => setIsopen(!isOpen)}
>
<svg
className="w-6 h-6"
fill="none"
stroke="currentColor"
viewBox="0 0 24 24"
xmlns="http://www.w3.org/2000/svg"
>
<path
strokeLinecap="round"
strokeLinejoin="round"
strokeWidth="2"
d="M4 6h16M4 12h16M4 18h16"
></path>
</svg>
</button>
</div>
</div>
);
};
export default Sidebar;
Most Schools Use Kindergarten Entry Assessments; Do They Help?
Kindergarten entry assessments—quick tests that are given to incoming students and are intended to help teachers tailor their instruction to a child's specific needs—are used in close to three-quarters of the nation's public schools.
But using those tests doesn't appear to have a significant impact on how well a student is reading or doing math in the spring of the kindergarten year, raising the question of whether such tests are serving their intended purpose.
These findings are part of a federally funded report from the Regional Educational Laboratory Northeast & Islands, "How Kindergarten Entry Assessments are Used in Public Schools and How They Correlate with Spring Assessments." There are 10 regional educational laboratories, and the Northeast & Islands branch covers six New England states, New York, Puerto Rico, and the U.S. Virgin Islands.
The researchers used a longitudinal study of data gathered on children who started school in the 2010-11 school year, and the schools those children attended. At that time, 73 percent of public schools that offered kindergarten said they gave incoming students these entry tests. That percentage could be even higher now, because the U.S. Department of Education has been nudging states to create or improve entry assessments with the goal of getting children off to a strong academic start. In 2013, for example, the Education Department distributed $15 million to several states to help them create or improve their kindergarten entry tests.
The researchers found that of the schools that were using these tests, the vast majority—93 percent—said they were doing so in order to "individualize instruction." Forty-one percent used them to help create class assignments, nearly a quarter of schools reported using them to advise parents on delaying kindergarten entry, and 16 percent used them as screening tools for children who were younger than the kindergarten cutoff.
And, despite schools saying that the intent of the tests is to improve instruction, their use showed no clear connection to a child's academic performance.
For all their widespread use, these readiness tests are not universally supported. One concern is that they could be used to make the case to delay a child's entry into kindergarten even if the child meets the age cutoff, which my colleague Catherine Gewertz wrote about last year. Twenty-four percent of schools reported using the tests to support holding a child out of school for a year.
So what to make of this? The authors of the report say that states must be careful as they roll out these systems that teachers understand how to administer them correctly and to interpret the results.
"In theory the impact of kindergarten entry assessments on student achievement depends on several components working together successfully," the report said. Those components include reliable tests, administered correctly, by teachers who can understand the results and make quick adjustments on the fly when necessary.
"In practice, schools operate at varying levels of quality in each link in this chain," the authors wrote. |
The Qasr Al-Yahud baptism site in the Jordan Valley. Photo: High Contrast via Wikimedia Commons.
JNS.org – An Israeli soldier was lightly wounded in an attempted car-ramming terror attack near the Qasr Al-Yahud baptism site in the Jordan Valley on Friday.
The incident came a day after hundreds of Orthodox Christian pilgrims re-enacted the baptism of Jesus at the same spot on the Jordan River. The wounded soldier was treated at the scene by a Magen David Adom emergency medical team and did not require hospitalization.
The Palestinian driver was quickly detained. The IDF classed the incident as an attempted terrorist attack. According to reports, the driver had approached the baptism site in his vehicle and was stopped by soldiers manning a security checkpoint. The driver demanded to be let in and an altercation erupted, with the terrorist refusing to heed the soldiers’ instructions and driving through the checkpoint. He then hit a combat soldier patrolling the site.
Laboratory strategic defense initiatives against transmission of human immune deficiency virus in blood and blood products. Serological methods based on enzyme linked immunosorbent assay (ELISA) and Western blot tests for detecting the presence of antibodies against the human immune deficiency virus are the standard techniques for identifying infected blood donors. However, these tests could not detect infected seronegative donors who were in the window period at the time of donation. Such donors can be identified by more elaborate methods including antigen detecting ELISA and polymerase chain reaction, which can detect viral antigens and nucleic acids in infected donor blood even in the window period. In addition, the process of donor selection whereby individuals who were at high risk for HIV infections were excluded from the donor panel had substantially reduced the risk of window period donation. Furthermore, in order to ensure greater safety, transfusion centers nowadays undertake additional measures in the form of virucidal techniques such as the use of heat, detergents and photochemical agents to treat blood and blood products. Despite all of these measures, a risk-free transfusion was not practically achievable. However, risk-free transfusion is now possible with the introduction of recombinant blood products, the use of which is severely limited by their cost. Nonetheless, a risk-free transfusion is still achievable at relatively little cost by transfusing suitably eligible patients with their own blood through the autologous blood transfusion program. Antibody testing is virtually the only method currently available in Nigerian blood banks. There is the need to reactivate and expand the scope of our National Blood Transfusion Service in order to make our blood and products safer.
NH7 Weekender
History
The genesis of the festival dates back to 2009, when British music executive Stephen Budd was in Mumbai judging the Indian leg of the British Council’s Young Music Entrepreneur Awards and met Vijay Nair, CEO of Only Much Louder. Nair went to the UK the following year and was introduced by Budd to Martin Elbourne, best known for booking the Glastonbury Festival. The three of them subsequently came together and created the first NH7 Weekender, which was held in Pune in December 2010.
In 2018, several senior management executives at Only Much Louder were accused of sexual harassment by various female employees and associates. This led to a few artistes pulling out of the event, and several others taking a public stand against it.
NH7 Weekender 2010
The festival's debut edition was held from December 10–12 at Koregaon Park in Pune. The lineup hosted some of the best Indian acts such as Zero, Swarathma, Pentagram and Blackstratblues along with international acts like Asian Dub Foundation and The Magic Numbers. The festival had four stages and hosted over 35 artists across three days.
NH7 Weekender 2011
The second edition of NH7 Weekender was held between 18–20 November 2011. It took place at the Laxmi Lawns near Magarpatta City in Pune and had progressed to five stages as opposed to the four in the previous year. Over 25,000 people attended the three-day event in Pune, as compared to the 10,000 attendees in the festival's first outing. Grammy winner Imogen Heap and the British electronic music act Basement Jaxx along with BBC DJs Bobby Friction and DJ Nihal were part of the international artists’ lineup. Midival Punditz, The Raghu Dixit Project, Indian Ocean, Swarathma and other notable Indian acts also performed at the festival.
NH7 Weekender 2012
In 2012 and in its third year, NH7 Weekender expanded to Delhi and Bangalore. The festival, now comprising close to 200 artists, 60 festival pre-party gigs, and six stages, took place in Delhi, on October 13–14, at the Buddh International Circuit in Noida; in Pune from November 2–4 at Amanora Park Town; and the final leg in Bangalore, at the Embassy International Riding School, on 15–16 December. The biggest international acts included the likes of Karnivool, Seun Kuti and Egypt 80, Buraka Som Sistema, Megadeth, Bombay Bicycle Club performing a special acoustic set, Anoushka Shankar, Jinja Safari, Big Scary, and Fink among others.
2012 onward, Sounds Australia, in partnership with OML, brought down The Aussie BBQ to NH7 Weekender. As a result, the 2012 edition saw Australian acts such as indie outfit Big Scary, DJ duo The Aston Shuffle, afro-pop rhythms outfit Jinja Safari and indie pop band Sheppard, and rockers Karnivool as one of the headlining acts.
NH7 Weekender 2013
In 2013, the festival travelled to four cities; Kolkata (Dec 14-15) was added to the list, in addition to Pune (Oct 28-29), Bangalore (Nov 23-24) and Delhi NCR (Nov 30-Dec 1). It featured London electronic music act Chase and Status and hosted a range of other international and Indian musicians. Apart from Chase & Status, the stages were also headlined by Dutch metalcore band Textures and British electronic duo Simian Mobile Disco. MUTEMATH, Shankar Tucker, Meshuggah, TesseracT, Skindred, Noisia, Benga, Dry the River and Irish post-rockers And So I Watch You From Afar were some of the other international acts that performed in 2013.
NH7 Weekender 2014
For the 2014 edition, the festival travelled to four cities in the span of a month. Starting off with Kolkata (Nov 1-2), the festival moved to Bangalore (Nov 8-9), Pune (Nov 21-23) and finished its journey in Delhi NCR (Nov 29-30). English indie rockers The Vaccines, heavy-metal band Fear Factory, MUTEMATH, Cloud Control, Dinosaur Pile-Up, Motopony, Luke Sital-Singh, As Animals, Mr. Woodnote & Lil Rhys, Amit Trivedi, Fossils, and many more Indian and international artists performed at the festival. Apart from the music, over the years, the festival has also hosted a vibrant flea market, live graffiti setups, an elaborate food court, and several interesting art installations. In the 2014 edition, artist Shilo Shiv Suleman’s Pulse And Bloom installation, which was also part of the Burning Man festival in the same year, was showcased on the festival grounds.
NH7 Weekender 2015
In 2015, another city, Shillong (Oct 23-24), was added to the NH7 Weekender's venue lineup. After Shillong, the festival travelled to Kolkata from Oct 31-Nov 1, followed by Delhi on Nov 28-29, and in Pune and Bangalore from Dec 4-6. This was the first year when the festival took place in two cities, Pune and Bangalore, on the same dates. Many prolific Indian artists like Niladri Kumar, Baiju Dharmajan, L. Subramaniam, and A. R. Rahman performed at the festival, alongside some of the biggest international acts such as Mogwai, Megadeth, Mark Ronson, Rodrigo y Gabriela, Flying Lotus, SBTRKT (DJ Set), The Wailers, and more.
NH7 Weekender 2016
In 2016, Shillong kicked off the Weekender (Oct 21-22) and saw close to 40,000 people over the weekend. The Weekender traveled to a new city, Hyderabad (Nov 5-6) before culminating in a new venue in Pune, Life Republic in Hinjewadi. The three cities saw a cumulative attendance of over 110,000. 2016 saw names such as Steven Wilson, Farhan Akhtar Live, Shankar Mahadevan, José González, Anoushka Shankar, Patrick Watson, The Joy Formidable, Skyharbor, Nucleya, Dualist Inquiry, Thaikkudam Bridge and several others. This year also saw the festival travel to five cities as one-day 'Express' Editions: Kolkata, Puducherry, Mysore, Nagpur and Jaipur.
NH7 Weekender 2017
The eighth edition of the festival began with the festival's return to the state of Meghalaya between Oct 27 and 28. The edition saw over 25,000 attendees, before traveling back to its home city, Pune from Dec 8-10, which saw a footfall of 45,000. Both cities saw new venues - Wenfield, The Festive Hills, Meghalaya and Mahalakshmi Lawns, Nagar Road, Pune respectively. 2017 saw several artists across genres, including American guitar virtuoso Steve Vai, Dutch metal band Textures, American metalcore act The Dillinger Escape Plan, American dreampop band CAS, punk legend Marky Ramone, Bollywood composers Ram Sampath and Vishal Bhardwaj, Indian electro-rock icons Pentagram and British rapper Madame Gandhi. The 2017 editions were notable for a number of reasons, including hosting the farewell tours of The Dillinger Escape Plan and Textures, the latter of which played its last ever show at the Pune edition. Steve Vai made his India debut, playing at both editions. Fans of homegrown music had plenty to cheer about - Pentagram made its long-awaited return to stage, as did mathcore band Scribe, fronted by original vocalist and ad-film maker, Vishwesh Krishnamoorthy. Also, for the first time at a major music festival in India, there was a strong element of comedy, with over 20 of the country's top stand-up and sketch comedians performing on stage in Pune - including Biswa Kalyan Rath, Kanan Gill, All India Bakchod members Rohan Joshi, Tanmay Bhat and Ashish Shakya, Kunal Kamra, Azeem Banatwalla and musical-satire trio Aisi Taisi Democracy.
There were also nine one-day 'Express' Editions for the festival, up from the previous season's five editions. The festival traveled to Kolkata, Bangalore, Jaipur, Puducherry, Indore, Kochi, Goa, Hyderabad and Mysore.
NH7 Weekender 2019
The 10th edition of NH7 Weekender will be held at Jaintia Hills in Meghalaya (November 1-2), and Pune (November 29-December 1). In August 2019, to celebrate 10 editions of the music festival in Pune, where the first NH7 Weekender was held, pre-sale tickets were put on sale at the same prices they were sold at in 2010 — ₹750 (under 21 season tickets) and ₹1500 (regular season tickets). The artist lineup for the 2019 Pune edition will have Opeth, Chet Faker, Kodaline and other artists from Comicstaan.
Validity and reliability of the AD8 informant interview in dementia Objective: To establish the validity, reliability, and discriminative properties of the AD8, a brief informant interview to detect dementia, in a clinic sample. Methods: We evaluated 255 patient–informant dyads. We compared the number of endorsed AD8 items with an independently derived Clinical Dementia Rating (CDR) and with performance on neuropsychological tests. Construct and concurrent validity, test–retest, interrater and intermodal reliability, and internal consistency of the AD8 were determined. Receiver operator characteristic curves were used to assess the discriminative properties of the AD8. Results: Concurrent validity was strong, with AD8 scores correlating with the CDR (r = 0.75, 95% CI 0.63 to 0.88). Construct validity testing showed strong correlation between AD8 scores, CDR domains, and performance on neuropsychological tests. The Cronbach alpha of the AD8 was 0.84 (95% CI 0.80 to 0.87), suggesting excellent internal consistency. The AD8 demonstrated good intrarater reliability and stability (weighted kappa = 0.67, 95% CI 0.59 to 0.75). Both in-person and phone administration showed equal reliability (weighted kappa = 0.65, 95% CI 0.57 to 0.73). Interrater reliability was very good (intraclass correlation coefficient = 0.80, 95% CI 0.55 to 0.92). The area under the curve was 0.92 (95% CI 0.88 to 0.95), suggesting excellent discrimination between nondemented individuals and those with cognitive impairment regardless of etiology. Conclusion: The AD8 is a brief, sensitive measure that validly and reliably differentiates between nondemented and demented individuals. It can be used as a general screening device to detect cognitive change regardless of etiology and with different types of informants.
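The internal-consistency figure reported above (a Cronbach alpha of 0.84) can be illustrated with a short computation. This is only a sketch, not the study's analysis: the eight simulated 0/1 columns stand in for the eight AD8 items, and the responses are randomly generated, not real data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items matrix of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulate 100 respondents x 8 binary items that share a per-respondent
# "severity", so the items are correlated, as AD8 items would be.
rng = np.random.default_rng(0)
severity = rng.random((100, 1))
responses = (rng.random((100, 8)) < severity).astype(float)
print(round(cronbach_alpha(responses), 2))  # high for strongly correlated items
```

For uncorrelated items the statistic falls toward zero; it approaches 1 as the items become interchangeable, which is why 0.84 reads as excellent consistency.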
Detection of Black Hole Attack on Max LEACH Protocol Wireless networks are one of the branches of telecommunication science in the present age, the effects of which are clearly seen in daily life. In a special type of these networks, a set of sensors are connected to each other to collect information in a wireless environment; these are known as wireless sensor networks. Because security and privacy are so important in many of the proposed applications for wireless sensor networks, this type of network is usually set up to collect records from an unsafe environment. Almost all WSN security protocols emphasize that an attacker can generally control a sensor node by direct physical access. The advent of sensor networks as one of the major technologies of the future poses various challenges for researchers. The black hole attack is very common because it drops packets and disrupts the communication process. There are different strategies to detect a black hole attack in a network, such as measuring residual energy or the packet delivery ratio; and because a black hole node identifies itself as a cluster head (CH), another strategy can be based on how many times a node is chosen as CH. In this paper we first designate 10 nodes as black hole nodes and then, based on residual energy, detect the black hole attack on the Max LEACH protocol.
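One of the detection strategies the abstract names, comparing packet delivery ratios, can be sketched in a few lines of Python. Everything below is illustrative: the node IDs, packet counts and the 0.5 threshold are assumptions made for the example, not values from the paper.

```python
def suspected_black_holes(stats, threshold=0.5):
    """stats maps node id -> (packets_received, packets_forwarded).

    A black-hole cluster head advertises itself as a route and then drops
    traffic, so its forwarded/received ratio collapses toward zero.
    """
    suspects = []
    for node, (received, forwarded) in stats.items():
        ratio = forwarded / received if received else 1.0
        if ratio < threshold:
            suspects.append(node)
    return suspects

# Hypothetical per-round counters collected from three cluster heads.
stats = {"CH1": (120, 118), "CH2": (95, 4), "CH3": (60, 55)}
print(suspected_black_holes(stats))  # → ['CH2']
```

In a real deployment the counters would come from overhearing neighbours' transmissions, and the threshold would be tuned against normal wireless loss rates.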
You might remember that Sony, a few Christmases back, did a great job of promoting two of its strongest brands by offering a free PlayStation 3 with every Bravia TV purchase. Now LG and Microsoft are teaming up to offer a similar deal. Following the release of the Xbox One, LG is offering a free console to anyone who purchases a TV from its select LG ULTRA HDTV range.
The deal is valid until January 6, 2014.
This cross promotion, however, is limited only to LG's 4K televisions, which means you'll really have to break the bank if you want to get a free Xbox One. There are three televisions to choose from. The 55LA9700 retails at around $4499.99, and the larger version of that TV will set you back $6499.99.
The cheapest TV you can purchase whilst still receiving a free Xbox One is the 55LA9650, which retails at $3499.99.
And it is somewhat ironic that you are required to buy a 4K television for a console that seems to struggle to output its games at 1080p. Sure, there are some launch titles running at full HD — namely the incredible-looking Forza Motorsport 5 — but I don't foresee that many Xbox One games running at the 3840 × 2160 resolution anytime soon.
package com.tipdm.framework.common.controller.base;
import com.tipdm.framework.common.utils.StringKit;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.util.HtmlUtils;
import java.beans.PropertyEditorSupport;
/**
* Created by zhoulong on 2018/8/29.
*/
public class StringEditor extends PropertyEditorSupport {

    private final static Logger LOG = LoggerFactory.getLogger(StringEditor.class);

    @Override
    public void setAsText(String text) throws IllegalArgumentException {
        if (StringKit.isNotBlank(text)) {
            // XSS filtering: escape HTML special characters
            text = HtmlUtils.htmlEscape(text, "utf-8");
        }
        setValue(text);
    }

    @Override
    public String getAsText() {
        Object value = getValue();
        if (value != null) {
            return value.toString();
        } else {
            return "";
        }
    }
}
|
SAT0055 JOINT-SPECIFIC TNF RESPONSE OF SYNOVIAL FIBROBLASTS IN RHEUMATOID ARTHRITIS Background: Synovial fibroblasts (SF) in rheumatoid arthritis (RA) play a major role in chronic inflammation and joint destruction. We showed previously that their epigenetic and transcriptional profile as well as their function vary significantly between different joints. However, it is unknown whether there is a joint-specific inflammatory response of SF. Objectives: To compare transcriptional changes between SF from hand, shoulder and knee joints after stimulation with the pro-inflammatory cytokine TNF. Methods: We cultured SF from synovial tissues of hand (metacarpophalangeal and proximal interphalangeal joints), shoulder and knee joints of RA patients. After stimulation of the SF with 10 ng/ul TNF for 24 h (n=2 for each joint location), RNA was sequenced on the Illumina HiSeq 4000 platform. We analyzed differential gene expression with R v3.5.2 and the CuffDiff and DESeq2 packages. Results: As shown in Figure 1, principal component analysis showed evident separation by joint location and condition (unstimulated vs. TNF-stimulated). In a sample-to-sample distance matrix, hand samples (TNF-stimulated and unstimulated) grouped apart from shoulder and knee samples (Figure 2). Of the regulated genes, 26% appeared in all three joint locations and 56% overlapped between knee and shoulder, but only 30% overlapped between hand and shoulder and between hand and knee SF. Similarly, enriched pathways also differed, particularly between hands and the more proximal joints. Defense response (p=8.11x10-23) and cytokine activity (p=1.27x10-21) were the most significantly enriched gene ontology terms for genes regulated in TNF-stimulated hand SF.
These processes were less prominently enriched in stimulated knee (1.58x10-15 and 8.84x10-08) and shoulder SF (2.02x10-11 and 3.20x10-07), where cell cycle (p=2.26x10-30 in knee and p=2.18x10-32 in shoulder) and DNA packaging complex (p=4.63x10-49 in knee and p=8.91x10-36 in shoulder) were the most significantly enriched gene ontology terms. These processes were not significantly enriched in stimulated hand SF. Conclusion: SF from different joints in RA react differently to TNF stimulation. In particular, hand SF reacted differently to TNF stimulation than shoulder and knee SF, which appeared more similar to each other. These qualitative and quantitative differences of the inflammatory response might translate into joint-specific pathotypes of synovitis with distinct therapeutic responses and disease outcomes.
Figure 1. Principal component analysis of TNF-stimulated and unstimulated synovial fibroblasts.
Figure 2. Sample-to-sample distance matrix of TNF-stimulated and unstimulated synovial fibroblasts.
Acknowledgement: This work was supported by the Institute for Rheumatic Research (IRR), Epalinges, Switzerland. Disclosure of Interests: Raphael Micheroli: None declared, Amanda McGovern: None declared, Kerstin Klein: None declared, Xiangyu Ge: None declared, Paul Martin: None declared, Oliver Distler Grant/research support from: Prof. Distler received research funding from Actelion, Bayer, Boehringer Ingelheim and Mitsubishi Tanabe to investigate potential treatments of scleroderma and its complications, Consultant for: Prof. Distler has/had a consultancy relationship within the last 3 years with Actelion, AnaMar, Bayer, Boehringer Ingelheim, ChemomAb, espeRare foundation, Genentech/Roche, GSK, Inventiva, Italfarmaco, iQvia, Lilly, medac, MedImmune, Mitsubishi Tanabe Pharma, Pharmacyclics, Novartis, Pfizer, Sanofi, Serodapharm and UCB in the area of potential treatments of scleroderma and its complications. In addition, he has/had a consultancy relationship within the last 3 years with A. Menarini, Amgen, Abbvie, GSK, Mepha, MSD, Pfizer and UCB in the field of arthritides and related disorders, Mojca Frank-Bertoncelj: None declared, Stephen Eyre: None declared, Caroline Ospelt: None declared
The present invention relates generally to hardware and software additions to an LDPC (Low Density Parity Check) decoder to implement a post-processing algorithm, and more particularly to additions which inject noise into the decoder to help it converge to a valid codeword and thereby lower the error floor.
Some Low Density Parity Check (LDPC) codes show an “error floor”, which is a reduction in the slope of the BER (Bit Error Rate) vs. channel SNR (signal-to-noise ratio) curve at low BER levels. This implies that the bit error rate at a given signal-to-noise ratio is higher than expected. This is undesirable for wireless backhaul customers. (The term “wireless backhaul” refers to communication links between cellular base-stations. It is a technology for carrying communication traffic among distributed sites and is also used for two-way data transmission lines. More generally, error floor issues are a concern in any system requiring very low bit error rates.)
Post-processing is a technique that has been used to resolve a type of decoding errors called “trapping set errors”, which dominate in the error floor region. A trapping set error causes the decoder to be trapped in a local minimum with respect to a “cost function” that characterizes the quality of the decoder output. This implies the decoder did not find the global minimum of the cost function and was thus unable to converge to a valid codeword. Post-processing typically resolves trapping set errors by injecting noise into the LDPC decoder to break away from the local minimum (in this case, to find the global minimum point of a cost function which is also the global optimum point) and allow the decoder to converge.
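The idea of perturbing a stuck decoder out of a local minimum can be illustrated with a toy one-dimensional cost function. This sketch is only an analogy for the noise-injection step — a real post-processor perturbs LLR messages inside the decoder, not a single scalar — and the function, step size, noise level, and retry count below are all arbitrary illustrative choices:

```python
import random

def cost(x):
    # Toy cost with a shallow local minimum near x = 1.13
    # and the global minimum near x = -1.30
    return x**4 - 3 * x**2 + x

def descend(x, steps=200, lr=0.01):
    # Plain gradient descent; stands in for iterative BP decoding
    for _ in range(steps):
        x -= lr * (4 * x**3 - 6 * x + 1)
    return x

random.seed(0)
x = descend(1.5)            # converges to the shallow local minimum
best = x
for _ in range(20):         # "post-processing": kick the state with noise, retry
    kick = max(-3.0, min(3.0, x + random.gauss(0, 1.5)))
    candidate = descend(kick)
    if cost(candidate) < cost(best):
        best = candidate
print(round(x, 2), round(best, 2))
```

Plain descent from x = 1.5 settles in the shallow basin; the noisy restarts usually land at least one trial in the deeper basin, which is the analogue of the decoder escaping a trapping set and converging to a valid codeword.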
In information theory, a low-density parity-check (LDPC) code is a linear error-correcting code used when transmitting a message over a noisy transmission channel. An LDPC code is constructed using a sparse bipartite graph. (A bipartite graph is a graph whose vertices are divided into two independent sets. In a sparse bipartite graph there are relatively few edges or connections between the two sets.) LDPC codes are capacity-approaching codes, which means that practical constructions exist that allow the noise threshold to be set very close, or even arbitrarily close on the canonical binary erasure channel (BEC), to the theoretical maximum (the Shannon limit) for a symmetric memoryless channel. (The binary erasure channel is a common model of a communication channel.) The noise threshold defines an upper bound for the channel noise, up to which the probability of lost information can be made as small as desired. Using iterative BP (belief propagation) techniques, LDPC codes (also known as Gallager codes) can be decoded in time linear in their block length. To form a codeword, the K input data bits are repeated and distributed to a set of constituent encoders. (A “frame” is equal to a codeword. Encoding means taking data bits and computing the corresponding parity bits. These are concatenated together to form the codeword.) The constituent encoders typically are accumulators, and each accumulator is used to generate a parity symbol. A single copy of the original data is transmitted with the parity bits (P) to make up the code symbols. The S bits from each constituent encoder are discarded. The foregoing encoding process is straightforward. The difficult problems lie in practical implementation of the decoding process. A brief description of the decoding process is given below.
The forward error-correction (FEC) requirements for “next-generation” wireless backhaul systems typically include a BER (Bit Error Rate) lower than 10−12, a frame error rate lower than 10−10, a network throughput rate greater than 1 gigabyte per second, low power consumption, and low area in a silicon implementation. LDPC codes are becoming a very good candidate to meet the foregoing requirements, and have demonstrated a capability to provide performance very close to the Shannon limit when decoded with a low-complexity iterative decoding algorithm. An LDPC code is defined by a sparse m×n parity check matrix H, where “n” represents the number of bits in the codeword and “m” represents the number of parity checks. A parity check matrix or H matrix contains “1”s and “0”s. Each row of the H matrix represents a parity constraint. For example, one row of the H matrix has n entries in total, with some entries being “1” and others being “0”. To define the parity constraint of this row, first note the positions of the “1” entries. Bits in the codeword in these positions must sum up to even parity. In this way, each row of the H matrix defines a different parity constraint involving a different set of bits in the codeword. The H matrix of an LDPC code can be illustrated graphically using a “bipartite graph” or “factor graph”, where each bit is represented by a variable processing node (VN) and each check is represented by a check node (CN). A variable node is also called a “bit node” or simply a “bit”, and these terms are used interchangeably. An “edge” exists between a variable node “i” and a check node “j” if and only if H(j,i)=1, where H(j,i)=1 means the element on the jth row and ith column of the parity check matrix H equals 1. Therefore, the positions of “1”s in the H matrix show the connections between VNs and CNs.
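To make the H-matrix and factor-graph notation concrete, here is a small sketch. The toy matrix below is made up for illustration (it is far too small and dense to be a useful LDPC code), but the Row[j]/Col[i] connectivity sets and the even-parity test mirror the definitions in the text:

```python
# Toy parity-check matrix H (4 checks x 8 bits), made up for illustration;
# real LDPC matrices are far larger and much sparser.
H = [
    [1, 1, 0, 1, 0, 0, 0, 1],
    [0, 1, 1, 0, 1, 0, 1, 0],
    [1, 0, 0, 1, 1, 1, 0, 0],
    [0, 0, 1, 0, 0, 1, 1, 1],
]

# Factor-graph connectivity, matching the text's notation:
# Row[j] = variable nodes in check j, Col[i] = checks touching variable i
Row = [[i for i, h in enumerate(row) if h] for row in H]
Col = [[j for j in range(len(H)) if H[j][i]] for i in range(len(H[0]))]

def satisfies_all_checks(bits):
    """A word is a codeword iff every row's selected bits sum to even parity."""
    return all(sum(bits[i] for i in Row[j]) % 2 == 0 for j in range(len(H)))

print(Col[1])                          # [0, 1]: variable node 1 connects to checks 0 and 1
print(satisfies_all_checks([0] * 8))   # True: the all-zero word satisfies any H
print(satisfies_all_checks([1] * 8))   # True here, since every row of this H has even weight
```

Each "1" in H contributes one edge of the bipartite graph, so the Row and Col lists are simply the row-wise and column-wise positions of the nonzero entries.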
An LDPC code is decoded using a BP (belief propagation) algorithm that operates on the factor graph. In a BP (Belief Propagation) decoding, “soft messages” representing reliabilities are exchanged between variable nodes (VNs) and check nodes (CNs) to compute the likelihood of whether a bit is 1 or 0. (The “reliabilities” indicate the current belief that a given bit is 1 or 0.) The BP algorithm has two common implementations, including a precise “sum-product algorithm” and an approximate “min-sum algorithm”. The min-sum algorithm is simpler to implement and, with suitable modifications, provides excellent decoding performance.
As an example, a binary phase-shift keying (BPSK) modulation and an additive white Gaussian noise (AWGN) communication channel are assumed. The binary values 0 and 1 representing data bits are respectively mapped to 1 and −1 before transmission over the channel. The min-sum decoding can be explained using the factor graph. In the first step of decoding, each variable node xi is initialized with the subsequently described prior log-likelihood ratio (LLR) based on the received channel output yi. After initialization, variable nodes send the prior LLRs to the check nodes along the edges defined by the factor graph. The LLRs are re-computed based on parity constraints at each check node, and then are returned to the variable nodes. Each variable node then updates its decision based on a “posterior” LLR that is computed as the sum of the prior LLRs from the channel and the LLRs received from the check nodes. One round of message exchange between variable nodes and check nodes completes one iteration of decoding. To start the next iteration, each variable node passes the updated LLRs to the check nodes.
The LLRs passed between variable nodes and check nodes are known as “variable-to-check messages (L(qij))” and “check-to-variable messages (L(rij))”, where “i” is the variable node index and “j” is the check node index. In representing the connectivity of the factor graph, Col[i] refers to the set of all the check nodes “connected” to the “i”th variable node and Row[j] refers to the set of all the variable nodes “connected to” the “j”th check node. (The term “connected” refers to the variable nodes and check nodes that exchange messages with each other, i.e., communicate with each other.) A “hard decision” can optionally be made in each iteration based on the above mentioned posterior LLR. (A hard decision can be checked after each iteration, or some iterations can be run first and then checked once afterward.) The iterative decoding is allowed to run until the hard decisions satisfy all of the parity check equations or when an upper limit on the number of iterations is reached.
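The message-passing loop described above can be sketched compactly. The two-check, four-bit graph below is made up for illustration (it is not a real LDPC construction and even contains length-4 cycles), but the variable-to-check messages L(q_ij), the min-sum check-to-variable rule for L(r_ji), and the posterior hard decision follow the text:

```python
# Toy 2-check, 4-bit graph: check 0 covers bits {0,1,2}, check 1 covers {1,2,3}.
Row = [[0, 1, 2], [1, 2, 3]]                  # Row[j]: variable nodes in check j
Col = {0: [0], 1: [0, 1], 2: [0, 1], 3: [1]}  # Col[i]: checks touching variable i

def min_sum_decode(prior_llr, iterations=10):
    # Check-to-variable messages L(r_ji), initialized to zero
    r = {(j, i): 0.0 for j, row in enumerate(Row) for i in row}
    for _ in range(iterations):
        # Variable-to-check L(q_ij): prior plus incoming messages from the
        # other checks (the message back to check j excludes j's own input)
        q = {(i, j): prior_llr[i] + sum(r[(k, i)] for k in Col[i] if k != j)
             for i in Col for j in Col[i]}
        # Check-to-variable: sign product times minimum magnitude (min-sum rule)
        for j, row in enumerate(Row):
            for i in row:
                others = [q[(k, j)] for k in row if k != i]
                sign = -1.0 if sum(v < 0 for v in others) % 2 else 1.0
                r[(j, i)] = sign * min(abs(v) for v in others)
    # Posterior LLR = prior + all check messages; hard decision: LLR < 0 -> bit 1
    posterior = [prior_llr[i] + sum(r[(j, i)] for j in Col[i]) for i in sorted(Col)]
    return [1 if p < 0 else 0 for p in posterior]

# Channel priors: bits 0, 2, 3 confidently 0; bit 1 weakly looks like a 1.
# Both parity checks pull the weak bit back toward 0.
print(min_sum_decode([4.0, -0.5, 4.5, 5.0]))  # [0, 0, 0, 0]
```

A production decoder would also terminate early once the hard decisions satisfy all parity checks, exactly as the text describes; the fixed iteration count here keeps the sketch short.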
It is well-known that LDPC decoders suffer from the previously mentioned error floor problems. The post-processing approach and hardware are designed to improve the error floor. Over the past decade, it has been found that the excellent performance of LDPC is only observed up to a moderate bit error rate (BER), leading to the previously mentioned “error floor”. The error floor phenomenon can be characterized as an abrupt slope decrease of a code's performance curve past a certain moderate BER level. Solving the error floor problem has been a critical issue for both coding theorists and practitioners, since more and more systems, such as data storage devices and high-speed communications systems, require extremely low error rates.
Solving the error floor problem has been an important focus of research in coding theory and practical decoder designs. Past experiments have shown that error floors can be caused by various practical decoder implementations. Improved algorithm implementation and better numerical quantization can suppress these effects. However, error floors are fundamentally attributed to non-codeword “trapping sets” associated with LDPC codes. A trapping set refers to a set of bits in a codeword which, when received incorrectly, causes the belief propagation (BP) decoding algorithm to be trapped in the above mentioned “local minimum”. A trapping set can be thought of as a “special combinatorial structure” involving cycles in the LDPC bipartite graph that reinforces incorrect bits during BP decoding.
Much work has been done on lowering the error floor by improving code constructions using methods such as progressive edge growth (PEG), cycle avoidance, code doping, and cyclic lifting. Although these methods are effective, the resulting code structures often complicate the decoder hardware design. An alternative way is to improve the BP decoding algorithm by methods such as scaling, offsetting, or trial and error, but these methods are mostly based on heuristics and their effectiveness is limited. Some of these methods even require extra steps that are incompatible with BP decoding, leading to a higher complexity and much longer latency (the time it takes for the decoder to produce the decoded codeword). A theoretically more effective approach is to target the combinatorial structures of absorbing sets to modify the decoding algorithm, an example of which is the bi-mode syndrome erasure decoding algorithm, although it sometimes falls short when the erasure decoding runs into its own local minima. For example, see “An Efficient 10 GBASE-T Ethernet LDPC Decoder Design with Low Error Floors” by Zhengya Zhang et al., IEEE Journal of Solid-State Circuits, Volume 45, No. 4, April 2010, especially FIG. 7, which shows hard decision outputs used to determine whether a message should be biased before check node processing. Also see “Lowering LDPC Error Floors by Postprocessing” by Zhengya Zhang et al., for publication in the IEEE “GLOBECOM” 2008 proceedings.
The above-mentioned prior art in post-processing hardware only injects noise once (single-shot noise injection) in the decoding process. Furthermore, the prior art in post-processing hardware only allows changing magnitude of the noise. In the error floor region, the prior art LDPC decoders cannot successfully decode certain received codewords. Prior art post-processing helps the decoder decode some of these failures, but the real goal is to be able to decode all of the failures, and unfortunately, the techniques of the prior art can only resolve a limited type and number of errors. This consequently directly limits the amount of error floor improvement that as a practical matter is achievable by the prior art.
Thus, there is an unmet need for a better way of solving the error floor problems that have been critical issues in designing data storage devices and high-speed communications systems which require extremely low error rates.
There also is an unmet need for a post-processing system and method that can resolve more types of decoding errors than the prior art, thus improving the bit error rate in the error floor region.
There also is an unmet need for a post-processing system and method for implementing the described post-processing technique that are compatible with existing high throughput decoder architectures.
There also is an unmet need for improved post-processing capable of better improving the error floor for LDPC decoding for a substantially higher bit error rate (BER) than has been achievable by prior art post-processing.
One of Paris's most prestigious theatres was being protected by riot police and guard-dog patrols on Thursday after it became the latest target in a wave of Catholic protests across France against so-called "blasphemous" plays.
The head of the Théâtre du Rond-Point on the Champs-Elysées complained of death threats in the run-up to Thursday's premiere of the play Golgota Picnic by the Madrid-based Argentinian writer Rodrigo García. Two men reported to have links to fundamentalist Catholic groups were arrested at the weekend while attempting to disable the theatre's security system.
Several Catholic groups have called for peaceful demonstrations, prayer-vigils and the laying down of white flowers outside the building every night the play is shown, while the archbishop of Paris will lead protest prayers against the play at Notre Dame Cathedral.
The demonstrations over Golgota Picnic come after a rise in fundamentalist religious protest action against some of France's most high-profile theatres, including pelting the audience with eggs, letting off stinkbombs and the invasion of the stage of Paris's esteemed Théâtre de la Ville mid-performance by outraged Catholics carrying banners reading "Stop Christianophobia".
Earlier this year, young French fundamentalist Catholics staged an unprecedented attack on a gallery in Avignon, slashing photographs including Piss Christ by the New York artist Andres Serrano. More peaceful Catholic protests outside theatres, including young people kneeling with wooden crosses outside venues from Lille to Toulouse, have led the French culture pages to question the rise in rightwing and nationalist feeling among hardline Christian groups.
Paris remains sensitive about Christian demonstrations since the fire-bombing of a cinema showing Martin Scorsese's The Last Temptation of Christ in 1988. Political commentators have speculated that some traditionalist Catholics in the demonstrations had broken off from the Front National after the leadership was taken over by Jean-Marie Le Pen's daughter Marine.
Golgota Picnic, which takes place on a stage strewn with burger buns, has several religious references including readings and a crucifixion scene. But Paris theatre critics said it was absurd to call it anti-Catholic or blasphemous and questioned whether its religious critics had actually seen it.
Yet in a move that went further than the recent protests over Théâtre de la Ville's staging of On the Concept of the Face, Regarding the Son of God by the Italian Romeo Castellucci, Paris's archbishop, André Vingt-Trois, deemed Golgota Picnic, which he had not seen, "deliberately offensive" and said he would lead a protest prayer at Notre Dame.
Jean-Michel Ribes, head of the Théâtre de Rond-Point, appealed for calm. He said: "The Théâtre du Rond-Point isn't an anti-Christian, anti-Muslim or anti-Jewish place." But he said the role of artists was to fight against "suffocating dogma". Theatregoers have been advised to arrive an hour early to get through the airport-style security before reaching their seats.
Paris city hall's art supremos rushed to defend the theatre community against what it said was fundamentalists holding art to ransom, saying a "silent minority" of Catholics did not share the notion of making threats or stifling freedom of expression.
Civitas, a lobby group that says it aims to re-Christianise France, has called for a large, peaceful street demonstration "against Christianophobia" this weekend. |
//------------------------------------------------------------------------------
// Search for a cached blockset entry by URL.
//------------------------------------------------------------------------------
cached_blockset *
find_cached_blockset(const char *href)
{
cached_blockset *cached_blockset_ptr = cached_blockset_list;
while (cached_blockset_ptr != NULL) {
if (!_stricmp(href, cached_blockset_ptr->href))
return(cached_blockset_ptr);
cached_blockset_ptr = cached_blockset_ptr->next_cached_blockset_ptr;
}
return(NULL);
} |
A funeral has been held for a female Saudi student who was stabbed to death in England last week.
Nahid Almanea’s body was returned to Saudi Arabia on Saturday and a funeral was held immediately, Arab News reported.
British police have said the 31-year-old, who was stabbed 16 times, may have been targeted because of her Muslim dress.
She was wearing a navy blue full-length Abaya and a multi-coloured hijab headscarf and had been walking along an open path when she was attacked at 10.40am on Tuesday, UK media reported.
Police are also investigating whether her murder is linked to that of a man who was stabbed 102 times in a park in the same area in March. That victim had an evident mental impairment, the result of a head injury four years earlier, which made him more vulnerable, according to local media.
“We are conscious that the dress of the victim will have identified her as likely being a Muslim and this is one of the main lines of the investigation but again there is no firm evidence at this time that she was targeted because of her religion,” Detective Superintendent Tracy Hawkings was quoted as saying.
A 52-year-old suspect has been released and police have called for help in identifying a young man seen running from the area about the time of the attack.
The Saudi embassy said its ambassador to the UK was taking an active role in the case.
“Prince Mohammed bin Nawaf expressed in a telephone call on Tuesday to the brother of the deceased his sincerest condolences to her family, affirming the embassy's speed in taking all the procedures for the transfer of the body of the deceased to the kingdom. He also asserted that the case is in his personal attention,” the embassy said in a statement quoted by The Guardian.
The murder has sparked concern among the hundreds of other Saudi students in the UK, who are being offered support.
Nahid has been described as a virtuous woman, who was seeking to serve herself, her people, and homeland through her education and knowledge, Arab News said. |
A temperature-compensated ultradian clock ticks in Schizosaccharomyces pombe. An ultradian oscillation is described for Schizosaccharomyces pombe which meets the criteria for a cellular clock, i.e. timekeeping device. The rhythm can be induced by transfer from circadian conditions (stationary phase or very slow growth) to ultradian conditions (rapid growth). It can also be synchronized by ultradian temperature cycles of 6 degrees C difference. Released to constant temperature, the rhythm persists for 20 h without damping. The period of the free-running rhythm is temperature-compensated and in no experiment did period length fall outside the narrow range between 40 and 44 min. The parameter observed is the septum index, i.e. the percentage of cells occupying the last stage of the cell cycle in wild-type cells before final division. The results suggest control of the cell division processes by the ultradian clock. |
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# Print a list:
names = ['Michael', 'Bob', 'Tracy']
for name in names:
    print(name)
# Print the numbers 0 - 9
for x in range(10):
    print(x)
sum1 = 0
for x in range(101):
    sum1 = sum1 + x
print("1+2+3+...+100 = ", sum1)
sum2 = 0
n = 99
while n > 0:
    sum2 = sum2 + n
    n = n - 2
print("Sum of all odd numbers below 100:", sum2)
L = ['Bart', 'Lisa', 'Adam']
for name in L:
print('Hello,', name)
|
Conceptual foundations for designing a human resource management system in the field of physical culture and sports at the regional level The purpose of the study is to determine the set and content of the key methodological approaches that form the conceptual foundations for designing the process of improving the human resources of physical culture and sports. The article discusses the key provisions of the resource-based, regional, functional, and optimization approaches, and focuses on specifying the principles immanent to these approaches in relation to the problem of optimal management of the human resources of physical culture and sports in the regional context. The scientific novelty lies in determining the methodological contribution of these four basic approaches, and of their pairwise integration, to the design of a concept of human resource management for physical culture and sports at the regional level. As a result, the methodological contribution of the basic approaches to the developed concept was revealed, and the key importance of their pairwise integration as target, content-related, and methodological guidelines for management was determined.
The School of Engineering recognized three faculty and one staff member for their work inside and outside the classroom with 2018 Vision Awards. The awards were presented at the School’s fall faculty/staff meeting in August.
The 2018 Vision Award recipients include: Dr. Kenya Crosson for Community, Dr. Sid Gunasekaran for Innovation, Dr. Andrew Sarangan for Excellence, and Ms. Peg Mount for Engagement and Service.
Crosson’s service as a mentor for minority engineering students, her commitment to diversity, equity and inclusion through the Office of Multicultural Affairs, participation in Minority Engineering and Technology Enrichment Camp and other outreach activities has led to her recognition. Her nominators noted her service as the LTC Fellow for Faculty Development in Diversity and Inclusion and her overall passion for enhancing the climate for diversity and inclusion during her time at UD.
Gunasekaran’s willingness to teach outside of his discipline by serving as a faculty fellow at the Institute of Applied Creativity and Transformation (IACT) and as a KEEN Fellow are some of the reasons the committee selected him. His participation in the development of courses through the Hanley Sustainability Institute and development of innovative teaching spaces and equipment through the LTC add to his award. Gunasekaran’s use of highly innovative pedagogical techniques including portfolios and “passion projects” in his classes, which resulted in feature stories in KEEN magazine and the December 2017 issue of Aerospace America, were especially noticed by the committee.
Sarangan has received significant funding totaling more than $6 million during his time at UD, created the nanofab laboratory for making nanoscale electronic and photonics components, and mentored numerous graduate students. Nominations for Sarangan highlighted how he has authored and co-authored more than 100 journal articles and conference proceedings, as well as a textbook and book chapters. Among other notable activities, Sarangan received a Nanotechnology Undergraduate Education in Engineering award from the National Science Foundation for developing a nanotechnology curriculum for undergraduate students. He led the development of an electronic link between his cleanroom at the University with a classroom at Sinclair Community College to give demonstrations in real time.
Mount, who retired as a senior administrative assistant in the Department of Engineering Management, Systems and Technology, at the end of the summer, was recognized for her long-term service to the School of Engineering. While at UD, Mount served as an active Marianist Educational Associate (MEA), was the co-founder of the School of Engineering’s MEA Welcome Committee, and developed and lead the School of Engineering’s Breakaway retreat inspired by the Office of Mission and Rector’s annual mission-based staff retreat. Her willingness to help and support faculty, staff and students in the School of Engineering and across campus, her prayerful support of people in their time of need, and her participation in numerous service activities are to be commended. Mount’s commitment to serving as an advocate for our international students is particularly noteworthy. Her leadership in the School of Engineering and daily interactions with faculty, staff and students demonstrate and support the Marianist charism.
The ETHOS Center has two new videos. One of them, Nicaragua: 10-Day International Breakout, was created by UD's own Christian Cubacub, a computer engineering major, who traveled with ETHOS and filmed it personally.
#include "Game.h"
#include "Vertex.h"
#include "Input.h"
#include "Assets.h"
#include "TerrainMesh.h"
#include "WICTextureLoader.h"
#include <stdlib.h> // For seeding random and rand()
#include <time.h> // For grabbing time (to seed random)
// Needed for a helper function to read compiled shader files from the hard drive
#pragma comment(lib, "d3dcompiler.lib")
#include <d3dcompiler.h>
// For the DirectX Math library
using namespace DirectX;
// Helper macro for getting a float between min and max
#define RandomRange(min, max) (float)rand() / RAND_MAX * (max - min) + min
// --------------------------------------------------------
// Constructor
//
// DXCore (base class) constructor will set up underlying fields.
// DirectX itself, and our window, are not ready yet!
//
// hInstance - the application's OS-level handle (unique ID)
// --------------------------------------------------------
Game::Game(HINSTANCE hInstance)
	: DXCore(
		hInstance,        // The application's handle
		"DirectX Game",   // Text for the window's title bar
		1280,             // Width of the window's client area
		720,              // Height of the window's client area
		true),            // Show extra stats (fps) in title bar?
	camera(0),
	ambientColor(0, 0, 0), // Ambient is zero'd out since it's not physically-based
	lightCount(3),
	sky(0),
	drawLights(true)
{
#if defined(DEBUG) || defined(_DEBUG)
	// Do we want a console window? Probably only in debug mode
	CreateConsoleWindow(500, 120, 32, 120);
	printf("Console window created successfully. Feel free to printf() here.\n");
#endif
}
// --------------------------------------------------------
// Destructor - Clean up anything our game has created:
// - Release all DirectX objects created here
// - Delete any objects to prevent memory leaks
// --------------------------------------------------------
Game::~Game()
{
// Since we've created these objects within this class (Game),
// this is also where we should delete them!
for (auto& e : entities) delete e;
delete camera;
delete sky;
delete& Assets::GetInstance();
}
// --------------------------------------------------------
// Called once per program, after DirectX and the window
// are initialized but before the game loop.
// --------------------------------------------------------
void Game::Init()
{
// Seed random
srand((unsigned int)time(0));
// Loading scene stuff
LoadAssetsAndCreateEntities();
// Set up lights
lightCount = 3;
GenerateLights();
// Tell the input assembler stage of the pipeline what kind of
// geometric primitives (points, lines or triangles) we want to draw.
// Essentially: "What kind of shape should the GPU draw with our data?"
context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
// Create the camera
camera = new Camera(0, 30, -200, 5.0f, 5.0f, XM_PIDIV4, (float)width / height, 0.01f, 1000.0f, CameraProjectionType::Perspective);
}
// --------------------------------------------------------
// Loads all necessary assets and creates various entities
// --------------------------------------------------------
void Game::LoadAssetsAndCreateEntities()
{
// Initialize the asset manager and set up for on-demand loading
Assets& assets = Assets::GetInstance();
assets.Initialize("../../../Assets/", device, context, true, true);
// Set up sprite batch and sprite font
spriteBatch = std::make_unique<SpriteBatch>(context.Get());
// Create a sampler state for texture sampling options
Microsoft::WRL::ComPtr<ID3D11SamplerState> sampler;
D3D11_SAMPLER_DESC sampDesc = {};
sampDesc.AddressU = D3D11_TEXTURE_ADDRESS_WRAP; // What happens outside the 0-1 uv range?
sampDesc.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
sampDesc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
sampDesc.Filter = D3D11_FILTER_ANISOTROPIC; // How do we handle sampling "between" pixels?
sampDesc.MaxAnisotropy = 16;
sampDesc.MaxLOD = D3D11_FLOAT32_MAX;
device->CreateSamplerState(&sampDesc, sampler.GetAddressOf());
// Create the sky (loading custom shaders in-line below)
sky = new Sky(
GetFullPathTo_Wide(L"../../../Assets/Skies/Clouds Blue/right.png").c_str(),
GetFullPathTo_Wide(L"../../../Assets/Skies/Clouds Blue/left.png").c_str(),
GetFullPathTo_Wide(L"../../../Assets/Skies/Clouds Blue/up.png").c_str(),
GetFullPathTo_Wide(L"../../../Assets/Skies/Clouds Blue/down.png").c_str(),
GetFullPathTo_Wide(L"../../../Assets/Skies/Clouds Blue/front.png").c_str(),
GetFullPathTo_Wide(L"../../../Assets/Skies/Clouds Blue/back.png").c_str(),
assets.GetMesh("Models/cube"),
assets.GetVertexShader("SkyVS"),
assets.GetPixelShader("SkyPS"),
sampler,
device,
context);
// Load terrain mesh
// Note: You need to know the bit-depth of the heightmap, as well as
// the pixel dimensions, since RAW files do not contain this
// information! If you get it wrong, things won't look right!
std::shared_ptr<TerrainMesh> terrainMesh = std::make_shared<TerrainMesh>(
device,
GetFullPathTo("../../../Assets/Heightmaps/terrain_513x513.r16").c_str(),
513,
513,
TerrainBitDepth::BitDepth_16,
100.0f,
0.75f);
// Create terrain material
std::shared_ptr<SimpleVertexShader> vertexShader = assets.GetVertexShader("VertexShader");
std::shared_ptr<SimplePixelShader> terrainPS = assets.GetPixelShader("TerrainPS");
std::shared_ptr<Material> terrainMat = std::make_shared<Material>(terrainPS, vertexShader, XMFLOAT3(1, 1, 1), XMFLOAT2(20, 20));
terrainMat->AddSampler("BasicSampler", sampler);
terrainMat->AddTextureSRV("BlendMap", assets.GetTexture("Textures/terrain_splatmap"));
terrainMat->AddTextureSRV("Albedo0", assets.GetTexture("Textures/PBR/snow_albedo"));
terrainMat->AddTextureSRV("NormalMap0", assets.GetTexture("Textures/PBR/snow_normals"));
terrainMat->AddTextureSRV("RoughnessMap0", assets.GetTexture("Textures/PBR/snow_roughness"));
terrainMat->AddTextureSRV("MetalMap0", assets.GetTexture("Textures/PBR/snow_metal"));
terrainMat->AddTextureSRV("Albedo1", assets.GetTexture("Textures/PBR/grass_albedo"));
terrainMat->AddTextureSRV("NormalMap1", assets.GetTexture("Textures/PBR/grass_normals"));
terrainMat->AddTextureSRV("RoughnessMap1", assets.GetTexture("Textures/PBR/grass_roughness"));
terrainMat->AddTextureSRV("MetalMap1", assets.GetTexture("Textures/PBR/grass_metal"));
terrainMat->AddTextureSRV("Albedo2", assets.GetTexture("Textures/PBR/rock_albedo"));
terrainMat->AddTextureSRV("NormalMap2", assets.GetTexture("Textures/PBR/rock_normals"));
terrainMat->AddTextureSRV("RoughnessMap2", assets.GetTexture("Textures/PBR/rock_roughness"));
terrainMat->AddTextureSRV("MetalMap2", assets.GetTexture("Textures/PBR/rock_metal"));
GameEntity* terrain = new GameEntity(terrainMesh, terrainMat);
entities.push_back(terrain);
}
void Game::GenerateLights()
{
// Reset
lights.clear();
// Setup directional lights
Light dir1 = {};
dir1.Type = LIGHT_TYPE_DIRECTIONAL;
dir1.Direction = XMFLOAT3(1, -1, 1);
dir1.Color = XMFLOAT3(1, 1, 1);
dir1.Intensity = 1.0f;
Light dir2 = {};
dir2.Type = LIGHT_TYPE_DIRECTIONAL;
dir2.Direction = XMFLOAT3(-1, -0.25f, 0);
dir2.Color = XMFLOAT3(1, 1, 1);
dir2.Intensity = 0.8f;
Light dir3 = {};
dir3.Type = LIGHT_TYPE_DIRECTIONAL;
dir3.Direction = XMFLOAT3(0, -0.1f, 1);
dir3.Color = XMFLOAT3(1, 1, 1);
dir3.Intensity = 0.5f;
// Add light to the list
lights.push_back(dir1);
lights.push_back(dir2);
lights.push_back(dir3);
// Create the rest of the lights
while (lights.size() < MAX_LIGHTS)
{
Light point = {};
point.Type = LIGHT_TYPE_POINT;
point.Position = XMFLOAT3(RandomRange(-200.0f, 200.0f), RandomRange(0.0f, 20.0f), RandomRange(-200.0f, 200.0f));
point.Color = XMFLOAT3(RandomRange(0, 1), RandomRange(0, 1), RandomRange(0, 1));
point.Range = RandomRange(50.0f, 100.0f);
point.Intensity = RandomRange(0.1f, 3.0f);
// Add to the list
lights.push_back(point);
}
// Make sure we're exactly MAX_LIGHTS big
lights.resize(MAX_LIGHTS);
}
// --------------------------------------------------------
// Handle resizing DirectX "stuff" to match the new window size.
// For instance, updating our projection matrix's aspect ratio.
// --------------------------------------------------------
void Game::OnResize()
{
// Handle base-level DX resize stuff
DXCore::OnResize();
// Update the camera's projection to match the new aspect ratio
if (camera) camera->UpdateProjectionMatrix((float)width / height);
}
// --------------------------------------------------------
// Update your game here - user input, move objects, AI, etc.
// --------------------------------------------------------
void Game::Update(float deltaTime, float totalTime)
{
// In the event we need it below
Assets& assets = Assets::GetInstance();
// Example input checking: Quit if the escape key is pressed
Input& input = Input::GetInstance();
if (input.KeyDown(VK_ESCAPE))
Quit();
// Update the camera this frame
camera->Update(deltaTime);
// Handle light count changes, clamped appropriately
if (input.KeyPress(VK_TAB)) GenerateLights();
if (input.KeyPress('R')) lightCount = 3;
if (input.KeyPress('L')) drawLights = !drawLights; // Toggle once per press, not every frame the key is held
if (input.KeyPress(VK_UP)) lightCount++;
if (input.KeyPress(VK_DOWN)) lightCount--;
lightCount = max(1, min(MAX_LIGHTS, lightCount));
// Move lights
for (int i = 0; i < lightCount; i++)
{
// Only adjust point lights
if (lights[i].Type == LIGHT_TYPE_POINT)
{
// Adjust either X or Z
float lightAdjust = sin(totalTime + i) * 50;
if (i % 2 == 0) lights[i].Position.x = lightAdjust;
else lights[i].Position.z = lightAdjust;
}
}
}
// --------------------------------------------------------
// Clear the screen, redraw everything, present to the user
// --------------------------------------------------------
void Game::Draw(float deltaTime, float totalTime)
{
// Background color (Black in this case) for clearing
const float color[4] = { 0, 0, 0, 0 };
// Clear the render target and depth buffer (erases what's on the screen)
// - Do this ONCE PER FRAME
// - At the beginning of Draw (before drawing *anything*)
context->ClearRenderTargetView(backBufferRTV.Get(), color);
context->ClearDepthStencilView(depthStencilView.Get(), D3D11_CLEAR_DEPTH | D3D11_CLEAR_STENCIL, 1.0f, 0);
// Loop through the game entities in the current scene and draw
for (auto& e : entities)
{
std::shared_ptr<SimplePixelShader> ps = e->GetMaterial()->GetPixelShader();
ps->SetFloat3("ambientColor", ambientColor);
ps->SetData("lights", &lights[0], sizeof(Light) * (int)lights.size());
ps->SetInt("lightCount", lightCount);
// Draw one entity
e->Draw(context, camera);
}
// Draw the sky after all regular entities
sky->Draw(camera);
// Draw the light sources
if (drawLights)
DrawLightSources();
// Draw the UI on top of everything
DrawUI();
// Present the back buffer to the user
// - Puts the final frame we're drawing into the window so the user can see it
// - Do this exactly ONCE PER FRAME (always at the very end of the frame)
swapChain->Present(0, 0);
// Due to the usage of a more sophisticated swap chain,
// the render target must be re-bound after every call to Present()
context->OMSetRenderTargets(1, backBufferRTV.GetAddressOf(), depthStencilView.Get());
}
void Game::DrawLightSources()
{
Assets& assets = Assets::GetInstance();
std::shared_ptr<Mesh> lightMesh = assets.GetMesh("Models/sphere");
std::shared_ptr<SimpleVertexShader> vs = assets.GetVertexShader("VertexShader");
std::shared_ptr<SimplePixelShader> ps = assets.GetPixelShader("SolidColorPS");
// Turn on the light mesh
Microsoft::WRL::ComPtr<ID3D11Buffer> vb = lightMesh->GetVertexBuffer();
Microsoft::WRL::ComPtr<ID3D11Buffer> ib = lightMesh->GetIndexBuffer();
unsigned int indexCount = lightMesh->GetIndexCount();
// Turn on these shaders
vs->SetShader();
ps->SetShader();
// Set up vertex shader
vs->SetMatrix4x4("view", camera->GetView());
vs->SetMatrix4x4("projection", camera->GetProjection());
for (int i = 0; i < lightCount; i++)
{
Light light = lights[i];
// Only drawing point lights here
if (light.Type != LIGHT_TYPE_POINT)
continue;
// Set buffers in the input assembler
UINT stride = sizeof(Vertex);
UINT offset = 0;
context->IASetVertexBuffers(0, 1, vb.GetAddressOf(), &stride, &offset);
context->IASetIndexBuffer(ib.Get(), DXGI_FORMAT_R32_UINT, 0);
// Calc quick scale based on range
float scale = light.Range / 200.0f;
XMMATRIX scaleMat = XMMatrixScaling(scale, scale, scale);
XMMATRIX transMat = XMMatrixTranslation(light.Position.x, light.Position.y, light.Position.z);
// Make the transform for this light
XMFLOAT4X4 world;
XMStoreFloat4x4(&world, scaleMat * transMat);
// Set up the world matrix for this light
vs->SetMatrix4x4("world", world);
// Set up the pixel shader data
XMFLOAT3 finalColor = light.Color;
finalColor.x *= light.Intensity;
finalColor.y *= light.Intensity;
finalColor.z *= light.Intensity;
ps->SetFloat3("Color", finalColor);
// Copy data
vs->CopyAllBufferData();
ps->CopyAllBufferData();
// Draw
context->DrawIndexed(indexCount, 0, 0);
}
}
void Game::DrawUI()
{
// Grab the font from the asset manager
Assets& assets = Assets::GetInstance();
std::shared_ptr<SpriteFont> fontArial12 = assets.GetSpriteFont("Fonts/Arial12");
spriteBatch->Begin();
// Basic controls
float h = 10.0f;
fontArial12->DrawString(spriteBatch.get(), L"Controls:", XMVectorSet(10, h, 0, 0));
fontArial12->DrawString(spriteBatch.get(), L" (WASD, X, Space) Move camera", XMVectorSet(10, h + 20, 0, 0));
fontArial12->DrawString(spriteBatch.get(), L" (Left Click & Drag) Rotate camera", XMVectorSet(10, h + 40, 0, 0));
fontArial12->DrawString(spriteBatch.get(), L" (Arrow Up/Down) Increment / decrement lights", XMVectorSet(10, h + 60, 0, 0));
fontArial12->DrawString(spriteBatch.get(), L" (TAB) Randomize lights", XMVectorSet(10, h + 80, 0, 0));
fontArial12->DrawString(spriteBatch.get(), L" (R) Reset light count", XMVectorSet(10, h + 100, 0, 0));
fontArial12->DrawString(spriteBatch.get(), L" (L) Draw lights", XMVectorSet(10, h + 120, 0, 0));
spriteBatch->End();
// Reset render states, since sprite batch changes these!
context->OMSetBlendState(0, 0, 0xFFFFFFFF);
context->OMSetDepthStencilState(0, 0);
}
|
package hbvi.util;
/**
* Deprecated; no longer used.
*/
public class CommonTools {
/*
@Autowired
JdbcTemplate jdbcTemplate;
public BigDecimal tradingFee(String ccy,String buy_sell){
String tradingFeeBuy = "select total_buy from tbl_trading_fee where market = ?";
String tradingFeeSell = "select total_sell from tbl_trading_fee where market = ?";
String sql = null;
if("LONG".equals(buy_sell.toUpperCase())){
sql = tradingFeeBuy;
}else {
sql = tradingFeeSell;
}
BigDecimal tradingFee = jdbcTemplate.queryForObject(sql,BigDecimal.class,ccy.toUpperCase());
return tradingFee;
}*/
}
|
package inc.flide.vi8.keyboardActionListners;
import android.os.Handler;
import android.view.HapticFeedbackConstants;
import android.view.View;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import inc.flide.vi8.MainInputMethodService;
import inc.flide.vi8.structures.Constants;
import inc.flide.vi8.structures.FingerPosition;
import inc.flide.vi8.keyboardHelpers.KeyboardAction;
public class MainKeyboardActionListener {
private MainInputMethodService mainInputMethodService;
private View mainKeyboardView;
private Map<List<FingerPosition>, KeyboardAction> keyboardActionMap;
private List<FingerPosition> movementSequence;
private FingerPosition currentFingerPosition;
private boolean isLongPressCallbackSet;
public MainKeyboardActionListener(MainInputMethodService inputMethodService,
View view) {
this.mainInputMethodService = inputMethodService;
this.mainKeyboardView = view;
keyboardActionMap = mainInputMethodService.buildKeyboardActionMap();
movementSequence = new ArrayList<>();
currentFingerPosition = FingerPosition.NO_TOUCH;
}
public void movementStarted(FingerPosition fingerPosition) {
currentFingerPosition = fingerPosition;
movementSequence.clear();
movementSequence.add(currentFingerPosition);
initiateLongPressDetection();
}
public void movementContinues(FingerPosition fingerPosition) {
FingerPosition lastKnownFingerPosition = currentFingerPosition;
currentFingerPosition = fingerPosition;
boolean isFingerPositionChanged = (lastKnownFingerPosition != currentFingerPosition);
if(isFingerPositionChanged){
interruptLongPress();
movementSequence.add(currentFingerPosition);
if(currentFingerPosition == FingerPosition.INSIDE_CIRCLE
&& keyboardActionMap.get(movementSequence)!=null){
processMovementSequence(movementSequence);
movementSequence.clear();
movementSequence.add(currentFingerPosition);
}
}else if(!isLongPressCallbackSet){
initiateLongPressDetection();
}
}
public void movementEnds() {
interruptLongPress();
currentFingerPosition = FingerPosition.NO_TOUCH;
movementSequence.add(currentFingerPosition);
processMovementSequence(movementSequence);
movementSequence.clear();
}
private final Handler longPressHandler = new Handler();
private Runnable longPressRunnable = new Runnable() {
@Override
public void run() {
List<FingerPosition> movementSequenceAugmented = new ArrayList<>(movementSequence);
movementSequenceAugmented.add(FingerPosition.LONG_PRESS);
processMovementSequence(movementSequenceAugmented);
longPressHandler.postDelayed(this, Constants.DELAY_MILLIS_LONG_PRESS_CONTINUATION);
}
};
private void initiateLongPressDetection(){
isLongPressCallbackSet = true;
longPressHandler.postDelayed(longPressRunnable, Constants.DELAY_MILLIS_LONG_PRESS_INITIATION);
}
private void interruptLongPress(){
longPressHandler.removeCallbacks(longPressRunnable);
List<FingerPosition> movementSequenceAugmented = new ArrayList<>(movementSequence);
movementSequenceAugmented.add(FingerPosition.LONG_PRESS_END);
processMovementSequence(movementSequenceAugmented);
isLongPressCallbackSet = false;
}
private void processMovementSequence(List<FingerPosition> movementSequence) {
KeyboardAction keyboardAction = keyboardActionMap.get(movementSequence);
boolean isMovementValid = true;
if(keyboardAction == null){
movementSequence.clear();
return;
}
switch (keyboardAction.getKeyboardActionType()){
case INPUT_TEXT:
mainInputMethodService.handleInputText(keyboardAction);
processPredictiveTextCandidates();
break;
case INPUT_KEY:
mainInputMethodService.handleInputKey(keyboardAction);
break;
case INPUT_SPECIAL:
mainInputMethodService.handleSpecialInput(keyboardAction);
break;
default:
isMovementValid = false;
}
if(isMovementValid){
mainKeyboardView.performHapticFeedback(HapticFeedbackConstants.KEYBOARD_TAP, HapticFeedbackConstants.FLAG_IGNORE_GLOBAL_SETTING);
}
}
private void processPredictiveTextCandidates() {
// Intentionally empty: predictive-text candidates are not implemented yet.
}
public boolean areCharactersCapitalized() {
return mainInputMethodService.areCharactersCapitalized();
}
public void setModifierFlags(int modifierFlags) {
this.mainInputMethodService.setModifierFlags(modifierFlags);
}
public void sendKey(int keycode, int flags) {
this.mainInputMethodService.sendKey(keycode, flags);
}
public void handleSpecialInput(KeyboardAction keyboardAction) {
this.mainInputMethodService.handleSpecialInput(keyboardAction);
}
}
|
Diagnosis and preoperative evaluation of pancreatic cancer, with implications for management. At present, diagnosis of pancreatic cancer can be achieved only in symptomatic patients. Better diagnostic and staging procedures, which include angiography, laparoscopy, and peritoneal cytology, allow a more accurate preoperative assessment of resectability and may spare the patient an unnecessary operation or provide him or her with the option of transfer to a specialized unit for a definitive operation with lower morbidity and mortality rates. |
Enhanced Thermoelectric Performance in Hybrid Nanoparticle--Single-Molecule Junctions

It was recently suggested that molecular junctions would be excellent elements for efficient and high-power thermoelectric energy conversion devices. However, experimental measurements of thermoelectric conversion in molecular junctions have indicated rather poor efficiency, raising the question of whether it is indeed possible to design a setup for molecular junctions that will exhibit enhanced thermoelectric performance. Here we suggest that hybrid single-molecule nanoparticle junctions can serve as efficient thermoelectric converters. The introduction of a semiconducting nanoparticle brings new tuning capabilities, which are absent in conventional metal-molecule-metal junctions. Using a generic model for the molecule and nanoparticle with realistic parameters, we demonstrate that the thermopower can be of the order of hundreds of microvolts per kelvin, and that the thermoelectric figure of merit can reach values close to one, an improvement of four orders of magnitude over existing measurements. This favorable performance persists over a wide range of experimentally relevant parameters, and is robust against disorder (in the form of surface-attached molecules) and against electron decoherence at the nanoparticle-molecule interface.

INTRODUCTION

In recent years, the study of single-molecule junctions, the ultimate limit of electronic nanotechnology, has progressed well beyond their role as functional elements in electronic devices; today additional functionalities, from opto-electronics and spintronics, through phononics, to thermoelectricity, are under investigation. The potential applicability of single-molecule junctions in thermoelectricity, the conversion of heat into electric power, is of particular interest, since thermoelectricity may turn out to be an important element in addressing global energy issues.
The clear advantages of thermoelectric energy conversion, such as the "green" nature of the conversion process, its applicability to waste-heat harvesting, and the easy device maintenance due to the absence of moving parts, would lead us to think that thermoelectric devices should already be a substantial part of the energy market, yet this is not the case. The reason lies in the simple fact that current thermoelectric devices are not efficient enough, making competition with traditional (large-scale) energy conversion systems virtually impossible. The thermopower S measures the voltage generated per unit temperature difference (in linear response). Values of S ∼ 10²−10³ µV/K are typical for standard semiconductor-based thermoelectrics, yet molecular junctions exhibit small values of S, typically S ∼ 5−50 µV/K. The figure of merit (FOM) ZT is defined as ZT = GS²T/κ, where G is the conductance of the junction, κ = κ_e + κ_phn is the total thermal conductance, which includes both electronic (κ_e) and phononic (κ_phn) contributions, and T is the temperature. ZT is directly related to the device efficiency, and ZT → ∞ corresponds to the Carnot efficiency (thus, theoretically, there is no upper bound on ZT). It is commonly held that, from the efficiency perspective, a ZT of ∼ 4 is required for thermoelectric conversion to be competitive. However, typical ZT values obtained from measurements in molecular junctions are ZT ∼ 10⁻³−10⁻⁵. Thus, there seems to be a discrepancy between theoretical and computational studies of thermoelectric conversion in molecular junctions, which in many cases predict S ∼ 10²−10³ µV/K and a FOM of ZT ≳ 1, and the experimental evaluation of the two measures. The origin of this discrepancy seems to stem from two factors. First, one has to be careful to take the phonon contribution to the thermal conductance into account; a typical and realistic value for the phonon thermal conductance in molecular junctions is κ_phn = 10−100 pW/K [13,19].
Second, many calculations show enhanced thermoelectric performance based on tuning of the molecular orbitals or, equivalently, the Fermi level of the electrodes. However, in reality this tuning is very difficult to achieve [24], and the only "tuning parameter" available for molecular junctions is the choice of the molecular moiety. It is thus a central challenge to design molecular junctions that are both tunable in some way and show favorable thermoelectric performance with realistic parameters. In this paper, we present a new setup for molecular junctions that has the potential to achieve this goal. Our setup is based on a hybrid single-molecule semiconducting-nanoparticle (SC-NP) structure in which a molecule is connected on one side to a metallic electrode (as in conventional molecular junctions) and on the other side to the second electrode through a SC-NP placed on the electrode, as is schematically depicted in Fig. 1(a). In this setup, the molecular junction can be tuned via the electronic properties of the nanoparticle, namely, by choosing a suitable material and by controlling the size and shape of the nanoparticle. Fabrication of hybrid NP-single-molecule junctions has already been demonstrated with Au NPs, making our suggested system experimentally feasible. Using a generic model for the SC-NP, we show that this junction can reach values of S ∼ 100−400 µV/K and ZT ∼ 1 for a broad range of realistic parameters. The origin of the enhanced thermoelectric performance can be traced to the interplay between the local transport properties of the molecule and the gapped density of states (DOS) of the nanoparticle. Thermoelectricity typically requires a large particle-hole asymmetry (reflected in the fact that, at low temperatures, the thermopower is proportional to the derivative of the transmission function).
This asymmetry is enhanced in this junction due to the presence of the semiconducting gap in the SC-NP, an effect that is rectified due to the finite size of the nanoparticle. We show that the optimal parameters for thermoelectric conversion depend on the geometry of the nanoparticle and the contact geometry between the nanoparticle and the molecule. Further, we demonstrate that the favorable thermoelectric performance is robust against disorder (in the form of surface dangling molecules) and dephasing, and finally, we discuss the temperature dependence of the FOM, which exhibits a maximum at T ∼ 450 K.

MODEL AND CALCULATION

The transport and thermoelectric properties of the hybrid molecule-nanoparticle junction are calculated using the non-equilibrium Green's function approach, which has become the standard tool for such calculations. The junction (graphically depicted in Fig. 1(a)) is described by the Hamiltonian H = H_B + H_NP + H_M + H_T + H_{B-NP} + H_{NP-M} + H_{M-T}, which includes the bottom electrode (B), the nanoparticle (NP), the molecule (M), the top electrode (T), and the couplings between the bottom electrode and the nanoparticle (B-NP), between the NP and the molecule (NP-M), and between the molecule and the top electrode (M-T). Since electron spin does not play a role in the mechanisms we describe here for the enhancement of thermoelectricity, we treat spinless electrons. To describe the SC-NP, we use a generic tight-binding model for a semiconductor of the form H_NP = Σ_r [ε_c + (−1)^{P_r} Δ/2] c†_r c_r − t Σ_{⟨r,r′⟩} (c†_r c_{r′} + h.c.), where the summation is taken over the atom positions r (assumed to form a cubic lattice); c†_r (c_r) creates (annihilates) an electron at position r; t is the nearest-neighbor hopping matrix element; ε_c is the position of the band-gap center; and Δ is the semiconductor band gap. The function P_r = x + y + z gives a modulation of the on-site energy ε_c ± Δ/2 between neighboring atoms, generating a gapped band in the thermodynamic limit.
We note that although ab initio methods for calculating the properties of NPs are emerging, they are not yet developed for transport calculations. Furthermore, since we aim to present general properties of hybrid junctions, our tight-binding calculation is generic and not limited to a specific system. The molecule is described as a single orbital, which can correspond to either the HOMO or the LUMO, depending on the position of the orbital energy. The molecular Hamiltonian is simply H_M = ε₀ d†d, where d† (d) creates (annihilates) an electron in the molecular orbital. In addition to the calculations described below, we performed calculations that also include a Coulomb interaction term (for the molecule) and spin-full fermions (by using the equations-of-motion method) but found no quantitative change in the main results, and therefore we keep the description here as simple as possible and treat electrons as non-interacting particles. The coupling between the molecule and the nanoparticle is described by the Hamiltonian H_{NP-M} = t₁ (c†_{r_M} d + d† c_{r_M}), where r_M is the position of the atom in the nanoparticle that is in contact with the molecule, and t₁ is the hopping matrix element related to the overlap integral between the molecular orbital and the atomic level at r_M. The metallic top and bottom electrodes are assumed to be non-interacting metals with the Hamiltonian H_X = Σ_k ε_k c†_{k,X} c_{k,X}, X = T, B. The coupling between the molecule and the top electrode (representing, e.g., the tip of a scanning tunneling microscope) is represented by a standard tunneling Hamiltonian term, and similarly for the contact between the bottom electrode and the nanoparticle.
To proceed with the transport calculations, the metallic electrodes are treated within a wide-band approximation (valid since our model is for non-interacting electrons), in which the top electrode is described by the (retarded and advanced) self-energy Σ^{r,a}_T = ∓i (Γ_T/2) |M⟩⟨M|, where |M⟩ is the single-particle molecular orbital and Γ_T is the electrode-induced level broadening. Similarly, the bottom electrode is described by the self-energy Σ^{r,a}_B = ∓i (Γ_B/2) Σ_{r∈B} |r⟩⟨r|, where Γ_B is the broadening due to the bottom electrode and the summation runs over all the atom positions in the nanoparticle that are in contact with the bottom electrode. We note that the model described above (and specifically the wide-band approximation) implies electron-phonon-induced thermalization in the electrodes, but no electron-phonon interaction in the junction at this stage. All the relevant energies (the Fermi level of the electrodes, the NP valence and conduction bands, and the molecular orbitals) are also shown schematically in Fig. 1(a). Once the Hamiltonian and the self-energies are defined, the calculation proceeds via the non-equilibrium Green's function approach, which reduces to the Landauer formalism for non-interacting electrons. The Green's functions are determined via G^{r,a} = [E − H − Σ^{r,a}]^{−1}, and the transport coefficients, namely the conductance G, the thermopower S, and the electronic thermal conductance κ_e, are determined within the Landauer formalism as G = e²L₀, S = L₁/(eT L₀), and κ_e = (L₂ − L₁²/L₀)/T, where T is the temperature (room temperature, unless otherwise stated) and L_n = −(1/h) ∫ dE T(E) (E − µ)ⁿ ∂f/∂E are the Landauer integrals, with h Planck's constant, µ the chemical potential of the electrodes, and f(E) the Fermi-Dirac distribution. The thermoelectric FOM is given by ZT = GS²T/κ, with κ = κ_e + κ_phn.

RESULTS

We start by describing the thermopower S and the FOM ZT for a single molecule placed on a square-pyramid-shaped nanoparticle (the bottom electrode is in contact with the base plane), as shown in Fig. 1(a).
Since we present here a generic model for nanoparticles, we do not aim at obtaining quantitative results describing a specific system. We are, nonetheless, aware that it is essential to choose numerical parameters that are realistic and readily describe experimental systems. We thus choose the semiconducting band center at ε_c = −4.8 eV and Δ = 0.8 eV, corresponding to PbSe nanoparticles, and µ = −5.1 eV as the electrode chemical potential (corresponding to Au electrodes). The other numerical parameters were t = 1.6 eV, Γ_B = 0.05 eV, and Γ_T = 0.01 eV (describing weakly coupled molecules). The pyramid base contains 10 × 10 atoms, and our results depend only weakly (and quantitatively) on the size of the nanoparticle. In Fig. 1(b), we show the thermopower S as a function of the molecular orbital energy ε₀ and the molecule-nanoparticle coupling t₁. In experiments, ε₀ can be tuned by the choice of molecule and by the choice of nanoparticle composition and size. The coupling t₁ can additionally be tuned by stretching or squeezing the molecular junction with the top electrode. As may be seen, S can reach values as high as ±200 µV/K and can change sign according to the position of the molecular level with respect to the semiconducting band edge. The appearance of a thermopower maximum upon a change in ε₀ is not surprising, since one would expect that tuning ε₀ would lead to a near-resonance in transmission (which implies a maximum thermopower). However, the appearance of a maximum upon a change of t₁ is surprising; in a typical single-molecule junction, stretching the junction will only lead to a change in the molecule-electrode coupling and will not result in a thermopower maximum. Here, since the molecule and the NP hybridize, changing t₁ (experimentally, by pulling the junction) is similar to changing the molecular orbital; i.e., pulling the junction plays the role of gating.
Since gating a molecular junction is a challenging task [24], this result confers an additional advantage on hybrid junctions. To calculate ZT, we add to the electronic thermal conductance a phononic term κ_phn = 50 pW/K, a realistic value for molecular junctions [19]. In Fig. 2, we plot the central result of this paper, the ZT of the hybrid single-molecule-NP junction, as a function of ε₀ and t₁. The FOM reaches ZT > 0.8, i.e., more than three orders of magnitude larger than the values measured in regular molecular junctions. The realistic parameters considered here and the wide range of parameters in which ZT is large imply that this regime should be accessible to experiments. In the inset of Fig. 2, we show ZT as a function of ε₀ for a constant t₁ = 0.1 eV for three different configurations. The first (solid blue line) is the same as the setup in the main figure. In the second setup (dotted orange line), the molecule is in contact with a pyramid nanoparticle, but the positions of the ±Δ terms in the nanoparticle Hamiltonian of Eq. (2) are switched, modeling a change in the atom species at the apex of the nanoparticle (for instance, either Pb or Se in PbSe nanoparticles). The third setup (dashed green line) describes a molecule in contact with a cube-shaped (as opposed to a pyramid) nanoparticle. As may be seen, the contact configuration and the shape of the nanoparticle can have a strong effect on ZT; although values of ZT > 0.8 can be achieved in these configurations, the optimal parameters vary between the different setups. In experiments, a reasonable situation would be for additional molecules to attach to the surface of the nanoparticle. To model this scenario, we add to the Hamiltonian additional molecules with the same orbital level ε₀ and coupling t₁ but with a random coupling point on the surface of the nanoparticle (top inset in Fig. 3). The transport properties are then averaged over 10⁴ realizations of the random positions. In Fig.
3, ZT is plotted as a function of the surface coverage (in percent) of attached molecules. Surprisingly, we find a slight increase of ZT for a small number (∼ 8%) of attached molecules, followed by a decrease of ZT as the number of attached molecules increases. To elucidate the origin of this result, the lower inset shows the average conductance (blue circles) and thermopower (orange triangles) as a function of surface coverage (in percent). We find that the average conductance actually increases with the number of surface molecules, but the thermopower decreases, eventually leading to a decrease of ZT. The origin of this effect is that surface coverage induces two competing processes. On the one hand, the addition of molecules bound to the surface adds conduction channels (i.e., local resonances in the transmission function) and therefore increases the conductance. On the other hand, the presence of disorder tends to flatten the resonances on average; as a result the thermopower, which is proportional to the energy derivative of the transmission function and hence becomes smaller as the transmission resonance becomes wider, is reduced. This competition is reflected in the opposite trends of G and S in the inset of Fig. 3. Since ZT is a product of the conductance, which increases, and the thermopower, which decreases, it exhibits non-monotonic behavior. At room temperature it is possible that the electron motion across the junction is not coherent and that the electrons dephase as they cross the molecule-NP interface. This may occur due to the interaction of electrons with the (soft) phonons of the NP, or due to vibrations of the position of the molecule with respect to the electrodes (but not due to the interaction of electrons with the molecular vibrations, which are typically high-energy modes; such interactions were discussed in, e.g., ). If this is the case, the formulation presented above, which describes coherent transport, is invalid.
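The competition just described — disorder adds resonances (raising G) while flattening the averaged transmission (suppressing S) — can be illustrated with a toy model that is not the paper's actual calculation: each surface molecule is replaced by a Lorentzian transmission resonance at a random energy, and a Mott-type thermopower proxy, S ∝ −d ln T/dE at the probe energy, is compared between the clean and disorder-averaged cases. All numerical values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
E = np.linspace(-1.0, 1.0, 4001)
mu_idx = np.argmin(np.abs(E - 0.1))   # probe energy, slightly off resonance
GAMMA = 0.02                          # resonance width (illustrative)

def lorentzian(center):
    """Transmission resonance contributed by one molecule."""
    return GAMMA ** 2 / ((E - center) ** 2 + GAMMA ** 2)

def mott_s(trans):
    """Mott-type thermopower proxy, up to constants: -d ln T/dE at mu."""
    return -np.gradient(trans, E)[mu_idx] / trans[mu_idx]

# Clean junction: a single sharp resonance at E = 0
t_clean = lorentzian(0.0)
# Disordered junction: average over resonances at random energies (sigma = 0.1)
t_dis = np.mean([lorentzian(rng.normal(0.0, 0.1)) for _ in range(2000)], axis=0)

g_clean, g_dis = t_clean[mu_idx], t_dis[mu_idx]   # conductance proxy goes up
s_clean, s_dis = mott_s(t_clean), mott_s(t_dis)   # thermopower proxy goes down
```

Averaging broadens the effective resonance, so T(μ) grows while its logarithmic slope shrinks — the same opposite trends of G and S shown in the inset of Fig. 3.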
To account for decoherence, we note that if the electron loses its phase at the molecule-NP interface, the NP can in fact be considered a semiconducting electrode. This results in an effective SC-molecule-metal junction, which can again be described using the Landauer formula with an appropriate choice of self-energies. To model the SC electrode, we recall that the imaginary part of the self-energy describes the electrode DOS. In analogy with the wide-band approximation for the metallic electrode, we model the semiconducting electrode DOS as constant outside the gap and zero inside the gap. The imaginary part of the semiconducting self-energy can thus be written with the use of Heaviside step functions, and the real part of the self-energy is determined via the Kramers-Kronig relation. In the numerical calculations below, the step-function discontinuity is broadened by 0.01 eV, a value that can arise in realistic systems from lattice impurities and dislocations or from thermal fluctuations. In Fig. 4 we plot the (a) conductance, (b) thermopower, (c) thermal conductance and (d) ZT of the hybrid SC-NP-molecule-metal junction (solid lines) as a function of ε0, with ΓB = 0.05 eV, ΓT = 0.01 eV, εc = −4.8 eV, and Δ = 0.8 eV. Looking at ZT, we find that the dephasing process in the NP does not substantially reduce the maximal ZT from the coherent case (the maximal value of Fig. 2). For comparison, the dashed lines of Fig. 4(a-d) show the corresponding results for a standard M-M-M junction. Up to now, we have considered a molecule which is weakly coupled to the electrodes. However, depending on the chemical moiety, the coupling between the molecule and the electrodes can be much larger (see, e.g., ). It is therefore of interest to check whether the increase in ZT of hybrid junctions over M-M-M junctions is maintained for strongly coupled molecules. In Fig. 4(e-h), the same quantities as in Fig. 4(a-d) are plotted for a strongly coupled molecule, with ΓB = 0.5 eV, ΓT = 0.1 eV.
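The semiconducting self-energy construction described above can be sketched numerically as follows. The band center εc, the gap parameter Δ and the 0.01 eV step broadening are taken from the text; the coupling strength Γ and the energy grid are assumed values for illustration only, and the Kramers-Kronig integral Re Σ(E) = (1/π) P∫ dE′ Im Σ(E′)/(E′ − E) is evaluated as a crude discrete principal-value sum:

```python
import numpy as np

GAMMA = 0.1            # electrode coupling strength [eV] -- an assumed value
EC, DELTA = -4.8, 0.8  # band center and gap parameter from the text [eV]
ETA = 0.01             # step-edge broadening from the text [eV]

def smooth_step(x, eta):
    """Heaviside step function broadened over an energy scale eta."""
    return 0.5 * (1.0 + np.tanh(x / eta))

def sc_self_energy(energies):
    """Imaginary part: -GAMMA/2 outside the gap, ~0 inside (wide-band-like);
    real part from a discrete Kramers-Kronig (principal-value) transform."""
    below = smooth_step(EC - DELTA - energies, ETA)    # E below the lower edge
    above = smooth_step(energies - (EC + DELTA), ETA)  # E above the upper edge
    im_sigma = -0.5 * GAMMA * (below + above)
    de = energies[1] - energies[0]
    re_sigma = np.empty_like(im_sigma)
    for i, e in enumerate(energies):
        denom = energies - e
        denom[i] = 1.0            # dummy value; the singular term is dropped
        term = im_sigma / denom
        term[i] = 0.0             # principal value: skip E' = E
        re_sigma[i] = de * np.sum(term) / np.pi
    return re_sigma + 1j * im_sigma

E = np.linspace(-8.0, -2.0, 2001)
sigma = sc_self_energy(E)
```

Inside the gap the imaginary part vanishes (no electrode DOS), while deep in the band it saturates at −Γ/2, mimicking the wide-band metallic limit.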
Again, the two most striking differences between the hybrid junction and the M-M-M junction are in the thermopower and ZT, and they are even more profound for the strongly coupled junction. Due to the large value of the coupling, the transmission resonance of the M-M-M junction is very broad, and therefore the thermopower is very small. This, together with the fact that the WF law is obeyed (shown by the dashed line in the inset of Fig. 4(h)), results in a small ZT ∼ 10⁻². In contrast, the hybrid junction (solid lines) shows only a slight reduction of the thermopower (Fig. 4(f)): although the molecular level is broadened by the metallic lead, the band edge of the SC electrode still introduces a sharp feature into the transmission function. Along with a violation of the WF law (inset of Fig. 4(h)), this leads to a relatively large ZT ∼ 1. Surprisingly, the strongly-coupled hybrid junction displays a larger ZT than the weakly-coupled junction, with large values of ZT for ε0 well inside the SC band gap (as a result of the WF-law violation inside the gap). As noted earlier, one of the advantages of hybrid NP-molecule junctions is the ability to tune not the molecular orbitals but the SC-NP band structure. With this in mind, in Fig. 5 we plot ZT at a fixed molecular orbital ε0 = −5.2 eV as a function of the band gap Δ (the band center is at εc = −4.8 eV, and Δ = 0 corresponds to an M-M-M junction, see Fig. 1(a)), exploring the two cases of a weakly coupled molecule (solid blue line) and a strongly coupled molecule (dashed red line), as described above. For the weakly coupled molecule we see an increase by a factor of 2 from the M-M-M junction to the optimal band gap. For the strongly coupled junction we find that ZT can rise as high as ∼ 0.6, a four-orders-of-magnitude increase in ZT compared with the M-M-M junction.
The inset shows S as a function of Δ; both the weakly-coupled and the strongly-coupled molecules exhibit an orders-of-magnitude increase in S compared with the M-M-M junction. Finally, in Fig. 6, the temperature dependence of ZT is examined for the weakly-coupled hybrid molecular junction, evaluated at ε0 = −5.2 eV (corresponding to the maximum of ZT in Fig. 4(d)); the rest of the parameters are the same as in Fig. 4(a-d). ZT is found to increase with temperature, exhibiting a maximum of ZT ∼ 2 at T ∼ 450 K, followed by a moderate decrease. This finding again implies that relatively large values of ZT persist over a broad range of parameters. In fact, in the inset of Fig. 5 we plot ZT calculated with an over-estimated value of the phonon thermal conductance, κphn = 150 pW/K, and we find that ZT still reaches values of ZT ∼ 0.3, several orders of magnitude larger than the observed values.

SUMMARY AND CONCLUSIONS

In summary, we presented calculations of the thermopower and thermoelectric FOM for a hybrid metal-single molecule-semiconducting nanoparticle-metal junction. The presence of the SC-NP and the position of the molecular orbital close to the semiconducting band edge enhance the particle-hole asymmetry required for efficient thermoelectric conversion. The resulting values of the thermopower and ZT are much larger than those measured in experiments, reaching values as high as S ∼ 500 μV/K and ZT ∼ 1 (an improvement of four orders of magnitude over measured molecular junctions). We showed that this enhanced thermoelectric performance persists over a wide range of parameters and is robust against disorder in the form of surface-attached molecules. We also showed that decoherence at the molecule-nanoparticle boundary is not detrimental to the thermoelectric performance, and that large values of ZT persist up to high temperatures.
Comparing hybrid SC-NP-molecule junctions to the more "standard" metal-molecule-metal junctions, we found that for weakly-coupled molecules there is a factor ∼ 2 increase in ZT and a factor ∼ 4 increase in the thermopower. For strongly coupled molecules, the advantage of hybrid NP-molecule junctions is even more profound, with a 2-4 orders-of-magnitude increase in ZT and thermopower in hybrid junctions. The model presented here is generic, not aimed at any specific system. Nevertheless, our numerical parameters were taken from experimentally observed values, including the phononic contribution to the thermal conductance. This, together with the fact that the enhanced thermoelectric performance was found over a wide range of molecular parameters and was robust against disorder, decoherence and high temperatures, strongly suggests that high values of the thermopower can be reached in future experiments on molecule-nanoparticle junctions, which are promising candidates for nano-scale thermoelectric conversion.
""" Local filesystem adaptor implementation """
import os
import radical.utils as ru
from ... import exceptions as rse
from ...utils import pty_shell as sups
from ...adaptors import base as rsab
from ...adaptors.cpi import filesystem as cpi
from ... import filesystem as api
from ...adaptors.cpi.decorators import SYNC_CALL
###############################################################################
# adaptor info
#
_ADAPTOR_NAME = 'radical.saga.adaptor.srm_file'
_ADAPTOR_SCHEMAS = ['srm']
_ADAPTOR_OPTIONS = [
{
'category': 'radical.saga.adaptor.srm_file',
'name': 'pty_url',
'type': str,
'default': 'fork://localhost/',
'documentation': '''The local or remote url the adaptor connects to.''',
'env_variable': None
}
]
_ADAPTOR_CAPABILITIES = {}
_ADAPTOR_DOC = {
'name' : _ADAPTOR_NAME,
'cfg_options' : _ADAPTOR_OPTIONS,
'capabilities' : _ADAPTOR_CAPABILITIES,
'description' : 'The SRM filesystem adaptor.',
'details' : """This adaptor interacts with SRM Storage Elements
""",
'schemas' : {'srm': 'srm filesystem.'}
}
_ADAPTOR_INFO = {
'name' : _ADAPTOR_NAME,
'version' : 'v0.3',
'schemas' : _ADAPTOR_SCHEMAS,
'cpis' : [
{
'type' : 'radical.saga.namespace.Directory',
'class' : 'SRMDirectory'
},
{
'type' : 'radical.saga.namespace.Entry',
'class' : 'SRMFile'
},
{
'type' : 'radical.saga.filesystem.Directory',
'class' : 'SRMDirectory'
},
{
'type' : 'radical.saga.filesystem.File',
'class' : 'SRMFile'
}
]
}
TRANSFER_TIMEOUT = 3600 # Timeout of the SRM plugin for the transfer
OPERATION_TIMEOUT = 3600 # Should be greater than or equal to TRANSFER_TIMEOUT
CONNECTION_TIMEOUT = 180 # Technically the same as OPERATION_TIMEOUT,
# but used for non-transfer operations.
###############################################################################
# The adaptor class
class Adaptor(rsab.Base):
"""
This is the actual adaptor class, which gets loaded by SAGA (i.e. by the
SAGA engine), and which registers the CPI implementation classes which
provide the adaptor's functionality.
"""
def __init__(self) :
rsab.Base.__init__(self, _ADAPTOR_INFO, _ADAPTOR_OPTIONS)
self.pty_url = self._cfg['pty_url']
def sanity_check(self):
pass
def file_get_size(self, shell, url):
try:
# Following columns are displayed for each entry:
# mode, number of links, group id, userid, size, last modification time, and name.
rc, out, _ = shell.run_sync("gfal-ls --color never --timeout %d --long %s" % (CONNECTION_TIMEOUT, url))
except:
shell.finalize(kill_pty=True)
raise Exception("get_size failed")
if rc != 0:
if 'SRM_INVALID_PATH' in out:
raise rse.DoesNotExist(url)
else:
raise Exception("Couldn't list file")
fields = out.split()
# -rw-r--r-- 1 45 44 19 May 30 15:29 srm://osg-se.sprace.org.br:8443/srm/managerv2?SFN=/pnfs/sprace.org.br/data/osg/marksant/TESTFILE
_, _, _, _, size_str, _, _, _, _ = fields
return int(size_str)
def srm_stat(self, shell, url):
# In case of an URL the fields are:
# file mode, number of links to the file, user id, group id, file size(bytes), locality, file name.
# srm://srm.hep.fiu.edu:8443/srm/v2/server?SFN=/mnt/hadoop/osg/marksant/TESTFILE")
# -rwxr-xr-x 1 1 2 19 ONLINE /mnt/hadoop/osg/marksant/TESTFILE
try:
# Following columns are displayed for each entry:
# mode, number of links, group id, userid, size, last modification time, and name.
rc, out, _ = shell.run_sync(
"gfal-ls --color never --timeout %d --directory --long %s" % (CONNECTION_TIMEOUT, url))
except:
shell.finalize(kill_pty=True)
raise Exception("stat failed")
if rc != 0:
if 'SRM_INVALID_PATH' in out:
raise rse.DoesNotExist(url)
if 'SRM_FAILURE' in out and 'forbidden' in out:
raise rse.AuthorizationFailed(url)
if 'Command timed out after' in out:
raise rse.Timeout("Connection timeout")
if 'Communication error on send' in out:
# (gfal-ls error: 70 (Communication error on send) -
# srm-ifce err: Communication error on send,
# err: [SE][Ls][] httpg://cit-se.ultralight.org:8443/srm/v2/server:
# CGSI-gSOAP running on nodo86 reports could not open connection
# to cit-se.ultralight.org:8443\n\n\n)
raise rse.NoSuccess("Connection failed")
else:
raise rse.NoSuccess("Couldn't list file")
# Sometimes we get cksum too, which we ignore
fields = out.split()[:7]
stat_str, _, _, _, size_str, _, _ = fields
mode = stat_str[0]
if mode == '-':
mode = 'file'
elif mode == 'd':
mode = 'dir'
elif mode == 'l':
mode = 'link'
else:
raise rse.BadParameter("stat() unknown mode: '%s' (%s)" % (mode, out))
size = int(size_str)
return {
'mode': mode,
'size': size
}
# --------------------------------------------------------------------------
#
def srm_transfer(self, shell, flags, src, dst):
if isinstance(src, ru.Url):
src = src.__str__()
if isinstance(dst, api.file.File):
dst = dst.get_url()
try:
rc, out, _ = shell.run_sync('gfal-copy --parent --timeout %d --transfer-timeout %d %s %s' % (
OPERATION_TIMEOUT, TRANSFER_TIMEOUT, src, dst))
except:
shell.finalize(kill_pty=True)
raise Exception("transfer failed")
if rc != 0:
if 'SRM_INVALID_PATH' in out:
raise rse.DoesNotExist(src)
elif '(File exists)' in out:
raise rse.AlreadyExists(dst)
elif 'Could not open destination' in out:
raise rse.DoesNotExist(dst)
else:
raise Exception("Copy failed.")
# --------------------------------------------------------------------------
#
def srm_file_remove(self, shell, flags, tgt):
if isinstance(tgt, api.file.File):
tgt = tgt.get_url()
try:
rc, out, _ = shell.run_sync("gfal-rm --timeout %d %s" % (CONNECTION_TIMEOUT, tgt))
except:
shell.finalize(kill_pty=True)
raise Exception("remove failed")
if rc != 0:
if 'SRM_INVALID_PATH' in out:
raise rse.DoesNotExist(tgt)
else:
raise Exception("Remove failed.")
# --------------------------------------------------------------------------
#
def srm_dir_remove(self, shell, flags, tgt):
if isinstance(tgt, api.directory.Directory):
tgt = tgt.get_url()
if isinstance(tgt, api.file.File):
tgt = tgt.get_url()
if isinstance(tgt, ru.Url):
tgt = str(tgt)
try:
rc, out, _ = shell.run_sync("gfal-rm --recursive %s" % tgt)
except:
shell.finalize(kill_pty=True)
raise Exception("remove failed")
if rc != 0:
if 'SRM_INVALID_PATH' in out:
raise rse.DoesNotExist(tgt)
else:
raise Exception("Remove failed.")
# --------------------------------------------------------------------------
#
def srm_list(self, shell, url, npat, flags):
if npat:
raise rse.NotImplemented("no pattern selection")
if isinstance(url, api.Directory):
url = url.get_url()
if isinstance(url, api.File):
url = url.get_url()
try:
rc, out, _ = shell.run_sync("gfal-ls --color never --timeout %d %s" % (CONNECTION_TIMEOUT, url))
except:
shell.finalize(kill_pty=True)
raise Exception("list failed")
if rc != 0:
if 'SRM_INVALID_PATH' in out:
raise rse.DoesNotExist(url)
else:
raise Exception("Couldn't list directory.")
return out.split('\n')
# --------------------------------------------------------------------------
#
def srm_list_kind(self, shell, url):
try:
rc, out, _ = shell.run_sync("gfal-ls --color never --timeout %d --long %s" % (CONNECTION_TIMEOUT, url))
except:
shell.finalize(kill_pty=True)
# TODO: raise something else or catch better?
raise Exception("list failed")
if rc != 0:
if 'SRM_INVALID_PATH' in out:
raise rse.DoesNotExist(url)
else:
raise Exception("Couldn't list directory.")
entries = out.split('\n')
files = []
dirs = []
# Output format
# ---------- 1 0 0 1048576000 May 24 23:14 1000M
# ---------- 1 0 0 104857600 May 24 23:13 100M
# ---------- 1 0 0 10485760 May 24 23:13 10M
# ---------- 1 0 0 1048576 May 24 22:59 1M
# d--------- 1 0 0 0 Jun 1 11:37 tmp
for entry in entries:
if not entry:
continue
kind = entry[0]
name = entry.split()[8]
if kind == '-':
files.append(name)
elif kind == 'd':
dirs.append(name)
return (files, dirs)
# --------------------------------------------------------------------------
#
def surl2query(self, url, surl, tgt_in):
url = ru.Url(url)
if tgt_in:
surl = os.path.join(surl, str(tgt_in))
url.query = 'SFN=%s' % surl
return url
###############################################################################
#
class SRMDirectory (cpi.Directory):
# --------------------------------------------------------------------------
#
def __init__(self, api, adaptor):
_cpi_base = super(SRMDirectory, self)
_cpi_base.__init__(api, adaptor)
# --------------------------------------------------------------------------
#
def _alive(self):
alive = self.shell.alive()
if not alive:
self.shell = sups.PTYShell(self._adaptor.pty_url)
# --------------------------------------------------------------------------
#
@SYNC_CALL
def init_instance(self, adaptor_state, url, flags, session):
self._url = ru.Url(url) # deep copy
self._flags = flags
self._session = session
self._init_check()
try:
# open a shell
self.shell = sups.PTYShell(self._adaptor.pty_url, self.session)
except:
raise rse.BadParameter("Couldn't open shell (%s)" % self._adaptor.pty_url)
#
# Test for valid proxy
#
try:
rc, out, _ = self.shell.run_sync("grid-proxy-info")
except:
self.shell.finalize(kill_pty=True)
raise rse.NoSuccess("grid-proxy-info failed (runsync)")
if rc != 0:
raise rse.NoSuccess("grid-proxy-info failed (rc!=0)")
if 'timeleft : 0:00:00' in out:
raise rse.AuthenticationFailed("x509 proxy expired.")
#
# Test for gfal2 tool
#
try:
rc, _, _ = self.shell.run_sync("gfal2_version")
except:
self.shell.finalize(kill_pty=True)
raise rse.NoSuccess("gfal2_version")
if rc != 0:
raise rse.DoesNotExist("gfal2 client not found")
return self.get_api()
# --------------------------------------------------------------------------
#
def _init_check(self):
url = self._url
flags = self._flags
if url.fragment :
raise rse.BadParameter ("Cannot handle url %s (has fragment)" % url)
if url.username :
raise rse.BadParameter ("Cannot handle url %s (has username)" % url)
if url.password :
raise rse.BadParameter ("Cannot handle url %s (has password)" % url)
self._path = url.path
(prefix, surl) = url.query.split('=')
if prefix != 'SFN':
raise rse.BadParameter("SURL prefix %s is not SFN." % prefix)
self._surl = surl
# --------------------------------------------------------------------------
#
@SYNC_CALL
def get_url(self):
return self._url
# --------------------------------------------------------------------------
#
@SYNC_CALL
def list(self, npat, flags):
self._alive()
url = self._adaptor.surl2query(self._url, self._surl, None)
return self._adaptor.srm_list(self.shell, url, npat, flags)
# ----------------------------------------------------------------
#
@SYNC_CALL
def make_dir(self, tgt_in, flags):
self._alive()
url = self._adaptor.surl2query(self._url, self._surl, tgt_in)
try:
rc, out, _ = self.shell.run_sync("srmmkdir %s" % url)
except:
self.shell.finalize(kill_pty=True)
raise Exception(" failed")
if rc != 0:
if 'SRM_DUPLICATION_ERROR' in out:
# Throw exception only if Exclusive flag was set.
if flags & api.EXCLUSIVE:
raise rse.AlreadyExists(url)
else:
raise Exception("Couldn't create directory.")
# ----------------------------------------------------------------
#
@SYNC_CALL
def is_dir_self(self):
return self.is_dir(None)
# --------------------------------------------------------------------------
#
@SYNC_CALL
def is_dir(self, tgt_in):
url = self._adaptor.surl2query(self._url, self._surl, tgt_in)
stat = self._adaptor.srm_stat(self.shell, url)
if stat['mode'] == 'dir':
return True
else:
return False
# ----------------------------------------------------------------
#
@SYNC_CALL
def is_link_self(self):
return self.is_link(None)
# --------------------------------------------------------------------------
#
@SYNC_CALL
def is_link(self, tgt_in):
url = self._adaptor.surl2query(self._url, self._surl, tgt_in)
stat = self._adaptor.srm_stat(self.shell, url)
if stat['mode'] == 'link':
return True
else:
return False
# ----------------------------------------------------------------
#
@SYNC_CALL
def is_file_self(self):
return self.is_file(None)
# --------------------------------------------------------------------------
#
@SYNC_CALL
def is_file(self, tgt_in):
url = self._adaptor.surl2query(self._url, self._surl, tgt_in)
stat = self._adaptor.srm_stat(self.shell, url)
if stat['mode'] == 'file':
return True
else:
return False
# --------------------------------------------------------------------------
#
@SYNC_CALL
def get_size(self, tgt_in):
if '/' in tgt_in:
# Assume absolute URI
url = tgt_in
else:
url = self._adaptor.surl2query(self._url, self._surl, tgt_in)
return self._adaptor.file_get_size(self.shell, url)
# ----------------------------------------------------------------
#
@SYNC_CALL
def remove(self, tgt, flags):
if flags & api.RECURSIVE:
self._adaptor.srm_dir_remove(self.shell, flags, tgt)
else:
self._adaptor.srm_file_remove(self.shell, flags, tgt)
# ----------------------------------------------------------------
#
@SYNC_CALL
def remove_self(self, flags):
self.remove(self._url, flags)
# ----------------------------------------------------------------
#
@SYNC_CALL
def copy_self(self, tgt, flags):
return self.copy(src_in=None, tgt_in=tgt, flags=flags)
# ----------------------------------------------------------------
#
@SYNC_CALL
def copy(self, src, tgt, flags):
self._alive()
self._adaptor.srm_transfer(self.shell, flags, src, tgt)
# ----------------------------------------------------------------
#
@SYNC_CALL
def exists(self, tgt):
self._alive()
try:
self._adaptor.srm_stat(self.shell, tgt)
except rse.DoesNotExist:
return False
return True
# ----------------------------------------------------------------
#
@SYNC_CALL
def close(self, timeout=None):
if timeout:
raise rse.Timeout("timeout for close not supported")
######################################################################
#
# file adaptor class
#
class SRMFile(cpi.File):
def __init__(self, api, adaptor):
_cpi_base = super(SRMFile, self)
_cpi_base.__init__(api, adaptor)
    def _dump(self):
        print("url    : %s" % self._url)
        print("flags  : %s" % self._flags)
        print("session: %s" % self._session)
# --------------------------------------------------------------------------
#
def _alive(self):
alive = self.shell.alive()
if not alive:
self.shell = sups.PTYShell(self._adaptor.pty_url)
@SYNC_CALL
def init_instance(self, adaptor_state, url, flags, session):
self._url = url
self._flags = flags
self._session = session
self._init_check()
try:
# open a shell
self.shell = sups.PTYShell(self._adaptor.pty_url, self.session)
except:
raise rse.NoSuccess("Couldn't open shell")
#
# Test for valid proxy
#
try:
rc, out, _ = self.shell.run_sync("grid-proxy-info")
except:
self.shell.finalize(kill_pty=True)
raise rse.NoSuccess("grid-proxy-info failed")
if rc != 0:
raise rse.NoSuccess("grid-proxy-info failed")
if 'timeleft : 0:00:00' in out:
raise rse.AuthenticationFailed("x509 proxy expired.")
#
# Test for gfal2 tool
#
try:
rc, _, _ = self.shell.run_sync("gfal2_version")
except:
self.shell.finalize(kill_pty=True)
raise rse.NoSuccess("gfal2_version")
if rc != 0:
raise rse.DoesNotExist("gfal2 client not found")
return self.get_api()
def _init_check(self):
url = self._url
flags = self._flags
if url.username :
raise rse.BadParameter ("Cannot handle url %s (has username)" % url)
if url.password :
raise rse.BadParameter ("Cannot handle url %s (has password)" % url)
self._path = url.path
path = url.path
@SYNC_CALL
def get_url(self):
return self._url
@SYNC_CALL
def get_size_self(self):
self._alive()
return self._adaptor.file_get_size(self.shell, self._url)
# ----------------------------------------------------------------
#
@SYNC_CALL
def copy_self(self, dst, flags):
self._alive()
self._adaptor.srm_transfer(self.shell, flags, self._url, dst)
# ----------------------------------------------------------------
#
@SYNC_CALL
def remove_self(self, flags):
self._alive()
self._adaptor.srm_file_remove(self.shell, flags, self._url)
# ----------------------------------------------------------------
#
@SYNC_CALL
def is_file_self(self):
self._alive()
stat = self._adaptor.srm_stat(self.shell, self._url)
if stat['mode'] == 'file':
return True
else:
return False
# ----------------------------------------------------------------
#
@SYNC_CALL
def is_link_self(self):
self._alive()
stat = self._adaptor.srm_stat(self.shell, self._url)
if stat['mode'] == 'link':
return True
else:
return False
# ----------------------------------------------------------------
#
@SYNC_CALL
def is_dir_self(self):
self._alive()
stat = self._adaptor.srm_stat(self.shell, self._url)
if stat['mode'] == 'dir':
return True
else:
return False
# ----------------------------------------------------------------
#
@SYNC_CALL
def close(self, timeout=None):
if timeout:
raise rse.BadParameter("timeout for close not supported")
# vim: tabstop=8 expandtab shiftwidth=4 softtabstop=4
|
Low-Complexity Algorithm for Radio Astronomy Observation Data Transport in an Integrated NGSO Satellite Communication and Radio Astronomy System An integrated non-geostationary orbit (NGSO) satellite communication and radio astronomy system (SCRAS) was recently proposed in prior work as a new coexistence paradigm. In SCRAS, the transport of radio astronomical observation (RAO) data from space to the ground stations is an important problem, for which prior work proposed a linear-programming-optimization-based algorithm. However, the computational complexity of this algorithm is quite high, which is exacerbated by the need to re-compute the algorithm frequently due to the time-varying characteristics of the observing satellites, the Earth stations, and the RAO region. In this paper, to address this high-complexity issue, we develop a low-complexity RAO data transport algorithm. In addition to the static resource constraint scenario considered in the existing work, we introduce a dynamic resource constraint scenario to exploit knowledge of the satellite communication system (SCS) traffic statistics. We also present a modified version of the existing algorithm for the dynamic resource constraint scenario. In addition to the number of inter-satellite link (ISL) hops as the RAO data transport cost metric, we also evaluate the sum of the squared ISL hop distances, a metric which reflects the transmission energy cost. Furthermore, a computational complexity analysis and the data transport costs of the algorithms are presented. Our results show that the proposed algorithm yields several orders of magnitude of computational complexity saving over the existing method while incurring a modest increase in the data transport cost. Our proposed dynamic resource constraint scenario provides a plausible reduction of the RAO data transport cost.
Cladribine repurposed in multiple sclerosis: making a fortune out of a generic drug

Cladribine (CdA), a purine nucleoside analogue (PNA) that depletes CD4+ and CD8+ T-cells, has recently been repositioned by Merck as an oral disease-modifying therapy for highly active relapsing-remitting multiple sclerosis (RRMS), available as oral cladribine tablets (Mavenclad 10 mg). Its surplus value within the existing panel of disease-modifying therapies (DMTs) for MS, such as the anti-CD20 B-cell-targeting monoclonal antibodies rituximab (mouse chimeric), ocrelizumab (humanised) and ofatumumab (fully human), for which present data suggest high effectiveness in multiple sclerosis, is curious.1 In this personal viewpoint, we would like to highlight the potential usefulness of the available PNAs and their limitations. PNAs are active in chronic lymphocytic leukaemia, hairy cell leukaemia (HCL) and, off-label, in low-grade lymphomas. Cladribine has been used for HCL since the early 1980s as intravenous therapy.2 Cladribine delivered subcutaneously (SC) appeared to be most convenient in HCL and is considered to have equal efficacy compared with intravenous administration. Oral CdA use has been suggested since the early 1990s by Carson et al,3 were it not that it is unstable at acidic pH and is degraded by bacterial nucleoside phosphorylases. Other available PNAs are fludarabine (F-Ara) and clofarabine (CAFdA); all are deoxyadenosine derivatives that act as antimetabolites, competing with the natural deoxynucleosides used for DNA synthesis (figure 1). Figure 1 Chemical structures of the purine analogues cladribine, fludarabine and clofarabine, compared with their natural deoxynucleoside, deoxyadenosine.
Correspondence to Dr Hans J C Buiter, Clinical Pharmacology and Pharmacy, Amsterdam University Medical Centres, Amsterdam 1081 HV, The Netherlands; hjc.buiter{at}amsterdamumc.nl

All of these PNAs need to be metabolised to exert their cytotoxic effects: following transport by specialised nucleoside membrane transporters and subsequent phosphorylation to their active corresponding nucleotides, they are supposed to be especially active against low-grade malignancies, with similar toxicity profiles across the above-mentioned diseases, including moderate to profound and prolonged immunosuppression; they are thus clinically effective in haematological malignant disorders and autoimmune disorders, including RRMS.4 It is worth noting that CAFdA was developed as a rational extension of the deoxyadenosine analogues, intended to overcome the oral bioavailability limitations and to incorporate the best qualities of both F-Ara and CdA while having a similar metabolic/toxicity profile. The prolonged immunosuppression by PNAs can indeed be beneficial for controlling relapsing-remitting MS, as was shown for fludarabine, which was investigated as adjunct therapy in interferon-beta-treated RRMS.5 Preliminary interim analyses suggest that temporary fludarabine therapy may provide sustained immunosuppression. Cladribine performs similarly to fludarabine and was first licenced for RRMS in 2011, yet was later withdrawn when regulators requested more studies to address issues related to severe lymphopaenia. After the registration of alemtuzumab for relapsing-remitting MS, which induces significantly more lymphopaenia and side effects than CdA, resubmission of CdA tablets to the regulators was prompted.6 Intriguingly, in the first observational pilot studies for MS, CdA was given intravenously.7 To date, CAFdA has not yet been investigated as a DMT for MS. In general, oral drugs cost the same as, and usually less than, parenteral drugs.
As is the case for fludarabine, where intravenous versus oral drug costs per milligram are similar, that is, €2.57 versus €2.77 per mg. Apparently not for cladribine: interestingly, the price of oral CdA with the registered indication for RRMS is over €20 000 per patient per year, compared with less than €1000 per patient per year for equivalent dosing by parenteral administration. In our opinion, the drug industry strives to optimise profits by selecting markets where it can easily obtain a monopoly position while ensuring adequate drug production to meet market needs. We give two examples.8 In 2010, Valeant Pharmaceuticals acquired the rights to Syprine (trientine dihydrochloride), a drug from the 1960s used to treat Wilson's disease. They raised its price substantially, by more than 3000% for a monthly supply: from $652 to $21 267. In 2015, Turing Pharmaceuticals acquired the rights to Daraprim (pyrimethamine), a drug approved by the Food and Drug Administration (FDA) in 1953 for toxoplasmosis. Turing, then the only manufacturer of pyrimethamine, raised the price of Daraprim by more than 5000% for one tablet: from $13.50 to $750. The oral CdA formulation is being marketed as Mavenclad in the Netherlands at a list price of €2785 per 10 mg tablet, compared with €283 for a 10 mg ampoule, taking into account that cladribine as active pharmaceutical ingredient costs only approximately €9 per mg. Therefore, it seems to us that Merck, by registering and marketing oral CdA (Mavenclad) for RRMS, is following a similar approach as described above and is mining healthcare to pay for it. Obviously, for patients an oral formulation of a PNA seems to have clear benefits compared with parenteral formulations, because of patient comfort and potentially fewer outpatient clinic admissions to receive parenteral PNA drug infusions. On the other hand, in general, compliance is better registered and controlled with parenteral drug administration at the outpatient clinic.
In the case of CdA, oral administration is not preferred per se because of its low bioavailability and interpatient variation; the bioavailability of 10 mg oral CdA is approximately 40% (summary of product characteristics (SPC)). Parenteral CdA, for example subcutaneous administration, would allow lower dosages owing to the higher bioavailability. Using subcutaneous instead of oral cladribine would potentially result in a total cost reduction of 95% (per os €32 500 vs SC €1600, based on the Mavenclad treatment scheme for RRMS for a patient with 70 kg bodyweight as described in the SPC). In our opinion, the only PNA for which oral administration would be beneficial is CAFdA, given its potentially higher bioavailability compared with F-Ara and CdA. 9 Unfortunately, an oral drug formulation of CAFdA is not yet available. Drug repositioning or reprofiling/repurposing is the process of discovering, validating and marketing previously approved drugs for new indications. This process is of growing interest to academia and industry because of the reduced time and costs associated with developing repositioned drugs. Newly licenced pharmaceutical indications are frequently approved without any controlled trial results, particularly in solid and haematological malignancies. Cladribine has been available off label and reported in neurological journals since the early 1990s and has been well studied both orally and parenterally. However, only orally administered CdA has been licenced for the treatment of RRMS. Subcutaneous CdA for MS was originally developed for compassionate use. As such, it has only been administered to MS patients in a few centres around the world.
Therefore, we suggest that hospital pharmacists promote equivalent subcutaneous dosing of PNA therapy to their healthcare providers, as haematologists do worldwide, especially when CdA is considered for MS patients, as this will result in a 95% cost reduction, better bioavailability and less interpatient variation for their patients.
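The editorial's cost comparison can be reproduced arithmetically. The sketch below uses the dosing scheme and Dutch list prices quoted above (3.5 mg/kg cumulative oral dose over two years per the SPC, 40% oral bioavailability to derive an absorbed-equivalent subcutaneous dose); it is an illustrative back-of-the-envelope calculation, not a formal pharmacoeconomic model:

```python
body_weight_kg = 70.0
cumulative_oral_dose_mg_per_kg = 3.5   # Mavenclad SPC: cumulative dose over 2 years
oral_bioavailability = 0.40            # SPC figure quoted above

oral_price_per_mg = 2785.0 / 10.0      # EUR, list price of a 10 mg tablet
sc_price_per_mg = 283.0 / 10.0         # EUR, list price of a 10 mg ampoule

oral_dose_mg = cumulative_oral_dose_mg_per_kg * body_weight_kg  # 245 mg per os
sc_dose_mg = oral_dose_mg * oral_bioavailability                # 98 mg absorbed-equivalent SC

oral_cost = oral_dose_mg * oral_price_per_mg   # ~EUR 68 000 over two years
sc_cost = sc_dose_mg * sc_price_per_mg         # ~EUR 2 800 over two years
reduction = 1 - sc_cost / oral_cost
print(f"{reduction:.0%}")  # ~96%, in line with the ~95% reduction cited above
```

The result agrees with the per-patient figures quoted in the text to within rounding of the list prices.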
The gallant service of the U.S. Navy in the War of 1812 is well known, but American Privateers in the War of 1812 deals with the other fleet the nation sent to sea.
operators. This marked what was something of the “golden age” of privateering.
American Privateers in the War of 1812 opens with an introduction by Good, a National Parks employee and independent scholar, which explains the concept of privateering and gives a short history of privateers in the service of the United States. He then has a chapter that catalogs the operations of privateers against British shipping during the war, followed by one that does so for the Navy and other agents. There follows a list, compiled from the pages of the Baltimore newspaper Niles' Weekly Register, of each ship taken, whether by privateers, the Navy, or other agents. For each capture, all available details are given, which means that at times entries can be quite sparse, while at others quite detailed. Several appendices provide summary information on the privateers, such as their home ports.
Nevertheless, American Privateers in the War of 1812 is a valuable resource for anyone interested in the maritime side of the war or in economic warfare.
The authors have studied the cutaneous vascularization of the lateral side of the ankle. Different vascular injection techniques identified a cutaneous flap based on the lateral calcaneal artery, a collateral branch of the posterior peroneal artery. This flap can be used to cover chronic ulcerations of the Achilles region.
def calc_heat_flow(self, t_out, internal_gains, solar_gains, energy_demand):
    """Split internal, solar and heating/cooling gains between the air,
    surface and thermal-mass nodes of the RC network.

    t_out is kept for interface compatibility but is not used here.
    """
    # Heat flow to the indoor air node [W]
    self.phi_ia = 0.5 * internal_gains + energy_demand
    # Heat flow to the internal surface node [W]
    self.phi_st = (1 - (self.A_m / self.A_t)
                   - (self.h_tr_w / (9.1 * self.A_t))) * (0.5 * internal_gains + solar_gains)
    # Heat flow to the thermal mass node [W]
    self.phi_m = (self.A_m / self.A_t) * (0.5 * internal_gains + solar_gains)
    # Share of the gains lost directly through the windows [W]
    self.phi_loss = (self.h_tr_w / (9.1 * self.A_t)) * (0.5 * internal_gains + solar_gains)
    return self.phi_loss
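The gains split above has a useful invariant: the three surface-directed flows (phi_st, phi_m, phi_loss) partition the surface-directed share of the gains exactly. A stand-alone sketch with hypothetical building parameters (the values below are illustrative, not from the original code; 9.1 W/m²K is the surface coefficient constant used above):

```python
# Hypothetical building parameters (illustrative, not from the original model)
A_m, A_t, h_tr_w = 60.0, 120.0, 30.0   # mass area [m2], total area [m2], window conductance [W/K]
internal_gains, solar_gains, energy_demand = 200.0, 300.0, 150.0  # [W]

gains_to_surfaces = 0.5 * internal_gains + solar_gains

phi_ia = 0.5 * internal_gains + energy_demand                          # to the indoor air node
phi_st = (1 - A_m / A_t - h_tr_w / (9.1 * A_t)) * gains_to_surfaces    # to the surface node
phi_m = (A_m / A_t) * gains_to_surfaces                                # to the thermal mass node
phi_loss = (h_tr_w / (9.1 * A_t)) * gains_to_surfaces                  # lost through windows

# The surface-side flows sum back to their share of the gains exactly:
assert abs((phi_st + phi_m + phi_loss) - gains_to_surfaces) < 1e-9
print(phi_ia, round(phi_m, 1))  # 250.0 200.0
```

The other half of the internal gains plus the heating/cooling demand goes to the air node, so the whole energy input is conserved across the four flows.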
With Christian Ponder once again starting (and playing poorly) for the Minnesota Vikings, pictures of this jersey emerged, detailing all the people to start at QB for the Vikes since the Tarvaris Jackson era:
It's obviously inspired by the famous Browns jersey:
And it's also been ripped off by an enterprising Bills fan:
We get it, y'all. Your team has had bad quarterbacks. But nobody's going to take anything away from Browns Jersey Lady.
That jersey is the OG and the best, because a) it literally contains every QB in the brief, wildly unsuccessful existence of the second Cleveland Browns franchise and b) every quarterback listed is horrible. That Vikings jersey has Brett Favre on it! That doesn't pass the awful test, even if it was late-edition Favre.
I know it's tempting to want to be a Browns fan, but ... actually it isn't very tempting at all. |
// app/src/main/java/com/nmwilkinson/rxjavaworkout/di/AsteroidsApplicationComponent.java
package com.nmwilkinson.rxjavaworkout.di;
import android.app.Application;
import com.nmwilkinson.rxjavaworkout.AsteroidsApplication;
import com.nmwilkinson.rxjavaworkout.model.Asteroids;
import com.nmwilkinson.rxjavaworkout.rest.NeoWsService;
import javax.inject.Singleton;
import dagger.Component;
import rx.Observable;
/**
* A component whose lifetime is the life of the application.
*/
@Singleton // Constrains this component to one-per-application or unscoped bindings.
@Component(modules = AsteroidsApplicationModule.class)
public interface AsteroidsApplicationComponent
{
// Field injections of any dependencies of the AsteroidsApplication
void inject(AsteroidsApplication application);
// Exported things that non-di setup code will need (e.g. an activity).
Application application();
Observable<Asteroids> asteroidsObservable();
}
|
package com.gempukku.swccgo.cards.set7.dark;
import com.gempukku.swccgo.cards.AbstractUsedInterrupt;
import com.gempukku.swccgo.cards.GameConditions;
import com.gempukku.swccgo.common.*;
import com.gempukku.swccgo.filters.Filter;
import com.gempukku.swccgo.filters.Filters;
import com.gempukku.swccgo.game.PhysicalCard;
import com.gempukku.swccgo.game.SwccgGame;
import com.gempukku.swccgo.logic.actions.PlayInterruptAction;
import com.gempukku.swccgo.logic.actions.TractorBeamAction;
import com.gempukku.swccgo.logic.effects.RespondablePlayCardEffect;
import com.gempukku.swccgo.logic.effects.TargetCardOnTableEffect;
import com.gempukku.swccgo.logic.effects.UseTractorBeamEffect;
import com.gempukku.swccgo.logic.modifiers.*;
import com.gempukku.swccgo.logic.timing.Action;
import java.util.Collections;
import java.util.LinkedList;
import java.util.List;
/**
* Set: Special Edition
* Type: Interrupt
* Subtype: Used
* Title: In Range
*/
public class Card7_254 extends AbstractUsedInterrupt {
public Card7_254() {
super(Side.DARK, 6, "In Range", Uniqueness.UNIQUE);
setLore("'They'll be in range of our tractor beam in moments, my lord.' 'Good. Prepare the boarding party and set your weapons for stun.'");
setGameText("If you have a Star Destroyer in a battle, during the weapons segment use its tractor beam for free. Add 2 to tractor beam destiny if targeting a unique (*) starship. If not captured, target is power and maneuver -3 for remainder of turn.");
addIcons(Icon.SPECIAL_EDITION);
}
@Override
protected List<PlayInterruptAction> getGameTextTopLevelActions(final String playerId, final SwccgGame game, final PhysicalCard self) {
// Check condition(s)
if (GameConditions.isDuringBattleWithParticipant(game,
Filters.and(Filters.your(playerId), Filters.Star_Destroyer, Filters.hasAttached(Filters.tractor_beam)))
) {
final PlayInterruptAction action = new PlayInterruptAction(game, self);
action.setText("Use tractor beam");
// Allow response(s)
Filter tractorBeamFilter = Filters.and(Filters.tractor_beam, Filters.attachedTo(Filters.and(Filters.participatingInBattle, Filters.your(playerId), Filters.Star_Destroyer)));
TargetingReason targetingReason = TargetingReason.OTHER;
action.appendTargeting(
new TargetCardOnTableEffect(action, playerId, "Choose tractor beam to use", targetingReason, tractorBeamFilter) {
@Override
protected void cardTargeted(final int targetGroupId, final PhysicalCard targetedCard) {
action.addAnimationGroup(targetedCard);
// Pay cost(s)
TractorBeamAction tractorBeamAction = targetedCard.getBlueprint().getTractorBeamAction(game, targetedCard);
Filter targetFilter = tractorBeamAction.getPossibleTargets();
TargetingReason targetingReason2 = TargetingReason.TO_BE_CAPTURED;
action.appendTargeting(
new TargetCardOnTableEffect(action, playerId, "Target with tractor beam", targetingReason2, targetFilter) {
@Override
protected void cardTargeted(final int starshipTargetGroupId, final PhysicalCard targetedCard) {
action.allowResponses("Use tractor beam",
new RespondablePlayCardEffect(action) {
@Override
protected void performActionResults(Action targetingAction) {
PhysicalCard finalTractorBeam = action.getPrimaryTargetCard(targetGroupId);
PhysicalCard finalTarget = action.getPrimaryTargetCard(starshipTargetGroupId);
TotalTractorBeamDestinyModifier destinyModifier = new TotalTractorBeamDestinyModifier(self, finalTractorBeam, 2);
List<Modifier> modifiers = new LinkedList<Modifier>();
modifiers.add(new PowerModifier(self, Filters.none, -3));
modifiers.add(new ManeuverModifier(self, Filters.none, -3));
// Perform result(s)
action.appendEffect(
new UseTractorBeamEffect(action, finalTractorBeam, true, Filters.unique, destinyModifier, modifiers, finalTarget));
}
});
}
}
);
}
}
);
return Collections.singletonList(action);
}
return null;
}
} |
A Review of the Parameters Affecting a Heat Pipe Thermal Management System for Lithium-Ion Batteries The thermal management system of batteries plays a significant role in the operation of electric vehicles (EVs). The purpose of this study is to survey various parameters enhancing the performance of a heat pipe-based battery thermal management system (HP-BTMS) for cooling the lithium-ion batteries (LIBs), including the ambient temperature, coolant temperature, coolant flow rate, heat generation rate, start-up time, inclination angle of the heat pipe, and length of the condenser/evaporator section. This review provides knowledge on the HP-BTMS that can guarantee achievement of the optimum performance of an EV LIB at a high charge/discharge rate. |
package com.github.natanbc.lavadsp.volume;
import com.github.natanbc.lavadsp.util.FloatToFloatFunction;
import com.github.natanbc.lavadsp.util.VectorSupport;
import com.sedmelluq.discord.lavaplayer.filter.FloatPcmAudioFilter;
/**
* Updates the effect volume, with a multiplier ranging from 0 to 5.
*/
public class VolumePcmAudioFilter implements FloatPcmAudioFilter {
private final FloatPcmAudioFilter downstream;
private float volume = 1.0f;
public VolumePcmAudioFilter(FloatPcmAudioFilter downstream) {
this.downstream = downstream;
}
@Deprecated
public VolumePcmAudioFilter(FloatPcmAudioFilter downstream, int channelCount, int bufferSize) {
this(downstream);
}
@Deprecated
public VolumePcmAudioFilter(FloatPcmAudioFilter downstream, int channelCount) {
this(downstream);
}
/**
* Returns the volume multiplier. 1.0 means unmodified.
*
* @return The current volume.
*/
public float getVolume() {
return volume;
}
/**
* Sets the volume multiplier. 1.0 means unmodified.
*
* @param volume Volume to use.
*
* @return {@code this}, for chaining calls.
*/
public VolumePcmAudioFilter setVolume(float volume) {
if(volume < 0) {
throw new IllegalArgumentException("Volume < 0.0");
}
if(volume > 5) {
throw new IllegalArgumentException("Volume > 5.0");
}
this.volume = volume;
return this;
}
/**
* Updates the volume multiplier, using a function that accepts the current value
* and returns a new value.
*
* @param function Function used to map the volume.
*
* @return {@code this}, for chaining calls
*/
public VolumePcmAudioFilter updateVolume(FloatToFloatFunction function) {
return setVolume(function.apply(volume));
}
@Override
public void process(float[][] input, int offset, int length) throws InterruptedException {
if(Math.abs(1.0 - volume) < 0.02) {
downstream.process(input, offset, length);
return;
}
for(float[] array : input) {
VectorSupport.volume(array, offset, length, volume);
}
downstream.process(input, offset, length);
}
@Override
public void seekPerformed(long requestedTime, long providedTime) {
//nothing to do
}
@Override
public void flush() {
//nothing to do
}
@Override
public void close() {
//nothing to do
}
}
|
The passenger cruise ship Saint Laurent has been safely refloated and removed from the Eisenhower Lock chamber in the Saint Lawrence Seaway following Thursday’s allision with a bumper.
The Saint Lawrence Seaway Development Corporation reports that navigation on the St. Lawrence Seaway resumed at 4 p.m. Saturday and vessels are once again transiting Eisenhower Lock. During the approximately 42 hours that navigation was suspended, 15 vessels were delayed.
The Saint Laurent cruise ship struck the upstream bumper at Eisenhower Lock in Massena, New York at 9:15 p.m. Thursday.
Thirty injuries were reported among the 192 passengers, 81 crew, and a pilot on board the vessel at the time of the accident. All passengers and crew on board have been safely evacuated from the vessel. Photos of the ship posted to Twitter show extensive damage to the bow.
No pollution was detected as a result of the incident. Meanwhile, a preliminary inspection showed no significant damage to Eisenhower Lock infrastructure, but SLSDC safety inspectors are continuing their review.
The Saint Laurent is a cruise ship owned by International Shipping Partners.
The Eisenhower Lock is one of two U.S. locks on the 10-mile-long Wiley-Dondero Canal, which provides access to Lake St. Lawrence and is operated by the SLSDC, a modal administration of the U.S. Department of Transportation. |
class ListNode:
def __init__(self, val=0, next=None):
self.val = val
self.next = next
def removeNthFromEnd(head: ListNode, n: int) -> ListNode:
    # Two-pointer approach: advance `fast` n nodes ahead of `slow`, then move
    # both until `fast` reaches the last node; `slow` then points just before
    # the node to delete. A dummy node handles removal of the head itself
    # (the original version failed when n equalled the list length).
    dummy = ListNode(0, head)
    fast = slow = dummy
    for _ in range(n):
        fast = fast.next
    while fast.next:
        fast = fast.next
        slow = slow.next
    slow.next = slow.next.next
    return dummy.next


if __name__ == '__main__':
    # Build 1 -> 2 -> 3 -> 4 -> 5 and remove the 2nd node from the end (the 4).
    head = ListNode(1, ListNode(2, ListNode(3, ListNode(4, ListNode(5)))))
    head = removeNthFromEnd(head, 2)
    values = []
    while head:
        values.append(head.val)
        head = head.next
    print(values)  # [1, 2, 3, 5]
/// Get the physical (in-memory) representation type of a date/time data type
fn physical_type(&self) -> DataType {
match self.data_type() {
DataType::Date64
| DataType::Timestamp(_, _)
| DataType::Interval(IntervalUnit::DayTime) => DataType::Int64,
DataType::Date32 | DataType::Interval(IntervalUnit::YearMonth) => DataType::Int32,
dt => panic!("already a physical type: {:?}", dt),
}
} |
Politics on the Endless Frontier: Postwar Research Policy in the United States Toward what end does the U.S. government support science and technology? How do the legacies and institutions of the past constrain current efforts to restructure federal research policy? Not since the end of World War II have these questions been so pressing, as scientists and policymakers debate anew the desirability and purpose of a federal agenda for funding research. Probing the values that have become embodied in the postwar federal research establishment, Politics on the Endless Frontier clarifies the terms of these debates and reveals what is at stake in attempts to reorganize that establishment. Although it ended up as only one among a host of federal research policymaking agencies, the National Science Foundation was originally conceived as central to the federal research policymaking system. Kleinman's historical examination of the National Science Foundation exposes the sociological and political workings of the system, particularly the way in which a small group of elite scientists shaped the policymaking process and defined the foundation's structure and future. Beginning with Vannevar Bush's 1945 manifesto The Endless Frontier, Kleinman explores elite and populist visions for a postwar research policy agency and shows how the structure of the American state led to the establishment of a fragmented and uncoordinated system for federal research policymaking. His book concludes with an analysis of recent efforts to reorient research policy and to remake federal policymaking institutions in light of the current "crisis" of economic competitiveness. A particularly timely study, Politics on the Endless Frontier will be of interest to historians and sociologists of science and technology and to science policy analysts. |
Measuring complex phenotypes: A flexible high-throughput design for micro-respirometry Variation in tissue-specific metabolism between species and among individuals is thought to be adaptively important; however, understanding this evolutionary relationship requires reliably measuring this trait in many individuals. In most higher organisms, tissue specificity is important because different organs (heart, brain, liver, muscle) have unique ecologically adaptive roles. Current technology and methodology for measuring tissue-specific metabolism is costly and limited by throughput capacity and efficiency. Presented here is the design for a flexible and cost-effective high-throughput micro-respirometer (HTMR) optimized to measure small biological samples. To verify precision and accuracy, substrate specific metabolism was measured in heart ventricles isolated from a small teleost, Fundulus heteroclitus, and in yeast (Saccharomyces cerevisiae). Within the system, results were reproducible between chambers and over time with both teleost hearts and yeast. Additionally, metabolic rates and allometric scaling relationships in Fundulus agree with previously published data measured with lower-throughput equipment. This design reduces cost, but still provides an accurate measure of metabolism in small biological samples. This will allow for high-throughput measurement of tissue metabolism that can enhance understanding of the adaptive importance of complex metabolic traits. Understanding evolution and ecological adaptation can be enhanced by combining genomics with quantitative analyses of complex phenotypic traits. This integrative approach requires sufficient sample size (i.e. 100s to 1000s) with precise measures of phenotypes; however, it can be challenging to obtain economical equipment for such high-throughput quantification.
38 To address this challenge, we present an inexpensive custom design to measure metabolism in 39 small biological samples such as cell suspensions, individual tissues or possibly small organisms. 40 Metabolism is a complex trait intricated in most physiological processes and is important to 41 organismal success. Thus, metabolism is ecologically and evolutionarily important. The 42 effect of the environment on metabolism as well as tissue-specific variation can vary 43 considerably among individuals and populations. These and other data suggest that 44 measuring metabolism can provide insights into the ecology and evolution of organisms. 45 Metabolism is typically quantified via oxygen consumption rates (MO2). Numerous 46 systems to measure MO2 are available from companies including Unisense, PreSens, and Loligo, 47 but each has limitations with respect to technical design, throughput capacity, and cost. For small 48 biological samples, systems often have limited capacity (e.g. Oxygraph 2-K, OROBOS 49 INSTRUMENTS, Innsbruck, Austria) or require expensive reagents and disposables (e.g., 50 Seahorse XF Analyzers, Agilent, Santa Clara, CA). Therefore, these systems are not ideal for 51 high-throughput experimental designs as it becomes time consuming and expensive to measure 52 many samples. There is a need in the field for a simple design that can measure multiple sample 53 simultaneously at a reduced cost. Here we present a design for a high-throughput micro-54 respirometer (HTMR) that increases throughput of tissue-specific metabolism while minimizing 55 costs maintaining and maintaining efficacy. We validate the precision and accuracy of this 56 4 system by measuring both Saccharomyces cerevisiae and substrate specific metabolism in 57 Fundulus heteroclitus heart ventricles. 58 60 The HTMR consists of a custom external plexiglass water bath designed to enclose 1-ml 61 micro-respiration chambers (Unisense) (Fig 1A). 
The water bath is connected to a temperature-62 controlled, re-circulating system, and placed on a multi-place stir plate. Each chamber contains a 63 stir bar and nylon mesh screen for mixing media while keeping tissues suspended ( Fig 1B). 64 Exact chamber volumes were determined by measuring the mass (to 0.001 g) of water that 65 completely filled individual chambers with the mesh screen and stir bar. A fluorometric oxygen 66 sensor spot (PreSens) is adhered to the internal side of the chamber lid with a polymer optical 67 fiber cable affixed to each chamber lid for contactless oxygen measurement through the sensor 68 spot. All cables are connected to a 10-channel microfiber-optic oxygen meter (PreSens), which 69 uses PreSens Measurement Studio 2 software to collect oxygen data at a sampling rate of 20 70 measurements per minute. Sensors were calibrated at 0% (using 0.05 g sodium dithionite per 1 71 ml of media) and 100% air saturation (fully oxygenated media). Validation of the HTMR was 72 carried out using a four-chamber system; however, it can easily be extended to a 10-chamber 73 system as in Fig 1A. 74 81 Each ventricle is measured in each substrate for six minutes and then placed in the following substrate while 82 exchanging chamber media. Substrate conditions are measured as follows: 1) 5 mM glucose; 2) 1 mM palmitic acid; 86 Metabolic rate determinations 87 The precision and accuracy of the HTMR was validated by measuring MO2 in both yeast 88 (Saccharomyces cerevisiae) and teleost (F. heteroclitus) heart ventricles. MO2 is measured in the 89 sealed chambers by measuring oxygen concentration at a rate of 20 measurements per minute, 90 over a six-minute period. During each daily measurement, a minimum of three blank 91 measurements, during which only media was in the chamber, were run to determine any 92 background flux. For each six-minute measurement, the last three minutes (60 datapoints) were 93 used for calculating metabolic rate. 
To do so, oxygen concentration was regressed against time to 94 determine the raw oxygen consumption rate (pmol*l-1 *min -1 ). Slopes were also calculated for 95 each blank measurement and averaged by chamber to quantify background flux, then subtracted 96 from each slope. Metabolic rate was measured as MO2 = (Msample -Mblank) * Vchamber * 1/60, 97 where MO2 is the final metabolic rate in pmol*s -1, M is the slope of oxygen consumption per 98 sample in pmol*l -1 *min -1, and V is the volume of each chamber in l. 99 PreSens datafiles provide data per sensor with oxygen concentration (mol*L -1 ) at each 100 time point (minutes). An R markdown file detailing this analysis of the raw PreSens data files 101 can be found at https://github.com/ADeLiberto/FundulusGenomics.git. Desert Island, ME, and Deer Isle, ME. Individuals were trapped on public land, and no permit 105 was needed to catch these marine minnows for non-commercial purposes. All fish were common 106 gardened at 20°C and salinity of 15 ppt in re-circulating aquaria for at least five months and then 107 acclimated to 12°C or 28°C for at least two months prior to metabolic measurements. Fish were 108 randomly selected, weighed, and then sacrificed by cervical dislocation. Heart ventricles were 109 isolated and immediately placed in Ringer's media (1.5mM CaCl2, 10 mM Tris-HCl pH 7.5, 150 110 mM NaCl, 5mM KCl, 1.5mM MgSO4) supplemented with 5 mM glucose and 10 U/ml heparin to 111 expel blood. Media was incubated at the measurement temperature prior to use. Ventricles were 112 then splayed following precedent of previous cardiac metabolism measurements in F. 113 heteroclitus. Splaying the hearts decreases variation and increases overall oxygen 114 consumption rates, as greater internal surface area is exposed to the substrate media. After 115 splaying, hearts were not further stimulated, as mechanical disruption or homogenization can 116 increase variability in oxygen consumption rates. 
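The MO2 calculation described above can be sketched numerically. The slope values below are illustrative placeholders, not measured data; in practice each slope comes from regressing the last three minutes of an oxygen trace against time:

```python
# Hypothetical slope values for one chamber (pmol * L^-1 * min^-1)
m_sample = 790_000.0   # oxygen consumption slope with tissue in the chamber
m_blank = 40_000.0     # mean background-flux slope from blank runs in this chamber
v_chamber = 0.001      # chamber volume in litres (1-ml micro-respiration chamber)

# MO2 = (Msample - Mblank) * Vchamber * 1/60  ->  pmol O2 per second
mo2 = (m_sample - m_blank) * v_chamber / 60.0
print(mo2)  # 12.5, on the order of the yeast rates reported below
```

The division by 60 converts the per-minute regression slope to a per-second rate, and multiplying by the chamber volume converts the concentration change to an absolute amount of oxygen.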
All animal husbandry and experimental 117 procedures were approved through the University of Miami Institutional Animal Care and Use 118 Committee (Protocol # 19-045). 119 Methodological validation 120 In order to validate the HTMR performance, several parameters were tested: 1) net flux at 121 multiple oxygen concentrations, 2) between-chamber variability in MO2, and 3) consistency of 122 MO2 over time. To quantify net flux and confirm equal rates between chambers, flux was 123 measured at multiple oxygen concentrations in each chamber. Here we define net flux as both 7 background oxygen consumption and oxygen diffusion into the system. Flux at 100% air 125 saturation was measured with fully oxygenated Ringer's media. To measure net flux at lower 126 oxygen saturations, Ringer's media was deoxygenated to the desired level with nitrogen gas. 127 85% air saturation was chosen because cardiac MO2 measurements over the six minutes typically 128 deplete oxygen to approximately 92% of air saturation but do not exceed 85%. To determine net 129 flux, oxygen concentration was measured in each chamber for 10 minutes and repeated in 130 triplicate. 131 Biological repeatability between chambers was tested with yeast at 28°C. A cell 132 suspension was prepared using 1 g of yeast per 10 ml of Ringer's media supplemented with 5 133 mM glucose. In each chamber, 100 l of the suspension was injected to account for variation in 134 chamber volume. Oxygen consumption was measured for 10 minutes in triplicate. MO2 was 135 calculated as above to confirm there were no differences among chambers. 136 In order to assess metabolic consistency over the time-course of the experiment as well as 137 chamber repeatability, hearts from four fish were isolated, and glucose metabolism was assayed 138 in each chamber at 28°C. Hearts were randomly assigned to one of the four chambers and cycled 139 through each of them, with media exchange between each measurement. 
Three blank 140 measurements were run at the conclusion of the experiment. MO2 was calculated as above and 141 then regressed against relative time of initial oxygen measurement per cycle to determine 142 metabolic rate consistency of heart tissue over time. 169 The HTMR is a simple custom design composed of a plexiglass water bath enclosing 170 micro-respiration chambers connected to a multi-channel oxygen meter (Fig 1A). For a 10-171 chamber system, the approximate cost per chamber is $1870, including the cost of the oxygen 172 meter and stir-plate. The full cost of the system is broken down in Table 1. The oxygen meter 173 itself represents the highest cost (~$14,000 for ten inputs). followed by 1 ml glass chambers with 174 lid containing two injection ports (~$300 each). Optical-fiber cables and sensor spots combined 175 are approximately $95 each. A multi-place stir-plate is also necessary (~$900). 176 193 To test biological repeatability among the chambers, yeast metabolism per chamber was 194 measured. Average MO2 was 12.502 ± 1.907 pmol*s -1, and there were no significant differences 195 in metabolism between each of the chambers (ANOVA, p = 0.538; Fig 2C). This MO2 for yeast 196 is approximately 40-fold higher than the net flux at 85% air saturation. In addition to yeast 197 measurements, heart ventricles were measured across all four chambers over a 45-minute time 198 period to validate both repeatability among chambers and that ventricles can maintain consistent 199 metabolic activity over time. Among the four replicates, there was no significant difference in 200 metabolic rate when regressed against time (linear model, p = 0.657; Fig 3A). Additionally, there 201 were no significant differences in metabolic activity among the four chambers for each heart 202 (ANOVA, p = 0.363; Fig 3B). 203 (Fig 4). For substrate specific metabolism, data was analyzed separately for 212 individuals measured at 12°C or 28°C. 
There is a large inter-individual variation in substrate 213 specific metabolism, which reduces the statistical power to reject the null hypothesis of no 214 difference among substrates. To avoid this type II error, we apply paired t-tests that compare 215 substrates within each individual and use Bonferroni's test to correct for multiple tests. 216 Glucose, FA, and LKA metabolism were significantly greater than endogenous (p=0.001, 217 Bonferroni's corrected p=0.006); except FA at 12°C (p=0.03; Bonferroni's corrected p=0.18; Fig 218 4A). Glucose metabolism was significantly greater than FA and LKA (Bonferroni's corrected 219 p=0.006) at both 12°C and 28°C. FA metabolism was significantly greater than LKA metabolism 220 at 28°C (Bonferroni's corrected p=0.006; Fig 4B) but not at 12°C (p=0.5; Fig 4A). 221 12 The log10 substrate specific MO2 from Maine and Massachusetts acclimated to 12°C and 228 28°C determined here can be compared to MO2 measured in Oleksiak et. al for Maine 229 individuals acclimated to 20°C using an ANCOVA with log10 body mass and temperature as 230 linear covariates. There were no significant differences (p = 0.55, 0.15, and 0.85 for glucose, FA 231 and LKA respectively) and the least squares fall within 5% of one another. 232 Allometric scaling of metabolism 233 Both body mass and heart ventricle mass were measured of each F. heteroclitus 234 individual measured at each temperature. The mean body mass of individuals measured at 12°C 235 and 28°C was 9.11 ± 2.87 g and 9.32 ± 2.90 g, respectively, and was not significantly different 236 between temperatures. Additionally, average ventricle masses were 0.013 ± 0.005 and 0.010 ± 237 0.004 for 12°C and 28°C, respectively and did not significantly differ between acclimation 238 temperatures. In F. 
heteroclitus body mass and heart mass are highly correlated (linear 239 regression at 12°C R 2 = 0.74, p<0.0001; for 28°C, R 2 = 0.66, p < 0.001) thus, body mass was used 240 to correct for variation due to mass between individuals, as done previously. Body mass 241 explained a significant amount of the variation (30-70%) in metabolism among individuals for 242 all conditions (Fig 5). Variance explained by body mass (R 2 ), was higher at 12°C than at 28°C 243 (Fig 5, S1Table). For glucose MO2, allometric scaling was identical (to the 2 nd significant digit) 244 to previous determinations and nearly the same as in Jayasundara et al.. Examining the 245 effect of temperature and substrates, allometric scaling coefficients (S1 Table), were between 246 0.65 to 1.29. While body mass contributed significantly to the variation between individuals, 247 there was no effect of sex on cardiac metabolism by linear regression at each substrate-248 temperature combination. A three-way ANOVA including substrate, body mass and sex showed 249 13 no significant differences between males and females in cardiac metabolism at 12°C or 28°C (p 250 = 0.0963 and p= 0.4143, respectively). 251 and at 28°C n=95. For full data on regression slopes, see S1 Table. 255 257 The HTMR provides a simple custom design for measuring small biological samples that 258 allows higher throughput measurements at lower costs. While cost is still not negligible, the probes were tested; however, they were more fragile and cumbersome to use. Temperature 281 control is also essential for consistent and repeatable measurements. Temperature has a large 282 impact on oxygen solubility; thus, precise temperature control is necessary and was closely 283 regulated and monitored during measurements. 284 Chamber mixing was another important factor to control. Without thorough mixing from 285 the stir-bars, oxygen measurements are inconsistent and inaccurate. 
This is an advantage of this 286 system design over other multi-well plate style oxygen readers in which the media is unstirred 287 during measurement, but instead rely on mixing prior to measurement. Continuous stirring 288 allows for longer measurement periods. The size of the mesh that holds the tissue above the stir 289 bar was also optimized: very small mesh inhibits mixing, but too large mesh would not separate 290 the tissue from the stir bar in the bottom of the chamber. Additionally, nylon mesh was used over 291 steel mesh, as it did not as readily retain air bubbles. 292 Finally, leak was tested extensively. At 85% air saturation, background flux (O2 use not 293 associated with biological sample) was small (< 1 pmol*s -1 ) compared to heart ventricle and 294 yeast MO2. To account for any amount of flux, blank measurements were taken throughout runs 295 and corrected for in each chamber, with no significant differences in leak among chambers. 296 Initially, the Unisense microinjection lids were chosen for flexibility; however, after completing 297 tests and measurements, the manufacturers released information that this particular model was 298 less airtight than other models. For future design construction, we recommend that researchers 299 use single-port lids with a sufficient path length to further minimize leak through diffusion. 300 301 The HTMR is sensitive to both substrates used to fuel heart ventricle metabolism and body 302 mass. HTMR determinations were very similar to previously published data (within 5%). 303 Additionally, substrate specific patterns, with highest rates supported by glucose, agree with 304 previous measurements in F. heteroclitus. Metabolism was unaffected by the time course 305 for measuring the four substrate conditions. Ventricles continue to contract over the duration of 306 the experiment and show no significant decline in metabolic activity (Fig 2A.). 
Importantly, body mass accounted for a significant amount of variation in these individuals following an allometric scaling pattern, and the log mass against log MO2 linear regression has nearly identical slopes to those determined by others. These data suggest that the system is both precise, because the variation among samples did not obscure substrate or body mass effects, and accurate, in that substrate-specific metabolic rates are similar to previous measures and have allometric scaling coefficients very similar to published data. The HTMR was designed to measure metabolism in many individuals, because of the large individual variation and adaptive significance of the trait. We were specifically interested in cardiac metabolism in F. heteroclitus, as this species shows large inter-individual variance in cardiac metabolism and in the mRNAs associated with this variance, and these patterns may hold true for many species. Better understanding the physiological and evolutionary importance of this variance requires many individuals, which the HTMR allows. For example, in this study, metabolism was quantified in approximately 200 ventricles in only 10 days. Although here the system is tested only with F. heteroclitus, it could easily be extended to study other types of tissue-specific metabolism in many individuals. The decreased cost and increased efficiency of the design can have countless applications, allowing for high-throughput measurement of tissue metabolism that can enhance our understanding of the adaptive importance of these traits.
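The log-log mass correction described above can be sketched in a few lines; a minimal illustration (function names and the reference mass are ours, not from the HTMR software):

```python
import numpy as np

def allometric_fit(mass_g, mo2):
    """Fit log10(MO2) = log10(a) + b * log10(mass); return (a, b)."""
    b, log_a = np.polyfit(np.log10(mass_g), np.log10(mo2), 1)
    return 10.0 ** log_a, b

def mass_corrected_mo2(mass_g, mo2, b, ref_mass_g=5.0):
    """Rescale each individual's MO2 to a common reference body mass."""
    mass_g = np.asarray(mass_g, dtype=float)
    mo2 = np.asarray(mo2, dtype=float)
    return mo2 * (ref_mass_g / mass_g) ** b
```

With a scaling coefficient b in the 0.65-1.29 range reported in S1 Table, this removes the body-mass trend before comparing metabolic rates among individuals.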
package org.zmsoft.persistent.common.DBManager;
import org.apache.ibatis.annotations.Mapper;
import org.zmsoft.framework.db.ISDatabaseSupport;
/** Database table management mapper */
@Mapper
public interface DBManagerMapper extends ISDatabaseSupport<DBManagerDBO> {
	/**
	 * Drop a database table.
	 *
	 * @param _DBManagerDBO_ descriptor of the table to drop
	 * @return number of rows affected (driver-dependent for DDL statements)
	 */
	int dropTable(DBManagerDBO _DBManagerDBO_);

	/**
	 * Create a database table.
	 *
	 * @param _DBManagerDBO_ descriptor of the table to create
	 * @return number of rows affected (driver-dependent for DDL statements)
	 */
	int createTable(DBManagerDBO _DBManagerDBO_);
}
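The two methods above need matching statements in a MyBatis XML mapper. A hedged sketch follows — the `tableName` property and the column DDL are assumptions for illustration, not taken from the project's actual `DBManagerDBO`:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN"
    "http://mybatis.org/dtd/mybatis-3-mapper.dtd">
<mapper namespace="org.zmsoft.persistent.common.DBManager.DBManagerMapper">
  <!-- ${} injects the raw identifier: DDL cannot use #{} prepared-statement
       placeholders for table names, so the value must be trusted input. -->
  <update id="dropTable"
          parameterType="org.zmsoft.persistent.common.DBManager.DBManagerDBO">
    DROP TABLE IF EXISTS ${tableName}
  </update>
  <update id="createTable"
          parameterType="org.zmsoft.persistent.common.DBManager.DBManagerDBO">
    CREATE TABLE ${tableName} (id BIGINT PRIMARY KEY)
  </update>
</mapper>
```

Mapping DDL through `<update>` is conventional in MyBatis; the returned `int` is whatever the JDBC driver reports for a DDL statement.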
|
from rest_framework import serializers
from .models import Api
class ApiSerializer(serializers.HyperlinkedModelSerializer):
    class Meta:
        model = Api
        fields = ['name', 'description']
|
Opération Turquoise was a French-led military operation in Rwanda in 1994 under the mandate of the United Nations.
Background
On 6 April 1994 Rwandan President Juvénal Habyarimana and Burundian President Cyprien Ntaryamira were assassinated, sparking the 1994 Rwandan Genocide. The United Nations already had a peacekeeping force, the United Nations Assistance Mission for Rwanda (UNAMIR), in Kigali that had been tasked with observing that the Arusha Accords were being carried out. Following the start of the genocide and the deaths of several kidnapped Belgian soldiers, Belgium withdrew its contribution to UNAMIR, which was commanded by Canadian Roméo Dallaire; Dallaire was prohibited from involving the force in the protection of civilians. By late April, several of the nonpermanent members of the United Nations Security Council (UNSC) were trying to convince the major powers to agree to a UNAMIR II. As opposed to UNAMIR, which had a peacekeeping mandate under Chapter VI of the U.N. Charter, UNAMIR II would be authorised under Chapter VII to enable the UN to prevent further harm.
The French had provided the Hutu-dominated Habyarimana government with extensive military and diplomatic support, including a military intervention to save the government during an offensive by the rebel Tutsi-led Rwandan Patriotic Front (RPF) in 1990. Immediately after the genocide began, the RPF began another offensive to overthrow the genocidal government and steadily gained ground. By late June, the RPF controlled much of the country and was nearing a complete victory. RPF units carried out retributive attacks within areas they controlled, but these were not of the same scale and organization as those carried out in the genocide.
Implementation
French parachutists, part of the international military force supporting the Rwandan relief effort, stand guard at the airport.
On 19 June, the French government announced its intention to organize, establish, and maintain a "safe zone" in the south-west of Rwanda. On the brink of defeat and retreat, the genocidaires broadcast the news of an intervention by their allies across the country, with a consequent increase in their confidence and the continuation of their hunt for genocide survivors.[1] The French said the objectives of Opération Turquoise were:
to maintain a presence pending the arrival of the expanded UNAMIR… The objectives assigned to that force would be the same ones assigned to UNAMIR by the Security Council, i.e. contributing to the security and protection of displaced persons, refugees and civilians in danger in Rwanda, by means including the establishment and maintenance, where possible, of safe humanitarian areas.[2]
On 20 June, France sent a draft resolution to the UNSC for authorization of Operation Turquoise under a two-month Chapter VII mandate. After two days of consultations and the personal approval of the U.N. Secretary-General, it was adopted as Resolution 929 (1994) on 22 June, with 10 votes in favour and five abstentions. The "multilateral" force consisted of 2,500 troops, of whom only 32 were from Senegal and the rest French.[3] The equipment included 100 APCs, 10 helicopters, a battery of 120 mm mortars, 4 Jaguar fighter-bombers, 8 Mirage fighters, and reconnaissance aircraft.[4] The helicopters were intended to lay a trail of food, water and medicine. The area selected had the effect of enabling refugees to escape predominantly westward, into eastern Zaire. The zone affected by Operation Turquoise was changed after two members of a French reconnaissance unit were captured by the victorious RPF rebels and were released in exchange for a revision in the area of Operation Turquoise.[5]
There was an evacuation of the population westward, enforced by the Hutu regime, now set to flee from the Tutsi rebels, after it had been made clear the French were only there to provide a "safe zone" rather than assistance in the conflict. Along the way there were roadblocks and checkpoints, and the Tutsis left alive, and even Hutus without ID cards, were killed.[6] The outflow exacerbated the already great numbers of refugees in the region, known as the Great Lakes refugee crisis, spilling out of Rwanda and neighbouring Burundi (itself divided between Hutu and Tutsi), predominantly into Zaire. Approximately 2.1 million people lived in refugee camps in Zaire. The militarization of these camps led to the invasion of Zaire by Rwanda and Uganda, known as the First Congo War.
The area of French influence, known as Zone Turquoise, within the Cyangugu-Kibuye-Gikongoro triangle, was spread across a fifth of the country.[7] Although it was meant to save lives and stop the mass killings, killings did occur. When the Hutu government moved the Radio Télévision Libre des Mille-Collines radio transmitter, a key tool in encouraging Hutus to kill their Tutsi neighbors, into the Zone Turquoise, the French did not seize it. The radio broadcast from Gisenyi, calling on 'you Hutu girls to wash yourselves and put on a good dress to welcome our French allies. The Tutsi girls are all dead, so you have your chance.'[8] The French did not detain the government officials they knew had helped coordinate the genocide. When asked to explain that in the French parliament, the French foreign minister of the time argued that the UN mandate given to the French contained no authorization to investigate or arrest suspected war criminals.[9] Regardless, French President François Mitterrand claimed that the Operation had saved "tens of thousands of lives".[10]
The force left as the mandate of the operation expired on 21 August. The RPF immediately occupied the region, causing another refugee outflow.
Controversy
Opération Turquoise is controversial for two reasons: accusations that it was a failed attempt to prop up the genocidal Hutu regime and that its mandate undermined the UNAMIR.
The RPF, well aware that French assistance to the government had helped blunt their 1990 offensive, opposed the deployment of a French-led force. By early June, the RPF had managed to sweep through the eastern half of the country and move south and west, while besieging Kigali in the center. The advance resulted in a massive refugee outflow, though the Hutu government was also implicated in encouraging the flight (see Great Lakes refugee crisis.) Regardless, the Zone Turquoise was created in the steadily shrinking areas out of RPF control. The RPF was one of many organisations that noted that the French initiative to safeguard the populace was occurring six weeks after it had become apparent mass killings were occurring in Rwanda. On 22 July, French Prime Minister Edouard Balladur addressed the Security Council, stating that France had a "moral duty" to act without delay and that "without swift action, the survival of an entire country was at stake and the stability of a region seriously compromised."[7]
In May 2006, the Paris Court of Appeal assigned six lawsuits filed by victims of the genocide to magistrate Brigitte Reynaud.[11] The charges raised against the French army during Operation Turquoise from June to August 1994 are of "complicity in genocide and/or complicity in crimes against humanity." The victims allege that French soldiers engaged in Operation Turquoise helped Interahamwe militias find their victims and themselves carried out atrocities.[12] The former Rwandan ambassador to France and co-founder of the RPF, Jacques Bihozagara, testified: "Operation Turquoise was aimed only at protecting genocide perpetrators, because the genocide continued even within the Turquoise zone." France has always denied any role in the killing.[13]
UNAMIR Force Commander Dallaire had also opposed the deployment, having sent extensive communication back to U.N. Headquarters that the placement of two U.N.-authorised commands with different mandates and command structures into the same country was problematic. Dallaire was also a strong proponent of strengthening UNAMIR and transitioning it to a Chapter VII mandate, rather than introducing a new organisation. Concern over conflicting mandates led five countries on the UNSC to abstain in the vote approving the force. The UN-sponsored "Report of the Independent Inquiry into the Actions of the UN during the 1994 Genocide in Rwanda" found it "unfortunate that the resources committed by France and other countries to Operation Turquoise could not instead have been put at the disposal of UNAMIR II." On 21 June, Dallaire replaced 42 UNAMIR peacekeepers from Francophone Congo, Senegal and Togo with UN staff from Kenya after the negative reaction of the RPF to Opération Turquoise. Over the two months of the mandate, there were confrontations, and risks of confrontation, between RPF and French-led units around the zone, during which UNAMIR was asked to convey messages between the two. The UN independent inquiry drily noted that this was "a role which must be considered awkward to say the least."[2]
Notes on the text |
She is the raunchy Australian model who reportedly enjoyed a 'fling' with Justin Bieber in 2016.
But on Tuesday Sahara Ray appeared to have a new admirer as she shared suggestive photos of herself partying at Coachella with her male fashion designer pal Daf Orlovsky.
Clad in raunchy G-string bottoms and a crop-top, the 26-year-old left little to the imagination as Daf playfully groped her derriere on Instagram.
Sahara certainly wasn't shy to reveal a generous amount of skin as she peered seductively over her shoulder and arched her rear toward the camera.
Meanwhile, Instagram star Daf opted for a more low-key look, combining a grey T-shirt with a pair of black tracksuit bottoms.
In another photo, the duo are seen cosying up as they pose together in front of the rugged Californian terrain.
Daf placed a protective hand on Sahara's lower stomach as she showed off her hourglass frame and washboard abs.
As well as posing for two intimate photos in the rural setting, the pair shared a passionate embrace in a short clip filmed next to a camper van.
Over the weekend, Sahara flashed the flesh again in a double denim two-piece by Australian label, I AM GIA. |
/****************************************************************************
****************************************************************************
***
*** This header was automatically generated from a Linux kernel header
*** of the same name, to make information necessary for userspace to
*** call into the kernel available to libc. It contains only constants,
*** structures, and macros generated from the original header, and thus,
*** contains no copyrightable information.
***
****************************************************************************
****************************************************************************/
#ifndef _LINUX_GENHD_H
#define _LINUX_GENHD_H
#include <linux/types.h>
enum {
DOS_EXTENDED_PARTITION = 5,
LINUX_EXTENDED_PARTITION = 0x85,
WIN98_EXTENDED_PARTITION = 0x0f,
LINUX_SWAP_PARTITION = 0x82,
LINUX_RAID_PARTITION = 0xfd,
SOLARIS_X86_PARTITION = LINUX_SWAP_PARTITION,
NEW_SOLARIS_X86_PARTITION = 0xbf,
DM6_AUX1PARTITION = 0x51,
DM6_AUX3PARTITION = 0x53,
DM6_PARTITION = 0x54,
EZD_PARTITION = 0x55,
FREEBSD_PARTITION = 0xa5,
OPENBSD_PARTITION = 0xa6,
NETBSD_PARTITION = 0xa9,
BSDI_PARTITION = 0xb7,
MINIX_PARTITION = 0x81,
UNIXWARE_PARTITION = 0x63,
};
struct partition {
unsigned char boot_ind;
unsigned char head;
unsigned char sector;
unsigned char cyl;
unsigned char sys_ind;
unsigned char end_head;
unsigned char end_sector;
unsigned char end_cyl;
unsigned int start_sect;
unsigned int nr_sects;
} __attribute__((packed));
#endif
|
Temperance, temples and colonies: Reading the book of Haggai in Saskatoon In this paper, I situate myself as a reader reading from the former temperance colony of Saskatoon. Taking as my starting point John Kessler's heuristic device of a Persian-period "charter group", I ask how my situation in Saskatoon affects how I read the book of Haggai, and how my reading of Haggai affects my understanding of Saskatoon. I conclude with some remarks on the possibility of examining my readings typologically; that is, seeing my readings as "types" for Canadian scholars abroad and in Canada studying the texts and text-worlds of Persian-period Yehud. |
Homologous temperature dependence of the yield stress of icosahedral quasicrystals and its implication Icosahedral quasicrystals are commonly plastically deformable at high temperatures, but the temperature ranges of the plastic deformation differ largely among different types of the quasicrystal. However, when we convert the upper yield stress vs. temperature relations for various icosahedral quasicrystals into the non-dimensional stress normalised by Young's modulus E vs. the non-dimensional temperature normalised by Ed̄³/kB (d̄: the average atomic diameter, kB: the Boltzmann constant), the data fall around a universal curve. This homologous relation indicates that the deformation mechanism is common to all types of icosahedral quasicrystals. Assuming that the plasticity is carried by a dislocation climb process, it is concluded that about half of the activation enthalpy of dislocation climb is due to the jog-pair enthalpy and the other half to the enthalpy of vacancy diffusion, both of which are a function of Ed̄³. |
AGE-DEPENDENT SALT HYPERTENSION IN BRATTLEBORO RATS: A HEMODYNAMIC ANALYSIS The hemodynamic effects of 0.6% saline, consumed either from youth (4th week of age) or from adulthood (12th week of age), were studied in unanesthetized, unoperated, and uninephrectomized homozygous female Brattleboro rats. Long-term saline drinking induced a general decrease of blood pressure in unoperated rats which was more pronounced in rats drinking it from youth. The relation of low systemic resistance and high cardiac output (observed at the age of 10-15 weeks) to the high mortality of these rats was discussed. Two phases were recognized in the development of salt hypertension in uninephrectomized rats drinking saline from youth. The increased systemic resistance played a major role during the early phase (13-15 weeks), while changes of body fluids as well as altered arterial compliance contributed to the elevation of systolic blood pressure in the late phase of salt hypertension (20-30 weeks of age). In uninephrectomized rats drinking saline from adulthood, the late blood pressure response was only slightly attenuated in comparison with uninephrectomized rats drinking saline from youth. The absence of increased arterial rigidity in the former group was the only major hemodynamic difference between these two groups of uninephrectomized rats aged 20-30 weeks. |
Evidence for strong evolution of the cosmic star formation density at high redshift Deep HST/ACS and VLT/ISAAC data of the GOODS-South field were used to look for high-redshift galaxies in the rest-frame UV wavelength range and to study the evolution of the cosmic star-formation density at z~7. The GOODS-South area was surveyed down to a limiting magnitude of about (J+Ks)=25.5, looking for drop-out objects in the z ACS filter. The large sampled area would allow for the detection of galaxies which are 20 times less numerous and 1-2 magnitudes brighter than in similar studies using HST/NICMOS near-IR data. Two objects were initially selected as promising candidates of galaxies at z~7, but have subsequently been dismissed and identified as Galactic brown dwarfs through a detailed analysis of their morphology and Spitzer colors, as well as through spectroscopic information. As a consequence, we conclude that there are no galaxies at z~7 down to our limiting magnitude in the field we investigated. Our non-detection of galaxies at z~7 provides clear evidence for a strong evolution of the luminosity function between z=6 and z=7, i.e. over a time interval of only ~170 Myr. Our constraints also provide evidence for a significant decline of the total star formation rate at z=7, which must be less than 40% of that at z=3 and 40-80% of that at z=6. We also derive an upper limit to the ionizing flux at z=7, which is only marginally consistent with that required to completely ionize the Universe. Introduction The sensitivity of new-generation instruments, coupled with 8-10-m class ground-based telescopes and with the HST, allowed impressive progress in observational cosmology during the past few years. Several surveys aimed at searching for high-redshift galaxies have obtained large samples of objects at increasing redshifts: z∼1, z∼3 (Steidel et al. 2003), z∼4, z∼5, and z∼6, up to the currently accepted record of z∼6.6.
Send offprint requests to: F. Mannucci, e-mail: filippo@arcetri.astro.it
⋆ Based on observations collected at the European Southern Observatory, Chile, proposals 074.A-0233 and 076.A-0384
Observations are approaching the interesting redshift range between z=6 and z=10, where current cosmological models expect to find the end of the reionization period and the "starting point" of galaxy evolution (e.g., Stiavelli, Fall & Panagia 2004). The results from all these surveys have had a tremendous impact on our understanding of the cosmic history of star formation and of the reionization epoch (Gunn & Peterson 1965). As an example, the detection of high-redshift quasars by Fan et al. allowed the measurement of the fraction of neutral gas at z∼6. The discovery that this fraction is low but significantly non-zero (nearly the same as or larger than 1%) puts severe constraints on reionization models. On the contrary, the detection of a high opacity due to ionized hydrogen in the WMAP data points toward a high reionization redshift (z∼11). Thus, detecting objects at even higher redshifts could put strong constraints on the properties of the intergalactic medium at those redshifts (Gnedin & Prada 2004). Up to a redshift of about 6, galaxies can be detected by using optical data alone, as the Lyman break occurs at < 0.85 μm and the continuum above Lyα can be sampled, at least, by the z 850 HST/ACS filter. This allowed for the detection of a large number of objects at z=6, which provides evidence of strong evolution of the Luminosity Function (LF) of the LBGs between z=6 and z=3. This is described as the brightening, with increasing cosmic age, of the typical luminosity (L*) by about 0.7 mag, or the increase in the comoving density (φ*) by a factor of 6. At redshifts higher than 6 the use of near-IR images is mandatory, which makes detection much more difficult. Deep J-band images from HST/NICMOS exist, but their field-of-view is limited to a few sq.arcmin.
Larger fields can be observed by ground-based telescopes, but at the expense of worse PSFs (about 0.5-1.0″ FWHM instead of 0.4″ for NICMOS/NIC3 and 0.1″ for HST/ACS) and brighter detection limits. Bouwens et al. (2004b) used one of these deep NICMOS fields in the Hubble UltraDeep Field (HUDF) to detect z′-dropout objects. They detected 5 objects in a 5.7 sq.arcmin field down to a limiting mag of H AB ∼27.5, showing the possible existence of a reduction in the Star Formation Density (SFD) at z>6. Recently, the same authors enlarged this search to ∼19 sq.arcmin using new NICMOS data and improved data reduction. They detected four possible z∼7 objects, while 17 were expected based on the z=6 LF. Also, they found that at least 2 of the 5 sources in their previous study were spurious. These new results point toward the existence of a strong reduction in the LF with increasing redshift at z>6. In contrast, Mobasher et al., looking for J-dropout galaxies in the HUDF, detected one object that could be a massive post-starburst galaxy at z∼6.5, formed at high redshift, z>9 (though lower redshifts have also been suggested for this object). Deep near-infrared data were also used by Richard et al. to examine two lensing clusters and detect eight optical dropouts showing Spectral Energy Distributions (SEDs) compatible with high-redshift galaxies. After correcting for lensing amplification, they derived an SFD well in excess of the one at z=3, hinting at large amounts of star formation activity during the first Gyr of the universe. By extrapolating the LF observed at z∼6 toward higher redshifts, it is possible to see that those studies based on small-field, deep near-IR data are able to detect only galaxies that are much more numerous, and therefore fainter, than L* galaxies.
Their faint magnitudes imply that their redshifts cannot be spectroscopically confirmed with the current generation of telescopes, and it is difficult to distinguish these candidates from Galactic brown dwarfs with the same colors. As a consequence, these data cannot be used to confidently put strong constraints on the amount of evolution of the LF at z>6: if all the candidates are real star-forming galaxies at the proposed redshift, the SFD at z=7 and z=10 could be similar to that at z=6. If, on the contrary, none of them are real, the upper limits of the SFD would imply a decrease of the SFD toward high redshifts. As a consequence, the evolution of the SFD at z>6 is not yet well constrained. The knowledge of this evolution is fundamental for a good understanding of the primeval universe, for example, of the sources of reionization. Here we present a study aimed at detecting bright z=7 objects in a large (unlensed) field, in order to measure the cosmic star-formation density at this redshift. This is important for two main reasons: first, comparing these results for bright objects with those by Bouwens et al. (2004b) and for fainter z′-dropouts will allow us to study different parts of the LF. This is needed, for example, to distinguish between luminosity and density evolution and to compare the result with the dark matter distribution predicted by cosmological models. Second, the detection of relatively bright objects would allow a more complete study of the properties of these galaxies in terms of morphology, spectroscopy, and SEDs. We will show that the absence of z∼7 objects with M < −21.4 in the survey field means that the galaxy luminosity function evolved significantly in the short time (about 170 Myr) between z=7 and z=6. The epoch at z=7, about 750 Myr after the Big Bang, can therefore be considered the beginning of the bulk of star formation.
Observations and object catalog The GOODS-South field, centered on the Chandra Deep Field South (CDFS), is a region of about 10×15 arcmin that is the subject of deep observations by many telescopes. HST observed it with ACS using the broad-band filters b 435, v 606, i 775, and z 850. Most of the GOODS-South was observed in J, H, and Ks with the VLT/ISAAC (Vandame et al., in preparation). Many other deep observations are available for the GOODS-South field. The Spitzer satellite obtained images of this field in all its bands, in particular in the IRAC bands 1 and 2, corresponding to 3.6 and 4.5 μm. The main catalog is based on ESO/VLT J and Ks data covering 141 sq.arcmin. The typical exposure time is 3.5h in J and 6h in K, with a seeing of about 0.45″ FWHM (see also Fig. 4). We used the J+Ks sum image to build the main object catalog: actively star-forming galaxies are expected to have a flat (in fν units) spectrum between J and K, and therefore the use of the combined image is expected to improve the object detectability. The program Sextractor (Bertin & Arnouts 1996) was used to extract the object catalog. We selected objects having flux above 1.2 times the RMS of the sky over a 0.22 sq.arcsec area (10 contiguous pixels). The parts of the images near the edges were excluded, as the noise increases because of telescope nodding. The final covered area is 133 sq.arcmin. The average 6σ magnitude limit inside a 1″ aperture is 25.5, estimated from the statistics of the sky noise. This agrees well with the histogram of the detected sources as a function of the (J+Ks) magnitude shown in Fig. 1, showing that this magnitude corresponds to 95% completeness. The main catalog comprises about 11,000 objects. Fig. 2. Color-color selection diagram for z>7 galaxies. The solid lines show the variation in the colors with redshift expected for local galaxies and galaxy models by Bruzual & Charlot.
Thin and thick lines show the expected colors for galaxies below and above z=7, respectively. The three groups of lines correspond to three different amounts of extinction (E(B-V)=0.0, 0.3, and 0.6, from left to right), using the Cardelli et al. extinction law. Stars show the expected positions of Galactic brown dwarfs (ranging from T8 to L1, from left to right). Small dots show the galaxies detected in the GOODS-South in all three bands, while the arrows show the positions of the objects undetected in K. The dashed line shows the color threshold. Above this threshold, only objects with no counterparts either in b 435, v 606, and i 775 or in the sum b 435 +v 606 +i 775 image are shown. Large solid dots show the two objects discussed in Sect. 4. The labels show their entry numbers in our catalog. The colors of the objects were computed from the photometry inside an aperture of 1″ diameter for the HST/ACS and VLT/ISAAC images. Candidate selection As discussed above, galaxies at z∼7 can be selected as z′-dropouts, i.e. objects with very red z'-J colors, indicating the presence of a break, and with blue J-K colors, which disentangle high-redshift star-forming galaxies from reddened foreground galaxies. This is illustrated by the z'-J versus J-K color diagram in Fig. 2, where we plot the colors expected for different classes of galaxies affected by various amounts of dust reddening (from Bruzual & Charlot 2003) along with the data from our sample. The figure shows that objects with both red z'-J and red J-K colors are likely to be reddened galaxies at intermediate redshift. In Fig. 2 the thick tracks indicate the colors expected for star-forming galaxies at z>7, which are indeed separated from reddened galaxies on this diagram. Another class of possible interlopers is Galactic brown dwarfs.
Indeed, these stars can have blue J-K colors and strong methane absorption shortward of ∼1 μm, strongly suppressing the emission in the z' band and thus mimicking z'-dropouts of high-redshift galaxies. The coldest brown dwarfs are also undetected in all other optical bands. This is illustrated in Fig. 2, where stars show the colors of brown dwarfs obtained by convolving the spectra in Testi et al. with the ACS and ISAAC filter transmission curves. These colors extend into the region of the diagram used for the selection of z'-dropouts. Summarizing, the use of near-IR colors alone is enough to exclude low/intermediate-redshift galaxies, but not to distinguish true z∼7 star-forming galaxies from high-z QSOs or Galactic brown dwarfs. We selected objects that are undetected (< 1σ) in i 775 and in all bluer bands and that have z′−J > 0.9 and J − Ks < 1.2 × (z′ − J) − 0.28, as shown in Figure 2. This region is larger than expected for galaxies at z>7, so it is possible that a number of lower-redshift interlopers are present in the catalog. This choice is motivated by not having detected any reliable candidate (see below) and is therefore needed to exclude the possibility of high-redshift galaxies not being selected because of colors just outside an exceedingly narrow selection area. Despite the common name of "dropouts", we do not require that our candidates are undetected in z 850. In fact, z=7 sources can be quite luminous in this band for two possible reasons: 1) although the z 850 cut-on filter convolved with the ACS detector efficiency has a response peaking at ∼8800 Å, its red wing gathers some flux up to 1.05 μm, and as a consequence, some flux at or above the Lyα line could be transmitted by the filter (indeed this is predicted by the tracks of galaxies at z>7 in Figure 2); 2) if the reionization epoch is at z>7 (as the recent WMAP results and the spectrum of the galaxy at z=6.56 suggest), then the Lyα forest may still have some transmission, similar to what is observed in QSOs at z∼6.
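The filter arithmetic behind point (1) can be checked directly; a small sketch (the rest-frame Lyα wavelength is the standard 1216 Å, and the ~10500 Å red edge of the z 850 response is an approximation we assume for illustration):

```python
LYA_REST_A = 1216.0          # rest-frame Ly-alpha wavelength, Angstrom
Z850_RED_EDGE_A = 10500.0    # assumed red cut-off of the ACS z850 response

def lya_observed_angstrom(z):
    """Observed Ly-alpha wavelength at redshift z."""
    return (1.0 + z) * LYA_REST_A

def is_complete_z850_dropout(z):
    """True once the Ly-alpha break redshifts past the z850 red edge."""
    return lya_observed_angstrom(z) > Z850_RED_EDGE_A
```

At z = 7 the break sits near 9728 Å, still inside the filter's red wing, which is why candidates need not be completely undetected in z 850; only beyond z ≈ 7.6 does the break move past the assumed ~10500 Å edge.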
An initial catalog was constructed by using the optical photometry from the public GOODS catalog with the aperture photometry computed by Sextractor. This produced the selection of a few tens of z′-dropouts. These objects were checked by eye to remove spurious detections or objects whose photometry is seriously affected by bright nearby objects. The photometry of the 6 objects that passed this check was measured again by using the IRAF/PHOT program in the same 1″ aperture. Using a local sky computed in an annular region around the object, this program is expected to provide better photometry of faint sources. For 4 of these objects, the new z′-J color is much bluer, below the detection threshold. This is an indication that Sextractor overestimated the local sky near these sources. Two objects remain with colors that are compatible with both star-forming galaxies at z>7 and Galactic brown dwarfs. The two candidates The SEDs of the two candidates are overall compatible with being high-redshift star-forming galaxies. The 9 available photometric bands, from b 435 to Spitzer/IRAC 4.5 μm, can be used to derive photometric redshifts. These two objects are contained in the GOODS-MUSIC catalog, object 4409 with ID 7004 and object 6968 with ID 11002, with photometric redshifts of 6.91 and 6.93, respectively. Both objects have already been selected as i-dropout objects by other groups. Object 4419 was selected by Stanway et al. (object SBM03#5) and further studied by Bunker et al. (ID number 2140), while object 6968 was selected by Eyles et al. (ID 33 12465). Both objects were identified as Galactic stars on the basis of the compact ACS morphology and of the zJK colors. In the following we use the morphologies, Spitzer data, and spectra to further investigate the nature of these sources. Both candidates show very blue J-K colors and have z′-J∼1.9.
This blue J−K color could be due to starburst galaxies or faint AGNs at z∼7 with a UV spectral slope β (with fλ ∝ λ^β at λrest = 1500 Å) more negative than −2. These colors are also typical of Galactic brown dwarf stars of type T6–T8, as can be seen in Fig. 2. In this section we present their properties in terms of morphology in the optical and near-infrared images (Sect. 4.1), mid-infrared colors (Sect. 4.2), and optical spectra (Sect. 4.3). Morphologies: Both objects are detected in z850 and J, and we studied their morphology in both bands. The HST/ACS z850 images have a much higher resolution (about 0.1″ FWHM), but both objects are much fainter in this band; in contrast, the VLT/ISAAC J-band images have a lower resolution (about 0.45″), but the higher signal-to-noise ratio per pixel allows a more accurate study. In both cases we compared the object luminosity profile with the PSF derived from a few nearby point sources. As shown in Fig. 4, both objects are consistent with being point sources, as no significant differences from the local PSF are seen. Spitzer colors: The near- to mid-IR colors can be used to investigate the nature of these objects. Young starburst galaxies at z∼7 are expected to be intrinsically quite blue and to show a flat spectrum above the Lyman limit. We computed the photometry of our candidates in the Spitzer/IRAC 3.6 µm and 4.5 µm images using 3″ apertures, in order to have aperture corrections similar to those in the K band, i.e., about 0.42 mag (see, for example, ), and to compare the results with the near-IR data. To compute the photometry of object 4419, the contribution of the nearby bright galaxy was estimated by convolving the z850-band image with the Spitzer PSF; this introduces a large additional uncertainty. Figure 5 shows the J−K vs.
K − 4.5 µm and 3.6 µm − 4.5 µm color-color diagrams, comparing the colors of both candidates with those of Galactic stars (derived from the brown dwarf models by ), of starburst galaxies at z∼7 (shown for UV spectral slopes β = −2.3, −2.0, −1.7, −1.3, and −1.1), and of high-redshift type 1 AGNs (Seyfert 1s and QSOs at redshifts between 4 and 9 in steps of 0.5; Francis & Koratkar, 1995). It is evident that these diagrams can be used to distinguish compact galaxies from stars. Even if the Spitzer flux of 4419 is uncertain, because it is affected by the nearby bright galaxy, both objects show colors compatible with being Galactic stars and fall in the region occupied by the brown dwarfs. The flux in the Spitzer bands of object 6968 was already discussed by Eyles et al., who also concluded that it is compatible with being a Galactic brown dwarf. Optical spectra: The two candidates were observed for 5.0 hr with the FORS2 spectrograph on the ESO VLT in October 2005. The multi-slit facility allowed us to observe the two candidates together with other objects having either very faint counterparts in the i775 band or colors above the selection threshold when measured by SExtractor (see Sect. 3). We used 1″-wide slits and the G600z grism, providing a dispersion of 1.6 Å/pix and a resolution of about λ/∆λ = 1400. The covered wavelength range was between 0.75 and 1.05 µm. In all cases, no significant emission line or spectral break was detected in this wavelength range. It should be noted that an optical spectrum of object 4419 had already been obtained by Stanway et al. (2004b) with DEIMOS on the Keck-II telescope, but no spectral features were detected. The emission-line detection limit was computed from the background noise.
We obtain a 5σ limiting flux for unresolved sources of about 3 × 10−18 erg s−1 cm−2 in the wavelength range 0.900–0.995 µm, corresponding to a redshift range for the Lyα line of 6.4<z<7.2. This limit holds for the part of the wavelength range that is not covered by bright sky lines, which corresponds to about 88% of the total range. Such a sensitivity would be more than adequate to detect the Lyα emission from star-forming galaxies at these redshifts. This flux can be estimated by assuming that the properties of the i-dropout galaxies at z∼6 also hold at these higher redshifts. The spectroscopic observations of i-dropouts () have shown that color-selected galaxies have Lyα lines with rest-frame equivalent widths (EWs) of 20–30 Å, corresponding to 2–3 × 10−17 erg s−1 cm−2 for the fainter of the two candidates, well above the detection limit. It should be noted that line-selected high-redshift galaxies show much larger EWs, on the order of 200 Å (e.g., Malhotra & Rhoads 2002). Similar values are derived for high-redshift AGNs: scaling the Lyα fluxes of the z∼6 quasars observed by Fan et al. and Maiolino et al. to the continuum luminosity of the current sample, we would obtain fluxes on the order of 1–4 × 10−17 erg cm−2 s−1. In conclusion, all four of the investigations presented above confirm the identification of both sources as Galactic stars. Their luminosity is also consistent with this identification: if they are brown dwarfs of type T6 to T8, their absolute magnitudes would be between M(J)=15.5 and M(J)=16.5 (), placing them between 200 and 700 pc from us. As a consequence, no z=7 object is present in the survey field above our detection limit. Evolution of the luminosity function and of the cosmic star-formation density: The detection of no z>7 objects in the field can be used to place an upper limit on the LF of these objects. To do this we need to accurately estimate the volume sampled by our survey.
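The 200–700 pc range follows directly from the distance modulus; this sketch uses illustrative apparent J magnitudes chosen to bracket the quoted range, not the measured values from Table 1.

```python
def distance_pc(m_app, M_abs):
    """Distance from the distance modulus m - M = 5 log10(d / 10 pc)."""
    return 10 ** ((m_app - M_abs + 5.0) / 5.0)

# Illustrative numbers (assumed, not from Table 1): a T8 dwarf with
# M(J) = 16.5 seen at J = 23.0 sits at ~200 pc; a T6 dwarf with
# M(J) = 15.5 seen at J = 24.7 sits at ~700 pc.
print(round(distance_pc(23.0, 16.5)))   # ~200
print(round(distance_pc(24.7, 15.5)))   # ~692
```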
Two effects make this volume depend on the magnitude of the objects. First, fainter objects are detected in the (J+Ks) image only at lower redshifts; second, only lower limits on the z850 − J color can be measured for objects selected in the (J+Ks) image and undetected in z850. This lower limit is above the selection threshold only for objects that are bright enough in the J band, while fainter objects can have a color limit below threshold. The effective sampled volume can be computed as a function of the absolute magnitude of the objects as Veff(M) = ∫ p(M, z) (dV/dz) dz (Eq. 1), where p(M, z) is the probability of detecting and selecting an object of absolute magnitude M at redshift z according to the selection in Fig. 2, and dV/dz is the comoving volume per unit solid angle at redshift z. The detection probability p(M, z) was obtained by computing the expected apparent magnitudes and colors of starburst galaxies (modeled as fλ ∝ λ^β) of varying intrinsic luminosity and spectral slope, placed at different redshifts. Figure 6 (left panel) shows the results of this computation: the redshift sensitivity of our selection method starts at about z=6.7, peaks at z=7, and decreases slowly toward higher redshifts. By integrating this function over redshift, we obtain the results in the right panel of Fig. 6. The limiting apparent magnitude (∼25.5, see Sect. 2) corresponds, at z=7, to an absolute magnitude M=−21.4. Using the standard relation between UV luminosity and SFR (), this value corresponds to an SFR of ∼20 M⊙/yr, which is adequate for sampling the brighter part of the LF of LBGs at any redshift. As an example, Stanway et al. found 9 i-dropout galaxies at z∼6 with typical SFRs of 20–30 M⊙/yr, while the SFR of L* galaxies in () is 9 M⊙/yr. As a reference, the SFR corresponding to an L* galaxy in the z∼3 sample by Steidel et al. is about 15–20 M⊙/yr (Giavalisco 2002; ). This volume can be used to estimate an upper limit to the density of objects in this redshift range.
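A numeric sketch of the effective-volume integral in a flat ΛCDM cosmology. The cosmological parameters (H0 = 70, Ωm = 0.3) are assumed values, and the top-hat p(z) is a toy stand-in for the true selection function of Fig. 6, which depends on magnitude as well as redshift.

```python
import math

# Assumed flat LambdaCDM parameters (not taken from the text).
H0, OM, C_KM_S = 70.0, 0.3, 2.99792458e5

def E(z):
    """Dimensionless Hubble parameter H(z)/H0."""
    return math.sqrt(OM * (1.0 + z) ** 3 + (1.0 - OM))

def comoving_distance_mpc(z, n=1000):
    """Line-of-sight comoving distance, midpoint-rule integration."""
    dz = z / n
    return (C_KM_S / H0) * sum(dz / E((i + 0.5) * dz) for i in range(n))

def dV_dz(z):
    """Comoving volume per unit redshift per steradian, in Mpc^3/sr."""
    dc = comoving_distance_mpc(z)
    return (C_KM_S / H0) * dc ** 2 / E(z)

def p_toy(z):
    """Toy stand-in for p(M, z): the real function starts at z~6.7,
    peaks at z=7, and declines slowly toward higher redshift."""
    return 1.0 if 6.7 <= z <= 7.4 else 0.0

def v_eff(zmin=6.0, zmax=8.0, n=200):
    """Effective volume per steradian: integral of p(z) * dV/dz."""
    dz = (zmax - zmin) / n
    zs = (zmin + (i + 0.5) * dz for i in range(n))
    return sum(p_toy(z) * dV_dz(z) * dz for z in zs)

print("V_eff per steradian: %.3e Mpc^3" % v_eff())
```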
The limiting density for a given confidence level (CL) is given by Φlim(M) = N(CL)/Veff(M), where Veff(M) is given by Eq. 1 and N(CL) is the maximum number of objects in the field corresponding to the limiting density at the chosen CL (2.3 objects for CL=90%). Figure 7 shows the resulting upper limits on the density of objects, compared with the LF of the LBGs at z=6 from () (a Schechter function with parameters M* = −20.25, α = −1.73, and φ* = 0.00202 Mpc−3) and at z=3 obtained by Steidel et al. The LF at z=6 is still subject to large uncertainties because it is based on a compilation of 500 i-dropouts coming from various fields with different limiting magnitudes. As a consequence, cosmic variance could have different effects on different parts of the LF. The faint-end slope is particularly uncertain, as it is based on faint objects observed at low signal-to-noise: at this signal level, many interlopers at lower redshift could be present in the sample, with the uncertainty on the i − z color scattering galaxies into the selection region and hence inflating the number of faint sources. It should also be noted that the VVDS collaboration (Le ;) studied the LF of the high-z galaxy population at 3<z<4 using a purely magnitude-selected spectroscopic sample and found an unexpectedly large number of very bright galaxies, pointing toward the possibility that the color selection could be affected by large incompleteness. Despite these uncertainties, it is evident that our upper limits indicate an evolution of the LF from z=6, even if only 170 Myr have passed since then. In the no-evolution case, in fact, we would expect to detect 5.5 objects, and the Poisson probability of detecting none is smaller than 0.5%. This is confirmed by the results in (), who detected 7 objects at z=6 in the same luminosity range in the same field, in good agreement with our expectations. Using a CL of 90% and assuming density evolution of the LF, we find that the normalization of the LF at z=7 must be at most 40% of that at z=6.
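Both Poisson numbers quoted here are easy to check: the 2.3-object limit at CL = 90% follows from requiring that zero detections remain compatible, P(0 | N) = exp(−N) = 1 − CL, and the no-evolution probability follows from the 5.5-object expectation.

```python
import math

def n_max_for_zero_detections(cl):
    """Largest Poisson expectation N such that observing zero objects
    is still compatible at confidence level cl: exp(-N) = 1 - cl."""
    return -math.log(1.0 - cl)

print(round(n_max_for_zero_detections(0.90), 2))   # 2.3 objects at 90% CL

# No-evolution case: 5.5 objects expected, none observed.
p_zero = math.exp(-5.5)
print("P(0 | 5.5) = %.4f" % p_zero)   # below 0.5%, as quoted
```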
The evolution of the LF from z=7 to z=6 could also be in luminosity rather than density, corresponding to a brightening of the objects with cosmic time rather than to an increase in their number. This could be a better approximation of the real evolution if the systems at higher redshift tend to be smaller or less active than the corresponding systems at lower redshift. In this case, the minimum evolution of the average luminosity compatible with our upper limits at CL=90% is about 0.22 mag. This evolution corresponds to a reduction of 20% in the total SFD, obtained by integrating the LF assuming a constant faint-end slope. Very similar results are obtained if the LF at z=6 by Bunker et al. is used as a reference point. By analyzing a much smaller sample of LBGs, these authors found that the shift in the LF between z=3 and z=6 is consistent with density rather than luminosity evolution. Using this determination, the ratio SFD(z=7)/SFD(z=6) is about 0.3 for luminosity evolution and 0.8 for density evolution. Our limits can also be compared with the LF at z=3 (). Assuming luminosity evolution we obtain a shift of L* of about 0.9 mag, corresponding to SFD(z=7)/SFD(z=3)=0.42, a reduction of more than a factor of 2; a pure density evolution would require SFD(z=7)/SFD(z=3)=0.05.
Table 2. Values of the UV luminosity density and SFD at z=7; the limits are at the 90% confidence level.
A strong reduction in the LF from z=6 to z=7 is consistent with the tentative detection of one z'-dropout LBG at z=7 in the HUDF by Bouwens et al. (2004b) as revised by (). The evolution of the star-formation activity: These findings on the relative evolution of the SFD can be compared with the results of other studies at lower and higher redshifts. The UV luminosity density ρ1500 can be converted to SFD by using the Madau et al. ratio.
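The magnitude shifts quoted here translate into luminosity ratios through 10^(−0.4 ΔM). Note that the exact SFD ratios in the text come from integrating the full LF, so the pure scaling below only approximates them.

```python
def flux_ratio(delta_mag):
    """Luminosity (flux) ratio corresponding to a magnitude shift."""
    return 10 ** (-0.4 * delta_mag)

# 0.22 mag of luminosity evolution -> roughly a 20% SFD reduction.
print(round(1 - flux_ratio(0.22), 2))
# A 0.9 mag shift of L* between z=7 and z=3 gives ~0.44, close to the
# quoted SFD(z=7)/SFD(z=3) = 0.42 obtained from the full LF integral.
print(round(flux_ratio(0.9), 2))
```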
Our upper limits on the density of galaxies at z=7 with (J+Ks)<25.5 can be directly converted into an upper limit on the SFD contained in galaxies with M1500 < −21.44, which turns out to be about <30% of that at z=6 and <5% of that at z=3. As a consequence, our data show a strong reduction in SF activity in bright galaxies. In Fig. 8 we show the values of the SFD obtained by integrating the observed LFs above a given luminosity threshold. The most interesting quantity, the total SFD at each redshift, would be obtained by using a very low luminosity threshold. Unfortunately, this would introduce large uncertainties due to the unobserved part of the LF at low luminosities. For example, Steidel et al. observe their LBGs at z=3 down to ∼0.1L*, while for their value of the faint-end slope of the LF (α = −1.6) about half of the total SFD takes place in galaxies below this limit. This implies that a correction of about a factor of two is needed to obtain the total SFD from the observed part of the LF. For α ≤ −2.0 the luminosity integral of the LF no longer converges, and the correction becomes larger for more negative values of α: it is 2.4 for α = −1.73 (as in ) and even 6 for α = −1.9, the most negative value of α compatible with the i-dropouts in (). To avoid this additional uncertainty, it is common to refer to the SFD derived from galaxies that were observed directly or by using a small extrapolation of the LF. For the value at z=7 we plot the average between the two extremes of pure luminosity and pure density evolution, while the upper value corresponds to pure luminosity evolution. The two panels of Fig. 8 show the SFD obtained by integrating the observed LFs with two different lower limits. In the upper panel we show the results obtained when considering, at any redshift, the same range of absolute magnitudes (M < −19.32), i.e., limiting the integration to luminosities above 20% of the L* magnitude derived by Steidel et al. for their LBGs at z=3 (L > 0.2L*(z = 3)).
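For a Schechter function, the fraction of the luminosity density above a threshold x = Lmin/L* is the regularized upper incomplete gamma function Q(α + 2, x), so the correction factors quoted above can be reproduced directly. A sketch using SciPy:

```python
from scipy.special import gammaincc  # regularized upper incomplete gamma

def correction_factor(alpha, x_min):
    """Factor converting the SFD observed above L_min = x_min * L* into
    the total SFD, for a Schechter LF with faint-end slope alpha.
    The luminosity-weighted Schechter integral from x to infinity is
    Gamma(alpha + 2, x), so the observed fraction is Q(alpha + 2, x)."""
    return 1.0 / gammaincc(alpha + 2.0, x_min)

print(round(correction_factor(-1.6, 0.1), 2))   # ~1.8: about half below 0.1 L*
print(round(correction_factor(-1.73, 0.1), 1))  # ~2.4, as quoted
print(round(correction_factor(-1.9, 0.1), 1))   # ~6, as quoted
```

For α ≤ −2 the gamma-function argument α + 2 is no longer positive and the integral diverges, which is exactly the non-convergence noted in the text.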
The derived SFD is directly related to the total SFD in the case of pure density evolution, as the fraction of SFD from galaxies below the threshold would be constant. The use of absolute limits of integration is very common (see, for example, ). The resulting SFD appears to vary rapidly, increasing by more than a factor of 30 from z=0.3 to z=3 and then decreasing by a factor of 2.2 to z=6 and ∼4 to z=7. In the case of luminosity evolution, the use of a constant limit of integration amounts to considering a variable fraction of the LF. For example, Arnouts et al. measured the UV LF at low redshift and found a strong luminosity evolution, with M* = −18.05 at z=0.055 and M* = −20.11 at z=1.0. In this case, at low redshift the limit of integration used above (M < −19.32) is even brighter than L*, and only a small fraction of the LF is integrated to obtain a value of the SFD. This is why the upper panel of Fig. 8 shows such a strong evolution at low redshift. In the lower panel of Fig. 8 we use a variable limit of integration, set to 0.2L*(z) at each redshift. This is more suitable for reproducing the total cosmic SFD, as luminosity evolution appears to dominate both at low and at high redshifts. In this case, the obtained evolution is much milder, with an increase of a factor of 5 from z=0.3 to z=3 and a decrease of ∼1.4 to z=6 and ∼2.5 to z=7. The resulting values of the UV luminosity density and of the SFD are listed in Table 2 for both limits of integration. Discussion: The search for galaxies at even higher redshift, z>8, by looking for J-dropout galaxies faces another problem. The absence of deep enough data at wavelengths longer than Ks makes it impossible to select galaxies with the standard color-color technique, and the selection relies on the J−Ks color alone. As a consequence, this technique is prone to the presence of many interlopers, and the constraints on the SFD at these redshifts are correspondingly weaker. Bouwens et al.
looked for z∼10 galaxies by selecting J-dropouts in a deep HST/NICMOS field down to H∼28. They detected 3 candidates, only one of which is considered reliable, and it is currently not possible to investigate the nature of these objects in greater detail. This prevents the result from placing strong constraints on the SFD at z=10, which is only constrained not to be larger than at z=6; our results are broadly consistent with these limits. On the contrary, our results are not consistent with the high value of the SFD derived at z=6–10 by Richard et al., who measure a value of the SFD higher than at z=3, but who also warn against measuring the SFD by using strongly clustered fields.
Fig. 7. The limits to the density of z=7 galaxies (connected downward arrows) are compared with the LFs of LBGs at z=3 (dashed line, ) and z=6 (solid line, , assuming M−M=0.10, as for the typical z=3 LBGs). The upper limit with a cross corresponds to the object detected by Bouwens et al. (2004b) in the HUDF and considered to be at z∼7. Data are plotted as a function of the absolute magnitude M at 1500 Å, while the upper axis shows the corresponding value of the SFR.
Even if the survey area is quite large for the obtained magnitude limits (25 times larger than the Hubble Deep Field, for example), cosmic variance is still a potentially important concern. From the cross-correlation of galaxies in this magnitude range it is possible to estimate (see, for example, ) that the observed density can vary by about 20% of the cosmic average because of this effect. Similar results were obtained by Somerville et al. on the basis of cosmological simulations. As a consequence, cosmic variance is not expected to be a dominant effect. All the data in Fig. 8 are derived from UV observations and, as a consequence, are very sensitive to dust extinction, as discussed by a large number of authors (see, for example, ).
Variation in the dust content with cosmic age is one of the effects that could contribute to shaping the observed evolution of the SFD. The typical color of LBGs, as measured by the UV spectral slope, is bluer at z=6 (β = −1.8 according to (), revised) than at lower redshift, implying a reduction of the typical dust extinction at high redshift of about a factor of two.
Fig. 8 (caption). The black dot with error bars is obtained from the upper limit of the present work, and the empty dots are obtained by integrating several published LFs (triangles: ; squares: ; stars: ; circle: ) over the same interval. The diamonds at z=7.5 and z=10 are from Bouwens et al. (2004b, 2005) and are the only points not derived from an LF. The original published values referred to L > 0.3L*(z = 3) and were corrected to L > 0.2L*(z = 3) by assuming α = −1.73, as observed at z=6. The dotted line shows an empirical fit to the data in the upper panel, the dashed line to those in the lower panel.
As a consequence, the observed reduction in the SFD cannot be due to an increase in the dust content; considering this effect would make the increase of the SFD with cosmic time even more pronounced. As we observe the bright part of the LF, we cannot exclude that the reduction in the number density of bright galaxies is compensated by an increase in that of the faint galaxies. If, for example, φ* and L* vary together so that φ*L* remains constant, the resulting total SFD also remains constant. We cannot exclude this scenario, even if the upper limits on galaxies with L ∼ L* from Bouwens et al. (2004b) tend to disfavor it. Consequences for the reionization of the primordial universe: It is widely accepted that high-redshift starburst galaxies can contribute substantially to the reionization of the universe. Madau et al. (see also ) estimated the amount of star formation needed to provide enough ionizing photons to the intergalactic medium.
By assuming an escape fraction of ionizing photons fesc of 0.5 and a clumping factor C of 30 (), we find that at z=7 the necessary SFD is SFD(needed) ∼ 7.8 × 10−2 M⊙ yr−1 Mpc−3. The observed total SFD from UV observations can be derived by integrating the observed LF down to zero luminosity. Assuming luminosity evolution and integrating the LF down to 0.01L*, we obtain SFD(observed) = 2.9 × 10−2 M⊙ yr−1 Mpc−3, where about half of this comes from very faint systems, below 0.1L*. This value can be increased up to ∼5 × 10−2 M⊙ yr−1 Mpc−3 by assuming the steeper faint-end slope of the LF (α = −1.9) compatible with the data in (). The uncertainties involved in this computation (such as the actual values of fesc and C, the amount of evolution of the LF between z=6 and z=7, and the faint-end slope of the LF) are numerous and large for both SFD(needed) and SFD(observed). Also, Stiavelli, Fall & Panagia discuss how very metal-poor stars can overproduce ionizing photons and how a warmer IGM could lower the required flux. Nevertheless, the amount of ionizing photons that can be inferred at z=7 from observations is smaller than or, at most, similar to the needed value. A measure of the SFD at even higher redshifts, or tighter constraints on the faint-end slope of the LF at z=6, could significantly reduce these uncertainties and could well reveal that the ionizing flux falls short of the required value. Conclusions: The existing multi-wavelength deep data on the large GOODS-South field allowed us to search for z=7 star-forming galaxies by selecting z'-dropouts. The accurate study of the dropouts in terms of colors, morphology, and spectra allows us to exclude the presence of any z=7 galaxy in the field above our detection threshold. We used this result to derive evidence for the evolution of the LF from z=7 to z=6 to z=3, and to determine an upper limit to the global star-formation density at z=7.
These limits, together with the numerous works at lower redshifts, point toward a sharp increase of the star-formation density with cosmic time from z=7 to z=4, a flattening between z=4 and z=1, and a decrease afterward. The ionizing flux from starburst galaxies at z=7 could be too low to produce all of the reionization. The first star-forming systems at z=7 appear to be too faint to be studied in detail with the current generation of telescopes, and a detailed understanding of their properties will only become possible when the next generation of telescopes comes into use.
Facebook appears to be knitting its apps closer together with WhatsApp. It has been revealed that the company is testing a new feature in the Facebook app that will let users quickly switch between Facebook and WhatsApp through a shortcut button.
Specifically, the company has added a new WhatsApp shortcut button to the Facebook feed. The button is currently available only to select users of the Facebook Android app: as soon as a user taps the WhatsApp shortcut in the menu area, the WhatsApp app opens.
Only select users whose default language is set to Danish can see this feature, which suggests Facebook has not released the button in every region.
It is not clear what happens when users who do not have a WhatsApp account tap the shortcut button. It is speculated that Facebook is using this feature to grow the number of WhatsApp users.
Most people in the US and other Western countries do not use WhatsApp, which may be why the company is reportedly testing the feature in a single country first. Only after seeing users' response will it be rolled out to the remaining countries.
Facebook bought WhatsApp in 2014 and has since used both platforms to benefit each other. Last year, the company started using WhatsApp for advertising and other purposes, a move that many privacy advocates opposed.
// Repository: carlwgeorge/py-leveldb
// Copyright (c) <NAME>.
// See LICENSE for details.
#include "leveldb_ext.h"
#include <leveldb/comparator.h>
static PyObject* PyLevelDBIter_New(PyObject* ref, PyLevelDB* db, leveldb::Iterator* iterator, std::string* bound, int include_value, int is_reverse);
static PyObject* PyLevelDBSnapshot_New(PyLevelDB* db, const leveldb::Snapshot* snapshot);
static void PyLevelDB_set_error(leveldb::Status& status)
{
PyErr_SetString(leveldb_exception, status.ToString().c_str());
}
const char pyleveldb_destroy_db_doc[] =
"leveldb.DestroyDB(db_dir)\n\nAttempts to recover as much data as possible from a corrupt database."
;
PyObject* pyleveldb_destroy_db(PyObject* self, PyObject* args)
{
const char* db_dir = 0;
if (!PyArg_ParseTuple(args, (char*)"s", &db_dir))
return 0;
std::string _db_dir(db_dir);
leveldb::Status status;
leveldb::Options options;
Py_BEGIN_ALLOW_THREADS
status = leveldb::DestroyDB(_db_dir.c_str(), options);
Py_END_ALLOW_THREADS
if (!status.ok()) {
PyLevelDB_set_error(status);
return 0;
}
Py_INCREF(Py_None);
return Py_None;
}
static void PyLevelDB_dealloc(PyLevelDB* self)
{
Py_BEGIN_ALLOW_THREADS
delete self->_db;
delete self->_options;
delete self->_cache;
if (self->_comparator != leveldb::BytewiseComparator())
delete self->_comparator;
Py_END_ALLOW_THREADS
self->_db = 0;
self->_options = 0;
self->_cache = 0;
self->_comparator = 0;
self->n_iterators = 0;
self->n_snapshots = 0;
#if PY_MAJOR_VERSION >= 3
Py_TYPE(self)->tp_free((PyObject*)self);
#else
((PyObject*)self)->ob_type->tp_free((PyObject*)self);
#endif
}
static void PyLevelDBSnapshot_dealloc(PyLevelDBSnapshot* self)
{
if (self->db && self->snapshot) {
Py_BEGIN_ALLOW_THREADS
self->db->_db->ReleaseSnapshot(self->snapshot);
Py_END_ALLOW_THREADS
}
if (self->db)
self->db->n_snapshots -= 1;
Py_DECREF(self->db);
self->db = 0;
self->snapshot = 0;
#if PY_MAJOR_VERSION >= 3
Py_TYPE(self)->tp_free((PyObject*)self);
#else
((PyObject*)self)->ob_type->tp_free((PyObject*)self);
#endif
}
static void PyWriteBatch_dealloc(PyWriteBatch* self)
{
delete self->ops;
#if PY_MAJOR_VERSION >= 3
Py_TYPE(self)->tp_free((PyObject*)self);
#else
((PyObject*)self)->ob_type->tp_free((PyObject*)self);
#endif
}
static PyObject* PyLevelDB_new(PyTypeObject* type, PyObject* args, PyObject* kwds)
{
PyLevelDB* self = (PyLevelDB*)type->tp_alloc(type, 0);
if (self) {
self->_db = 0;
self->_options = 0;
self->_cache = 0;
self->_comparator = 0;
self->n_iterators = 0;
self->n_snapshots = 0;
}
return (PyObject*)self;
}
static PyObject* PyWriteBatch_new(PyTypeObject* type, PyObject* args, PyObject* kwds)
{
PyWriteBatch* self = (PyWriteBatch*)type->tp_alloc(type, 0);
if (self) {
self->ops = new std::vector<PyWriteBatchEntry>;
if (self->ops == 0) {
#if PY_MAJOR_VERSION >= 3
Py_TYPE(self)->tp_free((PyObject*)self);
#else
((PyObject*)self)->ob_type->tp_free((PyObject*)self);
#endif
return PyErr_NoMemory();
}
}
return (PyObject*)self;
}
static PyObject* PyLevelDBSnapshot_new(PyTypeObject* type, PyObject* args, PyObject* kwds)
{
PyLevelDBSnapshot* self = (PyLevelDBSnapshot*)type->tp_alloc(type, 0);
if (self) {
self->db = 0;
self->snapshot = 0;
}
return (PyObject*)self;
}
// Python 2.6+
#if PY_MAJOR_VERSION >= 3 || (PY_MAJOR_VERSION >= 2 && PY_MINOR_VERSION >= 6)
#define PY_LEVELDB_DEFINE_BUFFER(n) Py_buffer n; (n).buf = 0; (n).len = 0; (n).obj = 0
#define PY_LEVELDB_RELEASE_BUFFER(n) if (n.obj) {PyBuffer_Release(&n);}
#define PARAM_V(n) &(n)
#define PY_LEVELDB_BEGIN_ALLOW_THREADS Py_BEGIN_ALLOW_THREADS
#define PY_LEVELDB_END_ALLOW_THREADS Py_END_ALLOW_THREADS
#define PY_LEVELDB_SLICE_VALUE(n) leveldb::Slice((const char*)(n).buf, (size_t)(n).len)
#define PY_LEVELDB_STRING(n) std::string((const char*)(n).buf, (size_t)(n).len)
#if PY_MAJOR_VERSION >= 3
#define PARAM_S "y*"
#define PY_LEVELDB_STRING_OR_BYTEARRAY PyByteArray_FromStringAndSize
#else
#define PARAM_S "s*"
#define PY_LEVELDB_STRING_OR_BYTEARRAY PyString_FromStringAndSize
#endif
// Python 2.4/2.5
#else
#define PY_LEVELDB_DEFINE_BUFFER(n) const char* s_##n = 0; int n_##n
#define PY_LEVELDB_RELEASE_BUFFER(n)
#define PARAM_V(n) &s_##n, &n_##n
#define PY_LEVELDB_BEGIN_ALLOW_THREADS
#define PY_LEVELDB_END_ALLOW_THREADS
#define PY_LEVELDB_SLICE_VALUE(n) leveldb::Slice((const char*)s_##n, (size_t)n_##n)
#define PY_LEVELDB_STRING(n) std::string((const char*)s_##n, (size_t)n_##n);
#define PARAM_S "t#"
#define PY_LEVELDB_STRING_OR_BYTEARRAY PyString_FromStringAndSize
#endif
class PythonComparatorWrapper : public leveldb::Comparator {
public:
PythonComparatorWrapper(const char* name, PyObject* comparator) :
name(name),
comparator(comparator),
last_exception_type(0),
last_exception_value(0),
last_exception_traceback(0)
{
Py_INCREF(comparator);
#if PY_MAJOR_VERSION >= 3
zero = PyLong_FromLong(0);
#else
zero = PyInt_FromLong(0);
#endif
}
~PythonComparatorWrapper()
{
Py_DECREF(comparator);
Py_XDECREF(last_exception_type);
Py_XDECREF(last_exception_value);
Py_XDECREF(last_exception_traceback);
Py_XDECREF(zero);
}
private:
int GetSign(PyObject* i, int* c) const
{
#if PY_MAJOR_VERSION >= 3
if (PyLong_Check(i)) {
#else
if (PyInt_Check(i) || PyLong_Check(i)) {
#endif
#if PY_MAJOR_VERSION >= 3
if (PyObject_RichCompareBool(i, zero, Py_LT))
*c = -1;
else if (PyObject_RichCompareBool(i, zero, Py_GT))
*c = 1;
else
*c = 0;
#else
*c = PyObject_Compare(i, zero);
#endif
if (PyErr_Occurred())
return 0;
return 1;
}
PyErr_SetString(PyExc_TypeError, "comparison value is not an integer");
return 0;
}
void SetError() const
{
// we don't do too much
fprintf(stderr, "py-leveldb: Python comparison failure. Unable to reliably continue. Goodbye cruel world.\n\n");
PyErr_Print();
fflush(stderr);
abort();
// assert(PyErr_Occurred());
// Py_XDECREF(last_exception_type);
// Py_XDECREF(last_exception_value);
// Py_XDECREF(last_exception_traceback);
// PyErr_Fetch(&last_exception_type, &last_exception_value, &last_exception_traceback);
}
public:
// bool CheckAndSetError()
// {
// if (last_exception_type) {
// PyErr_Restore(last_exception_type, last_exception_value, last_exception_traceback);
// last_exception_type = 0;
// last_exception_value = 0;
// last_exception_traceback = 0;
// return true;
// }
//
// return false;
// }
// this can be called from pretty much any leveldb threads
int Compare(const leveldb::Slice& a, const leveldb::Slice& b) const
{
// http://docs.python.org/dev/c-api/init.html#non-python-created-threads
PyGILState_STATE gstate;
gstate = PyGILState_Ensure();
// acquire python thread
PyObject* a_ = PY_LEVELDB_STRING_OR_BYTEARRAY(a.data(), a.size());
PyObject* b_ = PY_LEVELDB_STRING_OR_BYTEARRAY(b.data(), b.size());
if (a_ == 0 || b_ == 0) {
Py_XDECREF(a_);
Py_XDECREF(b_);
SetError();
PyGILState_Release(gstate);
return 0;
}
PyObject* c = PyObject_CallFunctionObjArgs(comparator, a_, b_, 0);
int cmp = 0;
Py_XDECREF(a_);
Py_XDECREF(b_);
if (c == 0 || !GetSign(c, &cmp))
SetError();
PyGILState_Release(gstate);
return cmp;
}
const char* Name() const
{
return name.c_str();
}
void FindShortestSeparator(std::string*, const leveldb::Slice&) const { }
void FindShortSuccessor(std::string*) const { }
private:
std::string name;
PyObject* comparator;
PyObject* last_exception_type;
PyObject* last_exception_value;
PyObject* last_exception_traceback;
PyObject* zero;
};
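On the Python side, the comparator handed to this wrapper is just a callable returning a negative, zero, or positive integer, which is exactly the sign convention GetSign() checks. A minimal sketch of such a callable (under Python 3 the wrapper passes the keys as bytearray objects, under Python 2 as str; the constructor keyword used to register it depends on the binding's API, which is outside this excerpt):

```python
def case_insensitive_cmp(a, b):
    """Three-way comparator as PythonComparatorWrapper expects:
    return a negative, zero, or positive integer.  Keys arrive as
    byte strings (bytearray under Python 3, str under Python 2)."""
    ka, kb = bytes(a).lower(), bytes(b).lower()
    return (ka > kb) - (ka < kb)

# Sign-convention check, mirroring GetSign() in the C++ wrapper:
print(case_insensitive_cmp(b"Apple", b"apple"))   # 0: equal after folding
print(case_insensitive_cmp(b"apple", b"banana"))  # negative: a sorts first
```

Note that a comparator must be deterministic and total; as the C++ code shows, an exception raised here aborts the process, because LevelDB calls it from internal threads where the error cannot be propagated back to Python.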
static PyObject* PyLevelDB_Put(PyLevelDB* self, PyObject* args, PyObject* kwds)
{
const char* kwargs[] = {"key", "value", "sync", 0};
PyObject* sync = Py_False;
PY_LEVELDB_DEFINE_BUFFER(key);
PY_LEVELDB_DEFINE_BUFFER(value);
leveldb::WriteOptions options;
leveldb::Status status;
if (!PyArg_ParseTupleAndKeywords(args, kwds, (char*)PARAM_S PARAM_S "|O!", (char**)kwargs, PARAM_V(key), PARAM_V(value), &PyBool_Type, &sync))
return 0;
PY_LEVELDB_BEGIN_ALLOW_THREADS
leveldb::Slice key_slice = PY_LEVELDB_SLICE_VALUE(key);
leveldb::Slice value_slice = PY_LEVELDB_SLICE_VALUE(value);
options.sync = (sync == Py_True) ? true : false;
status = self->_db->Put(options, key_slice, value_slice);
PY_LEVELDB_END_ALLOW_THREADS
PY_LEVELDB_RELEASE_BUFFER(key);
PY_LEVELDB_RELEASE_BUFFER(value);
if (!status.ok()) {
PyLevelDB_set_error(status);
return 0;
}
Py_INCREF(Py_None);
return Py_None;
}
static PyObject* PyLevelDB_Get_(PyLevelDB* self, leveldb::DB* db, const leveldb::Snapshot* snapshot, PyObject* args, PyObject* kwds)
{
PyObject* verify_checksums = Py_False;
PyObject* fill_cache = Py_True;
PyObject* failobj = 0;
const char* kwargs[] = {"key", "verify_checksums", "fill_cache", "default", 0};
leveldb::Status status;
std::string value;
PY_LEVELDB_DEFINE_BUFFER(key);
if (!PyArg_ParseTupleAndKeywords(args, kwds, (char*)PARAM_S "|O!O!O", (char**)kwargs, PARAM_V(key), &PyBool_Type, &verify_checksums, &PyBool_Type, &fill_cache, &failobj))
return 0;
PY_LEVELDB_BEGIN_ALLOW_THREADS
leveldb::Slice key_slice = PY_LEVELDB_SLICE_VALUE(key);
leveldb::ReadOptions options;
options.verify_checksums = (verify_checksums == Py_True) ? true : false;
options.fill_cache = (fill_cache == Py_True) ? true : false;
options.snapshot = snapshot;
status = db->Get(options, key_slice, &value);
PY_LEVELDB_END_ALLOW_THREADS
PY_LEVELDB_RELEASE_BUFFER(key);
if (status.IsNotFound()) {
if (failobj) {
Py_INCREF(failobj);
return failobj;
}
PyErr_SetNone(PyExc_KeyError);
return 0;
}
if (!status.ok()) {
PyLevelDB_set_error(status);
return 0;
}
return PY_LEVELDB_STRING_OR_BYTEARRAY(value.c_str(), value.length());
}
static PyObject* PyLevelDB_Get(PyLevelDB* self, PyObject* args, PyObject* kwds)
{
return PyLevelDB_Get_(self, self->_db, 0, args, kwds);
}
static PyObject* PyLevelDBSnaphot_Get(PyLevelDBSnapshot* self, PyObject* args, PyObject* kwds)
{
return PyLevelDB_Get_(self->db, self->db->_db, self->snapshot, args, kwds);
}
static PyObject* PyLevelDB_Delete(PyLevelDB* self, PyObject* args, PyObject* kwds)
{
PyObject* sync = Py_False;
const char* kwargs[] = {"key", "sync", 0};
PY_LEVELDB_DEFINE_BUFFER(key);
leveldb::Status status;
if (!PyArg_ParseTupleAndKeywords(args, kwds, (char*)PARAM_S "|O!", (char**)kwargs, PARAM_V(key), &PyBool_Type, &sync))
return 0;
PY_LEVELDB_BEGIN_ALLOW_THREADS
leveldb::Slice key_slice = PY_LEVELDB_SLICE_VALUE(key);
leveldb::WriteOptions options;
options.sync = (sync == Py_True) ? true : false;
status = self->_db->Delete(options, key_slice);
PY_LEVELDB_END_ALLOW_THREADS
PY_LEVELDB_RELEASE_BUFFER(key);
if (!status.ok()) {
PyLevelDB_set_error(status);
return 0;
}
Py_INCREF(Py_None);
return Py_None;
}
static PyObject* PyWriteBatch_Put(PyWriteBatch* self, PyObject* args)
{
// NOTE: we copy all buffers
PY_LEVELDB_DEFINE_BUFFER(key);
PY_LEVELDB_DEFINE_BUFFER(value);
if (!PyArg_ParseTuple(args, (char*)PARAM_S PARAM_S, PARAM_V(key), PARAM_V(value)))
return 0;
PyWriteBatchEntry op;
op.is_put = true;
PY_LEVELDB_BEGIN_ALLOW_THREADS
op.key = PY_LEVELDB_STRING(key);
op.value = PY_LEVELDB_STRING(value);
PY_LEVELDB_END_ALLOW_THREADS
PY_LEVELDB_RELEASE_BUFFER(key);
PY_LEVELDB_RELEASE_BUFFER(value);
self->ops->push_back(op);
Py_INCREF(Py_None);
return Py_None;
}
static PyObject* PyWriteBatch_Delete(PyWriteBatch* self, PyObject* args)
{
// NOTE: we copy all buffers
PY_LEVELDB_DEFINE_BUFFER(key);
if (!PyArg_ParseTuple(args, (char*)PARAM_S, PARAM_V(key)))
return 0;
PyWriteBatchEntry op;
op.is_put = false;
PY_LEVELDB_BEGIN_ALLOW_THREADS
op.key = PY_LEVELDB_STRING(key);
PY_LEVELDB_END_ALLOW_THREADS
PY_LEVELDB_RELEASE_BUFFER(key);
self->ops->push_back(op);
Py_INCREF(Py_None);
return Py_None;
}
static PyObject* PyLevelDB_Write(PyLevelDB* self, PyObject* args, PyObject* kwds)
{
PyWriteBatch* write_batch = 0;
PyObject* sync = Py_False;
const char* kwargs[] = {"write_batch", "sync", 0};
if (!PyArg_ParseTupleAndKeywords(args, kwds, (char*)"O!|O!", (char**)kwargs, &PyWriteBatch_Type, &write_batch, &PyBool_Type, &sync))
return 0;
leveldb::WriteOptions options;
options.sync = (sync == Py_True) ? true : false;
leveldb::WriteBatch batch;
leveldb::Status status;
for (size_t i = 0; i < write_batch->ops->size(); i++) {
PyWriteBatchEntry& op = (*write_batch->ops)[i];
leveldb::Slice key(op.key.c_str(), op.key.size());
leveldb::Slice value(op.value.c_str(), op.value.size());
if (op.is_put) {
batch.Put(key, value);
} else {
batch.Delete(key);
}
}
Py_BEGIN_ALLOW_THREADS
status = self->_db->Write(options, &batch);
Py_END_ALLOW_THREADS
if (!status.ok()) {
PyLevelDB_set_error(status);
return 0;
}
Py_INCREF(Py_None);
return Py_None;
}
static PyObject* PyLevelDB_RangeIter_(PyLevelDB* self, const leveldb::Snapshot* snapshot, PyObject* args, PyObject* kwds)
{
int is_from = 0;
int is_to = 0;
PY_LEVELDB_DEFINE_BUFFER(a);
PY_LEVELDB_DEFINE_BUFFER(b);
PyObject* _a = Py_None;
PyObject* _b = Py_None;
PyObject* verify_checksums = Py_False;
PyObject* fill_cache = Py_True;
PyObject* include_value = Py_True;
PyObject* is_reverse = Py_False;
const char* kwargs[] = {"key_from", "key_to", "verify_checksums", "fill_cache", "include_value", "reverse", 0};
if (!PyArg_ParseTupleAndKeywords(args, kwds, (char*)"|OOO!O!O!O!", (char**)kwargs, &_a, &_b, &PyBool_Type, &verify_checksums, &PyBool_Type, &fill_cache, &PyBool_Type, &include_value, &PyBool_Type, &is_reverse))
return 0;
std::string from;
std::string to;
leveldb::ReadOptions read_options;
read_options.verify_checksums = (verify_checksums == Py_True) ? true : false;
read_options.fill_cache = (fill_cache == Py_True) ? true : false;
read_options.snapshot = snapshot;
if (_a != Py_None) {
is_from = 1;
if (!PyArg_Parse(_a, (char*)PARAM_S, PARAM_V(a)))
return 0;
}
if (_b != Py_None) {
is_to = 1;
if (!PyArg_Parse(_b, (char*)PARAM_S, PARAM_V(b)))
return 0;
}
if (is_from)
from = PY_LEVELDB_STRING(a);
if (is_to)
to = PY_LEVELDB_STRING(b);
leveldb::Slice key(is_reverse == Py_True ? to.c_str() : from.c_str(), is_reverse == Py_True ? to.size() : from.size());
if (is_from)
PY_LEVELDB_RELEASE_BUFFER(a);
if (is_to)
PY_LEVELDB_RELEASE_BUFFER(b);
// create iterator
leveldb::Iterator* iter = 0;
Py_BEGIN_ALLOW_THREADS
iter = self->_db->NewIterator(read_options);
// if we have an iterator
if (iter) {
// forward iteration
if (is_reverse == Py_False) {
if (!is_from)
iter->SeekToFirst();
else
iter->Seek(key);
} else {
if (!is_to) {
iter->SeekToLast();
} else {
iter->Seek(key);
if (!iter->Valid()) {
iter->SeekToLast();
} else {
leveldb::Slice a = key;
leveldb::Slice b = iter->key();
int c = self->_options->comparator->Compare(a, b);
if (c) {
iter->Prev();
}
}
}
}
}
Py_END_ALLOW_THREADS
if (iter == 0)
return PyErr_NoMemory();
// if iterator is empty, return an empty iterator object
if (!iter->Valid()) {
Py_BEGIN_ALLOW_THREADS
delete iter;
Py_END_ALLOW_THREADS
return PyLevelDBIter_New(0, 0, 0, 0, 0, 0);
}
// otherwise, we're good
std::string* s = 0;
if (is_reverse == Py_False && is_to) {
s = new std::string(to);
if (s == 0) {
Py_BEGIN_ALLOW_THREADS
delete iter;
Py_END_ALLOW_THREADS
return PyErr_NoMemory();
}
} else if (is_reverse == Py_True && is_from) {
s = new std::string(from);
if (s == 0) {
Py_BEGIN_ALLOW_THREADS
delete iter;
Py_END_ALLOW_THREADS
return PyErr_NoMemory();
}
}
return PyLevelDBIter_New((PyObject*)self, self, iter, s, (include_value == Py_True) ? 1 : 0, (is_reverse == Py_True) ? 1 : 0);
}
static PyObject* PyLevelDB_RangeIter(PyLevelDB* self, PyObject* args, PyObject* kwds)
{
return PyLevelDB_RangeIter_(self, 0, args, kwds);
}
static PyObject* PyLevelDBSnapshot_RangeIter(PyLevelDBSnapshot* self, PyObject* args, PyObject* kwds)
{
return PyLevelDB_RangeIter_(self->db, self->snapshot, args, kwds);
}
static PyObject* PyLevelDB_GetStatus(PyLevelDB* self)
{
std::string value;
if (!self->_db->GetProperty(leveldb::Slice("leveldb.stats"), &value)) {
PyErr_SetString(PyExc_ValueError, "unknown property");
return 0;
}
#if PY_MAJOR_VERSION >= 3
return PyUnicode_DecodeLatin1(value.c_str(), value.size(), 0);
#else
return PyString_FromString(value.c_str());
#endif
}
static PyObject* PyLevelDB_CreateSnapshot(PyLevelDB* self)
{
const leveldb::Snapshot* snapshot = self->_db->GetSnapshot();
//! TBD: check for GetSnapshot() failures
return PyLevelDBSnapshot_New(self, snapshot);
}
static PyObject* PyLevelDB_CompactRange(PyLevelDB* self, PyObject* args, PyObject* kwds)
{
PyObject* _start = Py_None;
PyObject* _end = Py_None;
int is_start = 0;
int is_end = 0;
PY_LEVELDB_DEFINE_BUFFER(a);
PY_LEVELDB_DEFINE_BUFFER(b);
const char* kwargs[] = {"start", "end", 0};
if (!PyArg_ParseTupleAndKeywords(args, kwds, (char*)"|OO", (char**)kwargs, &_start, &_end))
return 0;
if (_start != Py_None) {
is_start = 1;
if (!PyArg_Parse(_start, (char*)PARAM_S, PARAM_V(a)))
return 0;
}
if (_end != Py_None) {
is_end = 1;
if (!PyArg_Parse(_end, (char*)PARAM_S, PARAM_V(b)))
return 0;
}
Py_BEGIN_ALLOW_THREADS
leveldb::Slice start_slice("");
leveldb::Slice end_slice("");
if (is_start)
start_slice = PY_LEVELDB_SLICE_VALUE(a);
if (is_end)
end_slice = PY_LEVELDB_SLICE_VALUE(b);
self->_db->CompactRange(is_start ? &start_slice : 0, is_end ? &end_slice : 0);
Py_END_ALLOW_THREADS
if (is_start)
PY_LEVELDB_RELEASE_BUFFER(a);
if (is_end)
PY_LEVELDB_RELEASE_BUFFER(b);
Py_INCREF(Py_None);
return Py_None;
}
static PyMethodDef PyLevelDB_methods[] = {
{(char*)"Put", (PyCFunction)PyLevelDB_Put, METH_VARARGS | METH_KEYWORDS, (char*)"add a key/value pair to database, with an optional synchronous disk write" },
{(char*)"Get", (PyCFunction)PyLevelDB_Get, METH_VARARGS | METH_KEYWORDS, (char*)"get a value from the database" },
{(char*)"Delete", (PyCFunction)PyLevelDB_Delete, METH_VARARGS | METH_KEYWORDS, (char*)"delete a value in the database" },
{(char*)"Write", (PyCFunction)PyLevelDB_Write, METH_VARARGS | METH_KEYWORDS, (char*)"apply a write-batch"},
{(char*)"RangeIter", (PyCFunction)PyLevelDB_RangeIter, METH_VARARGS | METH_KEYWORDS, (char*)"key/value range scan"},
{(char*)"GetStats", (PyCFunction)PyLevelDB_GetStatus, METH_NOARGS, (char*)"get a string of DB statistics"},
{(char*)"CreateSnapshot", (PyCFunction)PyLevelDB_CreateSnapshot, METH_NOARGS, (char*)"create a new snapshot from current DB state"},
{(char*)"CompactRange", (PyCFunction)PyLevelDB_CompactRange, METH_VARARGS | METH_KEYWORDS, (char*)"Compact keys in the range"},
{NULL}
};
static PyMethodDef PyWriteBatch_methods[] = {
{(char*)"Put", (PyCFunction)PyWriteBatch_Put, METH_VARARGS, (char*)"add a put op to batch" },
{(char*)"Delete", (PyCFunction)PyWriteBatch_Delete, METH_VARARGS, (char*)"add a delete op to batch" },
{NULL}
};
static PyMethodDef PyLevelDBSnapshot_methods[] = {
{(char*)"Get", (PyCFunction)PyLevelDBSnaphot_Get, METH_VARARGS | METH_KEYWORDS, (char*)"get a value from the snapshot" },
{(char*)"RangeIter", (PyCFunction)PyLevelDBSnapshot_RangeIter, METH_VARARGS | METH_KEYWORDS, (char*)"key/value range scan"},
{NULL}
};
static int pyleveldb_str_eq(PyObject* p, const char* s)
{
// 8-bit string
#if PY_MAJOR_VERSION < 3
if (PyString_Check(p) && strcmp(PyString_AS_STRING(p), s) == 0)
return 1;
#endif
// unicode string
if (PyUnicode_Check(p)) {
size_t i = 0;
Py_UNICODE* c = PyUnicode_AS_UNICODE(p);
while (s[i] && c[i] && (int)s[i] == (int)c[i])
i++;
return ((int)s[i] == (int)c[i]);
}
return 0;
}
static const leveldb::Comparator* pyleveldb_get_comparator(PyObject* comparator)
{
// default comparator
if (comparator == 0 || pyleveldb_str_eq(comparator, "bytewise"))
return leveldb::BytewiseComparator();
// (name-ascii, python-callable)
const char* cmp_name = 0;
PyObject* cmp = 0;
if (!PyArg_Parse(comparator, (char*)"(sO)", &cmp_name, &cmp) || !PyCallable_Check(cmp)) {
PyErr_SetString(PyExc_TypeError, "comparator must be a string, or a 2-tuple (name, func)");
return 0;
}
const leveldb::Comparator* c = new PythonComparatorWrapper(cmp_name, cmp);
if (c == 0) {
PyErr_NoMemory();
return 0;
}
return c;
}
const char pyleveldb_repair_db_doc[] =
"leveldb.RepairDB(db_dir, comparator = 'bytewise')\n\nAttempts to recover as much data as possible from a corrupt database."
;
PyObject* pyleveldb_repair_db(PyLevelDB* self, PyObject* args, PyObject* kwds)
{
const char* db_dir = 0;
const char* kwargs[] = {"filename", "comparator", 0};
PyObject* comparator = 0;
if (!PyArg_ParseTupleAndKeywords(args, kwds, (char*)"s|O", (char**)kwargs,
&db_dir,
&comparator))
return 0;
// get comparator
const leveldb::Comparator* c = pyleveldb_get_comparator(comparator);
if (c == 0) {
PyErr_SetString(leveldb_exception, "error loading comparator");
return NULL;
}
std::string _db_dir(db_dir);
leveldb::Status status;
leveldb::Options options;
options.comparator = c;
Py_BEGIN_ALLOW_THREADS
status = leveldb::RepairDB(_db_dir.c_str(), options);
Py_END_ALLOW_THREADS
if (!status.ok()) {
PyLevelDB_set_error(status);
return 0;
}
Py_INCREF(Py_None);
return Py_None;
}
static int PyLevelDB_init(PyLevelDB* self, PyObject* args, PyObject* kwds)
{
// cleanup
if (self->_db || self->_cache || self->_comparator || self->_options) {
Py_BEGIN_ALLOW_THREADS
delete self->_db;
delete self->_options;
delete self->_cache;
if (self->_comparator != leveldb::BytewiseComparator())
delete self->_comparator;
Py_END_ALLOW_THREADS
self->_db = 0;
self->_options = 0;
self->_cache = 0;
self->_comparator = 0;
}
// get params
const char* db_dir = 0;
PyObject* create_if_missing = Py_True;
PyObject* error_if_exists = Py_False;
PyObject* paranoid_checks = Py_False;
int block_cache_size = 8 * (2 << 20);
int write_buffer_size = 4<<20;
int block_size = 4096;
int max_open_files = 1000;
int block_restart_interval = 16;
int max_file_size = 2 << 20;
const char* kwargs[] = {"filename", "create_if_missing", "error_if_exists", "paranoid_checks", "write_buffer_size",
"block_size", "max_open_files", "block_restart_interval", "block_cache_size", "max_file_size", "comparator", 0};
PyObject* comparator = 0;
if (!PyArg_ParseTupleAndKeywords(args, kwds, (char*)"s|O!O!O!iiiiiiO", (char**)kwargs,
&db_dir,
&PyBool_Type, &create_if_missing,
&PyBool_Type, &error_if_exists,
&PyBool_Type, ¶noid_checks,
&write_buffer_size,
&block_size,
&max_open_files,
&block_restart_interval,
&block_cache_size,
&max_file_size,
&comparator))
return -1;
if (write_buffer_size < 0 || block_size < 0 || max_open_files < 0 || block_restart_interval < 0 || block_cache_size < 0) {
PyErr_SetString(PyExc_ValueError, "negative write_buffer_size/block_size/max_open_files/block_restart_interval/cache_size");
return -1;
}
// get comparator
const leveldb::Comparator* c = pyleveldb_get_comparator(comparator);
if (c == 0)
return -1;
// open database
self->_options = new leveldb::Options();
self->_cache = leveldb::NewLRUCache(block_cache_size);
self->_comparator = c;
if (self->_options == 0 || self->_cache == 0 || self->_comparator == 0) {
Py_BEGIN_ALLOW_THREADS
delete self->_options;
delete self->_cache;
if (self->_comparator != leveldb::BytewiseComparator())
delete self->_comparator;
Py_END_ALLOW_THREADS
self->_options = 0;
self->_cache = 0;
self->_comparator = 0;
PyErr_NoMemory();
return -1;
}
self->_options->create_if_missing = (create_if_missing == Py_True) ? true : false;
self->_options->error_if_exists = (error_if_exists == Py_True) ? true : false;
self->_options->paranoid_checks = (paranoid_checks == Py_True) ? true : false;
self->_options->write_buffer_size = write_buffer_size;
self->_options->block_size = block_size;
self->_options->max_open_files = max_open_files;
self->_options->block_restart_interval = block_restart_interval;
self->_options->compression = leveldb::kSnappyCompression;
self->_options->block_cache = self->_cache;
self->_options->max_file_size = max_file_size;
self->_options->comparator = self->_comparator;
leveldb::Status status;
// note: copy string parameter, since we might lose it when we release the GIL
std::string _db_dir(db_dir);
int i = 0;
Py_BEGIN_ALLOW_THREADS
status = leveldb::DB::Open(*self->_options, _db_dir, &self->_db);
if (!status.ok()) {
delete self->_db;
delete self->_options;
delete self->_cache;
//! move out of thread block
if (self->_comparator != leveldb::BytewiseComparator())
delete self->_comparator;
self->_db = 0;
self->_options = 0;
self->_cache = 0;
self->_comparator = 0;
i = -1;
}
Py_END_ALLOW_THREADS
if (i == -1)
PyLevelDB_set_error(status);
return i;
}
static int PyWriteBatch_init(PyWriteBatch* self, PyObject* args, PyObject* kwds)
{
self->ops->clear();
static char* kwargs[] = {0};
if (!PyArg_ParseTupleAndKeywords(args, kwds, (char*)"", kwargs))
return -1;
return 0;
}
static int PyLevelDBSnapshot_init(PyLevelDBSnapshot* self, PyObject* args, PyObject* kwds)
{
if (self->db && self->snapshot) {
self->db->n_snapshots -= 1;
self->db->_db->ReleaseSnapshot(self->snapshot);
Py_DECREF(self->db);
}
self->db = 0;
self->snapshot = 0;
PyLevelDB* db = 0;
const leveldb::Snapshot* snapshot;
const char* kwargs[] = {"db", 0};
if (!PyArg_ParseTupleAndKeywords(args, kwds, (char*)"O!", (char**)kwargs, &PyLevelDB_Type, &db))
return -1;
snapshot = db->_db->GetSnapshot();
//! TBD: deal with GetSnapshot() failure
self->db = db;
self->snapshot = snapshot;
Py_INCREF(self->db);
self->db->n_snapshots += 1;
return 0;
}
static int PyLevelDBSnapshot_traverse(PyLevelDBSnapshot* iter, visitproc visit, void* arg)
{
Py_VISIT((PyObject*)iter->db);
return 0;
}
PyDoc_STRVAR(PyLevelDB_doc,
"LevelDB(filename, **kwargs) -> leveldb object\n"
"\n"
"Open a LevelDB database, from the given directory.\n"
"\n"
"Only the parameter filename is mandatory.\n"
"\n"
"filename the database directory\n"
"create_if_missing (default: True) if True, creates a new database if none exists\n"
"error_if_exists (default: False) if True, raises an error if the database already exists\n"
"paranoid_checks (default: False) if True, raises an error as soon as an internal corruption is detected\n"
"block_cache_size (default: 8 * (2 << 20)) maximum allowed size for the block cache in bytes\n"
"write_buffer_size (default: 4 << 20) size of the in-memory write buffer in bytes\n"
"block_size (default: 4096) unit of transfer for the block cache in bytes\n"
"max_open_files (default: 1000)\n"
"block_restart_interval (default: 16)\n"
"max_file_size (default: 2 << 20)\n"
"comparator (default: 'bytewise') either 'bytewise' or a 2-tuple (name, func) defining a custom key order\n"
"\n"
"Snappy compression is used, if available.\n"
"\n"
"Some methods support the following parameters, having these semantics:\n"
"\n"
" verify_checksum: iff True, the operation will check for checksum mismatches\n"
" fill_cache: iff True, the operation will fill the cache with the data read\n"
" sync: iff True, the operation will be guaranteed to sync the operation to disk\n"
"\n"
"Methods supported are:\n"
"\n"
" Get(key, verify_checksums = False, fill_cache = True): get value, raises KeyError if key not found\n"
"\n"
" key: the query key\n"
"\n"
" Put(key, value, sync = False): put key/value pair\n"
"\n"
" key: the key\n"
" value: the value\n"
"\n"
" Delete(key, sync = False): delete key/value pair, raises no error if key not found\n"
"\n"
" key: the key\n"
"\n"
" Write(write_batch, sync = False): apply multiple put/delete operations atomically\n"
"\n"
" write_batch: the WriteBatch object holding the operations\n"
"\n"
" RangeIter(key_from = None, key_to = None, include_value = True, verify_checksums = False, fill_cache = True, reverse = False): return iterator\n"
"\n"
" key_from: if not None: defines lower bound (inclusive) for iterator\n"
" key_to: if not None: defines upper bound (inclusive) for iterator\n"
" include_value: if True, iterator returns key/value 2-tuples, otherwise, just keys\n"
" reverse: if True, iterate in reverse order, from key_to down to key_from\n"
"\n"
" GetStats(): get a string of runtime information\n"
);
PyDoc_STRVAR(PyWriteBatch_doc,
"WriteBatch() -> write batch object\n"
"\n"
"Create an object, which can hold a list of database operations, which\n"
"can be applied atomically.\n"
"\n"
"Methods supported are:\n"
"\n"
" Put(key, value): add put operation to batch\n"
"\n"
" key: the key\n"
" value: the value\n"
"\n"
" Delete(key): add delete operation to batch\n"
"\n"
" key: the key\n"
);
PyDoc_STRVAR(PyLevelDBSnapshot_doc, "");
PyTypeObject PyLevelDB_Type = {
#if PY_MAJOR_VERSION >= 3
PyVarObject_HEAD_INIT(NULL, 0)
#else
PyObject_HEAD_INIT(NULL)
0,
#endif
(char*)"leveldb.LevelDB", /*tp_name*/
sizeof(PyLevelDB), /*tp_basicsize*/
0, /*tp_itemsize*/
(destructor)PyLevelDB_dealloc, /*tp_dealloc*/
0, /*tp_print*/
0, /*tp_getattr*/
0, /*tp_setattr*/
0, /*tp_compare*/
0, /*tp_repr*/
0, /*tp_as_number*/
0, /*tp_as_sequence*/
0, /*tp_as_mapping*/
0, /*tp_hash */
0, /*tp_call*/
0, /*tp_str*/
0, /*tp_getattro*/
0, /*tp_setattro*/
0, /*tp_as_buffer*/
Py_TPFLAGS_DEFAULT, /*tp_flags*/
(char*)PyLevelDB_doc, /*tp_doc */
0, /*tp_traverse */
0, /*tp_clear */
0, /*tp_richcompare */
0, /*tp_weaklistoffset */
0, /*tp_iter */
0, /*tp_iternext */
PyLevelDB_methods, /*tp_methods */
0, /*tp_members */
0, /*tp_getset */
0, /*tp_base */
0, /*tp_dict */
0, /*tp_descr_get */
0, /*tp_descr_set */
0, /*tp_dictoffset */
(initproc)PyLevelDB_init, /*tp_init */
0, /*tp_alloc */
PyLevelDB_new, /*tp_new */
};
PyTypeObject PyWriteBatch_Type = {
#if PY_MAJOR_VERSION >= 3
PyVarObject_HEAD_INIT(NULL, 0)
#else
PyObject_HEAD_INIT(NULL)
0,
#endif
(char*)"leveldb.WriteBatch", /*tp_name*/
sizeof(PyWriteBatch), /*tp_basicsize*/
0, /*tp_itemsize*/
(destructor)PyWriteBatch_dealloc, /*tp_dealloc*/
0, /*tp_print*/
0, /*tp_getattr*/
0, /*tp_setattr*/
0, /*tp_compare*/
0, /*tp_repr*/
0, /*tp_as_number*/
0, /*tp_as_sequence*/
0, /*tp_as_mapping*/
0, /*tp_hash */
0, /*tp_call*/
0, /*tp_str*/
0, /*tp_getattro*/
0, /*tp_setattro*/
0, /*tp_as_buffer*/
Py_TPFLAGS_DEFAULT, /*tp_flags*/
(char*)PyWriteBatch_doc, /*tp_doc */
0, /*tp_traverse */
0, /*tp_clear */
0, /*tp_richcompare */
0, /*tp_weaklistoffset */
0, /*tp_iter */
0, /*tp_iternext */
PyWriteBatch_methods, /*tp_methods */
0, /*tp_members */
0, /*tp_getset */
0, /*tp_base */
0, /*tp_dict */
0, /*tp_descr_get */
0, /*tp_descr_set */
0, /*tp_dictoffset */
(initproc)PyWriteBatch_init, /*tp_init */
0, /*tp_alloc */
PyWriteBatch_new, /*tp_new */
};
PyTypeObject PyLevelDBSnapshot_Type = {
#if PY_MAJOR_VERSION >= 3
PyVarObject_HEAD_INIT(NULL, 0)
#else
PyObject_HEAD_INIT(NULL)
0,
#endif
(char*)"leveldb.Snapshot", /*tp_name*/
sizeof(PyLevelDBSnapshot), /*tp_basicsize*/
0, /*tp_itemsize*/
(destructor)PyLevelDBSnapshot_dealloc, /*tp_dealloc*/
0, /*tp_print*/
0, /*tp_getattr*/
0, /*tp_setattr*/
0, /*tp_compare*/
0, /*tp_repr*/
0, /*tp_as_number*/
0, /*tp_as_sequence*/
0, /*tp_as_mapping*/
0, /*tp_hash */
0, /*tp_call*/
0, /*tp_str*/
PyObject_GenericGetAttr, /* tp_getattro */
0, /* tp_setattro */
0, /* tp_as_buffer */
Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC, /* tp_flags */
(char*)PyLevelDBSnapshot_doc, /*tp_doc */
(traverseproc)PyLevelDBSnapshot_traverse, /* tp_traverse */
0, /*tp_clear */
0, /*tp_richcompare */
0, /*tp_weaklistoffset */
0, /*tp_iter */
0, /*tp_iternext */
PyLevelDBSnapshot_methods, /*tp_methods */
0, /*tp_members */
0, /*tp_getset */
0, /*tp_base */
0, /*tp_dict */
0, /*tp_descr_get */
0, /*tp_descr_set */
0, /*tp_dictoffset */
(initproc)PyLevelDBSnapshot_init, /*tp_init */
0, /*tp_alloc */
PyLevelDBSnapshot_new, /*tp_new */
};
static void PyLevelDBIter_clean(PyLevelDBIter* iter)
{
if (iter->db)
iter->db->n_iterators -= 1;
Py_BEGIN_ALLOW_THREADS
delete iter->iterator;
delete iter->bound;
Py_END_ALLOW_THREADS
Py_XDECREF(iter->ref);
iter->ref = 0;
iter->db = 0;
iter->iterator = 0;
iter->bound = 0;
iter->include_value = 0;
}
static void PyLevelDBIter_dealloc(PyLevelDBIter* iter)
{
PyLevelDBIter_clean(iter);
PyObject_GC_Del(iter);
}
static int PyLevelDBIter_traverse(PyLevelDBIter* iter, visitproc visit, void* arg)
{
Py_VISIT((PyObject*)iter->ref);
return 0;
}
static PyObject* PyLevelDBIter_next(PyLevelDBIter* iter)
{
// empty, do cleanup (idempotent)
if (iter->ref == 0 || !iter->iterator->Valid()) {
PyLevelDBIter_clean(iter);
return 0;
}
// if we have an upper/lower bound, and we have run past it, clean up and return
if (iter->bound) {
leveldb::Slice a = leveldb::Slice(iter->bound->c_str(), iter->bound->size());
leveldb::Slice b = iter->iterator->key();
int c = iter->db->_options->comparator->Compare(a, b);
if (!iter->is_reverse && !(0 <= c)) {
PyLevelDBIter_clean(iter);
return 0;
} else if (iter->is_reverse && !(0 >= c)) {
PyLevelDBIter_clean(iter);
return 0;
}
}
// get key and (optional) value
PyObject* key = PY_LEVELDB_STRING_OR_BYTEARRAY(iter->iterator->key().data(), iter->iterator->key().size());
PyObject* value = 0;
PyObject* ret = key;
if (key == 0)
return 0;
if (iter->include_value) {
value = PY_LEVELDB_STRING_OR_BYTEARRAY(iter->iterator->value().data(), iter->iterator->value().size());
if (value == 0) {
Py_XDECREF(key);
return 0;
}
}
// key/value pairs are returned as 2-tuples
if (value) {
ret = PyTuple_New(2);
if (ret == 0) {
Py_DECREF(key);
Py_XDECREF(value);
return 0;
}
PyTuple_SET_ITEM(ret, 0, key);
PyTuple_SET_ITEM(ret, 1, value);
}
// get next/prev value
if (iter->is_reverse) {
iter->iterator->Prev();
} else {
iter->iterator->Next();
}
// return k/v pair or single key
return ret;
}
PyTypeObject PyLevelDBIter_Type = {
#if PY_MAJOR_VERSION >= 3
PyVarObject_HEAD_INIT(NULL, 0)
#else
PyObject_HEAD_INIT(NULL)
0,
#endif
(char*)"leveldb-iterator", /* tp_name */
sizeof(PyLevelDBIter), /* tp_basicsize */
0, /* tp_itemsize */
(destructor)PyLevelDBIter_dealloc, /* tp_dealloc */
0, /* tp_print */
0, /* tp_getattr */
0, /* tp_setattr */
0, /* tp_compare */
0, /* tp_repr */
0, /* tp_as_number */
0, /* tp_as_sequence */
0, /* tp_as_mapping */
0, /* tp_hash */
0, /* tp_call */
0, /* tp_str */
PyObject_GenericGetAttr, /* tp_getattro */
0, /* tp_setattro */
0, /* tp_as_buffer */
Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC, /* tp_flags */
0, /* tp_doc */
(traverseproc)PyLevelDBIter_traverse, /* tp_traverse */
0, /* tp_clear */
0, /* tp_richcompare */
0, /* tp_weaklistoffset */
PyObject_SelfIter, /* tp_iter */
(iternextfunc)PyLevelDBIter_next, /* tp_iternext */
0, /* tp_methods */
0,
};
static PyObject* PyLevelDBIter_New(PyObject* ref, PyLevelDB* db, leveldb::Iterator* iterator, std::string* bound, int include_value, int is_reverse)
{
PyLevelDBIter* iter = PyObject_GC_New(PyLevelDBIter, &PyLevelDBIter_Type);
if (iter == 0) {
Py_BEGIN_ALLOW_THREADS
delete iterator;
Py_END_ALLOW_THREADS
return 0;
}
Py_XINCREF(ref);
iter->ref = ref;
iter->db = db;
iter->iterator = iterator;
iter->is_reverse = is_reverse;
iter->bound = bound;
iter->include_value = include_value;
if (iter->db)
iter->db->n_iterators += 1;
PyObject_GC_Track(iter);
return (PyObject*)iter;
}
static PyObject* PyLevelDBSnapshot_New(PyLevelDB* db, const leveldb::Snapshot* snapshot)
{
PyLevelDBSnapshot* s = PyObject_GC_New(PyLevelDBSnapshot, &PyLevelDBSnapshot_Type);
if (s == 0) {
db->_db->ReleaseSnapshot(snapshot);
return 0;
}
Py_INCREF(db);
s->db = db;
s->snapshot = snapshot;
s->db->n_snapshots += 1;
PyObject_GC_Track(s);
return (PyObject*)s;
}
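Since the methods and the comparator hook above define the module's Python-facing API, a short usage sketch may help. Everything touching the built module is hypothetical and commented out (it assumes the extension compiles and imports as `leveldb`); only the comparator callable itself, a plain Python function of the 2-tuple form parsed by `pyleveldb_get_comparator`, is executed here:

```python
# Custom comparator as accepted by LevelDB(..., comparator=(name, func)).
# PythonComparatorWrapper only reads the *sign* of the returned int (via
# GetSign), so any negative/zero/positive convention works.
def reverse_bytewise(a, b):
    # Descending byte order: positive when a sorts after b.
    return (a < b) - (a > b)

# Hypothetical usage once the extension is installed (not executed here):
#   import leveldb
#   db = leveldb.LevelDB('./testdb', comparator=('reverse-bytewise', reverse_bytewise))
#   db.Put(b'key', b'value')
#   db.Get(b'key')                      # -> stored value, KeyError if missing
#   batch = leveldb.WriteBatch()
#   batch.Put(b'k2', b'v2')
#   batch.Delete(b'key')
#   db.Write(batch, sync=True)          # operations applied atomically
#   for k in db.RangeIter(include_value=False, reverse=True):
#       print(k)

print(reverse_bytewise(b'a', b'b'))  # 1: b'a' sorts after b'b' in reverse order
```

Note the comparator is only consulted through its sign, matching the `GetSign` logic in `PythonComparatorWrapper` above.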
|
import numpy as np
from piepline import AbstractMetric
from torch import Tensor
__all__ = ['calc_tp_fp_fn', 'f_beta_score']
def _calc_boxes_areas(boxes: np.ndarray):
"""
Calculate areas of array of boxes
:param boxes: array of boxes with shape [N, 4]
:return: array of boxes area with shape [N]
"""
xx, yy = np.take(boxes, [0, 2], axis=1), np.take(boxes, [1, 3], axis=1) # [N, 2], [N, 2]
boxes_x_min, boxes_x_max = xx.min(1), xx.max(1) # [N], [N]
boxes_y_min, boxes_y_max = yy.min(1), yy.max(1) # [N], [N]
return (boxes_x_max - boxes_x_min) * (boxes_y_max - boxes_y_min) # [N], [N]
def _compute_boxes_iou(box: np.ndarray, boxes: np.ndarray, box_area: float, boxes_area: np.ndarray) -> np.ndarray:
"""
Calculates IoU of the given box with the array of the given boxes.
Args:
box: 1D vector [x1, y1, x2, y2]
boxes: [N, (x1, y1, x2, y2)]
box_area: float. the area of 'box'
boxes_area: array of boxes areas with shape [N]
Note: the areas are passed in rather than calculated here for
efficiency. Calculate once in the caller to avoid duplicate work.
Returns:
array of iou in shape [N]
"""
xmin = np.maximum(box[0], boxes[:, 0])
ymin = np.maximum(box[1], boxes[:, 1])
xmax = np.minimum(box[2], boxes[:, 2])
ymax = np.minimum(box[3], boxes[:, 3])
intersection = (xmax - xmin) * (ymax - ymin)
intersection[xmin > xmax] = 0
intersection[ymin > ymax] = 0
intersection[intersection < 0] = 0
union = box_area + boxes_area[:] - intersection[:]
iou = intersection / union
return iou
def calc_tp_fp_fn(pred: np.ndarray, target: np.ndarray, threshold: float) -> tuple:
"""
Calculate true positives, false positives and false negatives number for predicted and target boxes
Args:
pred: Array of predicted boxes with shape [N, 4]
target: Array of ground truth boxes with shape [N, 4]
threshold: the threshold for iou metric
Returns:
Return list of [tp, fp, fn]
"""
pred_areas = _calc_boxes_areas(pred) # [N], [N]
target_areas = _calc_boxes_areas(target) # [N], [N]
ious = []
for instance_idx in range(pred.shape[0]):
ious.append(_compute_boxes_iou(pred[instance_idx], target, pred_areas[instance_idx], target_areas))
matches_matrix = np.array(ious)
matches_matrix[matches_matrix < threshold] = 0
tp = matches_matrix[matches_matrix > 0].shape[0]
fn = target.shape[0] - tp
fp = pred.shape[0] - tp
return tp, fp, fn
def f_beta_score(pred: np.ndarray, target: np.ndarray, beta: int, thresholds: [float]) -> np.ndarray:
"""
Calculate F-Beta score.
Args:
pred (np.ndarray): predicted bboxes of shape [B, N, 4]
target (np.ndarray): target bboxes of shape [B, N, 4]
beta (int): value of Beta coefficient
thresholds ([float]): list of thresholds
N is the number of instances (boxes) per image
Returns:
np.ndarray: array with values of F-Beta score. Array shape: [B]
"""
beta_squared = beta ** 2
res = []
for batch_idx in range(pred.shape[0]):
batch_results = []
for thresh in thresholds:
tp, fp, fn = calc_tp_fp_fn(pred[batch_idx], target[batch_idx], thresh)
precision = tp / (tp + fp + 1e-7)  # epsilon guards against division by zero
recall = tp / (tp + fn + 1e-7)
batch_results.append((beta_squared + 1) * (precision * recall) / (beta_squared * precision + recall + 1e-7))
res.append(np.mean(batch_results))
return np.array(res)
class FBetaMetric(AbstractMetric):
def __init__(self, beta: int, thresholds: [float]):
super().__init__('f_beta')
self._beta = beta
self._thresholds = thresholds
def calc(self, output: Tensor, target: Tensor) -> np.ndarray or float:
return f_beta_score(output.data.cpu().numpy(), target.data.cpu().numpy(), self._beta, self._thresholds)
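The IoU formula at the heart of `_compute_boxes_iou` can be checked on a small worked example. This standalone sketch (it does not import the module above) uses the same [x1, y1, x2, y2] box layout as `_calc_boxes_areas`:

```python
import numpy as np

# One query box against two candidate boxes, [x1, y1, x2, y2] layout.
box = np.array([0.0, 0.0, 10.0, 10.0])
boxes = np.array([[0.0, 0.0, 10.0, 10.0],
                  [5.0, 5.0, 15.0, 15.0]])

box_area = (box[2] - box[0]) * (box[3] - box[1])                        # 100
boxes_area = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])  # [100, 100]

# Intersection rectangle, clamped to zero when the boxes do not overlap.
xmin = np.maximum(box[0], boxes[:, 0])
ymin = np.maximum(box[1], boxes[:, 1])
xmax = np.minimum(box[2], boxes[:, 2])
ymax = np.minimum(box[3], boxes[:, 3])
intersection = np.clip(xmax - xmin, 0, None) * np.clip(ymax - ymin, 0, None)  # [100, 25]

iou = intersection / (box_area + boxes_area - intersection)

print(iou)  # identical box -> 1.0; half-offset box -> 25/175 ≈ 0.1429
```

The second value follows directly: the 5×5 overlap gives intersection 25 and union 100 + 100 − 25 = 175.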
|
package com.restoratio.monaco.ruletest.squid.compliant;
public class S1174Rule {
@Override
protected void finalize() throws Throwable { //compliant. Is protected
super.finalize();
}
public String finalize(String method) { //Should not be detected.
return "";
}
}
|
graphic by twolf
While Mitch McConnell and other Republicans have hinted that their opposition to investment in the Big Three is all about busting the unions, Jim DeMint refreshingly came out and admitted it yesterday on NPR.
Norris: Now, you know the unions are saying this is also a political ploy on the part of the Republicans to try get rid of unions and use the auto industry troubles to do just that.

DeMint: Well, I’m not trying to get rid of the unions, but I am saying that they appear to be an antiquated concept in today’s economy.
And if that wasn’t explicit enough for you…
DeMint: These car companies are in real trouble. And they should’ve been planning to restructure for a long time. But the political aspect of this is most of this is being done to protect unions, uh, it’s not to protect the workers. And what I want to do is make sure we have jobs for these workers and we have first-class American automobile companies. And we’re not going to do it with the barnacles of unionism wrapped around their necks.
I’m not saying Jim DeMint is using the credit crisis to take down the unions, but he does appear to be a union-busting asshole.*
(*$1 to Hamsher) |
Sequence dependence of electron-induced DNA strand breakage revealed by DNA nanoarrays

The electronic structure of DNA is determined by its nucleotide sequence, which is for instance exploited in molecular electronics. Here we demonstrate that the DNA strand breakage induced by low-energy electrons (18 eV) also depends on the nucleotide sequence. To determine the absolute cross sections for electron-induced single strand breaks in specific 13-mer oligonucleotides we used atomic force microscopy analysis of DNA origami based DNA nanoarrays. We investigated the DNA sequences 5′-TT(XYX)₃TT with X = A, G, C and Y = T or BrU (5-bromouracil) and found absolute strand break cross sections between 2.66 × 10⁻¹⁴ cm² and 7.06 × 10⁻¹⁴ cm². The highest cross sections were found for 5′-TT(ATA)₃TT and 5′-TT(ABrUA)₃TT, respectively. BrU is a radiosensitizer that has been discussed for use in cancer radiation therapy. The replacement of T by BrU in the investigated DNA sequences leads to a slight increase of the absolute strand break cross sections, resulting in sequence-dependent enhancement factors between 1.14 and 1.66. Nevertheless, the variation of strand break cross sections due to the specific nucleotide sequence is considerably higher. Thus, the present results suggest the development of targeted radiosensitizers for cancer radiation therapy.

DNA exhibits nucleotide sequence dependent electronic properties, which affect its charge transport properties, UV stability, and sensitivity toward radiation. These properties manifest themselves in quantities such as the ionization potential 4,5, in the dynamics of the electronic states (excitonic coupling, lifetime of excited states) 1,6, but also in the reactivity for instance towards low-energy electrons 3,7,8. A large number of secondary electrons is formed along the radiation track of high-energy radiation, which is routinely applied in radiation therapy to kill tumor tissue.
These low-energy electrons (LEEs) have a most probable energy around 10 eV [9] and are able to directly induce DNA single and double strand breaks (SSBs and DSBs) via dissociative electron attachment through the formation of negative ion resonances. Although it is experimentally extremely challenging to quantify the LEE-induced DNA strand break (SB) yield of oligonucleotides of specific nucleotide sequence, a number of studies have suggested a sequence dependence of electron-induced DNA strand breakage. In electron transmission through self-assembled DNA monolayers it was demonstrated that the number of electrons trapped in the DNA film depends strongly on the guanine (G) content [3]. These experiments did not yield any information on the damage of the DNA film, but the SSB yield upon irradiation with 1 eV electrons was later quantified using microarrays of DNA SAMs and fluorescence detection of the hybridisation efficiency [7]. It was found that the SSB yield increases linearly with the number of G bases present in the oligonucleotide. This behaviour is explained by the relative instability of G compared to the other DNA bases [13], which is also reflected in the low ionization potential (IP) of G [14]. However, in addition to the base-dependent damage, it was also demonstrated that the specific base sequence has a strong effect on the DNA damage. In electron transmission experiments the human telomeric repeat TTAGGG turned out to be particularly prone to LEE capture [15]. The particular role of the telomere sequence was confirmed by ab initio calculations showing that the IP of a TTAGGG sequence is smaller than the IP of a TTGGGG sequence, although isolated G has a lower IP than A [16]. Despite the particular role of G, it was recently demonstrated by HPLC analysis of oligonucleotide trimers irradiated with 10 eV electrons that the total DNA damage was largest for TTT and smallest for TGT [8].
Nevertheless, the ratio of strand breaks (C–O cleavage) to nucleobase loss (C–N cleavage) was found to be highest for TGT [8]. The investigation of LEE-induced strand breakage of specific oligonucleotides longer than a few nucleobases is very challenging due to the small penetration depth of LEEs, resulting in small amounts of damaged material [8]. Recently, we demonstrated the detection of LEE-induced bond cleavage in DNA origami based DNA nanoarrays at the single-molecule level using atomic force microscopy (AFM). The visualization of DNA strand breakage by AFM using DNA nanoarrays has several advantages: (i) Due to the detection of DNA strand breaks at a single-molecule level, only minuscule amounts of material are required to establish sub-monolayer surface coverage. (ii) Two or more different oligonucleotide sequences can be irradiated within a single experiment to efficiently compare a number of different DNA structures. (iii) Absolute strand break cross sections (σSSB) are readily accessible (vide infra), thus providing benchmark values for further experimental and theoretical studies. (iv) The DNA nanoarray technique is not limited to single strands, but can be extended to quantify double strand breaks and to investigate higher-order DNA structures. Here, we compare the absolute strand break cross sections of different 13-mer oligonucleotide sequences. We compare the sequences 5′-TT(XTX)₃TT with X = A, C or G to evaluate the role of the different DNA nucleobases for DNA strand breakage. In a next step, we study the sensitizing effect of the incorporation of 5-bromouracil (BrU) by comparing the absolute strand break cross sections of the sequences 5′-TT(XBrUX)₃TT with X = A, C or G. Fig. 1a shows a scheme of a triangular DNA origami platform carrying six target sequences, which represents the DNA nanoarray. Three biotinylated target sequences are situated in the center of the trapezoids, and three are located on the sides of the trapezoids.
Basically, the nucleotide sequence of each of the six target oligonucleotides can be freely chosen. In the present study the three central target strands (green in Fig. 1a) and the three side positions (black in Fig. 1a) cannot be distinguished in the AFM images and are hence chosen to have the same sequence. Thus, two different target sequences are studied within one irradiation experiment, and the specific sequences and their positions are indicated in Fig. 1a. In Fig. 1b typical AFM images from DNA origami samples after incubation with streptavidin (SAv), which binds to biotin (Bt), are shown. The left image shows a control sample that was not irradiated, while the image on the right was obtained from a sample irradiated with 18 eV electrons at a fluence of 5.0 × 10¹² cm⁻². The energy of 18 eV was chosen due to its relevance for the damage induced by secondary electrons originating from the ionization track of high-energy radiation. For secondary electrons the damage probability has a global maximum around 18 eV, i.e. the damage induced mainly by ionization and electronic excitation weighted by the LEE distribution in aqueous samples irradiated with high-energy radiation [20]. In the AFM image on the right-hand side of Fig. 1 the number of specifically bound SAv is reduced compared to the non-irradiated control sample, indicating that a number of target sequences have been damaged by electron-induced strand breakage. To determine the absolute cross section for strand breakage (see Methods section), the relative number of SBs (N_SB) was recorded as a function of the electron fluence. The fluence dependence of N_SB for the target sequences TT(XTX)₃TT with X = A, C, G is displayed in Fig. 2a. From the linear fit in the low-fluence regime σSSB is determined, which is shown in Fig. 2b. The oligonucleotide TT(GTG)₃TT shows the lowest response to 18 eV electrons.
To ensure an accurate linear fit, a smaller fluence increment and thus more data points were chosen for TT(GTG)₃TT. Results and Discussion. In general, there are three basic mechanisms that could account for the electron-induced DNA strand cleavage at 18 eV (in the following the general form TT(XYX)₃TT is used). [Figure 1 caption residue: "…TT(XYX)₃TT and TT(GBrUG)₃TT oligonucleotides is shown. On the right, a typical AFM image after irradiation with 18 eV electrons is shown. The number of specifically bound SAv is reduced due to strand breaks in the protruding sequences." Scientific Reports 4:7391, DOI: 10.1038/srep07391.] ii. Dissociative electron attachment: direct electron attachment at 18 eV is very unlikely, but the initial incoming electron might undergo inelastic scattering in the surrounding of the target sequence and might attach at lower energies with higher cross sections. DEA cross sections, for instance for the loss of hydrogen from the thymine anion at 1.0 eV, have been previously determined to be 7.8 × 10⁻¹⁷ cm² [23]. Alternatively, low-energy secondary electrons can be generated either from the substrate or by electron impact ionization according to (i), and these can induce an SB by DEA. iii. Neutral dissociation: TT(XYX)₃TT + e⁻ (18 eV) → TT(XYX)₃TT** + e⁻ (<18 eV) → SB. The transfer of electronic energy from the incoming electron to the oligonucleotide might be associated with a dissociative state resulting in an SB. The inelastic electron scattering cross section is comparable with the cross section for electron impact ionization and is, e.g. for pyrimidine, of the order of 10⁻¹⁶ cm² [24]. Bond breaking by a catalytic electron is a similar mechanism that involves transient negative ions and neutral fragmentation products [25]. Judging from the magnitude of the cross sections, it is most likely that the initial step in strand breakage is either ionization or electronic excitation of the target strands. For a DNA strand break to occur, a bond within the phosphate–sugar backbone needs to be cleaved.
A direct SB without the involvement of the nucleobases, as was suggested previously, is thus feasible [26,27]. The IP of the phosphate–sugar backbone, at about 11.7 eV, is considerably higher than the IPs of the nucleobases; nevertheless, electron impact ionization cross sections of the DNA backbone are calculated to be higher than those of the nucleobases due to the higher number of electrons in the sugar–phosphate backbone [22]. However, the absolute strand break cross sections determined here (Fig. 2) show a strong dependence on the specific nucleotide sequence, with the TT(GTG)₃TT sequence having the lowest σSSB and TT(ATA)₃TT having the highest σSSB. This indicates that the nucleobases have a strong influence on the strand breakage and a damage mechanism involving only the DNA backbone is not likely. The absolute cross sections for strand breakage vary from (2.21 ± 0.87) × 10⁻¹⁴ cm² to (6.00 ± 0.86) × 10⁻¹⁴ cm² depending on the specific nucleotide sequence (see Table 1 for details). The cross sections are comparable with the cross sections found for electron-induced strand breakage in plasmid DNA. Boudaiffa et al. found an effective cross section for SSBs at 10 eV electron energy of 2.6 × 10⁻¹⁵ cm² using the plasmid pGEM 3Zf with 3199 base pairs [28]. Later, Panajotovic et al. reported effective SSB cross sections of 10.8 × 10⁻¹⁵ cm² (at 10 eV) and 24.8 × 10⁻¹⁵ cm² (at 1 eV) using the same plasmid [29]. Very recently, the absolute cross section for loss of supercoiled plasmid DNA at 10 eV was determined to be 3.0 × 10⁻¹⁴ cm² [30], which is in accordance with the values found in the present study. In the previous studies agarose gel electrophoresis was used to detect SSBs in plasmid DNA. This method and the present DNA nanoarray based method both detect the first SSB that occurs upon electron irradiation. Subsequent SSBs that occur due to the impact of additional electrons are no longer detected.
Since the geometrical cross sections of DNA with more than a few nucleotides (1 nm² = 10⁻¹⁴ cm² corresponds to about three nucleotides) are larger than the SSB cross sections, the reported cross sections should be independent of the size of the system (plasmid DNA vs. oligonucleotides). The trend of σSSB with respect to the nucleotide sequence is surprising, since G-containing sequences are generally assumed to be particularly fragile due to the small ionization potential of G. Both the vertical and adiabatic IPs of the isolated nucleobases increase in the order G, A, C, T [14]. A similar order is found for the total electron impact ionization cross sections at low energy (20 eV): G ≈ A > T > C [21]. The IP can be considered an important quantity in this context, since at an energy of 18 eV the electron ionization cross section is presumably orders of magnitude higher than the DEA cross section (vide supra). However, the absolute strand break cross sections found in the present study for the different sequences cannot be explained by the electron impact ionization properties of the individual nucleobases. The adjacent nucleobases interact via stacking interactions between the aromatic systems, and it was previously found that the strength of the π–π interaction depends strongly on the type of nucleobases. Stacking interactions are strongest between the purine bases A and G, and in the case of A nucleobases the stacking interactions are further increased by a propeller twist of the A bases. This effect also plays a role for AT base stacks, but is not relevant for GG and GT interactions [31]. In DFT calculations it was also found that the IP of G stacks decreases more with the number of bases than in the case of A stacks [5]. Nevertheless, it was found that the difference between vertical and adiabatic IP, i.e. the reorganization energy, is smaller for A stacks. A smaller reorganization energy facilitates hole delocalization.
On the other hand, the large nuclear reorganization energy results in hole localization in G stacks [5]. This is consistent with the observation that G stacks act as hole traps in DNA. In order to find out whether the nucleotide sequences studied here give rise to specific modifications of their electronic structure, we have computed the IPs of stacked oligonucleotide trimers. The IPs obtained with two different theoretical methods (MP2 and GW) are presented in Table 2. According to the results obtained by MP2 calculations, the ATA stack has an IP of 8.01 eV, which is indeed lower than the IP of the GTG stack (8.19 eV) and thus does not follow the same trend as the IPs of the isolated nucleobases [14]. The vertical IP of isolated A is about 0.4 eV higher than that of isolated G [14]. Nevertheless, the IP of the ATA stack is 0.18 eV lower than the IP of GTG according to the MP2 calculations, which is a considerable shift compared to what is expected from the isolated nucleobases. According to the GW method, however, the trend of IPs reflects the trend observed for the isolated nucleobases, i.e. the GTG stack has the lowest IP (7.96 eV). This difference between the MP2 and GW methods might be explained when considering also the excited states of the system by means of CASPT2. A detailed comparison of the MP2 and GW methods is presented in the supporting information. As a consequence, a correlation of the observed absolute strand break cross sections with the computed IPs for the stacked trimers is not straightforward. Here it should be noted that in the calculations only a single conformation of the stacked trimers is considered and environmental effects are not included. Furthermore, the experimental SB cross sections are determined for oligonucleotide 13-mers, but due to limited computing resources we could only consider the trimers listed in Table 2.
Consequently, further detailed investigations have to be performed on complex systems to elucidate possible connections between the sequence dependence of strand breakage and the respective IPs. A particular role of the electronic states of A-containing oligonucleotides was found already in previous studies. It was demonstrated that, due to electronic coupling between A molecules in an A homopolymer, excitons in the VUV (190 nm) spectral region are extended over up to eight nucleobases [6]. However, the electronic coupling can be eliminated by a single T spacer [32]. On the other hand, HPLC analysis of oligonucleotide trimers irradiated with 10 eV electrons showed that the damage due to strand breaks increases in the order TCT ≈ TGT < TAT, which is consistent with the trend of σSSB values determined here [33]. 5-Bromouracil (BrU) is a well-known radiosensitizer whose incorporation into DNA by replacing T increases the SSB and DSB yields upon irradiation with electrons and photons. We have studied the effect of BrU incorporation on the strand break cross sections by using the sequences discussed above and replacing the three central T bases by BrU. The absolute strand break cross sections determined for 18 eV electron irradiation are shown in Fig. 3a. Again, the TT(ABrUA)₃TT sequence exhibits the highest σSSB, whereas the TT(CBrUC)₃TT sequence has the lowest σSSB. There is only a small difference between TT(CBrUC)₃TT and TT(GBrUG)₃TT, but the cross section of TT(ABrUA)₃TT is about twice as high as that of TT(CBrUC)₃TT. The exact values of σSSB for the different sequences are summarized in Table 1. The similarity of σSSB for oligonucleotides with and without BrU indicates that σSSB is much more sensitive to the nucleotide sequence than to the presence of BrU. The influence of BrU on σSSB might be higher at lower electron energies, since DEA to gas-phase BrU shows the highest cross sections for Br⁻ formation close to zero eV [37].
In Table 2 the computed IPs of the BrU-containing stacked trimers are listed. With both methods of calculation, MP2 and GW, the same trend of IPs is observed, i.e. the IP of GBrUG is the lowest and that of CBrUC is the highest. There are basically no differences between the IPs of stacked trimers with and without BrU. The only exception is the IP of AYA calculated by MP2, which is 8.01 eV for Y = T and 8.52 eV for Y = BrU. Fig. 3b shows the enhancement factors for BrU incorporation (EF = σSSB(XBrUX)/σSSB(XTX)), which are found to be highest for TT(GYG)₃TT (1.66) and lowest for TT(CYC)₃TT (1.14); see Table 1 for details. Thus, the effect of BrU incorporation is highest when BrU is directly adjacent to G. In a recent study using femtosecond laser spectroscopy, the observation of anionic transients suggested that the effect of BrU should be strongest in close proximity to A. A similar but slightly weaker effect was inferred for G, since G was assumed to be the major damaging site [38]. However, in that particular study only mixtures of BrdU and dAMP/dGMP were investigated instead of oligonucleotides, so that no information about the occurrence of strand breaks could be obtained. In contrast, we directly probe σSSB of different oligonucleotide sequences in our experiments. In a recent study using HPLC analysis of trimers irradiated with 10 eV electrons it was found that the total damage of TBrUT was about 50% higher than that of TTT. However, most of the damage was associated with formation of TUT [39]. In another study using the same technique it was found that the amount of fragments associated with strand breaks is approximately the same for TBrUT and TUT, i.e. no increase of strand breaks was observed with TBrUT [33]. Therefore, when comparing our results with previous studies, the different sequences, lengths of oligonucleotides and the electron energy must be taken into account.
The present study represents a starting point for a global assessment of the sequence dependence of LEE-induced DNA damage, for which a large number of DNA sequences at a range of electron energies must be studied in the future. The DNA nanoarray technique is suitable for that, since different oligonucleotides can be probed in a single irradiation experiment and absolute strand break cross sections serving as benchmark values are obtained. Conclusions. We have determined the absolute cross sections for DNA single strand breakage induced by 18 eV electrons using DNA origami based DNA nanoarrays. Since the analysis of irradiated samples is performed by atomic force microscopy at a single-molecule level, this novel method represents a simple way to access absolute strand break cross sections. The absolute single strand break cross sections depend strongly on the nucleotide sequence, and we find values varying between (2.21 ± 0.87) × 10⁻¹⁴ cm² and (6.00 ± 0.86) × 10⁻¹⁴ cm² for the oligonucleotide sequences TT(XTX)₃TT with X = A, C, G. Furthermore, we find that exchange of the central T bases by 5-bromouracil increases the strand break cross section in a sequence-dependent manner by a factor of 1.14 to 1.66. The observed trend in the absolute strand break cross sections agrees qualitatively with previous HPLC studies investigating the fragmentation of oligonucleotide trimers of the sequence TXT with X = A, C, G irradiated with 10 eV electrons [33]. As in the results presented here, the authors observed almost identical yields of strand break fragments for the C- and G-containing trimers, while the yield for the A-containing trimer was about twice as high. In addition, the absolute strand break cross sections measured here are comparable in magnitude with cross sections for strand breakage in different plasmid DNAs induced by 1-10 eV electrons as determined by agarose gel electrophoresis [29,30].
The DNA nanoarray technique thus bridges the gap between genomic dsDNA several kbp in size and very short oligonucleotides only a few nt long, and enables the detailed investigation of sequence-dependent processes in DNA radiation damage. The observed sequence specificity most likely results from the modification of electronic states by electronic coupling of the individual nucleobases. The sensitivity of the A-containing sequences might be associated with the strong stacking interactions and pronounced hole delocalization in adjacent A bases. Further experimental and theoretical studies will be carried out covering a broad range of electron energies and DNA sequences to elucidate the most relevant damage mechanisms. The present results suggest that radiosensitizers applied in tumor radiation therapy could operate more efficiently if they targeted specific nucleotide sequences that have the highest damage cross sections. Further extended experiments have to be performed to explore the electron energy dependence of radiosensitization and thus the physico-chemical mode of action of established and potential radiosensitizers [40,41]. Methods. Preparation of DNA nanoarrays. Triangular DNA origami nanostructures were prepared from the circular single-stranded viral DNA scaffold strand M13mp18 and 208 short artificial staple strands according to the original design and procedure by Rothemund [42]. Selected staple strands were extended with a specific target sequence and a Bt modification at the 5′-end. The extended staple strands form a nanoscale array, and the individual target sequences can be visualized with AFM after incubation with SAv, which binds to the Bt modifications of the intact protruding strands. For each irradiation experiment two different target sequences were selected, and in total six staple strands per DNA origami structure were modified (see Fig. 1).
The DNA origami structures are assembled by annealing the scaffold strand with a 30-fold excess of staple strands from 65 °C to 4 °C within approximately 2 hours (1× TAE buffer, 10 mM MgCl₂). The excess staple strands are separated from the assembled DNA origami structures by spin-filtering twice using Amicon Ultra centrifuge filters (100,000 Da MWCO). Electron irradiation and AFM analysis. The detailed procedure of immobilization and LEE irradiation of the DNA nanoarrays is given in ref. 17. In brief, the DNA origami structures are bound electrostatically to Si/SiO₂ by incubation in 10× TAE and 100 mM MgCl₂ for approximately one hour. Afterwards, the dry samples are transferred into ultrahigh vacuum (UHV) and irradiated with LEEs of defined fluence at a current of 1-10 nA. After irradiation the samples are removed from the UHV chamber and rinsed to remove fragmentation products. To identify the intact remaining target sequences, the samples are exposed to a 50 nM solution of SAv for about 2-10 minutes. Then the solution is rinsed again and the dry samples are analysed by AFM. From the AFM images the relative number of DNA strand breaks of a given sequence, N_SB, can be determined from the number of intact oligonucleotides after electron irradiation compared to the initial number of oligonucleotides prior to irradiation (i.e. three oligonucleotides per DNA origami triangle). From N_SB the sequence-specific absolute cross section for strand breakage can be determined. The cross section for DNA damage can be described by the following exposure-response relation [29]: N(t) = N₀ exp(−σJt), with N₀ being the initial number of DNA oligonucleotides protruding from the DNA origami platform, σ the cross section for DNA damage, J the electron flux and t the irradiation time. The number of strand breaks N_SB (i.e. the relative number of damaged oligonucleotides, 1 − N(t)/N₀) can be approximated for short irradiation times by using a Taylor series: N_SB = 1 − exp(−σJt) ≈ σJt = σF, with F = Jt being the electron fluence.
Thus, the cross section σ can be determined from the slope of N_SB(F) in the low-fluence regime. The determined cross section can be regarded as an absolute cross section since it is based on single-molecule measurements, i.e. it is not effective for a specific DNA density or film thickness. The obtained absolute strand break cross sections are corrected for electron-induced damage to the biotin label, which was previously determined to be (1.1 ± 0.2) × 10⁻¹⁴ cm² [18]. To support the presented data, the absolute strand break cross section for the brominated sequences was also determined in a second, different design of DNA nanoarrays (see supporting information), and the obtained σSSB values agree with the ones shown here. Calculation of IPs. The ground-state geometries of all the investigated structures (i.e. XTX and XBrUX with X = C, A and G) were first optimized in the gas phase at the density functional theory (DFT) level using the B97-D functional [43] and a 6-311G(d,p) polarized basis set. Frequency calculations for each derivative were performed at the same level of theory, verifying that all structures correspond to true minima of the potential energy surface. IPs were calculated using spin-restricted open-shell second-order Møller-Plesset perturbation theory (MP2) [44] and a 6-311++G(d,p) basis set. Vertical IPs were obtained for all structures from the difference in total energy between the neutral species and the radical cation and anion, respectively, evaluated at the optimized geometry of the neutral species. All MP2 calculations were performed with the Gaussian 09 program suite. In addition, we employed many-body perturbation theory calculations in the GW approximation [45,46]. GW is considered one of the most accurate electronic structure methods for the calculation of charged excitations (such as IPs) that is feasible for the systems studied in this work.
All GW calculations were performed in FHI-aims [47] using the consistent starting point scheme [48] and a converged basis set (tier 4). More details on the calculations are provided in the SI.
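The cross-section extraction described in the Methods, where N_SB(F) = 1 − exp(−σF) and the slope of N_SB versus fluence in the low-fluence regime gives σ, can be sketched numerically. This is an illustration only: the cross-section value and fluence grid below are hypothetical, not data from the paper.

```python
import math

# Hypothetical strand-break cross section (cm^2), chosen only to illustrate
# the slope-fit procedure from the Methods section.
SIGMA_TRUE = 4.0e-14

def n_sb(fluence):
    """Relative number of strand breaks after electron fluence F (cm^-2):
    N_SB(F) = 1 - exp(-sigma * F)."""
    return 1.0 - math.exp(-SIGMA_TRUE * fluence)

def fit_slope_through_origin(xs, ys):
    """Least-squares slope of y = m*x (no intercept): m = sum(xy)/sum(x^2)."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Low-fluence regime: sigma*F stays well below 1, so N_SB is nearly linear in F.
fluences = [1e12, 2e12, 3e12, 4e12, 5e12]  # cm^-2
breaks = [n_sb(f) for f in fluences]
sigma_fit = fit_slope_through_origin(fluences, breaks)
```

The fitted slope slightly underestimates the true σ because of the onset of saturation in 1 − exp(−σF), which is why the paper restricts the fit to the low-fluence regime.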
package com.shuzijun.leetcode.plugin.actions.toolbar;
import com.intellij.openapi.actionSystem.AnActionEvent;
import com.shuzijun.leetcode.plugin.actions.AbstractAction;
import com.shuzijun.leetcode.plugin.manager.ViewManager;
import com.shuzijun.leetcode.plugin.model.Config;
import com.shuzijun.leetcode.plugin.utils.*;
import com.shuzijun.leetcode.plugin.window.NavigatorTable;
import com.shuzijun.leetcode.plugin.window.WindowFactory;
/**
* @author shuzijun
*/
public class LogoutAction extends AbstractAction {

    @Override
    public void actionPerformed(AnActionEvent anActionEvent, Config config) {
        // Hit the LeetCode logout endpoint, then reset the shared HTTP client
        // so stale session cookies are discarded.
        HttpRequest httpRequest = HttpRequest.get(URLUtils.getLeetcodeLogout());
        HttpRequestUtils.executeGet(httpRequest);
        HttpRequestUtils.resetHttpclient();
        MessageUtils.getInstance(anActionEvent.getProject()).showInfoMsg("info", PropertiesUtils.getInfo("login.out"));
        // Refresh the problem navigator tree so it reflects the logged-out state.
        NavigatorTable navigatorTable = WindowFactory.getDataContext(anActionEvent.getProject()).getData(DataKeys.LEETCODE_PROJECTS_TREE);
        if (navigatorTable == null) {
            return;
        }
        ViewManager.loadServiceData(navigatorTable, anActionEvent.getProject());
    }
}
|
def _initialise_header_information(self, hdr):
    # Copy the fields of interest out of the parsed ELF header (`hdr`)
    # onto this object, so callers need not keep the raw header around.
    self.machine = hdr.e_machine                # target architecture (e.g. EM_X86_64)
    self.elf_type = hdr.e_type                  # relocatable / executable / shared / core
    self.entry_point = hdr.e_entry              # virtual address where execution starts
    self.osabi = hdr.ident.ei_osabi             # OS/ABI identification from e_ident
    self.abiversion = hdr.ident.ei_abiversion   # ABI version from e_ident
    self.flags = hdr.e_flags                    # processor-specific flags
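The snippet above copies fields from an already-parsed ELF header object (`hdr`, supplied by the surrounding project). For context, here is a standalone sketch of where those fields live in a raw little-endian 64-bit ELF header; the offsets follow the ELF-64 specification, while the function name and returned dict are illustrative and not part of the original code:

```python
import struct

def parse_elf64_header(data: bytes) -> dict:
    """Parse the header fields used above from a little-endian ELF-64 image.

    Layout per the ELF-64 spec: 16-byte e_ident block (ei_osabi at byte 7,
    ei_abiversion at byte 8), then e_type (u16 @16), e_machine (u16 @18),
    e_version (u32 @20), e_entry (u64 @24), and e_flags (u32 @48).
    """
    if data[:4] != b"\x7fELF":
        raise ValueError("not an ELF file")
    e_type, e_machine = struct.unpack_from("<HH", data, 16)
    (e_entry,) = struct.unpack_from("<Q", data, 24)
    (e_flags,) = struct.unpack_from("<I", data, 48)
    return {
        "elf_type": e_type,
        "machine": e_machine,
        "entry_point": e_entry,
        "osabi": data[7],        # ei_osabi from e_ident
        "abiversion": data[8],   # ei_abiversion from e_ident
        "flags": e_flags,
    }
```

A real loader would also check `ei_class` (byte 4) and `ei_data` (byte 5) before assuming a 64-bit little-endian layout, as the field widths and byte order depend on them.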
Transcriptome Analysis Reveals Differential Gene Expression between the Closing Ductus Arteriosus and the Patent Ductus Arteriosus in Humans. The ductus arteriosus (DA) immediately starts closing after birth. This dynamic process involves DA-specific properties, including highly differentiated smooth muscle, sparse elastic fibers, and intimal thickening (IT). Although several studies have demonstrated DA-specific gene expression using animal tissues and human fetuses, the transcriptional profiles of the closing DA and the patent DA remain largely unknown. We performed transcriptome analysis using four human DA samples. The three closing DA samples exhibited typical DA morphology, but the patent DA exhibited aorta-like elastic lamellae and a poorly formed IT. A cluster analysis revealed that the samples were clearly divided into two major clusters, the closing DA and patent DA clusters, and showed distinct gene expression profiles in the IT and the tunica media of the closing DA samples. Cardiac neural crest-related genes such as JAG1 were highly expressed in the tunica media and IT of the closing DA samples compared to the patent DA sample. Abundant protein expression of jagged 1 and the differentiated smooth muscle marker calponin was observed in the closing DA samples but not in the patent DA sample. Second heart field-related genes such as ISL1 were enriched in the patent DA sample. These data indicate that the patent DA may have different cell lineages compared to the closing DA. Introduction. The ductus arteriosus (DA) is a fetal vascular shunt that connects the pulmonary artery and the aorta, and it is essential for maintaining fetal circulation. The DA begins to close immediately after birth. This closing process is characterized by several DA-specific features including differentiated contractile smooth muscle cells (SMCs), fragmentation of internal elastic laminae, sparse elastic fiber formation in the tunica media, and intimal thickening (IT) formation.
Histological analyses of human DA samples and several animal models indicated that these DA-specific features gradually develop throughout the fetal and neonatal periods. Patent DA is a condition in which the DA does not close properly after birth. Patent DA occurs in approximately 1 in 2000 full-term infants and occurs more frequently in premature neonates. Patent DA samples exhibit fewer DA-specific structural features. Comprehensive analysis of gene expression comparing the closing DA and the patent DA is necessary in order to better understand DA-specific remodeling and to explore methods to regulate patency of the DA. Total RNA Preparation and Microarray Analysis. Four human DA tissues were subjected to tissue staining and transcriptome analysis. Each DA tissue was divided into two pieces. One piece was fixed with 10% buffered formalin (FUJIFILM Wako Pure Chemical Corporation, Osaka, Japan) for tissue staining. The other piece of tissue was prepared for microarray analysis as follows. After the adventitia was removed, each DA tissue was divided into two parts: the inner part and the outer part, which mainly contained IT and the tunica media, respectively, as indicated by the yellow dotted lines in Figure 1A. The tissues were immediately frozen in liquid nitrogen and stored at −80 °C until all patient samples were collected. Total RNA preparation and microarray analysis were performed as described previously. Briefly, the frozen tissues were disrupted by a multi-bead shocker instrument (Yasui Kikai, Osaka, Japan). After buffer RLT with β-mercaptoethanol was added to the tissues, they were sonicated to ensure the samples were uniformly homogeneous. Total RNA was isolated using an RNeasy Mini Kit (Qiagen, Venlo, The Netherlands). Microarray experiments were carried out using a SurePrint G3 Human GE 8 × 60 K v2 Microarray (Agilent, Santa Clara, CA, USA) according to the manufacturer's protocol.
Generation of a Dendrogram, Venn Diagrams, and a Heatmap. The dendrogram was generated with Ward's method using the hclust and dendrogram functions in R. The packages gplots and genefilter in R were used to create a heatmap in which the data were normalized into z-scores; the mapping grids were subsequently colored according to their z-score.
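The row-wise z-score normalization used for the heatmap can be sketched as follows. The original analysis was done in R (gplots/genefilter); this Python illustration uses hypothetical sample names and expression values, loosely mimicking the closing-DA vs. patent-DA contrast:

```python
import math

# Hypothetical expression values for one gene across four DA samples.
expression = {"patent_1": 2.0, "closing_2": 8.0, "closing_3": 9.0, "closing_4": 7.0}

def z_scores(values: dict) -> dict:
    """Normalize one gene's expression to z-scores across samples, as done
    row-wise for the heatmap (mean 0, population standard deviation 1)."""
    xs = list(values.values())
    mean = sum(xs) / len(xs)
    sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / len(xs))
    return {name: (x - mean) / sd for name, x in values.items()}

z = z_scores(expression)
```

Coloring each heatmap cell by its z-score then highlights, per gene, which samples sit above or below the across-sample mean, independently of the gene's absolute expression level.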
Venn diagrams of the number of differentially expressed genes in each sample group were generated using the gplots package in R.

Gene Set Enrichment Analyses (GSEAs)

Gene set enrichment analyses (GSEAs) were conducted to investigate the functions of genes that significantly correlated with each sample group. GSEAs ranked the gene list by the correlation between genes and phenotype, and an enrichment score was calculated to assess the gene distribution. Each analysis was carried out with 1000 permutations. Gene sets were considered significantly enriched if the false discovery rate (FDR) q-value was < 0.25.

Tissue Staining and Immunohistochemistry

Paraffin-embedded blocks containing the human DA tissues were cut into 4 µm thick sections and placed on glass slides. Elastica van Gieson staining was performed for morphological analysis to evaluate the IT and the tunica media, as described previously. Immunohistochemistry was performed using primary antibodies against jagged 1 (sc-390177, Santa Cruz Biotechnology, Dallas, TX, USA) and calponin (M3556, DakoCytomation, Glostrup, Denmark). A biotinylated rabbit antibody (Vectastain Elite ABC IgG kit, Vector Labs, Burlingame, CA, USA) was used as the secondary antibody, and the presence of targeted proteins was visualized with 3,3′-diaminobenzidine tetrahydrochloride (DAB) (DakoCytomation, Glostrup, Denmark). Negative staining was confirmed by omission of the primary antibodies.

DA-Related Clinical Course of Each Participant

Four patients with congenital heart diseases were analyzed in this study. Patient profiles are presented in Table 1. Case 1 was considered a patent DA case because the DA did not exhibit a closing tendency throughout the clinical course. The DA tissue was isolated during an operation for atrioventricular septal defect closure and repairs of the aortic arch and pulmonary venous returns.
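The GSEA procedure described in the methods above (rank genes by a phenotype-correlation metric, compute a running enrichment score, then assess it against permutations) can be sketched in Python. The gene names, set sizes, and correlation values below are illustrative toys, not the study's data:

```python
import numpy as np

def enrichment_score(ranked_genes, gene_set, corr):
    """Weighted Kolmogorov-Smirnov-style running score (GSEA-type ES).
    ranked_genes: genes sorted by correlation with the phenotype.
    corr: correlation values in the same order (|corr| used as weights)."""
    hits = np.array([g in gene_set for g in ranked_genes])
    w = np.abs(corr)
    p_hit = np.where(hits, w, 0.0)
    p_hit = np.cumsum(p_hit) / p_hit.sum()          # cumulative hit fraction
    p_miss = np.cumsum(~hits) / (~hits).sum()       # cumulative miss fraction
    running = p_hit - p_miss
    return running[np.argmax(np.abs(running))]       # max deviation from zero

# Toy example: a gene set concentrated at the top of the ranking
# should yield a positive enrichment score.
genes = [f"g{i}" for i in range(20)]
corr = np.linspace(1.0, -1.0, 20)                    # ranking metric
top_set = {"g0", "g1", "g2", "g3"}
es = enrichment_score(genes, top_set, corr)

# Null distribution from 1000 permutations, as in the paper's setup.
rng = np.random.default_rng(1)
null = [enrichment_score(genes, set(rng.choice(genes, 4, replace=False)), corr)
        for _ in range(1000)]
p = np.mean([abs(n) >= abs(es) for n in null])
print(f"ES = {es:.2f}, permutation p = {p:.3f}")
```

In practice the FDR q-value reported by GSEA is computed across all tested gene sets from these permutation nulls; the q < 0.25 threshold used in the paper is then applied to that value.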
The other three cases (Cases 2-4) exhibited complex congenital heart diseases that required DA patency to maintain systemic circulation. Cases 2-4 were administered prostaglandin E1 (PGE1) because they exhibited a closing tendency of the DA. In Case 2, lipo-PGE1 (1 ng/kg/min) was administered 8 h after birth, when echocardiography indicated narrowing of the DA. Case 2 continued lipo-PGE1 treatment until the operation; aortic repair and pulmonary artery banding (PAB) were conducted on postnatal day 5. Case 3 showed a closing tendency of the DA soon after birth and was administered lipo-PGE1 (2 ng/kg/min). The dose of lipo-PGE1 was increased (4 ng/kg/min) 8 h after birth due to further closing tendency. The patient underwent PAB on postnatal day 3. The closing tendency of the DA persisted and required PGE1-cyclodextrin (30 ng/kg/min) on postnatal day 4. The patient received PGE1-cyclodextrin until the Norwood operation was conducted on postnatal day 24. Case 4 showed a closing tendency of the DA at 9 h after birth and received a lipo-PGE1 infusion (1 ng/kg/min). The patient underwent PAB on postnatal day 3. The DA gradually narrowed and required an increased dose of lipo-PGE1 (5 ng/kg/min) on postnatal day 70. The patient underwent the Norwood operation on postnatal day 98. On the basis of their clinical courses, Cases 2-4 were considered closing DAs.

Histological Differences between the Patent DA and the Closing DA Tissues

The Elastica van Gieson stain demonstrated that Case 1 had well-organized layered elastic fibers in the tunica media and a poorly formed IT (Figure 1A, upper panel). In Case 1, there was no overt fragmentation of the internal elastic laminae (Figure 1B, upper panel). Case 2 and Case 3 showed prominent IT formation that protruded into the lumen (Figure 1A, middle panels).
Circumferentially oriented layered elastic fibers in the tunica media were sparsely formed, and the internal elastic laminae were highly fragmented (Figure 1B, middle panels). Similarly, Case 4, who received PGE1 administration for more than 3 months, had a prominent IT (Figure 1A, lower panel). However, the entire tunica media consisted of sparse elastic fibers radially oriented toward the internal lumen, and circumferentially oriented elastic fibers were not recognized (Figure 1B, lower panel). These findings indicated that the closing DA had well-recognized, DA-specific morphological features, including prominent IT formation, fragmented internal elastic laminae, and fewer elastic fibers in the tunica media, which seemed to reflect a normal closing process. In contrast, the patent DA tissue (Case 1) was devoid of these structures and exhibited aortification of the vascular wall, which was consistent with previously reported morphological characteristics of the patent DA.

Microarray Analysis of the IT and the Tunica Media of Human DA Tissues

To elucidate the differential gene expression profile between the patent DA (Case 1) and the closing DA tissues (Cases 2-4), we performed an unbiased transcriptomic analysis using these human DA tissues. Each DA sample was divided into the IT and the tunica media in Cases 1-3. In Case 4, circumferentially oriented SMCs and layered elastic laminae could not be identified; therefore, the IT-like wall was divided into the inner part and the outer part (Figure 1A, lower). These samples were subjected to microarray analysis. The dendrogram demonstrated that the human DA tissues were clearly divided into two major clusters, A and B (Figure 2). Cluster A consisted of both the IT and the tunica media from Case 1. Cluster B consisted of the samples from Cases 2-4, and this cluster was further divided into two subgroups, B1 and B2.
Cluster B1 consisted of the IT tissues from Cases 2 and 3 and both the inner and outer IT-like parts from Case 4. Cluster B2 consisted of the tunica media samples from Cases 2 and 3. These data suggested that the patent DA tissue (Case 1) had a distinct gene expression pattern compared to the other closing DA samples (Cases 2-4). In Cases 2 and 3, the gene expression patterns of the tunica media samples were relatively similar. Additionally, the IT samples showed similar gene expression profiles, which were distant from the tunica media samples of Cases 2 and 3. In agreement with histological analysis showing that two parts of the DA tissue from Case 4 (inner and outer parts) exhibited an IT-like structure, these two samples of Case 4 showed similar gene patterns, which were close to that of the IT of Cases 2 and 3.
Transcriptomic Differences between the Tunica Media of Closing DA Tissues and the Patent DA Tissue

Both the histological assessment and the cluster analysis of DA tissues demonstrated that the gene expression profile of the tunica media of the patent DA tissue (Case 1) was markedly different from that of the closing DA tissues (Cases 2 and 3). We, thus, compared gene expression between the tunica media of the patent DA and the closing DA tissues. The GSEAs between the tunica media of the closing DA and the patent DA tissues, using all gene sets related to biological processes in the Gene Ontology (GO) (size > 300), revealed that the closing DA tissues were significantly correlated with 87 biological processes (FDR < 0.25, Table 2). Notably, vascular development-related gene sets (GO_REGULATION_OF_VASCULATURE_DEVELOPMENT and GO_BLOOD_VESSEL_MORPHOGENESIS) were highly enriched in the tunica media of the closing DA tissues (Figure 3). Kinase activation-related gene sets (GO_REGULATION_OF_MAP_KINASE_ACTIVITY, GO_ACTIVATION_OF_PROTEIN_KINASE_ACTIVITY, and GO_POSITIVE_REGULATION_OF_PROTEIN_SERINE_THREONINE_KINASE_ACTIVITY) and three catabolic process-related gene sets, including GO_REGULATION_OF_PROTEIN_CATABOLIC_PROCESS, were positively correlated with the closing DA tunica media tissue. This suggested that intracellular signaling was more actively regulated in the closing DA tissues compared to the patent DA tissue.
Protein secretion-related gene sets (GO_POSITIVE_REGULATION_OF_SECRETION and GO_GOLGI_VESICLE_TRANSPORT) and adhesion-related gene sets (GO_POSITIVE_REGULATION_OF_CELL_ADHESION, GO_REGULATION_OF_CELL_CELL_ADHESION, and GO_CELL_SUBSTRATE_ADHESION) were also enriched in the closing DA tissues, supporting previous reports that multiple extracellular matrices and cell-matrix interactions play roles in DA-specific physiological remodeling. The gene set GO_RESPONSE_TO_OXYGEN_LEVELS was positively correlated with the closing DA tissues (Figure 3). In this gene set, EGR1, which was previously shown to increase immediately after birth in rat DA tissues, was upregulated in the tunica media of the closing DA tissues. The gene set GO_ACTIN_FILAMENT_ORGANIZATION, enriched in the closing DA tissues (Figure 3), contained the Rho GTPase RHOD, which regulates directed cell migration. This may support the migratory feature of SMCs in the closing DA tissue.

Table 2. Gene Ontology biological process terms (size > 300) that were significantly upregulated (FDR < 0.25) in the tunica media of the closing human DA tissues (Cases 2 and 3) compared to that of the patent DA tissue (Case 1). (Columns: Gene Set Name, Size, NES, FDR q-Value.)

Although we demonstrated several genes that were highly expressed in the tunica media of the closing DAs compared to that of the patent DA (Figure 3, and Tables 2 and 3), postnatal PGE1 administration possibly affected these gene expressions of the tunica media.
To address this issue, we compared gene expressions of the tunica media between shorter-term PGE1-treated DAs (less than one month of administration, Cases 2 and 3) and a longer-term PGE1-treated DA (more than three months of administration, Case 4). The GSEAs revealed that the outer part of the longer-term PGE1-treated DA was significantly correlated with eight biological processes related to cell-cycle regulation (FDR < 0.25, Table S1 and Figure S1A,B, Supplementary Materials) compared to the tunica media of the shorter-term PGE1-treated DAs. Among these, the gene sets GO_ORGANELLE_FISSION and GO_NEGATIVE_REGULATION_OF_CELL_CYCLE_PROCESS belonged to the gene sets that were highly expressed in the tunica media of the closing DAs (Table 2). These two gene sets may be associated with PGE1 administration, but not with specific features of the closing DA. However, the remaining 85 gene sets in Table 2 seemed to be independent of the duration of PGE1 administration.

Vascular Development-Related Genes in Human DA Tissues

The vascular development-related gene sets noted above (Table 2 and Figure 3) contain cardiovascular cell lineage-related genes. A heatmap composed of cardiovascular cell lineage-related genes demonstrated distinct gene expression patterns between the closing DA tissues (Cases 2 and 3) and the patent DA tissue (Case 1) (Figure 4A). The genes SEMA5A, SFRP1, NRG1, CTNNB1, PHACTR4, and JAG1 were highly expressed in the ITs of the closing DA tissues. Among these genes, the expression of PHACTR4 and JAG1, which are cardiac neural crest-related genes, was greater in the tunica media of the closing DA tissues than in the patent DA tissue. The expression levels of CFL1, TWIST1, EDNRB, SMO, and MAPK1 were greater in the tunica media of the patent DA tissue compared to the closing DA tissues. Similarly, expression of SEMA4F, NRP1, LTBP3, EDN3, and FGF8 was enriched in the tunica media of the patent DA tissue.
SEMA3G, ALX1, SOX8, ALDH1A2, and SEMA7A were relatively highly expressed in the entire tissue of the patent DA. WNT8A, KLHL12, FBXL17, and ISL1, which is a second heart field-related gene, were relatively enriched in the patent DA tissue, and the expression levels of these genes were higher in the IT than in the tunica media.
The Closing or Patent DA Tissue-Specific Gene Expression

Figure 4B presents a Venn diagram showing probe sets that were upregulated (>8-fold) in the tunica media of the closing DA tissues (Cases 2 and 3) compared to the patent DA tissue (Case 1). Twenty-one overlapping probe sets comprised 16 genes (Table 3). APLN, CEMIP2, and GHRL are related to vascular development. Several genes were related to adhesion and protein secretion, such as APLN, CD83, FLCN, and NEDD9. GHRL and NEDD9 were reported to regulate actin filament organization. APLN was reported to promote proliferation and migration of vascular SMCs, as well as SMC contraction. NEDD9 is involved in embryonic neural crest cell development and promotes cell migration, cell adhesion, and actin fiber formation. To examine the effect of PGE1 administration on the human DAs, we compared gene expressions of the tunica media between the shorter-term PGE1-treated DAs (Cases 2 and 3) and the longer-term PGE1-treated DA (Case 4) using a Venn diagram (Figure S1C, Supplementary Materials). We identified 20 probe sets that overlapped and were enriched in the outer part of the longer-term PGE1-treated DA compared to the IT of the shorter-term PGE1-treated DAs (Table S2, Supplementary Materials). These genes did not belong to the genes in Table 3, suggesting that the genes presented in Table 3 were not strongly influenced by PGE1 administration. In the Venn diagram in Figure 4C, 116 probe sets are presented that were upregulated (>8-fold) in the tunica media of the patent DA tissue (Case 1) compared to the closing DA tissues (Cases 2 and 3). These probe sets contained 52 genes (Table 4). Latent transforming growth factor beta-binding protein 3 (LTBP3) was upregulated in the tunica media of the patent DA tissue. LTBP3 is related to extracellular matrix constituents and second heart field-derived vascular SMCs.
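The fold-change filtering behind these Venn overlaps can be sketched in Python: on log2-scale microarray data, an 8-fold change corresponds to a difference of log2(8) = 3, and the Venn intersection keeps probes passing the cutoff against both closing-DA samples. The probe names and values below are hypothetical, not the study's probe sets:

```python
import numpy as np

# Illustrative probe-level expression (log2 scale); names are hypothetical.
probes = ["p1", "p2", "p3", "p4", "p5"]
patent = np.array([5.0, 8.0, 6.0, 7.0, 9.0])      # e.g., Case 1 tunica media
case2  = np.array([9.0, 8.2, 9.5, 7.1, 9.1])      # e.g., Case 2 tunica media
case3  = np.array([8.5, 8.1, 9.2, 10.5, 9.0])     # e.g., Case 3 tunica media

def upregulated(sample, reference, fold=8):
    """Probes expressed > `fold`-fold higher than in the reference.
    On log2 data, an 8-fold change is a difference of log2(8) = 3."""
    return {p for p, s, r in zip(probes, sample, reference)
            if s - r > np.log2(fold)}

# Venn-style overlap: probes >8-fold up in BOTH closing samples vs. the patent DA.
overlap = upregulated(case2, patent) & upregulated(case3, patent)
print(sorted(overlap))   # → ['p1', 'p3']
```

Here p4 passes the cutoff only against Case 3, so it falls outside the intersection, which is exactly what the overlapping region of the Venn diagram captures.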
Expression of PRSS55, identified as an aorta-dominant gene in rodent microarray data, was elevated in the patent DA tissue.

Table 4. Fifty-two genes that overlapped and were enriched (>8-fold) in the tunica media of the patent DA tissue (Case 1) compared to that of the closing DA tissues (Cases 2 and 3).

Jagged 1 Was Highly Expressed in the Closing DA Tissues

Previous reports using genetically modified mice clearly demonstrated that SMCs of the DA are derived from cardiac neural crest cells, and these cells contribute to SMC differentiation in the DA. Since the transcriptome analysis revealed that the neural crest cell-related gene JAG1 was abundantly expressed in the closing DA tissues compared to the patent DA tissue (Figure 4A), we performed immunohistochemistry to examine protein expression of jagged 1. In agreement with the transcriptome data, jagged 1 was highly expressed in the closing DA tissues (Cases 2-4) (Figure 5A). Calponin is well recognized as a differentiated SMC marker, and it was decreased in the DA tissues of Jag1-deficient mice. A strong immunoreaction for calponin was observed in the closing DA tissues but was not as strong in the patent DA tissue (Figure 5B).
Transcriptomic Characteristics of the IT and the Tunica Media in the Closing DA Tissues

Lastly, we investigated the difference in gene expression between the IT and the tunica media in the closing DA tissues. IT formation is partly attributed to migration and proliferation of tunica media-derived SMCs. Gene expression analysis indicated that there were different transcriptomic characteristics between the IT and the tunica media in the closing DA tissues (Clusters B1 and B2 in Figure 2). We, thus, compared the expression of genes between the IT and the tunica media of the closing DA tissues (Cases 2 and 3). The GSEAs between the IT and the tunica media were performed using all gene sets related to biological processes in the Gene Ontology (GO) (size > 300). The analyses revealed that the IT was significantly correlated with 89 biological processes (FDR < 0.25, Table 5) and that the tunica media correlated with 81 biological processes (FDR < 0.25, Table 6). The IT of the closing DAs was significantly correlated with more than 10 migration- and proliferation-related gene sets (e.g., GO_MICROTUBULE_CYTOSKELETON_ORGANIZATION and GO_CELL_DIVISION) (Figure 6A,B). Wnt signaling-related gene sets (GO_REGULATION_OF_WNT_SIGNALING_PATHWAY, GO_CANONICAL_WNT_SIGNALING_PATHWAY, and GO_CELL_CELL_SIGNALING_BY_WNT) were also enriched in the IT of the closing DA tissues.
The tunica media of the closing DAs was significantly correlated with vascular development-related gene sets (GO_REGULATION_OF_VASCULATURE_DEVELOPMENT, GO_BLOOD_VESSEL_MORPHOGENESIS, GO_VASCULATURE_DEVELOPMENT, and GO_CIRCULATORY_SYSTEM_DEVELOPMENT) (Figure 6C,D). Five adhesion-related gene sets, including GO_BIOLOGICAL_ADHESION, were enriched in the tunica media compared to the IT in the closing DA tissues. Figure 6E presents a Venn diagram showing probe sets that were upregulated (>8-fold) in the IT of the closing DA tissues compared to the tunica media of the same DA tissues (Cases 2 and 3). Eight overlapping probe sets consisted of eight genes (Figure 6E and Table 7). POU4F1, FGF1, and PROCR are related to cell division and the cell cycle.
FGF1 is reportedly involved in proliferation and migration of vascular SMCs. The Venn diagram in Figure 6F shows 12 probe sets that were commonly upregulated (>8-fold) in the tunica media of the closing DA tissues compared to the IT of the same DA tissues (Cases 2 and 3), which consisted of eight genes (Figure 6F and Table 8). Several genes were related to muscle structure development, such as BDKRB2, MSC, and DCN. DCN is also involved in extracellular matrix constituents and stabilizes collagen and elastic fibers.

Discussion

The present study demonstrated that neonatal closing DAs exhibited prominent IT and sparse elastic fiber formation, which are typical human DA characteristics. Postnatal closing DA tissues had abundant expression of the cardiac neural crest-related protein jagged 1 and the differentiated smooth muscle marker calponin compared to the patent DA tissue. In contrast, the patent DA tissue had a distinct morphology (e.g., aorta-like elastic lamellae and a poorly formed IT) and distinct gene profiles, such as second heart field-related genes, compared to the closing DA tissues. The DA is originally derived from the left sixth aortic arch artery and has a unique cell lineage. SMCs of the DA are derived from cardiac neural crest cells and DA endothelial cells (ECs) are derived from the second heart field, while both SMCs and ECs of the adjacent pulmonary artery are derived from the second heart field. In the ascending aorta, ECs are derived from the second heart field, and SMCs of the inner medial layer and outer layer are derived from neural crest cells and the second heart field, respectively. The heatmap of transcriptome data indicated that these cell lineage-related genes were differentially expressed between the closing DA tissues and the patent DA tissue. Although it was difficult to clearly classify these genes into each lineage due to some overlap, cardiac neural crest-related genes such as JAG1 were highly expressed in the closing DA tissues.
In contrast, second heart field-related genes, such as ISL1, were enriched in the patent DA tissue. The DA has been reported to have more differentiated SMCs compared to the adjacent great arteries. Slomp et al. demonstrated high levels of calponin expression in the tunica media of the fetal human DA. Similarly, Kim et al. reported the presence of highly differentiated SMCs in the fetal rabbit DA, according to SM2 expression. These differentiated SMCs have a well-developed contractile apparatus, which is compatible with potent postnatal DA contraction. Additionally, several mutant mice with patent DA had less differentiated SMCs. Ivey et al. reported that mice lacking Tfap2b, a neural crest-enriched transcription factor, had decreased expression of calponin on embryonic day 18.5. Huang et al. utilized mice harboring a neural crest-restricted deletion of the myocardin gene and demonstrated decreased SMC contractile proteins on embryonic day 16.5. In addition, SMC-specific Jag1-deficient mice had limited expression of SMC contractile proteins, even at postnatal day 0. In the present study, the human DA tissues exhibited decreased levels of jagged 1 and calponin in the patent DA tissue compared to the closing DA tissues. These altered SMC differentiation markers may contribute to postnatal DA patency in humans. Prematurity and several genetic syndromes are reported to increase the incidence of patent DA, and patent DA can be classified into three groups, i.e., patent DA in preterm infants, patent DA as part of a clinical syndrome, and non-syndromic patent DA. This study included only one case of patent DA, who had heterotaxy syndrome (polysplenia), which is a major study limitation.
Indeed, in the present study, the expression of Nodal, which plays a primary role in the determination of left-right asymmetry, was positively correlated with the closing DAs compared to the patent DA with heterotaxy (Figure 3B). Therefore, we could not conclude whether the genes differentially expressed between the closing DAs and the patent DA were associated with DA patency rather than with heterotaxy. There are several syndromes (mutated genes) associated with patent DA, such as Cantú (ABCC9), Char (TFAP2B), DiGeorge (TBX1), Holt-Oram (TBX5), and Rubinstein-Taybi (CREBBP) syndromes. In addition to these syndromes, heterotaxy syndrome was reported to have a higher incidence of patent DA. Notch signaling pathways have been reported to play a role in the establishment of left-right asymmetry via regulation of Nodal expression. Mutants for the Notch ligand Dll1 or double mutants for Notch1 and Notch2 exhibited defects in left-right asymmetry. Dll1-null mutants die early in embryonic development due to severe hemorrhages, and it has not yet been elucidated whether this Dll1-mediated Notch signaling is involved in the pathogenesis of patent DA. It has been reported that mice with combined SMC-specific deletion of Notch2 and heterozygous deletion of Notch3 showed patent DA, but not heterotaxy. In this study, we delineated the low levels of JAG1, a Notch ligand, in the patent DA with heterotaxy syndrome. Jag1-null mice are early embryonic lethal due to hemorrhage, and SMC-specific Jag1-deleted mice are postnatally lethal due to patent DA. These Jag1 mutants were not reported to exhibit heterotaxy. In humans, there is no obvious relationship between Alagille syndrome (JAG1) and patent DA or heterotaxy. In mice, the phenotypes of patent DA or heterotaxy seem to depend on the ligands and receptor isoforms of Notch signaling. In addition, there are differences between mice and humans in the phenotypes caused by genetic mutations.
Analysis of non-syndromic patent DA would provide further insights into the molecular mechanisms of closure and patency of the human DA. Yarboro et al. performed RNA sequencing to determine genes that were differentially expressed between the preterm human DA and aorta at 21 weeks of gestation, a much earlier time point than that used in our study. They found that several previously recognized DA-dominant genes from rodent studies (e.g., ABCC9, PTGER4, and TFAP2B) were also upregulated in the preterm human DA tissues compared to the aorta. Some DA-dominant genes (e.g., EGR1 and SFRP1) in the preterm human DA were upregulated in the closing DAs compared to the patent DA in the present study. In addition, some aorta-dominant genes, including ALX1, were upregulated in the patent DA compared to the closing DA, which might partly support the aortification phenotype of the patent DA. Jag1 was reported to be a term-DA-dominant gene rather than a preterm-DA-dominant gene in rats, suggesting that Jag1 contributes to normal DA development. In addition to these previously reported genes, the gene profiles in our study potentially provide novel candidate genes (e.g., APLN and LTBP3) that may contribute to vascular SMC development and function. Further study is needed to understand the roles of these genes in DA development. Mueller et al. performed DNA microarray analysis using postnatal human DAs. Their DA samples comprised two stent-implanted DAs, one ligamentous DA, and one un-stented open DA. We compared our data to their un-stented open DA-dominant genes; however, we could not find obvious overlapping genes. This study elucidated the transcriptomic differences between the closing DA tissues and the patent DA tissue in humans. However, one of the limitations of this study pertains to the different durations of PGE1 administration. In utero, the DA is dilated by prostaglandin E2 (PGE2), which is mainly derived from the placenta.
After birth, the loss of the placenta and the increased blood flow through the lung, which is the major site of PGE catabolism, cause a decline in circulating PGE2. This decline in PGE2 contributes to postnatal DA contraction. We previously reported PGE2-induced structural DA remodeling via the prostaglandin receptor EP4 (e.g., IT formation and attenuation of elastic laminae in the tunica media). In humans, Mitani et al. reported that lipo-PGE1 administration increased IT formation in the DA. Gittenberger-de Groot et al. reported that PGE1 treatment induced histopathologic changes (e.g., edema) in the human DA. In this study, Case 4, who received the longest PGE1 administration (98 days), had prominent IT formation and less visible layered elastic fibers. This study also demonstrated that the duration of PGE1 treatment affected the expression of genes such as cell cycle-related genes. On the basis of these findings, postnatal PGE1 administration was thought to influence not only structural changes but also gene expression in the postnatal DA. In 1977, Gittenberger-de Groot et al. performed histological analysis of 42 specimens of postnatal human DAs ranging in age from 12 h after premature delivery to 32 years. An abnormal wall structure of the DA was found in all 14 patients who were over 4 months of age, and the most prominent feature was an aberrant distribution of elastic material, such as an unfragmented subendothelial elastic lamina. Three of the 14 patients also showed countable elastic laminae in the tunica media, namely, aortification. The histological findings of the patent DA (Case 1) in the present study were consistent with this aortification type, showing an aberrant distribution of elastic materials. As mentioned above, a major limitation of this study is the use of only one patent DA sample, which cannot represent the whole entity of patent DA.
It is difficult to obtain large numbers of samples with a variety of different congenital heart diseases because isolation of the DA is possible only in a limited number of surgical procedures (e.g., aortic arch repair). However, transcriptome comparisons of different types of patent DA tissues would be more informative for elucidating the pathogenesis of the human patent DA.

Conclusions

Transcriptome analysis using the IT and the tunica media of human DA tissue revealed different gene profiles between the patent DA and the closing DA tissues. Cardiac neural crest-related genes such as JAG1 were highly expressed in the tunica media and IT of the closing DA tissues compared to the patent DA. Second heart field-related genes, such as ISL1, were enriched in the patent DA. The data from this study indicate that patent DA tissue may have different cell lineages from closing DA tissue. Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/jcdd8040045/s1: Figure S1. Differential gene expression between the longer-term PGE1-treated human ductus arteriosus (DA) tissue (Case 4) and the shorter-term PGE1-treated DA tissues (Cases 2 and 3); Table S1. Gene Ontology biological process terms (size > 300) that were significantly upregulated (FDR < 0.25) in the outer part of long-term PGE1-treated human ductus arteriosus (DA) tissue (Case 4) compared to the tunica media of short-term PGE1-treated DA tissues (Cases 2 and 3); Table S2. Twenty genes that overlapped and were enriched (>8-fold) in the outer part of a longer-term PGE1-treated DA tissue (Case 4) compared to the IT of shorter-term PGE1-treated DA tissues (Cases 2 and 3). 
Institutional Review Board Statement: The study was conducted according to the guidelines of the Declaration of Helsinki, and approved by the Institutional Ethics Committee of Tokyo Medical University and Kanagawa Children's Medical Center (protocol codes: T2020-0238 and 1502-05; date of approval: 16-November-2020 and 9-July-2015). Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Data Availability Statement: The data presented in this study are available within the article. |
A combination of the Vigenere algorithm and the One Time Pad algorithm in the Three-Pass Protocol

Cryptography is the method of delivering messages in secret, so that only the intended recipient can read the message. In this study, the cryptographic algorithms used are the Vigenere Cipher and the One Time Pad. However, the security of both algorithms depends on the security of the algorithm key. The Three-Pass Protocol is a scheme of work that lets two people exchange secret messages without performing a key exchange, so the two symmetric cryptographic algorithms are combined in a Three-Pass Protocol scheme. The purpose of combining the two algorithms in the Three-Pass Protocol is to secure the image message without a key exchange process between sender and recipient. The results of the research and testing using GetPixel show that safeguarding the image file using the combination of the Vigenere Cipher and One Time Pad algorithms restores the original image file intact; therefore, it meets the data integrity parameter. The test results based on the time parameter show that the execution time of the program is directly proportional to the size of the image. This result is related to the formula being applied to every pixel of the image.

Introduction

The exchange of information today is effortless, especially for digital images. This can be seen from the increasing number of social media platforms that provide image exchange as their main feature, such as Instagram. Images are created from pixels which contain three colors, red, green, and blue, referred to as RGB. Each pixel color comprises one byte of information that indicates the intensity of the image's color. The ease of transferring images makes people compete to display their best pictures, even though not a few of them are private. A problem arises when irresponsible parties abuse an image belonging to someone else, and this poses a threat to the related party. 
For that reason, digital images very much need security. One data security technique is cryptography. Cryptography is both the art and science of protecting a message by encoding it into a form that can no longer be understood. The primary objectives of cryptography are integrity, authentication, confidentiality, and non-repudiation. The Vigenere Cipher is a symmetric key algorithm in which the encryption key is the same as the decryption key. In the Vigenere Cipher algorithm, the keyword is repeated as often as required to match the length of the plaintext. The One Time Pad algorithm is known as a perfect secrecy algorithm. In the One Time Pad algorithm, the key is the same length as the plaintext. The security of the two algorithms above depends on the key security of the algorithm. The Three-Pass Protocol is a work scheme that allows two people to exchange messages without exchanging keys. Symmetric algorithms have a weakness in the key because it is predictable and straightforward; in addition, the security of the network during the key exchange also determines the security of the message. Therefore, the purpose of this research is to cover the weakness of symmetric algorithms by merging two symmetric algorithms and using the Three-Pass Protocol scheme so that a key exchange process is no longer needed.

Cryptography

Cryptography is the science of delivering messages in secret (i.e., in encrypted or disguised form) so that only the intended recipient of the message can remove the disguise and read (or understand) it. The original message is called plaintext and the secret form is called ciphertext. The process of changing plaintext into ciphertext is called encryption. The process of restoring the ciphertext into plaintext, which is carried out by the recipient who has the knowledge to remove the disguise, is called decryption. Cryptography has four purposes, namely: 1. Confidentiality: information is kept secret from anyone except the authorized party. 2. 
Integrity: the message has not been changed at all during the shipping process.

MATEC Web of Conferences 197, 03008 https://doi.org/10.1051/matecconf/201819703008 AASEC 2018

3. Authentication: the sender of the message is genuine; an alternative term is authentication of the origin of the data. 4. Non-repudiation: the message sender cannot deny having created the message.

Vigenere Algorithm

The Vigenere Cipher algorithm was first published in 1586 by Blaise de Vigenere. The Vigenere Cipher is a method for encrypting alphabetic text using a series of different Caesar Ciphers based on the letters of a keyword. This algorithm is a symmetric key algorithm, since knowing the encryption key is tantamount to knowing the decryption key, and it is a polyalphabetic substitution cipher. The keyword in the Vigenere Cipher is repeated as often as necessary to cover all the plaintext. Encryption and decryption in the Vigenere cipher can be done easily using an enciphering table and its corresponding deciphering table. Mathematically, the processes of encryption and decryption are formulated as follows:

Ci = (Pi + Ki) mod n
Pi = (Ci - Ki) mod n

where P is plaintext, K is key, C is ciphertext, and n is the number of characters used.

One Time Pad Algorithm

Gilbert S. Vernam invented the One Time Pad algorithm in 1917. The One Time Pad is an algorithm with a completely random key, used only once, whose length equals the length of the plaintext. Therefore, this algorithm has perfect secrecy. The One Time Pad has earned a reputation as a powerful yet simple algorithm with a high level of security, comparable to modern cryptographic algorithms. Mathematically, the processes of encryption and decryption are formulated in the same way:

Ci = (Pi + Ki) mod n
Pi = (Ci - Ki) mod n

where P is plaintext, K is key, C is ciphertext, and n is the number of characters used.

Three-Pass Protocol

The Three-Pass Protocol is a framework that allows one party to send an encrypted message securely to another party without a key exchange process. Adi Shamir devised this protocol. 
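The Vigenere Cipher and One Time Pad operations described above can be sketched byte-wise, with n = 256 to match the one-byte RGB pixel values the paper works with. The paper's own implementation uses C# and GetPixel; this Python sketch and its function names are my own illustration, not the authors' code.

```python
def vigenere_encrypt(plain: bytes, key: bytes) -> bytes:
    # Ci = (Pi + Ki) mod 256; the keyword repeats to cover the plaintext.
    return bytes((p + key[i % len(key)]) % 256 for i, p in enumerate(plain))


def vigenere_decrypt(cipher: bytes, key: bytes) -> bytes:
    # Pi = (Ci - Ki) mod 256
    return bytes((c - key[i % len(key)]) % 256 for i, c in enumerate(cipher))


def otp_encrypt(plain: bytes, key: bytes) -> bytes:
    # One Time Pad: a random key exactly as long as the plaintext.
    assert len(key) == len(plain)
    return bytes((p + k) % 256 for p, k in zip(plain, key))


def otp_decrypt(cipher: bytes, key: bytes) -> bytes:
    assert len(key) == len(cipher)
    return bytes((c - k) % 256 for c, k in zip(cipher, key))
```

The only difference between the two algorithms in this form is the key discipline: the Vigenere keyword repeats, while the One Time Pad key is random, single-use, and full-length.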
This protocol allows Alice and Bob to communicate securely without the exchange of keys, either a secret key or a public key. It assumes a commutative symmetric cipher, i.e., EA(EB(P)) = EB(EA(P)). Alice's secret key is A, Bob's secret key is B, and Alice wants to send a message P to Bob. Here is how the scheme works:

1. Alice encrypts P with her key and sends the result C1 = EA(P) to Bob.
2. Bob encrypts C1 with his key and sends C2 = EB(EA(P)) to Alice.
3. Alice decrypts C2 with her key and sends the result to Bob: C3 = DA(EB(EA(P))) = DA(EA(EB(P))) = EB(P).
4. Bob decrypts C3 with his key to get P.

The Combination of the Vigenere and One Time Pad Algorithms in the Three-Pass Protocol

How the combination of the Vigenere and One Time Pad algorithms works in the Three-Pass Protocol is shown in figure 1. Figure 1 shows that the Three-Pass Protocol is a framework that allows the sender to send encrypted messages to the recipient without the need to distribute the sender's key to the receiver. The sender and receiver each have their own key; in this case, Alice uses the key for the Vigenere cipher, and Bob uses the key for the One Time Pad.

The Calculation Process

The calculation process using the combination of the two algorithms in the Three-Pass Protocol scheme is as follows. For example, the sender wants to encrypt the plain image shown in figure 2 using the Vigenere Cipher. The same calculation method is carried out up to the last row and column. This first encryption process by the sender produces cipher image I, shown in figure 3. The same calculation method is carried out up to the last row and column. This second encryption process by the receiver produces cipher image II, shown in figure 4. The same calculation method is carried out up to the last row and column. This first decryption process by the sender produces cipher image III, shown in figure 5. The same calculation method is carried out up to the last row and column. 
This last decryption process by the receiver produces cipher image IV, shown in figure 6.

Data Integrity Testing

Data integrity is one of the parameters used to test the implementation of the Three-Pass Protocol scheme with a combination of the two classical cryptographic algorithms. Testing is done using GetPixel in C# to read the RGB pixel values in the image. Based on the results of system testing over the whole process of encryption and decryption of a plain image with the combination of the Vigenere Cipher and One Time Pad algorithms in the Three-Pass Protocol scheme described earlier, the RGB pixel values of the plain image before encryption are equal to the RGB pixel values of the image resulting from the One Time Pad decryption (cipher image IV). This proves that the combination of the Vigenere Cipher and One Time Pad algorithms in the Three-Pass Protocol scheme meets the data integrity parameter.

Testing Algorithm Processing Time

The relation of the program execution time to the size of an image can be seen in figure 7 (Fig. 7. Graph of algorithm processing time). The figure shows that processing time is linearly proportional to the size of the image, which means that the larger the image, the longer the program execution takes.

Conclusions

The conclusions that can be drawn from this research are as follows. The Three-Pass Protocol implementation with the combination of the Vigenere Cipher algorithm and the One Time Pad algorithm can secure the image file successfully, because the cipher image looks different from the original image. Both the sender and the receiver can use their own key and algorithm without performing a key exchange. 
Based on test results with GetPixel, the encryption and decryption processes in image file security using the combination of the Vigenere Cipher and One Time Pad algorithms in the Three-Pass Protocol fulfill the data integrity parameter. Based on the graph of the relationship between encryption/decryption processing time and pixel size, the processing time is directly proportional to the size of the image: the larger the image, the greater the time required for the encryption and decryption processes. 
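The end-to-end Three-Pass Protocol workflow described in this paper can be simulated in a few lines. The sketch below is my own illustration: it uses XOR one-time keys rather than the paper's additive Vigenere/One Time Pad operations, because XOR trivially satisfies the commutativity requirement EA(EB(P)) = EB(EA(P)) that the protocol assumes.

```python
import os


def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR is self-inverse and commutes across keys, so encrypting with A
    # then B equals encrypting with B then A, as the protocol requires.
    return bytes(d ^ k for d, k in zip(data, key))


P = b"secret pixel data"
key_a = os.urandom(len(P))  # Alice's private key (never transmitted)
key_b = os.urandom(len(P))  # Bob's private key (never transmitted)

c1 = xor_cipher(P, key_a)          # pass 1: Alice -> Bob, C1 = EA(P)
c2 = xor_cipher(c1, key_b)         # pass 2: Bob -> Alice, C2 = EB(EA(P))
c3 = xor_cipher(c2, key_a)         # pass 3: Alice removes her key, C3 = EB(P)
recovered = xor_cipher(c3, key_b)  # Bob removes his key and recovers P
assert recovered == P
```

Note that neither key ever leaves its owner: only the three ciphertexts C1, C2, and C3 cross the network, which is exactly the property the paper exploits to avoid a key exchange.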
Fatty acid alkyl ester is widely used in the fields of cosmetics and medicines, and fatty acid alkyl ester derived from animal fats and vegetable oils is used in foods. Further, attention has been paid to fatty acid alkyl ester serving as a fuel to be added to gas oil. In other words, this is a biodiesel fuel derived from animal fats and vegetable oils, which has been developed in order to reduce the amount of carbon dioxide exhausted. Fatty acid alkyl ester is used directly as a substitute for gas oil etc., or used as a fuel to be added to gas oil etc. at a certain ratio. The biodiesel fuel has various advantages, such as causing less damage to the environment compared with a conventional diesel fuel derived from petroleum.
Further, glycerin is used mainly as a raw material for nitroglycerin, and also used in various fields such as a raw material for alkyd resin, medicines, foods, printing ink, cosmetics etc.
An example of a method for producing such fatty acid alkyl ester and glycerin is a method in which triglyceride that is a main component of fat and oil is subjected to ester exchange with alcohol.
(RCOO)3C3H5 + 3 R'OH → 3 RCOOR' + C3H5(OH)3

where R represents an alkyl group having 6-22 carbon atoms or an alkenyl group having 6-22 carbon atoms with one or more unsaturated bonds, and R'OH represents the alcohol used in the transesterification.
In general, as a method employing transesterification between fats and oils and alcohols, a method using a homogeneous alkaline catalyst is industrially used.
However, usage of a homogeneous alkaline catalyst makes the step of separating and removing the catalyst complicated. Further, free fatty acid included in the fat and oil is saponified by the alkaline catalyst, and consequently a soap is produced as a by-product. This requires a step of washing out the soap with a large amount of water and decreases the yield of fatty acid alkyl ester due to emulsification caused by the soap. Further, the step of refining glycerin becomes complicated.
In order to solve the problem, methods have been developed for producing fatty acid alkyl ester and/or glycerin over a solid catalyst instead of a homogeneous alkaline catalyst (see Patent Documents 1-5, for example). The method using the solid catalyst has a less complicated process and produces a smaller amount of wastes, such as waste water and waste salts, in the reaction, compared with the method using the homogeneous alkaline catalyst. Further, Patent Document 6 discloses a method for producing fatty acid alkyl ester and/or glycerin over a solid catalyst.
Production of fatty acid alkyl ester and glycerin over the solid catalyst does not require a complicated operation in its production process, and produces a smaller amount of wastes, such as waste water and waste salts, in the reaction, compared with the method using the homogeneous alkaline catalyst.
However, in general, a transesterification is an equilibrium reaction. Therefore, both in a case of the homogeneous alkaline catalyst and a case of the solid catalyst, it is necessary to use an excessive amount of a raw material (alcohol in general) in order to obtain a high yield of a product.
Recently, in view of environmental considerations and the reduction of production costs, it is requested that any material that can be reused through regeneration be reused as far as possible. Therefore, in producing fatty acid alkyl ester and glycerin, it is requested that, out of the excessive amount of alcohol used in the transesterification, the unreacted alcohol that remains without being used in the reaction be separated and refined from the reaction liquid so as to be reused as a raw material. For example, Patent Document 7 discloses a method in which unreacted alcohol that remains without being used in the reaction is evaporated from the reaction liquid by a pressure flash, then refined as alcohol through evaporation and reused as a raw material for the transesterification. Patent Document 1: Japanese Unexamined Patent Publication No. Tokukai 2005-200398 (published on Jul. 28, 2005) Patent Document 2: Japanese Unexamined Patent Publication No. Tokukai 2006-225352 (published on Aug. 31, 2006) Patent Document 3: Japanese Unexamined Patent Publication No. Tokukai 2005-177722 (published on Jul. 7, 2005) Patent Document 4: Japanese Unexamined Patent Publication No. Tokukaihei 7-173103 (published on Jul. 11, 1995) Patent Document 5: French Patent Publication No. 2752242, specification Patent Document 6: Japanese Unexamined Patent Publication No. Tokukai 2005-206575 (published on Aug. 4, 2005) Patent Document 7: U.S. Unexamined Patent Publication No. 2004/0034244, specification Patent Document 8: U.S. Unexamined Patent Publication No. 2005/0113588, specification
// PublicKey returns CA public key
//
// - Output sample
// {
// "result":"success",
// "public_key":"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQC6rGI3i3D1fvay1MFKHjEfcvKA
// A6vuNH5ayPcmOIoeHvkXPO6uCp4pbSNmy45szxyTEjGYJx0F6qylUzi4jZ+1BIpq5QStetsP4pryLhd
// vK21bkCIBAqZbmw6Wc4D2Z+Qc7Is1/ZBr3g2lmfWApNqFmlwnDGpH6Hp0lRdBtanTz3/er99JS9WRXF
// c/uRGkY6n/fX3VELTixmcyRIIQDI66Cy+6jkS9nDn4E8Hu2mshWP/VtOok4DsIBk1YQb9wSeTOtmIZf
// EjBbzcKyBorYHWqYvNXN4wDtKtSTypjE1d42qodK3sKNMqqrIXdicHUId967oL7497+jDklpfZ24z3O
// gM7rdXRijDJUP6RcBpKFSriGOV6wolYop7Rc/DLgA16MOx8Zh/iVh3LI0zKyeQhG5tNO/hoNPe8Bp0k
// IXio9xBt/TyAHl3OfFQ6rYOwefvmp2ladV2Wy/BeIOPnswO0jk288qpzUDYE8sOlrtn3DZfqG5auDAe
// A+7XNuDuwUmwjSFTRz4nAtooCaF8UTysIfHYFgtKvU+xCIXWsHMr4BSaF1B3f2434r4Hn0gfWeg5CSu
// 0nO45S07q3TKjnoo644zmHtuUUw/+fG1ctmmjq1DO85TcotqdW1oT/SZwYxK7hqwvY7S5uClkUSXmDG
// UY3HMVIFLJPzCBi4bjhIX6Jbdw==\n"
// }
func (h AppHandler) PublicKey(c echo.Context) error {
var publicKey string
if h.config.GetBool("ca_external") {
v := Vault{h.config.GetString("ca_role_id"), h.config.GetString("ca_external_secret_id"), h.config, ""}
var err error
publicKey, err = v.GetExternalPublicKey()
if err != nil {
return c.JSON(http.StatusInternalServerError,
map[string]string{"result": "fail", "message": "Error getting ssh public key", "details": err.Error()})
}
} else {
publicKey = h.config.GetString("ca_public_key")
}
return c.JSON(http.StatusOK, map[string]string{"result": "success", "public_key": publicKey})
} |
Premenstrual syndrome prevalence in Turkey: a systematic review and meta-analysis. The aim of this study was to determine the prevalence of premenstrual syndrome (PMS) among reproductive-age women living in Turkey through a systematic review and meta-analysis. Keywords were searched in the Turkish Medline, PubMed, Google Scholar, Scopus, and ISI Web of Knowledge databases. This review included full-text research articles conducted in Turkey, published in Turkish or English between 2014 and 2018, and reporting prevalence. A total of 18 studies conducted in Turkey reporting the prevalence of premenstrual syndrome were included, with a total of 6890 women participating in these studies. The overall premenstrual syndrome prevalence in the studies examined in this systematic review was 52.2%. Subgroup prevalence was found to be 59% in high school students, 50.3% in university students, and 66% in women in the general population. The meta-regression analysis showed no significant relationship between the mean age of the participants and the prevalence of premenstrual syndrome. The results show that premenstrual syndrome is prevalent among Turkish reproductive-age women. Health professionals should organize training for women to gain the ability to manage PMS symptoms. Further interventional studies are needed to cope with PMS. 
package com.mglj.wms.es.server;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
@SpringBootApplication
public class WmsEsServerApplication {
public static void main(String[] args) {
SpringApplication.run(WmsEsServerApplication.class, args);
}
}
|
Using a hybrid model of Artificial Bee Colony and Genetic Algorithms in Software Cost Estimation. Software Cost Estimation (SCE) is one of the most important issues in the development cycle, in management decisions, and in the quality of a software project. Without certainty about the exact cost of developing software projects, companies encounter numerous challenges, and wrong decisions made by project managers on the basis of inaccurate cost estimates cause irreparable damage. To this end, the factors affecting the development of software projects should be evaluated to ensure project success. The COCOMO model is the main model for SCE, which acts based on criteria and quantities such as the number of Lines of Code (LOC) or Function Point Analysis (FPA). Research in recent years has shown that the COCOMO model does not perform well in SCE. In this paper, we study SCE using a hybrid of the Genetic Algorithm (GA) and Artificial Bee Colony (ABC), which are meta-heuristic algorithms. Test results show that the proposed model, GA, and ABC algorithms have lower MRE error values than the COCOMO model. Also, the hybrid model has better convergence compared with the GA and ABC algorithms. 
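The COCOMO baseline and the MRE metric mentioned in the abstract can be sketched briefly. This is my own illustration, not the paper's implementation: the coefficients are the standard Basic COCOMO values for the "organic" project class, and both function names are invented here.

```python
def cocomo_basic_effort(kloc: float, a: float = 2.4, b: float = 1.05) -> float:
    # Basic COCOMO effort in person-months: Effort = a * (KLOC)^b.
    # Defaults are the published coefficients for an "organic" project.
    return a * kloc ** b


def mre(actual_effort: float, estimated_effort: float) -> float:
    # Magnitude of Relative Error, the metric used to compare the models.
    return abs(actual_effort - estimated_effort) / actual_effort
```

For example, cocomo_basic_effort(10) evaluates to roughly 26.9 person-months under the organic defaults; meta-heuristic approaches such as the GA/ABC hybrid tune the a and b parameters to minimize the mean MRE over a project dataset.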
import math
import random


def gaussian(x, mu, sigma):
    """Normal probability density function."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))


class LogNormalDistribution:
    r"""
    log d ~ Normal(mu, sigma)

    Note that in some references we use \rho = 1/\sigma^2
    """

    def __init__(self, mu, sigma):
        self.mu = mu  # this is log of what we'd think of as mu
        self.sigma = sigma

    def prob(self, v):
        # Density of log(v) under the underlying normal; the log-normal
        # density of v itself would divide this by v.
        return gaussian(math.log(v), self.mu, self.sigma)

    def mean(self):
        return math.exp(self.mu + self.sigma ** 2 / 2)

    def mode(self):
        return math.exp(self.mu - self.sigma ** 2)

    def median(self):
        return math.exp(self.mu)

    def variance(self):
        # Var = exp(2*mu + sigma^2) * (exp(sigma^2) - 1)
        ssigma = self.sigma ** 2
        return math.exp(2 * self.mu + ssigma) * (math.exp(ssigma) - 1)

    def draw(self):
        return math.exp(random.normalvariate(self.mu, self.sigma))
Democratic members of the Senate Select Committee on Intelligence are in a quandary about how to respond after Chairman Pat Roberts (R-Kan.) defeated their proposals to change the panel’s rules and investigate the treatment of intelligence detainees.
Roberts’s uncompromising management of the meeting stunned Democratic lawmakers at a closed-door meeting before the congressional recess. They had obliged Roberts to schedule the meeting by sending him a letter signed by seven members, two more than committee rules require to force the chairman’s hand.
Their lack of headway poses a dilemma for them: Should they break the committee’s traditional nonpartisanship or keep it? That nonpartisanship gives Republicans only a one-vote advantage over Democrats. But it also gives the majority staff director unique control over almost the entire committee. Some Democrats say sticking to the present arrangement ties them to what they believe is the GOP’s feckless oversight of intelligence.
To gain clout, committee Democrats also sought to change committee rules and, in their reckoning, bring them into line with Senate Resolution 445, the internal reorganizing measure the chamber passed last year to improve oversight of the intelligence community. Roberts agreed to the resolution on a voice vote.
Both of the committee Democrats’ efforts failed. So, despite successfully amending last year’s internal organizing resolution to gain greater control of the committee, Democrats have little more authority in practice.
Intelligence Committee proceedings are usually kept quiet, but the highly charged fight over the rules and controversy over detainee treatment have forced details to surface.
One proposed rule change would have given Democratic lawmakers power to hire the staff designated to them without approval by a vote of the full committee. The second would have given majority and minority staff directors joint tasking authority over the staff paid for with Democratic funds. An amendment to S.R. 445 last year gave Democrats control over 40 percent of the panel’s resources, but the Republican staff director has managing control over all but three committee aides.
Roberts would allow Democrats only one vote on changing the rules, forcing them either to combine their two proposals or drop one. Rockefeller elected to drop his proposal on joint tasking authority, apparently in the belief that the changes would have a better prospect of attracting Republican support if voted on individually. The proposal failed on a party-line vote.
But Roberts refused to allow a vote on an inquiry into detainee interrogation, leading some Democrats to speculate that he was attempting to prevent centrist Republican members of the committee such as Sens. Olympia Snowe (Maine) and Mike DeWine (Ohio) from casting what could become a tough vote.
Democrats believe Roberts is attempting to stymie an investigation of detainees to shield the administration from embarrassment. Revelations of the treatment of prisoners in Iraq’s Abu Ghraib prison was a major embarrassment in Bush’s first term, and recent reports of the deaths of detainees under CIA control, the rendition of prisoners to countries that use torture as an interrogation tool, and the existence of ghost detainees threaten to become a growing scandal in the second term.
A GOP aide said complaints about allowing Democrats only one vote on changing the rules, as opposed to separate votes on rule changes, created a “distinction without a difference.” The aide said Democrats can call another meeting at which their second proposal, joint tasking authority, can be voted on, and likely would be defeated by another party-line vote.
Democrats are likely to continue working within the committee to initiate an inquiry on detainee treatment but may become more inclined to take the issue to the floor for a vote. Among the possibilities is the offer of an amendment to legislation that would direct the Intelligence Committee to launch an investigation into detainee abuse or an amendment that would set up an independent commission to look into the issue.
During a speech at the Woodrow Wilson International Center for Scholars last month, Roberts defended his decision not to take up a formal investigation of allegations of detainee abuse, saying, “Congress created the CIA’s Office of Inspector General and the Department of Justice to conduct these types of investigations in the first place. Let’s allow them to do their work.”
But Democrats counter that the committee has previously pursued investigations simultaneously with the CIA’s inspector general. |
/// Step 5.
///
/// Some Column's are too big and need to be split.
/// We're now going to simulate how this might look like.
/// The reason for this is the way we're splitting, which is to prefer a split at a delimiter.
/// This can lead to a column needing less space than it was initially assigned.
///
/// Example:
/// A column is allowed to have a width of 10 characters.
/// A cell's content looks like this `sometest sometest`, which is 17 chars wide.
/// After splitting at the default delimiter (space), it looks like this:
/// ```text
/// sometest
/// sometest
/// ```
/// Even though the column required 17 spaces beforehand, it can now be shrunk to 8 chars width.
///
/// By doing this for each column, we can save a lot of space in some edge-cases.
fn optimize_space_after_split(
table: &Table,
columns: &[Column],
infos: &mut DisplayInfos,
mut remaining_width: usize,
mut remaining_columns: usize,
) -> (usize, usize) {
let mut found_smaller = true;
// Calculate the average space that remains for each column.
let mut average_space = remaining_width / remaining_columns;
// Do this as long as we find a smaller column
while found_smaller {
found_smaller = false;
for column in columns.iter() {
// We already checked this column, skip it
if infos.contains_key(&column.index) {
continue;
}
let longest_line = get_longest_line_after_split(average_space, column, table);
// If there's a considerable amount space left after splitting, we freeze the column and
// set its content width to the calculated post-split width.
let remaining_space = average_space.saturating_sub(longest_line);
if remaining_space >= 3 {
let info =
ColumnDisplayInfo::new(column, longest_line.try_into().unwrap_or(u16::MAX));
infos.insert(column.index, info);
remaining_width = remaining_width.saturating_sub(longest_line);
remaining_columns -= 1;
if remaining_columns == 0 {
break;
}
average_space = remaining_width / remaining_columns;
found_smaller = true;
}
}
}
(remaining_width, remaining_columns)
} |
package com.github.zhengframework.configuration.source;
/*-
* #%L
* zheng-configuration
* %%
* Copyright (C) 2020 <NAME>
* %%
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
* #L%
*/
import com.github.zhengframework.configuration.environment.Environment;
import com.google.common.collect.MapDifference;
import com.google.common.collect.MapDifference.ValueDifference;
import com.google.common.collect.Maps;
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Map.Entry;
import java.util.Objects;
import org.apache.commons.lang3.StringUtils;
public abstract class AbstractConfigurationSource implements ConfigurationSource {
private Map<String, Map<String, String>> oldConfig = new HashMap<>();
private List<ConfigurationSourceListener> listenerList = new ArrayList<>();
@Override
public void addListener(ConfigurationSourceListener listener) {
listenerList.add(Objects.requireNonNull(listener));
}
@Override
public void removeListener(ConfigurationSourceListener listener) {
listenerList.remove(Objects.requireNonNull(listener));
}
private void fireEvent(String environmentName, Map<String, String> newConfig) {
Map<String, String> map = oldConfig.getOrDefault(environmentName, Collections.emptyMap());
MapDifference<String, String> difference = Maps.difference(map, newConfig);
oldConfig.put(environmentName, newConfig);
if (!difference.areEqual()) {
for (Entry<String, ValueDifference<String>> entry :
difference.entriesDiffering().entrySet()) {
for (ConfigurationSourceListener listener : listenerList) {
if (listener.accept(entry.getKey())) {
ValueDifference<String> valueDifference = entry.getValue();
listener.onChanged(
entry.getKey(), valueDifference.leftValue(), valueDifference.rightValue());
}
}
}
}
}
protected abstract Map<String, String> getConfigurationInternal(Environment environment);
@Override
public Map<String, String> getConfiguration(Environment environment) {
final Map<String, String> newConfig = getConfigurationInternal(environment);
fireEvent(StringUtils.trimToEmpty(environment.getName()), newConfig);
return newConfig;
}
}
|
//
// MonstroView.h
// Monstro
//
// Created by codefo on 22/10/16.
// Copyright © 2016 <NAME>. All rights reserved.
//
#import <ScreenSaver/ScreenSaver.h>
@interface MonstroView : ScreenSaverView
@end
|
Even at the time it must have seemed like a truly bizarre idea. In the midst of the Northern Ireland Troubles, it was suggested that the entire population of Hong Kong should be uprooted and relocated in a new city built in the middle of the province.
While the scheme may appear too preposterous for words, newly-released files at the National Archives in Kew, west London, show that it nevertheless sparked a flurry of correspondence in Whitehall.
The plan was the brainchild of a lecturer at Reading University, Christy Davies, who warned that when Britain handed back Hong Kong to China in 1997, there would be no future for its 5.5 million inhabitants.
The alternative, he suggested, was to resettle them in a new “city state” to be established between Coleraine and Derry — a move, he said, which could revitalise the stagnant Northern Ireland economy.
When details of his scheme appeared in the Belfast News Letter in October 1983, they caught the eye of George Fergusson, an official in the Northern Ireland Office. He fired off a memorandum to a colleague in the Department of Foreign Affairs here, declaring: “At this stage we see real advantages in taking the proposal seriously.”
Among the benefits, he suggested, was that it would help convince the unionist population that the government in Westminster was truly committed to retaining Northern Ireland in the UK.
“If the plantation were undertaken it would have evident advantages in reassuring Unionist opinion of the open-ended nature of the Union. There would be corresponding disadvantages in relation to the minority community (and Dublin),” he said.
It is not clear whether his tongue was in his cheek when he wrote, but by the time the reply came back two weeks later from David Snoxell at the Foreign Office, somebody had twigged that the idea was perhaps not entirely serious. “My initial reaction, however, is that the proposal could be useful to the extent that the arrival of 5.5 million Chinese in Northern Ireland may induce the indigenous peoples to forsake their homeland for a future elsewhere,” Mr Snoxell drily replied. “We should not underestimate the danger of this taking the form of a mass exodus of boat refugees in the direction of South East Asia.
“On the other hand, the countries of that region may view with equanimity the prospect of receiving a God-fearing, law-abiding people with an ingrained work ethic, to replace those that have left.”
Worse, he added, the plan could have serious implications for the UK’s dispute with Dublin over the sovereignty of Lough Foyle. “The Chinese people of Hong Kong are essentially a fishing and maritime people,” he wrote. “I am sure you would share our view that it would be unwise to settle the people of Hong Kong in the vicinity of Lough Foyle until we had established our claims on the lough and whether these extended to the high or low water mark.”
A British Foreign Office colleague noted: “My mind will be boggling for the rest of the day.” |
Cluster Model Wave Function and the r.m.s. Radius of 7Li The ground state wave function of 7Li, taken to consist of an alpha-particle and triton cluster configuration and including all the nucleon exchange terms arising from the antisymmetrization of a two-oscillator shell model wave function, is used to calculate the root mean square (r.m.s.) radius of 7Li, with a view to determining the range of the parameters in the wave function. |
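For reference, the point-nucleon r.m.s. radius extracted from a normalized A-body wave function is the standard expectation value (a textbook definition, not a formula specific to this paper):

\[
\langle r^2 \rangle^{1/2} = \left[ \frac{1}{A} \left\langle \Psi \left| \sum_{i=1}^{A} \left( \mathbf{r}_i - \mathbf{R}_{\mathrm{c.m.}} \right)^2 \right| \Psi \right\rangle \right]^{1/2}, \qquad A = 7 \ \text{for } {}^{7}\mathrm{Li},
\]

where \(\mathbf{R}_{\mathrm{c.m.}}\) is the centre-of-mass coordinate, which must be removed before comparing with the measured charge radius.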
1. Field of the Invention
This invention relates to a method of assaying urea and, more particularly, this invention relates to a colorimetric urea determination method and reagent especially suitable for assaying the urea concentration in body fluids.
2. Description of the Prior Art
Colorimetric methods of urea determination utilizing o-phthalaldehyde and a chromogenic compound are well known. Jung U.S. Pat. No. 3,890,099 (June 17, 1975) and Jung et al. "New Colorimetric Reaction for End Point, Continuous Flow, and Kinetic Measurement of Urea" (Clin. Chem., Vol. 21, No. 8 at 1136-40, 1975) describe a urea determination method and reagent wherein o-phthalaldehyde and a chromogenic compound are mixed with a urea-containing liquid sample.
The o-phthalaldehyde reacts with urea to produce a substantially colorless isoindoline derivative intermediate having one or both of two alternate structures.
The chromogenic compound, N-(1-naphthyl) ethylenediamine dihydrochloride, reacts with the intermediate to form a colored reaction product of unknown structure, the concentration of which is reportedly linearly related to the urea concentration of the sample and which follows Beer's law. The concentration of the colored substance is colorimetrically determinable at an absorbance maximum position of 505 nm. N-(1-naphthyl) ethylenediamine dihydrochloride has the following structure: ##STR1##
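Since the colored product reportedly obeys Beer's law (absorbance linear in concentration), an unknown urea level can be read against a single standard by simple proportion. A minimal sketch of that readout; the 505 nm absorbance values and the 40 mg/dL standard below are invented for illustration, not taken from the patent:

```java
// Hypothetical Beer's-law readout: with a linear absorbance-concentration
// response, an unknown concentration equals the standard concentration
// scaled by the ratio of the two absorbances measured at 505 nm.
// All numeric values are invented for illustration.
public class BeerLawDemo {
    // cUnknown = cStandard * (aUnknown / aStandard), valid only while
    // absorbance remains linear in concentration (Beer's law holds).
    static double ureaConcentration(double aUnknown, double aStandard,
                                    double cStandardMgPerDl) {
        return cStandardMgPerDl * (aUnknown / aStandard);
    }

    public static void main(String[] args) {
        double c = ureaConcentration(0.30, 0.60, 40.0); // mg/dL
        System.out.println(c); // prints 20.0
    }
}
```

In practice a multi-point calibration curve would be run to confirm linearity over the working range before relying on a single standard.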
Denney U.S. Pat. No. 4,105,408 (Aug. 8, 1978), the details of which are hereby incorporated by reference, discloses five classes of chromogenic compounds which may be substituted for the chromogenic compound of the Jung disclosure. One of the classes of chromogenic compounds disclosed by Denney comprises 1, or 1,3 mono- or disubstituted hydroxy or methoxy naphthalene compounds having the following general structure: ##STR2## where R₁ = -H or -CH₃ and R₂ = -H, -OCH₃, or -OH. A preferred chromogenic compound of Denney is 1,3 dihydroxynaphthalene.
Each of the foregoing systems suffers from several disadvantages. According to Denney U.S. Pat. No. 4,105,408 (see col. 3, l. 64 to col. 4, l. 21), the chromogenic compound of Jung is synthesized from α-naphthylamine, a known carcinogen, and therefore may contain at least trace amounts thereof. Furthermore, since the Jung reagents are conventionally stored in acidic solution, it is possible that the chromogen may decompose to form its carcinogenic precursor, α-naphthylamine. Also, the Jung reagents are reportedly interfered with by a variety of sulfa drugs, at least some of which are commonly present in body fluids subject to urea analysis.
Although the Denney reagents are not derived from a known carcinogen, several disadvantages are encountered with the Denney system. Firstly, it is believed that aqueous 1,3-naphthalene diols exhibit only limited stability in the presence of acid, rendering it impossible to store an acidic working reagent solution for an extended period of time. Unsubstituted 1,3-naphthalene diols are readily transported across all membranes, thereby increasing the risk of toxicity to laboratory personnel. Also, it is believed that 1,3-naphthalene diols may be interfered with to a significant extent by sulfa drugs and other drugs sometimes found in human body fluids. |
The invention relates to an adjustable anti-theft device for use primarily with wheelchairs of the folding variety, however, this adjustable anti-theft device may be suitable for use with other types of folding apparatuses as well.
Wheelchairs are relatively expensive devices costing thousands of dollars in some cases. They generally come in two varieties, folding and non-folding types. The folding variety is susceptible to an increasing amount of theft because of their high value and the ease with which some can be folded, stored and carried off. As one might expect, the incidence of theft of non-folding wheelchairs is substantially less.
Many hospitals, nursing homes, rest homes and similar facilities are experiencing ever increasing costs due to the loss of folding wheelchairs. Most wheelchairs utilized in the aforementioned facilities are of a conventional folding type. Consequently, these wheelchairs are capable of being easily folded and inadvertently carried off by patients' families or deliberately stolen by folding them and putting them in the trunk of a car.
The prior art, U.S. Pat. No. 5,149,120, also discloses a locking mechanism which may be designed into the wheelchair at the time of manufacture or which may be added later as in the present invention. However, said prior art prohibits a folding wheelchair from being folded by restricting a pair of centrally pivoted cross braces under the wheelchair seat which move from an X-shaped configuration to a more parallel configuration when the wheelchair is being folded. This mechanism is difficult to reach and to operate, and particularly presents a problem when the wheelchair user is unassisted in locking and securing the anti-theft device.
By being able to selectively restrict a folding wheelchair from folding, some of the aforementioned theft could be avoided. Consequently, there is a market demand for an inexpensive anti-theft device that can be designed into newly manufactured wheelchairs or, alternatively, easily fitted to existing folding wheelchairs which selectively prohibits them from folding when they are used and allows them to be folded and stored when they are not being used.
Therefore, the several objects of the present invention are to provide an improved wheelchair with an anti-theft device, the location of which is easily accessible making for easy locking and unlocking. It is another object of the invention to provide an easily accessible anti-theft device for use with wheelchairs and other folding apparatuses. It is another object of the invention to permit usability on many types of folding wheelchairs, not just those types that have an X-shaped folding cross-member configuration as in the prior art. It is another object of the invention to permit alternative placement of the anti-theft device, both in the front and at the rear of the folding wheelchair or between many of the pairs of vertical or horizontal collapsedly opposed structural members of the wheelchair. Furthermore, it is an object of the present invention to provide a low cost solution to a high cost problem.
The present invention is an anti-theft device primarily for use with a folding wheelchair operated by selectively preventing folding of the wheelchair when the anti-theft device is engaged. This is accomplished by preventing from folding, any pair of collapsedly opposing members comprising a portion of the chassis of the folding wheelchair that collapse toward one another when the wheelchair is being folded.
A left member and a right member of the anti-theft device are pivotally attached to said pair of collapsedly opposing members of the folding wheelchair by an attachment means. Said left and right members overlap and are pivotally connected to one another by a pivotal attachment means. A locking mechanism selectively prevents the left and right members from pivoting relative to one another about the pivotal attachment means, thus preventing the wheelchair from folding. |
A wide variety of aesthetic treatment handpieces has been disclosed in the prior art. Electromagnetic waves such as light, radio frequency (RF) and microwaves are known energy sources in the prior art for treating human skin. Non-electromagnetic energy sources such as ultrasound, shockwaves and cryogenic sources are also common in the aesthetic industry in general and more particularly in the treatment of skin. Whatever the technology utilized, most of these systems have in common some sort of applicator device which applies the particular technology to the human skin.
In fact, combinations of multiple energy sources are known and incorporated into in a single handpiece, and may comprise the same type of energy or a combination of different energy types. These combinations may include a plurality of small energy sources configured to deliver a patterned fractional treatment effect or may be a smaller number of larger energy sources configured to deliver either a more focused and/or bulk treatment of large skin surfaces. Energy may be delivered or applied to different skin organs invasively or non-invasively. In some cases, the handpiece may itself incorporate the mechanism to generate the applied energy while in other cases the handpiece may only deliver and couple energy from a source which is external to the handpiece.
Aesthetic and skin treatments are applied to different body areas. Some body areas are large, uniform, relatively flat and easy to access, like the abdominal area or the calf, while others are not. Small treatment areas like the face may pose a challenge due to the basic size and geometry of the treatment handpiece. Challenges of accessibility to the skin surface may result in less than desired treatment efficiency. Serving the need to treat different types of body areas has been often met in currently available devices in the industry by providing a single main unit to which multiple and different handpieces may be connected. This appears to be a major element in some companies' business model which force their customers to acquire multiple handpieces to accomplish multiple tasks. Alternatively, some companies provide handpieces that have a fixed structure of energy sources and geometry while having a modular energy coupling element which may better access problematic areas. Since treatment efficiency is highly dependent on the energy distribution within the skin and on the energy interaction with different skin organs, treatment handpieces are optimized for their intended uses and, as such, have very limited flexibility or modularity.
Thus, what is needed in the industry are handpiece structures which obviate most if not all of the shortcomings of the presently available devices. It is the subject of the present invention to teach alternative approaches and treatment handpiece structures. |
Sometimes, it’s best not to acquiesce to an actor’s suggestion. If a story from filmmaker and film critic Charlie Lyne is to be believed, then director Sam Mendes ended up learning this lesson the hard way during the production of Skyfall.
The story goes that one day, Daniel Craig brought a pair of leather gloves to set. He told a tired Sam Mendes that the gloves were something James Bond would wear and that he should wear them in the scene. Mendes, wanting to keep his star happy, agreed, and Craig shot the scene wearing the leather gloves.
Three months later while they’re editing the movie, someone pipes up and points out the problem with Bond wearing gloves during the Macao sequence. See if you can guess what the problem was before watching the videos below [via The Playlist]. |