International efforts to contain fires and underground hotspots across Indonesia continued on Wednesday as the minister in charge of the archipelago’s anti-haze operation called for 15 more aircraft to join the air operation.
In West Kalimantan, 46 complainants have joined in a class action lawsuit against the government.
“The right to a decent environment and healthy life is a basic right,” Sulistiono, the chairman of the plaintiffs, told Mongabay. “It is in the state’s interest to ensure the realization of the constitutional mandate to protect all citizens and the homeland of Indonesia.”
Schools in Palangkaraya and other parts of Central Kalimantan remained closed on Tuesday. The city government said it had released 1.5 billion rupiah ($109,000) in additional funding since declaring a state of emergency, some of which has gone directly to the police. Flights in several parts of Kalimantan and Papua remain subject to delays and cancellation.
In perhaps the strongest sign yet that the Indonesian government’s patience is wearing increasingly thin with the governance of some companies, the minister in charge of the haze operation and the country’s police chief both publicly criticized Sinar Mas, the parent company of Asia Pulp & Paper.
“There are fires in a Sinar Mas concession that are close to sources of water,” said Badrodin, the police chief. “Why can’t they put out the fires? We are talking about thousands of liters of water near the location of fires. They should not rely too heavily on us.”
President Joko “Jokowi” Widodo told Indonesia’s state-owned Antara news agency that companies need to boost their firefighting capacity.
“Companies need to take responsibility and ensure they have the means to handle fires,” the president said. “It cannot be that we have been on fire for 18 years and yet we are still dealing with the same problem.”
Air quality was back in the unhealthy range on Monday and Tuesday, while Malaysian officials were assessing whether this weekend’s Moto GP in Kuala Lumpur would go ahead as scheduled.
Singapore has launched a total of 47 water-bombing operations and says it has “put out” 35 hotspots.
Singapore’s education ministry is working with the health ministry and environment agency on revised education guidelines amid haze following a petition by a parent. The petition calls on the education ministry to ensure adequate filtration in classrooms, educate children in the correct use of N95 face masks and conduct non-essential lessons via e-learning.
The Association of Indonesian Forest Concessionaires (APHI) said on Tuesday it had lobbied the government to provide assistance to the Indonesian companies being sued under Singapore’s transboundary pollution law.
“This should be helped by diplomacy,” APHI’s vice chairman said. “Don’t let the companies be subjected to this alone.”
As Indonesian and Singaporean armies began an annual 12-day joint military exercise, former Singaporean ambassador to Indonesia Barry Desker writes that relations between the two countries could be tested.
“Underlying the approach of many Indonesian policymakers is the belief that Singapore has no natural resources and benefits from exploiting Indonesia,” Desker writes. “The self-image is that of Indonesia as a pretty girl courted by everyone at the party.”
Elsewhere, a major 328-hectare "eco-resort" in Indonesia, which will be accessible only by boat from Singapore when it opens next year, claimed it would manage the impact of smoke by planning "more underwater activities which will not be affected by haze." |
The Past and Future of Econ 101: The John R. Commons Award Lecture The introductory economics course, often called Econ 101, is where most economists get their start and where many students receive their only exposure to the field. This essay discusses the course's evolution. It first looks back at how economics was taught at Harvard University in the 19th century, based on a textbook by Professor Francis Bowen. It then looks ahead at how the introductory course may change as pedagogical tools improve, as society confronts new challenges, and as the field accumulates new knowledge. JEL Classification: A2 |
Ethical Governance for Sustainable Development in Higher Education Institutions In contemporary societies, higher education institutions face the impact of globalization, which is particularly demanding when it comes to imposing and shaping ethical practices. While higher education systems and dynamics cannot be understood apart from this broader context, their primary focus seems to remain the same as ever: the creation of knowledge-based societies and economies, education and the creation of socially responsible citizens. Against this background, this chapter aims to present and critically discuss the strategies implemented in a higher education institution towards building a culture of integrity. The empirical focus is a small-scale university located in southern Europe, peripheral to prominent universities and major countries. |
Passes and Protection in the Making of a British Mediterranean Abstract Between the end of the seventeenth century and the beginning of the nineteenth, the security of British navigation in and around the corsair-infested waters of the Mediterranean depended on indented parchment passports: Mediterranean passes. This article recovers the history of the Mediterranean pass and traces the development of the Mediterranean pass system from its origins in England's mid-seventeenth-century treaties with the North African regencies to its role in the emergence of Britain's Mediterranean empire over the course of the long eighteenth century. At its inception, the Mediterranean pass system formed an interstate regulatory regime that mediated between North African and British naval power by providing a means to identify British vessels at sea and to limit the protection of Britain's treaties to them. During the eighteenth century, however, foreign merchants and shipowners, especially from Genoa, sought out the security of British passes by moving to Britain's colonies at Gibraltar and Minorca. The resulting incorporation of foreigners into the British pass system fundamentally altered the nature and significance of the pass and contributed to the development of Britain's imperial presence in the Mediterranean. This article reveals how the growth of British power and the interactions of British consuls and imperial officials with mariners and merchants from around the Mediterranean transformed the pass from a document of identification into an instrument of imperial protection that helped sustain Britain's Mediterranean outposts in the eighteenth century and make possible the dramatic expansion of the British Empire further into that sea at the start of the nineteenth. |
Disentangling the timescales behind the non-perturbative heavy quark potential The static part of the heavy quark potential has been shown to be closely related to the spectrum of the rectangular Wilson loop. In particular the lowest lying positive frequency peak encodes the late time evolution of the two-body system, characterized by a complex potential. While initial studies assumed a perfect separation of early and late time physics, where a simple Lorentzian (Breit-Wigner) shape suffices to describe the spectral peak, we argue that scale decoupling in general is not complete. Thus early time, i.e. non-potential effects, significantly modify the shape of the lowest peak. We derive on general grounds an improved peak distribution that reflects this fact. Application of the improved fit to non-perturbative lattice QCD spectra now yields a potential that is compatible with a transition to a deconfined screening plasma. I. INTRODUCTION The search for a description of heavy quarkonia at finite temperature in terms of a nonrelativistic Schrödinger equation with an instantaneous potential has a long history. Ever since Matsui and Satz proposed the melting of the J/ψ particle as a signal for the deconfinement transition in heavy ion collisions, it has been the goal for theory to put their phenomenological arguments on a solid field theoretical footing. The success of relativistic heavy ion experimental groups to indeed measure suppression patterns at RHIC and LHC has further invigorated interest in the topic. Even though initial attempts focussed mainly on model potentials, the last decade has seen technical advances that allow us to actually derive a potential from the underlying theory of strong interactions, QCD, via a systematic coarse graining procedure. Based on the concept of effective field theories (EFT) such as pNRQCD or quantum mechanical path integrals, the static part of the in-medium potential can be readily defined from the late-time limit of the real-time Wilson loop. Using the hard thermal loop approximation, Laine et al. succeeded in calculating the real-time Wilson loop to first non-trivial order and obtained a closed expression for the potential. They found a real part featuring Debye screening and an imaginary part, which was attributed to Landau damping. First corrections to this quantity in an effective field theory approach were derived in ref. and the corresponding quarkonium spectral functions were computed in ref. Questions remain however, such as how the confining, i.e. linear, part of the potential of the hadronic phase behaves when approaching and surpassing the deconfinement temperature. Hence we are urged to extend the results of perturbative calculations into the nonperturbative regime. One path that has proven viable in spectral studies of heavy quarkonium is the use of lattice regularized QCD, which is amenable to Monte Carlo simulations at any temperature. As non-perturbative calculations of the Wilson loop based on lattice QCD are only possible in Euclidean time, we face the challenging task to connect imaginary time information with real-time dynamics in the definition of the potential. A recent study suggested that the use of Wilson loop spectral functions and the investigation of their peak structure allows a non-perturbative extraction of both real and imaginary part of the potential. At the heart of this procedure is the correct determination of the shape of the lowest lying spectral peak.
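For orientation, the late-time definition alluded to above is usually written (schematically, in the conventions common to the pNRQCD literature; the exact notation of this paper is not reproduced here) as $V(r) = \lim_{t\to\infty} \frac{i\,\partial_t W(r,t)}{W(r,t)}$, together with the spectral decomposition $W(r,t) = \int d\omega\, e^{-i\omega t}\, \rho(r,\omega)$, which is what ties the potential to the lowest-lying peak of $\rho(r,\omega)$ discussed below.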
From the point of view of the present study, the authors in assumed that the scales of early time (bound state formation) and late time (potential) physics are completely separated. Hence they fitted the spectrum with a simple Breit-Wigner and used its peak position and width to determine V(r). In the following we wish to demonstrate that such an assumption is not valid in general and one has to take into account modifications to the spectrum that arise from a coupling of the timescales. These lead to a skewing and shifting of the Breit-Wigner shape. After deriving the shape of the lowest lying peak on general grounds in section II A, we apply the functional form to the highest temperature data of ref. and show in section II B that the resulting potential is compatible with a scenario of heavy QQ in a deconfined screening plasma. The counterintuitive, large rise of the real part of the potential found in ref. disappears as a result of the improved fit. II. HEAVY QUARK POTENTIAL FROM A SPECTRAL ANALYSIS In order to define a potential for static heavy quarks, we rely on the EFT framework. As the constituents of the QQ system are heavy (m_Q ≫ T, Λ_QCD) and their velocities v are small, there exists a hierarchy of scales according to which an EFT description can be constructed. In a first step the hard scale is integrated out to give the theory of nonrelativistic QCD (NRQCD), where the heavy quarks are described by two-component Pauli spinors. One can proceed by integrating out the soft scale for the medium degrees of freedom to end up with the theory of potential NRQCD (pNRQCD) where color singlets and octets are the dynamical fields. A. Coupling of scales and spectral shapes The matching between QCD and the EFT tells us that the time evolution of the Wilson loop in general follows with a time dependent complex function (r, t) that asymptotes to the singlet potential. To connect this formalism to lattice QCD observables, we deploy a spectral decomposition of the Wilson loop which allows us to extract the potential via We note that if a well defined lowest lying spectral peak exists, it will eventually dominate the dynamics in the late time limit and thus encodes all necessary information on the potential. In the following we will therefore calculate the shape of the low lying peak, starting from the general equations. To this end, we rewrite the function (r, t) = V(r) + (r, t), where the time dependent part (r, t) vanishes after a characteristic time t_QQ. Intuitively t_QQ is the time needed for the two body system of quark and anti-quark to form a bound state described by the static potential V(r). It is possible to formally solve eq. for t > 0. Bearing in mind that the definition of W(r, t) implies Im(r) < 0 we obtain Here the function (r, t) = ∫₀ᵗ (r, t′) dt′ is defined as the integral over the time dependent part and its asymptotic value. From the inversion of eq. and the positivity condition W(r, −t) = W*(r, t), we calculate the spectral function: The above expression enables us to write separately the actual peak structure coming from the late time physics and a background contribution that arises due to the time variation of (r, t). The first integral, which runs over the whole time axis, can be calculated analytically and will contribute to the well defined peak structure encoding the potential. Since we wish to fit the spectra only around the maximum of this peak, i.e. 
where (ω − Re V(r)) t_QQ ≪ 1, we can expand the first term in the second integral exp[i(ω − Re V(r))t] in this region. We find that the spectrum can thus be written as Note that the first term reduces to a simple Breit-Wigner only in the case of ∂_t (r, t) = 0, which would imply that the time-independent potential picture is applicable at all times. In general eq. deviates from the simple Lorentzian through an additional phase ∞ and background terms c_i(r), arising from the early time variation of the potential. Even in the region close to the maximum of the peak, where all c_i(r) with i > 0 can be ignored, the influence of the early time physics modifies the spectral shape through c_0 and Re(r). In other words, these two coefficients have to be considered no matter what fitting range is chosen. We recover a time independent potential if the spectral function is plugged back into eq. Interestingly all coefficients c_i(r) as well as contributions from ∞ drop out in the integration. To confirm the validity of our idea, we also calculated the spectral function at leading order in hard thermal loop resummed perturbation theory and observed that it encodes a lowest lying peak with exactly the functional form of. B. Improved fitting on lattice QCD spectra The aim of extracting the potential from spectral functions was to open a window into the non-perturbative context of lattice QCD. Monte Carlo simulations allow the discrete estimation of correlation functions, such as the Wilson loop, in Euclidean time. To connect the realtime potential of eq. and simulated data W(r, i ), we have to invert the following Laplace transformation The problem we face is ill-defined, since we wish to extract an almost continuous function from a discrete and noisy set of points. Fortunately, according to the discussion following eq., we are interested only in the lowest lying peak of the spectrum. This part of the spectrum is amenable to analysis via the Maximum Entropy Method (MEM), a form of Bayesian inference. In practice its results can depend strongly on the quality of the data, i.e. the available signal to noise ratio. Even though the position of peaks can be determined reliably, the error in the width of spectral structures is more difficult to ascertain, due to the confluence of systematics and statistics. In the following, we will apply the improved fitting functions of eq. to the datasets of. Spectra are available based on quenched lattice QCD data from configurations with β = 6.1, a bare anisotropy of ξ_b = 3.2108 and extents 20³ × 36, 24 and 12, which yields a = 0.097fm and corresponds to the three temperatures T = 0.78 T_C, 1.17 T_C and 2.33 T_C respectively. For the case of the highest temperature, the number of datapoints is quite small for the use in MEM, so we provide lattice data as a crosscheck at a more finely spaced setting of β = 7, which together with ξ_b = 3.5 yields a lattice spacing of a = 0.039fm. Fig.1 shows two Wilson loop spectra from the coarser lattice at r = 0.19fm and r = 0.39fm, which are fitted within the shaded region by a naive Lorentzian (L), a skewed Lorentzian (SL, c_i = 0) and the skewed Lorentzian with up to quadratic background terms (SQL, c_{i>2} = 0). We find that the peaks are much better reproduced once the fitting function takes into account at least c_0, and slightly improve with additional c_i. The improved fit is stable against a change of the fitting range and against considering additional c_i. 
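As a concrete illustration of the three fit functions compared above (a schematic form; the overall normalization $A(r)$ and the skewing phase $\phi(r)$ are our own notation, not taken verbatim from the paper), the skewed Lorentzian with polynomial background can be written as $\rho_{\mathrm{fit}}(r,\omega) \simeq A(r)\,\frac{|\mathrm{Im}\,V(r)|\cos\phi(r) + (\omega - \mathrm{Re}\,V(r))\sin\phi(r)}{(\omega-\mathrm{Re}\,V(r))^2 + (\mathrm{Im}\,V(r))^2} + c_0(r) + c_1(r)(\omega-\mathrm{Re}\,V(r)) + c_2(r)(\omega-\mathrm{Re}\,V(r))^2$, which reduces to the naive Lorentzian (L) for $\phi = c_i = 0$, to the skewed Lorentzian (SL) when only the $c_i$ are dropped, and to the SQL form when background terms up to quadratic order are kept.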
Especially in the right hand panel of Fig.1 it is obvious that a large background contribution interferes strongly with a naive Lorentzian fit of the peak. Note that in the Wilson loop case at r = 0.39fm the signal to background ratio is relatively small, hence we only show data up to this distance. The values for the real part of the potential V(r) from the coarse and fine lattice (the β = 7 points being shifted to account for the different renormalization scale) can be found in the left panel of Fig.2. While the naive Lorentzian fit denoted by (L) shows a very strong rise in Re(r), similar to the one observed in, improving the fitting function diminishes this effect significantly. We find that at small r, the real part Re(r) coincides, within its error bars, with the temperature independent part of the color singlet free energies and then appears to become slightly larger than F_1(r) at r ∼ 0.3fm. Most importantly we now understand that the previously observed extremely strong rise in the real part can be attributed to the presence of scale coupling not accounted for in the Breit-Wigner fitting function. The imaginary part on the other hand does not change significantly under the improved fitting. How much of the spectral width corresponds to a physical width or is induced by an artificial broadening of the MEM is however difficult to establish. Without question a width is present due to the observed curvature in the Euclidean data. Nevertheless until a follow up investigation based on e.g. the multilevel algorithm can simulate the Wilson loop at any separation distance with equally small relative errors, we suggest to take the right hand side of Fig.2 as a qualitative result. One challenging aspect of the lattice QCD determination of the potential is that the Wilson loop in Euclidean time becomes more and more suppressed at intermediate τ with increasing spatial distance. This suppression in turn translates into a decreasing amplitude of the lowest lying peak, which hence becomes very hard to extract for r > 0.4fm, just where the non-perturbative range sets in. To mitigate these adverse effects, the authors of also considered an alternative observable, i.e. the Wilson line correlator W_||(r, i ) in Coulomb gauge, to define a potential V_||(r) in eq. The benefit of using this gauge fixed quantity is that it shows much less suppression in the early and late region, which are assumed not to contribute to the values of the potential but can complicate their extraction. [Figure caption: Fitting with a naive Lorentzian is inadequate already at the second smallest separation distance. We find that skewing alone does not remedy the fit, while including up to quadratic terms allows us to reconstruct the spectral peak very accurately. Note that the naive Lorentzian fit systematically overestimates the value of the real part of the potential for r > 0.1fm.] The shape of the spectra from the Wilson line correlator for the coarser lattice tells us that the background contributions are significantly reduced compared to the Wilson loop case. We are thus able to identify a well defined peak up to distances of r ∼ 1fm. At small separation distances r < 0.4fm, the naive fitting with a Lorentzian and the improved fitting functions yield a very similar peak position. Only when going to larger separation distances does the improved fitting give significantly better results. 
Turning to the reconstructed values of the potential V_||(r) from the coarse and fine lattice as shown in Fig.3, we find that the real part shows a behavior much closer to intuition than was found without improved fitting functions. Instead of a perpetual linear rise, Re(r) grows only until distances of r ∼ 0.45fm, where it begins to flatten off to a value slightly larger than the color-singlet free energies. Note that the real part of V_||(r) and V(r) mostly agree within their error bars in the range where V(r) can be determined. The imaginary part Im V_||(r) is however smaller than Im V(r), which could be a sign of better acuity in the spectral peaks due to a higher signal to noise ratio. III. CONCLUSIONS AND OUTLOOK We have shown that the coupling of early and late time scales leads to spectral structures different from a Breit-Wigner, which have to be taken into account to extract the heavy quark potential reliably. After deriving improved fitting functions in, we applied them to the non-perturbative Wilson loop spectra and found that the fits reconstruct all spectral functions excellently, which is by far not the case for the Lorentzian. [Figure caption: We find that the extraction of the real part between different fitting functions only starts to differ for values above r > 0.4fm, due to the smaller background contributions in the Wilson line correlator spectra. The real part Re V_||(r) grows to values slightly larger than the color singlet free energies before it asymptotes to a constant value at r > 0.5fm. (right) Values of the imaginary part of the potential from a fit of the spectral width.] With our improved fitting, we are able to determine a real and imaginary part of the potential, which is compatible with the presence of a deconfined and screening quark-gluon plasma. The error bars for the extracted values of the potential, especially in the case of the Wilson loop, are still relatively large. Thus to make more quantitative statements in the future, efforts need to be continued to increase the signal to noise ratio of the Euclidean correlator data in order for the MEM to improve. |
//
// Decompiled by Procyon v0.5.36
//
package org.apache.velocity.runtime.directive;
public interface DirectiveConstants
{
public static final int BLOCK = 1;
public static final int LINE = 2;
}
|
Cosmological constant dominated transit universe from early deceleration to current acceleration phase in Bianchi-V space-time The paper presents the transition of the universe from an early decelerating phase to the current accelerating phase with viscous fluid and time dependent cosmological constant $(\Lambda)$ as source of matter in Bianchi-V space-time. To study the transit behaviour of the universe, we have assumed the scale factor as an increasing function of time which generates a time dependent deceleration parameter (DP). The study reveals that the cosmological term does not change its fundamental nature for $\xi$ = constant and $\xi=\xi(t)$, where $\xi$ is the coefficient of bulk viscosity. The $\Lambda(t)$ is found to be positive and is a decreasing function of time. The same is indicated by recent supernovae observations. The physical behaviour of the universe has been discussed in detail. Introduction One of the outstanding problems in particle physics and cosmology is the cosmological constant problem: its theoretical expectation values from quantum field theory exceed observational limits by 120 orders of magnitude (Padmanabhan 2003). Even if such high energies are suppressed by super-symmetry, the electroweak corrections are still 56 orders higher. This problem was further sharpened by the recent observations of supernovae Ia (SN Ia), which reveal the striking discovery that our universe has lately been in its accelerated expansion phase; cross checks from the cosmic microwave background radiation (CMBR) and large scale structure (LSS) all confirm this unexpected result. Numerous dynamical dark energy models have been proposed in the literature, such as quintessence (Ratra and Peebles 1988), phantom (Caldwell 2002), k-essence, tachyon (Padmanabhan 2002), DGP and Chaplygin gas (Kamenshchik et al 2001). However, the simplest and most theoretically appealing candidate for dark energy is the vacuum energy (or the cosmological constant) with a constant equation of state parameter equal to −1. Experimental study of the isotropy of the cosmic microwave background radiation (CMBR) and speculation about the amount of helium formed at the early stages of the evolution of the universe have stimulated theoretical interest in anisotropic cosmological models. At the present state of evolution, the universe is spherically symmetric and the matter distributed in it is on the whole isotropic and homogeneous. But in its early stages of evolution it could not have had such a smoothed out picture, because near the big bang singularity neither the assumption of spherical symmetry nor of isotropy can be strictly valid. Anisotropy of the cosmic expansion, which is supposed to be damped out in the course of cosmic evolution, is an important quantity. Recent experimental data and critical arguments support the existence of an anisotropic phase of the cosmic expansion that approaches an isotropic one. Therefore, it makes sense to consider models of the universe with an anisotropic background. Here we confine ourselves to models of Bianchi type V. Bianchi-V space-time has a fundamental role in constructing cosmological models suitable for describing the early stages of the evolution of the universe. In the literature, Collins, Maartens and Nel. The investigation of relativistic cosmological models usually treats the cosmic fluid as a perfect fluid. However, these models do not incorporate dissipative mechanisms responsible for smoothing out initial anisotropies. 
It is believed that during neutrino decoupling, the matter behaved like a viscous fluid (Klimek 1976) in the early stages of evolution. It has been suggested that in a large class of homogeneous but anisotropic universes, the anisotropy dies away rapidly. The most important mechanism in reducing the anisotropy is neutrino viscosity at temperatures just above 10^10 K. It is important to develop a model of dissipative cosmological processes in general, so that one can analyze the overall dynamics of dissipation without getting lost in the details of complex processes. Coley studied Bianchi-V viscous fluid cosmological models for barotropic fluid distribution. Murphy has investigated the role of viscosity in avoiding the initial big bang singularity. Padmanabhan and Chitre have shown that bulk viscosity leads to inflation-like solutions. Pradhan et al ( , 2005). In this paper, we have studied the transit behaviour of the universe with a time dependent $\Lambda$ in Bianchi-V space-time. To study the transit behaviour of the universe, we have assumed the scale factor as an increasing function of time which generates a time dependent DP. The paper is organized as follows. In section 2, the model and generalized law for the scale factor have been presented. Section 3 deals with the field equations. Some particular models have been discussed in section 4 and section 5. The last section 6 contains the concluding remarks. Model and generalized law for the scale factor yielding a time dependent DP We consider the space-time metric of spatially homogeneous and anisotropic Bianchi-V of the form where A(t), B(t) and C(t) are the scale factors in different spatial directions and is a constant. We define the average scale factor (a) of the Bianchi-type V model as a = (ABC)^{1/3}. The spatial volume is given by V = a^3 = ABC. Therefore, the generalized mean Hubble's parameter (H) reads H = (1/3)(H_1 + H_2 + H_3), where H_1 = \dot{A}/A, H_2 = \dot{B}/B and H_3 = \dot{C}/C are the directional Hubble's parameters in the directions of x, y and z respectively. An over dot denotes differentiation with respect to cosmic time t. Since the metric is completely characterized by the average scale factor, let us consider that the average scale factor is an increasing function of time of the form a = (t^n e^t)^{1/m}, where m > 0 and n ≥ 0 are constants. Such an ansatz for the scale factor has already been considered by Yadav, generalizing the one proposed by Pradhan and Amirhashchi. The proposed law yields a time dependent DP which describes the transition of the universe from an early decelerating phase to the current accelerating phase. The value of the DP (q) for the model follows from this ansatz (see the reconstruction below); from this expression it is clear that the DP (q) is time dependent. Also, the transition redshift from decelerating expansion to accelerated expansion is about 0.5. Now, for a universe which was decelerating in the past and accelerating at the present time, the DP must show signature flipping (Amendola 2003). Field equations For the bulk viscous fluid, the energy momentum tensor is given by where ρ is the energy density, p̄ is the effective pressure of the fluid, and v^i is the fluid four-velocity vector. In a comoving system of co-ordinates, we have v^i =. The effective pressure p̄ is related to the equilibrium pressure p by (Yadav 2011b), where ξ is the coefficient of bulk viscosity that determines the magnitude of viscous stress relative to expansion. 
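For reference, the expressions rendered illegible by the extraction above can be reconstructed directly from the stated ansatz (a schematic reconstruction in our own notation, not a verbatim quotation of the paper's equations). With $a = (t^n e^t)^{1/m}$ one finds $H = \dot{a}/a = \frac{1}{m}\left(\frac{n}{t} + 1\right)$ and $q = -\frac{a\ddot{a}}{\dot{a}^2} = -1 + \frac{mn}{(n+t)^2}$, so that $q > 0$ at early times (for $m > n$) and $q \to -1$ as $t \to \infty$, which is the decelerating-to-accelerating transition described in the text. The bulk-viscous effective pressure is commonly taken as $\bar{p} = p - \xi\theta$ with $\theta = 3H$ the expansion scalar (an assumed standard form; the paper's exact relation may differ).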
The Einstein's field equations with cosmological constant (in gravitational units c = 1, 8πG = 1) read as The Einstein's field equations for the line-element lead to the following system of equations Combining equations -, one can easily obtain the continuity equation Integrating equation and absorbing the constant of integration into B or C, we obtain Subtracting equations from, from, from and taking the second integral of each, we obtain the following three relations respectively where b_1, b_2, b_3, x_1, x_2 and x_3 are constants of integration. From equations - and, the metric functions can be explicitly written as The physical parameters such as the scalar of expansion (θ), spatial volume (V), anisotropy parameter, shear scalar (σ²) and directional Hubble's parameters (H_x, H_y, H_z) are respectively given by It is observed that the spatial volume is zero and the expansion scalar is infinite at t = 0, which shows that the universe starts evolving with zero volume at the initial epoch t = 0 with an infinite rate of expansion. The scale factors also vanish at the initial moment, hence the model has a point type singularity at t = 0. For t → ∞, we get q = −1 and dH/dt = 0, which implies the greatest value of Hubble's parameter and the fastest rate of expansion of the universe. It is evident that a negative value of q would accelerate the expansion and increase the age of the universe. Figure 1 shows the dynamics of the DP versus cosmic time. It is observed that initially the DP evolves with positive sign but later on, the DP grows with negative sign. This behaviour of the DP clearly explains the decelerated expansion in the past and the accelerated expansion of the universe at present, as observed in recent observations of SN Ia. Thus the derived model can be utilized to describe the dynamics of the late time evolution of the observed universe. In the derived model, the present value of the DP is estimated as where H_0 is the present value of Hubble's parameter and t_0 is the age of the universe at the present epoch. If we set n = 0.27m in this equation, we obtain q_0 = −0.73, which exactly matches the observed value of the DP at the present epoch. Thus we fix m = 3 and n = 0.27m in the remaining discussion of the model and in the graphical representations of the physical parameters. Figure 2 depicts the variation of the anisotropy parameter versus cosmic time. It is shown that A decreases with time and tends to zero for sufficiently large times. Thus the anisotropic behaviour of the universe dies out at later times and the observed isotropy of the universe can be achieved in the derived model at the present epoch. The effective pressure (p̄) and energy density (ρ) of the model read as Equations and lead to For the specification of , we assume that the fluid obeys an equation of state of the form , where (0 ≤ ≤ 1) is constant. Thus, we can solve for the cosmological parameters by taking different physical assumptions on ξ(t). Model with constant coefficient of bulk viscosity We assume that ξ = constant. Now, equation, with use of equations and, reduces to Eliminating (t) between equations and, we get = (3 + 1) m 2 ( + 1) From this equation, we observe that the cosmological constant is a decreasing function of time and it approaches a small positive value as time progresses (i.e. the present epoch). Model with bulk viscosity proportional to the energy density, i.e. ξ ∝ ρ We assume that ξ = ξ_0 ρ. Firstly, Murphy constructed a class of viscous cosmological models with ξ ∝ ρ which possess the interesting feature that the big bang type singularity of infinite space-time curvature does not occur in the finite past. 
Later on, Pradhan et al ( , 2005) presented bulk viscous cosmological models for ξ = ξ_0 ρ in harmony with SN Ia observations. Now, equation, with use of equations, and, reduces to Eliminating (t) between equations and, we obtain From this equation, it is observed that the cosmological term is positive and a decreasing function of time (i.e. at the present epoch), which supports the result obtained from recent type Ia supernova observations (Perlmutter et al. 1997). The behaviour of the cosmological constant is clearly shown in Figure 3. The models have non-vanishing cosmological constant and energy density as t → ∞. It is well known that with the expansion of the universe, i.e. with the increase of time t, the energy density decreases and becomes small enough to be ignored. We can express equations − in terms of H, q and as From equations and, we obtain which is Raychaudhuri's equation for the given distribution. This equation shows that for ρ + 3p̄ = 0, acceleration is initiated by bulk viscosity and the Λ term. In the absence of bulk viscosity, only Λ contributes to the acceleration, which seems to be related to dark energy. It also shows that for a positive Λ the universe may accelerate under the condition ρ + 3p̄ ≤ 0, i.e. p̄ is negative for positive energy density (ρ), with a definite contribution of Λ to the acceleration. On the observational front, the data set coming from the Supernova Legacy Survey (SNLS) shows that dark energy behaves in the same manner as Λ. Concluding remarks In this paper, we have presented the generalized law for the scale factor in homogeneous and anisotropic Bianchi-V space-time that yields a time dependent DP, representing a model which generates a transition of the universe from an early decelerating phase to the recent accelerating phase. The spatial scale factors and volume scalar vanish at t = 0. The energy density and pressure are infinite at this initial epoch. As t → ∞, the scale factors diverge and ρ, p̄ both tend to zero. θ and σ² are very large at the initial moment but decrease with cosmic time and vanish as t → ∞. The model reaches an isotropic state in the later times of its evolution. Also we observe that p̄ = −ρ as t → ∞. For n ≠ 0, all matter and radiation are concentrated at the big bang epoch and the cosmic expansion is driven by the big bang impulse. The model has a point type singularity at the initial moment as the scale factors and volume vanish at t = 0. For n = 0, the model has no real singularity and the energy density remains finite. Thus the universe has a non-singular origin and the cosmic expansion is driven by the creation of matter particles. It has been observed that lim t→0 2 turns out to be constant. Thus the model approaches homogeneity and matter is dynamically negligible near the origin. The cosmological constants given in sections 4 and 5 are decreasing functions of time and they approach a small positive value as time increases (i.e. the present epoch). The values of the cosmological constant for these models are supported by the results from recent supernovae observations obtained by the High-Z Supernova Team and the Supernova Cosmology Project (Perlmutter et al. 1998, 1999; Riess et al. 2004). A positive cosmological constant resists the attractive gravity of matter due to its negative pressure. For most universes, the positive cosmological constant eventually dominates over the attraction of matter and drives the universe to expand exponentially. 
Thus, with our approach, we obtain a physically relevant decay law for the cosmological constant, unlike other authors where ad hoc laws were used to arrive at mathematical expressions for a decaying cosmological term. Thus the derived models are more general than those studied earlier. |
package io.piveau.hub.handler;
import io.piveau.hub.util.ErrorCodeResponse;
import io.vertx.core.http.HttpHeaders;
import io.vertx.ext.auth.PubSecKeyOptions;
import io.vertx.ext.auth.jwt.JWTAuth;
import io.vertx.ext.auth.jwt.JWTAuthOptions;
import io.vertx.ext.jwt.JWTOptions;
import io.vertx.ext.web.RoutingContext;
import io.vertx.ext.web.handler.JWTAuthHandler;
import static io.piveau.hub.util.Constants.API_KEY_AUTH;
import static io.piveau.hub.util.Constants.AUTHENTICATION_TYPE;
import static io.piveau.hub.util.Constants.JWT_AUTH;
public class AuthenticationHandler {
private final String publicKey;
private final String apiKey;
private final String clientID;
private final String BEARER = "Bearer";
public AuthenticationHandler(String publicKey, String apiKey, String clientID) {
this.publicKey = publicKey;
this.apiKey = apiKey;
this.clientID = clientID;
}
/**
* Authenticate request via Api-Key or RTP token
* @param context the routing context of the request to authenticate
*/
public void handleAuthentication(RoutingContext context) {
String authorization = context.request().getHeader(HttpHeaders.AUTHORIZATION);
if (authorization == null) {
ErrorCodeResponse.badRequest(context, "Header field \'Authorization\' missing");
} else if (authorization.contains(BEARER)) {
JWTAuthOptions authOptions = new JWTAuthOptions()
.addPubSecKey(new PubSecKeyOptions()
.setAlgorithm("RS256") // probably shouldn't be hardcoded
.setPublicKey(publicKey))
.setPermissionsClaimKey("realm_access/roles") // probably shouldn't be hardcoded
.setJWTOptions(new JWTOptions().addAudience(clientID));
JWTAuth authProvider = JWTAuth.create(context.vertx(), authOptions);
JWTAuthHandler authHandler = JWTAuthHandler.create(authProvider);
context.data().put(AUTHENTICATION_TYPE, JWT_AUTH);
authHandler.handle(context);
} else {
if (this.apiKey.isEmpty()) {
ErrorCodeResponse.internalServerError(context, "Api-Key is not specified");
} else if (authorization.equals(this.apiKey)) {
context.data().put(AUTHENTICATION_TYPE, API_KEY_AUTH);
context.next();
} else {
ErrorCodeResponse.forbidden(context, "Incorrect Api-Key");
}
}
}
}
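// Usage sketch (not part of the original file): one way this handler might be mounted on a
// Vert.x Web router. The route path, key material and client id below are illustrative
// placeholders, not values taken from the piveau-hub configuration.
//
//   Router router = Router.router(vertx);
//   AuthenticationHandler auth = new AuthenticationHandler(publicKeyPem, apiKey, clientId);
//   router.route("/api/*").handler(auth::handleAuthentication);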
|
Cytological and Histological Studies on the Hepatotoxic Effects of Sorafenib (Nexavar) in Albino Rats Sorafenib (Nexavar) is an oral inhibitor of multi-kinase proteins approved in 2005 for treatment of metastatic advanced hepatocellular carcinoma. It causes many metabolic side effects, including diarrhea, hypertension, hand-foot skin reaction, and fatigue. This study aims to detect the histopathological, histochemical and DNA content changes of the rat's liver under Nexavar treatment. The rats were divided into 3 groups. Group 1: Served as control (rats were orally administered ml of normal saline for a month). Group 2: Rats of this group were treated with the multikinase inhibitor Sorafenib (60 mg/kg body weight/day) for 15 days by gavage. Group 3: Rats of this group were treated with the multikinase inhibitor Sorafenib (60 mg/kg body weight/day) for 30 days by gavage. Animals were sacrificed and specimens from the liver were processed for histopathological, histochemical (by estimation of total carbohydrate and total protein contents) and cytological studies (by estimation of DNA contents at the different stages of the cell cycle by flow cytometer analysis). The results showed that in treated animals there were histopathological and histochemical alterations, such as destruction of the normal hepatic architecture and swollen hepatocytes with vacuolar degenerated cytoplasm. Some hepatocytes showed mild to severe signs of injury such as swelling of their nuclei. Karyolysis of other hepatocytes is encountered. A severe reduction in the glycogen and protein contents of the hepatocytes was observed using PAS and bromophenol blue staining techniques. In addition, the results showed that Nexavar causes apoptosis in 15.41% and 13.72% of cells in groups 2 and 3, respectively. Liver genotoxicity induced by Nexavar for 15 and 30 days decreases the G1-cell fraction to 5.08% and 6.50% and increases the S-phase cell fraction to 19.17% and 20.28%, respectively. Moreover, the G2-cell fraction increases to 2.32% and 2.45%; about half of the latter amount is aneuploid cells. In conclusion, Nexavar treatment showed mild to moderate hepatotoxic effects and induces many histological, histochemical and cytological changes causing liver damage. |
import java.util.Stack;

/**
* InfixToPostfix
*
* The InfixToPostfix takes the infix expression from command line (first argument)
* and converts it to postfix expression and prints the output
*
* @author Nithin Biliya
* @version 1.0
* @since 10/01/2018
*/
class InfixToPostfix {
private Stack<String> myStack;
/**
* Constructor to initialize the empty stack
*/
public InfixToPostfix() {
myStack=new Stack<String>();
}
public static void main(String[] args) throws Exception {
if(args.length==0) {
System.out.println("infix expression string has to be sent in as first argument");
} else {
InfixToPostfix itp=new InfixToPostfix();
System.out.println("posfix expression - "+itp.convert(args[0]));
}
}
/**
* converts from infix to postfix
* @param infix expression string
* @return String postfix expression string
* @throws Exception if there is invalid data in stack
*/
public String convert(String infix) throws Exception {
String tmp=null, postfix=new String();
//myStack.push("(");
infix="( "+infix+" )";
for(String token : infix.split(" ")) {
//System.out.println("token - "+token);
//if(!myStack.empty()) System.out.println("top - "+myStack.peek());
//System.out.println("postfix - "+postfix);
//System.out.println("");
if(getOperatorPrecedence(token)>0) { // if token is operator
while(getOperatorPrecedence(myStack.peek())>0 && operatorPrecedence(token,myStack.peek())>=0) { // pop while the top of stack is an operator whose precedence is higher than or equal to the token's (note: a smaller precedence value means higher precedence)
postfix+=myStack.pop();
postfix+=" ";
}
myStack.push(token);
} else if(isBracket(token)<0) { // if token is an open bracket
myStack.push(token);
} else if(isBracket(token)>0) { // if token is close bracket
do {
tmp=myStack.pop();
if(getOperatorPrecedence(tmp)>0) { // if tmp is operator
postfix+=tmp;
postfix+=" ";
} else if(isBracket(tmp)<0 && isMatching(tmp,token)) { // if tmp is open bracket and is matching the closing bracket (token)
break;
} else {
throw new Exception("Error - invalid data in stack - "+tmp);
}
} while(true);
} else { // if token is operand
postfix+=token;
postfix+=" ";
}
}
return postfix;
}
/**
* checks if the token is an operator and returns its precedence
* Supported operators - +(3),-(3),*(2),/(2),^(1)
* @param token string to be checked if it is an operator
* @return int returns the precedence of the operator. 0 if it is not an operator
*/
private int getOperatorPrecedence(String token) {
switch(token) {
case "+" :
case "-" :
return 3;
case "*" :
case "/" :
return 2;
case "^" :
return 1;
default :
return 0;
}
}
/**
* checks if the token is a bracket. Open or closed
* Supported brackets - [,],{,},(,)
* @param token string to be checked if it is a bracket
* @return int returns -1 if open bracket, 1 if closed bracket, 0 if not a bracket
*/
private int isBracket(String token) {
switch(token) {
case "[" :
case "{" :
case "(" :
return -1;
case "]" :
case "}" :
case ")" :
return 1;
default :
return 0;
}
}
/**
* checks if openBracket matches the closeBracket
* Supported brackets - [,],{,},(,)
* @param openBracket open bracket
* @param closeBracket close bracket
* @return boolean returns true if the openBracket matches the closeBracket. else false.
*/
private boolean isMatching(String openBracket,String closeBracket) {
if((openBracket.equals("(") && closeBracket.equals(")")) || (openBracket.equals("[") && closeBracket.equals("]")) || (openBracket.equals("{") && closeBracket.equals("}"))) {
return true;
}
return false;
}
/**
* checks the precedence of the operators
* @param op1 first operator
* @param op2 second operator
* @return int returns 0 if op1=op2, returns 1 if op1>op2, returns -1 if op1<op2
*/
private int operatorPrecedence(String op1,String op2) {
if(getOperatorPrecedence(op1) > getOperatorPrecedence(op2)) return 1;
else if(getOperatorPrecedence(op1) < getOperatorPrecedence(op2)) return -1;
else return 0;
}
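// Example run (assumed invocation; operands, operators and brackets must be separated by single spaces):
//   java InfixToPostfix "( 3 + 4 ) * 2"
// prints: postfix expression - 3 4 + 2 *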
} |
package com.fmway.models.chat;
import org.junit.Test;
import static org.junit.Assert.assertEquals;
public class ChatParseDefinitionsTest {
private ChatParseDefinitions chatParseDefinitions;
public void setUp(){ chatParseDefinitions = new ChatParseDefinitions(); }
public void tearDown(){ chatParseDefinitions = null; }
@Test
public void getClassName_test_expectClassName(){
setUp();
final String expected = chatParseDefinitions.className;
final String actual = chatParseDefinitions.getClassName();
assertEquals(expected,actual);
tearDown();
}
@Test
public void getObjectIdKey_test_expectObjectIdKey(){
setUp();
final String expected = chatParseDefinitions.objectIdKey;
final String actual = chatParseDefinitions.getObjectIdKey();
assertEquals(expected,actual);
tearDown();
}
@Test
public void getCreatedAtKey_test_expectCreatedAtKey(){
setUp();
final String expected = chatParseDefinitions.createdAtKey;
final String actual = chatParseDefinitions.getCreatedAtKey();
assertEquals(expected,actual);
tearDown();
}
@Test
public void getTripIdKey_test_expectTripIdKey(){
setUp();
final String expected = chatParseDefinitions.tripIdKey;
final String actual = chatParseDefinitions.getTripIdKey();
assertEquals(expected,actual);
tearDown();
}
@Test
public void getPersonIdKey_test_expectPersonIdKey(){
setUp();
final String expected = chatParseDefinitions.personIdKey;
final String actual = chatParseDefinitions.getPersonIdKey();
assertEquals(expected,actual);
tearDown();
}
@Test
public void getMessageKey_test_expectMessageKey(){
setUp();
final String expected = chatParseDefinitions.messageKey;
final String actual = chatParseDefinitions.getMessageKey();
assertEquals(expected,actual);
tearDown();
}
}
|
Platelet-derived growth factor-induced p42/44 mitogen-activated protein kinase activation and cellular growth is mediated by reactive oxygen species in the absence of TSC2/tuberin. Tuberous sclerosis complex (TSC) is a genetic disorder caused by inactivating mutations in the TSC1 or TSC2 genes, which encode hamartin and tuberin, respectively. TSC is characterized by multiple tumors of the brain, kidney, heart, and skin. Tuberin and hamartin inhibit signaling by the mammalian target of rapamycin (mTOR) but there are limited studies of their involvement in other pathways controlling cell growth. Using ELT-3 cells, which are Eker rat-derived smooth muscle cells, we show that ELT-3 cells expressing tuberin (TSC2+/+) respond to platelet-derived growth factor (PDGF) stimulation by activating the classic mitogen-activated protein (MAP)/extracellular signal-regulated kinase kinase (MEK)-1-dependent phosphorylation of p42/44 MAP kinase (MAPK) with nuclear translocation of phosphorylated p42/44 MAPK. In contrast, in tuberin-deficient ELT-3 cells (TSC2-/-), PDGF stimulation results in MEK-1-independent p42/44 MAPK phosphorylation with reduced nuclear localization of phosphorylated p42/44 MAPK. Moreover, in TSC2-/- cells but not in TSC2+/+ cells, cellular growth and activation of p42/44 MAPK by PDGF requires the reactive oxygen species intermediate, superoxide anion (O2*-). Both baseline and PDGF-induced O2*- levels were significantly higher in TSC2-/- cells and were reduced by treatment with rapamycin and inhibitors of mitochondrial electron transport. Furthermore, the exogenous production of O2*- by the redox cycling compound menadione induced MEK-1-independent cellular growth and p42/44 MAPK phosphorylation in TSC2-/- cells but not in TSC2+/+ cells. Together, our data suggest that loss of tuberin, which causes mTOR activation, leads to a novel cellular growth-promoting pathway involving mitochondrial oxidant-dependent p42/44 MAPK activation and mitogenic growth responses to PDGF. |
/**
* Ensures that there are not concurrent executions of same task (either on this host or any other cluster host)
*
* @author <a href="mailto:mposolda@redhat.com">Marek Posolda</a>
*/
public class ClusterAwareScheduledTaskRunner extends ScheduledTaskRunner {
private final int intervalSecs;
public ClusterAwareScheduledTaskRunner(KeycloakSessionFactory sessionFactory, ScheduledTask task, long intervalMillis) {
super(sessionFactory, task);
this.intervalSecs = (int) (intervalMillis / 1000);
}
@Override
protected void runTask(final KeycloakSession session) {
session.getTransaction().begin();
ClusterProvider clusterProvider = session.getProvider(ClusterProvider.class);
String taskKey = task.getClass().getSimpleName();
ExecutionResult<Void> result = clusterProvider.executeIfNotExecuted(taskKey, intervalSecs, new Callable<Void>() {
@Override
public Void call() throws Exception {
task.run(session);
return null;
}
});
session.getTransaction().commit();
if (result.isExecuted()) {
logger.debugf("Executed scheduled task %s", taskKey);
} else {
logger.debugf("Skipped execution of task %s as other cluster node is executing it", taskKey);
}
}
} |
EQUAL Community Initiative
Themes
EQUAL projects were classified into the four pillars of the European Employment Strategy, and more precisely into nine themes:
1. Employability
a) Facilitating access and return to the labour market for those who have difficulty in being integrated or re-integrated into a labour market which must be open to all
b) Combating racism and xenophobia in relation to the labour market
2. Entrepreneurship
c) Opening up the business creation process to all by providing the tools required for setting up in business and for the identification and exploitation of new possibilities for creating employment in urban and rural areas
d) Strengthening the social economy (the third sector), in particular the services of interest to the community, with a focus on improving the quality of jobs
3. Adaptability
e) Promoting lifelong learning and inclusive work practices which encourage the recruitment and retention of those suffering discrimination and inequality in connection with the labour market
f) Supporting the adaptability of firms and employees to structural economic change and the use of information technology and other new technologies
4. Equal Opportunities for women and men
g) Reconciling family and professional life, as well as the reintegration of men and women who have left the labour market, by developing more flexible and effective forms of work organisation and support services
h) Reducing gender gaps and supporting job desegregation.
5. (i) Asylum seekers |
I know Alabama running back Trent Richardson is the consensus pick, the safest pick.
I can still hear LaDainian Tomlinson's mother telling me before the 2001 draft that she hoped the Browns selected the charity-minded TCU running back because of what he could do for the Cleveland community.
But I'm still hoping the Browns come out of the NFL draft with an Oklahoma State package: wide receiver Justin Blackmon and quarterback Brandon Weeden.
I know No. 4 is probably too high to draft Blackmon, who supposedly can't compare with A.J. Green or Julio Jones, the best receivers in last year's draft. But I look at the 40 touchdowns Blackmon has scored in three years, 38 in the past two seasons, and think that his addition to an offense devoid of playmakers is exactly what the Browns need.
I remember new offensive coordinator Brad Childress talking about the West Coast offense in February and saying that it's all about the YAC, yards after the catch. That's where Blackmon could instantly help.
I also keep thinking about the Browns' 2007 season and the special chemistry quarterback Derek Anderson and receiver Braylon Edwards showed. That's when Edwards scored 16 touchdowns, breaking Gary Collins' single-season record of 13 that had stood since 1963. That's what Weeden and Blackmon could do if paired together. Weeden threw 75 TD passes in his final three seasons, 40 to Blackmon.
Other than 2007, the Browns have rarely seen that chemistry in the expansion era, even when Kevin Johnson was scoring eight touchdowns in 1999 and nine in 2001. Even in the good old days of Bernie Kosar, the Browns never relied on one big-play receiver. Webster Slaughter's single-season high for touchdowns was seven (in 1987), Reggie Langhorne's was seven (in 1988) and Brian Brennan's was six (in 1987 and '88).
Blackmon could be the game-breaker Edwards was for only one year.
I love covering the NFL Combine because I like interviewing prospects and deciding which ones I like and which ones I want to follow for the rest of their careers. But the draft has become so hyped that I long for the days when the Browns are picking 30th and we don't have to spend four months debating Blackmon vs. Richardson and picking apart their every flaw.
Pat McManamon of Foxsportsohio.com suggested the other day that the Browns should take North Alabama cornerback Janoris Jenkins with the 37th overall pick. I'm with him on this one. Yes, the Browns might be scared away by Jenkins' past marijuana use and his four children with three different women. But Jenkins is a top 10 talent who could flourish with his former Florida teammate Joe Haden, whom Jenkins reportedly has called his "big bro." Haden could be the kind of influence on Jenkins' life that Jenkins needs and his proximity to former North Alabama coach Terry Bowden, now at the University of Akron, could also help.
Every year I have an Ohio State player I want the Browns to pick and every year that wish goes unfulfilled (most notably center LeCharles Bentley in 2002). I don't feel that strongly about anyone this year. I'm not shooting for the stars, but I do like receiver DeVier Posey, projected as a fifth- or sixth-rounder. After missing 10 games last season due to NCAA suspensions, Posey should be highly motivated. I like the way he tracks the deep ball. Offensive tackle Mike Adams might be a steal if the Browns could nab him at No. 37, but his college inconsistency worries me.
If the Browns are going to go defense on the first two days, I like Jenkins and/or Nebraska linebacker Lavonte David, who would be perfect on the weak side for the Browns. He may be undersized, but his instincts and energy could inspire the entire unit. I don't think the Browns have anyone who can make plays like David did taking the ball away from Ohio State quarterback Braxton Miller last season.
The Browns desperately need to find a right tackle in this draft, but I'm OK if they wait until Saturday to do it.
Nate Ulrich, our beat writer, suggested today that the Browns could take a player at about any position except for tight end and the specialists. I agree to a point, but I don't think they have a tight end in the mold of those who took the NFL by storm last season. It doesn't appear to be Jordan Cameron, last year's fourth round pick, who was inactive for eight games and caught six passes for 33 yards. That need may have to wait because only three tight ends (Georgia's Orson Charles, Stanford's Coby Fleener and Clemson's Dwayne Allen) are projected as every-down tight ends who will be selected in the first two rounds. |
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
/*!
* \file np_delete_op.cc
* \brief Implementation of the API of functions in src/operator/numpy/np_delete_op.cc
*/
#include <mxnet/api_registry.h>
#include <mxnet/runtime/packed_func.h>
#include <vector>
#include "../utils.h"
#include "../../../operator/numpy/np_delete_op-inl.h"
namespace mxnet {
MXNET_REGISTER_API("_npi.delete")
.set_body([](runtime::MXNetArgs args, runtime::MXNetRetValue* ret) {
using namespace runtime;
static const nnvm::Op* op = Op::Get("_npi_delete");
nnvm::NodeAttrs attrs;
op::NumpyDeleteParam param;
int num_inputs = 0;
param.start = dmlc::nullopt;
param.step = dmlc::nullopt;
param.stop = dmlc::nullopt;
param.int_ind = dmlc::nullopt;
param.axis = dmlc::nullopt;
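// Dispatch on the call pattern: with three arguments the call is either
// (arr, scalar index, axis) or (arr, obj indices, axis); otherwise it is the
// slice form (arr, start, stop, step, axis).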
if (args.num_args == 3) {
if (args[1].type_code() == kDLInt ||
args[1].type_code() == kDLFloat) {
if (args[1].type_code() == kDLInt) {
param.int_ind = args[1].operator int();
} else if (args[1].type_code() == kDLFloat) {
param.int_ind = static_cast<int>(args[1].operator double());
}
if (args[2].type_code() == kDLInt) {
param.axis = args[2].operator int();
} else if (args[2].type_code() == kDLFloat) {
param.axis = static_cast<int>(args[2].operator double());
}
num_inputs = 1;
} else {
if (args[2].type_code() == kDLInt) {
param.axis = args[2].operator int();
} else if (args[2].type_code() == kDLFloat) {
param.axis = static_cast<int>(args[2].operator double());
}
num_inputs = 2;
}
} else {
num_inputs = 1;
if (args[1].type_code() == kDLInt) {
param.start = args[1].operator int();
} else if (args[1].type_code() == kDLFloat) {
param.start = static_cast<int>(args[1].operator double());
}
if (args[2].type_code() == kDLInt) {
param.stop = args[2].operator int();
} else if (args[2].type_code() == kDLFloat) {
param.stop = static_cast<int>(args[2].operator double());
}
if (args[3].type_code() == kDLInt) {
param.step = args[3].operator int();
} else if (args[3].type_code() == kDLFloat) {
param.step = static_cast<int>(args[3].operator double());
}
if (args[4].type_code() == kDLInt) {
param.axis = args[4].operator int();
} else if (args[4].type_code() == kDLFloat) {
param.axis = static_cast<int>(args[4].operator double());
}
}
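// Collect the NDArray inputs: the source array, plus the indices array when
// deletion is by an index array rather than a scalar or slice.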
std::vector<NDArray*> inputs;
for (int i = 0; i < num_inputs; ++i) {
inputs.push_back(args[i].operator mxnet::NDArray*());
}
attrs.parsed = param;
attrs.op = op;
SetAttrDict<op::NumpyDeleteParam>(&attrs);
int num_outputs = 0;
auto ndoutputs = Invoke(op, &attrs, num_inputs, inputs.data(), &num_outputs, nullptr);
*ret = ndoutputs[0];
});
} // namespace mxnet
|
// NewBroker instantiates a new broker
func NewBroker() (*Broker, error) {
broker := &Broker{
MID: 0,
Config: newConfig(),
Modules: make(map[string]*Module),
ApiResponses: make(map[int32]chan map[string]interface{}),
cbIndex: make(map[string]map[string]interface{}),
WriteThread: &WriteThread{
Chan: make(chan Event),
SyncChan: make(chan bool),
},
QuestionThread: &QuestionThread{
userdex: make(map[string]QuestionQueue),
},
SigChan: make(chan os.Signal),
SyncChan: make(chan bool),
}
Logger.SetLevel(logging.GetLevelValue(strings.ToUpper(broker.Config.LogLevel)))
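// Pre-create a callback index map for each callback type constant defined elsewhere in the package.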
broker.cbIndex[M] = make(map[string]interface{})
broker.cbIndex[E] = make(map[string]interface{})
broker.cbIndex[T] = make(map[string]interface{})
broker.cbIndex[L] = make(map[string]interface{})
broker.cbIndex[Q] = make(map[string]interface{})
broker.WriteThread.broker = broker
broker.QuestionThread.broker = broker
socket, meta, err := broker.getASocket()
if err != nil {
return nil, err
}
broker.Socket = socket
broker.SlackMeta = meta
broker.Brain, err = broker.newBrain()
if err != nil {
return nil, err
}
if err = broker.Brain.Open(); err != nil {
Logger.Error(`couldn't open mah brain! `, err)
return broker, err
}
return broker, nil
} |
/*
* tasks.h
* for tasknc
* by mjheagle
*/
#ifndef _TASKS_H
#define _TASKS_H
#include <stdbool.h>
#include "common.h"
char free_task(struct task* tsk);
void free_tasks(struct task* head);
struct task* get_task_by_position(int n);
int get_task_position_by_uuid(const char* uuid);
struct task* get_tasks(char* uuid);
unsigned short get_task_id(char* uuid);
struct task* malloc_task(void);
struct task* parse_task(char* line);
void reload_task(struct task* this);
void reload_tasks(void);
void remove_char(char* str, char remove);
void set_position_by_uuid(const char* uuid);
int task_background_command(const char* cmdfmt);
void task_count(void);
int task_interactive_command(const char* cmdfmt);
bool task_match(const struct task* cur, const char* str);
void task_modify(const char* argstr);
extern FILE* logfp;
extern struct task* head;
extern bool redraw;
extern int selline;
extern int taskcount;
extern char* active_filter;
#endif
// vim: et ts=4 sw=4 sts=4
|
Use of healthcare claims to validate the Prevention of Arrhythmia Device Infection Trial cardiac implantable electronic device infection risk score Abstract Aim The Prevention of Arrhythmia Device Infection Trial (PADIT) infection risk score, developed based on a large prospectively collected data set, identified five independent predictors of cardiac implantable electronic device (CIED) infection. We performed an independent validation of the risk score in a data set extracted from U.S. healthcare claims. Methods and results Retrospective identification of index CIED procedures among patients aged ≥18 years with at least one record of a CIED procedure between January 2011 and September 2014 in a U.S. health claims database. PADIT risk factors and major CIED infections (with system removal, invasive procedure without system removal, or infection-attributable death) were identified through diagnosis and procedure codes. The data set was randomized by PADIT score into Data Set A (60%) and Data Set B (40%). A frailty model allowing multiple procedures per patient was fit using Data Set A, with PADIT score as the only predictor, excluding patients with prior CIED infection. A data set of 54 042 index procedures among 51 623 patients with 574 infections was extracted. Among patients with no history of prior CIED infection, a 1 unit increase in the PADIT score was associated with a relative 28% increase in infection risk. Prior CIED infection was associated with significant incremental predictive value (HR 5.66, P < 0.0001) after adjusting for PADIT score. A Harrell's C-statistic for the PADIT score and history of prior CIED infection was 0.76. Conclusion The PADIT risk score predicts increased CIED infection risk, identifying higher risk patients that could potentially benefit from targeted interventions to reduce the risk of CIED infection. Prior CIED infection confers incremental predictive value to the PADIT score.
What's new? Cardiac implantable electronic device (CIED) infection is associated with increased morbidity, mortality, and cost. Multivariable risk scores, to identify patients at risk of infection after primary or secondary CIED procedures, have been developed, but due to lack of external validation their use in clinical practice is not commonplace. We performed an independent validation of the recently developed Prevention of Arrhythmia Device Infection Trial (PADIT) risk score in a data set extracted from US healthcare claims, which included 54 042 index procedures among 51 623 patients with 574 infections. We report that the relative risk of a major CIED infection increases by 28% for each one unit increase in the PADIT risk score. Inclusion of prior CIED infection history as an additional risk factor increased the predictive value of the score. The PADIT risk score could potentially be used in clinical practice to identify patients who may benefit from targeted interventions to reduce infection risk during implant, upgrade or revision.
Introduction Cardiac implantable electronic devices (CIEDs) have become prevalent lifesaving and life-improving technologies, with a growing rate of implantation over the last decade. 1 Despite significant advances in surgical technique and use of peri-operative antibiotics, infection remains the most common indication for CIED extraction. 2 Device-related infection is not a benign complication; it is associated with significantly increased morbidity, mortality, and healthcare cost. Defining predictors of increased CIED infection risk can aid in the development of patient-specific peri-implant strategies to reduce infection. The recent Prevention of Arrhythmia Device Infection Trial (PADIT) study used a large, prospectively collected data set to identify independent predictors of device infection. 7,8 The PADIT infection risk score was subsequently developed and is composed of age, procedure type, renal insufficiency, immunocompromised status, and number of previous procedures (Table 1). While these observations are consistent with a meta-analysis of several smaller studies, 9 the authors advocated for validation of the risk score in an independent cohort, and consideration of predictors that were missing from the PADIT data set. In this context, we performed an independent validation of the PADIT risk score using a data set extracted from U.S. healthcare claims.
Data and patient selection A retrospective analysis using Optum's de-identified Clinformatics® Data Mart Database was performed. This database includes approximately 17-19 million annual covered lives, including both Commercial and Medicare Advantage health plan data. 10 According to the Health Insurance Portability and Accountability Act (HIPAA), the use of de-identified data did not require institutional review board (IRB) approval or a waiver of authorization. 11 The study population included patients aged ≥18 years, with at least one record of a CIED procedure between January 2011 and September 2014, and with at least 12 months of continuous health plan enrolment prior to their index CIED procedure date. A CIED index procedure was defined as any of the following: CIED implant, replacement, revision, or upgrade. A patient could have multiple CIED procedures in the claims data set and therefore contribute multiple index dates. Multiple procedures per patient were selected to ensure a representation of all procedures instead of a bias towards earlier or later procedures. The full list of CIED procedure codes for implantable pulse generator (IPG), implantable cardioverter-defibrillator (ICD), cardiac resynchronization therapy pacemaker (CRT-P) and cardiac resynchronization therapy-defibrillator (CRT-D) index procedures can be found in Supplementary Table S1. Patients were excluded if the type of device could not be determined, or if there was a record of a separate major cardiac procedure on the same date (Supplementary Table S2), or the CIED procedure was not conducted in certain places of service (inpatient hospital, outpatient hospital, ambulatory surgical centre), or if the follow-up period was <1 day.
Outcomes and measures To best capture the presence of risk factors prior to the procedure, the baseline period was defined as all health plan data for the patient prior to the index CIED procedure date. Baseline patient characteristics included age, procedure type, renal insufficiency, immunocompromised status, prior CIED procedure, and prior CIED infection history. Baseline comorbidities were defined as ≥1 corresponding diagnosis code (ICD-9-CM), in any position on a medical claim (Supplementary Table S3). Immunocompromised status was supplemented with relevant medication information within 30 days prior to the index date.
The follow-up period began the day after an outpatient index CIED procedure and the day after discharge for an inpatient procedure. The follow-up ended with the first occurrence of any of the following: 12 months after the start date of follow-up, end of insurance coverage, death, new CIED procedure, or date of major CIED infection. In order to select the most clinically meaningful events, major CIED infection was chosen as the primary outcome in this study. CIED infection was identified when either of the following conditions were met: (a) ≥1 claim with ICD-9-CM code 996.61 (infection due to cardiac device, implant, and graft) in any position on the claim; (b) ≥1 claim with a CIED implant, revision or removal code, in any position, and ≥1 claim with an infection diagnosis code (CIED-specific or bloodstream/non-CIED specific), in any position, on the same date of service. Major CIED infections (infection associated with system removal, invasive procedure without system removal, or death attributable to infection) were identified through diagnosis and procedure codes indicating a CIED infection associated with or without system removal (Supplementary Table S4), or death attributable to infection (Supplementary Table S5). Index characteristics included age, device type (IPG, ICD, CRT-P, and CRT-D), and procedure type (implant, replacement, upgrade, and revision). History of CIED infection was identified as a suspected risk factor and included in the analysis as a potential predictor, above and beyond the original PADIT risk score factors.
Data sets We chose to partition the data into two independent data sets so that the first data set could be utilized for modelling of the PADIT score as a predictor of major CIED infection, and so the second data set would be available for independent modelling should revisions to the PADIT score be deemed appropriate. The cohort data set was partitioned into Data Set A (60% of the full data set) and Data Set B (40%) consistent with the prospectively defined analysis plan. Patients were randomly assigned to either group, with randomization stratified by PADIT score and history of CIED infection at time of the patient's index procedure. If a patient had multiple procedures, then all procedures for that patient were assigned to the same data set.
Frailty model A frailty model was chosen as a robust model to account for multiple procedures per patient, to validate the known PADIT infection risk score factors, and to allow for potential unknown risk factors.
Table 1 (legend) The table shows the points for each of the 5 independent predictors (P: prior procedures; A: age; D: depressed estimated glomerular filtration rate; I: immunocompromised; and T: type of procedure). 7,8 The risk score assigns weighted points based on characteristics of the procedure and the patient's medical history (predictors) and determines the level of risk based on the total number of accumulated points, which categorizes patients into low, intermediate, and high (≥7) risk groups. Prior procedure(s): none = 0, one = 1, two or more = 3. a New pacemaker/ICD/CRT pacemaker or defibrillator or generator change; revision/upgrade includes pocket and/or lead revision and/or system upgrade, that is, with adding new lead(s). b Depressed renal function (estimated glomerular filtration rate <30 mL/min). c Receiving therapy that suppresses resistance to infection or has a disease that is sufficiently advanced to suppress resistance to infection.
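As a rough illustration of how the score described in Table 1 is used, the sketch below sums the five component points, applies the ≥7 cut-off for the high-risk group quoted above, and scales infection risk by the per-point hazard ratio of 1.28 reported in the Results. The function names, the example component values, and the low/intermediate split are placeholders of ours, not taken from the paper.

def padit_total(prior_pts, age_pts, renal_pts, immuno_pts, procedure_type_pts):
    """Sum the five PADIT component points (looked up from Table 1)."""
    return prior_pts + age_pts + renal_pts + immuno_pts + procedure_type_pts

def risk_group(score):
    # Only the high-risk threshold (score >= 7) is quoted in the text;
    # labelling everything below it "low/intermediate" is our simplification.
    return "high" if score >= 7 else "low/intermediate"

def relative_risk(score_a, score_b, hr_per_point=1.28):
    """Relative infection risk of score_a vs. score_b using the reported HR of 1.28 per point."""
    return hr_per_point ** (score_a - score_b)

# Example: two or more prior procedures (3 points); the other component values are illustrative only.
score = padit_total(3, 1, 1, 1, 2)
print(score, risk_group(score), round(relative_risk(score, 0), 2))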
A frailty model was fit in Data Set A, excluding index procedures in which the patient had a history of CIED infection at the time of the procedure since such procedures were excluded in the PADIT study. The model included PADIT risk score at the time of the procedure as the only independent variable, as well as a random frailty effect. A frailty model was also fit including procedures in which the patient had a history of CIED infection, and that model included 'Prior CIED infection' as an additional independent variable. The software package S 8.2 was used with the frailty option utilizing the AIC (Akaike Information Criteria) method. Annualized rates of cardiac implantable electronic device procedures Since no changes or transformations were applied to the PADIT risk score following analysis of Data Set A, both Data Set A and Data Set B were pooled to generate annualized rates of CIED procedures resulting in major CIED infection by PADIT risk score and history of CIED infection. If the first procedure per subject during the study period was not a de novo procedure (e.g. replacements, revisions, and upgrades), it could not be determined how many prior procedures (a component of the PADIT risk score) those patients had undergone. Thus, it is possible the PADIT risk score for these procedures was underestimated; as such, the calculation of annualized rates of procedures resulting in major CIED infection was repeated with these procedures removed from the analysis. Major cardiac implantable electronic device infection incidence rates Generation of incidence rates was performed with one procedure per patient using the pooled data set; however, to mitigate the possible underestimation of the PADIT risk score among some procedures and allow for a more heterogenous distribution of PADIT risk scores represented, the second procedure per patient was used for patients with more than one procedure. The analysis was repeated excluding non-de novo first procedures (refers to the earliest procedure found in the claims data and describes that this was not a de novo procedure, indicating that there is uncertainty about the total number of prior procedures) per patient, as for these procedures the number of prior procedures (a PADIT component) the patient had experienced at the time was not known. Concordance To determine a measure of concordance that the PADIT risk score combined with history of CIED infection offers, the full data set was restricted to one procedure per subject. If a subject had more than one procedure, the second procedure was chosen to ensure that the number of prior procedures (one or more than one) was known for accurate calculation of the PADIT risk score. A Cox proportional hazards model was fit, and Harrell's C-statistic was determined. Data sets There were 54 042 index CIED procedures among 51 623 patients in the full data set (Figure 1), with 574 total infections. These procedures were randomized at the patient level into Data Set A consisting of 32 464 index procedures among 30 974 patients with 369 infections, and Data Set B consisting of 21 578 index procedures among 20 649 patients with 205 infections. Patients in Data Sets A and B were similar with regards to the individual components of the PADIT risk score and prior CIED infection ( Table 2). Similar to the PADIT study, the most common index procedure was de novo pacemaker implant or generator replacement (51.5% in Data Set A vs. 48.9% in the PADIT study). 
Approximately 59.2% of procedures were a de novo procedure; however, multiple procedures per patient were incorporated into the modelling. Frailty model Frailty models were fit using Data Set A. The first model excluded index procedures in which the patient had a history of CIED infection. The PADIT risk score was found to be highly significant (P < 0.0001), with an estimated hazard ratio (HR) and corresponding 95% confidence interval (CI) of 1.28 (95% CI 1.22, 1.33) (Table 3); thus, the relative risk of a major CIED infection increased by 28% for each one unit increase in PADIT risk score. A second frailty model including procedures in which the patient had a history of CIED infection was fit; the HR and 95% CI for the PADIT risk score was 1.26 (95% CI 1.21-1.32). While the relative number of procedures with prior CIED infection was small, the corresponding hazard ratio for history of CIED infection was 5.66 (95% CI 4.03-7.93). Both the PADIT risk score and history of CIED infection were highly significant in this second analysis, so after accounting for history of CIED infection, the PADIT risk score still served as a predictor of increased infection risk. The random effects of the frailty estimates did not identify other factors that were predictive beyond the PADIT infection risk score variables and the history of CIED infection. Predictive value of PADIT risk score Because the number of index procedures in which the patient had a prior CIED infection made up only 1% of the Data Set A procedures, we generated the annualized rate of procedures followed by a major CIED infection (within 12 months) by PADIT risk score, separately for procedures with and without a history of CIED infection (Figure 2A). Only PADIT risk score groups in which there were at least 20 procedures were plotted, and groups with scores of 10 or more were combined. A positive monotonic relationship between PADIT risk score and annualized rate of index procedures followed by major CIED infection was generally observed among procedures in which the patient did not have a history of CIED infection. For the smaller collection of procedures in which the patient did have a history of CIED infection, the pattern was less consistent, but still trended upward with PADIT risk scores. This analysis was repeated excluding first procedures per patient in which the procedure was a revision/replacement/upgrade, for while it was certain the patient had at least one prior procedure, it could not be determined if the number of prior procedures was one or at least two. The exclusion of 18 257 index procedures still left 35 785 procedures to evaluate. The earlier trend noted among procedures in which the patient did not have a prior CIED infection remained (Figure 2B). This method was used to further generate incidence rates of index procedures followed by major CIED infections using the full cohort (Figure 3A) and the reduced cohort excluding non-de novo first procedures per patient (Figure 3B), also showing an increased rate of infection incidence with PADIT risk score. In both cases, procedures in which the patient had a prior history of CIED infection were excluded. The incidence rates for higher PADIT risk scores were more elevated in the analysis of the reduced cohort.
Figure 1 Algorithm for initial or replacement CIED procedural group classification. Medical history considered was defined as that occurring within a minimum of 12 months prior to the index CIED procedure. De novo implants or generator replacements of pacemakers, ICDs, or CRT devices were assigned to the Pacemaker, ICD, and CRT categories, respectively, in the determination of the PADIT score. *Data and outlier cleaning process: patients with more than 10 index procedures/dates were excluded; records in which the index procedure was removal only were excluded; records with multiple index procedure dates under the same claims ID were considered as one index procedure; duplicated or overlapping records were combined and considered as one index procedure; if patients had multiple index procedures during the same inpatient admission, only the last one was considered; and so on. **Records with missing values for key variables in the raw data, for example the place of service variable. ***Place of service in inpatient hospital, outpatient hospital, or ambulatory surgical centre. CIED, cardiac implantable electronic device; CRT, cardiac resynchronization therapy device; ICD, implantable cardiac defibrillator.
Concordance Because the modelling did not result in altering the PADIT risk score, the full data set (Data Sets A and B) was combined and restricted to one procedure per subject to assess concordance of PADIT score and history of CIED infection using a Cox proportional hazards model. The resulting Harrell's C-statistic was 0.76.
Discussion In the current analysis we validated the predictive value of the PADIT risk score using U.S. healthcare claims data, and confirmed that it predicts increased CIED infection risk, identifying higher risk patients who may derive benefit from targeted interventions to reduce infection risk. Our validation confirms that the PADIT risk score identifies an accurate set of strong predictive risk factors for CIED infection. The risk of a major CIED infection increased by 28% for each one unit increase in PADIT risk score in a linear fashion. In this analysis, we also accounted for history of CIED infection, a variable previously not included in the PADIT risk score model. While the PADIT risk score still served as a predictor of increased infection risk, inclusion of prior CIED infection history conferred additional predictive value to the risk score. CIED infection is a major complication associated with significant morbidity, mortality and costs. Although the health-economic and financial cost of infection is high, we should bear in mind that the greatest impact of infection is borne by the patient, as 3-year mortality is up to 50%. 12 In view of this, infection risk is an important consideration in patient selection for CIED therapy and predictive risk scores are likely to be helpful to physicians and patients in shared decision making about device therapy and peri-operative management. In order to balance the benefit of additional measures with the costs of any proposed interventions, accurate estimation of risk is essential.
Figure 2 (A) Annualized rate of major CIED infections by PADIT score. The blue line represents a population similar to the original PADIT risk score analysis (no prior CIED infections) and the orange line represents patients with prior CIED infections. The inset depicts the original PADIT risk score analysis scaled up for clarity. Rates are unadjusted for patient, as multiple index procedures per patient were included; only one major CIED infection was allowed per index procedure. (B) Annualized rate of major CIED infections by PADIT score excluding non-de novo first procedures. In the inset, the blue line represents a population similar to the original PADIT risk score analysis (no prior CIED infections) and the orange line represents patients with prior CIED infections. The main figure depicts the original PADIT risk score analysis scaled up for clarity. Non-de novo first procedures were excluded as the number of prior procedures was uncertain owing to lack of this component of the patient history; this analysis was restricted to index procedures for which sufficient patient history was available to determine a PADIT score. The sample sizes with PADIT scores 10-13 were combined due to low numbers. CIED, cardiac implantable electronic device; PADIT, Prevention of Arrhythmia Device Infection Trial.
Figure 3 (A) Incidence rates of major CIED infection by PADIT score excluding procedures with history of CIED infection. One index procedure was allowed per patient; for patients with more than one procedure the second procedure was used to mitigate the possible underestimation of the PADIT score due to incomplete patient history. (B) Incidence rates of major CIED infection by PADIT score excluding non-de novo first procedures and procedures in patients with prior CIED infection. Non-de novo first procedures were excluded as the number of prior procedures was uncertain owing to a lack of full patient history; this analysis was restricted to index procedures for which sufficient patient history was available to determine a PADIT score. CIED, cardiac implantable electronic device; PADIT, Prevention of Arrhythmia Device Infection Trial.
The PADIT risk score 7 is a novel CIED infection risk prediction score that identifies significant predictors of device infection (age, procedure type, renal insufficiency, immunocompromised status, and prior CIED procedure), which are largely consistent with observations from a recent meta-analysis 9 and a Danish device cohort registry study 13 evaluating risk factors for CIED infection. A further validation of the predictive utility of the PADIT risk score in an independent data set was determined to be warranted to confer generalizability and validity. Several reports in the literature have examined risk factors for CIED infection; however, the majority of these have relatively small sample sizes and low infection rates, 9 prompting the need for studies with more representative sample sizes. 14 The sample size and infection rates in the PADIT trial provide adequate power for such analyses to identify a smaller but more predictive set of risk factors. Heterogeneity among studies that previously attempted to identify predictors of infection resulted in a large discordant list of potential risk factors; however, a recent meta-analysis by Polyzos et al. synthesized these findings to identify a smaller set of risk factors that proved valid given a higher quality standard of evidence. 9 Consistent with the meta-analysis by Polyzos et al., procedure type, renal insufficiency, immunocompromised status, and previous procedures were identified as significant predictors of infection in the PADIT study.
Interestingly, additional patient-specific variables associated with infection that were identified in the meta-analysis were not identified in the PADIT risk score, such as diabetes, which may have been assumed to be a risk factor based on clinically plausible mechanisms and not based on direct evidence of infection causality. In this analysis, index CIED procedures identified in the claims data set were stratified by PADIT risk score into two data sets. Data Set A was found to be concordant with the PADIT risk score and these modelling analyses indicated that predictive capability of the PADIT risk score was highly significant, as was history of CIED infection. The Frailty model showed that a one unit increase in the PADIT risk score predicts higher infection risk (28%) in the claims data set. Prior CIED infection was associated with a dramatic increase in overall infection rate ( Figure 2A), with strong additional predictive value after adjusting for PADIT risk score. Thus, we recommend accounting for history of CIED infection as an independent variable supplementary to the PADIT risk score. Because the number of index procedures in which the patient had a prior CIED infection made up only 1% of the Data Set A procedures, we generated the annualized rate of procedures followed by a major CIED infection (within 12 months) by PADIT risk score, separately for procedures with and without a history of CIED infection. A positive monotonic relationship between PADIT risk score and annualized rate of index procedures followed by major CIED infection was observed among procedures where the patient did not have a history of CIED infection. Device infection risk estimates can be useful for understanding the benefit of conventional antibiotic therapy vs. prophylactic infection prevention strategies aimed at reducing the risk of CIED infection. Preventive measures were recently described in a consensus document. 15 However, accurate estimation of risk is central to ensuring that enhanced infection prevention measures are directed towards the right patients. Recently, the World-wide Randomized Antibiotic Envelope Infection Prevention Trial (WRAP-IT) 16 found a 40% relative risk reduction in major CIED infection with the use of the absorbable antibacterial envelope (TYRX TM, Medtronic, Mounds View, MI, USA). This effect was driven by a significant 61% reduction in pocket infections. The previously described consensus document provided a 'green heart' recommendation for the antibiotic envelope in high-risk situations. 15 It is reasonable to consider using the PADIT risk score to identify procedures of high risk in which the envelope should be considered. Incorporation of the PADIT risk score with the inclusion of prior CIED infection in study inclusion criteria of future device infection prevention trials can also provide estimates of event rates and sample sizes needed to detect a difference in relative risk reduction between control and experimental study arms. Limitations The PADIT risk score was based on a large prospective data set subject to inclusion/exclusion criteria, and this validation effort was based on a retrospective analysis of claims data so there may have been differences in the population studied. However, since these results are consistent with the PADIT results, this may represent a corroboration of clinical trial results in real-world data. 
In addition, the purpose of claims databases is primarily for billing rather than clinical determinations, it is possible that the coding may be incorrect or not fully representative of clinical circumstances. However, the use of claims data sets for research purposes is a well-established practice in the literature, 17 and the overall concordance of these results with the PADIT findings suggests that the data are generally appropriate for the purposes of this analysis. Although healthcare claims data sets are widely used to address specific research questions, we acknowledge that international classification of disease diagnosis codes could be augmented by linkage with electronic health records and laboratory databases to provide more granular detail concerning continuous clinical parameters, which may be subject to high variability (i.e. estimated glomerular filtration rate). The study follow-up period was limited to 12 months and is not representative of long-term infection risk, however, this length of follow-up is consistent with recent clinical trials. 7,16 It is possible that there are additional predictors of infection, for example specific procedural characteristics that are important and were not included in this analysis. Finally, the definition of major CIED infection in this study was derived from the primary endpoint of the WRAP-IT trial and may have differed from previous definitions of CIED infection; however, CIED infections are highly consequential and easily identifiable thus robust across minor variations in definition. Conclusion In the largest external validation of a CIED risk score, the PADIT risk score predicts increased CIED infection risk, identifying higher risk patients that can benefit from targeted interventions to reduce the risk of CIED infection. Prior CIED infection brings additional predictive value to the PADIT score. |
If you purchased your Nintendo 3DS before the price drop and took time to log into the eShop at least once, you’re a Nintendo 3DS Ambassador. That entitles you to 20 free games. Nintendo already released 10 games from the NES library, and they had said they would release 10 Game Boy Advance titles before the end of 2011.
According to GamesRadar, that’s happening this Friday. If so, we’ll be sure to update you with instructions once they are available.
During the second month of my working here at TechnoBuffalo, I ranked my three favorite Nintendo handheld systems. The winner was The NES Classic Game Boy Advance SP, a variant of Nintendo’s Game Boy Advance. Part of the reason for my selecting that portable in particular came from the GBA’s stunning software library.
Now, early 3DS adopters will get a taste of what that machine had to offer…at no cost beyond buying the 3DS before the price drop.
Here’s the roster of games Nintendo 3DS Ambassadors will be able to download for free:
F-Zero: Maximum Velocity
Yoshi’s Island: Super Mario Advance 3
The Legend of Zelda: The Minish Cap
Fire Emblem: The Sacred Stones
Kirby & The Amazing Mirror
Mario Kart: Super Circuit
Mario vs. Donkey Kong
Metroid Fusion
Wario Land 4
WarioWare, Inc.: Mega Microgames!
If you owned a Game Boy Advance, you’ll likely recognize that these titles are among the best for the device. There are some marquee absences, like Mario & Luigi: Superstar Saga or Pokémon Ruby/Sapphire, but these 10 games are stellar.
Wario Land 4 is regarded as one of the system’s best platformers, Fire Emblem is one of Nintendo‘s greatest RPGs (and exceptionally long), The Minish Cap is a highlight amongst Link’s adventures and F-Zero: Maximum Velocity is probably one of the best portable racers ever.
Tip o’ the hat to PressTheButtons.com, where we first saw the news.
Does this make you glad you landed your Nintendo 3DS near launch?
[via GamesRadar] |
The Structure Studies on Si-N Thin Layers Structure studies performed using Grazing Incidence X-ray Diffraction (GIXD) geometry at different incidence angles indicated that the Si-N layers are non-homogeneous and that their structure depends on the penetration depth. The layers close to the substrate (incidence angles of 2° and 1°) show the presence of Si3N4, SiO2, and SiC phases, as well as an amorphous Si-N phase. The layers near the surface (0.5°, 0.25°, and 0.15°) are poorer in Si-N phases; only the Si3N4 and SiO2 phases are observed there. The obtained results confirm the non-homogeneity of the layers.
"""
Python Interface to UpCloud's API.
"""
# flake8: noqa
from __future__ import unicode_literals
from __future__ import absolute_import
__version__ = '0.3.9'
__author__ = '<NAME>'
__author_email__ = '<EMAIL>'
__license__ = 'MIT'
__copyright__ = 'Copyright (c) 2015 Elias Nygren'
from upcloud_api.upcloud_resource import UpCloudResource
from upcloud_api.errors import UpCloudClientError, UpCloudAPIError
from upcloud_api.constants import OperatingSystems, ZONE
from upcloud_api.storage import Storage
from upcloud_api.ip_address import IPAddress
from upcloud_api.server import Server, login_user_block
from upcloud_api.firewall import FirewallRule
from upcloud_api.tag import Tag
from upcloud_api.cloud_manager.cloud_manager import CloudManager
|
/*
* Copyright 2000-2016 Vaadin Ltd.
*
* Licensed under the Apache License, Version 2.0 (the "License"); you may not
* use this file except in compliance with the License. You may obtain a copy of
* the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations under
* the License.
*/
package com.vaadin.tests.components.grid;
import com.vaadin.annotations.Widgetset;
import com.vaadin.data.provider.GridSortOrder;
import com.vaadin.server.VaadinRequest;
import com.vaadin.shared.data.sort.SortDirection;
import com.vaadin.tests.components.AbstractTestUI;
import com.vaadin.tests.data.bean.Person;
import com.vaadin.ui.Button;
import com.vaadin.ui.Grid;
import com.vaadin.ui.renderers.NumberRenderer;
@Widgetset("com.vaadin.DefaultWidgetSet")
public class GridSortIndicator extends AbstractTestUI {
@Override
protected void setup(VaadinRequest request) {
final Grid<Person> grid = getGrid();
addComponent(grid);
addComponent(new Button("Sort first", event -> grid
.sort(grid.getColumn("name"), SortDirection.ASCENDING)));
addComponent(new Button("Sort both",
event -> grid
.setSortOrder(GridSortOrder.asc(grid.getColumn("name"))
.thenAsc(grid.getColumn("age")))));
}
private final Grid<Person> getGrid() {
Grid<Person> grid = new Grid<>();
grid.addColumn(Person::getFirstName).setId("name");
grid.addColumn(Person::getAge, new NumberRenderer()).setId("age");
grid.setItems(createPerson("a", 4), createPerson("b", 5),
createPerson("c", 3), createPerson("a", 6),
createPerson("a", 2), createPerson("c", 7),
createPerson("b", 1));
return grid;
}
private Person createPerson(String name, int age) {
Person person = new Person();
person.setFirstName(name);
person.setAge(age);
return person;
}
@Override
public String getTestDescription() {
return "UI to test server-side sorting of grid columns "
+ "and displaying sort indicators";
}
@Override
public Integer getTicketNumber() {
return 17440;
}
}
|
import numpy as np
import pts_utils
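# a: 16384 random 3D points; b: 64 boxes described by 7 parameters each.
# pts_in_boxes3d is expected to flag which points fall inside which boxes (see the print below).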
a = np.random.random((16384, 3)).astype('float32')
b = np.random.random((64, 7)).astype('float32')
c = pts_utils.pts_in_boxes3d(a, b)
print(a, b, c, c.shape, np.sum(c))
|
PHOENIX - The wife of Arizona Diamondbacks CEO Derrick Hall has been diagnosed with an aggressive form of breast cancer.
"I struggled with sharing this one and was not willing to do so unless my wife Amy was comfortable with me doing so," Hall wrote in a letter sent to the organization Sunday. "We feel it is important because I share everything with you, my Team Players and family."
Hall says they learned of the aggressive form of "triple negative breast cancer" a little over a week ago. The treatment, he says, requires four to six months of chemotherapy, followed by surgery and radiation.
"The process is daunting, but the positive is just how responsive this cancer is to chemotherapy," he wrote.
The Halls are no strangers to this battle. The Dbacks CEO was diagnosed with prostate cancer in 2011.
"These challenges that our family continues to face have only, and will only, make us stronger, Hall wrote. "Amy's fight, love for her family, and passion for our D-backs will help her triumph during this test."
For now, the Halls will be taking a vacation before the treatment schedule begins. In his letter, Hall thanked everyone for the well-wishes and thoughts, and he said they are looking forward to feeling all of the prayers and positive energy in the coming months.
"She will beat this and chalk up another much-needed victory in this season's win column," Hall wrote. "She may lose hair, but she will win life. And we will be two, young, lovingly married, cancer survivors."
Our thoughts are with the Hall and Dbacks family.
Copyright 2016 KPNX |
A new state law will allow Pennsylvanians to have something that residents of many other states have enjoyed for years.
The Pocono Record reported that House Bill 542, which was quietly passed on Oct. 30, permits Pennsylvania residents to buy and use aerial fireworks.
Fireworks will be subject to a 12 percent tax in addition to Pennsylvania's 6 percent state sales tax, with the money going to a fund for first responders, according to the Record report.
For years, the only fireworks that state residents could legally purchase were items such as sparklers, fountains and novelty items.
Samsung occupies an interesting place in the smartphone market — it's one of the biggest smartphone vendors in the world, and it also manufactures key components, including displays and processors. There have been indications that Apple wants to move away from relying on Samsung for components for future iOS devices, and now an HTC executive is speaking out about how Samsung uses its power in the component business to gain leverage over the competition. Way back in 2010, HTC's Nexus One and Desire smartphones used Samsung's AMOLED displays — but a report from Focus Taiwan says that Samsung "strategically declined" to offer up its display tech to HTC for follow-up smartphones.
"We found that key component supply can be used as a competitive weapon," said Jack Tong, president of HTC North Asia. Of course, HTC's devices in recent years have been widely praised for having excellent, non-Samsung displays — but the company has also suffered through a number of component shortages, While the HTC One X and its successor the HTC One both have best-in-class displays when they were released, the company has had a hard time keeping up with Samsung's hugely successful Galaxy lineup — though that's probably due as much to Samsung's superior marketing budget as it is to the occasional supply issues HTC has dealt with. |
System for S-transform Realization The properties of both the short-time Fourier transform (STFT) and the wavelet transform are combined in the S-transform. On the one hand, preservation of the phase information of a signal as in the STFT is achieved, while, on the other hand, the variable resolution as in the wavelet transform is provided. To extend areas of its practical applications, in this paper, we propose an efficient hardware realization of the S-transform. The proposed solution is implemented on a field-programmable gate array (FPGA) circuit and is designed in a multiple-clock-cycle manner. The design is verified on a test signal corrupted by white Gaussian noise. |
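For readers unfamiliar with the transform being implemented, the following is a minimal NumPy sketch of the discrete S-transform (Stockwell transform); it illustrates only the frequency-domain definition, not the FPGA architecture proposed in the paper, and the formulation details, variable names, and test signal are our own assumptions.

import numpy as np

def s_transform(x):
    """Naive discrete S-transform of a 1-D signal x; rows are frequency voices, columns are time."""
    N = len(x)
    X = np.fft.fft(x)
    freqs = np.fft.fftfreq(N) * N                  # integer frequency indices in FFT order
    S = np.zeros((N // 2 + 1, N), dtype=complex)
    S[0, :] = np.mean(x)                           # zero-frequency voice is the signal mean
    for n in range(1, N // 2 + 1):
        # Frequency-domain Gaussian window whose width scales with frequency n,
        # giving the wavelet-like variable resolution while keeping phase as in the STFT.
        window = np.exp(-2.0 * np.pi**2 * freqs**2 / n**2)
        # Shift the spectrum by n, localize with the window, then invert.
        S[n, :] = np.fft.ifft(np.roll(X, -n) * window)
    return S

# Example: a synthetic signal corrupted by white Gaussian noise (the paper's test setup;
# the two tones chosen here are arbitrary).
t = np.arange(256) / 256.0
sig = np.sin(2 * np.pi * 20 * t) + np.sin(2 * np.pi * 60 * t) + 0.5 * np.random.randn(256)
print(s_transform(sig).shape)   # (129, 256): frequency voices x time samples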
Influence of plant growth regulators and salicylic acid on the production of some secondary metabolites in callus and cell suspension culture of Satureja sahendica Bornm. The impact of combinations of plant growth regulators (PGRs) on callus culture of Satureja sahendica Bornm. was investigated. In nodal explants, the response of secondary metabolite production to different concentrations of PGRs was analyzed regarding the presence and absence of polyvinylpyrrolidone (PVP). The explants were cultured on MS media in presence of auxins (2,4-dichlorophenoxyacetic acid and naphthylacetic acid) and cytokinins (thidiazuron and kinetin); which were used in equal concentrations of 0.5, 1, and 2 mg l-1. The treatment of 2 mg l-1 2,4-D + 2 mg l-1 Kin (MD3) led to the highest production of total phenolics (4.303 ± 0.449 mg GAE g-1) and flavonoids (24.903 ± 7.016 mg QE g-1). Moreover, the effect of salicylic acid (SA) on the production of secondary metabolites in cell suspension culture of Satureja sahendica was evaluated. The cell suspension culture was established by culturing the nodal-derived friable callus in the liquid medium containing different concentrations of SA (0, 100, 150, 200 M). An inverse relationship exists between the fresh mass and secondary metabolites contents. In addition, there was a significant difference among concentrations of SA in the production of total phenolics and flavonoid compounds. SA enhances secondary metabolites production and decreases cell fresh mass. |
//
// SMMapFilterViewController.h
// Spot Maps
//
// Created by JG on 4/13/15.
// Copyright (c) 2015 TeleSpot. All rights reserved.
//
#import <UIKit/UIKit.h>
@interface SMMapFilterViewController : UIViewController<UITableViewDelegate, UITableViewDataSource>
@end
|
Novel 3D graphene foam-polyaniline-carbon nanotubes supercapacitor prepared by electropolymerization In this work, we present a novel supercapacitor (SC) material based on a 3D graphene foam-polyaniline (Pani)-carbon nanotubes (CNTs) composite for supercapacitor applications. Graphene foam was fabricated by chemical vapor deposition (CVD) on Ni foam using an acetylene carbon source and hydrogen carrier gas at 700°C for 3 minutes. Next, the foam was etched in 3M HCl for an hour to remove most of the Ni support. Multi-wall CNT powder was then dispersed in 1M HCl, and 0.2 M aniline monomer was added, stirred and filtered to remove non-dispersed CNTs. Electro-polymerization in the CNTs-aniline monomer solution was then conducted at a working electrode potential of 0.55 V vs. Ag/AgCl. SEM and Raman characterization confirmed the incorporation of CNTs in the Pani/graphene foam network, with a number of nanowire features appearing on the graphene foam surface and dominant carbon D and G peaks. SC performance was then tested by cyclic voltammetry (CV) and galvanostatic charge-discharge (GCD) measurements in 2M H2SO4 electrolyte. CV results showed that Pani's redox peaks were broadened due to the presence of CNTs, indicating enhanced pseudocapacitance. From GCD measurements, it is found that CNTs-Pani-graphene foam exhibits a high specific capacitance of 920 Fg-1 at a specific current of 0.8 Ag-1, which is more than twice that of Pani-graphene foam (430 Fg-1).
// Evaluate parameters in reverse order.
static void evaluateParameters(deSignature signature, deDatatype datatype,
deExpression parameters, bool isMethodCall) {
deExpression parameter;
deExpression firstNamedParameter;
deBlock block;
bool isStruct = datatype != deDatatypeNull && deDatatypeGetType(datatype) == DE_TYPE_STRUCT;
if (isStruct) {
block = deFunctionGetSubBlock(deDatatypeGetFunction(datatype));
} else {
block = deSignatureGetBlock(signature);
}
uint32 numParamVars = countBlockParamVars(block);
uint32 numParams = countPositionalParams(parameters, &firstNamedParameter);
deVariable paramVar = findLastParamVar(block);
uint32 effectiveNumParamVars = numParamVars;
if (signature != deSignatureNull) {
if (deFunctionGetType(deSignatureGetFunction(signature)) == DE_FUNC_CONSTRUCTOR) {
effectiveNumParamVars--;
} else if (isMethodCall) {
numParams++;
}
}
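// Push values for trailing parameters that were not supplied positionally, using a matching
// named argument when present and the declared default initializer otherwise.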
if (effectiveNumParamVars > numParams) {
for (int32 xParam = effectiveNumParamVars - 1; xParam >= (int32)numParams; xParam--) {
if (signature == deSignatureNull || deSignatureParamInstantiated(signature, xParam)) {
utSym name = deVariableGetSym(paramVar);
deExpression namedParameter = deFindNamedParameter(firstNamedParameter, name);
if (namedParameter != deExpressionNull) {
generateExpression(deExpressionGetLastExpression(namedParameter));
} else {
deExpression defaultValue = deVariableGetInitializerExpression(paramVar);
generateExpression(defaultValue);
}
}
paramVar = deVariableGetPrevBlockVariable(paramVar);
}
}
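// Evaluate the explicitly supplied positional parameters, also in reverse order.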
uint32 xParam = numParamVars;
if (firstNamedParameter != deExpressionNull) {
parameter = deExpressionGetPrevExpression(firstNamedParameter);
} else {
parameter = deExpressionGetLastExpression(parameters);
}
while (parameter != deExpressionNull) {
xParam--;
if (!deExpressionIsType(parameter)) {
generateExpression(parameter);
if (signature == deSignatureNull || deSignatureParamInstantiated(signature, xParam)) {
llElement *elementPtr = topOfStack();
if (deVariableConst(paramVar) || isStruct) {
derefElement(elementPtr);
} else {
utAssert(llElementIsRef(*elementPtr));
}
} else {
popElement(false);
}
}
paramVar = deVariableGetPrevBlockVariable(paramVar);
parameter = deExpressionGetPrevExpression(parameter);
}
} |
import os
import argparse
import random
import math
import datetime
import textattack
import transformers
import datasets
import pandas as pd
from configs import DATASET_CONFIGS
LOG_TO_WANDB = True
def filter_fn(x):
"""Filter bad samples."""
if x["label"] == -1:
return False
if "premise" in x:
if x["premise"] is None or x["premise"] == "":
return False
if "hypothesis" in x:
if x["hypothesis"] is None or x["hypothesis"] == "":
return False
return True
def main(args):
if args.train not in DATASET_CONFIGS:
raise ValueError(f"Unknown dataset {args.train!r}; expected one of {sorted(DATASET_CONFIGS.keys())}.")
dataset_config = DATASET_CONFIGS[args.train]
if "local_path" in dataset_config:
train_dataset = datasets.load_dataset(
"csv",
data_files=os.path.join(dataset_config["local_path"], "train.tsv"),
delimiter="\t",
)["train"]
else:
train_dataset = datasets.load_dataset(
dataset_config["remote_name"], split="train"
)
if "local_path" in dataset_config:
eval_dataset = datasets.load_dataset(
"csv",
data_files=os.path.join(dataset_config["local_path"], "val.tsv"),
delimiter="\t",
)["train"]
else:
eval_dataset = datasets.load_dataset(
dataset_config["remote_name"], split="validation"
)
if args.augmented_data:
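# Mix a randomly sampled fraction (--pct-of-augmented) of the augmented examples
# into the original training split.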
pd_train_dataset = train_dataset.to_pandas()
feature = train_dataset.features
augmented_dataset = datasets.load_dataset(
"csv",
data_files=args.augmented_data,
delimiter="\t",
features=feature,
)["train"]
augmented_dataset = augmented_dataset.filter(filter_fn)
sampled_indices = list(range(len(augmented_dataset)))
random.shuffle(sampled_indices)
sampled_indices = sampled_indices[
: math.ceil(len(sampled_indices) * args.pct_of_augmented)
]
augmented_dataset = augmented_dataset.select(
sampled_indices, keep_in_memory=True
).to_pandas()
train_dataset = datasets.Dataset.from_pandas(
pd.concat((pd_train_dataset, augmented_dataset))
)
train_dataset = train_dataset.filter(lambda x: x["label"] != -1)
eval_dataset = eval_dataset.filter(lambda x: x["label"] != -1)
train_dataset = textattack.datasets.HuggingFaceDataset(
train_dataset,
dataset_columns=dataset_config["dataset_columns"],
label_names=dataset_config["label_names"],
)
eval_dataset = textattack.datasets.HuggingFaceDataset(
eval_dataset,
dataset_columns=dataset_config["dataset_columns"],
label_names=dataset_config["label_names"],
)
if args.model_type == "bert":
pretrained_name = "bert-base-uncased"
elif args.model_type == "roberta":
pretrained_name = "roberta-base"
if args.model_chkpt_path:
model = transformers.AutoModelForSequenceClassification.from_pretrained(
args.model_chkpt_path
)
else:
num_labels = dataset_config["labels"]
config = transformers.AutoConfig.from_pretrained(
pretrained_name, num_labels=num_labels
)
model = transformers.AutoModelForSequenceClassification.from_pretrained(
pretrained_name, config=config
)
tokenizer = transformers.AutoTokenizer.from_pretrained(
pretrained_name, use_fast=True
)
model_wrapper = textattack.models.wrappers.HuggingFaceModelWrapper(model, tokenizer)
if args.attack == "a2t":
attack = textattack.attack_recipes.A2TYoo2021.build(model_wrapper, mlm=False)
elif args.attack == "a2t_mlm":
attack = textattack.attack_recipes.A2TYoo2021.build(model_wrapper, mlm=True)
else:
raise ValueError(f"Unknown attack {args.attack}.")
training_args = textattack.TrainingArgs(
num_epochs=args.num_epochs,
num_clean_epochs=args.num_clean_epochs,
attack_epoch_interval=args.attack_epoch_interval,
parallel=args.parallel,
per_device_train_batch_size=args.per_device_train_batch_size,
gradient_accumulation_steps=args.grad_accumu_steps,
num_warmup_steps=args.num_warmup_steps,
learning_rate=args.learning_rate,
num_train_adv_examples=args.num_adv_examples,
attack_num_workers_per_device=1,
query_budget_train=200,
checkpoint_interval_epochs=args.checkpoint_interval_epochs,
output_dir=args.model_save_path,
log_to_wandb=LOG_TO_WANDB,
wandb_project="nlp-robustness",
load_best_model_at_end=True,
logging_interval_step=10,
random_seed=args.seed,
)
trainer = textattack.Trainer(
model_wrapper,
"classification",
attack,
train_dataset,
eval_dataset,
training_args,
)
trainer.train()
if __name__ == "__main__":
def int_or_float(v):
try:
return int(v)
except ValueError:
return float(v)
parser = argparse.ArgumentParser()
parser.add_argument(
"--train",
type=str,
required=True,
choices=sorted(list(DATASET_CONFIGS.keys())),
help="Name of dataset for training.",
)
parser.add_argument(
"--augmented-data",
type=str,
required=False,
default=None,
help="Path of augmented data (in TSV).",
)
parser.add_argument(
"--pct-of-augmented",
type=float,
required=False,
default=0.2,
help="Percentage of augmented data to use.",
)
parser.add_argument(
"--eval",
type=str,
required=True,
choices=sorted(list(DATASET_CONFIGS.keys())),
help="Name of huggingface dataset for validation",
)
parser.add_argument(
"--parallel", action="store_true", help="Run training with multiple GPUs."
)
parser.add_argument(
"--model-type",
type=str,
required=True,
choices=["bert", "roberta"],
help='Type of model (e.g. "bert", "roberta").',
)
parser.add_argument(
"--model-save-path",
type=str,
default="./saved_model",
help="Directory to save model checkpoint.",
)
parser.add_argument(
"--model-chkpt-path",
type=str,
default=None,
help="Directory of model checkpoint to resume from.",
)
parser.add_argument(
"--num-epochs", type=int, default=4, help="Number of epochs to train."
)
parser.add_argument(
"--num-clean-epochs", type=int, default=1, help="Number of clean epochs"
)
parser.add_argument(
"--num-adv-examples",
type=int_or_float,
help="Number (or percentage) of adversarial examples for training.",
)
parser.add_argument(
"--attack-epoch-interval",
type=int,
default=1,
help="Attack model to generate adversarial examples every N epochs.",
)
parser.add_argument(
"--attack", type=str, choices=["a2t", "a2t_mlm"], help="Name of attack."
)
parser.add_argument(
"--per-device-train-batch-size",
type=int,
default=8,
help="Train batch size (per GPU device).",
)
parser.add_argument(
"--learning-rate", type=float, default=5e-5, help="Learning rate"
)
parser.add_argument(
"--num-warmup-steps", type=int, default=500, help="Number of warmup steps."
)
parser.add_argument(
"--grad-accumu-steps",
type=int,
default=1,
help="Number of gradient accumulation steps.",
)
parser.add_argument(
"--checkpoint-interval-epochs",
type=int,
default=None,
help="If set, save model checkpoint after every `N` epochs.",
)
parser.add_argument("--seed", type=int, help="Random seed")
args = parser.parse_args()
main(args)
|
March was a battle between one very pissed off Spartan, a girl named Lightning and too many Pokemon to count. Who came out on top in U.S. video game sales for the month of March?
According to new figures from the NPD Group, God of War III was last month's bestselling individual video game, moving more than 1.1 million copies of the ultra-violent PlayStation 3 adventure. Kratos had some competition though, in the form of multiplatform releases like Final Fantasy XIII and Battlefield: Bad Company 2. Both of those titles, when combining sales from their respective platforms, managed to sneak past Kratos while he was beheading and amputating the gods of Olympus.
March's biggest games, however, were the double release of Pokemon HeartGold and SoulSilver. The Nintendo DS remakes sold an impressive 1.78 million units between them.
Not to take anything away from God of War III's impressive first-month performance, mind you, which was 32% better than that of God of War II when it bowed on the PlayStation 2 in March 2007.
Want to see the raw numbers from NPD? Here you go.
All told, U.S. consumers spent an impressive $875.3 million on video games in March. The good news? That's a 10% uptick from March 2009.
In other good news for video game publishers, Final Fantasy XIII saw "the best launch for any item in the franchise," while Battlefield: Bad Company 2 outsold its predecessor by 170%.
Call of Duty: Modern Warfare 2 is now the second-best-selling game of all time, after Wii Play, according to NPD. |
FORMATION OF PROFESSIONAL COMPETENCIES THROUGH THE APPLICATION OF PROJECT TRAINING A survey of leading specialists among employers was carried out. Analysis of the employers' responses revealed shortcomings in the training of specialists at the university, the main one being graduates' lack of preparation for project work as part of a team. Accordingly, project-based learning was introduced at the Tyumen Industrial University. The curriculum for future technical specialists was supplemented with the disciplines Project Activity, Team Building, Stress Management, Time Management, Fundamentals of Oratory, and others. A system of practical training for students in the divisions of industrial enterprises was also designed. |
# Reads n and computes (n! - 2**(n - 1)) modulo 1e9+7.
n = int(input())
MOD = 10 ** 9 + 7
# n! modulo MOD
fact = 1
for i in range(1, n + 1):
    fact = fact * i
    fact %= MOD
# Adding MOD before the subtraction keeps the intermediate result non-negative;
# e.g. for n = 3: fact = 6, 2**(n - 1) = 4, and the program prints 2.
res = fact + MOD - pow(2, n - 1, MOD)
print(res % MOD) |
/* AUTO-GENERATED FILE. DO NOT MODIFY.
*
* This class was automatically generated by the
* java mavlink generator tool. It should not be modified by hand.
*/
// MESSAGE COMMAND_ACK PACKING
package com.MAVLink.common;
import com.MAVLink.MAVLinkPacket;
import com.MAVLink.Messages.MAVLinkMessage;
import com.MAVLink.Messages.MAVLinkPayload;
/**
* Report status of a command. Includes feedback whether the command was executed. The command microservice is documented at https://mavlink.io/en/services/command.html
*/
public class msg_command_ack extends MAVLinkMessage {
public static final int MAVLINK_MSG_ID_COMMAND_ACK = 77;
public static final int MAVLINK_MSG_LENGTH = 10;
private static final long serialVersionUID = MAVLINK_MSG_ID_COMMAND_ACK;
/**
* Command ID (of acknowledged command).
*/
public int command;
/**
* Result of command.
*/
public short result;
/**
* WIP: Also used as result_param1, it can be set with an enum containing the errors reasons of why the command was denied, or the progress percentage when result is MAV_RESULT_IN_PROGRESS (255 if the progress is unknown).
*/
public short progress;
/**
* WIP: Additional parameter of the result, example: which parameter of MAV_CMD_NAV_WAYPOINT caused it to be denied.
*/
public int result_param2;
/**
* WIP: System ID of the target recipient. This is the ID of the system that sent the command for which this COMMAND_ACK is an acknowledgement.
*/
public short target_system;
/**
* WIP: Component ID of the target recipient. This is the ID of the system that sent the command for which this COMMAND_ACK is an acknowledgement.
*/
public short target_component;
/**
* Generates the payload for a mavlink message for a message of this type
* @return
*/
@Override
public MAVLinkPacket pack() {
MAVLinkPacket packet = new MAVLinkPacket(MAVLINK_MSG_LENGTH,isMavlink2);
packet.sysid = 255;
packet.compid = 190;
packet.msgid = MAVLINK_MSG_ID_COMMAND_ACK;
packet.payload.putUnsignedShort(command);
packet.payload.putUnsignedByte(result);
if (isMavlink2) {
packet.payload.putUnsignedByte(progress);
packet.payload.putInt(result_param2);
packet.payload.putUnsignedByte(target_system);
packet.payload.putUnsignedByte(target_component);
}
return packet;
}
/**
* Decode a command_ack message into this class fields
*
* @param payload The message to decode
*/
@Override
public void unpack(MAVLinkPayload payload) {
payload.resetIndex();
this.command = payload.getUnsignedShort();
this.result = payload.getUnsignedByte();
if (isMavlink2) {
this.progress = payload.getUnsignedByte();
this.result_param2 = payload.getInt();
this.target_system = payload.getUnsignedByte();
this.target_component = payload.getUnsignedByte();
}
}
/**
* Constructor for a new message, just initializes the msgid
*/
public msg_command_ack() {
this.msgid = MAVLINK_MSG_ID_COMMAND_ACK;
}
/**
* Constructor for a new message, initializes msgid and all payload variables
*/
public msg_command_ack( int command, short result, short progress, int result_param2, short target_system, short target_component) {
this.msgid = MAVLINK_MSG_ID_COMMAND_ACK;
this.command = command;
this.result = result;
this.progress = progress;
this.result_param2 = result_param2;
this.target_system = target_system;
this.target_component = target_component;
}
/**
* Constructor for a new message, initializes everything
*/
public msg_command_ack( int command, short result, short progress, int result_param2, short target_system, short target_component, int sysid, int compid, boolean isMavlink2) {
this.msgid = MAVLINK_MSG_ID_COMMAND_ACK;
this.sysid = sysid;
this.compid = compid;
this.isMavlink2 = isMavlink2;
this.command = command;
this.result = result;
this.progress = progress;
this.result_param2 = result_param2;
this.target_system = target_system;
this.target_component = target_component;
}
/**
* Constructor for a new message, initializes the message with the payload
* from a mavlink packet
*
*/
public msg_command_ack(MAVLinkPacket mavLinkPacket) {
this.msgid = MAVLINK_MSG_ID_COMMAND_ACK;
this.sysid = mavLinkPacket.sysid;
this.compid = mavLinkPacket.compid;
this.isMavlink2 = mavLinkPacket.isMavlink2;
unpack(mavLinkPacket.payload);
}
/**
* Returns a string with the MSG name and data
*/
@Override
public String toString() {
return "MAVLINK_MSG_ID_COMMAND_ACK - sysid:"+sysid+" compid:"+compid+" command:"+command+" result:"+result+" progress:"+progress+" result_param2:"+result_param2+" target_system:"+target_system+" target_component:"+target_component+"";
}
/**
* Returns a human-readable string of the name of the message
*/
@Override
public String name() {
return "MAVLINK_MSG_ID_COMMAND_ACK";
}
}
|
How Context and User Behavior Affect Indoor Navigation Assistance for Blind People Recent techniques for indoor localization are now able to support practical, accurate turn-by-turn navigation for people with visual impairments (PVI). Understanding user behavior as it relates to situational contexts can be used to improve the ability of the interface to adapt to problematic scenarios, and consequently reduce navigation errors. This work performs a fine-grained analysis of user behavior during indoor assisted navigation, outlining different scenarios where user behavior (either with a white-cane or a guide-dog) is likely to cause navigation errors. The scenarios include certain instructions (e.g., slight turns, approaching turns), cases of error recovery, and the surrounding environment (e.g., open spaces and landmarks). We discuss the findings and lessons learned from a real-world user study to guide future directions for the development of assistive navigation interfaces that consider the users' behavior and coping mechanisms. |
Tara Lipinski gained attention at the age of 13 and made international history just two years later at the 1998 Olympic Games.
At just 13 years old, Tara Lipinski gained international attention by qualifying for the U.S. figure skating team at the 1996 World Championships.
She finished 15th, but followed that up a year later by becoming, at 14, the youngest skater ever to win a world figure skating title.
She was just getting started.
Entering the 1998 Olympic Games, Lipinski was the silver medal favorite behind fellow American Michelle Kwan. Trailing Kwan after the short program, Lipinski's long program performance earned first-place votes from six of the nine judges, making her the youngest winner of an individual medal at the Winter Games at the age of 15 years and 255 days.
Lipinski is now a commentator for NBC and will be covering the Pyeongchang Winter Olympics in the coming weeks.
- Apolo Ohno becomes most decorated American Winter Olympian in 2010.
- Picabo Street recovers from crash to win gold in Nagano. |
NEW DELHI: Talks between Prime Minister Narendra Modi and his Bhutanese counterpart Tshering Tobgay are underway, during which defence, security and strategic cooperation between the two neighbouring countries are expected to be discussed.
Tobgay, who arrived here on a three-day visit yesterday, met Modi at the Hyderabad House.
"Exemplary relationship worth celebrating! PM @narendramodi welcomes Prime Minister of #Bhutan @tsheringtobgay to India during the Golden Jubilee Year of our relationship, which is based on shared perceptions, utmost trust, goodwill and understanding," Ministry of External Affairs Spokesperson Raveesh Kumar tweeted.
External Affairs Minister Sushma Swaraj had called on Tobgay yesterday and discussed ways to deepen the bilateral cooperation.
In their talks, Modi and Tobgay are also expected to deliberate on the situation at the Doklam tri-junction, the site of 73-day-long standoff between Indian and Chinese armies last year.
In February, Tobgay had visited Guwahati to participate in an investors' summit on the sidelines of which he and Modi had held talks.
Troops of India and China were locked in a 73-day-long standoff in Doklam from June 16 last year, after the Indian side stopped the Chinese army from constructing a road at the disputed Doklam tri-junction.
Bhutan and China have a dispute over Doklam. The face-off ended on August 28. China and Bhutan are engaged in talks over the resolution of the dispute in the area. |
package veating.dao;
import veating.bean.Business;
import veating.bean.User;
public interface BusinessDao {
public int save(User user);
public int delete(String phone);
public int update(User user, String phone);
public boolean login(String phone, String password);
public Business findByPhone(String phone);
}
|
Burden of de novo malignancy in the liver transplant recipient Recipients of liver transplantation (LT) have a higher overall risk (2-3 times on average) of developing de novo malignancies than the general population, with standardized incidence ratios ranging from 1.0 for breast and prostate cancers to 3-4 for colon cancer and up to 12 for esophageal and oropharyngeal cancers. Aside from immunosuppression, other identified risk factors for de novo malignancies include the patient's age, a history of alcoholic liver disease or primary sclerosing cholangitis, smoking, and viral infections with oncogenic potential. Despite outcome studies showing that de novo malignancies are major causes of mortality and morbidity after LT, there are no guidelines for cancer surveillance protocols or immunosuppression protocols to lower the incidence of de novo cancers. Patient education, particularly for smoking cessation and excess sun avoidance, and regular clinical follow-up remain the standard of care. Further research in epidemiology, risk factors, and the effectiveness of screening and management protocols is needed to develop evidence-based guidelines for the prevention and treatment of de novo malignancies. Liver Transpl, 2012. © 2012 AASLD. |
// Source repository: ant0ine/phantomjs
/*
* Copyright (C) 2008 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY APPLE COMPUTER, INC. ``AS IS'' AND ANY
* EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE COMPUTER, INC. OR
* CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
* EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
* PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
* PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
* OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#include "config.h"
#include "CSSGradientValue.h"
#include "CSSValueKeywords.h"
#include "CSSStyleSelector.h"
#include "GeneratedImage.h"
#include "Gradient.h"
#include "Image.h"
#include "IntSize.h"
#include "IntSizeHash.h"
#include "NodeRenderStyle.h"
#include "PlatformString.h"
#include "RenderObject.h"
using namespace std;
namespace WebCore {
PassRefPtr<Image> CSSGradientValue::image(RenderObject* renderer, const IntSize& size)
{
if (size.isEmpty())
return 0;
bool cacheable = isCacheable();
if (cacheable) {
if (!m_clients.contains(renderer))
return 0;
// Need to look up our size. Create a string of width*height to use as a hash key.
Image* result = getImage(renderer, size);
if (result)
return result;
}
// We need to create an image.
RefPtr<Image> newImage = GeneratedImage::create(createGradient(renderer, size), size);
if (cacheable)
putImage(size, newImage);
return newImage.release();
}
// Should only ever be called for deprecated gradients.
static inline bool compareStops(const CSSGradientColorStop& a, const CSSGradientColorStop& b)
{
double aVal = a.m_position->getDoubleValue(CSSPrimitiveValue::CSS_NUMBER);
double bVal = b.m_position->getDoubleValue(CSSPrimitiveValue::CSS_NUMBER);
return aVal < bVal;
}
void CSSGradientValue::sortStopsIfNeeded()
{
ASSERT(m_deprecatedType);
if (!m_stopsSorted) {
if (m_stops.size())
std::stable_sort(m_stops.begin(), m_stops.end(), compareStops);
m_stopsSorted = true;
}
}
static inline int blend(int from, int to, float progress)
{
return int(from + (to - from) * progress);
}
static inline Color blend(const Color& from, const Color& to, float progress)
{
// FIXME: when we interpolate gradients using premultiplied colors, this should also do premultiplication.
return Color(blend(from.red(), to.red(), progress),
blend(from.green(), to.green(), progress),
blend(from.blue(), to.blue(), progress),
blend(from.alpha(), to.alpha(), progress));
}
struct GradientStop {
Color color;
float offset;
bool specified;
GradientStop()
: offset(0)
, specified(false)
{ }
};
void CSSGradientValue::addStops(Gradient* gradient, RenderObject* renderer, RenderStyle* rootStyle, float maxLengthForRepeat)
{
RenderStyle* style = renderer->style();
if (m_deprecatedType) {
sortStopsIfNeeded();
// We have to resolve colors.
for (unsigned i = 0; i < m_stops.size(); i++) {
const CSSGradientColorStop& stop = m_stops[i];
Color color = renderer->document()->styleSelector()->getColorFromPrimitiveValue(stop.m_color.get());
float offset;
if (stop.m_position->primitiveType() == CSSPrimitiveValue::CSS_PERCENTAGE)
offset = stop.m_position->getFloatValue(CSSPrimitiveValue::CSS_PERCENTAGE) / 100;
else
offset = stop.m_position->getFloatValue(CSSPrimitiveValue::CSS_NUMBER);
gradient->addColorStop(offset, color);
}
// The back end already sorted the stops.
gradient->setStopsSorted(true);
return;
}
size_t numStops = m_stops.size();
Vector<GradientStop> stops(numStops);
float gradientLength = 0;
bool computedGradientLength = false;
FloatPoint gradientStart = gradient->p0();
FloatPoint gradientEnd;
if (isLinearGradient())
gradientEnd = gradient->p1();
else if (isRadialGradient())
gradientEnd = gradientStart + FloatSize(gradient->endRadius(), 0);
for (size_t i = 0; i < numStops; ++i) {
const CSSGradientColorStop& stop = m_stops[i];
stops[i].color = renderer->document()->styleSelector()->getColorFromPrimitiveValue(stop.m_color.get());
if (stop.m_position) {
int type = stop.m_position->primitiveType();
if (type == CSSPrimitiveValue::CSS_PERCENTAGE)
stops[i].offset = stop.m_position->getFloatValue(CSSPrimitiveValue::CSS_PERCENTAGE) / 100;
else if (CSSPrimitiveValue::isUnitTypeLength(type)) {
float length = stop.m_position->computeLengthFloat(style, rootStyle, style->effectiveZoom());
if (!computedGradientLength) {
FloatSize gradientSize(gradientStart - gradientEnd);
gradientLength = gradientSize.diagonalLength();
}
stops[i].offset = (gradientLength > 0) ? length / gradientLength : 0;
} else {
ASSERT_NOT_REACHED();
stops[i].offset = 0;
}
stops[i].specified = true;
} else {
// If the first color-stop does not have a position, its position defaults to 0%.
// If the last color-stop does not have a position, its position defaults to 100%.
if (!i) {
stops[i].offset = 0;
stops[i].specified = true;
} else if (numStops > 1 && i == numStops - 1) {
stops[i].offset = 1;
stops[i].specified = true;
}
}
// If a color-stop has a position that is less than the specified position of any
// color-stop before it in the list, its position is changed to be equal to the
// largest specified position of any color-stop before it.
if (stops[i].specified && i > 0) {
size_t prevSpecifiedIndex;
for (prevSpecifiedIndex = i - 1; prevSpecifiedIndex; --prevSpecifiedIndex) {
if (stops[prevSpecifiedIndex].specified)
break;
}
if (stops[i].offset < stops[prevSpecifiedIndex].offset)
stops[i].offset = stops[prevSpecifiedIndex].offset;
}
}
ASSERT(stops[0].specified && stops[numStops - 1].specified);
// If any color-stop still does not have a position, then, for each run of adjacent
// color-stops without positions, set their positions so that they are evenly spaced
// between the preceding and following color-stops with positions.
if (numStops > 2) {
size_t unspecifiedRunStart = 0;
bool inUnspecifiedRun = false;
for (size_t i = 0; i < numStops; ++i) {
if (!stops[i].specified && !inUnspecifiedRun) {
unspecifiedRunStart = i;
inUnspecifiedRun = true;
} else if (stops[i].specified && inUnspecifiedRun) {
size_t unspecifiedRunEnd = i;
if (unspecifiedRunStart < unspecifiedRunEnd) {
float lastSpecifiedOffset = stops[unspecifiedRunStart - 1].offset;
float nextSpecifiedOffset = stops[unspecifiedRunEnd].offset;
float delta = (nextSpecifiedOffset - lastSpecifiedOffset) / (unspecifiedRunEnd - unspecifiedRunStart + 1);
for (size_t j = unspecifiedRunStart; j < unspecifiedRunEnd; ++j)
stops[j].offset = lastSpecifiedOffset + (j - unspecifiedRunStart + 1) * delta;
}
inUnspecifiedRun = false;
}
}
}
// If the gradient is repeating, repeat the color stops.
// We can't just push this logic down into the platform-specific Gradient code,
// because we have to know the extent of the gradient, and possible move the end points.
if (m_repeating && numStops > 1) {
// If the difference in the positions of the first and last color-stops is 0,
// the gradient defines a solid-color image with the color of the last color-stop in the rule.
float gradientRange = stops[numStops - 1].offset - stops[0].offset;
if (!gradientRange) {
stops.first().offset = 0;
stops.first().color = stops.last().color;
stops.shrink(1);
numStops = 1;
} else {
float maxExtent = 1;
// Radial gradients may need to extend further than the endpoints, because they have
// to repeat out to the corners of the box.
if (isRadialGradient()) {
if (!computedGradientLength) {
FloatSize gradientSize(gradientStart - gradientEnd);
gradientLength = gradientSize.diagonalLength();
}
if (maxLengthForRepeat > gradientLength)
maxExtent = maxLengthForRepeat / gradientLength;
}
size_t originalNumStops = numStops;
size_t originalFirstStopIndex = 0;
// Work backwards from the first, adding stops until we get one before 0.
float firstOffset = stops[0].offset;
if (firstOffset > 0) {
float currOffset = firstOffset;
size_t srcStopOrdinal = originalNumStops - 1;
while (true) {
GradientStop newStop = stops[originalFirstStopIndex + srcStopOrdinal];
newStop.offset = currOffset;
stops.prepend(newStop);
++originalFirstStopIndex;
if (currOffset < 0)
break;
if (srcStopOrdinal)
currOffset -= stops[originalFirstStopIndex + srcStopOrdinal].offset - stops[originalFirstStopIndex + srcStopOrdinal - 1].offset;
srcStopOrdinal = (srcStopOrdinal + originalNumStops - 1) % originalNumStops;
}
}
// Work forwards from the end, adding stops until we get one after 1.
float lastOffset = stops[stops.size() - 1].offset;
if (lastOffset < maxExtent) {
float currOffset = lastOffset;
size_t srcStopOrdinal = originalFirstStopIndex;
while (true) {
GradientStop newStop = stops[srcStopOrdinal];
newStop.offset = currOffset;
stops.append(newStop);
if (currOffset > maxExtent)
break;
if (srcStopOrdinal < originalNumStops - 1)
currOffset += stops[srcStopOrdinal + 1].offset - stops[srcStopOrdinal].offset;
srcStopOrdinal = (srcStopOrdinal + 1) % originalNumStops;
}
}
}
}
numStops = stops.size();
// If the gradient goes outside the 0-1 range, normalize it by moving the endpoints, and adjusting the stops.
if (numStops > 1 && (stops[0].offset < 0 || stops[numStops - 1].offset > 1)) {
if (isLinearGradient()) {
float firstOffset = stops[0].offset;
float lastOffset = stops[numStops - 1].offset;
float scale = lastOffset - firstOffset;
for (size_t i = 0; i < numStops; ++i)
stops[i].offset = (stops[i].offset - firstOffset) / scale;
FloatPoint p0 = gradient->p0();
FloatPoint p1 = gradient->p1();
gradient->setP0(FloatPoint(p0.x() + firstOffset * (p1.x() - p0.x()), p0.y() + firstOffset * (p1.y() - p0.y())));
gradient->setP1(FloatPoint(p1.x() + (lastOffset - 1) * (p1.x() - p0.x()), p1.y() + (lastOffset - 1) * (p1.y() - p0.y())));
} else if (isRadialGradient()) {
// Rather than scaling the points < 0, we truncate them, so only scale according to the largest point.
float firstOffset = 0;
float lastOffset = stops[numStops - 1].offset;
float scale = lastOffset - firstOffset;
// Reset points below 0 to the first visible color.
size_t firstZeroOrGreaterIndex = numStops;
for (size_t i = 0; i < numStops; ++i) {
if (stops[i].offset >= 0) {
firstZeroOrGreaterIndex = i;
break;
}
}
if (firstZeroOrGreaterIndex > 0) {
if (firstZeroOrGreaterIndex < numStops && stops[firstZeroOrGreaterIndex].offset > 0) {
float prevOffset = stops[firstZeroOrGreaterIndex - 1].offset;
float nextOffset = stops[firstZeroOrGreaterIndex].offset;
float interStopProportion = -prevOffset / (nextOffset - prevOffset);
Color blendedColor = blend(stops[firstZeroOrGreaterIndex - 1].color, stops[firstZeroOrGreaterIndex].color, interStopProportion);
// Clamp the positions to 0 and set the color.
for (size_t i = 0; i < firstZeroOrGreaterIndex; ++i) {
stops[i].offset = 0;
stops[i].color = blendedColor;
}
} else {
// All stops are below 0; just clamp them.
for (size_t i = 0; i < firstZeroOrGreaterIndex; ++i)
stops[i].offset = 0;
}
}
for (size_t i = 0; i < numStops; ++i)
stops[i].offset /= scale;
gradient->setStartRadius(gradient->startRadius() * scale);
gradient->setEndRadius(gradient->endRadius() * scale);
}
}
for (unsigned i = 0; i < numStops; i++)
gradient->addColorStop(stops[i].offset, stops[i].color);
gradient->setStopsSorted(true);
}
static float positionFromValue(CSSPrimitiveValue* value, RenderStyle* style, RenderStyle* rootStyle, const IntSize& size, bool isHorizontal)
{
float zoomFactor = style->effectiveZoom();
switch (value->primitiveType()) {
case CSSPrimitiveValue::CSS_NUMBER:
return value->getFloatValue() * zoomFactor;
case CSSPrimitiveValue::CSS_PERCENTAGE:
return value->getFloatValue() / 100.f * (isHorizontal ? size.width() : size.height());
case CSSPrimitiveValue::CSS_IDENT:
switch (value->getIdent()) {
case CSSValueTop:
ASSERT(!isHorizontal);
return 0;
case CSSValueLeft:
ASSERT(isHorizontal);
return 0;
case CSSValueBottom:
ASSERT(!isHorizontal);
return size.height();
case CSSValueRight:
ASSERT(isHorizontal);
return size.width();
}
default:
return value->computeLengthFloat(style, rootStyle, zoomFactor);
}
}
FloatPoint CSSGradientValue::computeEndPoint(CSSPrimitiveValue* first, CSSPrimitiveValue* second, RenderStyle* style, RenderStyle* rootStyle, const IntSize& size)
{
FloatPoint result;
if (first)
result.setX(positionFromValue(first, style, rootStyle, size, true));
if (second)
result.setY(positionFromValue(second, style, rootStyle, size, false));
return result;
}
bool CSSGradientValue::isCacheable() const
{
for (size_t i = 0; i < m_stops.size(); ++i) {
const CSSGradientColorStop& stop = m_stops[i];
if (!stop.m_position)
continue;
unsigned short unitType = stop.m_position->primitiveType();
if (unitType == CSSPrimitiveValue::CSS_EMS || unitType == CSSPrimitiveValue::CSS_EXS || unitType == CSSPrimitiveValue::CSS_REMS)
return false;
}
return true;
}
String CSSLinearGradientValue::cssText() const
{
String result;
if (m_deprecatedType) {
result = "-webkit-gradient(linear, ";
result += m_firstX->cssText() + " ";
result += m_firstY->cssText() + ", ";
result += m_secondX->cssText() + " ";
result += m_secondY->cssText();
for (unsigned i = 0; i < m_stops.size(); i++) {
const CSSGradientColorStop& stop = m_stops[i];
result += ", ";
if (stop.m_position->getDoubleValue(CSSPrimitiveValue::CSS_NUMBER) == 0)
result += "from(" + stop.m_color->cssText() + ")";
else if (stop.m_position->getDoubleValue(CSSPrimitiveValue::CSS_NUMBER) == 1)
result += "to(" + stop.m_color->cssText() + ")";
else
result += "color-stop(" + String::number(stop.m_position->getDoubleValue(CSSPrimitiveValue::CSS_NUMBER)) + ", " + stop.m_color->cssText() + ")";
}
} else {
result = m_repeating ? "-webkit-repeating-linear-gradient(" : "-webkit-linear-gradient(";
if (m_angle)
result += m_angle->cssText();
else {
if (m_firstX && m_firstY)
result += m_firstX->cssText() + " " + m_firstY->cssText();
else if (m_firstX || m_firstY) {
if (m_firstX)
result += m_firstX->cssText();
if (m_firstY)
result += m_firstY->cssText();
}
}
for (unsigned i = 0; i < m_stops.size(); i++) {
const CSSGradientColorStop& stop = m_stops[i];
result += ", ";
result += stop.m_color->cssText();
if (stop.m_position)
result += " " + stop.m_position->cssText();
}
}
result += ")";
return result;
}
// Compute the endpoints so that a gradient of the given angle covers a box of the given size.
static void endPointsFromAngle(float angleDeg, const IntSize& size, FloatPoint& firstPoint, FloatPoint& secondPoint)
{
angleDeg = fmodf(angleDeg, 360);
if (angleDeg < 0)
angleDeg += 360;
if (!angleDeg) {
firstPoint.set(0, 0);
secondPoint.set(size.width(), 0);
return;
}
if (angleDeg == 90) {
firstPoint.set(0, size.height());
secondPoint.set(0, 0);
return;
}
if (angleDeg == 180) {
firstPoint.set(size.width(), 0);
secondPoint.set(0, 0);
return;
}
float slope = tan(deg2rad(angleDeg));
// We find the endpoint by computing the intersection of the line formed by the slope,
// and a line perpendicular to it that intersects the corner.
float perpendicularSlope = -1 / slope;
// Compute start corner relative to center.
float halfHeight = size.height() / 2;
float halfWidth = size.width() / 2;
FloatPoint endCorner;
if (angleDeg < 90)
endCorner.set(halfWidth, halfHeight);
else if (angleDeg < 180)
endCorner.set(-halfWidth, halfHeight);
else if (angleDeg < 270)
endCorner.set(-halfWidth, -halfHeight);
else
endCorner.set(halfWidth, -halfHeight);
// Compute c (of y = mx + c) using the corner point.
float c = endCorner.y() - perpendicularSlope * endCorner.x();
float endX = c / (slope - perpendicularSlope);
float endY = perpendicularSlope * endX + c;
// We computed the end point, so set the second point, flipping the Y to account for angles going anticlockwise.
secondPoint.set(halfWidth + endX, size.height() - (halfHeight + endY));
// Reflect around the center for the start point.
firstPoint.set(size.width() - secondPoint.x(), size.height() - secondPoint.y());
}
PassRefPtr<Gradient> CSSLinearGradientValue::createGradient(RenderObject* renderer, const IntSize& size)
{
ASSERT(!size.isEmpty());
RenderStyle* rootStyle = renderer->document()->documentElement()->renderStyle();
FloatPoint firstPoint;
FloatPoint secondPoint;
if (m_angle) {
float angle = m_angle->getFloatValue(CSSPrimitiveValue::CSS_DEG);
endPointsFromAngle(angle, size, firstPoint, secondPoint);
} else {
firstPoint = computeEndPoint(m_firstX.get(), m_firstY.get(), renderer->style(), rootStyle, size);
if (m_secondX || m_secondY)
secondPoint = computeEndPoint(m_secondX.get(), m_secondY.get(), renderer->style(), rootStyle, size);
else {
if (m_firstX)
secondPoint.setX(size.width() - firstPoint.x());
if (m_firstY)
secondPoint.setY(size.height() - firstPoint.y());
}
}
RefPtr<Gradient> gradient = Gradient::create(firstPoint, secondPoint);
// Now add the stops.
addStops(gradient.get(), renderer, rootStyle, 1);
return gradient.release();
}
String CSSRadialGradientValue::cssText() const
{
String result;
if (m_deprecatedType) {
result = "-webkit-gradient(radial, ";
result += m_firstX->cssText() + " ";
result += m_firstY->cssText() + ", ";
result += m_firstRadius->cssText() + ", ";
result += m_secondX->cssText() + " ";
result += m_secondY->cssText();
result += ", ";
result += m_secondRadius->cssText();
// FIXME: share?
for (unsigned i = 0; i < m_stops.size(); i++) {
const CSSGradientColorStop& stop = m_stops[i];
result += ", ";
if (stop.m_position->getDoubleValue(CSSPrimitiveValue::CSS_NUMBER) == 0)
result += "from(" + stop.m_color->cssText() + ")";
else if (stop.m_position->getDoubleValue(CSSPrimitiveValue::CSS_NUMBER) == 1)
result += "to(" + stop.m_color->cssText() + ")";
else
result += "color-stop(" + String::number(stop.m_position->getDoubleValue(CSSPrimitiveValue::CSS_NUMBER)) + ", " + stop.m_color->cssText() + ")";
}
} else {
result = m_repeating ? "-webkit-repeating-radial-gradient(" : "-webkit-radial-gradient(";
if (m_firstX && m_firstY) {
result += m_firstX->cssText() + " " + m_firstY->cssText();
} else if (m_firstX)
result += m_firstX->cssText();
else if (m_firstY)
result += m_firstY->cssText();
else
result += "center";
if (m_shape || m_sizingBehavior) {
result += ", ";
if (m_shape)
result += m_shape->cssText() + " ";
else
result += "ellipse ";
if (m_sizingBehavior)
result += m_sizingBehavior->cssText();
else
result += "cover";
} else if (m_endHorizontalSize && m_endVerticalSize) {
result += ", ";
result += m_endHorizontalSize->cssText() + " " + m_endVerticalSize->cssText();
}
for (unsigned i = 0; i < m_stops.size(); i++) {
const CSSGradientColorStop& stop = m_stops[i];
result += ", ";
result += stop.m_color->cssText();
if (stop.m_position)
result += " " + stop.m_position->cssText();
}
}
result += ")";
return result;
}
float CSSRadialGradientValue::resolveRadius(CSSPrimitiveValue* radius, RenderStyle* style, RenderStyle* rootStyle, float* widthOrHeight)
{
float zoomFactor = style->effectiveZoom();
float result = 0;
if (radius->primitiveType() == CSSPrimitiveValue::CSS_NUMBER) // Can the radius be a percentage?
result = radius->getFloatValue() * zoomFactor;
else if (widthOrHeight && radius->primitiveType() == CSSPrimitiveValue::CSS_PERCENTAGE)
result = *widthOrHeight * radius->getFloatValue() / 100;
else
result = radius->computeLengthFloat(style, rootStyle, zoomFactor);
return result;
}
static float distanceToClosestCorner(const FloatPoint& p, const FloatSize& size, FloatPoint& corner)
{
FloatPoint topLeft;
float topLeftDistance = FloatSize(p - topLeft).diagonalLength();
FloatPoint topRight(size.width(), 0);
float topRightDistance = FloatSize(p - topRight).diagonalLength();
FloatPoint bottomLeft(0, size.height());
float bottomLeftDistance = FloatSize(p - bottomLeft).diagonalLength();
FloatPoint bottomRight(size.width(), size.height());
float bottomRightDistance = FloatSize(p - bottomRight).diagonalLength();
corner = topLeft;
float minDistance = topLeftDistance;
if (topRightDistance < minDistance) {
minDistance = topRightDistance;
corner = topRight;
}
if (bottomLeftDistance < minDistance) {
minDistance = bottomLeftDistance;
corner = bottomLeft;
}
if (bottomRightDistance < minDistance) {
minDistance = bottomRightDistance;
corner = bottomRight;
}
return minDistance;
}
static float distanceToFarthestCorner(const FloatPoint& p, const FloatSize& size, FloatPoint& corner)
{
FloatPoint topLeft;
float topLeftDistance = FloatSize(p - topLeft).diagonalLength();
FloatPoint topRight(size.width(), 0);
float topRightDistance = FloatSize(p - topRight).diagonalLength();
FloatPoint bottomLeft(0, size.height());
float bottomLeftDistance = FloatSize(p - bottomLeft).diagonalLength();
FloatPoint bottomRight(size.width(), size.height());
float bottomRightDistance = FloatSize(p - bottomRight).diagonalLength();
corner = topLeft;
float maxDistance = topLeftDistance;
if (topRightDistance > maxDistance) {
maxDistance = topRightDistance;
corner = topRight;
}
if (bottomLeftDistance > maxDistance) {
maxDistance = bottomLeftDistance;
corner = bottomLeft;
}
if (bottomRightDistance > maxDistance) {
maxDistance = bottomRightDistance;
corner = bottomRight;
}
return maxDistance;
}
// Compute horizontal radius of ellipse with center at 0,0 which passes through p, and has
// width/height given by aspectRatio.
static inline float horizontalEllipseRadius(const FloatSize& p, float aspectRatio)
{
// x^2/a^2 + y^2/b^2 = 1
// a/b = aspectRatio, b = a/aspectRatio
// a = sqrt(x^2 + y^2/(1/r^2))
return sqrtf(p.width() * p.width() + (p.height() * p.height()) / (1 / (aspectRatio * aspectRatio)));
}
// FIXME: share code with the linear version
PassRefPtr<Gradient> CSSRadialGradientValue::createGradient(RenderObject* renderer, const IntSize& size)
{
ASSERT(!size.isEmpty());
RenderStyle* rootStyle = renderer->document()->documentElement()->renderStyle();
FloatPoint firstPoint = computeEndPoint(m_firstX.get(), m_firstY.get(), renderer->style(), rootStyle, size);
if (!m_firstX)
firstPoint.setX(size.width() / 2);
if (!m_firstY)
firstPoint.setY(size.height() / 2);
FloatPoint secondPoint = computeEndPoint(m_secondX.get(), m_secondY.get(), renderer->style(), rootStyle, size);
if (!m_secondX)
secondPoint.setX(size.width() / 2);
if (!m_secondY)
secondPoint.setY(size.height() / 2);
float firstRadius = 0;
if (m_firstRadius)
firstRadius = resolveRadius(m_firstRadius.get(), renderer->style(), rootStyle);
float secondRadius = 0;
float aspectRatio = 1; // width / height.
if (m_secondRadius)
secondRadius = resolveRadius(m_secondRadius.get(), renderer->style(), rootStyle);
else if (m_endHorizontalSize || m_endVerticalSize) {
float width = size.width();
float height = size.height();
secondRadius = resolveRadius(m_endHorizontalSize.get(), renderer->style(), rootStyle, &width);
aspectRatio = secondRadius / resolveRadius(m_endVerticalSize.get(), renderer->style(), rootStyle, &height);
} else {
enum GradientShape { Circle, Ellipse };
GradientShape shape = Ellipse;
if (m_shape && m_shape->primitiveType() == CSSPrimitiveValue::CSS_IDENT && m_shape->getIdent() == CSSValueCircle)
shape = Circle;
enum GradientFill { ClosestSide, ClosestCorner, FarthestSide, FarthestCorner };
GradientFill fill = FarthestCorner;
if (m_sizingBehavior && m_sizingBehavior->primitiveType() == CSSPrimitiveValue::CSS_IDENT) {
switch (m_sizingBehavior->getIdent()) {
case CSSValueContain:
case CSSValueClosestSide:
fill = ClosestSide;
break;
case CSSValueClosestCorner:
fill = ClosestCorner;
break;
case CSSValueFarthestSide:
fill = FarthestSide;
break;
case CSSValueCover:
case CSSValueFarthestCorner:
fill = FarthestCorner;
break;
}
}
// Now compute the end radii based on the second point, shape and fill.
// Horizontal
switch (fill) {
case ClosestSide: {
float xDist = min(secondPoint.x(), size.width() - secondPoint.x());
float yDist = min(secondPoint.y(), size.height() - secondPoint.y());
if (shape == Circle) {
float smaller = min(xDist, yDist);
xDist = smaller;
yDist = smaller;
}
secondRadius = xDist;
aspectRatio = xDist / yDist;
break;
}
case FarthestSide: {
float xDist = max(secondPoint.x(), size.width() - secondPoint.x());
float yDist = max(secondPoint.y(), size.height() - secondPoint.y());
if (shape == Circle) {
float larger = max(xDist, yDist);
xDist = larger;
yDist = larger;
}
secondRadius = xDist;
aspectRatio = xDist / yDist;
break;
}
case ClosestCorner: {
FloatPoint corner;
float distance = distanceToClosestCorner(secondPoint, size, corner);
if (shape == Circle)
secondRadius = distance;
else {
// If <shape> is ellipse, the gradient-shape has the same ratio of width to height
// that it would if closest-side or farthest-side were specified, as appropriate.
float xDist = min(secondPoint.x(), size.width() - secondPoint.x());
float yDist = min(secondPoint.y(), size.height() - secondPoint.y());
secondRadius = horizontalEllipseRadius(corner - secondPoint, xDist / yDist);
aspectRatio = xDist / yDist;
}
break;
}
case FarthestCorner: {
FloatPoint corner;
float distance = distanceToFarthestCorner(secondPoint, size, corner);
if (shape == Circle)
secondRadius = distance;
else {
// If <shape> is ellipse, the gradient-shape has the same ratio of width to height
// that it would if closest-side or farthest-side were specified, as appropriate.
float xDist = max(secondPoint.x(), size.width() - secondPoint.x());
float yDist = max(secondPoint.y(), size.height() - secondPoint.y());
secondRadius = horizontalEllipseRadius(corner - secondPoint, xDist / yDist);
aspectRatio = xDist / yDist;
}
break;
}
}
}
RefPtr<Gradient> gradient = Gradient::create(firstPoint, firstRadius, secondPoint, secondRadius, aspectRatio);
// addStops() only uses maxExtent for repeating gradients.
float maxExtent = 0;
if (m_repeating) {
FloatPoint corner;
maxExtent = distanceToFarthestCorner(secondPoint, size, corner);
}
// Now add the stops.
addStops(gradient.get(), renderer, rootStyle, maxExtent);
return gradient.release();
}
} // namespace WebCore
|
// java/debugger/impl/src/com/intellij/debugger/impl/descriptors/data/LocalData.java
// Copyright 2000-2018 JetBrains s.r.o. Use of this source code is governed by the Apache 2.0 license that can be found in the LICENSE file.
package com.intellij.debugger.impl.descriptors.data;
import com.intellij.debugger.jdi.LocalVariableProxyImpl;
import com.intellij.debugger.ui.impl.watch.LocalVariableDescriptorImpl;
import com.intellij.openapi.project.Project;
import org.jetbrains.annotations.NotNull;
public class LocalData extends DescriptorData<LocalVariableDescriptorImpl>{
private final LocalVariableProxyImpl myLocalVariable;
public LocalData(LocalVariableProxyImpl localVariable) {
super();
myLocalVariable = localVariable;
}
@Override
protected LocalVariableDescriptorImpl createDescriptorImpl(@NotNull Project project) {
return new LocalVariableDescriptorImpl(project, myLocalVariable);
}
public boolean equals(Object object) {
if(!(object instanceof LocalData)) return false;
return ((LocalData)object).myLocalVariable.equals(myLocalVariable);
}
public int hashCode() {
return myLocalVariable.hashCode();
}
@Override
public DisplayKey<LocalVariableDescriptorImpl> getDisplayKey() {
return new SimpleDisplayKey<>(myLocalVariable.typeName() + "#" + myLocalVariable.name());
}
}
|
import asyncio
import json
import re
import time
import unittest
from decimal import Decimal
from typing import Awaitable, List, Dict
from unittest.mock import patch
from aioresponses import aioresponses
from hummingbot.connector.exchange.gate_io.gate_io_exchange import GateIoExchange
from hummingbot.connector.exchange.gate_io import gate_io_constants as CONSTANTS
from hummingbot.core.network_iterator import NetworkStatus
from hummingbot.connector.exchange.gate_io.gate_io_in_flight_order import GateIoInFlightOrder
from hummingbot.core.event.event_logger import EventLogger
from hummingbot.core.event.events import MarketEvent, TradeType, OrderType
from test.hummingbot.connector.network_mocking_assistant import NetworkMockingAssistant
class TestGateIoExchange(unittest.TestCase):
# logging.Level required to receive logs from the exchange
level = 0
@classmethod
def setUpClass(cls) -> None:
super().setUpClass()
cls.ev_loop = asyncio.get_event_loop()
cls.base_asset = "COINALPHA"
cls.quote_asset = "HBOT"
cls.trading_pair = f"{cls.base_asset}-{cls.quote_asset}"
cls.api_key = "someKey"
cls.api_secret = "someSecret"
def setUp(self) -> None:
super().setUp()
self.log_records = []
self.mocking_assistant = NetworkMockingAssistant()
self.exchange = GateIoExchange(self.api_key, self.api_secret, trading_pairs=[self.trading_pair])
self.event_listener = EventLogger()
self.exchange.logger().setLevel(1)
self.exchange.logger().addHandler(self)
def handle(self, record):
self.log_records.append(record)
def _is_logged(self, log_level: str, message: str) -> bool:
return any(record.levelname == log_level and record.getMessage() == message
for record in self.log_records)
def async_run_with_timeout(self, coroutine: Awaitable, timeout: int = 1):
ret = self.ev_loop.run_until_complete(asyncio.wait_for(coroutine, timeout))
return ret
@staticmethod
def get_currency_data_mock() -> List:
currency_data = [
{
"currency": "GT",
"delisted": False,
"withdraw_disabled": False,
"withdraw_delayed": False,
"deposit_disabled": False,
"trade_disabled": False,
}
]
return currency_data
def get_trading_rules_mock(self) -> List:
trading_rules = [
{
"id": f"{self.base_asset}_{self.quote_asset}",
"base": self.base_asset,
"quote": self.quote_asset,
"fee": "0.2",
"min_base_amount": "0.001",
"min_quote_amount": "1.0",
"amount_precision": 3,
"precision": 6,
"trade_status": "tradable",
"sell_start": 1516378650,
"buy_start": 1516378650
}
]
return trading_rules
def get_order_create_response_mock(self, cancelled: bool = False, exchange_order_id: str = "someExchId") -> Dict:
order_create_resp_mock = {
"id": exchange_order_id,
"text": "t-123456",
"create_time": "1548000000",
"update_time": "1548000100",
"create_time_ms": 1548000000123,
"update_time_ms": 1548000100123,
"currency_pair": f"{self.base_asset}_{self.quote_asset}",
"status": "cancelled" if cancelled else "open",
"type": "limit",
"account": "spot",
"side": "buy",
"iceberg": "0",
"amount": "1",
"price": "5.00032",
"time_in_force": "gtc",
"left": "0.5",
"filled_total": "2.50016",
"fee": "0.005",
"fee_currency": "ETH",
"point_fee": "0",
"gt_fee": "0",
"gt_discount": False,
"rebated_fee": "0",
"rebated_fee_currency": "BTC"
}
return order_create_resp_mock
def get_in_flight_order(self, client_order_id: str, exchange_order_id: str = "someExchId") -> GateIoInFlightOrder:
order = GateIoInFlightOrder(
client_order_id,
exchange_order_id,
self.trading_pair,
OrderType.LIMIT,
TradeType.BUY,
price=Decimal("5.1"),
amount=Decimal("1"),
)
return order
def get_user_balances_mock(self) -> List:
user_balances = [
{
"currency": self.base_asset,
"available": "968.8",
"locked": "0",
},
{
"currency": self.quote_asset,
"available": "543.9",
"locked": "0",
}
]
return user_balances
def get_open_order_mock(self, exchange_order_id: str = "someExchId") -> List:
open_orders = [
{
"currency_pair": f"{self.base_asset}_{self.quote_asset}",
"total": 1,
"orders": [
{
"id": exchange_order_id,
"text": f"{CONSTANTS.HBOT_ORDER_ID}-{exchange_order_id}",
"create_time": "1548000000",
"update_time": "1548000100",
"currency_pair": f"{self.base_asset}_{self.quote_asset}",
"status": "open",
"type": "limit",
"account": "spot",
"side": "buy",
"amount": "1",
"price": "5.00032",
"time_in_force": "gtc",
"left": "0.5",
"filled_total": "2.50016",
"fee": "0.005",
"fee_currency": "ETH",
"point_fee": "0",
"gt_fee": "0",
"gt_discount": False,
"rebated_fee": "0",
"rebated_fee_currency": "BTC"
}
]
}
]
return open_orders
@patch("hummingbot.connector.exchange.gate_io.gate_io_exchange.retry_sleep_time")
@aioresponses()
def test_check_network_not_connected(self, retry_sleep_time_mock, mock_api):
retry_sleep_time_mock.side_effect = lambda *args, **kwargs: 0
url = f"{CONSTANTS.REST_URL}/{CONSTANTS.NETWORK_CHECK_PATH_URL}"
resp = ""
for i in range(CONSTANTS.API_MAX_RETRIES):
mock_api.get(url, status=500, body=json.dumps(resp))
ret = self.async_run_with_timeout(coroutine=self.exchange.check_network())
self.assertEqual(ret, NetworkStatus.NOT_CONNECTED)
@aioresponses()
def test_check_network(self, mock_api):
url = f"{CONSTANTS.REST_URL}/{CONSTANTS.NETWORK_CHECK_PATH_URL}"
resp = self.get_currency_data_mock()
mock_api.get(url, body=json.dumps(resp))
ret = self.async_run_with_timeout(coroutine=self.exchange.check_network())
self.assertEqual(ret, NetworkStatus.CONNECTED)
@aioresponses()
def test_update_trading_rules_polling_loop(self, mock_api):
url = f"{CONSTANTS.REST_URL}/{CONSTANTS.SYMBOL_PATH_URL}"
resp = self.get_trading_rules_mock()
called_event = asyncio.Event()
mock_api.get(url, body=json.dumps(resp), callback=lambda *args, **kwargs: called_event.set())
self.ev_loop.create_task(self.exchange._trading_rules_polling_loop())
self.async_run_with_timeout(called_event.wait())
self.assertTrue(self.trading_pair in self.exchange.trading_rules)
@aioresponses()
def test_create_order(self, mock_api):
trading_rules = self.get_trading_rules_mock()
self.exchange._trading_rules = self.exchange._format_trading_rules(trading_rules)
url = f"{CONSTANTS.REST_URL}/{CONSTANTS.ORDER_CREATE_PATH_URL}"
regex_url = re.compile(f"^{url}".replace(".", r"\.").replace("?", r"\?"))
resp = self.get_order_create_response_mock()
mock_api.post(regex_url, body=json.dumps(resp))
self.exchange.add_listener(MarketEvent.BuyOrderCreated, self.event_listener)
order_id = "someId"
self.async_run_with_timeout(
coroutine=self.exchange._create_order(
trade_type=TradeType.BUY,
order_id=order_id,
trading_pair=self.trading_pair,
amount=Decimal("1"),
order_type=OrderType.LIMIT,
price=Decimal("5.1"),
)
)
self.assertEqual(1, len(self.event_listener.event_log))
event = self.event_listener.event_log[0]
self.assertEqual(order_id, event.order_id)
self.assertTrue(order_id in self.exchange.in_flight_orders)
@aioresponses()
def test_create_order_when_order_is_instantly_closed(self, mock_api):
trading_rules = self.get_trading_rules_mock()
self.exchange._trading_rules = self.exchange._format_trading_rules(trading_rules)
url = f"{CONSTANTS.REST_URL}/{CONSTANTS.ORDER_CREATE_PATH_URL}"
regex_url = re.compile(f"^{url}".replace(".", r"\.").replace("?", r"\?"))
resp = self.get_order_create_response_mock()
resp["status"] = "closed"
mock_api.post(regex_url, body=json.dumps(resp))
event_logger = EventLogger()
self.exchange.add_listener(MarketEvent.BuyOrderCreated, event_logger)
order_id = "someId"
self.async_run_with_timeout(
coroutine=self.exchange._create_order(
trade_type=TradeType.BUY,
order_id=order_id,
trading_pair=self.trading_pair,
amount=Decimal("1"),
order_type=OrderType.LIMIT,
price=Decimal("5.1"),
)
)
self.assertEqual(1, len(event_logger.event_log))
self.assertEqual(order_id, event_logger.event_log[0].order_id)
self.assertTrue(order_id in self.exchange.in_flight_orders)
@aioresponses()
def test_order_with_less_amount_than_allowed_is_not_created(self, mock_api):
trading_rules = self.get_trading_rules_mock()
self.exchange._trading_rules = self.exchange._format_trading_rules(trading_rules)
url = f"{CONSTANTS.REST_URL}/{CONSTANTS.ORDER_CREATE_PATH_URL}"
regex_url = re.compile(f"^{url}".replace(".", r"\.").replace("?", r"\?"))
mock_api.post(regex_url, exception=Exception("The request should never happen"))
self.exchange.add_listener(MarketEvent.BuyOrderCreated, self.event_listener)
order_id = "someId"
self.async_run_with_timeout(
coroutine=self.exchange._create_order(
trade_type=TradeType.BUY,
order_id=order_id,
trading_pair=self.trading_pair,
amount=Decimal("0.0001"),
order_type=OrderType.LIMIT,
price=Decimal("5.1"),
)
)
self.assertEqual(0, len(self.event_listener.event_log))
self.assertNotIn(order_id, self.exchange.in_flight_orders)
self.assertTrue(self._is_logged(
"WARNING",
"Buy order amount 0.000 is lower than the minimum order size 0.001."
))
@patch("hummingbot.client.hummingbot_application.HummingbotApplication")
@aioresponses()
def test_create_order_fails(self, _, mock_api):
trading_rules = self.get_trading_rules_mock()
self.exchange._trading_rules = self.exchange._format_trading_rules(trading_rules)
url = f"{CONSTANTS.REST_URL}/{CONSTANTS.ORDER_CREATE_PATH_URL}"
regex_url = re.compile(f"^{url}".replace(".", r"\.").replace("?", r"\?"))
resp = self.get_order_create_response_mock(cancelled=True)
mock_api.post(regex_url, body=json.dumps(resp))
self.exchange.add_listener(MarketEvent.BuyOrderCreated, self.event_listener)
order_id = "someId"
self.async_run_with_timeout(
coroutine=self.exchange._create_order(
trade_type=TradeType.BUY,
order_id=order_id,
trading_pair=self.trading_pair,
amount=Decimal("1"),
order_type=OrderType.LIMIT,
price=Decimal("5.1"),
)
)
self.assertEqual(0, len(self.event_listener.event_log))
self.assertTrue(order_id not in self.exchange.in_flight_orders)
@aioresponses()
def test_execute_cancel(self, mock_api):
url = f"{CONSTANTS.REST_URL}/{CONSTANTS.ORDER_CREATE_PATH_URL}"
regex_url = re.compile(f"^{url}".replace(".", r"\.").replace("?", r"\?"))
resp = self.get_order_create_response_mock(cancelled=True)
mock_api.delete(regex_url, body=json.dumps(resp))
client_order_id = "someId"
exchange_order_id = "someExchId"
self.exchange._in_flight_orders[client_order_id] = self.get_in_flight_order(client_order_id, exchange_order_id)
self.exchange.add_listener(MarketEvent.OrderCancelled, self.event_listener)
self.async_run_with_timeout(
coroutine=self.exchange._execute_cancel(self.trading_pair, client_order_id)
)
self.assertEqual(1, len(self.event_listener.event_log))
event = self.event_listener.event_log[0]
self.assertEqual(client_order_id, event.order_id)
self.assertTrue(client_order_id not in self.exchange.in_flight_orders)
def test_cancel_order_not_present_in_inflight_orders(self):
client_order_id = "test-id"
event_logger = EventLogger()
self.exchange.add_listener(MarketEvent.OrderCancelled, event_logger)
result = self.async_run_with_timeout(
coroutine=self.exchange._execute_cancel(self.trading_pair, client_order_id)
)
self.assertEqual(0, len(event_logger.event_log))
self.assertTrue(self._is_logged(
"WARNING",
f"Failed to cancel order {client_order_id}. Order not found in inflight orders."))
self.assertFalse(result.success)
@patch("hummingbot.connector.exchange.gate_io.gate_io_exchange.GateIoExchange.current_timestamp")
@aioresponses()
def test_status_polling_loop(self, current_ts_mock, mock_api):
balances_url = f"{CONSTANTS.REST_URL}/{CONSTANTS.USER_BALANCES_PATH_URL}"
balances_resp = self.get_user_balances_mock()
balances_called_event = asyncio.Event()
mock_api.get(
balances_url, body=json.dumps(balances_resp), callback=lambda *args, **kwargs: balances_called_event.set()
)
client_order_id = "someId"
exchange_order_id = "someExchId"
self.exchange._in_flight_orders[client_order_id] = self.get_in_flight_order(client_order_id, exchange_order_id)
order_status_url = f"{CONSTANTS.REST_URL}/{CONSTANTS.ORDER_STATUS_PATH_URL}"
regex_order_status_url = re.compile(f"^{order_status_url[:-4]}".replace(".", r"\.").replace("?", r"\?"))
order_status_resp = self.get_order_create_response_mock(cancelled=True, exchange_order_id=exchange_order_id)
order_status_called_event = asyncio.Event()
mock_api.get(
regex_order_status_url,
body=json.dumps(order_status_resp),
callback=lambda *args, **kwargs: order_status_called_event.set(),
)
current_ts_mock.return_value = time.time()
self.ev_loop.create_task(self.exchange._status_polling_loop())
self.exchange._poll_notifier.set()
self.async_run_with_timeout(balances_called_event.wait())
self.async_run_with_timeout(order_status_called_event.wait())
self.assertEqual(self.exchange.available_balances[self.base_asset], Decimal("968.8"))
self.assertTrue(client_order_id not in self.exchange.in_flight_orders)
@aioresponses()
def test_get_open_orders(self, mock_api):
url = f"{CONSTANTS.REST_URL}/{CONSTANTS.USER_ORDERS_PATH_URL}"
regex_url = re.compile(f"^{url}".replace(".", r"\.").replace("?", r"\?"))
resp = self.get_open_order_mock()
mock_api.get(regex_url, body=json.dumps(resp))
ret = self.async_run_with_timeout(coroutine=self.exchange.get_open_orders())
self.assertTrue(len(ret) == 1)
def test_process_trade_message_matching_order_by_internal_order_id(self):
self.exchange.start_tracking_order(
order_id='OID-1',
exchange_order_id=None,
trading_pair=self.trading_pair,
trade_type=TradeType.BUY,
price=Decimal(10000),
amount=Decimal(1),
order_type=OrderType.LIMIT)
trade_message = {
"id": 5736713,
"user_id": 1000001,
"order_id": "EOID-1",
"currency_pair": "BTC_USDT",
"create_time": 1605176741,
"create_time_ms": "1605176741123.456",
"side": "buy",
"amount": "1.00000000",
"role": "maker",
"price": "10000.00000000",
"fee": "0.00200000000000",
"point_fee": "0",
"gt_fee": "0",
"text": "OID-1"
}
asyncio.get_event_loop().run_until_complete(self.exchange._process_trade_message(trade_message))
order = self.exchange.in_flight_orders["OID-1"]
self.assertIn(str(trade_message["id"]), order.trade_update_id_set)
self.assertEqual(Decimal(1), order.executed_amount_base)
self.assertEqual(Decimal(10000), order.executed_amount_quote)
self.assertEqual(Decimal("0.002"), order.fee_paid)
|
/*
Copyright 2020 Center for Digital Matter HGK FHNW, Basel.
Copyright 2020 info-age GmbH, Basel.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package search
import (
"fmt"
"html/template"
"net"
"net/http"
"strings"
)
func (s *Server) clusterAllHandler(w http.ResponseWriter, req *http.Request) {
var err error
if pusher, ok := w.(http.Pusher); ok {
// Push is supported.
furl := "/" + s.prefixes["static"] + "/font/inter/Inter-roman.var.woff2?v=3.15"
s.log.Infof("pushing font %s", furl)
if err := pusher.Push(furl, nil); err != nil {
s.log.Errorf("Failed to push %s: %v", furl, err)
}
furl = "/" + s.prefixes["static"] + "/font/inter/Inter-Bold.woff2?v=3.15"
s.log.Infof("pushing font %s", furl)
if err := pusher.Push(furl, nil); err != nil {
s.log.Errorf("Failed to push %s: %v", furl, err)
}
}
status := ClusterStatus{
BaseStatus: BaseStatus{
User: nil,
Token: "",
AmpBase: "",
Type: "Collections",
Notifications: []Notification{},
Self: fmt.Sprintf("%s/%s", s.addrExt, strings.TrimLeft(req.URL.Path, "/")),
BaseUrl: s.addrExt.String(),
SelfPath: req.URL.Path,
RelPath: s.relPath(req.URL.Path),
LoginUrl: s.loginUrl,
Title: "Wissenscluster",
Prefixes: map[string]string{
"detail": s.prefixes["detail"],
"search": s.prefixes["search"],
"collections": s.prefixes["collections"],
"cluster": s.prefixes["cluster"],
"google": s.prefixes["cse"],
},
InstanceName: s.instanceName,
server: s,
},
QueryApi: template.URL(fmt.Sprintf("%s/search", s.prefixes["api"])),
Result: []*SourceData{},
}
jwt, ok := req.URL.Query()["token"]
if ok {
// jwt in parameter?
if len(jwt) == 0 {
s.DoPanicf(nil, req, w, http.StatusForbidden, "invalid token %v", false, jwt)
return
}
tokenstring := jwt[0]
if tokenstring != "" {
status.Token = tokenstring
user, err := s.userFromToken(tokenstring, "cluster")
if err != nil {
status.Notifications = append(status.Notifications, Notification{
Id: "notificationInvalidAccessToken",
Message: fmt.Sprintf("%s - User logged out", err.Error()),
})
status.User = NewGuestUser(s)
status.User.LoggedOut = true
} else {
status.User = user
}
}
}
if status.User == nil {
status.User = NewGuestUser(s)
}
if status.User.LoggedIn {
_, err := NewJWT(
status.User.Server.jwtKey,
"search",
"HS256",
int64(status.User.Server.linkTokenExp.Seconds()),
"catalogue",
"mediathek",
status.User.Id)
if err != nil {
s.DoPanicf(nil, req, w, http.StatusInternalServerError, "create token: %v", false, err)
return
}
//status.QueryApi = template.URL(fmt.Sprintf("%s/%s?token=%s", s.addrExt, "api/search", jwt))
status.QueryApi = template.URL(fmt.Sprintf("%s/%s", s.addrExt, "api/search"))
}
ip, _, _ := net.SplitHostPort(req.RemoteAddr)
for _, grp := range s.locations.Contains(ip) {
status.User.Groups = append(status.User.Groups, grp)
}
//qstr := "*:*"
//s.log.Infof("Query: %s", qstr)
filters_fields := make(map[string][]string)
filters_fields["catalog"] = []string{s.clusterCatalog}
var facets map[string]TermFacet
cfg := &SearchConfig{
FiltersFields: filters_fields,
QStr: "",
Facets: facets,
Groups: status.User.Groups,
ContentVisible: status.SearchResultVisible,
Start: int(0),
Rows: int(1000),
IsAdmin: status.User.inGroup(s.adminGroup),
}
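// Run the search restricted to the configured cluster catalog.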
_, docs, total, _, err := s.mts.Search(cfg)
if err != nil {
s.DoPanicf(nil, req, w, http.StatusInternalServerError, "cannot execute solr query: %v", false, err)
return
}
// sort documents into result sets
for _, doc := range docs {
if srch, ok := (*doc.Meta)["Archive"]; ok && strings.TrimSpace(srch) != "" {
status.Result = append(status.Result, doc)
}
}
//status.SearchResult = template.JS(json)
status.SearchResultRows = len(docs)
status.SearchResultTotal = int(total)
status.SearchResultStart = int(0)
status.MetaDescription = "Search Cluster of Mediathek HGK FHNW"
w.Header().Set("Cache-Control", "max-age=14400, s-maxage=12200, stale-while-revalidate=9000, public")
if tpl, ok := s.templates["clusterall.amp.gohtml"]; ok {
if err := tpl.Execute(w, status); err != nil {
s.DoPanicf(nil, req, w, http.StatusInternalServerError, "cannot render template: %v", false, err)
return
}
}
return
}
|
#ifndef TFMP_TFM_TFMP_TFM_DATA_HEADER_H_
#define TFMP_TFM_TFMP_TFM_DATA_HEADER_H_
#ifndef TFMP_DISPLAY_TFMP_DISPLAY_BOARD_H_
#include "display/tfmp_display_board.h"
#endif
#include <fstream>
#include <string>
namespace tfmp {
namespace tfm{
namespace data{
class Header{
private:
public:
unsigned short lh_; /* length of the header data, in words */
signed int checksum_; /* header[0] */
signed int design_size_; /* header[1] */
std::string coding_scheme_; /* header[2...11], if present */
std::string font_family_; /* header[12...16], if present */
char sbsf_face_[4]; /* header[17], seven bit safe flag and
face (weight, slope, and expansion) */
std::string the_rest_; /* header[18...whatever] */
Header();
int Parse(unsigned short lh_, std::ifstream* tfm_ifs);
int Show(tfmp::DisplayBoard *display_board);
};
} // namespace data
} // namespace tfm
} // namespace tfmp
#endif |
Chapter 6 : Exercise 1
#include <iostream>
#include <string>
#include <sstream>
#include "gtest/gtest.h"
using namespace std;
std::string Pointer(bool checkAddress) {
std::ostringstream out;
int* pint = nullptr;
pint = new int;
if(checkAddress) out << "pint = " << pint << endl;
delete pint;
if (checkAddress) out << "pint = " << pint << endl;
pint = new int{ 33333 };
out << "*pint = " << *pint << endl;
delete pint;
return out.str();
}
TEST(Chapter6, Exercise1) {
EXPECT_EQ("*pint = 33333\n", Pointer(false));
EXPECT_EQ("*pint = 33333\n", Pointer(true));
}
int main(int argc, char *argv[])
{
::testing::InitGoogleTest(&argc, argv);
return RUN_ALL_TESTS();
}
Chapter 6 : Exercise 2
#include <iostream>
#include <string>
#include <sstream>
#include "gtest/gtest.h"
using namespace std;
std::ostringstream out;
class noisy
{
int i_;
public:
noisy(int i) : i_(i)
{
out << "constructing noisy " << i << endl;
}
~noisy()
{
out << "destroying noisy " << i_ << endl;
}
};
std::string ClassDynamicCreation () {
noisy N(1);
noisy* p = new noisy(2);
delete p;
return out.str();
}
TEST(Chapter6, Exercise2) {
EXPECT_EQ("constructing noisy 1\nconstructing noisy 2\ndestroying noisy 2\n", ClassDynamicCreation());
}
int main(int argc, char *argv[])
{
::testing::InitGoogleTest(&argc, argv);
return RUN_ALL_TESTS();
}
Chapter 6 : Exercise 3
#include <iostream>
#include <string>
#include <sstream>
#include <cstring>
#include "gtest/gtest.h"
using namespace std;
std::string ArrayDynamicCreation () {
std::ostringstream out;
char const* cp = "arbitrary null terminated text string";
char* buffer = new char[strlen(cp) + 1];
strcpy(buffer, cp);
out << "buffer = " << buffer << endl;
delete[] buffer;
return out.str();
}
TEST(Chapter6, Exercise3) {
EXPECT_EQ("buffer = arbitrary null terminated text string\n", ArrayDynamicCreation());
}
int main(int argc, char *argv[])
{
::testing::InitGoogleTest(&argc, argv);
return RUN_ALL_TESTS();
}
Chapter 6 : Exercise 4
#include <iostream>
#include <string>
#include <sstream>
#include "gtest/gtest.h"
using namespace std;
std::ostringstream out;
struct noisy
{
noisy() { out << "constructing noisy" << endl; }
~noisy() { out << "destroying noisy" << endl; }
};
std::string DynamicArrayOfClass () {
out << "getting a noisy array" << endl;
noisy* pnoisy = new noisy[3];
out << "deleting noisy array" << endl;
delete[] pnoisy;
return out.str();
}
TEST(Chapter6, Exercise4) {
EXPECT_EQ("getting a noisy array\nconstructing noisy\nconstructing noisy\nconstructing noisy\ndeleting noisy array\ndestroying noisy\ndestroying noisy\ndestroying noisy\n", DynamicArrayOfClass());
}
int main(int argc, char *argv[])
{
::testing::InitGoogleTest(&argc, argv);
return RUN_ALL_TESTS();
}
Chapter 6 : Exercise 6
#include <iostream>
#include <string>
#include <sstream>
#include "gtest/gtest.h"
using namespace std;
std::string TestCase () {
std::ostringstream out;
char* p = new char[10];
p[0] = '!';
delete[] p;
out << "p[0] = " << p[0] << endl;
return out.str();
}
TEST(Chapter6, Exercise6) {
EXPECT_EQ("p[0] = \xDD\n", TestCase());
}
int main(int argc, char *argv[])
{
::testing::InitGoogleTest(&argc, argv);
return RUN_ALL_TESTS();
}
Chapter 6 : Exercise 7
#include <iostream>
#include <string>
#include <sstream>
#include "gtest/gtest.h"
using namespace std;
std::string TestCase () {
std::ostringstream out;
char* p = new char[10];
p[0] = '!';
out << "p[0] = " << p[0] << endl;
return out.str();
}
TEST(Chapter6, Exercise7) {
EXPECT_EQ("p[0] = !", TestCase());
}
int main(int argc, char *argv[])
{
::testing::InitGoogleTest(&argc, argv);
return RUN_ALL_TESTS();
}
Chapter 6 : Exercise 8
#include <iostream>
#include <string>
#include <sstream>
#include "gtest/gtest.h"
using namespace std;
std::ostringstream out;
class noisy
{
int i_;
public:
noisy(int i) : i_(i)
{
out << "constructing noisy " << i << endl;
}
~noisy()
{
out << "destroying noisy " << i_ << endl;
}
};
std::string TestCase () {
noisy N(1);
noisy* p = new noisy(2);
p = new noisy(3);
delete p;
return out.str();
}
TEST(Chapter6, Exercise8) {
EXPECT_EQ("constructing noisy 1\nconstructing noisy 2\nconstructing noisy 3\ndestroying noisy 3\n", TestCase());
}
int main(int argc, char *argv[])
{
::testing::InitGoogleTest(&argc, argv);
return RUN_ALL_TESTS();
}
Chapter 6 : Exercise 11
#include <iostream>
#include <string>
#include <sstream>
#include "gtest/gtest.h"
using namespace std;
std::ostringstream out;
struct noisy
{
noisy() { out << "constructing noisy" << endl; }
~noisy() { out << "destroying noisy" << endl; }
};
std::string TestCase () {
noisy* p = new noisy;
delete[] p;
return out.str();
}
TEST(Chapter6, Exercise11) {
EXPECT_EQ("constructing noisy 1\nconstructing noisy 2\nconstructing noisy 3\ndestroying noisy 3\n", TestCase());
}
int main(int argc, char *argv[])
{
::testing::InitGoogleTest(&argc, argv);
return RUN_ALL_TESTS();
}
Chapter 6 : Exercise 12
#include <iostream>
#include <string>
#include <sstream>
#include "gtest/gtest.h"
using namespace std;
std::ostringstream out;
struct numeric_item
{
int value_;
numeric_item* next_;
};
numeric_item* head = nullptr;
void add(int v, numeric_item** pp)
{
numeric_item* newp = new numeric_item;
newp->value_ = v;
newp->next_ = *pp;
*pp = newp;
}
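// find() returns the address of the link that points to the node holding v, or the address of the final null link when v is absent, so add() can splice a new node at exactly that position.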
numeric_item** find(int v, numeric_item** pp)
{
while ((*pp) != nullptr && (*pp)->value_ != v)
{
pp = &((*pp)->next_);
}
return pp;
}
void print()
{
for (numeric_item* p = head; p != nullptr; p = p->next_)
{
out << p->value_ << " ";
}
out << endl;
}
std::string TestCase () {
for (int i = 1; i < 10; i = i + 2)
{
add(i, &head);
}
print();
numeric_item** pp;
pp = find(7, &head);
add(8, pp);
print();
add(0, find(-1, &head));
print();
while (head != nullptr)
{
numeric_item* p = head;
head = head->next_;
out << "deleting " << p->value_ << endl;
delete p;
}
return out.str();
}
TEST(Chapter6, Exercise12) {
EXPECT_EQ("9 7 5 3 1 \n9 8 7 5 3 1 \n9 8 7 5 3 1 0 \ndeleting 9\ndeleting 8\ndeleting 7\ndeleting 5\ndeleting 3\ndeleting 1\ndeleting 0\n", TestCase());
}
int main(int argc, char *argv[])
{
::testing::InitGoogleTest(&argc, argv);
return RUN_ALL_TESTS();
}
Chapter 6 : Activity 1
#include <iostream>
#include <string>
#include <sstream>
#include "gtest/gtest.h"
using namespace std;
std::ostringstream out;
struct numeric_tree
{
int value_;
numeric_tree* left_;
numeric_tree* right_;
};
numeric_tree* root = nullptr;
void add(int v, numeric_tree** pp)
{
*pp = new numeric_tree;
(*pp)->value_ = v;
(*pp)->left_ = (*pp)->right_ = nullptr;
}
void delete_tree(numeric_tree* item)
{
if (item == nullptr)
{
return;
}
else
{
delete_tree(item->left_);
delete_tree(item->right_);
out << "deleting " << item->value_ << endl;
delete item;
}
}
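// find() walks the tree and returns the address of the null child pointer where v belongs; values equal to an existing node descend into the right subtree.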
numeric_tree** find(int v, numeric_tree** pp)
{
if (*pp == nullptr)
{
return pp;
}
else if (v < (*pp)->value_)
{
return find(v, &((*pp)->left_));
}
else
{
return find(v, &((*pp)->right_));
}
}
void print(numeric_tree* item)
{
if (item == nullptr)
{
return;
}
else
{
print(item->left_);
out << item->value_ << " ";
print(item->right_);
}
}
std::string TestCase () {
int insert_order[]{ 4, 2, 1, 3, 6, 5 };
for (int i = 0; i < 6; ++i)
{
int v = insert_order[i];
add(v, find(v, &root));
}
print(root);
out << endl;
delete_tree(root);
return out.str();
}
TEST(Chapter6, Activity1) {
EXPECT_EQ("1 2 3 4 5 6 \ndeleting 1\ndeleting 3\ndeleting 2\ndeleting 5\ndeleting 6\ndeleting 4\n", TestCase());
}
int main(int argc, char *argv[])
{
::testing::InitGoogleTest(&argc, argv);
return RUN_ALL_TESTS();
}
|
GAINESVILLE, Ga., Oct. 25, 2012 /Christian Newswire/ -- What began as a dream in one man's garage has morphed into a global movement, sending over 100,000 volunteers around the world. Annually, Adventures in Missions sends over 6,000 adults, students, and families on mission trips to more than 70 locations on several continents, with their trips ranging from a few days to a few years.
"It's counter-cultural, but it's not new. Our methods are Jesus' own," said Seth Barnes, Adventures' founder. "They aren't based on a curriculum or a program. We believe in impacting the world one relationship at a time."
Adventures hasn't always been the global organization it is today. This nonprofit originated in Seth Barnes' Wellington, Fla. home 23 years ago. The tiny operation quickly outgrew the kitchen, so it moved to the garage. In both places, the Barnes children were the organization's hardest workers, doing odd jobs like licking stamps and folding newsletters. As the ministry grew, so did the challenges, including enduring the skepticism of others.
"Life on the home front was difficult. We were pregnant with our fifth child, had no insurance, and nobody to pay me a salary," Barnes said. "But it was better to abandon everything and trust in God than to trust in my own competence."
Today, nearly a quarter century later, Barnes is still trusting God to guide and provide for his family, which now includes dozens of full-time staff members who train, support, and work alongside thousands of volunteers annually.
What makes Adventures different from other nonprofit organizations is the way in which its staff invites participants to purposely choose things that are eternal over temporary. Instead of striving for the American Dream, they challenge people to reach for God's dream -- for themselves, their families and their community, Barnes said, and others agree.
"I see how Adventures impacts not just the world, but is changing generations," said Selena Day of the Georgia-based Present Day Truth Ministries, an Adventures' ministry partner.
Adventures' 100,000th participant, Shelley Manning, launched her trip in July.
"Sometimes I feel very inadequate, but God is completely trustworthy and this is in my heart to do." Manning said. Although she left her job as a pediatric nurse, Manning believes she is still taking care of people, only in different ways.
She's part of the World Race, Adventures' fastest growing program taking participants to eleven countries in eleven months. This is Manning's first mission trip, and she's excited about both the adventure and the way God will develop her character along the way.
As Adventures begins a new chapter, World Race participant Lydia Shaw is the first of the next 100,000 volunteers.
"God impressed on me that now is the time. So I go," Shaw said of her decision to be a short-term missionary. "A missionary is someone who listens and follows."
In addition to Adventures' variety of trips, it also encourages and facilitates partnerships between North American churches and those in host countries. Its newest program, the Center for Global Action, provides opportunities for Adventures' alumni to dive deeper into discovering their call from God.
For more about Adventures in Missions and to watch an exclusive video, visit Adventures' 100K website. |
Nashi, Youth Voluntarism, and Potemkin NGOs: Making Sense of Civil Society in Post-Soviet Russia By interrogating Putin-era civil society projects, this article tracks the aftermath of international development aid in post-Soviet Russian socialist space. State-run organizations such as the pro-Kremlin youth organization Nashi (Ours) are commonly read as evidence of an antidemocratic backlash and as confirmation of Russia's resurgent authoritarianism. Contributing to recent scholarship in the anthropology of postsocialism, Julie Hemment seeks here to account for Nashi by locating it in the context of twenty years of international democracy promotion, global processes of neoliberal governance, and the disenchantments they gave rise to. Drawing on a collaborative ethnographic research project involving scholars and students in the provincial city Tver', Hemment reveals Nashi's curiously hybrid nature: At the same time as it advances a trenchant critique of 1990s-era interventions and the models and paradigms that guided democracy assistance, it also draws on them. Nashi respins these resources to articulate a robust national-interest alternative that is persuasive to many young people. Moreover, rather than a static, top-down political technology project, Nashi offers its participants a range of registers and voices in which they can articulate their own individualized agendas. |
/**
* This code is called when a key event is generated because the user
* pressed a key
*/
public void keyPressed(KeyEvent k) {
if (!ai) {
switch (k.getKeyCode()) {
case (KeyEvent.VK_KP_DOWN):
case (KeyEvent.VK_DOWN):
game.localPlayer[0].getPacman().addDirection(
MovementDirection.SOUTH);
break;
case (KeyEvent.VK_KP_UP):
case (KeyEvent.VK_UP):
game.localPlayer[0].getPacman().addDirection(
MovementDirection.NORTH);
break;
case (KeyEvent.VK_KP_RIGHT):
case (KeyEvent.VK_RIGHT):
game.localPlayer[0].getPacman().addDirection(
MovementDirection.EAST);
break;
case (KeyEvent.VK_KP_LEFT):
case (KeyEvent.VK_LEFT):
game.localPlayer[0].getPacman().addDirection(
MovementDirection.WEST);
break;
case (KeyEvent.VK_SPACE):
if (game.localPlayer[0].getPacman().getItem() != null) {
if (game.localPlayer[0].getPacman().getItem().getName()
.equals("bomb")) {
game.putMine(game.localPlayer[0].getPacman()
.getCurrentPosition(), 0);
game.localPlayer[0].getPacman().removeItem();
}
}
break;
case (KeyEvent.VK_S):
game.localPlayer[1].getPacman().addDirection(MovementDirection.SOUTH);
break;
case (KeyEvent.VK_W):
game.localPlayer[1].getPacman().addDirection(MovementDirection.NORTH);
break;
case (KeyEvent.VK_D):
game.localPlayer[1].getPacman().addDirection(MovementDirection.EAST);
break;
case (KeyEvent.VK_A):
game.localPlayer[1].getPacman().addDirection(MovementDirection.WEST);
break;
case (KeyEvent.VK_SHIFT):
if (game.localPlayer[1].getPacman().getItem() != null) {
if (game.localPlayer[1].getPacman().getItem().getName()
.equals("bomb")) {
game.putMine(game.localPlayer[1].getPacman()
.getCurrentPosition(), 1);
game.localPlayer[1].getPacman().removeItem();
}
}
break;
}
}
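// Keys handled regardless of AI mode: pause, sound toggle, and quit/restart.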
switch (k.getKeyCode()) {
case (KeyEvent.VK_P):
game.gui.pause();
break;
case (KeyEvent.VK_M):
game.gui.setSound();
break;
case (KeyEvent.VK_ESCAPE):
game.localPlayer[0].setLives(-1);
game.localPlayer[1].setLives(-1);
game.restart();
break;
}
} |
As I traveled across the fifth district last week, I spoke with many small business leaders throughout Central and Southside Virginia about ways to promote a pro-growth agenda that will stimulate our economy and provide new job opportunities for hard-working fifth district Virginians. The overarching theme of so many of the conversations was that government red tape and overregulation are hurting our small businesses and stifling their ability to grow and create jobs.
I was particularly concerned by the sentiment expressed by so many small business owners and family farmers that the government is not only their primary obstacle to success, but it is actually taking actions that harm jobs and the economy. Overreaching regulations are causing a lack of access to capital, unstable energy prices, and uncertainty regarding health care costs.
I heard about how the excessive regulation of utilities is driving up costs for energy-intensive businesses like manufacturing and agriculture. The president’s healthcare law is also piling on additional costs that are hampering the ability of small businesses to create jobs in our stagnant economy. It is forcing these vital employers to cut employee hours and trim their workforces at a time when we cannot afford to be doing so. Business owners and farmers also told me that their lack of access to capital is holding back their success. A healthy economy should provide numerous avenues for capital formation for businesses and farms, but excessively burdensome regulations like those put forth by the Dodd-Frank Act are preventing these businesses from obtaining the capital they need to invest in their enterprises and create jobs.
With the unemployment rate in many Central and Southside Virginia localities still exceeding the national rate, government-created obstacles to economic growth are unacceptable. Freeing our main street businesses and family farms from the burdens of unnecessary government regulations continues to be a top priority, and I look forward to continuing to work with my colleagues in this effort so that we may allow these small businesses and family farms to create much-needed jobs and stimulate economic growth. |
FS Class E.430
The FS Class E.430 locomotives, initially classed as RA 34, were three-phase alternating current electric locomotives of the Italian railways. They were built for Ferrovia della Valtellina by Ganz and MÁVAG in 1901 and had a power output of 440 kW (about 600 metric horsepower) and a haulage capacity of 300 tons. One locomotive is preserved.
History
Class E.430 is the first example, worldwide, of an electric locomotive powered by three-phase current. It was built for Rete Adriatica (the Adriatic Network), which at that time operated the Ferrovia della Valtellina, by Ganz Works, for the electrical part, and by the Royal Hungarian State Machine Factory (MÁVAG), for the mechanical part. These were, at the time, the most advanced factories in the world in the electric railway sector. The locomotives were numbered 34.1 and 34.2 under the management of the Adriatic Network. Acquired in 1905 by the Ferrovie dello Stato, the numbering changed to 0341-0342 and in 1914 they were re-numbered E.430.1 and E.430.2.
Since the Valtellina lines were the first in Italy to use three-phase electric power for the haulage of trains, the E.430 was used from the beginning. The Adriatic Network had commissioned the entire electrification project from the Ganz company in Budapest. The equipment was built under the supervision of Kálmán Kandó, one of the pioneers of three-phase traction in Italy. The electrification work began in 1897, with the establishment of a government commission to experiment with different electrification systems: one with accumulators (Bologna - San Felice and Milan - Monza lines), one with direct current at 650 V from a third rail (Milan - Varese), and finally the three-phase system on the Valtellina line.
The tests of the electric power lines at 3,000 - 3,300 volts, at a frequency of 15 - 16.7 Hz, powered by the Campovico hydroelectric plant, were carried out between 26 July 1902 and 4 September 1902, while tests on the Lecco - Colico - Chiavenna and Colico - Sondrio lines officially began on 15 October 1902. The passenger service was entrusted to a fleet of 10 electric railcars belonging to Class RA 32, while the freight service was entrusted to the two locomotives of Class RA 34 numbered 34.1 and 34.2 (later E.430 FS).
The electric locomotives, built by Ganz and MÁVAG in 1901, with a power of 440 kW and a haulage capacity of 300 tons, easily outperformed the steam locomotives of the time. The electric railcars, on the other hand, proved insufficient for hauling passenger trains and were subsequently de-motored and transformed into passenger coaches of Class RBz.
From 1928 the locomotives were removed from the Valtellina line and transferred to the stations at Bolzano and Fortezza where, from 1929, they were used as shunting locomotives to assemble trains of wagons in transit towards the Brenner Pass.
Technical details
The design of the locomotive was unusual. It comprised two half-locomotives coupled back-to-back with a bellows joint in the middle of the cab. This gave a Bo+Bo wheel arrangement, rather than the Bo-Bo bogie arrangement that became more common in later years. Front and rear visibility was ensured by three glass panels, with four more on each side. Windshield wipers and washers were not provided. Each half-locomotive had two axles with leaf springs.
The four 150-horsepower traction motors were mounted coaxially on the axles, with a bellcrank linkage to the wheels, similar to that also used for the Valtellina electric railcars. The wheels and motors were covered by sloping bonnets, each equipped with four doors to allow maintenance. Current collection was by two bow collectors, controlled by groups of four cylindrical springs each.
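The installed power is consistent with the headline figure: four traction motors of 150 metric horsepower each give 4 × 150 = 600 PS, and at roughly 0.7355 kW per metric horsepower this corresponds to about 441 kW, in line with the quoted 440 kW rating.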
Preservation
E.430.001 is in the Museo Nazionale Scienza e Tecnologia Leonardo da Vinci in Milan. This is the unit once in service at Fortezza station. It is displayed in its later condition with modified current collectors and three large headlights. |
The cancer preventive effects of edible mushrooms. An increasing body of scientific literature suggests that dietary components may exert cancer preventive effects. Tea, soy, cruciferous vegetables and other foods have been investigated for their cancer preventive potential. Some non-edible mushrooms like Reishi (Ganoderma lucidum) have a history of use in some cultures, both alone and in conjunction with standard therapies, for the treatment of various diseases, including cancer. They have shown efficacy in a number of scientific studies. By comparison, the potential cancer preventive effects of edible mushrooms have been less well-studied. With similar content of putative effective anticancer compounds such as polysaccharides, proteoglycans, steroids, etc., one might predict that edible mushrooms would also demonstrate anticancer and cancer preventive activity. In this review, available data for five commonly consumed edible mushrooms: button mushrooms (Agaricus bisporus), A. blazei, oyster mushrooms (Pleurotus ostreatus), shiitake mushrooms (Lentinus edodes), and maitake (Grifola frondosa) mushrooms are discussed. The results of animal model and human intervention studies, as well as supporting in vitro mechanistic studies are critically evaluated. Weaknesses in the current data and topics for future work are highlighted.
// File: spring-boot-starter-kafka/src/main/java/com/elderbyte/kafka/streams/managed/KafkaStreamsContextImpl.java
package com.elderbyte.kafka.streams.managed;
import com.elderbyte.commons.exceptions.ArgumentNullException;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.streams.KafkaClientSupplier;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.processor.StateRestoreListener;
import org.apache.kafka.streams.processor.internals.DefaultKafkaClientSupplier;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.kafka.KafkaException;
import org.springframework.kafka.config.KafkaStreamsConfiguration;
import org.springframework.kafka.config.KafkaStreamsCustomizer;
import org.springframework.kafka.core.CleanupConfig;
import java.util.Optional;
import java.util.concurrent.TimeUnit;
/**
* Represents a managed KafkaStreams topology and controls its lifecycle (start/stop, cleanup, listeners).
*/
public class KafkaStreamsContextImpl implements KafkaStreamsContext {
/***************************************************************************
* *
* Fields *
* *
**************************************************************************/
private static final Logger logger = LoggerFactory.getLogger(KafkaStreamsContextImpl.class);
private final KafkaStreamsConfiguration streamsConfig;
private final StreamsCleanupConfig cleanupConfig;
private final Topology topology;
private KafkaStreams kafkaStreams = null;
private KafkaClientSupplier clientSupplier = new DefaultKafkaClientSupplier();
private KafkaStreamsCustomizer kafkaStreamsCustomizer;
private KafkaStreams.StateListener stateListener;
private StateRestoreListener stateRestoreListener;
private Thread.UncaughtExceptionHandler uncaughtExceptionHandler;
private boolean autoStartup = true;
private int phase = Integer.MAX_VALUE - 1000; // NOSONAR magic #
private int closeTimeout = 10;
private volatile boolean running;
/***************************************************************************
* *
* Constructor *
* *
**************************************************************************/
/**
* Creates a new KafkaStreamsBuilderImpl
*/
public KafkaStreamsContextImpl(
Topology topology,
KafkaStreamsConfiguration streamsConfig,
StreamsCleanupConfig cleanupConfig,
KafkaStreamsCustomizer kafkaStreamsCustomizer
) {
if(topology == null) throw new ArgumentNullException("topology");
if(streamsConfig == null) throw new ArgumentNullException("streamsConfig");
if(cleanupConfig == null) throw new ArgumentNullException("cleanupConfig");
this.topology = topology;
this.streamsConfig = streamsConfig;
this.cleanupConfig = cleanupConfig;
this.kafkaStreamsCustomizer = kafkaStreamsCustomizer;
uncaughtExceptionHandler = (t, ex) -> {
logger.error("Kafka Streams thread "+t.getName()+" died due unhandled exception!", ex);
};
stateRestoreListener = new StateRestoreListener() {
@Override
public void onRestoreStart(TopicPartition topicPartition, String storeName, long startingOffset, long endingOffset) {
logger.info("Start Restoring Store: " + storeName + " at topic " + topicPartition + " for " + startingOffset + " - " + endingOffset + "...");
}
@Override
public void onBatchRestored(TopicPartition topicPartition, String storeName, long batchEndOffset, long numRestored) {
logger.info("Restoring Store: " + storeName + " at topic " + topicPartition + " restored: " + numRestored);
}
@Override
public void onRestoreEnd(TopicPartition topicPartition, String storeName, long totalRestored) {
logger.info("Completed Restored Store : " + storeName + " at topic " + topicPartition + " total restored: " + totalRestored);
}
};
stateListener = (state, old) -> {
logger.info("State: " + state);
if(state == KafkaStreams.State.ERROR){
if(cleanupConfig.cleanupOnError()){
logger.warn("Since Kafka Streams died in an Error, we clean up local storage to improve recovery chances!");
kafkaStreams.cleanUp();
}
}
};
}
/***************************************************************************
* *
* Properties *
* *
**************************************************************************/
public Topology getTopology(){
return topology;
}
public Optional<KafkaStreams> getKafkaStreams(){
return Optional.ofNullable(kafkaStreams);
}
/***************************************************************************
* *
* Smart Lifecycle *
* *
**************************************************************************/
public void setAutoStartup(boolean autoStartup) {
this.autoStartup = autoStartup;
}
public void setPhase(int phase) {
this.phase = phase;
}
@Override
public boolean isAutoStartup() {
return this.autoStartup;
}
@Override
public void stop(Runnable callback) {
stop();
if (callback != null) {
callback.run();
}
}
@Override
public synchronized void start() {
if (!this.running) {
try {
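// Build the KafkaStreams instance from the configured topology and register the lifecycle listeners.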
var properties = streamsConfig.asProperties();
this.kafkaStreams = new KafkaStreams(topology, properties, this.clientSupplier);
this.kafkaStreams.setStateListener(this.stateListener);
this.kafkaStreams.setGlobalStateRestoreListener(this.stateRestoreListener);
this.kafkaStreams.setUncaughtExceptionHandler(this.uncaughtExceptionHandler);
if (this.kafkaStreamsCustomizer != null) {
this.kafkaStreamsCustomizer.customize(this.kafkaStreams);
}
if (this.cleanupConfig.cleanupOnStart()) {
this.kafkaStreams.cleanUp();
}
this.kafkaStreams.start();
this.running = true;
}
catch (Exception e) {
throw new KafkaException("Could not start stream: ", e);
}
}
}
@Override
public synchronized void stop() {
if (this.running) {
try {
if (this.kafkaStreams != null) {
this.kafkaStreams.close(this.closeTimeout, TimeUnit.SECONDS);
if (this.cleanupConfig.cleanupOnStop()) {
this.kafkaStreams.cleanUp();
}
this.kafkaStreams = null;
}
}
catch (Exception e) {
logger.error("Failed to stop streams", e);
}
finally {
this.running = false;
}
}
}
@Override
public synchronized boolean isRunning() {
return this.running;
}
/***************************************************************************
* *
* Private methods *
* *
**************************************************************************/
}
|
Profiling Fast Healthcare Interoperability Resources (FHIR) of Family Health History based on the Clinical Element Models In this study we developed a Fast Healthcare Interoperability Resources (FHIR) profile to support the exchange of full-pedigree-based family health history (FHH) information across multiple systems and applications used by clinicians, patients, and researchers. We used previously developed clinical element models (CEMs) that are capable of representing the FHH information, and derived essential data elements including attributes, constraints, and value sets. We analyzed gaps between the FHH CEM elements and existing FHIR resources. Based on the analysis, we developed a profile that consists of 1) FHIR resources for essential FHH data elements, 2) extensions for additional elements that were not covered by the resources, and 3) a structured definition to integrate patient and family member information in a FHIR message. We implemented the profile using an open-source FHIR framework and validated it using patient-entered FHH data that was captured through a locally developed FHH tool.
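For illustration only, here is a minimal sketch of the kind of FamilyMemberHistory instance such a profile constrains, written as a Python dictionary; the codes, references, and the pedigree-link extension URL are hypothetical placeholders, not the actual profile developed in the study.
# Hypothetical FamilyMemberHistory instance showing the kinds of elements
# (relationship, condition, and a pedigree-linking extension) an FHH profile constrains.
family_member_history = {
    "resourceType": "FamilyMemberHistory",
    "status": "completed",
    "patient": {"reference": "Patient/example"},
    "relationship": {
        "coding": [{
            "system": "http://terminology.hl7.org/CodeSystem/v3-RoleCode",
            "code": "MTH",
            "display": "mother",
        }]
    },
    "condition": [{
        "code": {
            "coding": [{
                "system": "http://snomed.info/sct",
                "code": "254837009",
                "display": "Malignant neoplasm of breast",
            }]
        },
        "onsetAge": {"value": 52, "unit": "a"},
    }],
    # Hypothetical extension linking this record into a full pedigree.
    "extension": [{
        "url": "http://example.org/fhir/StructureDefinition/pedigree-link",
        "valueReference": {"reference": "FamilyMemberHistory/maternal-grandmother"},
    }],
}
|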
#pragma once
#include "fw/filesystem.hpp"
#include <string>
std::string as_dotted(std::string);
fs::path removeDot(fs::path const &p);
|
/**
* Handling multi lines by conventional separator
*
* @param originText
* @return
*/
public static List<String> handleSplitMultiLines(@NonNull String originText) {
String legalText = originText.replaceAll(ORIGIN_COMPATIBILITY_SEPARATOR.pattern(), ConstantUtil.ENGLISH_COMMA)
.replaceAll(WHITE_CHAR_PATTERN.pattern(), ConstantUtil.EMPTY)
.replaceAll(START_WITH_ENGLISH_COMMA_PATTERN.pattern(), ConstantUtil.EMPTY);
String splitSymbol = ConstantUtil.ENGLISH_COMMA;
return CollectionUtil.splitDelimitedStringToList(legalText, splitSymbol, String::toString);
} |
Development of PLM-102, a novel dual inhibitor of FLT3 and RET, as a new therapeutic option for acute myeloid leukemia. e15103 Background: Mutations of the FMS-like tyrosine kinase 3 (FLT3) gene occur in approximately 30% of all acute myeloid leukemia (AML) cases, with the internal tandem duplication (ITD) representing the most common type of FLT3 mutation (FLT3-ITD; approximately 25% of all AML cases). Although several FLT3 inhibitors have been developed, the occurrence of secondary TKD mutations of FLT3 such as FLT3/D835Y and FLT3/F691L causes acquired resistance to the current FLT3 inhibitors and has become a key area of unmet medical need. Here, we reveal that PLM-102, a novel, orally active FLT3 and RET dual inhibitor, has the potential to overcome the acquired resistance to current FLT3 inhibitors. Methods: 1. Kinase assay - Biochemical assays for FLT3 (WT and D835Y) and RET (WT and mutants) were performed according to the ADP-Glo kinase assay protocol (Promega). 2. Cell proliferation and apoptosis - The human leukemia cell lines MV4-11 and MOLM-14 were purchased from ATCC and DSMZ. Cells were seeded at a density of 2 × 10³ cells per well and treated with the indicated concentrations of inhibitors for 72 hours at 37°C. Cell viability was determined by an Alamar Blue assay (Bio-Rad). Caspase-3/7 activity was measured using the Caspase-Glo 3/7 assay (Promega). 3. Western blot analysis - Immunoblotting of MOLM-14 cells was performed using anti-phospho-FLT3 (Cell Signaling Technology #3461) and anti-FLT3 (Cell Signaling Technology #3462) antibodies. 4. In vivo mouse models - MV4-11 and MOLM-14 cells were implanted into the subcutaneous space of the left flank of the mice. The resulting tumors were monitored by calipering twice weekly. Treatment started after randomization, when tumor volumes had reached approximately 100-150 mm³. For statistical analysis, analysis of variance (ANOVA) was performed using Prism 9.0 to examine statistical differences. Results: Compared to the FDA-approved gilteritinib, PLM-102 showed stronger, sub-nanomolar IC50 values against FLT3 kinases regardless of wild-type, ITD, TKD and ITD/TKD mutant status in both kinase- and cell-based assays. PLM-102 inhibited phosphorylation of FLT3 and its downstream signaling pathways, and induced apoptosis as evidenced by PARP cleavage and caspase-3 activation. Moreover, PLM-102 showed excellent anti-tumor activity in mouse xenograft models implanted with MV4-11 and MOLM-14 AML cells. Conclusions: Taken together, PLM-102, which shows potent anti-cancer activity in various in vitro and in vivo AML models, could be developed as a valuable agent for overcoming acquired resistance to current FLT3 inhibitors.
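As a rough illustration of how dose-response IC50 values like those reported above are commonly estimated, the following minimal sketch fits a four-parameter logistic model with SciPy's curve_fit; the concentrations and viability values are invented placeholders, not data from this study.
import numpy as np
from scipy.optimize import curve_fit

def four_param_logistic(conc, bottom, top, ic50, hill):
    # Four-parameter logistic dose-response model: viability falls from `top` to `bottom` as concentration rises.
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Hypothetical viability data (fraction of untreated control) over a nanomolar dose range.
conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])
viability = np.array([0.98, 0.95, 0.80, 0.55, 0.30, 0.12, 0.05])

params, _ = curve_fit(four_param_logistic, conc, viability, p0=[0.0, 1.0, 0.3, 1.0])
print(f"Estimated IC50: {params[2]:.3f} nM")
|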
package socialnetwork.repository.file;
import socialnetwork.domain.Entity;
import socialnetwork.domain.User;
import socialnetwork.domain.validators.Validator;
import socialnetwork.repository.Repository;
import socialnetwork.repository.memory.InMemoryRepository;
import java.io.*;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.List;
public abstract class AbstractFileRepository<ID, E extends Entity<ID>> extends InMemoryRepository<ID,E> {
String fileName;
protected Repository<Long, User> userRepository;
/**
* Constructor that creates a new AbstractFileRepository
* @param fileName String, representing the name of the file where the data is loaded from / stored to
* @param validator Validator<E>, representing the validator of the AbstractFileRepository
*/
public AbstractFileRepository(String fileName, Validator<E> validator) {
super(validator);
this.fileName = fileName;
loadData();
}
public AbstractFileRepository(String fileName, Validator<E> validator, Repository<Long, User> repository) {
super(validator);
this.fileName = fileName;
this.userRepository = repository;
loadData();
}
/**
* Method that loads the data from the file
*/
private void loadData() {
Path path = Paths.get(fileName);
try {
List<String> lines = Files.readAllLines(path);
lines.forEach(line -> {
List<String> attributes = Arrays.asList(line.split(";")); // Split the attributes by ";"
E e = extractEntity(attributes); // Create the Entity based on the attributes
super.save(e); // Add the loaded Entity in the Repository
});
} catch (IOException e) {
e.printStackTrace();
}
}
/**
* Method that reloads the data from the file
*/
private void reload() {
Iterable<E> currentEntities = super.findAll();
try {
PrintWriter writer = new PrintWriter(fileName);
writer.print("");
writer.close();
} catch (FileNotFoundException e) {
System.out.println("File to reload doesn't exist!");
}
currentEntities.forEach(this::writeToFile);
}
/**
* (Template method design pattern)
* Method that extracts an entity of type E having a specified list of attributes
* @param attributes List<String>, representing the attributes of the Entity to be extracted
* @return E, based on the given attributes
*/
public abstract E extractEntity(List<String> attributes);
/**
* Method that gets the serialization of an entity
* @param entity Entity, representing the entity whose serialization is being determined
* @return String, representing the serialization of the entity
*/
protected abstract String createEntityAsString(E entity);
/**
* Method that adds a new entity to the AbstractFileRepository
* @param entity E, representing the entity to be added
* entity must be not null
* @return null, if the given entity is saved
* non-null entity, otherwise (it already exists)
*/
@Override
public E save(E entity){
E e = super.save(entity);
if (e==null) {
writeToFile(entity);
}
return e;
}
/**
* Method that deletes an entity from the AbstractFileRepository
* @param id ID, representing the ID of the Entity to be deleted,
* id must be not null
* @return E, representing the removed entity or null if the entity doesn't exist
*/
@Override
public E delete(ID id) {
E e = super.delete(id);
if (e != null) {
this.reload();
}
return e;
}
/**
* Method that updates an entity in the AbstractFileRepository
* @param entity E, representing the new entity,
* entity must not be null
* @return null, if the entity was updated
* non-null entity, otherwise (doesn't exist)
*/
@Override
public E update(E entity) {
E e = super.update(entity);
// if (e == null) {
// writeToFile(entity);
// }
return e;
}
/**
* Method that writes the entity (data) to the file
* @param entity E, representing the entity to be written to the file
*/
protected void writeToFile(E entity){
try (BufferedWriter bW = new BufferedWriter(new FileWriter(fileName,true))) {
bW.write(createEntityAsString(entity));
bW.newLine();
} catch (IOException e) {
e.printStackTrace();
}
}
}
|
# Source: trainsn/CSE_5543, lab6/half_edge_mesh_DCMT.py
## \file half_edge_mesh_DCMT.py
# Extension of half edge mesh supporting decimation (DCMT) operations.
# Copyright (C) 2021 <NAME>
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public License
# (LGPL) as published by the Free Software Foundation; either
# version 2.1 of the License, or any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
import half_edge_mesh
import half_edge_mesh_coord
from half_edge_mesh_coord import compute_squared_distance
from half_edge_mesh_coord import compute_cos_triangle_angle
from half_edge_mesh_coord import compute_midpoint
## Vertex class supporting mesh decimation.
class VERTEX_DCMT_BASE(half_edge_mesh.VERTEX_BASE):
## Initialize
def __init__(self):
super(VERTEX_DCMT_BASE,self).__init__()
## Internal flag used for detecting vertex adjacencies.
self._visited_flag = False
## Return visited_flag.
def IsVisited(self):
return self._visited_flag
## SetVisitedFlag to False.
def ClearVisitedFlag(self):
self._visited_flag = False
## Set visited_flag to flag in all neighbors of self.
def SetVisitedFlagsInAdjacentVertices(self, flag):
for k in range(0,self.NumHalfEdgesFrom()):
half_edgeA = self.KthHalfEdgeFrom(k)
half_edgeA.ToVertex()._visited_flag = flag
half_edgeB = half_edgeA.PrevHalfEdgeInCell()
# Set half_edgeB.FromVertex()._visited_flag in case
# of boundary edges or cells with arbitrary orientations.
half_edgeB.FromVertex()._visited_flag = flag
## Set visited_flag to False in all neighbors of self.
def ClearVisitedFlagsInAdjacentVertices(self):
self.SetVisitedFlagsInAdjacentVertices(False)
## Compare first and last half edges in half_edges_from[].
# - Swap half edges if last half edge is a boundary edge
# and first half edge is internal or if both are internal,
# but half_edge_from[-1].PrevHalfEdgeInCell() is boundary,
# while half_edge_from[0].PrevHalfEdgeInCell()) is interior.
def _ProcessFirstLastHalfEdgesFrom(self):
if (self.NumHalfEdgesFrom() < 2):
# No swap
return
if (self.KthHalfEdgeFrom(0).IsBoundary()):
# No swap
return
if (self.KthHalfEdgeFrom(-1).IsBoundary()):
self._SwapHalfEdgesInHalfEdgeFromList(0,-1)
return
if (self.KthHalfEdgeFrom(0).PrevHalfEdgeInCell().IsBoundary()):
# No swap.
return
if (self.KthHalfEdgeFrom(-1).PrevHalfEdgeInCell().IsBoundary()):
self._SwapHalfEdgesInHalfEdgeFromList(0,-1)
return
# *** Public ***
## Return True if vertex is incident on more than two edges.
def IsIncidentOnMoreThanTwoEdges(self):
TWO = 2
num_half_edges_from = self.NumHalfEdgesFrom()
if (num_half_edges_from > TWO):
return True
if not(self.IsBoundary()):
return False
if (num_half_edges_from == TWO):
# Boundary vertex in two cells must have at least
# three incident edges.
return True
else:
# Boundary vertex is in just one cell, and has exactly
# two incident edges
return False
## Half edge class supporting mesh decimation.
class HALF_EDGE_DCMT_BASE(half_edge_mesh.HALF_EDGE_BASE):
## Compute square of edge length.
def ComputeLengthSquared(self):
return compute_squared_distance\
(self.FromVertex().coord, self.ToVertex().coord)
## Returns cosine of angle at FromVertex() in triangle formed by
# PrevHalfEdge().FromVertex(), FromVertex(), ToVertex().
# - Returns also flag_zero indicating if middle vertex has same
# coordinates as one of the other two (or middle vertex is
# very, very close to one of the other two.)
def ComputeCosAngleAtFromVertex(self):
prev_half_edge = self.PrevHalfEdgeInCell()
v0 = prev_half_edge.FromVertex()
v1 = self.FromVertex()
v2 = self.ToVertex()
return compute_cos_triangle_angle\
(v0.coord, v1.coord, v2.coord)
## Cell class supporting mesh decimation.
class CELL_DCMT_BASE(half_edge_mesh.CELL_BASE):
## Set visited_flag to flag in all cell vertices.
def SetVisitedFlagsInAllVertices(self, flag):
half_edge = self.HalfEdge()
for k in range(0,self.NumVertices()):
half_edge.FromVertex()._visited_flag = flag
half_edge = half_edge.NextHalfEdgeInCell()
## Set visited_flag to False in all cell vertices.
def ClearVisitedFlagsInAllVertices(self):
self.SetVisitedFlagsInAllVertices(False)
## Return min and max squared edge lengths in a cell.
# - Return also half edges with min and max edge lengths.
def ComputeMinMaxEdgeLengthSquared(self):
if (self.NumVertices() == 0):
# Empty cell.
return 0.0, 0.0, 0, 0
half_edge = self.HalfEdge()
min_edge_length_squared = half_edge.ComputeLengthSquared()
max_edge_length_squared = min_edge_length_squared
ihalf_edge_min = half_edge.Index();
ihalf_edge_max = ihalf_edge_min
for i in range(1,self.NumVertices()):
half_edge = half_edge.NextHalfEdgeInCell()
length_squared = half_edge.ComputeLengthSquared()
if (length_squared < min_edge_length_squared):
min_edge_length_squared = length_squared
ihalf_edge_min = half_edge.Index()
# Note: min_edge_length_squared <= max_edge_length_squared, so
# if length_squared < min_edge_length_squared, then
# (length_squared > max_edge_length_squared) is False.
elif (length_squared > max_edge_length_squared):
max_edge_length_squared = length_squared
ihalf_edge_max = half_edge.Index()
return min_edge_length_squared, max_edge_length_squared,\
ihalf_edge_min, ihalf_edge_max
## Return cosine of min and max angles between consecutive cell edges.
# - Return also half edges whose from vertices are incident
# on the two cell edges forming the min and max angles.
# - Note: The smallest angle has the largest cosine and
# the largest angle has the smallest cosine.
def ComputeCosMinMaxAngle(self):
if (self.NumVertices() == 0):
# Empty cell.
return 0.0, 0.0, 0, 0
# Initialize
cos_min_angle = 0.0
cos_max_angle = 0.0
ihalf_edge_min = 0
ihalf_edge_max = 0
half_edge = self.HalfEdge()
flag_found = False
for i in range(0, self.NumVertices()):
cos_angle, flag_zero = half_edge.ComputeCosAngleAtFromVertex()
if (not(flag_zero)):
if (not(flag_found)):
cos_min_angle = cos_angle
cos_max_angle = cos_angle
ihalf_edge_min = half_edge.Index()
ihalf_edge_max = ihalf_edge_min
flag_found = True
elif (cos_angle > cos_min_angle):
# Remember: Small angles have large cos values.
cos_min_angle = cos_angle
ihalf_edge_min = half_edge.Index()
# Note: cos_min_angle >= cos_max_angle, so
# if cos_angle > cos_min_angle, then
# (cos_angle < cos_max_angle) is False.
elif (cos_angle < cos_max_angle):
cos_max_angle = cos_angle
ihalf_edge_max = half_edge.Index()
half_edge = half_edge.NextHalfEdgeInCell()
return cos_min_angle, cos_max_angle,\
ihalf_edge_min, ihalf_edge_max
# Half edge mesh decimation class.
# - Initialized with vertex, half edge and cell classes.
# - These classes should be derived from VERTEX_DCMT_BASE,
# HALF_EDGE_DCMT_BASE and CELL_DCMT_BASE.
class HALF_EDGE_MESH_DCMT_BASE(half_edge_mesh.HALF_EDGE_MESH_BASE):
# Public find edge function.
## Return half edge (v0,v1) or (v1,v0) if it exists
# - Return None if no edge found.
def FindEdge(self, v0, v1):
half_edge = v0.FindIncidentHalfEdge(v1.Index())
if not(half_edge is None):
return half_edge
half_edge = v1.FindIncidentHalfEdge(v0.Index())
return half_edge
# Private member functions.
def __init__(self, classV, classHE, classC):
super(HALF_EDGE_MESH_DCMT_BASE,self).__init__(classV, classHE, classC)
## Remove half edge from the half_edge_from list of its from_vertex.
# - Does not ensure that first element is a boundary half edge.
# - Call _MoveBoundaryHalfEdgeToIncidentHalfEdgeFrom0() to ensure
# that first half edge in vertex list is a boundary edge.
def _RemoveHalfEdgeFromVertexList(self, half_edge0):
v0 = half_edge0.FromVertex()
list_length = v0.NumHalfEdgesFrom()
ilast = list_length-1
for k in range(0,list_length):
half_edge = v0.KthHalfEdgeFrom(k)
if (half_edge0 is half_edge):
if (k != ilast):
# Replace half_edge0 with last entry.
v0.half_edge_from[k] = v0.half_edge_from[ilast]
v0.half_edge_from.pop()
return
## Move half edges in vA.half_edge_from[] to vB.half_edge_from[].
# - Clear vA.half_edge_from[]
def _MoveVertexHalfEdgeFromList(self, vA, vB):
for k in range(0,vA.NumHalfEdgesFrom()):
half_edge = vA.KthHalfEdgeFrom(k)
half_edge.from_vertex = vB
# Add vA.half_edge_from[] to vB.half_edge_from[].
vB.half_edge_from.extend(vA.half_edge_from)
vA.half_edge_from.clear()
## Swap next_half_edge_around_edge.
def _SwapNextHalfEdgeAroundEdge(self, half_edgeA, half_edgeB):
tempA = half_edgeA.NextHalfEdgeAroundEdge()
tempB = half_edgeB.NextHalfEdgeAroundEdge()
half_edgeA.next_half_edge_around_edge = tempB
half_edgeB.next_half_edge_around_edge = tempA
## Return previous half edge around edge.
def _FindPrevHalfEdgeAroundEdge(self, half_edge0):
max_numh = half_edge0.FromVertex().NumHalfEdgesFrom() +\
half_edge0.ToVertex().NumHalfEdgesFrom()
half_edge = half_edge0.NextHalfEdgeAroundEdge()
for k in range(0,max_numh):
if (half_edge0 is half_edge.NextHalfEdgeAroundEdge()):
return half_edge
half_edge = half_edge.NextHalfEdgeAroundEdge()
# Should never reach here. Data structure inconsistency.
raise Exception\
("Programming error. Unable to find previous half edge around edge.")
## Find some half edge (v0,v1) or (v1,v0) and link with half_edgeA
# in half edge around edge cycle.
# - If half edge not found, then do nothing.
def _FindAndLinkHalfEdgeAroundEdge(self, v0, v1, half_edgeA):
half_edgeB = self.FindEdge(v0, v1)
if (half_edgeB is None):
return
self._SwapNextHalfEdgeAroundEdge(half_edgeA, half_edgeB)
# Reorder v1.half_edge_from in case half_edgeA
# is no longer a boundary edge.
v1.MoveBoundaryHalfEdgeToIncidentHalfEdge0()
## Link half edges merged by merging v0 and v1.
# - Search for all possible merged edges, not just triangles
# containing v0 and v1, to handle non-manifolds.
# - Running time: O(v0.NumHalfEdgesFrom() + v1.NumHalfEdgesFrom())
def _LinkHalfEdgesAroundMergedEdges(self,v0,v1):
v1.ClearVisitedFlagsInAdjacentVertices()
v0.SetVisitedFlagsInAdjacentVertices(True)
for k in range(0, v1.NumHalfEdgesFrom()):
half_edgeA = v1.KthHalfEdgeFrom(k)
vtoA = half_edgeA.ToVertex()
if (vtoA.IsVisited()):
# vtoA is a neighbor of v0 and v1.
self._FindAndLinkHalfEdgeAroundEdge(v0, vtoA, half_edgeA)
# Set vtoA.visited_flag to False so that vtoA
# will not be processed twice.
vtoA.ClearVisitedFlag()
half_edgeB = half_edgeA.PrevHalfEdgeInCell()
vfromB = half_edgeB.FromVertex()
# Check vfromB to handle boundary edges and/or cells
# with arbitrary orientations.
if (vfromB.IsVisited()):
# vfromB is a neighbor of v0 and v1.
self._FindAndLinkHalfEdgeAroundEdge(v0, vfromB, half_edgeB)
# Set vfromB.visited_flag to False so that vfromB
# will not be processed twice.
vfromB.ClearVisitedFlag()
## Relink half edges in cell.
# - Overwrites previous links.
def _RelinkHalfEdgesInCell(self, hprev, hnext):
hprev.next_half_edge_in_cell = hnext
hnext.prev_half_edge_in_cell = hprev
## Delete vertex.
def _DeleteVertex(self, v):
if (v is None):
# Can't delete None.
return
iv = v.Index()
self._vertex_dict.pop(iv,0)
## Delete half edge.
def _DeleteHalfEdge(self, half_edge0):
if not(half_edge0.IsBoundary()):
next_half_edge_around_edge = half_edge0.NextHalfEdgeAroundEdge()
prev_half_edge_around_edge =\
self._FindPrevHalfEdgeAroundEdge(half_edge0)
prev_half_edge_around_edge.next_half_edge_around_edge =\
next_half_edge_around_edge
self._RemoveHalfEdgeFromVertexList(half_edge0)
ihalf_edge0 = half_edge0.Index()
self._half_edge_dict.pop(ihalf_edge0,0)
## Delete half edges around edge.
# - max_numh is an upper bound on the number of half edges
# around the edge containing half_edge0.
def _DeleteHalfEdgesAroundEdge(self, half_edge0, max_numh):
half_edge = half_edge0
for k in range(0, max_numh):
next_half_edge_around_edge = half_edge.NextHalfEdgeAroundEdge()
if (next_half_edge_around_edge is half_edge):
# Delete half edge.
ihalf_edge = half_edge.Index()
self._RemoveHalfEdgeFromVertexList(half_edge)
self._half_edge_dict.pop(ihalf_edge)
return
else:
# Delete next_half_edge_around_edge.
half_edge.next_half_edge_around_edge =\
next_half_edge_around_edge.NextHalfEdgeAroundEdge()
inext_half_edge = next_half_edge_around_edge.Index()
self._RemoveHalfEdgeFromVertexList(next_half_edge_around_edge)
self._half_edge_dict.pop(inext_half_edge,0)
## Delete cell
def _DeleteCell(self, cell):
icell = cell.Index()
self._cell_dict.pop(icell,0)
# *** Internal split functions.
def _SplitInternalEdge(self, half_edgeA):
if (half_edgeA is None):
raise Exception\
("Programming error. Argument to _SplitInternalEdge is None.")
half_edgeB = half_edgeA.NextHalfEdgeAroundEdge()
if not(half_edgeB.NextHalfEdgeAroundEdge() is half_edgeA):
raise Exception\
("Programming error. Half edge passed to _SplitInternalEdge is in an edge shared by three or more cells.")
if (half_edgeB is half_edgeA):
raise Exception\
("Programming error. Half edge passed to _SplitInternalEdge is a boundary edge. Call _SplitBoundaryEdge().")
vA = half_edgeA.FromVertex()
vB = half_edgeB.FromVertex()
cellA = half_edgeA.Cell()
cellB = half_edgeB.Cell()
numvA = cellA.NumVertices()
numvB = cellB.NumVertices()
nextA = half_edgeA.NextHalfEdgeInCell()
nextB = half_edgeB.NextHalfEdgeInCell()
# Create a new vertex.
ivnew = self.MaxVertexIndex()+1
newv = self.AddVertex(ivnew)
if (newv is None):
raise Exception("Error creating new vertex. Out of memory?")
# Set newv to midpoint of (vA,vB).
compute_midpoint(vA.coord, vB.coord, newv.coord)
# Create two new half edges.
inew_half_edgeA = self.MaxHalfEdgeIndex()+1
# _AddHalfEdge increments cellA.num_vertices.
new_half_edgeA = self._AddHalfEdge(inew_half_edgeA, cellA, newv)
inew_half_edgeB = self.MaxHalfEdgeIndex()+1
# _AddHalfEdge increments cellB.num_vertices.
new_half_edgeB = self._AddHalfEdge(inew_half_edgeB, cellB, newv)
newv.half_edge_from.append(new_half_edgeA)
newv.half_edge_from.append(new_half_edgeB)
# Relink half edges in cell.
self._RelinkHalfEdgesInCell(half_edgeA, new_half_edgeA)
self._RelinkHalfEdgesInCell(half_edgeB, new_half_edgeB)
self._RelinkHalfEdgesInCell(new_half_edgeA, nextA)
self._RelinkHalfEdgesInCell(new_half_edgeB, nextB)
# Unlink half_edgeA.next_half_edge_around_edge and
# half_edgeB.next_half_edge_around_edge.
half_edgeA.next_half_edge_around_edge = half_edgeA
half_edgeB.next_half_edge_around_edge = half_edgeB
# Link half edges around edge.
self._LinkHalfEdgesAroundEdge(half_edgeA, new_half_edgeB)
self._LinkHalfEdgesAroundEdge(half_edgeB, new_half_edgeA)
# half_edgeA and half_edgeB are not boundary edges,
# but the previous edges in the cell might be boundary edges.
vA.MoveBoundaryHalfEdgeToIncidentHalfEdge0()
vB.MoveBoundaryHalfEdgeToIncidentHalfEdge0()
return newv
## Split a boundary edge.
# - Returns new vertex.
# @pre half_edgeA is a boundary edge.
def _SplitBoundaryEdge(self, half_edgeA):
if (half_edgeA is None):
raise Exception\
("Programming error. Argument to _SplitBoundaryEdge is None.")
if not(half_edgeA.IsBoundary()):
raise Exception\
("Programming error. Half edge passed to _SplitBoundaryEdges is not a boundary edge. Call _SplitInternalEdge().")
vA = half_edgeA.FromVertex()
vB = half_edgeA.ToVertex()
cellA = half_edgeA.Cell()
numvA = cellA.NumVertices()
nextA = half_edgeA.NextHalfEdgeInCell()
# Create a new vertex.
ivnew = self.MaxVertexIndex()+1
newv = self.AddVertex(ivnew)
if (newv is None):
raise Exception("Error creating new vertex. Out of memory?")
# Set newv to midpoint of (vA,vB).
compute_midpoint(vA.coord, vB.coord, newv.coord)
# Create new half edge.
inew_half_edgeA = self.MaxHalfEdgeIndex()+1
# _AddHalfEdge() increments cellA.num_vertices.
new_half_edgeA = self._AddHalfEdge(inew_half_edgeA,cellA,newv)
newv.half_edge_from.append(new_half_edgeA)
# Relink half edges in cell.
self._RelinkHalfEdgesInCell(half_edgeA, new_half_edgeA)
self._RelinkHalfEdgesInCell(new_half_edgeA, nextA)
# No need to move edges in half_edge_from[] lists.
return newv
# Public member functions.
# *** Collapse/join/split functions ***
## Collapse edge, mapping two vertices to a single vertex.
def CollapseEdge(self, ihalf_edge0):
half_edge0 = self.HalfEdge(ihalf_edge0);
if (half_edge0 is None):
# Can't collapse a half edge that doesn't exist.
return None;
# Don't collapse half_edge0 if its two endpoints (vA,vB) are
# in some cell, but edge (vA,vB) is not in the cell.
if (self.IsIllegalEdgeCollapseH(half_edge0)):
return None
max_num_half_edges_around_edge =\
half_edge0.FromVertex().NumHalfEdgesFrom() +\
half_edge0.ToVertex().NumHalfEdgesFrom()
v0 = half_edge0.FromVertex()
v1 = half_edge0.ToVertex()
# Update *.next_half_edge_around_edge links.
self._LinkHalfEdgesAroundMergedEdges(v0,v1)
# Update *.prev_half_edge_in_cell and *.next_half_edge_in_cell links.
# Delete triangles cells containing edge (v0,v1).
half_edge = half_edge0
k = 0
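# Walk every half edge around (v0,v1): triangles containing the edge are deleted outright, while larger cells just lose the collapsed edge.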
while True:
cell = half_edge.Cell()
prev_half_edge = half_edge.PrevHalfEdgeInCell()
next_half_edge = half_edge.NextHalfEdgeInCell()
if (cell.HalfEdge() is half_edge):
cell.half_edge = next_half_edge
if (cell.IsTriangle()):
# Cell is a triangle.
# Collapsing half edge removes cell.
v2 = prev_half_edge.FromVertex()
self._DeleteHalfEdge(prev_half_edge)
self._DeleteHalfEdge(next_half_edge)
self._DeleteCell(cell)
v2.MoveBoundaryHalfEdgeToIncidentHalfEdge0()
else:
# Link previous and next half edge.
self._RelinkHalfEdgesInCell(prev_half_edge, next_half_edge)
cell.num_vertices = cell.num_vertices-1
half_edge = half_edge.NextHalfEdgeAroundEdge()
k = k+1
if (k >= max_num_half_edges_around_edge) or\
(half_edge is half_edge0):
break
# Compute an upper bound on the number of half edges
# with endpoints (v0,v1).
max_numh0 = v0.NumHalfEdgesFrom() + v1.NumHalfEdgesFrom()
# Move all half edges from v0 to be from v1.
# - Moves v0.half_edge_from to v1.half_edge_from
self._MoveVertexHalfEdgeFromList(v0, v1)
# Set v1 to midpoint of (v0,v1).
compute_midpoint(v0.coord, v1.coord, v1.coord)
# Delete half edges around edge (v0,v1).
self._DeleteHalfEdgesAroundEdge(half_edge0, max_numh0)
self._DeleteVertex(v0)
v1.MoveBoundaryHalfEdgeToIncidentHalfEdge0()
return v1
## Split cell with diagonal connecting the two from vertices.
# - Diagonal (half_edgeA.FromVertex(), half_edgeB.FromVertex()).
def SplitCell(self, ihalf_edgeA, ihalf_edgeB):
half_edgeA = self.HalfEdge(ihalf_edgeA)
half_edgeB = self.HalfEdge(ihalf_edgeB)
if (half_edgeA is None) or (half_edgeB is None):
raise Exception("Programming error. Arguments to SplitCell are not half edge indices.")
if self.IsIllegalSplitCell(half_edgeA, half_edgeB):
return None
vA = half_edgeA.FromVertex()
vB = half_edgeB.FromVertex()
half_edgeC = self.FindEdge(vA, vB)
cellA = half_edgeA.Cell()
numvA = cellA.NumVertices() # Store before adding diagA to mesh
icellB = self.MaxCellIndex()+1
cellB = self._AddCell(icellB)
idiagA = self.MaxHalfEdgeIndex()+1
diagA = self._AddHalfEdge(idiagA, cellA, vB)
idiagB = self.MaxHalfEdgeIndex()+1
diagB = self._AddHalfEdge(idiagB, cellB, vA)
# Link diagA and diagB around edge.
diagA.next_half_edge_around_edge = diagB
diagB.next_half_edge_around_edge = diagA
if not(half_edgeC is None):
# Link half_edge_around_edge cycle of half_edgeC and diagA/diagB
self._SwapNextHalfEdgeAroundEdge(half_edgeC, diagA)
# Add diagA and diagB to vertex half_edge_from[] lists.
diagA.from_vertex.half_edge_from.append(diagA)
diagB.from_vertex.half_edge_from.append(diagB)
# Change cell of half edges from half_edgeB to half_edgeA.
half_edge = half_edgeB
k = 0
while (k < numvA) and not(half_edge is half_edgeA):
half_edge.cell = cellB
half_edge = half_edge.NextHalfEdgeInCell()
k = k+1
# Set num_vertices in cellA and cellB
cellB.num_vertices = k+1
cellA.num_vertices = numvA+1-k
# Set cellB.half_edge.
cellB.half_edge = half_edgeB
# Change cellA.half_edge, if necessary.
if not(cellA.HalfEdge().Cell() is cellA):
cellA.half_edge = half_edgeA
hprevA = half_edgeA.PrevHalfEdgeInCell()
hprevB = half_edgeB.PrevHalfEdgeInCell()
# Link half edges in cell.
self._RelinkHalfEdgesInCell(hprevB, diagA)
self._RelinkHalfEdgesInCell(diagA, half_edgeA)
self._RelinkHalfEdgesInCell(hprevA, diagB)
self._RelinkHalfEdgesInCell(diagB, half_edgeB)
# Swap first and last edges in half_edge_list[], if necessary.
# diagA and diagB are not boundary edges, but
# diagA.PrevEdgeInCell() or diagB.PrevEdgeInCell() could
# be boundary edges.
diagA.FromVertex()._ProcessFirstLastHalfEdgesFrom()
diagB.FromVertex()._ProcessFirstLastHalfEdgesFrom()
return diagA
## Join two cells sharing an edge.
# - Returns edge incident on the joined cell.
def JoinTwoCells(self, ihalf_edgeA):
half_edgeA = self.HalfEdge(ihalf_edgeA)
if (half_edgeA is None):
raise Exception("Programming error. Argument to JoinTwoCells is not a cell index.")
if (self.IsIllegalJoinCells(half_edgeA)):
return None
half_edgeB = half_edgeA.NextHalfEdgeAroundEdge()
if (half_edgeB.NextHalfEdgeAroundEdge() != half_edgeA):
raise Exception("Programming error. Half edge passed to JoinToCells is in an edge shared by three or more cells.")
vA = half_edgeA.FromVertex()
vB = half_edgeB.FromVertex()
cellA = half_edgeA.Cell()
cellB = half_edgeB.Cell()
numvA = cellA.NumVertices()
numvB = cellB.NumVertices()
prevA = half_edgeA.PrevHalfEdgeInCell()
prevB = half_edgeB.PrevHalfEdgeInCell()
nextA = half_edgeA.NextHalfEdgeInCell()
nextB = half_edgeB.NextHalfEdgeInCell()
if not(vA.IsIncidentOnMoreThanTwoEdges()) or\
not(vB.IsIncidentOnMoreThanTwoEdges()):
# Can't remove an edge if some endpoint only has degree 2.
return None
# Change cellA.HalfEdge() if necessary.
if (cellA.HalfEdge() is half_edgeA):
cellA.half_edge = half_edgeA.NextHalfEdgeInCell()
# Change edges in cellB to be in cellA.
half_edge = half_edgeB.NextHalfEdgeInCell()
for k in range(0, numvB-1):
half_edge.cell = cellA
half_edge = half_edge.NextHalfEdgeInCell()
# Set number of vertices in cell.
cellA.num_vertices = numvA + numvB - 2
# Relink half edges in cell.
self._RelinkHalfEdgesInCell(prevA, nextB)
self._RelinkHalfEdgesInCell(prevB, nextA)
# Delete cellB and half_edgeA and half_edgeB.
self._DeleteHalfEdge(half_edgeA)
self._DeleteHalfEdge(half_edgeB)
self._DeleteCell(cellB)
# half_edgeA and half_edgeB are not boundary edges,
# but the previous edges in the cell might be boundary edges.
vA.MoveBoundaryHalfEdgeToIncidentHalfEdge0()
vB.MoveBoundaryHalfEdgeToIncidentHalfEdge0()
return nextA
## Split edge at midpoint.
# - Returns new vertex.
def SplitEdge(self, ihalf_edgeA):
half_edgeA = self.HalfEdge(ihalf_edgeA)
if (half_edgeA is None):
raise Exception\
("Programming error. Argument to SplitEdge is not a half edge index.")
if (half_edgeA.IsBoundary()):
return self._SplitBoundaryEdge(half_edgeA)
else:
return self._SplitInternalEdge(half_edgeA)
# *** Functions to check potential edge collapses. ***
## Return True if half edge endpoints and v are in a mesh triangle.
def IsInTriangle(self, half_edge0, v):
# Cannot have more than max_numh half edges around an edge.
max_numh = half_edge0.FromVertex().NumHalfEdgesFrom() +\
half_edge0.ToVertex().NumHalfEdgesFrom()
half_edge = half_edge0
k = 0
while (True):
if (half_edge.Cell().IsTriangle()):
prev_half_edge = half_edge.PrevHalfEdgeInCell()
if (prev_half_edge.FromVertex() is v):
return True
half_edge = half_edge.NextHalfEdgeAroundEdge()
k = k+1
if (k >= max_numh) or (half_edge is half_edge0):
break
return False
## Return True if both endpoints (vfrom,vto) of half_edge
# are neighbors of some vertex vC, but (vfrom, vto, vC)
# is not a mesh triangle.
# - Returns also ivC, the index of the third vertex vC.
def FindTriangleHole(self, half_edge):
# Initialize.
ivC = 0
vfrom = half_edge.FromVertex()
vto = half_edge.ToVertex()
vto.ClearVisitedFlagsInAdjacentVertices()
vfrom.SetVisitedFlagsInAdjacentVertices(True)
for k in range(0, vto.NumHalfEdgesFrom()):
half_edgeA = vto.KthHalfEdgeFrom(k)
vtoA = half_edgeA.ToVertex()
if (vtoA.IsVisited()):
# vtoA is a neighbor of vfrom and vto.
if not(self.IsInTriangle(half_edge, half_edgeA.ToVertex())):
ivC = vtoA.Index()
return True, ivC
half_edgeB = half_edgeA.PrevHalfEdgeInCell()
vfromB = half_edgeB.FromVertex()
## Check vfromB to handle boundary edges and/or cells
# with arbitrary orientations.
if vfromB.IsVisited():
# vfromB is a neighbor of vfrom and vto.
if not(self.IsInTriangle(half_edge, vfromB)):
ivC = vfromB.Index()
return True, ivC
return False, 0
## Return True if cell icell is a triangle whose 3 edges
# are boundary edges.
def IsIsolatedTriangle(self, icell):
THREE = 3
cell = self.Cell(icell)
if (cell is None):
return False
if not(cell.IsTriangle()):
return False
half_edge = cell.HalfEdge()
for i in range(0,THREE):
if not(half_edge.IsBoundary()):
return False
# Cell has three vertices (and three edges) and all edges
# are boundary edges.
return True
## Return True if cell icell is in the boundary of a tetrahedron.
def IsInTetrahedron(self, icell):
cell0 = self.Cell(icell)
if (cell0 is None):
return False
if not(cell0.IsTriangle()):
return False
half_edge0 = cell0.HalfEdge()
v2 = half_edge0.PrevHalfEdgeInCell().FromVertex()
# Cannot have more than max_numh half edges around an edge.
max_numh = half_edge0.FromVertex().NumHalfEdgesFrom() +\
half_edge0.ToVertex().NumHalfEdgesFrom()
half_edge = half_edge0.NextHalfEdgeAroundEdge()
k = 0
while (k < max_numh and not(half_edge is half_edge0)):
cell = half_edge.Cell()
if cell.IsTriangle():
prev_half_edge = half_edge.PrevHalfEdgeInCell()
next_half_edge = half_edge.NextHalfEdgeInCell()
if self.IsInTriangle(prev_half_edge, v2) and\
self.IsInTriangle(next_half_edge, v2):
# cell0, cell, and two triangles form a tetrahedron.
return True
k = k+1
half_edge = half_edge.NextHalfEdgeAroundEdge()
return False
## Count number of vertices shared by two cells.
def CountNumVerticesSharedByTwoCells(self, cellA, cellB):
num_shared_vertices = 0
cellB.ClearVisitedFlagsInAllVertices()
cellA.SetVisitedFlagsInAllVertices(True)
half_edgeB = cellB.HalfEdge()
for k in range(0, cellB.NumVertices()):
v = half_edgeB.FromVertex()
if (v.IsVisited()):
num_shared_vertices = num_shared_vertices+1
half_edgeB = half_edgeB.NextHalfEdgeInCell()
return num_shared_vertices
## Return True if edge collapse is illegal.
# - Edge collapse (vA,vB) is illegal if some cell contains
# both vA and vB but not edge (vA,vB).
# - Version that takes two vertices.
# - NOTE: Function suffix is 'V' (for vertex arguments).
def IsIllegalEdgeCollapseV(self, vA, vB):
if (vA.NumHalfEdgesFrom() > vB.NumHalfEdgesFrom()):
# Swap vA and vB to reduce number of cells processed.
return self.IsIllegalEdgeCollapseV(vB, vA)
else:
for k in range(0, vA.NumHalfEdgesFrom()):
half_edge0 = vA.KthHalfEdgeFrom(k)
cell = half_edge0.Cell()
if (cell.NumVertices() < 4):
# All pairs of cell vertices form an edge.
continue
half_edge =\
(half_edge0.NextHalfEdgeInCell()).NextHalfEdgeInCell()
for i in range(2,cell.NumVertices()-1):
    if (half_edge.FromVertex() == vB):
        return True
    # Advance to the next vertex of the cell.
    half_edge = half_edge.NextHalfEdgeInCell()
return False
## Return True if edge collapse is illegal.
# - NOTE: Function suffix is 'H' (for half_edge argument).
def IsIllegalEdgeCollapseH(self, half_edge):
return self.IsIllegalEdgeCollapseV\
(half_edge.FromVertex(), half_edge.ToVertex())
## Return True if split cell is illegal.
# - Split cell is illegal
# if half_edgeA and half_edgeB are in different cells or
# if half_edgeA.FromVertex() and half_edgeB.FromVertex()
# are adjacent vertices.
def IsIllegalSplitCell(self, half_edgeA, half_edgeB):
if not(half_edgeA.Cell() is half_edgeB.Cell()):
return True
if (half_edgeA is half_edgeB):
return True
if (half_edgeA.FromVertex() is half_edgeB.ToVertex()):
return True
if (half_edgeA.ToVertex() is half_edgeB.FromVertex()):
return True
return False
## Return True if join cells is illegal.
# - Join cells is illegal if half_edge is a boundary half edge
# or more than two cells are incident on the edge
# or some endpoint of half edge has degree 2.
def IsIllegalJoinCells(self, half_edge):
TWO = 2
if (half_edge.IsBoundary()):
return True
if not(half_edge.FromVertex().IsIncidentOnMoreThanTwoEdges()):
return True
if not(half_edge.ToVertex().IsIncidentOnMoreThanTwoEdges()):
return True
half_edgeX = half_edge.NextHalfEdgeAroundEdge()
if not(half_edge is half_edgeX.NextHalfEdgeAroundEdge()):
# More than two cells are incident on edge
# (half_edge.FromVertex(), half_edge.ToVertex()).
return True
if (self.CountNumVerticesSharedByTwoCells\
(half_edge.Cell(), half_edgeX.Cell()) > TWO):
# Cells share more than two vertices.
return True
# Join is LEGAL
return False
# *** Compute mesh information. ***
## Return min and max squared edge lengths over all mesh edges.
# - Return also half edges with min and max edge lengths.
def ComputeMinMaxEdgeLengthSquared(self):
flag_found = False
# Initialize
min_edge_length_squared = 0.0
max_edge_length_squared = 0.0
ihalf_edge_min = 0
ihalf_edge_max = 0
for ihalf_edge in self.HalfEdgeIndices():
half_edge = self.HalfEdge(ihalf_edge)
if (half_edge is None):
# Shouldn't happen but just in case.
continue
length_squared = half_edge.ComputeLengthSquared()
if (not(flag_found) or (length_squared < min_edge_length_squared)):
min_edge_length_squared = length_squared
ihalf_edge_min = half_edge.Index()
if (not(flag_found) or length_squared > max_edge_length_squared):
max_edge_length_squared = length_squared
ihalf_edge_max = half_edge.Index()
flag_found = True
return min_edge_length_squared, max_edge_length_squared,\
ihalf_edge_min, ihalf_edge_max
## Return min squared ratio of the min to max edge in any cell.
# - Ignores cells with all edge lengths 0.
# - Return also cell index and length and indices
# of shortest and longest half edges in the cell.
# - Returns 1.0 if there are no cells or all edges are length 0.
def ComputeMinCellEdgeLengthRatioSquared(self):
# Initialize.
min_edge_length_ratio_squared = 1.0
icell_min_ratio = 0
min_edge_length_squared = 0.0
max_edge_length_squared = 0.0
ihalf_edge_min = 0
ihalf_edge_max = 0
for icell in self.CellIndices():
cell = self.Cell(icell)
if (cell is None):
# Shouldn't happen but just in case.
continue
min_Lsquared, max_Lsquared, ihalf_min, ihalf_max =\
cell.ComputeMinMaxEdgeLengthSquared()
if (max_Lsquared == 0 or cell.NumVertices() == 0):
continue
ratio = min_Lsquared/max_Lsquared
if (ratio < min_edge_length_ratio_squared):
min_edge_length_ratio_squared = ratio
icell_min_ratio = icell
min_edge_length_squared = min_Lsquared
max_edge_length_squared = max_Lsquared
ihalf_edge_min = ihalf_min
ihalf_edge_max = ihalf_max
return min_edge_length_ratio_squared, icell_min_ratio,\
min_edge_length_squared, max_edge_length_squared,\
ihalf_edge_min, ihalf_edge_max
## Compute cosine min and max cell angles over all mesh cells.
# - Return also half edges whose from vertices are incident
# on the two cell edges forming the min and max angles.
# - Note: The smallest angle has the largest cosine and
# the largest angle has the smallest cosine.
def ComputeCosMinMaxAngle(self):
# Initialize
cos_min_angle = 0.0
cos_max_angle = 0.0
ihalf_edge_min = 0
ihalf_edge_max = 0
flag_found = False
for icell in self.CellIndices():
cell = self.Cell(icell)
if (cell is None):
# Shouldn't happen but just in case.
continue
cos_minA, cos_maxA, ihalf_min, ihalf_max =\
cell.ComputeCosMinMaxAngle()
if not(flag_found) or (cos_minA > cos_min_angle):
cos_min_angle = cos_minA
ihalf_edge_min = ihalf_min
if not(flag_found) or (cos_maxA < cos_max_angle):
cos_max_angle = cos_maxA
ihalf_edge_max = ihalf_max
flag_found = True
return cos_min_angle, cos_max_angle, ihalf_edge_min, ihalf_edge_max
|
Suramin antagonizes responses to P2-purinoceptor agonists and purinergic nerve stimulation in the guinea-pig urinary bladder and taenia coli 1 Suramin, an inhibitor of several types of ATPase, was investigated for its ability to antagonize responses mediated via P2X-purinoceptors in the guinea-pig urinary bladder and P2Y-purinoceptors in the guinea-pig taenia coli. 2 In isolated strips of bladder detrusor muscle, suramin (100 μM–1 mM) caused a non-competitive antagonism of responses to α,β-methylene ATP with an estimated pA2 of approximately 4.7, and inhibited responses to stimulation of the intramural purinergic nerves, with a similar pA2 value. At a concentration of 10 μM, suramin had little effect, but at a concentration of 1 μM, suramin potentiated responses to α,β-methylene ATP, and potentiated responses to electrical stimulation of intramural purinergic nerves. 3 In isolated strips of taenia coli, in which a standard tone had been induced by carbachol (100 nM), suramin at 100 μM and 1 mM significantly antagonized relaxant responses to ATP (at an EC50 concentration) with an estimated pA2 of 5.0 ± 0.82 and relaxant responses to electrical stimulation of the intramural non-adrenergic, non-cholinergic inhibitory nerves, either single pulses or trains of 8 Hz for 10 s, with estimated pA2 values of 4.9 ± 0.93 and 4.6 ± 1.01, respectively. Suramin had no significant effect at 1 or 10 μM. 4 Suramin, at any of the concentrations tested, did not affect contractile responses to histamine (10 μM) or carbachol (10 μM) in the bladder detrusor preparations. In the taenia coli, suramin did not affect either the relaxant responses to noradrenaline (at an EC50 concentration) or the contractile responses to carbachol (100 nM). 5 Thus, suramin at concentrations above 10 μM blocked actions mediated via P2X- and P2Y-purinoceptors in the guinea-pig urinary bladder and taenia coli respectively. Potentiation of purinoceptor-mediated activity was seen only at a low concentration of suramin (1 μM) and only in the urinary bladder (P2X-purinoceptor). For its antagonistic activity suramin did not discriminate between P2X- and P2Y-purinoceptors, but it was selective for P2-purinoceptor-mediated activity rather than that mediated via cholinoceptors, adrenoceptors or histamine receptors. |
package com.vitreoussoftware.bioinformatics.sequence.io;
import com.vitreoussoftware.bioinformatics.sequence.Sequence;
import com.vitreoussoftware.bioinformatics.sequence.basic.BasicSequence;
import com.vitreoussoftware.bioinformatics.sequence.encoding.ExpandedIupacEncodingScheme;
import com.vitreoussoftware.bioinformatics.sequence.io.reader.StringStreamReader;
import java.io.FileNotFoundException;
/**
* Base Class for Test Data classes to extend. Helps to generify test classes
* Created by John on 2016/10/15
*/
public abstract class TestData {
/**
* file with complex example records
*/
private static final String COMPLEX = "ComplexExamples";
/**
* empty file with no records
*/
private static final String EMPTY = "Empty";
/**
* file containing only whitespace
*/
private static final String EMPTY_WHITESPACE = "EmptyWhitespace";
/**
* file with three full records
*/
public static final String REAL_EXAMPLES = "RealExamples";
/**
* file with only a small amount of data
*/
private static final String SIMPLE_EXAMPLE = "SimpleExample";
/**
* file with enough data that pages must be performed
*/
private static final String PAGING_REQUIRED = "PagingRequired";
/**
* file with extra blank lines between records
*/
private static final String EXTRA_NEWLINE = "ExtraNewline";
/**
* Extremely large file for testing that all records can be loaded
*/
private static final String BIG = "Big";
/**
* Example metadata for use when testing reader/writer interaction
*/
private final String METADATA = "AB000263 standard; RNA; PRI; 368 BP.";
/**
* Shortened sequence string for basic testing
*/
private final String RECORD_SIMPLE = "CAGGCUUAACACAUGCAAGUCGAACGAAGUUAGGAAGCUUGCUUCUGAUACUUAGUGGCGGACGGGUGAGUAAUGCUUAGG";
/**
* First record in the real examples file
*/
private final String REAL_EXAMPLE_1 =
"AGGCUUAACACAUGCAAGUCGAACGAAGUUAGGAAGCUUGCUUCUGAUACUUAGUGGCGGACGGGUGAGUAAUGCUUAGG" +
"AAUCUGCCUAGUAGUGGGGGAUAACUUGGGGAAACCCAAGCUAAUACCGCAUACGACCUACGGGUGAAAGGGGGCUUUUA" +
"GCUCUCGCUAUUAGAUGAGCCUAAGUCGGAUUAGCUGGUUGGUGGGGUAAAGGCCUACCAAGGCGACGAUCUGUAGCUGG" +
"UCUGAGAGGAUGAUCAGCCACACUGGGACUGAGACACGGCCCAGACUCCUACGGGAGGCAGCAGUGGGGAAUAUUGGACA" +
"AUGGGCGAAAGCCUGAUCCAGCCAUGCCGCGUGUGUGAAGAAGGCCUUUUGGUUGUAAAGCACUUUAAGUGGGGAGGAAA" +
"AGCUUAUGGUUAAUACCCAUAAGCCCUGACGUUACCCACAGAAUAAGCACCGGCUAACUCUGUGCCAGCAGCCGCGGUAA" +
"UACAGAGGGUGCAAGCGUUAAUCGGAUUACUGGGCGUAAAGCGCGCGUAGGUGGUUAUUUAAGUCAGAUGUGAAAGCCCC" +
"GGGCUUAACCUGGGAACUGCAUCUGAUACUGGAUAACUAGAGUAGGUGAGAGGGGNGUAGAAUUCCAGGUGUAGCGGUGA" +
"AAUGCGUAGAGAUCUGGAGGAAUACCGAUGGCGAAGGCAGCUCCCUGGCAUCAUACUGACACUGAGGUGCGAAAGCGUGG" +
"GUAGCAAACAGGAUUAGAUACCCUGGUAGUCCACGCCGUAAACGAUGUCUACCAGUCGUUGGGUCUUUUAAAGACUUAGU" +
"GACGCAGUUAACGCAAUAAGUAGACCGCCUGGGGAGUACGGCCGCAAGGUUAAAACUCAAAUGAAUUGACGGGGGCCCGC" +
"ACAAGCGGUGGAGCAUGUGGUUUAAUUCGAUGCAACGCGAAGAACCUUACCUGGUCUUGACAUAGUGAGAAUCUUGCAGA" +
"GAUGCGAGAGUGCCUUCGGGAAUUCACAUACAGGUGCUGCAUGGCUGUCGUCAGCUCGUGUCGUGAGAUGUUGGGUUAAG" +
"UCCCGCAACGAGCGCAACCCUUUUCCUUAGUUACCAGCGACUCGGUCGGGAACUCUAAGGAUACUGCCAGUGACAAACUG" +
"GAGGAAGGCGGGGACGACGUCAAGUCAUCAUGGCCCUUACGACCAGGGCUACACACGUGCUACAAUGGUUGGUACAAAGG" +
"GUUGCUACACAGCGAUGUGAUGCUAAUCUCAAAAAGCCAAUCGUAGUCCGGAUUGGAGUCUGCAACUCGACUCCAUGAAG" +
"UCGGAAUCGCUAGUAAUCGCAGAUCAGAAUGCUGCGGUGAAUACGUUCCCGGGCCUUGUACACACCGCCCGUCACACCAU" +
"GGGAGUUGAUCUCACCAGAAGUGGUUAGCCUAACGCAAGAGGGCGAUCACCACGGUGGGGUCGAUGACUGGGGUGAAGUC" +
"GUAACAAGGUAGCCGUAGGGGAACUGCGGCUG";
/**
* Second record in the real example file
*/
private final String REAL_EXAMPLE_2 =
"UUAAAAUGAGAGUUUGAUCCUGGCUCAGGACGAACGCUGGCGGCGUGCCUAAUACAUGCAAGUCGAACGAAACUUUCUUA" +
"CACCGAAUGCUUGCAUUCACUCGUAAGAAUGAGUGGCGUGGACGGGUGAGUAACACGUGGGUAACCUGCCUAAAAGAAGG" +
"GGAUAACACUUGGAAACAGGUGCUAAUACCGUAUAUCUCUAAGGAUCGCAUGAUCCUUAGAUGAAAGAUGGUUCUNGCUA" +
"UCGCUUUUAGAUGGACCCGCGGCGUAUUAACUAGUUGGUGGGGUAACGGCCUACCAAGGUGAUGAUACGUAGCCGAACUG" +
"AGAGGUUGAUCGGCCACAUUGGGACUGAGACACGGCCCNAACUCCUACGGGAGGCAGCAGUAGGGAAUCUUCCACAAUGG" +
"ACGCAAGUCUGAUGGAGCAACGCCGCGUGAGUGAAGAAGGUCUUCGGAUCGUAAAACUCNGUUGUUAGAGAAGAACUCGA" +
"GUGAGAGUAACUGUUCAUUCGAUGACGGUAUCUAACCAGCAAGUCACGGCUAACUACGUGCCAGCAGCCGCGGUAAUACG" +
"UAGGUGGCAAGCGUUGUCCGGAUUUAUUGGGCGUAAAGGGAACGCAGGCGGUCUUUUAAGUCUGAUGUGAAAGCCUUCGG" +
"CUUAACCGGAGUAGUGCUAUGGAAACUGGAAGACUUGAGUGCAGAAGAGGAGAGUGGAACUCCAUGUGUAGCGGUGAAAU" +
"GCGUAGAUAUAUGGAAGAACACCAGUGGCGAAAGCGGCUCUCUGGUCUGUAACUGACGCUGAGGUUCGAAAGCGUGGGUA" +
"GCAAACAGGAUUAGAUACCCUGGUAGUCCACGCCGUAAACGAUGAAUGCUAGGUGUUGGAGGGUUUCCGCCCUUCAGUGC" +
"CGCAGCUAACGCAAUAAGCAUUCCGCCUGGGGAGUACGACCGCAAGGUUGAAACUCAAAGGAAUUGACGGGGGCNNGCAC" +
"AAGCGGUGGAGCAUGUGGUUUAAUUCGAANNAACGCGAAGAACCUUACCAGGUCUUGACAUCCUUUGACCACCUAAGAGA" +
"UUAGGCUUUCCCUUCGGGGACAAAGUGACAGGUGGNGCAUGGCUGUCGUCAGCUCGUGUCGUGAGAUGUUGGGUUAAGUC" +
"CCGCAACGAGCGCAACCCUUGUUGUCAGUUGCCAGCAUUAAGUUGGGCACUCUGGCGAGACUGCCGGUGACAAACCGGAG" +
"GAAGGUGGGGACGACGUCAAGUCAUCAUGCCCCUUAUGACCUGGGCUACACACGUGCUACAAUGGACGGUACAACGAGUC" +
"GCGAGACCGCGAGGUUUAGCUAAUCUCUUAAAGCCGUUCUCAGUUCGGAUUGUAGGCUGCAACUCGCCUACAUGAAGUCG" +
"GAAUCGCUAGUAAUCGCGA";
private final String REAL_EXAMPLE_3 =
"GAUGAACGCUAGCGGCGUGCCUUAUGCAUGCAAGUCGAACGGUCUUAAGCAAUUAAGAUAGUGGCGAACGGGUGAGUAAC" +
"GCGUAAGUAACCUACCUCUAAGUGGGGGAUAGCUUCGGGAAACUGAAGGUAAUACCGCAUGUGGUGGGCCGACAUAUGUU" +
"GGUUCACUAAAGCCGUAAGGCGCUUGGUGAGGGGCUUGCGUCCGAUUAGCUAGUUGGUGGGGUAAUGGCCUACCAAGGCU" +
"UCGAUCGGUAGCUGGUCUGAGAGGAUGAUCAGCCACACUGGGACUGAGACACGGCCCAGACUCCUACGGGAGGCAGCAGC" +
"AAGGAAUCUUGGGCAAUGGGCGAAAGCCUGACCCAGCAACGCCGCGUGAGGGAUGAAGGCUUUCGGGUUGUAAACCUCUU" +
"UUCAUAGGGAAGAAUAAUGACGGUACCUGUGGAAUAAGCUUCGGCUAACUACGUGCCAGCAGCCGCGGUAAUACGUAGGA" +
"AGCAAGCGUUAUCCGGAUUUAUUGGGCGUAAAGUGAGCGUAGGUGGUCUUUCAAGUUGGAUGUGAAAUUUCCCGGCUUAA" +
"CCGGGACGAGUCAUUCAAUACUGUUGGACUAGAGUACAGCAGGAGAAAACGGAAUUCCCGGUGUAGUGGUAAAAUGCGUA" +
"GAUAUCGGGAGGAACACCAGAGGCGAAGGCGGUUUUCUAGGUUGUCACUGACACUGAGGCUCGAAAGCGUGGGGAGCGAA" +
"CAGAAUUAGAUACUCUGGUAGUCCACGCCUUAAACUAUGGACACUAGGUAUAGGGAGUAUCGACCCUCUCUGUGCCGAAG" +
"CUAACGCUUUAAGUGUCCCGCCUGGGGAGUACGGUCGCAAGGCUAAAACUCAAAGGAAUUGACGGGGGCCCGCACAAGCA" +
"GCGGAGCGUGUGGUUUAAUUCGAUGCUACACGAAGAACCUUACCAAGAUUUGACAUGCAUGUAGUAGUGAACUGAAAGGG" +
"GAACGACCUGUUAAGUCAGGAACUUGCACAGGUGCUGCAUGGCUGUCGUCAGCUCGUGCCGUGAGGUGUUUGGUUAAGUC" +
"CUGCAACGAGCGCAACCCUUGUUGCUAGUUAAAUUUUCUAGCGAGACUGCCCCGCGAAACGGGGAGGAAGGUGGGGAUGA" +
"CGUCAAGUCAGCAUGGCCUUUAUAUCUUGGGCUACACACACGCUACAAUGGACAGAACAAUAGGUUGCAACAGUGUGAAC" +
"UGGAGCUAAUCC";
/**
* Get the full path for a file
*
* @param fileName the name of the file
* @return The completed path
*/
private String getPath(final String fileName) {
return getBasePath() + fileName + getExtension();
}
/**
* Get the {@link StringStreamReader} instance to use on this test data
*
* @param path The data file to load
* @return The {@link StringStreamReader}
*/
private StringStreamReader getReader(final String path) throws FileNotFoundException {
return createReader(getPath(path));
}
/**
* Get the {@link StringStreamReader} instance to use on this test data
*
* @param path The data file to load
* @param pagingSize The paging size for the {@link StringStreamReader}
* @return The {@link StringStreamReader}
*/
private StringStreamReader getReader(final String path, final int pagingSize) throws FileNotFoundException {
return createReader(getPath(path), pagingSize);
}
/**
* Get the {@link StringStreamReader} instance to use on this test data
*
* @param path The data file to load
* @return The {@link StringStreamReader}
*/
protected abstract StringStreamReader createReader(String path) throws FileNotFoundException;
/**
* Get the {@link StringStreamReader} instance to use on this test data
*
* @param path The data file to load
* @param pagingSize The paging size for the {@link StringStreamReader}
* @return The {@link StringStreamReader}
*/
protected abstract StringStreamReader createReader(String path, int pagingSize) throws FileNotFoundException;
/**
* Get the extension for the files
*
* @return the file extension
*/
protected abstract String getExtension();
/**
* Get the base path to the files
*
* @return the base path for the files
*/
protected abstract String getBasePath();
/**
* Return the file path used by the {@see getSimpleExampleReader} function
*
* @return The file path
*/
public String getSimpleExamplePath() {
return getPath(SIMPLE_EXAMPLE);
}
/**
* Create a {@link StringStreamReader} for an empty test file
*
* @return the StringStreamReader
*/
public StringStreamReader getEmptyReader()
throws FileNotFoundException {
return getReader(EMPTY);
}
/**
* Create a {@link StringStreamReader} for a test file containing only whitespace
*
* @return the StringStreamReader
*/
public StringStreamReader getEmptyWhiteSpaceReader()
throws FileNotFoundException {
return getReader(EMPTY_WHITESPACE);
}
/**
* Create a StringStreamReader for the Simple test file
*
* @return the StringStreamReader
* @throws FileNotFoundException the test file could not be found
*/
public StringStreamReader getSimpleExampleReader()
throws FileNotFoundException {
return getReader(SIMPLE_EXAMPLE);
}
/**
* Create a StringStreamReader for the Example test file
*
* @return the StringStreamReader
* @throws FileNotFoundException the test file could not be found
*/
public StringStreamReader getRealExamplesReader()
throws FileNotFoundException {
return getReader(REAL_EXAMPLES);
}
/**
* Create a StringStreamReader for the Complex test file
*
* @return the StringStreamReader
* @throws FileNotFoundException the test file could not be found
*/
public StringStreamReader getComplexExamplesReader()
throws FileNotFoundException {
return getReader(COMPLEX);
}
/**
* Create a StringStreamReader for the Paged test file
*
* @return the StringStreamReader
* @throws FileNotFoundException the test file could not be found
*/
public StringStreamReader getPagingRequiredReader()
throws FileNotFoundException {
return getReader(PAGING_REQUIRED);
}
/**
* Create a StringStreamReader for the Paged test file
*
* @return the StringStreamReader
* @throws FileNotFoundException the test file could not be found
*/
public StringStreamReader getPagingRequiredReader(final int pagingSize)
throws FileNotFoundException {
return getReader(PAGING_REQUIRED, pagingSize);
}
/**
* Create a StringStreamReader for the test file with extra blank lines
*
* @return the StringStreamReader
* @throws FileNotFoundException the test file could not be found
*/
public StringStreamReader getExtraNewlineReader()
throws FileNotFoundException {
return getReader(EXTRA_NEWLINE);
}
/**
* Create a StringStreamReader for the Big test file
*
* @return the StringStreamReader
* @throws FileNotFoundException the test file could not be found
*/
public StringStreamReader getBigReader()
throws FileNotFoundException {
return getReader(BIG);
}
/**
* <<<<<<< HEAD
* Get the string expression matching the simple record in {@link AcceptUnknownDnaEncodingScheme} format
* <p>
* =======
* Get the string expression matching the simple record in {@link ExpandedIupacEncodingScheme} format
* >>>>>>> a38137e... Fix naming of AcceptUnknownEncodingScheme
*
* @return The simple record string
*/
public String getSimpleRecord() {
return RECORD_SIMPLE;
}
/**
* <<<<<<< HEAD
* Get the string expression matching Record1 in {@link AcceptUnknownDnaEncodingScheme} format
* <p>
* =======
* Get the string expression matching Record1 in {@link ExpandedIupacEncodingScheme} format
* >>>>>>> a38137e... Fix naming of AcceptUnknownEncodingScheme
*
* @return The Record1 string
*/
public String getRealExample1() {
return REAL_EXAMPLE_1;
}
/**
* <<<<<<< HEAD
* Get the string expression matching Record2 in {@link AcceptUnknownDnaEncodingScheme} format
* <p>
* =======
* Get the string expression matching Record2 in {@link ExpandedIupacEncodingScheme} format
* >>>>>>> a38137e... Fix naming of AcceptUnknownEncodingScheme
*
* @return The Record2 string
*/
public String getRealExample2() {
return REAL_EXAMPLE_2;
}
/**
* <<<<<<< HEAD
* Get the string expression matching Record2 in {@link AcceptUnknownDnaEncodingScheme} format
* <p>
* =======
* Get the string expression matching Record2 in {@link ExpandedIupacEncodingScheme} format
* >>>>>>> a38137e... Fix naming of AcceptUnknownEncodingScheme
*
* @return The Record2 string
*/
public String getRealExample3() {
return REAL_EXAMPLE_3;
}
/**
* Get the {@link Sequence} that corresponds to {@see getSimpleRecord}
*
* @return The sequence
*/
public Sequence getSimpleSequence() {
return BasicSequence.create(METADATA, RECORD_SIMPLE, ExpandedIupacEncodingScheme.instance).get();
}
/**
* Get the {@link Sequence} that corresponds to {@see getRealExample1}
*
* @return The sequence
*/
public Sequence getRealExample1Sequence() {
return BasicSequence.create(METADATA, REAL_EXAMPLE_1, ExpandedIupacEncodingScheme.instance).get();
}
/**
* Get the {@link Sequence} that corresponds to {@see getRealExample2}
*
* @return The sequence
*/
public Sequence getRealExample2Sequence() {
return BasicSequence.create(METADATA, REAL_EXAMPLE_2, ExpandedIupacEncodingScheme.instance).get();
}
/**
* Get the {@link Sequence} that corresponds to {@see getRealExample3}
*
* @return The sequence
*/
public Sequence getRealExample3Sequence() {
return BasicSequence.create(METADATA, REAL_EXAMPLE_3, ExpandedIupacEncodingScheme.instance).get();
}
}
|
Parallel Hierarchical Matrices with Block Low-rank Representation on Distributed Memory Computer Systems Any hierarchical matrix (H-matrix) can be transformed to an H-matrix with block low-rank representation (BLR). Although matrix arithmetic with BLR is easier than that with the normal H-matrix, memory usage is increased compared to the O(N log N) of the normal H-matrix. Therefore, BLR has been utilized for complex arithmetic functions of relatively small matrices on a CPU node. In this study, we discuss the efficiency of H-matrices with BLR in simple arithmetic functions, such as H-matrix generation and H-matrix-vector multiplication, in large-scale problems on distributed memory computer systems. We demonstrate how the BLR block size should be defined in such problems and confirm that the complexity of the memory usage of H-matrices with BLR is O(N^1.5) when using the appropriate block size. We propose a set of parallel algorithms for H-matrices with BLR. In numerical experiments using electric field analyses, the speed-up of the execution time for the simple arithmetic functions in H-matrices with BLR continues up to about 10,000 MPI processes. We confirm that even for simple H-matrix arithmetic, the BLR version is significantly faster than the normal H-matrix version if a large number of CPU cores are used. |
There will be no end to the War on Terror and the targeting of “suspected militants” will continue and become more sophisticated, according to an article published in the Washington Post on October 23.
In the piece, Greg Miller describes a project the Obama administration has been developing for a couple of years called — in true Orwellian fashion — the “disposition matrix.”
Glen Greenwald at the Guardian (U.K.) describes the matrix’s chain of command:
The "disposition matrix" has been developed and will be overseen by the National Counterterrorism Center (NCTC). One of its purposes is "to augment" the "separate but overlapping kill lists" maintained by the CIA and the Pentagon: to serve, in other words, as the centralized clearinghouse for determining who will be executed without due process based upon how one fits into the executive branch's "matrix".
According to reports, the plans for perpetuating and perfecting the death-by-drone program “contains the names of terrorism suspects arrayed against an accounting of the resources being marshaled to track them down, including sealed indictments and clandestine operations.”
The article quotes “U.S. officials” saying that the matrix will improve the existing pair of kill lists (one maintained by the President, the other kept by the CIA) by “mapping plans for the ‘disposition’ of suspects beyond the reach of American drones.”
Readers unfamiliar with the argot of the White House and the intelligence community should understand that the phrase “plans for the disposition” of someone means plans for summarily executing a person who has never been accused of a crime and who has never been proven to have any plan to attack the United States or its interests.
Charging someone with a crime and allowing him to counter evidence produced of his intent to commit a crime or of his collusion with those who do intend to commit a crime is called due process. It is a right guaranteed by the Constitution, but regularly and unrepentantly denied by the Obama administration to scores of people killed by drones every day.
In an article in the Atlantic, Conor Friedersdorf records the comments made by Robert Gibbs, former White House press secretary and now a senior adviser to the Obama reelection campaign, regarding the use of drones to assassinate those without a demonstrable link to terror, particularly Abdulrahman al-Awlaki.
For those unfamiliar with the story, Abdulrahman al-Awlaki was killed in October 2011, and to date the Obama administration has never informed the country of any wrongdoing by this teenager, other than being related to a man (his father) who posted anti-American videos on the Internet that allegedly influenced others to commit crimes.
As he sat enjoying a roadside picnic in Yemen with a few second cousins and their friends — most of whom the young Colorado native had never met before that day — the teenager and all his companions were killed by two Hellfire missiles fired from a Predator drone.
The finger that pressed the button launching the lethal ordnance was American, and so was 16-year-old Abdulrahman al-Awlaki, the target of the strike.
Upon being asked how the president justified killing an underage “American citizen ... without due process, without trial,” Gibbs responded:
I would suggest that you should have a far more responsible father if they are truly concerned about the well being of their children. I don't think becoming an al Qaeda jihadist terrorist is the best way to go about doing your business.
That is the sort of callous disregard for the value of life and the rule of law that animates the current administration. The fact is that Abdulrahman was not a terrorist, was never accused of fomenting terrorism (as his father was), and was not in the company of his father when he was killed. That would have been impossible because by the time he and his cousins were killed, his father was already dead.
Perhaps the younger Awlaki was accidentally killed. If that were so, why wouldn’t the administration admit it? Gibbs’ answer indicates that the boy’s only crime was having a bad father. If that’s a crime for which you can be executed, then there are a lot of people all over the world who need to be watching their backs.
The unanswered questions are mounting: How many of those killed were innocent bystanders such as those who happened to be with Abdulrahman al-Awlaki? How many of the actual “targets,” like Abdulrahman, were themselves innocent or at least had no demonstrable ties to terrorist organizations?
This question will never be known with certainty because the president alone serves as judge, jury, and executioner — and does not believe he is obliged to provide evidence to the American people.
In fact, it would be very naïve to believe the targeted assassination of an innocent like Abdulrahman was an unfortunate miscalculation. When the judicial and executive powers of government are consolidated and restraints on the exercise of power are cast aside, it can be expected — based both on our knowledge of history and on the nature of man — that power will be abused and no one’s rights or life will be safe from elimination by despots.
The revelation of the “disposition matrix” makes it certain that, as the report indicates, the despotism will go on for at least another decade. In fact, the Post article suggests that according to the timeline provided by their sources, the United States is only at the halfway point of the “war on terror,” and the president and his agents will add and subtract names to their proscription lists, but “with the pace of drone strikes ... never go to zero.”
A comment from “a senior administration official” quoted in the Post article explains why the “disposition matrix” was necessary to keep America safe: “We can’t possibly kill everyone who wants to harm us,” he said.
Given the expansion of the drone program and the institutional and habitual delivery of remote control death without due process, it seems the federal government will certainly keep trying.
/**
* The stats command. Gets the stats of all the servers.
*
* @return the stats of all the servers.
*/
public Map<String, Map<String, String>> stats() {
Map<String, Map<String, String>> statsMap = new HashMap<String, Map<String, String>>();
for (Client client : hosts) {
Map<String, String> stats = null;
try {
if (client.getClientImpl() != null) {
stats = client.getClientImpl().stats();
}
} catch (Exception e) {
client.resetClientImpl();
}
statsMap.put(client.getServerIp() + ":" + client.getServerPort(), stats);
}
return statsMap;
} |
<gh_stars>0
# Copyright 2021 VMware, Inc. All rights reserved. VMware Confidential
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from ansible_collections.vmware.alb.plugins.module_utils.utils.ansible_utils import (avi_obj_cmp, ref_n_str_cmp)
new_obj = {
"name": "testpool1",
"description": "testpool1",
"servers": [
{
"ip": {
"addr": "192.168.2.20",
"type": "V4"
}
}
]
}
existing_object = {
"name": "testpool1",
"description": "testpool1",
"servers": [
{
"ip": {
"addr": "192.168.2.20",
"type": "V4"
}
}
]
}
def test_compare_single_value():
"""
If both objects are equal then it returns True.
If both objects are not equal then it returns False.
Returns True if x is a subset of y, else False.
"""
health_monitor = "https://192.168.11.18:/api/healthmonitor?name=System-HTTPS"
existing_health_monitor = "https://192.168.11.18:/api/healthmonitor?name=System-HTTP"
exist = not ref_n_str_cmp(health_monitor, existing_health_monitor)
assert exist
def test_compare_object():
"""
If both objects are equal then it returns True.
If both objects are not equal then it returns False.
"""
exist = avi_obj_cmp(new_obj, existing_object)
assert exist
|
Study on Spatial Distribution Equilibrium of Elderly Care Facilities in Downtown Shanghai With the growing challenge of aging populations around the world, the study of the care services for older adults is an essential initiative to accommodate the particular needs of the disadvantaged communities and promote social equity. Based on open-source data and the geographic information system (GIS), this paper quantifies and visualizes the imbalance in the spatial distribution of elderly care facilities in 14,578 neighborhoods in downtown (seven districts) Shanghai, China. Eight types of elderly care facilities were obtained from Shanghai elderly care service platform, divided into two categories according to their service scale. With the introduction of the improved Gaussian 2-step floating catchment area method, the accessibility of two category facilities was calculated. Through the global autocorrelation analysis, it is found that the accessibility of elderly care facilities has the characteristics of spatial agglomeration. Local autocorrelation analysis indicates the cold and hot spots in the accessibility agglomeration state of the two types of facilities, by which we summarized the characteristics of their spatial heterogeneity. It is found that for Category−I, there is a large range of hot spots in Huangpu District. For Category−II, the hot-spot and cold-spot areas show staggered distribution, and the two categories of hot spot distribution show a negative correlation. We conclude that the two categories are not evenly distributed in the urban area, which will lead to the low efficiency of resource allocation of elderly care facilities and have a negative impact on social fairness. This research offers a systematic method to study urban access to care services for older adults as well as a new perspective on improving social fairness. Background In the years to come, China's population of older adults is expected to continue developing rapidly. Data from the National Bureau of statistics shows that by 2021, China's population over the age of 65 accounted for 14.20% of its population. According to the WHO criteria for aging, having more than 14% of the population aged 65 indicates that China has entered an "aged society". At the same time, as the group commonly referred to as "post-1960s baby boomers " will be aging in the next 5-10 years, China is expected to enter a "super-aged society" around 2033 with more than 20% of its population being 65 or above. The emergence of the Chinese "baby boomers" was a few years later than its counterpart in the West, but it was sustained longer and had a more significant impact on the birth rate. After the natural disasters in 1962, the national economy had gradually recovered by the 1970s along with a strong compensatory fertility momentum, resulting in the largest population growth in Chinese history. Subsequently, the elderly dependency rate (defined as the ratio between the elderly population and the workingage population ) will rise radically in the next decade, thereby putting strain on the Framework Previous studies on accessibility, with the use of 2SFCA method, mostly set the catchment area as the travel range within a travel time of 0.5-1.5 h. However, its limited spatial scope impacts the actual accessibility results, covering up the imbalance of accessibility in reality. 
There are two aspects to consider for community residents: first, older adults living at home can easily access elderly care facilities; secondly, children can easily visit their parents living in elderly care facilities. The 15-min community-life circle is the basic unit for community life in Shanghai. That is, within the range of 15-min walking cost, essential services and public spaces should be provided to form a safe, friendly, and comfortable social living environment. In recent years, the principle of community-life circle issued by the government has set the scope of supply and demand services of public facilities. For example, "Building a diversified and integrated 15-min community life circle" is proposed in the Shanghai Master Plan 2017-2035 and Shanghai Planning Guidance of 15-Minute Community-Life Circle. The goal is to reach about 99% of the coverage of community service facilities in Shanghai within 15 min by 2035. In February 2022, Shanghai Civil Affairs Bureau issued the draft of Planning for the Layout of Shanghai Elderly Care Facilities, which proposed to ensure the full coverage of a 15-min community-life circle of basic care services for older adults and improve the serviceability for key aging groups such as living alone, advanced age, disability, and dementia. Therefore, this paper employed the 15-min community-life circles as the study units. The proposed research framework of this paper is shown in Figure 1. Based on the findings of previous studies, this paper improves the 2SFCA method to calculate the accessibility of elderly care facilities for each community, based on the resources of elderly care facilities and the needs of the aging population. The data of elderly care facilities, aging population, and community-life circle were obtained from the website using python. Then, the improved 2SFCA method was adopted, from the supply and demand side, respectively, to calculate the accessibility of elderly care facilities. The calculation results were visualized as accessibility distribution maps of care resources for older adults in ArcGIS, and three-level comparisons (community, sub-district, and district) were conducted considering administrative region matching and Kriging interpolation analysis without administrative region matching. Finally, we used spatial autocorrelation analysis to investigate the characteristics of spatial agglomeration. Compared to the accessibility research focus on the administrative region boundary, the study of resource accessibility in the community-life circle of older adults has more practical significance. Moreover, paying attention to the distribution of pension resources is to resolve not only the current pragmatic needs of older adults but also an important measure to improve social equity, so as to avoid insufficient resources or vacant waste due to uneven spatial distribution. This research reflects the accessibility distribution characteristics of elderly care facilities in downtown Shanghai and provides an essential reference for the planning agenda revision and the policy-making progress in the near future. The proposed research framework and digital techniques can be applied in other cities and potentially adopted in other disciplines such as epidemiology studies. Research Site Shanghai is the city with the highest level of urbanization development in China. With the improvement of China's economic development level and national economic strength, more cities will enter the development stage of Shanghai at this time. 
Therefore, taking Shanghai as the research site has forward-looking significance. On the other hand, Shanghai has a high degree of urban information construction, with a high degree of information disclosure and easy access to various kinds of information. According to the data of Shanghai Statistical Yearbook 2021, Shanghai, the first city entering the aging society in China, has a large population of registered older residents (60 or above years old) of 5,324,100, accounting for 36.08% of the total demographic. The aging problem in downtown Shanghai is even more severe (Huangpu District of 41.7%, Hongkou of 42.5%, Putuo District of 41.1%, Jingan District of 40.1%, Changning District of 39.1%, Yangpu District of 38.8%, and Xuhui District of 35.9%). With the elderly dependency rate increasing year by year, the city has been entering a state of deep aging society. As the population in downtown Shanghai is extremely concentrated, essential service distributions for the highly populated aged community are particularly significant. Therefore, the research scope of this study has focused on the distribution equilibrium of elderly care facilities in the community-life circles in downtown Shanghai. The research site is shown in the Figure 2 below. Acquisition of Community Location and Community Life Circle Data A python crawler was created to extract data of 27,621 communities (containing location coordinate, household count, average property price, and other information) from the real estate business website (fang.com, accessed on 15 January 2021), which includes 14,578 communities in downtown Shanghai. Then, the location coordinates of the communities were used to link with the Mapbox's Isochrone API (mapbox.com, accessed on 25 April 2022)to obtain the geographic data of the community-life circle of all communities. The Isochrone API allows us to request polygon features that show areas that are reachable within a specified amount of time from a location. The time for this study was set at 15-min walking time cost in accordance with the Shanghai Master Plan's current urban development criteria. According to the Shanghai Statistical Yearbook 2021, the average household size and the proportion of the aging population can be attained, which allow us to approximate the number of older adults in each community. Acquisition of Elderly Care Service Facilities and Service Circle Data Older adults' choice of surrounding care services is not limited to the boundaries of administrative regions, and the communities located at the edge of administrative regions may choose care services across administrative boundaries. So, the scope of elderly care facilities obtained in this paper is extended to the surrounding areas of research site. Shanghai elderly care service platform (shweilao.cn, accessed on 20 April 2022) has registered the data of different kinds of elderly care facilities in Shanghai, with various types, complete data, and information authority. Therefore, this study obtains various types of elderly care facilities from this platform. 
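As an illustration of this data-acquisition step, the sketch below shows how one community's 15-min walking life circle could be requested from the Mapbox Isochrone API in Python. It is a minimal sketch only: the access token and the sample coordinate are placeholders, the parameter names follow Mapbox's public documentation rather than the authors' actual crawler code, and error handling is reduced to a bare status check.
import requests

MAPBOX_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder, not a real token

def fetch_walking_isochrone(lon, lat, minutes=15):
    """Return the GeoJSON polygon reachable on foot within `minutes` of (lon, lat)."""
    url = f"https://api.mapbox.com/isochrone/v1/mapbox/walking/{lon},{lat}"
    params = {
        "contours_minutes": minutes,   # one contour at the 15-minute cutoff
        "polygons": "true",            # return polygons rather than line contours
        "access_token": MAPBOX_TOKEN,
    }
    resp = requests.get(url, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()  # GeoJSON FeatureCollection with one polygon feature

# Hypothetical example: life circle of a community centroid near downtown Shanghai.
# life_circle = fetch_walking_isochrone(121.47, 31.23)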
The effective data obtained this time are: nursing home, Type-A, providing centralized care services; elderly care home, Type-B, providing medium and short-term care services; elderly daycare institution, Type-C, providing daycare services; meal aid service point, Type-D, providing catering service for the elderly; community elderly care service organization, Type-E, providing door-to-door services; comprehensive services center, Type-F ; and nursing station, Type-G and nursing hospital, Type-H. Since the map coordinates used by this platform are Amap coordinates, coordinate conversion shall be carried out before data cleaning. In order to facilitate the research and take into account the service scope of elderly care facilities, the catchment area is distinguished into 2 categories. As pointed by Yang et al., the catchment size may also vary according to the type of provider and the cost of transportation. Category−I, provide medium and short-term services, whose catchment size is set as a 30-min walking distance (Type-B, -C, -D, -E, -F, and -G); Category−II, provide long-term and comprehensive centralized care services, whose catchment size is set as a 15-min driving distance (Type-A and Type-H). Two categories of catchment area data are obtained from the Mapbox platform. Calculating the Spatial Accessibility of Elderly Care Facilities The spatial accessibility of facilities is a crucial criterion for assessing the rationality of public service layout. The supply-demand ratio indicates the magnitude and convenience of inhabitants' access to public service amenities, and it may be used to visualize the spatial distribution of facilities in equilibrium. The 2-step floating catchment area (2SFCA) approach, first proposed by Radke and Mu but later modified by Luo and Wang, is a special case of the gravity model. It has most of the advantages of a gravity model and is also intuitive to interpret. This method calculates facility accessibility through two steps : the first step is to calculate the supply-demand ratio, and the second step is to calculate the spatial accessibility of facilities ( Figure 3), which can account for the influence of supply scale, demand scale, and spatial impedance factors between supply and demand sites on elderly facility accessibility. Step 2: calculating the accessibility of elderly care facilities). However, traditional 2SFCA has two problems: the unreasonable setting of the catchment area and the homogeneity of weight in the search domain, resulting in a certain deviation between the calculation results and the reality. For the effective catchment area, some scholars have explored the variable scope. For example, McGrail and Humphreys set a different search radius according to the population density of regions. Luo and Whippo dynamically determine catchment sizes by incrementally increasing the catchment radius until a base population and a physician-to-population ratio are satisfied. Mao and Nekorchuk develop a modified 2SFCA framework incorporating multi-mode transportation. For the distance decay of weight in the catchment area, some studies have introduced distance decay functions, including Gaussian function, kernel density function, and power function, addressing the problem of uniform access within the catchment by applying weights to different travel time zones to account for distance decay. This work calculates the accessibility of elderly care facilities and makes improvements based on the advantages of previous research results. 
Firstly, the elderly care facilities are classified into two categories: Category−I (Type-B, -C, -D, -E, -F, and -G), which is the community service resource; and Category−II (Type-A and Type-H), which is a regional service resource. The service scope of the two categories is determined based on the time cost. Category−I is set as a 30-min walking range, and Category−II is set as the 15-min driving range. Category−I considers the accessibility of elderly care facility resources for older adults who depend on community and home-based elderly care. Two aspects are considered for Category−II: first, the potential demand of the older adults living at home; second, the accessibility of their children to visit for the elderly living in institutions. Generally speaking, the two categories of facilities comprehensively consider the elderly care needs under the three modes of home-based care, community-based care, and institution-based care, as well as the accessibility requirements for children to visit conveniently. In the catchment area, the Gaussian equation is used for distance decay to determine the weight of supply and demand capacity of different facilities. where d kj is the distance between the supply point k and the demand point j, and d 0 is the threshold. With the following calculation processes, the 2SFCA method estimates the accessibility of elderly care facilities in two phases, depending on the place of supply and demand, respectively. Step 1: considering that the choice of care resources for older adults is not limited to the boundaries of administrative regions, the supply-demand ratio is calculated by using the elderly care facilities and the community data in total Shanghai, to meet the actual choice of older adults. Calculate the elderly care facility supply-demand ratio R j. S j is the number of facility, k is community, and P k is the population of older adults. d 0 is chosen as the threshold, the range of each catchment area is considered as the walking distance at a 30-min walking time cost for Category−I and 15-min driving time for Category−II, which is shown in in Section 2.2.2, and the total weighted population served is calculated using a Gaussian equation. Step 2: calculating the spatial accessibility of each community A i. Firstly, the life circle polygon analyzed in Section 2.2.1 was chosen as the catchment area. Using a Gaussian equation, assign weights to the supply-demand ratios R j for each elderly care facility in this spatial scope, and then total these weighted supply-demand ratios to get the accessibility of elderly care facilities for each community. Calculation on Three Administrative Levels of Spatial Accessibility With the background that governments at different levels have put forward the construction aim of elderly care facilities, a considerable number of elderly care facilities are proposed to meet the needs of older adults in their administrative region. Therefore, it is crucial to summarize and analyze at different administrative levels. With the use of the ArcGIS platform, the calculation results of accessibility are averaged in three levels: community, sub-district, and district. Then, the accessibility of the three levels is analyzed to find the specific areas with uneven resource distribution. In fact, the choice of elderly care services is not necessarily limited to administrative region matching. 
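For reference, the Gaussian decay weight and the two 2SFCA steps described above can be written out as follows. This is a reconstruction in standard notation, so the exact variant used by the authors may differ slightly: d_kj is the travel cost between supply point j and demand point k, d_0 the catchment threshold, S_j the capacity of facility j, P_k the older population of community k, R_j the supply-demand ratio, and A_i the accessibility of community i.
% Gaussian distance-decay weight, zero outside the catchment
G(d_{kj}, d_0) =
\begin{cases}
\dfrac{e^{-\frac{1}{2}(d_{kj}/d_0)^2} - e^{-\frac{1}{2}}}{1 - e^{-\frac{1}{2}}}, & d_{kj} \le d_0 \\
0, & d_{kj} > d_0
\end{cases}

% Step 1: supply-demand ratio of facility j over its weighted service population
R_j = \frac{S_j}{\sum_{k \in \{k \,:\, d_{kj} \le d_0\}} G(d_{kj}, d_0)\, P_k}

% Step 2: accessibility of community i as the weighted sum of reachable ratios
A_i = \sum_{j \in \{j \,:\, d_{ij} \le d_0\}} G(d_{ij}, d_0)\, R_j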
The spatial interpolation method can convert the measured data of discrete points into continuous data surfaces for comparison with the distribution patterns of other spatial phenomena, without considering the administrative boundary. In this paper, the Kriging interpolation method is used to draw isosurface from the community scale. where z 0 is the estimated value at point (x 0, y 0 ), z 0 = z(x 0, y 0 ), and i is the weight coefficient. Analysis on Spatial Distribution Equilibrium In the aspect of spatial distribution research, nearest neighbor hierarchical clustering, Ripley's K function, Gini coefficient, Shannon entropy, and spatial autocorrelation are often used to explain the aggregation status of facilities in geospatial space. Spatial autocorrelation could observe the correlation between variables close to each other on a spatial scale, more conveniently identifying the areas with a significant imbalance between supply and demand. In order to analyze the spatial distribution characteristics of elderly care facilities, this paper uses spatial autocorrelation tools to study the resource distribution. Spatial autocorrelation can be used to detect three spatial data distribution modes-clustering, dispersed, and random-which can be divided into global autocorrelation (Global Moran's I, Getis-Ord General G) and local autocorrelation (local indicator of spatial autocorrelation (LISA), Getis-Ord Gi*tool). Global Moran's I generally reflects the spatial autocorrelation of the study area, which is used to judge whether there is spatial autocorrelation in the whole, and the Getis-Ord General G method was used to preliminarily judge the agglomeration type. The Global Moran's I method of global spatial autocorrelation is given as The Getis-Ord General G method of global spatial autocorrelation is given as where x i and x j are attribute values for features i and j; w i,j is the spatial weight between feature i and feature j; n is the number of features in the dataset; and j = i indicates that feature i and feature j cannot be same. The global autocorrelation statistic indicates the presence of clusters, whereas local autocorrelation indicates the location of clusters and the type of spatial association. To further investigate the distribution patterns of geographic accessibility scores, local autocorrelation analysis was used to identify local clusters of accessibility. Due to the characteristics of spatial heterogeneity, there will be different aggregation states in different geographical locations. LISA is suited to study the heterogeneity characteristics of the accessibility agglomeration of elderly care facilities. The Getis-Ord Gi*tool was employed to conduct hot and spot analysis, which can analyze the distribution area of cold and hot spots of the accessibility. The LISA method of local spatial autocorrelation is given as The Getis-Ord Gi* method of local spatial autocorrelation is given as where x i and x j are attribute values for features i and j; w i,j is the spatial weight between feature i and feature j; and n is the number of features in the dataset. When the Gi* statistic is higher than the mathematical expectation and passes the hypothesis test, it is a hot spot; otherwise, it is a cold spot. The Results of Elderly Care Facilities Density The kernel density diagram of eight types of elderly care facilities is shown in Figure 4; it can find the spatial distribution imbalance in quantity. 
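The interpolation and spatial autocorrelation statistics named above are sketched below in their common textbook/ArcGIS forms; the symbols match the text (the Kriging estimate z_0 and weights λ_i, attribute values x_i and x_j, spatial weights w_{i,j}, and n features), but these are standard reconstructions rather than the authors' own typeset equations.
% Ordinary Kriging estimate at (x_0, y_0) as a weighted sum of the n sampled values
z_0 = \sum_{i=1}^{n} \lambda_i \, z(x_i, y_i), \qquad \sum_{i=1}^{n} \lambda_i = 1

% Global Moran's I
I = \frac{n}{\sum_{i=1}^{n}\sum_{j=1}^{n} w_{i,j}} \cdot
    \frac{\sum_{i=1}^{n}\sum_{j=1}^{n} w_{i,j}\,(x_i - \bar{x})(x_j - \bar{x})}{\sum_{i=1}^{n}(x_i - \bar{x})^2}

% Getis-Ord General G (j \ne i)
G = \frac{\sum_{i=1}^{n}\sum_{j \ne i} w_{i,j}\, x_i x_j}{\sum_{i=1}^{n}\sum_{j \ne i} x_i x_j}

% Local Moran's I (LISA) for feature i
I_i = \frac{x_i - \bar{x}}{S_i^{2}} \sum_{j \ne i} w_{i,j}\,(x_j - \bar{x}),
\qquad S_i^{2} = \frac{\sum_{j \ne i}(x_j - \bar{x})^{2}}{n - 1}

% Getis-Ord Gi* hot-spot statistic for feature i
G_i^{*} = \frac{\sum_{j=1}^{n} w_{i,j}\, x_j - \bar{x}\sum_{j=1}^{n} w_{i,j}}
               {S \sqrt{\dfrac{n\sum_{j=1}^{n} w_{i,j}^{2} - \bigl(\sum_{j=1}^{n} w_{i,j}\bigr)^{2}}{n - 1}}},
\qquad S = \sqrt{\frac{\sum_{j=1}^{n} x_j^{2}}{n} - \bar{x}^{2}}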
The distribution of Type-A shows a multi-point agglomeration state, while the other types show a single-point agglomeration distribution. Type-A is the nursing home providing long-term care. It has a high degree of aggregation in the north of downtown Shanghai, with Yangpu, Hongkou, and Putuo presenting the aggregation points, while several districts in the south have a low degree of aggregation. Type-B, -C, -D, and -E are community-based services, which are mainly concentrated in Huangpu and less distributed in other districts. Type-F and Type-G are mainly concentrated in the border area of Putuo, Jing'an, and Huangpu. Type-H, the nursing hospital, is mainly distributed in Yangpu District. From the resource density distribution of the above eight types of elderly care facilities, the spatial distribution is uneven, and most types show a single-center agglomeration state. As a result, there is often a waste of resources in high-density areas and a shortage of resources in low-density areas. Although kernel density can show the quantity density of various facilities in space, it cannot reflect the real relationship between supply and demand or the availability of elderly care resources for older adults.

The Results of Statistical Analysis on Administrative Regions

The distribution trend of the accessibility of elderly care facilities in downtown Shanghai is shown in Figure 5. Considering the matching of administrative regions, the analysis is carried out at the community, sub-district, and district levels, respectively. For Category I, the high accessibility values at the community level are mainly distributed in the central area of Huangpu District, extending outward in strips. At the sub-district level, it can be seen that the accessibility of seven sub-districts in the north of Huangpu District is the best, and Huajing Town in Xuhui District also has a high value. At the district level, Huangpu District is the best and Putuo District is the lowest. It is obvious that the district-level data erase the internal diversity. Generally speaking, the high accessibility of Category I social resources is too concentrated in the communities in the north of Huangpu District. For Category II, at the community level, the distribution of accessibility values is relatively balanced. At the sub-district level, most sub-districts in Yangpu District have higher accessibility. At the district level, Yangpu District has the best accessibility and Xuhui the worst. The overall spatial distribution of Category II is relatively even. From the Kriging interpolation results (Figure 6), it can be seen that the perspective of administrative-region matching cannot show the actual accessibility distribution and obscures the underlying imbalance. For example, for Category I, the value of Huangpu District is high at both the sub-district and district levels, but, most importantly, the highest value appears in the Nanjing East Road sub-district, from which the high values radiate outward. When administrative-region matching is not considered, the other six sub-districts (which share the same value as Nanjing East Road at the sub-district level) show no obvious advantage over other regions. Moreover, even more real features are obscured at the district level. For Category II, taking Yangpu District as an example, the high values at the sub-district and district levels have covered up the low-value areas in eastern Yangpu District.
If the distance to the elderly care facility is long, people may consider moving to a location nearer to a facility, which will aggravate the run on elderly care resources and cause a greater waste of resources. Without considering the matching of administrative regions, this result is closer to the real situation, and the government should consider the actual distribution when assessing which regions meet the standard.

The Results of Global Autocorrelation Analysis

For Category I, the Global Moran's I analysis gives a p-value of less than 0.00001 and a Z-score of 363.526282, which is highly significant, indicating that the accessibility of Category I elderly care facilities is clustered (Figure 7). High/low-value clustering (Getis-Ord General G) shows a p-value of less than 0.00001 and a Z-score of 245.922729, indicating that high-value clustering occurs in facility accessibility. For Category II, the Global Moran's I analysis (p-value < 0.00001, Z-score = 386.937068) and Getis-Ord General G (p-value < 0.00001, Z-score = 285.052686) results reject the null hypothesis and show high-value agglomeration (Figure 8). From the global autocorrelation analysis, it can be seen that both categories of elderly care facilities show spatial agglomeration, and this agglomeration presents high-value clustering.

The Results of Local Autocorrelation Analysis

As shown in Figure 9, the LISA map of Category I shows high-high aggregation in the central area of downtown (mainly Huangpu District) and low-low distribution in the surrounding urban areas. As seen from the cold- and hot-spot map (Getis-Ord Gi*), the hot spots are mainly concentrated in some sub-districts of Huangpu District and Jing'an District. It can be concluded that the service distribution of Category I is uneven in the central urban area of Shanghai. Category I is more closely related to home-based and community-based elderly care facilities. Older adults may also be more limited in their choice of health care providers than younger adults, due to the increased likelihood of suffering from intrinsic and extrinsic mobility constraints such as physical disabilities. Older adults might require more social and health services and have more available time to carry out their out-of-home activities. Limited access to primary care may lead to higher levels of chronic disease, the consumption of more medication, and shorter life expectancy. At present, the elderly living at home account for the main part of the care pattern.
Communities with abundant elderly care service resources can effectively help the elderly spend their old age in peace and reduce the transportation and economic costs of seeking medical care elsewhere. For the government, improving resource allocation and accessibility can effectively help older adults enjoy their later years. Moreover, a more convenient community-life-circle environment can effectively ease the current shortage of institutional nursing beds. The results are shown in Figure 10, and the two methods yield similar spatial patterns. The LISA map of Category II shows a high-high aggregation phenomenon with a multi-point distribution. It can be seen from the cold- and hot-spot map (Getis-Ord Gi*) that the hot-spot areas occupy a wide range in Hongkou District and Yangpu District, while Xuhui District and Jing'an District are mainly occupied by cold-spot areas. It is worth noting that hot spots are more likely to appear in the marginal areas of each district; the reason may be that the distribution of elderly care facilities is affected by land prices. Compared with Category I, we found that some high-value areas of Category I become cold spots in Category II, such as several sub-districts bordering Huangpu District and Jing'an District. Generally speaking, Category II shows a multi-point and relatively balanced distribution. Since the service range of these facilities is set to a 15-min driving cost (within 15 km) in the calculation, the hot-spot area would be larger if the service range were expanded.

Conclusions

With the growing challenge of aging populations around the world, the study of elderly care facilities is an essential initiative to accommodate the particular needs of these disadvantaged communities and promote social equity. Thus, 14,578 communities were selected in this paper, and the accessibility of elderly care facilities in community-life circles was researched, with a focus on the imbalanced distribution in downtown Shanghai. Eight types of elderly care facilities were obtained from the Shanghai elderly care service platform and divided into two categories according to their service scale. This paper comprehensively considers the elderly care needs under the three modes of home care, community care, and institutional care, as well as the accessibility requirements for children to visit. Category I considers the accessibility of elderly care facility resources for older adults who depend on community and home-based elderly care. Two aspects are considered for Category II: first, the potential demand for Category II by older adults living at home; second, for the elderly in institutions, the accessibility for their children to visit conveniently. The kernel density analysis of the aggregation state of all kinds of facilities indicates that the spatial distribution density of the various facilities is extremely uneven, with the highest density in Huangpu District. The improved Gaussian two-step floating catchment area method was used to calculate the accessibility of the two categories of facilities. Considering the matching of administrative regions, the visual description and comparative analysis are carried out at three administrative levels: community, sub-district, and district. It is found that for Category I, the accessibility of residential areas in Huangpu District is the best, and for Category II, Yangpu District is the best.
Then, without considering the matching of administrative regions, we examined the agglomeration state of community-level accessibility. From the Kriging interpolation results, we found that some real features were obscured by administrative-region matching. Through the global autocorrelation analysis, we found that the accessibility of both categories shows a high-value agglomeration state in space. Local autocorrelation analysis was then used to analyze the spatial heterogeneity. It was found that for Category I, there was an extensive range of hot spots in Huangpu District. For Category II, the hot-spot and cold-spot areas showed a staggered distribution, and the hot-spot distributions of the two categories showed a negative correlation. In general, it was concluded that the two categories are not evenly distributed in the urban area, which will lead to inefficient resource allocation of elderly care facilities and have a negative impact on social elderly care for older adults. Admittedly, there are still some limitations to this paper. In terms of data acquisition accuracy, the community-level aging population data could only be obtained through a prediction method, which may cause deviations from the actual situation. For the setting of the service scope of elderly care facilities, although the values refer to previous studies, more appropriate settings might be obtained through a systematic sensitivity analysis. However, given the wide research scope of this paper, the comparative study has proven adequate for revealing the problems of resource distribution. In brief, the research conclusions of this paper can identify areas with insufficient resource distribution for the future layout optimization of elderly care facilities in Shanghai. The proposed framework can provide insights for planning schemes in other similar cities, and the workflow of this project could also offer a reference for research on other public facilities.

Conflicts of Interest: The authors declare no conflict of interest. |
Structure of low-level jet streams ahead of mid-latitude cold fronts Several case studies are presented showing the structure of ana-cold fronts over the British Isles. One of the cases is analysed in detail using data acquired on the Isles of Scilly to avoid any confusing effects due to topography. All of the fronts described are characterized by a narrow band of shallow but vigorous convection at the surface cold front; this convection is essentially two-dimensional and is termed line convection. In each case the line convection is bounded on its forward side by a low-level jet reaching 25 to 30 m s−1; behind the line convection the winds decrease abruptly. The low-level jet is embedded within a convective boundary layer, reaching its maximum velocity at 900 to 850 mb, and it consists of a tongue of anomalously warm, moist air, which has had a trajectory over an even warmer sea. It is shown that the line convection can be regarded as part of a mesoscale right-hand corkscrew circulation within the low-level jet. The line convection constitutes one flank of the jet; it is characterized by very strong cyclonic shear and is fed by frictional convergence of air from beneath the jet core within the lowest 100 mb. Some of the air which ascends within the line convection subsequently flows forward and gently subsides within the upper part of the low-level jet as part of the corkscrew circulation; however, most of the air which ascends within the line convection is extruded from the boundary layer and ascends in a rearward direction as slantwise convection above the inclined cold frontal zone. Precipitation grown in the slantwise convection falls into the cold air behind the surface cold front and the heat sink resulting from the partial evaporation of this precipitation probably accentuates the sharpness of the boundary of the low-level jet at the surface cold front. |
use super::engine::Transaction;
use super::parser::format_ident;
use super::types::{DataType, Value};
use crate::error::{Error, Result};
use serde_derive::{Deserialize, Serialize};
use std::fmt::{self, Display};
/// The catalog stores schema information
pub trait Catalog {
/// Creates a new table
fn create_table(&mut self, table: Table) -> Result<()>;
/// Deletes an existing table, or errors if it does not exist
fn delete_table(&mut self, table: &str) -> Result<()>;
/// Reads a table, if it exists
fn read_table(&self, table: &str) -> Result<Option<Table>>;
/// Iterates over all tables
fn scan_tables(&self) -> Result<Tables>;
/// Reads a table, and errors if it does not exist
fn must_read_table(&self, table: &str) -> Result<Table> {
self.read_table(table)?
.ok_or_else(|| Error::Value(format!("Table {} does not exist", table)))
}
/// Returns all references to a table, as table,column pairs.
fn table_references(&self, table: &str, with_self: bool) -> Result<Vec<(String, Vec<String>)>> {
Ok(self
.scan_tables()?
.filter(|t| with_self || t.name != table)
.map(|t| {
(
t.name,
t.columns
.iter()
.filter(|c| c.references.as_deref() == Some(table))
.map(|c| c.name.clone())
.collect::<Vec<_>>(),
)
})
.filter(|(_, cs)| !cs.is_empty())
.collect())
}
}
/// A table scan iterator
pub type Tables = Box<dyn DoubleEndedIterator<Item = Table> + Send>;
/// A table schema
#[derive(Clone, Debug, PartialEq, Deserialize, Serialize)]
pub struct Table {
pub name: String,
pub columns: Vec<Column>,
}
impl Table {
/// Creates a new table schema
pub fn new(name: String, columns: Vec<Column>) -> Result<Self> {
let table = Self { name, columns };
Ok(table)
}
/// Fetches a column by name
pub fn get_column(&self, name: &str) -> Result<&Column> {
self.columns.iter().find(|c| c.name == name).ok_or_else(|| {
Error::Value(format!("Column {} not found in table {}", name, self.name))
})
}
/// Fetches a column index by name
pub fn get_column_index(&self, name: &str) -> Result<usize> {
self.columns.iter().position(|c| c.name == name).ok_or_else(|| {
Error::Value(format!("Column {} not found in table {}", name, self.name))
})
}
/// Returns the primary key column of the table
pub fn get_primary_key(&self) -> Result<&Column> {
self.columns
.iter()
.find(|c| c.primary_key)
.ok_or_else(|| Error::Value(format!("Primary key not found in table {}", self.name)))
}
/// Returns the primary key value of a row
pub fn get_row_key(&self, row: &[Value]) -> Result<Value> {
row.get(
self.columns
.iter()
.position(|c| c.primary_key)
.ok_or_else(|| Error::Value("Primary key not found".into()))?,
)
.cloned()
.ok_or_else(|| Error::Value("Primary key value not found for row".into()))
}
/// Validates the table schema
pub fn validate(&self, txn: &mut dyn Transaction) -> Result<()> {
if self.columns.is_empty() {
return Err(Error::Value(format!("Table {} has no columns", self.name)));
}
match self.columns.iter().filter(|c| c.primary_key).count() {
1 => {}
0 => return Err(Error::Value(format!("No primary key in table {}", self.name))),
_ => return Err(Error::Value(format!("Multiple primary keys in table {}", self.name))),
};
for column in &self.columns {
column.validate(self, txn)?;
}
Ok(())
}
/// Validates a row
pub fn validate_row(&self, row: &[Value], txn: &mut dyn Transaction) -> Result<()> {
if row.len() != self.columns.len() {
return Err(Error::Value(format!("Invalid row size for table {}", self.name)));
}
let pk = self.get_row_key(row)?;
for (column, value) in self.columns.iter().zip(row.iter()) {
column.validate_value(self, &pk, value, txn)?;
}
Ok(())
}
}
impl Display for Table {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(
f,
"CREATE TABLE {} (\n{}\n)",
format_ident(&self.name),
self.columns.iter().map(|c| format!(" {}", c)).collect::<Vec<String>>().join(",\n")
)
}
}
/// A table column schema
#[derive(Clone, Debug, PartialEq, Deserialize, Serialize)]
pub struct Column {
/// Column name
pub name: String,
/// Column datatype
pub datatype: DataType,
/// Whether the column is a primary key
pub primary_key: bool,
/// Whether the column allows null values
pub nullable: bool,
/// The default value of the column
pub default: Option<Value>,
/// Whether the column should only take unique values
pub unique: bool,
/// The table which is referenced by this foreign key
pub references: Option<String>,
/// Whether the column should be indexed
pub index: bool,
}
impl Column {
/// Validates the column schema
pub fn validate(&self, table: &Table, txn: &mut dyn Transaction) -> Result<()> {
// Validate primary key
if self.primary_key && self.nullable {
return Err(Error::Value(format!("Primary key {} cannot be nullable", self.name)));
}
if self.primary_key && !self.unique {
return Err(Error::Value(format!("Primary key {} must be unique", self.name)));
}
// Validate default value
if let Some(default) = &self.default {
if let Some(datatype) = default.datatype() {
if datatype != self.datatype {
return Err(Error::Value(format!(
"Default value for column {} has datatype {}, must be {}",
self.name, datatype, self.datatype
)));
}
} else if !self.nullable {
return Err(Error::Value(format!(
"Can't use NULL as default value for non-nullable column {}",
self.name
)));
}
} else if self.nullable {
return Err(Error::Value(format!(
"Nullable column {} must have a default value",
self.name
)));
}
// Validate references
if let Some(reference) = &self.references {
let target = if reference == &table.name {
table.clone()
} else if let Some(table) = txn.read_table(reference)? {
table
} else {
return Err(Error::Value(format!(
"Table {} referenced by column {} does not exist",
reference, self.name
)));
};
if self.datatype != target.get_primary_key()?.datatype {
return Err(Error::Value(format!(
"Can't reference {} primary key of table {} from {} column {}",
target.get_primary_key()?.datatype,
target.name,
self.datatype,
self.name
)));
}
}
Ok(())
}
/// Validates a column value
pub fn validate_value(
&self,
table: &Table,
pk: &Value,
value: &Value,
txn: &mut dyn Transaction,
) -> Result<()> {
// Validate datatype
match value.datatype() {
None if self.nullable => Ok(()),
None => Err(Error::Value(format!("NULL value not allowed for column {}", self.name))),
Some(ref datatype) if datatype != &self.datatype => Err(Error::Value(format!(
"Invalid datatype {} for {} column {}",
datatype, self.datatype, self.name
))),
_ => Ok(()),
}?;
// Validate value
match value {
Value::String(s) if s.len() > 1024 => {
Err(Error::Value("Strings cannot be more than 1024 bytes".into()))
}
_ => Ok(()),
}?;
// Validate outgoing references
if let Some(target) = &self.references {
match value {
Value::Null => Ok(()),
Value::Float(f) if f.is_nan() => Ok(()),
v if target == &table.name && v == pk => Ok(()),
v if txn.read(target, v)?.is_none() => Err(Error::Value(format!(
"Referenced primary key {} in table {} does not exist",
v, target,
))),
_ => Ok(()),
}?;
}
// Validate uniqueness constraints
if self.unique && !self.primary_key && value != &Value::Null {
let index = table.get_column_index(&self.name)?;
let mut scan = txn.scan(&table.name, None)?;
while let Some(row) = scan.next().transpose()? {
if row.get(index).unwrap_or(&Value::Null) == value
&& &table.get_row_key(&row)? != pk
{
return Err(Error::Value(format!(
"Unique value {} already exists for column {}",
value, self.name
)));
}
}
}
Ok(())
}
}
impl Display for Column {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
let mut sql = format_ident(&self.name);
sql += &format!(" {}", self.datatype);
if self.primary_key {
sql += " PRIMARY KEY";
}
if !self.nullable && !self.primary_key {
sql += " NOT NULL";
}
if let Some(default) = &self.default {
sql += &format!(" DEFAULT {}", default);
}
if self.unique && !self.primary_key {
sql += " UNIQUE";
}
if let Some(reference) = &self.references {
sql += &format!(" REFERENCES {}", reference);
}
if self.index {
sql += " INDEX";
}
write!(f, "{}", sql)
}
}
|
/* eslint-disable react/no-array-index-key */
import { bool, arrayOf, node, func, number } from "prop-types";
import { Children, ReactNode, FC } from "react";
import { useTrail, a } from "@react-spring/web";
const Trail: FC<{
open: boolean;
children: ReactNode;
setStarted?(done: boolean): void;
}> = ({ open, children, setStarted }) => {
const items = Children.toArray(children);
const trail = useTrail(items.length, {
config: { mass: 8, tension: 2000, friction: 250 },
opacity: open ? 1 : 0,
x: open ? 0 : 20,
height: open ? 110 : 0,
from: { opacity: 0, x: 20, height: 0 },
    onStart: () => setStarted?.(true),
});
return (
<>
{trail.map(({ height, ...style }, i) => (
<a.div key={i} style={style}>
<a.div>{items[i]}</a.div>
</a.div>
))}
</>
);
};
Trail.propTypes = {
open: bool.isRequired,
children: arrayOf(node).isRequired,
setStarted: func,
};
Trail.defaultProps = {
setStarted: () => null,
};
export default Trail;
|
Acquired thermotolerance and expression of the HSP100/ClpB genes of lima bean. Acquired thermotolerance (AT) is the ability of cells to survive a normally lethal temperature treatment as a consequence of pretreatment at an elevated but sublethal temperature. In yeast and cyanobacteria, the expression of the HSP100/ClpB protein is required for the AT response. To determine whether the HSP100/ClpB protein is associated with this response in lima bean (Phaseolus lunatus), we have cloned an HSP100/ClpB homolog and assessed expression of the two gene copies under heat stress conditions, which induce AT. Transcription of the cytoplasmically localized HSP100/ClpB protein genes is stringently controlled by heat stress under both laboratory and field heat stress conditions. From a heat-induced cDNA library, we identified a clone of a putative chloroplast-targeted (cp) HSP100/ClpB protein gene sequence. The cp HSP100/ClpB protein genes are constitutively expressed, but transcript levels increase post-heat stress in laboratory heat stress experiments. Under field conditions, the genes for the cp HSP100/ClpB are constitutively expressed. Although we were unable to correlate differences in the timing of the AT response with the expression or genetic structure of the HSP100/ClpB genes in heat-tolerant or -sensitive varieties of lima bean, we clearly demonstrate the association of expression of HSP100/ClpB proteins with heat response in this species. |
CoMn Catalysts Derived from Hydrotalcite-Like Precursors for Direct Conversion of Syngas to Fuel Range Hydrocarbons Two different groups of CoMn catalysts derived from hydrotalcite-like precursors were prepared through the co-precipitation method, and their performance in the direct production of gasoline and jet fuel range hydrocarbons through Fischer-Tropsch (FT) synthesis was evaluated in a batch autoclave reactor at 240 °C and 7 MPa and H2/CO of 2. The physicochemical properties of the prepared catalysts were investigated and characterized using different characterization techniques. Catalyst performance was significantly affected by the catalyst preparation method. The crystalline phase of the catalyst prepared using KOH contained Co3O4 and some Co2MnO4.5 spinels, with a lower reducibility and catalytic activity than cobalt oxide. The available cobalt active sites are responsible for the chain growth, and the accessible acid sites are responsible for the cracking and isomerization. The catalysts prepared using a KOH + K2CO3 mixture as a precipitant agent exhibited a high selectivity of 51-61% for gasoline (C5-C10) and 30-50% for jet fuel (C8-C16) range hydrocarbons compared with catalysts precipitated by KOH. The CoMn-HTC-III catalyst with the highest number of available acid sites showed the highest selectivity to C5-C10 hydrocarbons, which demonstrates that a high Brønsted acidity leads to a high degree of cracking of FT products. The CO conversion did not significantly change, and it was around 35-39% for all catalysts. Owing to the poor activity in the water-gas shift reaction, CO2 formation was less than 2% in all the catalysts. |
1. Field of Invention
The present invention relates to data communication systems and more particularly to collision detection in such systems.
2. Related Art
Certain data communication systems have nodes or devices which exchange information with each other via an asynchronous data bus or wire connecting the nodes. A node can be an electronic circuit that has the ability to generate and encode information and place that information on the data bus, and to also receive and decode information placed on the data bus by another node. Nodes may be classified as either master or slave nodes, and master nodes can be either active or inactive. Active master nodes can transmit a message absent a request from another node for the message, while inactive master nodes and slave nodes have no capability for communicating with each other and can only transmit information on the data bus upon receiving a request from an active master node.
In typical packet switching systems, numerous nodes are connected to the same communication network and can access the network at the same time. As a result, if two active master nodes are transmitting information onto the data bus at the same time, packet collisions can occur. When a collision of packets is detected, an instruction is sent to retransmit the original data so that another attempt may be made to receive the packet without a collision. If a collision of packets is not detected, the information transmitted is lost since the signal received is unintelligible, as it is the sum of overlapping packets.
Numerous techniques are known in the art for preventing or detecting data or packet collisions. In some systems, data transmission from a device or node A to a device or node B is effected through two separate wires, one wire for transmission of data from A to B and one wire for transmission from B to A. By using two separate unidirectional wires for data transmission, data packet collision is prevented. A global clock can be used to start and stop data transmissions. Utilizing such a technique, however, can increase the complexity and size of the system as the number of nodes in the system increases. As a result, the number of transmission wires needed so that each node can communicate with every other node can quickly increase to an impractical number.
Other systems may use a single bidirectional wire for data transmission from both A to B and from B to A. In order to prevent collisions, a separate control wire is used to control which device is to write to the bus. The control wire is also used to synchronize the master and the slave. Again, as the number of nodes increases, the number of wires and complexity to the communication system can become impractical.
Still other communication systems may use collision detection methods to determine whether a collision has occurred. For example, a collision is detected when a detection threshold has been exceeded on the data bus, which typically requires setting a precise detection threshold. When a collision is detected, all nodes cease transmission onto the data bus. Ordinarily these techniques are implemented at each node that is transmitting a data packet. Known collision detection methods for bus topology networks compare the data being transmitted with data being simultaneously received at the transmitting node and report collisions when a mismatch is detected. These types of systems may also require multiple node connections, which can quickly become very large as the number of nodes increases.
Accordingly, a communication system with collision detection is desired which overcomes the deficiencies discussed above with conventional collision detection systems.
The present invention provides a structure and method for detecting data packet collisions between two devices by sensing current changes on a single wire interface between two or more devices and then synchronizing the devices after a collision detection. Without a need for a separate control wire, large multi-node systems can be more easily configured. Furthermore, by sensing current changes, precise detection thresholds do not need to be set.
According to the present invention, two nodes A and B transmit data on a single bidirectional line. Assume node A is writing to a data bus, and node B is not transmitting, i.e., listening. If node B wishes to obtain control of the data bus, node B transmits a special data packet to ensure that a collision occurs, i.e., that the data packet being written by node A has at least one bit different in the data stream than the data packet transmitted by node B. When this difference is encountered, an increased amount of current will flow in the single wire because one node will be trying to pull the wire high while another node will be trying to pull the wire low. This current increase is sensed in node A, causing node A to immediately halt data transmission and revert to a listening mode, i.e., reading the output of node B. When node A reads a specific output from node B indicating that node B has ended transmission of the special data packet, nodes A and B are synchronized. Thus, the special data packet contains a bit pattern to ensure a collision detection followed by a bit pattern to indicate the end of the special packet transmission, thereby synchronizing the two nodes. As a result, collision detection and node synchronization can be achieved with a single wire interface, without the need for separate control wires or precise detection thresholds.
According to one embodiment of the present invention, the data packet transmitted by node A consists of at least three consecutive bits of the same type. When the other node B wishes to assert control of the data bus, node B transmits a special data packet having alternating 1's and 0's and ending with two consecutive 1's. Because the bit stream transmitted by node A has at least three consecutive same-type bits, there will be at least one position in the bit stream where the bits transmitted by node A differ from the bits transmitted by node B, thereby ensuring a collision. The presence of a collision on the wire results in a large current on the wire, while no collision results in little or no current on the wire (after a short discharging period). If node A still senses excess current on the wire after a specified time period, a collision is detected. The excess current, which is typically large, can be defined as an amount of current in excess of the amount of current on the wire when no collision is present. Therefore, as long as a large current is sensed, a collision can be detected. Consequently, no precise detection thresholds have to be set to detect collisions on the wire. If a collision is detected, node A stops transmitting data and reads the remaining bits transmitted onto the wire by node B. Once node A encounters two consecutive 1's, node A knows that node B has stopped transmission of the special packet and that the next data packet will consist of information intended for node A, thereby synchronizing the two nodes.
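As a rough illustration of the arbitration sequence just described, the sketch below simulates the two bit streams in software; the packet contents, lengths, and function names are illustrative assumptions rather than values taken from this specification, and actual hardware detects the mismatch by sensing excess current on the wire rather than by comparing arrays.

```python
# Minimal simulation of the single-wire arbitration scheme described above.
# All packet contents and lengths are illustrative assumptions.

SPECIAL = [1, 0, 1, 0, 1, 0, 1, 1]   # alternating bits ending in two consecutive 1's

def first_collision(bits_a, bits_b):
    """Index of the first bit where the two transmissions differ, or None."""
    for i, (a, b) in enumerate(zip(bits_a, bits_b)):
        if a != b:
            return i          # both nodes drive the wire differently -> current spike
    return None

def arbitrate(stream_a):
    """Node B forces a collision; node A backs off and resynchronizes."""
    hit = first_collision(stream_a, SPECIAL)
    assert hit is not None, "stream_a is expected to contain 3+ identical bits in a row"
    # Node A stops transmitting and listens to the rest of the special packet.
    rest = SPECIAL[hit:]
    for i in range(1, len(rest)):
        if rest[i - 1] == 1 and rest[i] == 1:
            return "synchronized after special packet"
    return "end marker not seen"

print(arbitrate([1, 1, 1, 0, 0, 1, 0, 1]))   # data stream with 3 consecutive 1's
```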
The present invention will be more fully understood in light of the following detailed description taken together with the accompanying drawings. |
An approach to reengineering applied to control of container logistics cost using the PERT network With the development of society and the advancement of science and technology, Chinese automobile manufacturers are struggling with increasingly intense competition in the market. To earn a place in the industry and improve brand competitiveness, many companies are beginning to focus on the logistics of cost control, such that the whole supply chain cost is at its lowest when the value is at its highest, as this is the key to the future survival and development of China's automobile industry. As an example, this paper uses the container transport of Changan Ford auto parts logistics, and based on the cost analysis derived from the program evaluation and review technique (PERT), the existing transport process is reengineered. Through the comparison of the costs before and after the transformation and the sensitivity analysis based on the reengineered process, it is determined that the reengineered process is superior to the initial process and exhibits strong practical guiding significance for logistics cost control and the development of China's automobile companies. |
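For readers unfamiliar with the technique, the following is a minimal sketch of the PERT forward pass that underlies such a cost and schedule analysis; the activity network, durations, and names are purely hypothetical and are not taken from the Changan Ford case.

```python
# Generic PERT forward pass on a toy activity network (hypothetical data).

# activity: ((optimistic, most likely, pessimistic) durations, predecessors)
activities = {
    "A": ((2, 4, 6), []),
    "B": ((3, 5, 9), ["A"]),
    "C": ((1, 2, 3), ["A"]),
    "D": ((2, 3, 4), ["B", "C"]),
}

def expected(o, m, p):
    """PERT expected duration: (o + 4m + p) / 6."""
    return (o + 4 * m + p) / 6

earliest_finish = {}
def finish(act):
    """Earliest finish time of an activity (recursive forward pass)."""
    if act not in earliest_finish:
        (o, m, p), preds = activities[act]
        start = max((finish(x) for x in preds), default=0.0)
        earliest_finish[act] = start + expected(o, m, p)
    return earliest_finish[act]

project_duration = max(finish(a) for a in activities)
print(f"expected project duration: {project_duration:.2f}")
```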
package com.mid.myplan;
import android.app.Activity;
import android.appwidget.AppWidgetManager;
import android.content.ComponentName;
import android.content.Intent;
import android.os.Bundle;
import android.view.Menu;
import android.view.MenuItem;
import android.view.View;
import android.view.Window;
import android.widget.Button;
import android.widget.ExpandableListView;
import android.widget.ImageView;
import android.widget.LinearLayout;
import android.widget.ListView;
import android.widget.Toast;
import java.util.ArrayList;
import java.util.List;
public class MainActivity extends Activity {
public ExpandableListView expandableListView;
public ExpandAdapter expandAdapter1;
public ExpandAdapter expandAdapter2;
public LinearLayout searchLayout;
public Button search;
public ImageView add;
public static ArrayList<Item> items1;
public static ArrayList<Item> items2;
public ListView widgetList;
public static int id;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
requestWindowFeature(Window.FEATURE_CUSTOM_TITLE);
setContentView(R.layout.activity_main);
getWindow().setFeatureInt(Window.FEATURE_CUSTOM_TITLE, R.layout.titlebar);
expandableListView = (ExpandableListView)findViewById(R.id.expandableListView);
searchLayout = (LinearLayout)findViewById(R.id.buttonLayout);
search = (Button)findViewById(R.id.searchButton);
add = (ImageView)findViewById(R.id.addPlan);
widgetList = (ListView)findViewById(R.id.widget_list);
search.setOnClickListener(onClickListener);
add.setOnClickListener(onClickListener);
id = 0;
expandAdapter1 = new ExpandAdapter(this,1);
expandAdapter2 = new ExpandAdapter(this,2);
items1 = expandAdapter1.getDatas();
items2 = expandAdapter2.getDatas();
expandableListView.setAdapter(expandAdapter1);
new OnTouch(this, expandAdapter1, expandAdapter2);
AppWidgetManager appWidgetManager = AppWidgetManager.getInstance(this);
appWidgetManager.notifyAppWidgetViewDataChanged(appWidgetManager.getAppWidgetIds(new ComponentName(this,WidgetProvider.class)),R.id.widget_list);
}
public View.OnClickListener onClickListener = new View.OnClickListener() {
@Override
public void onClick(View v) {
if (v.getId() == R.id.searchButton) {
Intent intent = new Intent();
intent.setClass(MainActivity.this, SearchableActivity.class);
startActivityForResult(intent,0);
searchLayout.setVisibility(View.GONE);
} else {
Intent intent = new Intent();
intent.setClass(MainActivity.this, EditActivity.class);
intent.putExtra("option",1);
startActivityForResult(intent,1);
}
}
};
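/*
 * Handles results returned by SearchableActivity and EditActivity.
 * The "option" extra selects the action: 1 adds a new plan item,
 * 2 updates an existing item (moving it to another class if it changed),
 * 3 marks an item as done (moves it from the to-do list to the done list),
 * and 4 deletes an item from whichever table it belongs to.
 */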
@Override
protected void onActivityResult(int requestCode,int resultCode,Intent data){
searchLayout.setVisibility(View.VISIBLE);
if (data == null)
return;
List<List<Item>> data1 = expandAdapter1.getData();
List<List<Item>> data2 = expandAdapter2.getData();
int option = data.getIntExtra("option",0);
int Class = data.getIntExtra("class",1);
int oldClass = data.getIntExtra("oldClass", 0);
int position = data.getIntExtra("position",0);
int alarm = data.getIntExtra("alarm",0);
String table = data.getStringExtra("table");
String subject = data.getStringExtra("subject");
String time = data.getStringExtra("time");
if (option == 1) {
Item item = new Item(subject,time,table,data1.get(Class).size(),alarm);
data1.get(Class).add(item);
expandAdapter1.setData(data1);
expandAdapter1.notifyDataSetChanged();
} else if (option == 2 || option == 3) {
// String newSubject = data.getStringExtra("newSubject");
// List<Item> list = data1.get(Class);
// for (int i = 0; i < list.size(); i++) {
// if (list.get(i).getName().equals(newSubject)) {
// position = i;
// break;
// }
// }
if (option == 2) {
if (oldClass != Class) {
data1.get(oldClass).remove(position);
data1.get(Class).add(new Item(subject,time,table,data1.get(Class).size(),alarm));
} else {
data1.get(Class).get(position).setName(subject);
data1.get(Class).get(position).setTime(time);
data1.get(Class).get(position).setAlarm(alarm);
}
} else {
data1.get(Class).remove(position);
data2.get(Class).add(new Item(subject,time,"done",data2.get(Class).size(),alarm));
expandAdapter2.setData(data2);
expandAdapter2.notifyDataSetChanged();
}
expandAdapter1.setData(data1);
expandAdapter1.notifyDataSetChanged();
} else if (option == 4) {
if (table.equals("todos")) {
data1.get(Class).remove(position);
expandAdapter1.setData(data1);
expandAdapter1.notifyDataSetChanged();
} else {
data2.get(Class).remove(position);
expandAdapter2.setData(data2);
expandAdapter2.notifyDataSetChanged();
}
}
items1 = expandAdapter1.getDatas();
items2 = expandAdapter2.getDatas();
AppWidgetManager appWidgetManager = AppWidgetManager.getInstance(this);
appWidgetManager.notifyAppWidgetViewDataChanged(appWidgetManager.getAppWidgetIds(new ComponentName(this,WidgetProvider.class)),R.id.widget_list);
}
@Override
public boolean onCreateOptionsMenu(Menu menu) {
// Inflate the menu; this adds items to the action bar if it is present.
getMenuInflater().inflate(R.menu.menu_main, menu);
return true;
}
@Override
public boolean onOptionsItemSelected(MenuItem item) {
// Handle action bar item clicks here. The action bar will
// automatically handle clicks on the Home/Up button, so long
// as you specify a parent activity in AndroidManifest.xml.
int id = item.getItemId();
//noinspection SimplifiableIfStatement
if (id == R.id.action_settings) {
return true;
}
return super.onOptionsItemSelected(item);
}
}
|
A New Approach For Optimal MIMD Queueless Routing Of Omega and Inverse-Omega Permutations On Hypercubes Omega permutations constitute the subclass of particular permutations which have gained the more attention in the search of optimal routing of permutations in hypercubes. The reason of this attention comes from the fact that they are permutations for general-purpose computing like the simultaneous conflict-free access to the rows or the columns of a matrix. In this paper we address the problem of the optimal routing of omega and inverse-omega permutations on hypercubes under the MIMD queueless communication model. We revisit the problem through a new paradigm: the so-called graphs partitioning in order to take advantage of the recursive structure of the hypercubes topology. We prove that omega and inverse-omega permutations are partitionable. That is any omega (resp. inverse-omega) permutation on n-dimensional hypercube can be decomposed in two independent permutations on two disjoint (n-1)-dimensional hypercubes. We also prove that each one of these permutations is also an omega (resp. inverse-omega) permutation. It follows that any omega (resp. inverse-omega) permutation on n-dimensional hypercube is routable in at most n steps of data exchanges, each step realizing the partition of the hypercube. |
#include <stddef.h>
#include <netdb.h>

/* Assumed to be defined elsewhere in this codebase (signature assumed):
 * compares two 4-byte IPv4 addresses and returns 0 when they are equal. */
extern int ipv4cmp(const char *a1, const char *a2);

/*
 * return non-zero if hp2 includes some element which is not
 * included in hp1.
 */
int hostentcmp(struct hostent *hp1, struct hostent *hp2)
{
	const char *a1;
	const char *a2;
	int n1, n2;

	/* hp2 cannot be a subset of hp1 if it has more addresses */
	for (n1 = 0; hp1->h_addr_list[n1]; n1++);
	for (n2 = 0; hp2->h_addr_list[n2]; n2++);
	if (n1 < n2)
		return 1;

	/* every address of hp2 must also appear somewhere in hp1 */
	for (n2 = 0; (a2 = hp2->h_addr_list[n2]) != NULL; n2++) {
		for (n1 = 0; (a1 = hp1->h_addr_list[n1]) != NULL; n1++)
			if (ipv4cmp(a2, a1) == 0)
				break;
		if (a1 == NULL)
			return 1;
	}
	return 0;
}
/*
* To change this license header, choose License Headers in Project Properties.
* To change this template file, choose Tools | Templates
* and open the template in the editor.
*/
package tudelft.utils;
import htsjdk.samtools.SAMRecord;
/**
*
* @author ddecap
*/
public class QualityEncoding {
public enum QENCODING {
SANGER,
ILLUMINA
}
private static final int REASONABLE_SANGER_THRESHOLD = 60;
public static QENCODING guessEncoding(final SAMRecord read) throws QualityException {
final byte[] quals = read.getBaseQualities();
byte max = quals[0];
for ( int i = 0; i < quals.length; i++ ) {
if(quals[i] > max) max =quals[i];
}
System.out.println("Max quality: " + max);
if(max <= REASONABLE_SANGER_THRESHOLD) {
System.out.println("SANGER Quality encoding");
return QENCODING.SANGER;
} else {
System.out.println("ILLUMINA Quality encoding");
return QENCODING.ILLUMINA;
}
}
private static final int fixQualityIlluminaToPhred = 31;
public static SAMRecord fixMisencodedQuals(final SAMRecord read) throws QualityException {
final byte[] quals = read.getBaseQualities();
for ( int i = 0; i < quals.length; i++ ) {
quals[i] -= fixQualityIlluminaToPhred;
if ( quals[i] < 0 )
throw new QualityException(quals[i]);
}
read.setBaseQualities(quals);
return read;
}
}
|
Interesting case of anomalous origin of right coronary artery from left sinus Mohammed Abiduddin Arif. Anomalous coronary arteries (ACAs) are rare but potentially life-threatening abnormalities of the coronary circulation. Most variations are benign; however, some may lead to myocardial ischemia and/or sudden cardiac arrest. 1 We present the case of a 55-year-old male with a significant medical history of hypertension, hyperlipidemia, type 2 diabetes, and gastroesophageal reflux disease who presented to the emergency department with atypical chest pain. He underwent cardiac catheterization, which showed coronary artery disease with tight lesions in both the left anterior descending (LAD) and left circumflex (LCX) arteries, along with an anomalous right coronary artery (RCA) originating near the anterior left coronary sinus and coursing between the pulmonary artery and the aorta. The patient underwent coronary artery bypass grafting of the LAD and LCX only, leaving the RCA ungrafted, and was discharged home after a full recovery. Treatment of significant anomalies should be guided by the nature of the anomalous vessel. Symptomatic patients with ACAs have three treatment options: medical management, coronary angioplasty with stent deployment, or surgical correction. Some clinicians advocate revascularization, but the long-term benefits of revascularization therapies have not yet been demonstrated.

Case report:- A 55-year-old male with a significant medical history of hypertension, hyperlipidemia, type 2 diabetes and gastroesophageal reflux disease presented to the emergency department with atypical cardiac chest pain. He complained of intermittent chest discomfort that had persisted for 2 months. He described the pain as 7 of 10 in severity, substernal, lasting less than 5 minutes, resolving spontaneously but becoming acutely worse with minimal exertion. His electrocardiogram (EKG) on presentation showed normal sinus rhythm with ST and T wave abnormalities potentially indicating anterior and inferior ischemia seen in III, aVF, and V1-V3. His 2D echocardiogram 2 months prior to admission had shown a normal ejection fraction (60%) with normal diastolic function.
He was admitted to cardiology for unstable angina and underwent cardiac catheterization, which showed tight lesions in both the left anterior descending and left circumflex arteries, along with an anomalous RCA originating near the anterior left coronary sinus and coursing between the pulmonary artery and aorta. The patient underwent coronary artery bypass grafting and, after a complete recovery, was discharged home. Discussion:- Anomalies of the coronary circulation result from processes that disrupt the normal differentiation and specialization of the primitive heart tube. 2 The position of the endothelial buds or septation of the truncus arteriosus may give rise to anomalous origins of coronary arteries. 3 Few anomalies present with symptoms or serious clinical sequelae that require surgical correction; most are discovered incidentally during angiography. White and Edwards first described the anomalous origin of the RCA as a rare congenital abnormality in 1948. 4 Anomalous RCAs that originate from the left coronary sinus occur in 0.05%-0.1% of the general population. 5 Although ACAs occur with low frequency, a high risk of sudden cardiac death due to myocardial ischemia and resultant arrhythmia is associated with them, even in the absence of atherosclerosis. 6 Multislice computed tomography (CT) can clearly delineate the anatomy (Fig. 1 to Fig. 3) and has replaced angiography as the definitive diagnostic tool. 7 Multislice CT has been recommended because it offers excellent spatial resolution and identifies most coronary anomalies. 8 The use of cardiac MRI for studying congenital anomalies has generated great interest among cardiologists; however, current studies are insufficient to recommend MRI as the imaging method of choice for ACAs. Treatment of significant anomalies should be guided by the nature of the anomalous vessel. Treatment remains controversial, with some clinicians advocating revascularization. Symptomatic patients with ACAs have three treatment options: medical management, coronary angioplasty with stent deployment, or surgical correction. Stenting generally is not recommended. Several surgical options are available, including directly reimplanting the anomalous artery, surgically unroofing the intramural coronary segment from the ostium to the exit point at the aortic wall, or creating a new ostium at the end of the anomalous artery's segment, a procedure known as ostioplasty. Revascularization using direct reimplantation of the anomalous RCA into the right coronary sinus is the preferred method of surgical treatment for this abnormality per the surgical literature. 9 The current literature does not demonstrate any long-term benefits of revascularization. The Japanese approach to this condition is far more conservative, as demonstrated by a study of 56 patients who had anomalous arteries and were treated medically with beta blockers. Side effects of the conservative approach included hypotension and arrhythmias on exertion (9%). 10 Coronary artery bypass grafting of the LAD and LCX was effective in our patient. |
Friends and family members of people whose lives were shattered when two homemade bombs went off near the finish line of the Boston Marathon on April 15 packed three rooms in a federal courthouse on Wednesday as suspect Dzhokhar Tsarnaev pleaded not guilty to a 30-count indictment.
But the fleeting courtroom encounter brought little relief to Bostonians who said the 19-year-old, accused of conducting the deadly bombings with the help of his older brother Tamerlan Tsarnaev, showed little feeling.
Tsarnaev, seated between his lawyers in an unbuttoned orange jumpsuit and black t-shirt with a bandage around his left hand, responded “not guilty” to each of the charges read against him in court. At times he stroked his chin, smiled crookedly, and appeared to look calmly about the courthouse.
Fucarile wore a Boston Strong t-shirt with the name of Marc Fucarile, his son who lost his right leg and still carries shrapnel in his body, the father said.
Marc Fucarile, 34, was standing near the second blast when it went off. He still has more surgeries to go, and has spent every day of the nearly three months since receiving medical care, his father said. Members of the family have taken weeks off work so that someone is always at Marc’s bedside, he said.
Mildred Valverde, 44, after coming out of the courthouse on crutches, said the hearing was emotionally taxing.
“Just to be in the same room with him was bothersome,” Valverde said, adding that she hopes that, if convicted, he doesn't get the death penalty.
"I'd rather see him suffer," she said. "Death is too quick."
Scores of survivors and their supporters crowded into two overflow rooms in the courthouse to watch Tsarnaev’s appearance on television screens. About 30 more, including some who stood glaring with their arms crossed as Tsarnaev was led from the courtroom by U.S. Marshals, filled seats in the main courtroom. It was the alleged bomber’s first court appearance since he was captured by law enforcement, bloodstained and hiding in a boat, on April 19.
Among those present was Liz Norden, whose sons J.P. and Paul Norden each lost a leg in the April blasts.
“I felt sick to my stomach,” Liz Norden told dozens of reporters outside the courthouse after the hearing. Asked what she would tell her sons about the day, she said they have focused all their attention on recovering from their injuries.
Peter Brown, the Norden brothers’ uncle, said he did not expect to feel anything when he saw Tsarnaev in court.
Three people were killed and more than 260 injured in the bombings that shattered a beloved springtime rite in the city. The deceased were Martin Richard, 8; Krystle Marie Campbell, 29; and Lingzi Lu, 23.
Also killed in the course of the manhunt after the bombing was Sean Collier, 27, an officer with the Massachusetts Institute of Technology police department who was shot dead in his patrol car on April 18.
Nearly two dozen representatives of the MIT campus police department arrived at the courthouse before the hearing and remained outside, in uniform and attention, until after it was over.
“I didn’t see a lot of remorse. I didn’t see a lot of regret,” MIT Police Chief John DiFava, who was in the courtroom, said after the proceeding.
Tsarnaev's sisters were also in the courtroom, and at least one of them was crying. Outside, a small group of supporters cheered as a caravan carried him past, invoking his nickname as they yelled "Justice for Jahar!" according to the Associated Press.
Prosecutors said they expect the trial will last three to four months and they anticipate calling 80 to 100 witnesses. The next court date is Sept. 23.
Attorney General Eric Holder has not yet decided whether to seek the death penalty for Tsarnaev, an ethnic Chechen and American citizen who was born in Dagestan.
Investigators say the motive for the bombing might be found in anti-American messages the college student scrawled in the boat where he was found April 19: "The U.S. Government is killing our innocent civilians," "I can't stand to see such evil go unpunished," and "We Muslims are one body, you hurt one you hurt us all."
After the FBI identified the brothers as the bombing suspects, they embarked on a bloody escape bid — allegedly executing Collier, hijacking a car and hurling pipe bombs at police.
Tamerlan Tsarnaev was killed in a firefight, but Dzhokhar fled and remained at large during a daylong lockdown that ended when a Watertown, Mass., homeowner noticed blood on the boat in his backyard.
The suspect was taken from the boat to a Boston hospital where some of his victims were still recovering. He was later transferred to a federal prison with medical facilities.
Explosions at the finish line of the Boston Marathon killed three people and led to a massive manhunt as police locked down the city. |
Effect of tunicamycin, an inhibitor of protein glycosylation, on division of tumour cells in vitro. We determined the effect tunicamycin (TM), an inhibitor of protein glycosylation, had on cells in vitro that were derived from solid and ascites variants of a chemically induced rat hepatoma. Using flow microfluorometry (FMF), labelling index (LI), and population-doubling time assays, we monitored the progression of cells through the cell cycle after treatment with TM. Cells in monolayer culture were first incubated in 0.05 or 0.10 micrograms TM/ml medium for 24 h then analysed or given fresh medium without TM and allowed to recover for 6-24 h. Exposing cells to 0.05-0.50 micrograms TM/ml medium did not affect the percentage of viable cells as determined using the Trypan Blue exclusion procedure. However, continuous exposure to 0.05 micrograms TM/ml medium did affect the population-doubling times of both the ascites and solid variants, and the ascites tumour cells were more sensitive than the solid tumour cells. TM reversibly inhibited hepatoma cells from entering S phase of the cell cycle. After exposure to TM for 24 h, the percentage of solid tumour cells in vitro in S phase decreased to 19%, as determined by autoradiography of tritiated-thymidine-labelled cells, and to 21% as determined by FMF; 49% of untreated solid tumour cells were in S phase. The percentage of ascites tumour cells in vitro decreased to 12% after exposure to TM for 24 h; 37% of untreated cells were in S phase. We concluded that TM can inhibit division of rat hepatoma cells in vitro by blocking them in G1 phase of the cell cycle. |
RNAi-assisted genome evolution in Saccharomyces cerevisiae for complex phenotype engineering. A fundamental challenge in basic and applied biology is to reprogram cells with improved or novel traits on a genomic scale. However, the current ability to reprogram a cell on the genome scale is limited to bacterial cells. Here, we report RNA interference (RNAi)-assisted genome evolution (RAGE) as a generally applicable method for genome-scale engineering in the yeast Saccharomyces cerevisiae. Through iterative cycles of creating a library of RNAi induced reduction-of-function mutants coupled with high throughput screening or selection, RAGE can continuously improve target trait(s) by accumulating multiplex beneficial genetic modifications in an evolving yeast genome. To validate the RNAi library constructed with yeast genomic DNA and convergent-promoter expression cassette, we demonstrated RNAi screening in Saccharomyces cerevisiae for the first time by identifying two known and three novel suppressors of a telomerase-deficient mutation yku70. We then showed the application of RAGE for improved acetic acid tolerance, a key trait for microbial production of chemicals and fuels. Three rounds of iterative RNAi screening led to the identification of three gene knockdown targets that acted synergistically to confer an engineered yeast strain with substantially improved acetic acid tolerance. RAGE should greatly accelerate the design and evolution of organisms with desired traits and provide new insights on genome structure, function, and evolution. |
Empowering staff to embrace and discuss frailty as a health condition Copyright: © 2019 The Authors. Published by Archetype Health Pty Ltd. This is an open access article under the CC BY-NC-ND 4.0 license. SUMMARY The 4Cs Project (Courageous, Compassionate, Confident Conversations) was designed and driven by a multidisciplinary team of staff caring for patients with frailty in a community hospital setting at Worcester City In Patient Unit. They had realised that frailty was frequently underdiagnosed, that these patients were often reaching the end of life without it being recognised, and that the necessary conversations about advance care planning with them and their families were being missed. The key to the project's success was to focus first on the staff and their concerns, including their lack of confidence around discussing death and dying. Together they designed initiatives that addressed this hesitance, which in turn strengthened their team and community and created Joy in Work.
Hanns-Martin-Schleyer-Halle
Sporting events
The arena hosted the final phase of the 1985 European basketball championship.
In tennis, the arena hosts some of the matches of Porsche Tennis Grand Prix, on a clay court designated as "Court 1". It also hosted the Stuttgart Masters when it was an ATP Super 9 event between 1996 and 2001.
The arena is also used as a velodrome and was used as the host for the 2003 UCI Track Cycling World Championships.
The 1989 World Artistic Gymnastics Championships, the 2007 World Artistic Gymnastics Championships, and the 2019 World Artistic Gymnastics Championships were held at the Hanns-Martin-Schleyer-Halle.
Concerts
Depeche Mode performed at the arena seven times: on November 2, 1987 (Music for the Masses Tour), October 15, 1990 (World Violation Tour), June 25, 1993 (Devotional Tour), September 23, 1998 (The Singles Tour), October 3, 2001 (Exciter Tour), March 9, 2006 (Touring the Angel), and November 8, 2009 (Tour of the Universe).
On 11 April 2002, Irish vocal pop band Westlife held a concert for their World of Our Own Tour supporting their album World of Our Own.
In July 2009, Elton John gave a sold-out concert in the Schleyerhalle.
/**
* A user-defined object type. For now each type is defined by a unique class name and file source.
*/
@AutoValue
public abstract class ObjectColor implements Color {
@Override
public boolean isPrimitive() {
return false;
}
@Override
public boolean isUnion() {
return false;
}
@Override
public boolean isObject() {
return true;
}
@Override
public ImmutableCollection<Color> getAlternates() {
throw new UnsupportedOperationException();
}
public abstract String getClassName();
public abstract String getFilename();
// given `function Foo() {}` or `class Foo {}`, color of Foo.prototype. null otherwise.
@Nullable
public abstract Color getPrototype();
@Nullable
public abstract Color getInstanceColor();
// List of other colors directly above this in the subtyping graph for the purposes of property
// (dis)ambiguation.
public abstract ImmutableList<Color> getDisambiguationSupertypes();
@Override
public abstract boolean isInvalidating();
/** Builder for ObjectColors */
@AutoValue.Builder
public abstract static class Builder {
public abstract Builder setClassName(String value);
public abstract Builder setFilename(String value);
public abstract Builder setInvalidating(boolean value);
public abstract Builder setDisambiguationSupertypes(ImmutableList<Color> supertypes);
public abstract Builder setPrototype(Color prototype);
public abstract Builder setInstanceColor(Color instanceColor);
public abstract ObjectColor build();
}
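// A minimal construction sketch (the class name and file below are illustrative, not real inputs):
//   ObjectColor fooColor =
//       ObjectColor.builder().setClassName("Foo").setFilename("foo.js").build();
// Invalidating defaults to false and the supertype list to empty via builder() below.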
public static Builder builder() {
return new AutoValue_ObjectColor.Builder()
.setInvalidating(false)
.setDisambiguationSupertypes(ImmutableList.of());
}
} |
A Novel Method to Isolate Primordial Germ Cells and Its Use for the Generation of Germline Chimeras in Chicken1 Abstract A novel method was developed to isolate chick primordial germ cells (PGCs) from circulating embryonic blood. This is a very simple and rapid method for the isolation of circulating PGCs (cPGCs) using an ammonium chloride-potassium (ACK) buffer for lysis of the red blood cells. The PGCs were purified as in vitro culture proceeded. Most of the initial red blood cells were removed in the first step using the ACK lysis buffer. The purity of the cPGCs after ACK treatment was 57.1%, and the recovery rate of cPGCs from whole blood was 90.3%. The ACK process removed only red blood cells and it did not affect cPGC morphology. In the second step, the red blood cells disappeared as the culture progressed. At 7 days of in vitro culture, the purity of the PGCs was 92.9%. Most of these cells expressed germline-specific antibodies, such as those against chicken vasa homolog (CVH). The cultured PGCs expressed the Cvh and Dazl genes. Chimeric chickens were produced from these cultured PGCs, and the donor cells were detected in the gonads, suggesting that the PGCs had biological function. In conclusion, this novel isolation system for PGCs should be easier to use than previous methods. The results of the present study suggest that this novel method will become a powerful tool for germline manipulation in the chicken. |
Ivy Tech has two new scholarships to share for students attending in Batesville or Lawrenceburg.
(Lawrenceburg, Ind) - In an effort to financially assist incoming students at Ivy Tech Community College Lawrenceburg and Batesville locations, the College is announcing two new funding options: the Ivy Tech Scholarship and the Summer Exploration Scholarship. These scholarships are available to graduating seniors in Dearborn, Ohio, Ripley, and Franklin Counties, as well as students who attend Taylor High School (Cleves, OH), Harrison High School (Harrison, OH), and Oak Hills High School (Cincinnati, OH). Homeschooled students are also eligible to apply.
The $1,000 Ivy Tech Scholarship is awarded annually to a graduating senior and applies to tuition and fees for degree-seeking, full-time Ivy Tech Batesville and Lawrenceburg students. The funds must be initiated in the fall semester of the year the student graduates from high school. Additional qualifications may apply.
The Summer Exploration Scholarship provides current high school sophomores and juniors the opportunity to learn and explore courses that fit their career goals. Recipients are eligible for tuition, technology fees, and hybrid fees associated with one Ivy Tech course up to 4 credit hours. These are awarded on a first-come, first-served basis and additional qualifications may apply.
The application deadline is April 30. Summer classes run June 10 to August 3. Fall classes begin August 26. Visit Ivy Tech’s Express Enrollment Center (50 Walnut Street, Lawrenceburg) for more details. |
Macro EMG in healthy subjects of different ages. The findings of the recently developed technique of Macro EMG in healthy subjects of different ages are described. The characteristics of the Macro motor unit potential which probably reflects the relative size of the whole motor unit are presented together with suggested normal data for the three muscles studied. An increase in size of the Macro motor unit potential was found with age, particularly after the age of 60 years. Possible factors determining the Macro EMG signal and the age related changes are discussed. |
Experts warn of ICU strain, ventilator rationing in the face of swine flu.
In hospitals, ventilators are among the MVPs for patients in need of both acute and long-term care.
Ventilators keep people in comas alive, help those who cannot breathe get enough oxygen and those who have suffered heart attacks recuperate, and ease normal breathing in asthma patients, among a multitude of other uses.
But in a crisis, if H1N1 infections burgeon out of control, for example, the need for ventilators and other critical equipment might exceed the available resources, public health officials warn.
Adding fuel to these concerns is a report, published this morning in the Journal of the American Medical Association, showing that the pandemic virus has already strained intensive-care unit resources in other North American countries.
Specifically, intensive-care units in Canada and Mexico were at full stretch during the peaks of the spring pandemic H1N1 flu outbreak, researchers said.
In Winnipeg -- site of the largest cohort of pandemic patients in Canada -- all intensive-care beds were occupied with H1N1 flu patients when the outbreak peaked in June, according to a study led by Dr. Anand Kumar, ICU attending physician for the Winnipeg Regional Health Authority.
And, in Mexico City, six major hospitals were so busy that admission to intensive care was delayed, and four patients died in the emergency department before they could get to the ICU, according to a study led by Dr. Guillermo Domínguez-Cherit of Instituto Nacional de Ciencias Médicas y Nutrición "Salvador Zubirán."
The papers are the first to report on a large group of critically ill H1N1 patients treated during the early days of the pandemic in North America.
Meanwhile, Australian and New Zealand researchers reported last week that their intensive-care units were also under pressure as a result of the pandemic.
The papers underscore an unsettling reality -- that in an effort to ration resources during an emergency, health officials may be in the difficult position of determining, essentially, who lives and who dies.
"We have a lot of ventilators in the U.S., but somebody's on them all the time," said Arthur Caplan, director of the Center for Bioethics at the University of Pennsylvania. "You're basically talking about taking somebody off a ventilator to give it to somebody else."
Late last month, the Institute of Medicine released guidelines for crisis standards of care that included recommendations on how to allocate life-saving resources -- such as ventilators -- in an emergency while maintaining ethical standards.
"Depending on the disaster, ventilators are something you could need and could run out of," said Dr. Tia Powell, director of the Montefiore-Einstein Center for Bioethics and one of the authors of the IOM report. "But no one thinks that is likely with H1N1."
Other resources, Powell said, that may need to be rationed carefully include oxygen, tubing of various sizes and antibacterial or antiviral medications.
Public health experts agree that swine flu will not be an overwhelming pandemic in the way that the 1918 influenza pandemic, which killed more than 50 million people, was. Antiviral vaccine development is well under way, and public awareness about H1N1 influenza is increasing.
But disaster situations require doctors to make decisions where providing the best individual care may conflict with the duty to steward resources to save the greatest number of lives.
Part of the Institute of Medicine's potentially controversial guidelines recommends careful allocation of ventilators, which can be critical for those with swine flu but are limited in number and require skilled staff to operate. Hospitals may need to make those with acute conditions a priority over those with chronic conditions, such as cancer, when it comes to who gets a ventilator.
"It's loony, immoral madness not to plan for that kind of emergency," Caplan said. "But it's one thing to have a plan and another thing to have doctors comply."
Caplan said the conflict lies in a health care provider's ethical obligation to not abandon a patient he or she cares for. And if doctors won't reallocate the resources at their disposal, it can be difficult for a third party to force them.
In a declared emergency, it is often the responsibility of a hospital administrator -- who may or may not be a medical doctor -- to set reallocation protocols in motion.
Still, experts bristle at the thought that groups or individuals who are stewards of resources could be thought of as "death panels."
"The group providing the recommendations is not deciding who lives and dies," said Dr. David Cronin, director of liver transplantation at Froedtert Memorial Lutheran Hospital in Milwaukee. "They are setting a decision tree and allocation system to provide the scarce resources in as just and fair a way as possible. It is the disease that is responsible for the deaths ... I would consider these 'life panels.'"
While the IOM's report does not endorse a particular way to allocate ventilators, it does favor evidence-based triage over first-come, first-serve systems.
"During a catastrophic disaster with insufficient medical and health resources to meet patient needs, we shift our goal for maximizing individual outcomes to helping the most people we can in the entire population," said Dr. Kristi Koenig, director of Public Health Preparedness at the University of California at Irvine School of Medicine. "Therefore, if one person is in cardiac arrest and has almost no chance of survival despite aggressive care and 10 other people need a simple intervention such an unblocking their airway so they can breathe, we would save 10 people and provide only palliative [comfort] care to the person who has almost no chance of survival."
The report emphasized that hospital and state officials should strive to keep crisis standards as consistent as possible and as transparent as possible to help allay confusion in an emergency.
"An allocation system has to be transparent to be fair," said Robert Field, professor of law and health management and policy at the Drexel University School of Public Health. "Otherwise, lives could be saved, and lost, based on favoritism. We shouldn't base life and death decisions on rolling the dice as to where the closest ventilator is located."
The report also addressed potential legal concerns of health care professionals who provide care under duress, which Powell said was an emerging and controversial area of law.
"If you want health care providers to show up to work in a disaster, endure physical risk for themselves, work around the clock in austere conditions and take on hardship duties, you have to make sure their sacrifice doesn't leave them at unnecessary risk," Powell said. "If they are making good faith choices and following guidelines, they should be protected from unnecessary harm as well as unnecessary legal harm."
But Powell maintained that patients need to be legally protected as well if they receive negligent care or treatment from an unqualified person.
While neither state officials nor hospitals are required to adhere to the guidelines outlined in the IOM report, they may be a useful planning tool for the challenging decisions about resource allocation necessitated by pervasive situations, such as pandemic influenza, or catastrophic disasters, such as a hurricane or earthquake.
"These guidelines can't be lightly invoked ... your whole facility has to be up against a wall," Powel said. "Disaster has to be declared by a government entity and even then you need permission to operate under a crisis standard of care."
ABC News' Courtney Hutchison contributed to this report. |
Loyalist paramilitaries orchestrating heavy rioting in north Belfast have ordered community leaders not to interfere as they go on the rampage, police chiefs claimed today.
After officers came under fire from a mob of up to 400 people during the third night of violence, assistant chief constable Alan McQuillan said the Ulster Defence Association was to blame.
He told PA News: "Moderate community leaders are being threatened by the UDA who are telling them to stay out of it.
"The UDA want to get at their Catholic neighbours and we are stopping them."
His claims came as North Belfast Democratic Unionist MP Nigel Dodds prepared to meet Northern Ireland security minister Jane Kennedy to discuss the violence.
Police said there were five separate shooting incidents in the flashpoint Limestone Road area last night.
Police said 13 officers were injured, none seriously. No arrests were made. |
#include "stdafx.h"
#include "DShowPinHelper.h"
CDShowPinHelper::CDShowPinHelper()
: m_ptrFilter(NULL)
{
}
CDShowPinHelper::~CDShowPinHelper()
{
Close();
}
BOOL CDShowPinHelper::Open(IBaseFilter* ptrFilter)
{
ATLASSERT(ptrFilter != NULL);
if (ptrFilter == NULL)
return FALSE;
m_ptrFilter = ptrFilter;
return TRUE;
}
void CDShowPinHelper::Close()
{
m_ptrFilter = NULL;
}
int CDShowPinHelper::GetPinCount()
{
ATLASSERT(m_ptrFilter != NULL);
if (m_ptrFilter == NULL)
return 0;
HRESULT hResult = S_OK;
CComPtr<IEnumPins> ptrEnumPins = NULL;
CComPtr<IPin> ptrPin = NULL;
int nPinCount = 0;
hResult = m_ptrFilter->EnumPins(&ptrEnumPins);
if (hResult != S_OK)
return 0;
while (TRUE) {
ptrPin = NULL;
hResult = ptrEnumPins->Next(1, &ptrPin, NULL);
if (hResult != S_OK)
break;
nPinCount++;
}
return nPinCount;
}
BOOL CDShowPinHelper::GetPinDirection(int nIndex, PIN_DIRECTION *pDirection)
{
ATLASSERT(pDirection != NULL);
ATLASSERT(m_ptrFilter != NULL);
if (pDirection == NULL || m_ptrFilter == NULL)
return FALSE;
HRESULT hResult = S_OK;
CComPtr<IEnumPins> ptrEnumPins = NULL;
CComPtr<IPin> ptrPin = NULL;
int nCount = 0;
hResult = m_ptrFilter->EnumPins(&ptrEnumPins);
if (hResult != S_OK)
return FALSE;
while (TRUE) {
ptrPin = NULL;
hResult = ptrEnumPins->Next(1, &ptrPin, NULL);
if (hResult != S_OK)
break;
if (nIndex == nCount) {
hResult = ptrPin->QueryDirection(pDirection);
return hResult == S_OK ? TRUE : FALSE;
}
nCount++;
}
return FALSE;
}
BOOL CDShowPinHelper::GetPin(int nIndex, IPin **pptrPin)
{
ATLASSERT(m_ptrFilter != NULL);
if (m_ptrFilter == NULL)
return FALSE;
HRESULT hResult = S_OK;
CComPtr<IEnumPins> ptrEnumPins = NULL;
CComPtr<IPin> ptrPin = NULL;
int nCount = 0;
hResult = m_ptrFilter->EnumPins(&ptrEnumPins);
if (hResult != S_OK)
return FALSE;
while (TRUE) {
ptrPin = NULL;
hResult = ptrEnumPins->Next(1, &ptrPin, NULL);
if (hResult != S_OK)
break;
if (nIndex == nCount) {
hResult = ptrPin.CopyTo(pptrPin);
return hResult == S_OK ? TRUE : FALSE;
}
nCount++;
}
return FALSE;
}
BOOL CDShowPinHelper::FindUnconnectedPin(PIN_DIRECTION PinDir, IPin **pptrPin)
{
HRESULT hResult = S_OK;
CComPtr<IEnumPins> ptrEnumPins = NULL;
CComPtr<IPin> ptrPin = NULL;
BOOL bMatch = FALSE;
hResult = m_ptrFilter->EnumPins(&ptrEnumPins);
if (hResult != S_OK)
return FALSE;
while (TRUE) {
ptrPin = NULL;
hResult = ptrEnumPins->Next(1, &ptrPin, NULL);
if (hResult != S_OK)
return FALSE;
if (!CDShowPinHelper::MatchPin(ptrPin, PinDir, FALSE, &bMatch))
return FALSE;
if (bMatch) {
hResult = ptrPin.CopyTo(pptrPin);
return TRUE;
}
}
return FALSE;
}
BOOL CDShowPinHelper::IsPinConnected(IPin *pPin, BOOL *pIsConnected)
{
CComPtr<IPin> ptrTmpPin = NULL;
HRESULT hResult = pPin->ConnectedTo(&ptrTmpPin);
if (SUCCEEDED(hResult)) {
*pIsConnected = TRUE;
return TRUE;
}
else if (hResult == VFW_E_NOT_CONNECTED) {
*pIsConnected = FALSE;
return TRUE;
}
return FALSE;
}
BOOL CDShowPinHelper::MatchPin(IPin *pPin, PIN_DIRECTION direction, BOOL bShouldBeConnected, BOOL *pMatch)
{
ATLASSERT(pPin != NULL);
BOOL bMatch = FALSE;
BOOL bIsConnected = FALSE;
BOOL bSuccess = CDShowPinHelper::IsPinConnected(pPin, &bIsConnected);
if (!bSuccess)
return FALSE;
// A pin matches only if its actual connection state equals the requested state
// and its direction equals the requested direction.
if (bIsConnected == bShouldBeConnected)
bSuccess = IsPinDirection(pPin, direction, &bMatch);
if (bSuccess)
*pMatch = bMatch;
return bSuccess;
}
BOOL CDShowPinHelper::IsPinDirection(IPin *pPin, PIN_DIRECTION dir, BOOL *pMatch)
{
PIN_DIRECTION pinDir;
HRESULT hResult = pPin->QueryDirection(&pinDir);
if (SUCCEEDED(hResult)) {
*pMatch = (pinDir == dir);
return TRUE;
}
return FALSE;
} |
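// Example usage (a minimal sketch; assumes an IBaseFilter* named pFilter obtained elsewhere):
//   CDShowPinHelper helper;
//   if (helper.Open(pFilter)) {
//       CComPtr<IPin> ptrOutPin;
//       if (helper.FindUnconnectedPin(PINDIR_OUTPUT, &ptrOutPin)) {
//           // ptrOutPin now holds the first unconnected output pin of the filter.
//       }
//   }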
// Repository: cms-ttbarAC/cheetah
#ifndef TTBARRECO_H
#define TTBARRECO_H
#include <string>
#include <map>
#include <vector>
#include "Analysis/cheetah/interface/physicsObjects.h"
#include "Analysis/cheetah/interface/tools.h"
#include "Analysis/cheetah/interface/configuration.h"
class ttbarReco {
public:
ttbarReco( configuration& cmaConfig );
~ttbarReco();
Ttbar1L ttbar1L() {return m_ttbar1L;}
// single lepton
void execute(std::vector<Lepton>& leptons, std::vector<Neutrino>& nu, std::vector<Jet>& jets, std::vector<Ljet>& ljets);
protected:
configuration *m_config;
Ttbar1L m_ttbar1L;
};
#endif
|
// GameFramework/Base/includes/BtTypes.h
////////////////////////////////////////////////////////////////////////////////
// BtTypes.h
#pragma once
#ifndef BtNull
#define BtNull 0
#endif
#include <stdint.h>
typedef uint64_t BtU64;
typedef int64_t BtS64;
typedef uint32_t BtU32;
typedef int32_t BtS32;
typedef uint16_t BtU16;
typedef int16_t BtS16;
typedef uint8_t BtU8;
typedef int8_t BtS8;
typedef char BtSChar;
typedef char BtChar;
typedef unsigned char BtUChar;
typedef bool BtBool;
typedef float BtFloat;
typedef double BtDouble;
typedef void BtVoid;
#define BtTrue true
#define BtFalse false
|
import fetch from 'node-fetch'
import * as config from './config'
import {
AirtableActivity,
ActivityTrainingData,
AirtableErrorResponse,
} from './types'
interface AirtableTrainingDataRecord {
fields: {
Input: string
Output: string
}
}
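// Fetches every row of the configured training table and maps each record to an
// [Input, Output] pair for downstream use.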
const fetchTrainingData = async (): Promise<ActivityTrainingData> => {
const url =
'https://api.airtable.com/v0/' +
config.airtable.base +
'/' +
encodeURIComponent(config.airtable.trainingTable)
const res = await fetch(url, {
method: 'GET',
headers: {
Authorization: `Bearer ${config.airtable.apiKey}`,
'Content-Type': 'application/json',
},
})
const body = await res.json()
if (!res.ok) {
handleError(body as AirtableErrorResponse)
}
return body.records.map((r: AirtableTrainingDataRecord) => {
return [r.fields.Input, r.fields.Output]
})
}
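// Appends a single activity as a new record in the configured results table.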
const appendActivity = async (activity: AirtableActivity): Promise<void> => {
const url =
'https://api.airtable.com/v0/' +
config.airtable.base +
'/' +
encodeURIComponent(config.airtable.resultsTable)
const res = await fetch(url, {
method: 'POST',
headers: {
Authorization: `Bearer ${config.airtable.apiKey}`,
'Content-Type': 'application/json',
},
body: JSON.stringify({ records: [{ fields: activity }] }),
})
const body = await res.json()
if (!res.ok) {
handleError(body as AirtableErrorResponse)
}
}
const handleError = (errRes: AirtableErrorResponse): void => {
const errMessage = `${errRes.error.type}: ${errRes.error.message}`
throw new Error(errMessage)
}
export { fetchTrainingData, appendActivity }
|
Stress testing in persons above the age of 65 years: applicability and diagnostic value of a standardized maximal symptom-limited testing protocol. A standardized symptom-limited stress test on a bicycle ergometer in the semisupine position was examined with regard to its applicability in patients over the age of 65. The test protocol allows for exercise at full capacity in the elderly. Its value for the diagnosis of coronary heart disease is equivalent to its value in younger patients. Among 167 patients above 65 years, there was only one case of pulmonary rales with exercise, and no other complication was observed (0.568%). Standardized, symptom-limited stress testing appears to be a simple, highly diagnostic and safe method for testing exercise capacity or diagnosing coronary heart disease in the elderly, as in younger persons.
The Viagra Files: The Web as Anticipatory Medium The article introduces the research behind the making of Viagratool.org, the Lay Decision Support System on the World Wide Web. Viagratool.org is a 'web knowledge instrument' made to provide realities about a drug, available by searching, form-filling, online prescription, e-commerce and the post. Collaborative filtering, made famous by the disciples of Vannevar Bush, is used to ascertain information about Viagra. As we found with the aid of a group of collaborative filterers, Viagra comes across on the Web as a party drug, with distinct user groups--clubbers, sex tourists and others--not addressed by more official information providers--regulatory bodies, the medical industry or the manufacturer. Presented here are the findings that have led to two versions of the support system, one for the potential Viagra consumer, and another for the often overlooked second and third parties caught up in 'Viagra situations'. In the first system, the collaborative filters found and kept information about its marketing (and re-selling), its serious harm in cocktail dosages, and insider accounts provided by seasoned aphrodisiac and other lifestyle drug users. The information is displayed in thought trajectories, each asking whether to consume it, from different angles. Importantly, the system is not a consumer-to-consumer information service or pure cohort support service. Rather, it allows a consumer to hear about Viagra from the marketeer, the emergency room medic, the humorist, and the user of Viagra and Viagra substitutes. Each could play a part in the Viagra decision. In the second version, we present Viagra situations, quite remote from the placid beach scenes with loving couples (on the Pfizer website), or a jogging Bob Dole, as seen on TV. Here, we move closer to employing the Web as an anticipatory medium by first resurrecting the second parties in Viagra situations, different from those in 'normal, loving' relationships. Finally, we call into existence third party observers, friends, onlookers, anticipating darker Viagra usage scenarios that are unavailable in the more official discourse. |
#ifndef VEC3FA_H
#define VEC3FA_H
#include <assert.h>
#include <iostream>
#include <math.h>
#include "constants.h"
#include "Vec3ba.h"
#include "Vec3da.h"
#include "sys.h"
#include <immintrin.h>
struct __aligned(16) Vec3fa {
typedef float Scalar;
enum { n = 3 };
union{ __m128 v; struct {float x,y,z; int a;}; };
__forceinline Vec3fa () {}
__forceinline Vec3fa ( const Vec3fa& other ) { x = other.x; y = other.y; z = other.z; a = other.a; }
__forceinline Vec3fa ( const Vec3da& other ) { x = (float)other.x; y = (float)other.y; z = (float)other.z; a = other.a; }
__forceinline Vec3fa& operator =( const Vec3fa& other ) { x = other.x; y = other.y; z = other.z; a = other.a; return *this;}
__forceinline Vec3fa( const float pa ) { x = pa; y = pa; z = pa; a = pa;}
__forceinline Vec3fa( const float pa[3]) { x = pa[0]; y = pa[1]; z = pa[2]; }
__forceinline Vec3fa( const float px, const float py, const float pz) { x = px; y = py; z = pz; a = pz;}
__forceinline Vec3fa( const float px, const float py, const float pz, const int pa) { x = px; y = py; z = pz; a = pa; }
__forceinline Vec3fa( ZeroTy ) { x = 0.0f; y = 0.0f; z = 0.0f; a = 0;}
__forceinline Vec3fa( PosInfTy ) { x = inf; y = inf; z = inf; a = inf; };
__forceinline Vec3fa( NegInfTy ) { x = neg_inf; y = neg_inf; z = neg_inf; a = neg_inf; };
__forceinline const float& operator[](const size_t index) const { assert(index < 3); return (&x)[index]; }
__forceinline float& operator[](const size_t index) { assert(index < 3); return (&x)[index]; }
__forceinline float length () const { return sqrtf(x*x + y*y + z*z); }
__forceinline Vec3fa normalize() {
float len = length();
len = len < min_rcp_input ? min_rcp_input : len;
x /= len; y /= len; z/= len;
return *this;
}
};
__forceinline Vec3fa operator +( const Vec3fa& b, const Vec3fa& c ) { return Vec3fa(b.x+c.x, b.y+c.y, b.z+c.z, b.a+c.a); }
__forceinline Vec3fa operator -( const Vec3fa& b, const Vec3fa& c ) { return Vec3fa(b.x-c.x, b.y-c.y, b.z-c.z, b.a-c.a); }
__forceinline Vec3fa operator *( const Vec3fa& b, const Vec3fa& c ) { return Vec3fa(b.x*c.x, b.y*c.y, b.z*c.z, b.a*c.a); }
__forceinline Vec3fa operator *( const float& pa, const Vec3fa& c ) { return Vec3fa(pa) * c; }
__forceinline Vec3fa operator *( const Vec3fa& c, const float& pa ) { return Vec3fa(pa) * c; }
__forceinline Vec3fa operator /( const Vec3fa& b, const Vec3fa& c ) { return Vec3fa(b.x/c.x, b.y/c.y, b.z/c.z, b.a/c.a); }
__forceinline Vec3fa operator /( const float& pa, const Vec3fa& c ) { return Vec3fa(pa) / c; }
__forceinline Vec3fa operator /( const Vec3fa& c, const float& pa ) { return c / Vec3fa(pa); }
__forceinline bool operator ==( const Vec3fa& b, const Vec3fa& c) { return b.x == c.x &&
b.y == c.y &&
b.z == c.z;
}
__forceinline const Vec3fa min( const Vec3fa& b, const Vec3fa& c ) { return Vec3fa(std::min(b.x,c.x),std::min(b.y,c.y),
std::min(b.z,c.z),std::min(b.a,c.a)); }
__forceinline const Vec3fa max( const Vec3fa& b, const Vec3fa& c ) { return Vec3fa(std::max(b.x,c.x),std::max(b.y,c.y),
std::max(b.z,c.z),std::max(b.a,c.a)); }
__forceinline const Vec3ba ge_mask( const Vec3fa& b, const Vec3fa& c ) { return Vec3ba(b.x >= c.x,b.y >= c.y,b.z >= c.z,b.a >= c.a); }
__forceinline const Vec3ba le_mask( const Vec3fa& b, const Vec3fa& c ) { return Vec3ba(b.x <= c.x,b.y <= c.y,b.z <= c.z,b.a <= c.a); }
__forceinline float reduce_add( const Vec3fa &v ) { return v.x + v.y + v.z; }
__forceinline float reduce_mul( const Vec3fa& v ) { return v.x * v.y * v.z; }
__forceinline float reduce_min( const Vec3fa& v ) { return std::min(std::min(v.x, v.y), v.z); }
__forceinline float reduce_max( const Vec3fa& v ) { return std::max(std::max(v.x, v.y), v.z); }
__forceinline float halfArea(Vec3fa v) { return v.x*(v.y+v.z)+(v.y*v.z); }
__forceinline Vec3fa inf_fix( const Vec3fa &a ) {
return Vec3fa( fabs(a.x) == float(inf) ? 1/float(min_rcp_input) : a.x,
fabs(a.y) == float(inf) ? 1/float(min_rcp_input) : a.y,
fabs(a.z) == float(inf) ? 1/float(min_rcp_input) : a.z);
}
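// zero_fix clamps near-zero components to min_rcp_input so that rcp_safe below never divides by zero.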
__forceinline Vec3fa zero_fix( const Vec3fa& a )
{
return Vec3fa(fabs(a.x) < min_rcp_input ? float(min_rcp_input) : a.x,
fabs(a.y) < min_rcp_input ? float(min_rcp_input) : a.y,
fabs(a.z) < min_rcp_input ? float(min_rcp_input) : a.z);
}
__forceinline const Vec3fa rcp(const Vec3fa& v ) { return Vec3fa(1.0f/v.x,
1.0f/v.y,
1.0f/v.z); }
__forceinline const Vec3fa rcp_safe(const Vec3fa& a) { return rcp(zero_fix(a)); }
__forceinline Vec3fa operator +( const Vec3fa &a ) { return Vec3fa(+a.x, +a.y, +a.z); }
__forceinline Vec3fa operator -( const Vec3fa &a ) { return Vec3fa(-a.x, -a.y, -a.z); }
__forceinline float dot( const Vec3fa& a, const Vec3fa& b ) { return reduce_add(a*b); }
__forceinline std::ostream& operator <<(std::ostream &os, Vec3fa const& v) {
return os << '[' << v[0] << ' ' << v[1] << ' ' << v[2] << ' ' << v.a << ']';
}
#endif
|
// app/src/screen-components/onboarding/login-link.tsx
import React, { FC } from "react";
import { Text, View, ViewStyle } from "react-native";
import { TouchableOpacity } from "react-native-gesture-handler";
import Color from "../../theme/colors";
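// Renders the Czech prompt "Už máte účet? Přihlaste se" ("Already have an account? Log in"),
// with the second part as a tappable link; the onPress handler is currently empty.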
const LoginLink: FC<{ style?: ViewStyle }> = ({ style }) => {
return (
<View
style={{
display: "flex",
flexDirection: "row",
justifyContent: "center",
...style,
}}
>
<Text
style={{
fontFamily: "RobotoLight",
fontWeight: "300",
color: Color["black-100"],
}}
>
Už máte účet?
</Text>
<TouchableOpacity onPress={() => {}} style={{ marginLeft: 3 }}>
<Text style={{ fontFamily: "RobotoBold", color: Color["black-100"] }}>
Přihlaste se
</Text>
</TouchableOpacity>
</View>
);
};
export default LoginLink;
|
# Repository: yuanying/keystone-policy-research
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
from keystoneauth1.identity import v3
from keystoneauth1 import session
from keystoneclient.v3 import client
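# Connection settings are read from the environment, falling back to development defaults.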
os_auth_url = os.environ.get('OS_AUTH_URL', 'http://172.18.11.197/identity/')
if not os_auth_url.endswith('/v3'):
os_auth_url = "{}/v3".format(os_auth_url)
OS_AUTH_URL = os_auth_url
OS_PASSWORD = os.environ.get('OS_PASSWORD','<PASSWORD>')
OS_PROJECT_DOMAIN_ID = os.environ.get('OS_PROJECT_DOMAIN_ID','default')
OS_PROJECT_NAME = os.environ.get('OS_PROJECT_NAME','admin')
OS_REGION_NAME = os.environ.get('OS_REGION_NAME','RegionOne')
OS_USERNAME = os.environ.get('OS_USERNAME','admin')
OS_USER_DOMAIN_ID = os.environ.get('OS_USER_DOMAIN_ID','default')
OS_ADMIN_PASSWORD = OS_PASSWORD
OS_ADMIN_USERNAME = OS_USERNAME
OS_ADMIN_PROJECT_NAME = OS_PROJECT_NAME
def get_admin_client():
auth = v3.Password(
auth_url=OS_AUTH_URL,
username=OS_USERNAME,
project_name=OS_PROJECT_NAME,
password=OS_PASSWORD,
user_domain_id=OS_USER_DOMAIN_ID,
project_domain_id=OS_PROJECT_DOMAIN_ID,
)
sess = session.Session(auth=auth)
return client.Client(session=sess)
|